[libvirt] [PATCH] qemu: Use -nodefconfig when probing for CPU models
by Jiri Denemark
In case qemu supports -nodefconfig, libvirt uses it when launching
new guests. Since this option may affect the CPU models supported by
qemu, we need to use it when probing for available models as well.
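The trick the patch relies on is that a conditional argv entry which evaluates to NULL simply terminates the exec-style array early, so no separate code path is needed for the two argument lists. A standalone sketch (function names here are illustrative, not libvirt's):

```c
#include <assert.h>
#include <stddef.h>

/* Count argv entries up to the first NULL, which is exactly how an
 * exec-style consumer sees the array. */
static int count_args(const char *const argv[])
{
    int n = 0;
    while (argv[n] != NULL)
        n++;
    return n;
}

/* Build the probe argv the way the patch does: when the flag is not
 * set, the conditional entry becomes the NULL terminator and the
 * trailing NULL is simply never reached. */
static int probe_argc(int have_nodefconfig)
{
    const char *const argv[] = {
        "qemu",
        "-cpu", "?",
        have_nodefconfig ? "-nodefconfig" : NULL,
        NULL
    };
    return count_args(argv);
}
```

With the flag set the array holds four arguments; without it, three.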
---
src/qemu/qemu_conf.c | 16 ++++++++++++----
src/qemu/qemu_conf.h | 1 +
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 988220b..1d0bd88 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -718,11 +718,17 @@ error:
int
qemudProbeCPUModels(const char *qemu,
+ unsigned long long qemuCmdFlags,
const char *arch,
unsigned int *count,
const char ***cpus)
{
- const char *const qemuarg[] = { qemu, "-cpu", "?", NULL };
+ const char *const qemuarg[] = {
+ qemu,
+ "-cpu", "?",
+ (qemuCmdFlags & QEMUD_CMD_FLAG_NODEFCONFIG) ? "-nodefconfig" : NULL,
+ NULL
+ };
const char *const qemuenv[] = { "LC_ALL=C", NULL };
enum { MAX_MACHINES_OUTPUT_SIZE = 1024*4 };
char *output = NULL;
@@ -916,7 +922,7 @@ qemudCapsInitGuest(virCapsPtr caps,
guest->arch.defaultInfo.emulator_mtime = binary_mtime;
if (caps->host.cpu &&
- qemudProbeCPUModels(binary, info->arch, &ncpus, NULL) == 0 &&
+ qemudProbeCPUModels(binary, 0, info->arch, &ncpus, NULL) == 0 &&
ncpus > 0 &&
!virCapabilitiesAddGuestFeature(guest, "cpuselection", 1, 0))
goto error;
@@ -3365,6 +3371,7 @@ static int
qemuBuildCpuArgStr(const struct qemud_driver *driver,
const virDomainDefPtr def,
const char *emulator,
+ unsigned long long qemuCmdFlags,
const struct utsname *ut,
char **opt)
{
@@ -3378,7 +3385,8 @@ qemuBuildCpuArgStr(const struct qemud_driver *driver,
int i;
if (def->cpu && def->cpu->model) {
- if (qemudProbeCPUModels(emulator, ut->machine, &ncpus, &cpus) < 0)
+ if (qemudProbeCPUModels(emulator, qemuCmdFlags, ut->machine,
+ &ncpus, &cpus) < 0)
goto cleanup;
if (!ncpus || !host) {
@@ -3706,7 +3714,7 @@ int qemudBuildCommandLine(virConnectPtr conn,
ADD_ARG_LIT(def->os.machine);
}
- if (qemuBuildCpuArgStr(driver, def, emulator, &ut, &cpu) < 0)
+ if (qemuBuildCpuArgStr(driver, def, emulator, qemuCmdFlags, &ut, &cpu) < 0)
goto error;
if (cpu) {
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index ab5f158..dfdc0bb 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -289,6 +289,7 @@ int qemudProbeMachineTypes (const char *binary,
int *nmachines);
int qemudProbeCPUModels (const char *qemu,
+ unsigned long long qemuCmdFlags,
const char *arch,
unsigned int *count,
const char ***cpus);
--
1.7.1.1
14 years, 4 months
[libvirt] [PATCH] virsh: Fix man page syntax
by Jiri Denemark
pod2man prints the following warning when generating virsh.1:
tools/virsh.pod:890: Unmatched =back
---
tools/virsh.pod | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 64cd0d0..e03dbe8 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -781,6 +781,8 @@ Returns the UUID of the named I<pool>.
=head1 VOLUME COMMANDS
+=over 4
+
=item B<vol-create> I<pool-or-uuid> I<FILE>
Create a volume from an XML <file>.
--
1.7.1.1
[libvirt] [PATCH] Fix potential crash in QEMU monitor JSON impl
by Daniel P. Berrange
An indentation mistake meant that a check for return status
was not properly performed in all cases. This could result
in a crash on NULL pointer in a following line.
* src/qemu/qemu_monitor_json.c: Fix check for return status
when processing JSON for blockstats
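The hazard class, in miniature (all names hypothetical; only the control-flow shape of the bug and of the fix is taken from the patch): when the "< 0" test sits inside the success branch, a failed command skips the early exit and falls through to use a NULL reply.

```c
#include <stddef.h>

/* Simulated result of the first monitor call. */
static int command_rc;
static int check_error(void) { return 0; }

/* Buggy shape: the "ret < 0" test only runs when the first call
 * succeeded, so a failed command falls through to the NULL reply.
 * Return -2 marks the would-be NULL-dereference path. */
static int buggy(const char **reply)
{
    int ret = command_rc;
    *reply = (ret == 0) ? "reply" : NULL;
    if (ret == 0) {
        ret = check_error();
        if (ret < 0)
            return -1;          /* stands in for "goto cleanup" */
    }
    return (*reply != NULL) ? 0 : -2;
}

/* Fixed shape: every failure, from either call, reaches the exit. */
static int fixed(const char **reply)
{
    int ret = command_rc;
    *reply = (ret == 0) ? "reply" : NULL;
    if (ret == 0)
        ret = check_error();
    if (ret < 0)
        return -1;
    return (*reply != NULL) ? 0 : -2;
}
```

On a failed command the buggy variant reaches the dereference path while the fixed one bails out cleanly.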
---
src/qemu/qemu_monitor_json.c | 15 ++++++++-------
1 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 01be86d..4487ff5 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1059,11 +1059,10 @@ int qemuMonitorJSONGetBlockStatsInfo(qemuMonitorPtr mon,
ret = qemuMonitorJSONCommand(mon, cmd, &reply);
- if (ret == 0) {
+ if (ret == 0)
ret = qemuMonitorJSONCheckError(cmd, reply);
- if (ret < 0)
- goto cleanup;
- }
+ if (ret < 0)
+ goto cleanup;
ret = -1;
devices = virJSONValueObjectGet(reply, "return");
@@ -1164,11 +1163,13 @@ int qemuMonitorJSONGetBlockExtent(qemuMonitorPtr mon,
if (!cmd)
return -1;
- if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
- goto cleanup;
+ ret = qemuMonitorJSONCommand(mon, cmd, &reply);
- if (qemuMonitorJSONCheckError(cmd, reply) < 0)
+ if (ret == 0)
+ ret = qemuMonitorJSONCheckError(cmd, reply);
+ if (ret < 0)
goto cleanup;
+ ret = -1;
devices = virJSONValueObjectGet(reply, "return");
if (!devices || devices->type != VIR_JSON_TYPE_ARRAY) {
--
1.7.1.1
[libvirt] FYI: a short guide to libvirt & network filtering iptables/ebtables use
by Daniel P. Berrange
I just wrote this to assist some Red Hat folks understanding
what libvirt does with iptables, and thought it is useful info
for the whole libvirt community. When I have time I'll adjust
this content so that it can fit into the website in relevant
pages/places.
Firewall / network filtering in libvirt
=======================================
There are three pieces of libvirt functionality which do network
filtering of some type. At a high level they are:
- The virtual network driver.
This provides an isolated bridge device (ie no physical NICs
enslaved). Guest TAP devices are attached to this bridge.
Guests can talk to each other & the host, and optionally the
wider world.
- The QEMU driver MAC filtering
This provides a generic filtering of MAC addresses to prevent
the guest spoofing its MAC address. This is mostly obsoleted by
the next item, so won't be discussed further.
- The network filter driver
This provides fully configurable, arbitrary network filtering
of traffic on guest NICs. Generic rulesets are defined at the
host level to control traffic in some manner. Rulesets are
then associated with individual NICs of a guest. While not as
expressive as directly using iptables/ebtables, this can still
do nearly everything you would want to on a guest NIC filter.
The virtual network driver
==========================
The typical configuration for guests is to use bridging of the
physical NIC on the host to connect the guest directly to the LAN.
In RHEL6 there is also the possibility of using macvtap/sr-iov
and VEPA connectivity. None of this stuff plays nicely with wireless
NICs, since they will typically silently drop any traffic with a
MAC address that doesn't match that of the physical NIC.
Thus the virtual network driver in libvirt was invented. This takes
the form of an isolated bridge device (ie one with no physical NICs
enslaved). The TAP devices associated with the guest NICs are attached
to the bridge device. This immediately allows guests on a single host
to talk to each other and to the host OS (modulo host IPtables rules).
libvirt then uses iptables to control what further connectivity is
available. There are three configurations possible for a virtual
network at time of writing
- isolated: all off-node traffic is completely blocked
- nat: outbound traffic to the LAN is allowed, but MASQUERADED
- routed: outbound traffic to the LAN is allowed unmodified
The latter 'routed' case requires that the virtual network be on a
separate subnet from the main LAN, and that the LAN admin has
configured routing for this subnet. In the future we intend to
add support for IP subnetting and/or proxy-arp. This allows for
the virtual network to use the same subnet as the main LAN &
should avoid the need for the LAN admin to configure special routing.
Libvirt will optionally also provide DHCP services to the virtual
network using DNSMASQ. In all cases, we need to allow DNS/DHCP
queries to the host OS. Since we can't predict whether the host
firewall setup is already allowing this, we insert 4 rules into
the head of the INPUT chain:
target prot opt in out source destination
ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Note we have restricted our rules to just the bridge associated
with the virtual network, to avoid opening undesirable holes in
the host firewall wrt the LAN/WAN.
The next rules depend on the type of connectivity allowed, and go
in the main FORWARD chain:
type=isolated
-------------
Allow traffic between guests. Deny inbound. Deny outbound.
target prot opt in out source destination
ACCEPT all -- virbr1 virbr1 0.0.0.0/0 0.0.0.0/0
REJECT all -- * virbr1 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
REJECT all -- virbr1 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
type=nat
--------
Allow inbound related to an established connection. Allow
outbound, but only from our expected subnet. Allow traffic
between guests. Deny all other inbound. Deny all other outbound.
target prot opt in out source destination
ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 state RELATED,ESTABLISHED
ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
type=routed
-----------
Allow inbound, but only to our expected subnet. Allow
outbound, but only from our expected subnet. Allow traffic
between guests. Deny all other inbound. Deny all other outbound.
target prot opt in out source destination
ACCEPT all -- * virbr2 0.0.0.0/0 192.168.124.0/24
ACCEPT all -- virbr2 * 192.168.124.0/24 0.0.0.0/0
ACCEPT all -- virbr2 virbr2 0.0.0.0/0 0.0.0.0/0
REJECT all -- * virbr2 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
REJECT all -- virbr2 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Finally, with type=nat, there is also an entry in the POSTROUTING
chain to apply masquerading
target prot opt in out source destination
MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
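One plausible command-line rendering of the type=nat table above, as small C helpers that format the rules before handing them to iptables. The function names and exact option ordering are illustrative, not libvirt's actual code; the bridge name and subnet are the example values from the tables:

```c
#include <stdio.h>

/* Format the POSTROUTING masquerade rule for a NAT virtual network:
 * rewrite the source of traffic leaving the subnet for anywhere that
 * is not the subnet itself. */
static int nat_masq_rule(char *buf, size_t len, const char *subnet)
{
    return snprintf(buf, len,
                    "-t nat -A POSTROUTING -s %s ! -d %s -j MASQUERADE",
                    subnet, subnet);
}

/* Format the FORWARD rule allowing outbound traffic, but only when it
 * originates from the expected subnet on the virtual bridge. */
static int nat_fwd_out_rule(char *buf, size_t len, const char *bridge,
                            const char *subnet)
{
    return snprintf(buf, len, "-A FORWARD -i %s -s %s -j ACCEPT",
                    bridge, subnet);
}
```

Both helpers just mirror two rows of the tables above; the remaining rows follow the same pattern.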
The network filter driver
=========================
This driver provides a fully configurable network filtering capability
that leverages ebtables, iptables and ip6tables. This was written by
the libvirt guys at IBM and although its XML schema is defined by libvirt,
the conceptual model is closely aligned with the DMTF CIM schema for
network filtering
http://www.dmtf.org/standards/cim/cim_schema_v2230/CIM_Network.pdf
The filters are managed in libvirt as a top level, standalone object.
This allows the filters to then be referenced by any libvirt object
that requires their functionality, instead of tying them solely to
guest NICs. In the current implementation, filters can be associated
with individual guest NICs via the libvirt domain XML format. In the
future we might allow filters to be associated with the virtual network
objects. Further we're expecting to define a new 'virtual switch' object
to remove the complexity of configuring bridge/sriov/vepa networking
modes. This may also end up making use of network filters.
There is a new set of virsh commands for managing network filters:
virsh nwfilter-define define or update a network filter from an XML file
virsh nwfilter-undefine undefine a network filter
virsh nwfilter-dumpxml network filter information in XML
virsh nwfilter-list list network filters
virsh nwfilter-edit edit XML configuration for a network filter
There are equivalently named C APIs for each of these commands.
As with all objects libvirt manages, network filters are configured
using an XML format. At a high level the format looks like this:
<filter name='no-spamming' chain='XXXX'>
<uuid>d217f2d7-5a04-0e01-8b98-ec2743436b74</uuid>
<rule ...>
....
</rule>
<filterref filter='XXXX'/>
</filter>
Every filter has a name and UUID which serve as unique identifiers.
A filter can have zero-or-more <rule> elements which are used to
actually define network controls. Filters can be arranged into a
DAG, so zero-or-more <filterref/> elements are also allowed. Cycles
in the graph are not allowed.
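The "DAG, no cycles" constraint means a filter may not, directly or via intermediate <filterref/> links, reference itself. The check can be sketched as a depth-first search with a "currently visiting" mark; the fixed-size arrays and index-based references below are illustrative, not libvirt's actual data structures:

```c
/* Tiny filter graph: refs[f] lists the filters referenced by filter f,
 * by index; -1 terminates the list. */
#define MAXF 8
#define MAXREF 4

static int refs[MAXF][MAXREF];

/* DFS with three marks: 0 = unvisited, 1 = on the current path,
 * 2 = fully explored. Revisiting a node on the current path means a
 * <filterref/> cycle. */
static int has_cycle_from(int f, int *state)
{
    int i;
    if (state[f] == 1)
        return 1;
    if (state[f] == 2)
        return 0;
    state[f] = 1;
    for (i = 0; i < MAXREF && refs[f][i] >= 0; i++)
        if (has_cycle_from(refs[f][i], state))
            return 1;
    state[f] = 2;
    return 0;
}

static int has_cycle(int nfilters)
{
    int state[MAXF] = {0};
    int f;
    for (f = 0; f < nfilters; f++)
        if (has_cycle_from(f, state))
            return 1;
    return 0;
}
```

A chain 0 -> 1 is fine; adding a back-reference 1 -> 0 closes a cycle and must be rejected.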
The <rule> element is where all the interesting stuff happens. It
has three attributes: an action, a traffic direction, and an optional
priority, eg
<rule action='drop' direction='out' priority='500'>
Within the rule there are a wide variety of elements allowed, which
do protocol specific matching. Supported protocols currently include
'mac', 'arp', 'rarp', 'ip', 'ipv6', 'tcp/ip', 'icmp/ip', 'igmp/ip',
'udp/ip', 'udplite/ip', 'esp/ip', 'ah/ip', 'sctp/ip', 'tcp/ipv6',
'icmp/ipv6', 'igmp/ipv6', 'udp/ipv6', 'udplite/ipv6', 'esp/ipv6',
'ah/ipv6', 'sctp/ipv6'. Each protocol defines what is valid inside
the <rule> element, the general pattern though is
<protocol match='yes|no' attribute1='value1' attribute2='value2'/>
so, eg, a TCP protocol matching ports 0-1023 would be expressed
as:
<tcp match='yes' srcportstart='0' srcportend='1023'/>
Attributes can include references to variables defined by the
object using the rule. So the guest XML format allows each NIC
to have a MAC address and IP address defined. These are made
available to filters via the variables $IP and $MAC.
So to define a filter that prevents IP address spoofing we can
simply match on source IP address != $IP
<filter name='no-ip-spoofing' chain='ipv4'>
<rule action='drop' direction='out'>
<ip match='no' srcipaddr='$IP' />
</rule>
</filter>
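Conceptually, the $IP expansion is a textual substitution of the NIC's configured value into the rule attributes before the ebtables/iptables rules are generated. The helper below is a hypothetical single-variable sketch, not libvirt's actual variable engine:

```c
#include <stdio.h>
#include <string.h>

/* Expand the first occurrence of "var" (eg "$IP") in a rule attribute
 * template with the per-NIC value. Illustrative only: the real driver
 * handles multiple variables and occurrences. */
static void expand_var(char *out, size_t len, const char *tmpl,
                       const char *var, const char *val)
{
    const char *hit = strstr(tmpl, var);
    if (!hit) {
        snprintf(out, len, "%s", tmpl);
        return;
    }
    snprintf(out, len, "%.*s%s%s",
             (int)(hit - tmpl), tmpl,  /* text before the variable */
             val,                      /* substituted value         */
             hit + strlen(var));       /* text after the variable   */
}
```

So a srcipaddr='$IP' attribute on a NIC configured with 10.33.8.131 becomes a match on that literal address.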
I'm not going to go into details on all the other protocol
matches you can do, because it'll take far too much space.
You can read about the options here
http://libvirt.org/formatnwfilter.html#nwfelemsRulesProto
Out of the box in RHEL6/Fedora rawhide, libvirt ships with a
set of default useful rules
# virsh nwfilter-list
UUID Name
----------------------------------------------------------------
15b1ab2b-b1ac-1be2-ed49-2042caba4abb allow-arp
6c51a466-8d14-6d11-46b0-68b1a883d00f allow-dhcp
7517ad6c-bd90-37c8-26c9-4eabcb69848d allow-dhcp-server
3d38b406-7cf0-8335-f5ff-4b9add35f288 allow-incoming-ipv4
5ff06320-9228-2899-3db0-e32554933415 allow-ipv4
db0b1767-d62b-269b-ea96-0cc8b451144e clean-traffic
f88f1932-debf-4aa1-9fbe-f10d3aa4bc95 no-arp-spoofing
772f112d-52e4-700c-0250-e178a3d91a7a no-ip-multicast
7ee20370-8106-765d-f7ff-8a60d5aaf30b no-ip-spoofing
d5d3c490-c2eb-68b1-24fc-3ee362fc8af3 no-mac-broadcast
fb57c546-76dc-a372-513f-e8179011b48a no-mac-spoofing
dba10ea7-446d-76de-346f-335bd99c1d05 no-other-l2-traffic
f5c78134-9da4-0c60-a9f0-fb37bc21ac1f no-other-rarp-traffic
7637e405-4ccf-42ac-5b41-14f8d03d8cf3 qemu-announce-self
9aed52e7-f0f3-343e-fe5c-7dcb27b594e5 qemu-announce-self-rarp
Most of these are just building blocks. The interesting one here
is 'clean-traffic'. This pulls together all the building blocks
into one filter that you can then associate with a guest NIC.
This stops the most common bad things a guest might try, IP
spoofing, arp spoofing and MAC spoofing. To look at the rules for
any of these just do
virsh nwfilter-dumpxml FILTERNAME|UUID
They are all stored in /etc/libvirt/nwfilter, but don't edit
files there directly. Use 'virsh nwfilter-define' to update
them. This ensures the guests have their iptables/ebtables
rules recreated.
To associate the clean-traffic filter with a guest, edit the
guest XML config and change the <interface> element to include
a <filterref> and also specify the whitelisted <ip address/> the
guest is allowed to use
<interface type='bridge'>
<mac address='52:54:00:56:44:32'/>
<source bridge='br1'/>
<ip address='10.33.8.131'/>
<target dev='vnet0'/>
<model type='virtio'/>
<filterref filter='clean-traffic'/>
</interface>
If no <ip address> is included, the network filter driver will
activate its 'learning mode'. This uses libpcap to snoop on
network traffic the guest sends and attempts to identify the
first IP address it uses. It then locks traffic to this address.
Obviously this isn't entirely secure, but it does offer some
protection against the guest being trojaned once up & running.
In the future we intend to enhance the learning mode so that it
looks for DHCPOFFERS from a trusted DHCP server and only allows
the offered IP address to be used.
Now, how is all this implemented... The network filter driver
uses a combination of ebtables, iptables and ip6tables, depending
on which protocols are referenced in a filter. The out of the box
'clean-traffic' filter rules only require use of ebtables. If you
want to do matching at tcp/udp/etc protocols (eg to add a new
filter 'no-email-spamming' to block port 25), then iptables will
also be used.
The driver attempts to keep its rules separate from those that
the host admin might already have configured. So the first thing
it does with ebtables, is to add two hooks in POSTROUTING &
PREROUTING chains, to redirect traffic to custom chains. These
hooks match on the TAP device name of the guest NIC, so they
should not interact badly with any administrator-defined rules:
Bridge chain: PREROUTING, entries: 1, policy: ACCEPT
-i vnet0 -j libvirt-I-vnet0
Bridge chain: POSTROUTING, entries: 1, policy: ACCEPT
-o vnet0 -j libvirt-O-vnet0
To keep things manageable & easy to follow, the driver will then
create further sub-chains for each protocol it needs to match
against:
Bridge chain: libvirt-I-vnet0, entries: 5, policy: ACCEPT
-p IPv4 -j I-vnet0-ipv4
-p ARP -j I-vnet0-arp
-p 0x8035 -j I-vnet0-rarp
-p 0x835 -j ACCEPT
-j DROP
Bridge chain: libvirt-O-vnet0, entries: 4, policy: ACCEPT
-p IPv4 -j O-vnet0-ipv4
-p ARP -j O-vnet0-arp
-p 0x8035 -j O-vnet0-rarp
-j DROP
Finally come the actual filter implementations. This example shows
the 'clean-traffic' filter implementation.
I'm not going to explain what this is doing now :-)
Bridge chain: I-vnet0-ipv4, entries: 2, policy: ACCEPT
-s ! 52:54:0:56:44:32 -j DROP
-p IPv4 --ip-src ! 10.33.8.131 -j DROP
Bridge chain: O-vnet0-ipv4, entries: 1, policy: ACCEPT
-j ACCEPT
Bridge chain: I-vnet0-arp, entries: 6, policy: ACCEPT
-s ! 52:54:0:56:44:32 -j DROP
-p ARP --arp-mac-src ! 52:54:0:56:44:32 -j DROP
-p ARP --arp-ip-src ! 10.33.8.131 -j DROP
-p ARP --arp-op Request -j ACCEPT
-p ARP --arp-op Reply -j ACCEPT
-j DROP
Bridge chain: O-vnet0-arp, entries: 5, policy: ACCEPT
-p ARP --arp-op Reply --arp-mac-dst ! 52:54:0:56:44:32 -j DROP
-p ARP --arp-ip-dst ! 10.33.8.131 -j DROP
-p ARP --arp-op Request -j ACCEPT
-p ARP --arp-op Reply -j ACCEPT
-j DROP
Bridge chain: I-vnet0-rarp, entries: 2, policy: ACCEPT
-p 0x8035 -s 52:54:0:56:44:32 -d Broadcast --arp-op Request_Reverse --arp-ip-src 0.0.0.0 --arp-ip-dst 0.0.0.0 --arp-mac-src 52:54:0:56:44:32 --arp-mac-dst 52:54:0:56:44:32 -j ACCEPT
-j DROP
Bridge chain: O-vnet0-rarp, entries: 2, policy: ACCEPT
-p 0x8035 -d Broadcast --arp-op Request_Reverse --arp-ip-src 0.0.0.0 --arp-ip-dst 0.0.0.0 --arp-mac-src 52:54:0:56:44:32 --arp-mac-dst 52:54:0:56:44:32 -j ACCEPT
-j DROP
NB, we would have liked to include the prefix 'libvirt-' in all
of our chain names, but unfortunately the kernel limits names
to a very short maximum length. So only the first two custom
chains can include that prefix. The others just include the
TAP device name + protocol name.
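The naming scheme can be sketched as follows, assuming a 31-character chain-name limit (the exact kernel limit may differ; the helper and its names are illustrative, not libvirt's code):

```c
#include <stdio.h>
#include <string.h>

#define CHAIN_MAX 31  /* assumed ebtables chain-name limit */

/* Build a chain name. The top-level per-NIC chains carry the
 * "libvirt-" prefix; per-protocol sub-chains drop it to stay under
 * the length limit. Returns -1 if the name would be too long. */
static int make_chain(char *buf, size_t len, const char *prefix,
                      char dir, const char *tap, const char *proto)
{
    if (proto)
        snprintf(buf, len, "%c-%s-%s", dir, tap, proto);
    else
        snprintf(buf, len, "%s%c-%s", prefix, dir, tap);
    return strlen(buf) <= CHAIN_MAX ? 0 : -1;
}
```

For a TAP device "vnet0" this yields "libvirt-I-vnet0" at the top level and "I-vnet0-ipv4" for the IPv4 sub-chain, matching the listings above.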
If I define a new filter 'no-spamming' and then add this to the
'clean-traffic' filter, I can illustrate how iptables usage works.
# cat > /root/spamming.xml <<EOF
<filter name='no-spamming' chain='root'>
<uuid>d217f2d7-5a04-0e01-8b98-ec2743436b74</uuid>
<rule action='drop' direction='out' priority='500'>
<tcp dstportstart='25' dstportend='25'/>
</rule>
</filter>
EOF
# virsh nwfilter-define /root/spamming.xml
# virsh nwfilter-edit clean-traffic
...add <filterref filter='no-spamming'/>
All active guests immediately have their iptables/ebtables rules
rebuilt.
The network filter driver deals with iptables in a very similar
way. First it separates out its rules from those the admin may
have defined, by adding a couple of hooks into the INPUT/FORWARD
chains
Chain INPUT (policy ACCEPT 13M packets, 21G bytes)
target prot opt in out source destination
libvirt-host-in all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 5532K packets, 3010M bytes)
target prot opt in out source destination
libvirt-in all -- * * 0.0.0.0/0 0.0.0.0/0
libvirt-out all -- * * 0.0.0.0/0 0.0.0.0/0
libvirt-in-post all -- * * 0.0.0.0/0 0.0.0.0/0
These custom chains then do matching based on the TAP device
name, so they won't open holes in the admin defined matches for
the LAN/WAN (if any).
Chain libvirt-host-in (1 references)
target prot opt in out source destination
HI-vnet0 all -- * * 0.0.0.0/0 0.0.0.0/0 [goto] PHYSDEV match --physdev-in vnet0
Chain libvirt-in (1 references)
target prot opt in out source destination
FI-vnet0 all -- * * 0.0.0.0/0 0.0.0.0/0 [goto] PHYSDEV match --physdev-in vnet0
Chain libvirt-in-post (1 references)
target prot opt in out source destination
ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vnet0
Chain libvirt-out (1 references)
target prot opt in out source destination
FO-vnet0 all -- * * 0.0.0.0/0 0.0.0.0/0 [goto] PHYSDEV match --physdev-out vnet0
Finally, we can see the interesting bit which is the actual
implementation of my filter to block port 25 access:
Chain FI-vnet0 (1 references)
target prot opt in out source destination
DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25
Chain FO-vnet0 (1 references)
target prot opt in out source destination
DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:25
Chain HI-vnet0 (1 references)
target prot opt in out source destination
DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25
One thing in looking at this that you may notice is that if there
are many guests all using the same filters, we will be duplicating
the iptables rules over & over for each guest. This is merely a
limitation of the current rules engine implementation. At the libvirt
object modelling level you can clearly see we've designed the model
so that filter rules are defined in one place, and indirectly referenced
by guests. Thus it should be possible to change the impl in the future
so that we can share the actual iptables/ebtables rules for each
guest to create a more scalable system. The stuff in current libvirt
is more or less the very first working impl we've had of this stuff,
so there's not been much optimization work yet.
Also notice that at the XML level we don't expose the fact that we
are using iptables or ebtables at all. The rule definition is done in
terms of network protocols. Thus if we ever find a need, we could
plug in an alternative implementation that calls out to a different
firewall implementation instead of ebtables/iptables (providing that
impl was suitably expressive, of course).
Finally, in terms of problems we have in deployment. The biggest
problem is that if the admin does 'service iptables restart' all
our work gets blown away. We've experimented with using lokkit
to record our custom rules in a persistent config file, but that
caused a different problem. Admins who were not using lokkit for
their config found that all their own rules got blown away. So
we threw away our lokkit code. Instead we document that if you
run 'service iptables restart', you need to send SIGHUP to libvirt
to make it recreate its rules.
Finally, a reminder that the main documentation we have on this
is online at http://libvirt.org/formatnwfilter.html
Regards,
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] KVM Forum 2010: schedule and registration reminder
by KVM Forum 2010 Program Committee
As a reminder...the registration fees will increase on July 15th, so
register now to save the fees.
KVM Forum registration link is here:
http://events.linuxfoundation.org/component/registrationpro/?func=details...
Hotel and travel information is here (same hotel and venue as LinuxCon):
http://events.linuxfoundation.org/events/linuxcon/hotel
Here is the schedule (should be up on the LF site with more details shortly).
Monday, August 9th
------------------
9:00 9:30 Welcome + Keynote
9:30 10:15 Desktop virtualization with spice
10:15 10:45 Developing tests for the KVM autotest framework
10:45 11:00 - break -
11:00 11:30 Threading the QEMU Device Model
11:30 12:00 QEMU's new device model qdev
12:00 12:30 KVM on Server Class PowerPC
12:30 13:30 - lunch -
13:30 14:00 Transparent Hugepage Support
14:00 14:30 Migration: how to hop from machine to machine without losing state
14:30 15:00 Sheepdog: distributed storage system for QEMU
15:00 15:30 - break -
15:30 16:00 How to keep time correctly, and avoid SDR-awk cab emit peek problems
16:00 16:30 PV-DMA using IOMMU Emulation
16:30 16:45 Network Virtualization in KVM
16:45 17:00 Ganeti as a KVM cluster management interface
17:00 17:15 Porting virtio to PowerVM Hypervisors
17:15 17:30 lightning talks
17:30 19:00 BoFs
Tuesday, August 10th
--------------------
9:00 9:15 Keynote
9:15 9:45 vhost-net and virtio-net: need for speed
9:45 10:15 The QEMU Monitor Protocol (QMP)
10:15 10:45 Integrating KVM with Linux
10:45 11:00 - break -
11:00 11:30 KVM in Embedded: Requirements, Experiences and Open Challenges
11:30 12:00 Kemari: Fault Tolerant Virtual Machine Synchronization based on KVM
12:00 12:30 Managing Resources on Over-committed Virtualization Hosts
12:30 13:30 - lunch -
13:30 14:00 A Walkthrough on some recent KVM performance improvements
14:00 14:30 Examining KVM as Nested Virtualization Friendly Guest
14:30 15:00 PCI direct device assignment: pwned! all your devices are belong to guest
15:00 15:30 - break -
15:30 16:00 Performance and Scalability of Server Consolidation using KVM
16:00 16:15 WinKVM: Windows kernel-based Virtual Machine
16:15 16:30 Nahanni: Inter-VM Shared Memory
16:30 16:45 Asynchronous Page Faults: AIX did it.
16:45 17:00 PCI Express support in QEmu
17:00 17:15 lightning talks
17:15 17:30 Closing comments
17:30 19:00 BoFs
We look forward to seeing you there.
thanks,
-KVM Forum 2010 Program Committee
[libvirt] Question about libxml.
by erik.e.bengtson@americas.bnpparibas.com
Hi,
I saw a post by Daniel Veillard regarding memory leaks using libxml.
The suggestion was to use malloc_trim(0) to correct the issue.
However, I cannot find it anywhere on Solaris. Do you know if it is
available on Solaris 5.10, and if so, what header file it is located
in? If it is not on Solaris, do you know of an alternative function
that could take its place?
Apologies if this should not be directed to the group, but the post
where I saw Daniel's name did not have a personal email address.
Regards,
Erik
This message and any attachments (the "message") is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.
[libvirt] [PATCHv2 1/2] virsh: add new --details option to pool-list
by Justin Clift
This patch adds a new --details option to the virsh pool-list
command, making its output more useful to people who use virsh
for significant lengths of time.
Addresses BZ # 605543
https://bugzilla.redhat.com/show_bug.cgi?id=605543
---
Output from the new option (hopefully this doesn't wrap):
virsh # pool-list
Name State Autostart
-----------------------------------------
default active yes
image_dir active yes
virsh # pool-list --all
Name State Autostart
-----------------------------------------
default active yes
image_dir active yes
tmp inactive no
virsh # pool-list --details
Name State Autostart Persistent Capacity Allocation Available
--------------------------------------------------------------------------------------
default running yes yes 1.79 TB 1.47 TB 326.02 GB
image_dir running yes yes 1.79 TB 1.47 TB 326.02 GB
virsh # pool-list --all --details
Name State Autostart Persistent Capacity Allocation Available
--------------------------------------------------------------------------------------
default running yes yes 1.79 TB 1.47 TB 326.02 GB
image_dir running yes yes 1.79 TB 1.47 TB 326.02 GB
tmp inactive no yes - - -
virsh #
Much more practical than running pool-info individually on each pool.
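The human-readable sizes in the --details output come from virsh's prettyCapacity() helper. A minimal standalone equivalent is sketched below; the rounding behaviour and unit labels are assumptions for illustration, not copied from virsh:

```c
#include <string.h>

/* Scale a byte count down to the largest unit that keeps the value
 * under 1024, returning the scaled value and the unit label. */
static double pretty_capacity(unsigned long long bytes, const char **unit)
{
    static const char *units[] = { "B", "KB", "MB", "GB", "TB", "PB" };
    double val = (double)bytes;
    int i = 0;
    while (val >= 1024.0 && i < 5) {
        val /= 1024.0;
        i++;
    }
    *unit = units[i];
    return val;
}
```

Printed with "%.2lf %s", this produces strings like "1.50 GB", matching the shape of the columns in the sample output above.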
tools/virsh.c | 130 ++++++++++++++++++++++++++++++++++++++++++++++++------
tools/virsh.pod | 6 ++-
2 files changed, 119 insertions(+), 17 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index d8d2220..afa84e6 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -4882,14 +4882,17 @@ static const vshCmdInfo info_pool_list[] = {
static const vshCmdOptDef opts_pool_list[] = {
{"inactive", VSH_OT_BOOL, 0, N_("list inactive pools")},
{"all", VSH_OT_BOOL, 0, N_("list inactive & active pools")},
+ {"details", VSH_OT_BOOL, 0, N_("display extended details for pools")},
{NULL, 0, 0, NULL}
};
static int
cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
{
+ virStoragePoolInfo info;
int inactive = vshCommandOptBool(cmd, "inactive");
int all = vshCommandOptBool(cmd, "all");
+ int details = vshCommandOptBool(cmd, "details");
int active = !inactive || all ? 1 : 0;
int maxactive = 0, maxinactive = 0, i;
char **activeNames = NULL, **inactiveNames = NULL;
@@ -4937,36 +4940,114 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
qsort(&inactiveNames[0], maxinactive, sizeof(char*), namesorter);
}
}
- vshPrintExtra(ctl, "%-20s %-10s %-10s\n", _("Name"), _("State"), _("Autostart"));
- vshPrintExtra(ctl, "-----------------------------------------\n");
+
+ /* Display the appropriate heading */
+ if (details) {
+ vshPrintExtra(ctl, "%-20s %-10s %-10s %-11s %-9s %-11s %-10s\n",
+ _("Name"), _("State"), _("Autostart"), _("Persistent"),
+ _("Capacity"), _("Allocation"), _("Available"));
+ vshPrintExtra(ctl,
+ "--------------------------------------------------------------------------------------\n");
+ } else {
+ vshPrintExtra(ctl, "%-20s %-10s %-10s\n", _("Name"), _("State"),
+ _("Autostart"));
+ vshPrintExtra(ctl, "-----------------------------------------\n");
+ }
for (i = 0; i < maxactive; i++) {
- virStoragePoolPtr pool = virStoragePoolLookupByName(ctl->conn, activeNames[i]);
- const char *autostartStr;
- int autostart = 0;
+ const char *autostartStr, *persistentStr, *stateStr = NULL;
+ int autostart = 0, persistent = 0;
/* this kind of work with pools is not atomic operation */
+ virStoragePoolPtr pool = virStoragePoolLookupByName(ctl->conn, activeNames[i]);
if (!pool) {
VIR_FREE(activeNames[i]);
continue;
}
+ /* Retrieve the pool autostart status */
if (virStoragePoolGetAutostart(pool, &autostart) < 0)
autostartStr = _("no autostart");
else
autostartStr = autostart ? _("yes") : _("no");
- vshPrint(ctl, "%-20s %-10s %-10s\n",
- virStoragePoolGetName(pool),
- _("active"),
- autostartStr);
+ /* If requested, collect the extended information for this pool */
+ if (details) {
+ if (virStoragePoolGetInfo(pool, &info) != 0) {
+ vshError(ctl, "%s", _("Could not retrieve pool information"));
+ VIR_FREE(activeNames[i]);
+ continue;
+ }
+
+ /* Decide which state string to display */
+ switch (info.state) {
+ case VIR_STORAGE_POOL_INACTIVE:
+ stateStr = _("inactive");
+ break;
+ case VIR_STORAGE_POOL_BUILDING:
+ stateStr = _("building");
+ break;
+ case VIR_STORAGE_POOL_RUNNING:
+ stateStr = _("running");
+ break;
+ case VIR_STORAGE_POOL_DEGRADED:
+ stateStr = _("degraded");
+ break;
+ case VIR_STORAGE_POOL_INACCESSIBLE:
+ stateStr = _("inaccessible");
+ break;
+ }
+
+ /* Check if the pool is persistent or not */
+ persistent = virStoragePoolIsPersistent(pool);
+ vshDebug(ctl, 5, "Persistent flag value: %d\n", persistent);
+ if (persistent < 0)
+ persistentStr = _("unknown");
+ else
+ persistentStr = persistent ? _("yes") : _("no");
+
+ /* Display all information for this pool */
+ vshPrint(ctl, "%-20s %-10s %-10s %-11s",
+ virStoragePoolGetName(pool),
+ stateStr,
+ autostartStr,
+ persistentStr);
+
+ /* Display the capacity related quantities */
+ if (info.state == VIR_STORAGE_POOL_RUNNING ||
+ info.state == VIR_STORAGE_POOL_DEGRADED) {
+ double val;
+ const char *unit;
+ virBuffer infoBufStr = VIR_BUFFER_INITIALIZER;
+
+ val = prettyCapacity(info.capacity, &unit);
+ virBufferVSprintf(&infoBufStr, "%.2lf %s", val, unit);
+ vshPrint(ctl, " %-9s", virBufferContentAndReset(&infoBufStr));
+
+ val = prettyCapacity(info.allocation, &unit);
+ virBufferVSprintf(&infoBufStr, "%.2lf %s", val, unit);
+ vshPrint(ctl, " %-11s", virBufferContentAndReset(&infoBufStr));
+
+ val = prettyCapacity(info.available, &unit);
+ virBufferVSprintf(&infoBufStr, "%.2lf %s", val, unit);
+ vshPrint(ctl, " %-10s\n", virBufferContentAndReset(&infoBufStr));
+ } else
+ vshPrint(ctl, " %-9s %-11s %-10s\n", "-", "-", "-");
+ } else {
+ /* Display basic pool information */
+ vshPrint(ctl, "%-20s %-10s %-10s\n",
+ virStoragePoolGetName(pool),
+ _("active"),
+ autostartStr);
+ }
+
virStoragePoolFree(pool);
VIR_FREE(activeNames[i]);
}
for (i = 0; i < maxinactive; i++) {
virStoragePoolPtr pool = virStoragePoolLookupByName(ctl->conn, inactiveNames[i]);
- const char *autostartStr;
- int autostart = 0;
+ const char *autostartStr, *persistentStr;
+ int autostart = 0, persistent = 0;
/* this kind of work with pools is not atomic operation */
if (!pool) {
@@ -4979,10 +5060,29 @@ cmdPoolList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
else
autostartStr = autostart ? _("yes") : _("no");
- vshPrint(ctl, "%-20s %-10s %-10s\n",
- inactiveNames[i],
- _("inactive"),
- autostartStr);
+ if (details) {
+ /* Check if the pool is persistent or not */
+ persistent = virStoragePoolIsPersistent(pool);
+ vshDebug(ctl, 5, "Persistent flag value: %d\n", persistent);
+ if (persistent < 0)
+ persistentStr = _("unknown");
+ else
+ persistentStr = persistent ? _("yes") : _("no");
+
+ /* Display detailed pool information */
+ vshPrint(ctl, "%-20s %-10s %-10s %-11s %-9s %-11s %-10s\n",
+ inactiveNames[i],
+ _("inactive"),
+ autostartStr,
+ persistentStr,
+ "-", "-", "-");
+ } else {
+ /* Display basic pool information */
+ vshPrint(ctl, "%-20s %-10s %-10s\n",
+ inactiveNames[i],
+ _("inactive"),
+ autostartStr);
+ }
virStoragePoolFree(pool);
VIR_FREE(inactiveNames[i]);
diff --git a/tools/virsh.pod b/tools/virsh.pod
index b1917ee..cec07e3 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -732,11 +732,13 @@ variables, and defaults to C<vi>.
Returns basic information about the I<pool> object.
-=item B<pool-list> optional I<--inactive> I<--all>
+=item B<pool-list> optional I<--inactive> I<--all> I<--details>
List pool objects known to libvirt. By default, only pools in use by
active domains are listed; I<--inactive> lists just the inactive
-pools, and I<--all> lists all pools.
+pools, and I<--all> lists all pools. The I<--details> option instructs
+virsh to additionally display pool persistence and capacity related
+information where available.
=item B<pool-name> I<uuid>
--
1.7.0.1
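For readers unfamiliar with the capacity columns added above: they are produced by virsh's prettyCapacity() helper, which scales a raw byte count into the largest fitting binary unit and prints it with the "%.2lf %s" format seen in the patch. This is an illustrative shell re-creation of that scaling idea, not the actual libvirt implementation:

```shell
#!/bin/sh
# Illustrative sketch of the unit scaling done by virsh's prettyCapacity():
# divide the byte count by 1024 until it drops below 1024, then print the
# value with two decimals and the matching unit.
pretty_capacity() {
    awk -v b="$1" 'BEGIN {
        split("B KiB MiB GiB TiB PiB", u, " ");
        i = 1;
        while (b >= 1024 && i < 6) { b /= 1024; i++ }
        printf "%.2f %s\n", b, u[i];
    }'
}

pretty_capacity 10737418240   # a 10 GiB pool capacity -> "10.00 GiB"
pretty_capacity 512           # stays in bytes -> "512.00 B"
```

With this formatting, the Capacity, Allocation, and Available columns stay at most a handful of characters wide regardless of pool size, which is what lets the patch use fixed-width columns in the detailed listing.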
[libvirt] [PATCH] authors: update my authors details
by Justin Clift
---
.mailmap | 1 +
AUTHORS | 2 +-
2 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8c1eed3..608780d 100644
--- a/.mailmap
+++ b/.mailmap
@@ -11,3 +11,4 @@
<socketpair(a)gmail.com> <socketpair gmail com>
<soren(a)canonical.com> <soren(a)ubuntu.com>
<jfehlig(a)novell.com> <jfehlig(a)linux-ypgk.site>
+<jclift(a)redhat.com> <justin(a)salasaga.org>
diff --git a/AUTHORS b/AUTHORS
index 222e1b7..aa1a1cf 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -124,7 +124,7 @@ Patches have also been contributed by:
Thomas Treutner <thomas(a)scripty.at>
Jean-Baptiste Rouault <jean-baptiste.rouault(a)diateam.net>
Марк Коренберг <socketpair(a)gmail.com>
- Justin Clift <justin(a)salasaga.org>
+ Justin Clift <jclift(a)redhat.com>
Alan Pevec <apevec(a)redhat.com>
[....send patches to get your name here....]
--
1.7.1
[libvirt] qemu hook is not starting
by Csom Gyula
Hi there!
Since our cloud system, which is currently under development, might need them, I've tested libvirt
domain hooks but unfortunately failed. I tried the following scenario:
1. create the hook directory...
# mkdir /etc/libvirt/hook
# chmod 755 /etc/libvirt/hook
2. ... or maybe hooks?
# ln -s hook hooks
3. create the hook script
# vi /etc/libvirt/hook/qemu
# cat /etc/libvirt/hook/qemu
#!/bin/bash
echo "*****************************************************" >> /tmp/libvirt.log
echo "qemu-hook: $@" >> /tmp/libvirt.log
echo "*****************************************************" >> /tmp/libvirt.log
echo "test error" >&2
exit 1
# chmod 755 /etc/libvirt/hook/qemu
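A script like the one above can also be exercised by hand before restarting the daemon, using the documented hook argument convention (object name, operation, sub-operation, extra argument). This is a minimal sketch that builds a throwaway copy of the script in temp files, so the paths and guest name here are made up for the test:

```shell
#!/bin/sh
# Sketch: invoke a hook script the way libvirtd would, against a temporary
# copy, so its exit status and log output can be checked without the daemon.
LOG=$(mktemp)
HOOK=$(mktemp)
cat > "$HOOK" <<EOF
#!/bin/bash
echo "qemu-hook: \$@" >> $LOG
echo "test error" >&2
exit 1
EOF
chmod 755 "$HOOK"

# libvirtd passes: object name, operation, sub-operation, extra argument
"$HOOK" demo-guest prepare begin - 2>/dev/null
status=$?
echo "hook exit status: $status"
cat "$LOG"
```

If the script behaves here (logs its arguments, exits non-zero) but still has no effect when run through libvirtd, the problem is in how libvirtd finds the script rather than in the script itself.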
4. Then I restarted the libvirt daemon and created a domain through virsh. Unfortunately nothing
happened. That is, the VM started, although according to the documentation [1] it should not have
started: "a non-zero return value from the script will abort the domain startup operation, and if
an error string is passed on stderr by the hook script, it will be provided back to the user
at the libvirt API level". Also, the messages echoed by the script did not appear in the log.
Could you please tell me what I missed? What (else) should be done in order to run hooks?
Thanks in advance,
Gyula
---
[1] http://libvirt.org/hooks.html