[libvirt] [PATCH] Fix python error reporting for some storage operations
by Cole Robinson
In the python bindings, all vir* classes expect to be
passed a virConnect object when instantiated. Before
the storage stuff, these classes were only instantiated
in virConnect methods, so the generator is hardcoded to
pass 'self' as the connection instance to these classes.
The problem is that there are some methods returning pool or vol
instances which aren't called from virConnect: you can
look up a storage volume's associated pool, and look up
volumes from a pool. In these cases passing 'self' doesn't
give the vir* instance a connection, so when it comes time
to raise an exception crap hits the fan.
Rather than rework the generator to accommodate this edge
case, I just fixed the init functions for virStorage* to
pull the associated connection out of the passed value
if it's not a virConnect instance.
Thanks,
Cole
diff --git a/python/generator.py b/python/generator.py
index 01a17da..c706b19 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -962,8 +962,12 @@ def buildWrappers():
list = reference_keepers[classname]
for ref in list:
classes.write(" self.%s = None\n" % ref[1])
- if classname in [ "virDomain", "virNetwork", "virStoragePool", "virStorageVol" ]:
+ if classname in [ "virDomain", "virNetwork" ]:
classes.write(" self._conn = conn\n")
+ elif classname in [ "virStorageVol", "virStoragePool" ]:
+ classes.write(" self._conn = conn\n" + \
+ " if not isinstance(conn, virConnect):\n" + \
+ " self._conn = conn._conn\n")
classes.write(" if _obj != None:self._o = _obj;return\n")
classes.write(" self._o = None\n\n");
destruct=None
[libvirt] virDomainInterfaceStats why is there a size?
by Stefan de Konink
Another simple question: what is the reasoning behind the size field in
this call? I would really be a happy boy if anyone said:
if you pass path == NULL, it will fill your stats structure with up to 'size'
interfaces.
...but since this is not the case (yet), what is the reason behind it?
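For reference, a minimal sketch of how the call is used from C today (the
domain name "demo" and interface "vif1.0" below are made-up placeholders);
the last argument is simply the byte size of the stats struct being passed:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    virDomainPtr dom;
    virDomainInterfaceStatsStruct stats;

    if (!conn || !(dom = virDomainLookupByName(conn, "demo")))
        return 1;

    /* stats is filled in for the one interface named by the path argument */
    if (virDomainInterfaceStats(dom, "vif1.0", &stats, sizeof(stats)) == 0)
        printf("rx_bytes=%lld tx_bytes=%lld\n", stats.rx_bytes, stats.tx_bytes);

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}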
Stefan
[libvirt] anyone implementing host device enumeration?
by David Lively
Hi -
I'm about to start working on host device enumeration, along the
(HAL-ish) lines of what was discussed back in April:
https://www.redhat.com/archives/libvir-list/2008-April/msg00005.html
I know the XML details haven't been fully fleshed out, but there seems
to be agreement that it will be a fairly direct mapping from (a subset
of the) HAL info to the details that we need in the XML. Doubtless it
will take a while to figure out exactly what subset suffices (and, for
that matter, if everything needed is available via HAL ...), but I think
the work is well-defined for some of the obvious details (discussed in
the above thread) on which there's broad agreement.
Is anyone working on such an implementation?
Thanks,
Dave
[libvirt] Live migration sanity checks
by Chris Lalancette
All,
One thing that oVirt would like to have (and that might be useful for other
users) is a call that would do some basic sanity checking for live migration.
This call would go over to the remote libvirtd, do some checks, and
return whether we think migration is likely to succeed. Note that I say
"likely to succeed", because there are certainly things that can cause migration
to fail after we've made checks, but anything is better than what we have today
("try it and pray").
Now, in order for this call to be widely useful, I think we would have to allow
the caller to specify *which* of the available checks they would like to
perform, and then have some sort of return value that indicates whether there are
show-stopper problems, or just problems that may cause things to be sub-optimal
on the remote side. The caller could then decide what action it wants to take.
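To make that concrete, here is one purely hypothetical shape such a call could
take -- none of these names exist in libvirt, it is only a sketch of "select
checks via a flags mask, get back per-check results the caller can weigh up":

#include <libvirt/libvirt.h>

/* Hypothetical sketch only -- not a proposal for the final API. */
typedef enum {
    VIR_MIGRATE_CHECK_HYPERVISOR = (1 << 0),  /* matching hypervisor type/version */
    VIR_MIGRATE_CHECK_CPU        = (1 << 1),  /* arch, vendor, flags */
    VIR_MIGRATE_CHECK_MEMORY     = (1 << 2),  /* free memory, NUMA fit */
    VIR_MIGRATE_CHECK_NETWORK    = (1 << 3),  /* bridges, NIC models */
    VIR_MIGRATE_CHECK_STORAGE    = (1 << 4),  /* disk paths and devices */
} virMigrateCheckFlags;

typedef enum {
    VIR_MIGRATE_CHECK_OK = 0,   /* no problem found */
    VIR_MIGRATE_CHECK_WARN,     /* likely to work, but sub-optimal on the destination */
    VIR_MIGRATE_CHECK_FATAL,    /* migration is expected to fail */
} virMigrateCheckResult;

/* Fills one result per requested check; returns 0 on success. */
int virDomainMigrateCheck(virDomainPtr dom,
                          virConnectPtr dstconn,
                          unsigned int checkFlags,
                          virMigrateCheckResult *results,
                          size_t nresults);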
There is also a corollary to the "is it sane for me to migrate", which is, given
two hosts A and B, what's the lowest common denominator I need to run my guest
at so that migration will likely be successful between them. This could also be
used by management apps to make sure things are configured properly for the
guest before ever starting it.
The biggest problem with implementing these calls, however, is that there is no
comprehensive list of things we should check. This e-mail is an attempt to
write down some of the more obvious things we need to check, and to garner
discussion of things I might have missed. Once I have a proper list, I'll add
it to the TODO page on the libvirt Wiki so it's at least somewhere permanent.
Note that we don't have to implement *all* of these as a first go at this; if we
leave it open enough, we can add more checks as we go along without breaking
compatibility.
MIGRATION CRITERIA:
0) Matching hypervisors - seems obvious, but I'm not sure if we have these
checks today. Make sure we don't try to migrate Xen to KVM or vice-versa. We
also might want to think of at least warning the caller if you try to migrate
from a "newer" hypervisor (say, Xen 3.2) to an "older" hypervisor (say, Xen
3.1). That should, in theory, work, but maybe the caller would prefer not to do
that if possible. Rich has pointed out that KVM and Xen are accidentally
incompatible in libvirt, but we should make it explicit.
1) Matching CPU architectures - also obvious, but as far as I know today,
there's no checking for this (well, at least in Xen; I don't know about
libvirt). So you can happily attempt to migrate from i386 -> ia64, and watch
the fireworks. We also need to make sure you can't migrate x86_64 -> i386. I
believe i386 -> x86_64 should work, but this might be hypervisor dependent.
2) Matching CPU vendors - this one isn't a hard requirement; given the things
below, we may still be likely to succeed even if we go from AMD to Intel or
vice-versa. It still might be useful information for the caller to know.
3) CPU flags - the CPU flags of the destination *must* be a superset of the CPU
flags that were presented to the guest at startup. Many OSes and applications
check for CPU flags once at startup to choose optimized routines, and then never
check again; if they happened to select sse3, and sse3 is not there on the
destination, then they will (eventually) crash.
This is where the CPU masking technology and the lowest common denominator
libvirt call can make a big difference. If you make sure to mask some of the
CPU flags off of the guest when you are first creating it, then the destination
host just needs a superset of the flags that were presented to the guest at
bootup, which makes the problem easier.
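Incidentally, however the flag lists end up being gathered, the core of this
check is just a subset test; a generic sketch (not libvirt code):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch: true if every flag the guest saw at boot is also present on the
 * destination host. */
static bool cpu_flags_compatible(const char **guest_flags, size_t nguest,
                                 const char **dst_flags, size_t ndst)
{
    for (size_t i = 0; i < nguest; i++) {
        bool found = false;
        for (size_t j = 0; j < ndst; j++) {
            if (strcmp(guest_flags[i], dst_flags[j]) == 0) {
                found = true;
                break;
            }
        }
        if (!found)
            return false;   /* e.g. guest was given sse3, destination lacks it */
    }
    return true;
}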
4) Number of CPUs - generally, you want the destination to have at least one
physical CPU for each virtual CPU assigned to the guest. However, I can see use
cases where this might not be the case (temporary or emergency migrations). So
this would probably be a warning, and the caller can make the choice of whether
to proceed.
5a) Memory - non-NUMA -> non-NUMA - fairly straightforward. The destination
must have enough memory to fit the guest memory. We might want to do some
"extra" checking on the destination to make sure we aren't going to OOM the
destination as soon as we arrive.
5b) Memory - non-NUMA -> NUMA - A little trickier. There are no cpusets we
have to worry about, since we are coming from non-NUMA, but for absolute best
performance, we should try to fit the whole guest into a single NUMA node. Of
course, if that node is overloaded, that may be a bad idea. NUMA placement
problem, basically.
5c) Memory - NUMA -> non-NUMA - Less tricky. On the destination, all memory is
"equally" far away, so no need to worry about cpusets. Just have to make sure
that there is enough memory on the destination for the guest.
5d) Memory - NUMA -> NUMA - Tricky, just like case 5b). Need to determine if
there is enough memory in the machine first, then check if we can fit the guest
in a single node, and also check if we can match the cpuset from the source on
the destination.
6a) Networks - at the very least, the destination must have the same bridges
configured as the source side. Whether those bridges are hooked to the same
physical networks as the source or not is another question, and may be outside
the bounds of what we can/should check.
6b) Networks - we need to make sure that the device model on the remote side
supports the same devices as the source side. That is, if you have an e1000 NIC
on the source side, but your destination doesn't support it, you are going to
fail the migration.
7a) Disks - we have to make sure that all of the disks on the source side are
available on the destination side, at the same paths. To be entirely clear, we
have to make sure that the file on the destination side is the *same* file as on
the source side, not just a file with the same name. For traditional
file-based storage, the best we can do may be path names. For device names (like LVM,
actual disk partitions, etc.), we might be able to take advantage of device
enumeration APIs and validate that the device info is the same (UUID matching,
etc.).
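As an illustration of what "UUID matching" could look like on Linux, here is a
sketch that resolves a device path through the /dev/disk/by-uuid symlinks; it
assumes the device carries a filesystem UUID, so LVM volumes or raw partitions
may need a different identifier:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: find the /dev/disk/by-uuid entry that resolves to 'dev'
 * (e.g. "/dev/sdb1") and copy its name into 'uuid'. Returns 0 on success. */
static int lookup_dev_uuid(const char *dev, char *uuid, size_t len)
{
    DIR *dir = opendir("/dev/disk/by-uuid");
    struct dirent *ent;
    char path[PATH_MAX], target[PATH_MAX];
    int ret = -1;

    if (!dir)
        return -1;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "/dev/disk/by-uuid/%s", ent->d_name);
        if (realpath(path, target) && strcmp(target, dev) == 0) {
            snprintf(uuid, len, "%s", ent->d_name);
            ret = 0;
            break;
        }
    }
    closedir(dir);
    return ret;
}

The source and destination would then compare the returned identifiers rather
than the raw path names.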
7b) Disks - Additionally, we need to make sure that the device model on the
remote side supports the same devices as the source side. That is, if you have
a virtio drive on the source side, but your destination host doesn't support
virtio on the backend, you are going to fail. (virtio might be a bad example,
but there might be further things in the device model in the future that we
might not necessarily have on both ends).
------ That's the absolute basic criteria. More esoteric/less thought-out
criteria follow:
8) Time skew - this is less thought out at the moment, but if you calibrated
your lpj (loops per jiffy) at boot time, and now you migrate to a host with a
different clock frequency, your time will run either fast or slow compared to
what you expect.
Also synchronized vs. unsynchronized TSC could cause issues, etc.
9) PCI-passthrough - this is actually a check on the *source* side. If the
guest is using a PCI passthrough device, it *usually* doesn't make sense to
live migrate it. On the other hand, it looks like various groups are trying to
make this work (with the bonding of a PV NIC to a PCI-passthrough NIC), so we
need to keep that in mind, and not necessarily make this a hard failure.
10) MSR's?? - I've thought about this one before, but I'm not sure what the
answer is. Unfortunately, MSR's in virtualization are sort of done hodge-podge.
That is, some MSR's are emulated for the guests, some MSR's guests have direct
control over, and some aren't emulated at all. This one can get ugly fast; it's
probably something we want to leave until later.
11) CPUID?? - Not entirely sure about this one; there is a lot of
model-specific information encoded in the various CPUID calls (things like cache
size, cache line size, etc.). However, I don't know if CPUID instructions
are trapped and emulated under the different hypervisors, or if they are executed
directly on the processor itself. I guess if an OS or application called this at
startup, cached the information, and then checked again later it might get
upset, but it seems somewhat unlikely.
Things that I've missed?
Thanks,
Chris Lalancette
[libvirt] [PATCH] kvm: maxVCPU runtime detection
by Guido Günther
Hi,
With recent Linux kernels we can detect the maximum number of virtual
CPUs at runtime via an ioctl. A possible patch is attached. It does this on
every call to qemudGetMaxVCPUs. Would you prefer something that does
this only once in qemudStartup()?
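For anyone not familiar with the interface, the mechanism is roughly this (a
sketch of the idea, not the attached patch):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: ask /dev/kvm how many VCPUs it supports; the KVM API documents a
 * default of 4 when the extension is not reported. */
static int kvm_max_vcpus(void)
{
    int fd, ret;

    fd = open("/dev/kvm", O_RDONLY);
    if (fd < 0)
        return -1;
    ret = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
    close(fd);
    return ret > 0 ? ret : 4;
}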
Cheers,
-- Guido
[libvirt] LXC: making the private root filesystem more secure
by Daniel P. Berrange
When I wrote the private root filesystem stuff for LXC (which I just
committed) I noted that we couldn't actually make this secure, because
someone inside the chroot can just 'mknod' and access the host devices.
What I completely forgot was that cgroups as of 2.6.26 has device ACLs.
If we place every container in a cgroup (which was planned anyway), then
we can trivially prevent containers from accessing host devices.
One-time setup:
mount -t cgroup none /dev/cgroups
mkdir /dev/cgroups/libvirt
mkdir /dev/cgroups/libvirt/lxc
For each new container 'NAME':
mkdir /dev/cgroups/libvirt/lxc/{NAME}
echo "a" > /dev/cgroups/libvirt/lxc/{NAME}/devices.deny
echo "c 1:3 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
echo "c 1:5 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
echo "c 1:7 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
echo "c 5:1 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
echo "c 1:8 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
echo "c 1:9 rwm" > /dev/cgroups/libvirt/lxc/{NAME}/devices.allow
This denies all devices, and then allows null, zero, full, console, random
and urandom. Allowing use of 'random' is debatable.
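Doing the same from inside the driver is just a handful of writes to those
control files; a rough C sketch (it assumes the /dev/cgroups/libvirt/lxc layout
above and skips most error handling):

#include <stdio.h>

/* Sketch: write one value into a container's device ACL file, e.g.
 * ("myguest", "devices.deny", "a") or ("myguest", "devices.allow", "c 1:3 rwm"). */
static int lxc_cgroup_device_ctl(const char *name, const char *file, const char *val)
{
    char path[256];
    FILE *fp;

    snprintf(path, sizeof(path), "/dev/cgroups/libvirt/lxc/%s/%s", name, file);
    fp = fopen(path, "w");
    if (!fp)
        return -1;
    fprintf(fp, "%s\n", val);
    return fclose(fp) == 0 ? 0 : -1;
}

So the driver would issue one devices.deny write followed by one devices.allow
write per device in the list above.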
The 'devpts' namespace stuff is also needed to provide private PTYs.
The 'user' namespace stuff is needed to prevent an unprivileged user
in the host OS from killing off processes with the same UID inside the
container. There appear to be active patchsets for both of these being
discussed, so we're getting close to having a genuinely useful
container-based virt driver with LXC.
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] [PATCH] fix an error message and coding-style cleanup
by Nguyen Anh Quynh
Hi,
This patch fixes a "copy&paste" error message in the MemoryPeek() function
of the qemu driver.
And while I am at it, this patch does some clean-up on coding style.
Without this, my vim goes crazy while browsing the source.
Please enforce the coding style (more or less similar to GNU style?)
when merging.
Thanks,
Quynh
# diffstat fix5.patch
qemu_driver.c | 261 ++++++++++++++++++++++++++++++++++++++--------------------
1 file changed, 175 insertions(+), 86 deletions(-)
[libvirt] [PATCH] Use gcc style for structure fields declaration
by Nguyen Anh Quynh
Hi,
This patch uses gcc style to declare fields in virDriver structures.
This also makes it easier to spot bugs.
$ diffstat fix3.patch
lxc_driver.c | 118 ++++++++++++++++++++++++++--------------------------
openvz_driver.c | 118 ++++++++++++++++++++++++++--------------------------
qemu_driver.c | 126 ++++++++++++++++++++++++++++----------------------------
test.c | 118 ++++++++++++++++++++++++++--------------------------
4 files changed, 240 insertions(+), 240 deletions(-)
Thanks,
Quynh
[libvirt] [PATCH] Implement vol delete for disk pools
by Cole Robinson
The patch below implements virStorageVolDelete for volumes
on a disk pool.
The only interesting thing here is that parted wants a
partition number to delete, so we need to peel the number off the
end of the volume's target path, which will be of the form
'/dev/sda1' or similar (I assume; if not, it's still
better than having nothing implemented).
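In other words, something along these lines (a sketch of the idea, not the
patch itself):

#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: return the trailing partition number of a path such as
 * "/dev/sda1" -> 1, or -1 if the path has no trailing digits. */
static int partition_number(const char *path)
{
    size_t len = strlen(path);
    size_t i = len;

    while (i > 0 && isdigit((unsigned char)path[i - 1]))
        i--;
    if (i == len)
        return -1;      /* e.g. "/dev/sda" names the whole disk */
    return atoi(path + i);
}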
Thanks,
Cole
[libvirt] [PATCH]Fix minor issues in logical storage backend
by Jim Fehlig
Hi All!
I came across some problems trying to create a new LVM-based storage
pool using this config
<pool type="logical">
<name>test_vg</name>
<source>
<device path="/dev/sdb1"/>
</source>
<target>
<path>/dev/test_vg</path>
</target>
</pool>
The volume group did not previously exist, so I did:
virsh pool-define <above.xml>
virsh pool-build test_vg
pool-build failed since the backend logical storage driver does not have
VIR_STORAGE_BACKEND_POOL_SOURCE_DEVICE set in flags. Without this flag
set, the device element is never parsed in virStoragePoolDefParseDoc()
(storage_conf.c), causing pvcreate to fail since no physical volume is
specified.
After that came memory corruption caused by miscalculating the
size of the vgcreate command line :-).
This patch fixes both issues.
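For what it's worth, the sizing problem is the usual one of allocating the
command-line buffer from a guessed constant instead of the real argument
lengths; a generic sketch of doing it safely (illustration only, not the
libvirt code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: build "vgcreate <vgname> <dev> ..." with the buffer sized from the
 * actual strings being concatenated. */
static char *build_vgcreate_cmd(const char *vgname, const char **devs, size_t ndevs)
{
    size_t len = strlen("vgcreate ") + strlen(vgname) + 1;  /* +1 for the NUL */
    char *cmd;

    for (size_t i = 0; i < ndevs; i++)
        len += 1 + strlen(devs[i]);      /* space + device path */

    if (!(cmd = malloc(len)))
        return NULL;
    snprintf(cmd, len, "vgcreate %s", vgname);
    for (size_t i = 0; i < ndevs; i++) {
        strcat(cmd, " ");
        strcat(cmd, devs[i]);
    }
    return cmd;
}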
Regards,
Jim