[libvirt] Migration with non-shared storage
by Kenneth Nagin
Support for live migration between hosts that do not share storage was
added to qemu-kvm release 0.12.1.
It supports two flags:
  -b   migration without shared storage, with full disk copy
  -i   migration without shared storage, with incremental copy (the same base
       image shared between source and destination).
I suggest adding these flags to virDomainMigrate.
If I'm not mistaken, qemuMonitorTextMigrate is the function that actually
invokes the KVM monitor, so it would be necessary to pass the flags down to
qemuMonitorTextMigrate. But qemuMonitorTextMigrate does not have a flags
input parameter. I think the least disruptive way to support the new flags
is to reuse the existing "background" parameter to pass them. Of course,
this would require some changes to the upstream functions that are invoked
for migration.
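For concreteness, here is a minimal sketch of that idea (illustrative C only,
not libvirt code; the flag names are invented):

    /* Illustrative only -- not the actual libvirt implementation.  The idea
     * is to fold the new -b/-i options into the integer that today only
     * says "run the migration in the background" (the monitor's -d flag). */
    #include <stdio.h>

    #define MIG_BACKGROUND      (1 << 0)   /* existing background meaning: -d */
    #define MIG_NON_SHARED_DISK (1 << 1)   /* full disk copy: -b */
    #define MIG_NON_SHARED_INC  (1 << 2)   /* incremental disk copy: -i */

    static int
    build_migrate_command(int flags, const char *uri, char *buf, size_t buflen)
    {
        /* Produces e.g.: migrate -d -b "tcp:host:port" */
        int n = snprintf(buf, buflen, "migrate%s%s%s \"%s\"",
                         (flags & MIG_BACKGROUND)      ? " -d" : "",
                         (flags & MIG_NON_SHARED_DISK) ? " -b" : "",
                         (flags & MIG_NON_SHARED_INC)  ? " -i" : "",
                         uri);
        return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
    }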
What do you think?
Kenneth Nagin
[libvirt] [PATCH] openvz_conf.c: don't dereference NULL upon failure
by Jim Meyering
"dom" is set to NULL within the while loop:
virDomainObjUnlock(dom);
dom = NULL;
If on a subsequent iteration something fails,
we goto "cleanup" or "no_memory", both of which
have us run this code:
fclose(fp);
virDomainObjUnref(dom);
return -1;
And virDomainObjUnref would then dereference the NULL "dom".
From 3971ff17c7e9f1ddbc443d48b86fe6ba60a2d4a0 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Tue, 15 Dec 2009 16:16:57 +0100
Subject: [PATCH] openvz_conf.c: don't dereference NULL upon failure
* src/openvz/openvz_conf.c (openvzLoadDomains): Avoid NULL deref
of "dom".
---
src/openvz/openvz_conf.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index 7e9abbf..43bbaf2 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -535,7 +535,8 @@ int openvzLoadDomains(struct openvz_driver *driver) {
 cleanup:
     fclose(fp);
-    virDomainObjUnref(dom);
+    if (dom)
+        virDomainObjUnref(dom);
     return -1;
 }
--
1.6.6.rc2.275.g51e2d
[libvirt] [PATCH 0/3] Block assignment of PCI devices below non-ACS capable switch
by Jiri Denemark
Hi.
This is a patchset for blocking assignment of PCI devices below a non-ACS
capable switch. In case the user still wants to assign such a device even
though it might not be safe, she can specify a permissive='yes' attribute on
<hostdev>, as sketched below.
Special thanks to Chris L., who created a standalone program implementing the
PCI check. This code is a port of that check to libvirt.
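For illustration only (not taken from the patches; the device address values
below are made up), the attribute would sit on the <hostdev> element roughly
like this:

  <hostdev mode='subsystem' type='pci' managed='yes' permissive='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x12' function='0x5'/>
    </source>
  </hostdev>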
Jiri Denemark (3):
Tests for ACS in PCIe switches
New 'permissive' attribute for hostdev
Use pciDeviceIsAssignable in qemu driver
docs/schemas/domain.rng | 8 +++
src/conf/domain_conf.c | 14 ++++-
src/conf/domain_conf.h | 1 +
src/libvirt_private.syms | 3 +
src/qemu/qemu_driver.c | 9 +++-
src/util/pci.c | 147 ++++++++++++++++++++++++++++++++++++++++++++++
src/util/pci.h | 7 ++
7 files changed, 186 insertions(+), 3 deletions(-)
Re: [libvirt] New libvirt API for domain memory statistics reporting (V3)
by Thomas Treutner
On Wednesday 23 December 2009 19:42:20 Adam Litke wrote:
> Attached to this email are two patches:
>
> memstats-kernel-2.6.32-rc5.patch:
> Applies to 2.6.32-rc5 which should be a capable-enough kernel for
> testing and development.
>
> memstats-qemu-0.12.1.patch:
> Applies to qemu-0.12.1 which can be found here:
> http://mirrors.igsobe.com/nongnu/qemu/qemu-0.12.1.tar.gz
> Unfortunately, it is not trivial for me to port this work to 0.11.0 so
> you will have to find a resolution to your BIOS woes first and then use
> this version. Who knows, it might already be fixed in 0.12.1.
I tried with qemu-*kvm*-0.12.1.1: no memstats yet, but dynamically setting the
amount of memory now doesn't work at all. I'll try to isolate the problem; I
suspect it comes from qemu-kvm-0.12.1.1.
I can't use qemu-0.12.1:
$ ./configure --enable-kvm --disable-xen
#error Missing KVM capability KVM_CAP_DESTROY_MEMORY_REGION_WORKS
NOTE: To enable KVM support, update your kernel to 2.6.29+ or install
recent kvm-kmod from http://sourceforge.net/projects/kvm.
ERROR
ERROR: User requested feature kvm
ERROR: configure was not able to find it
ERROR
$ uname -r
2.6.32.2
$ modinfo kvm_amd
...
vermagic: 2.6.32.2 SMP mod_unload modversions
....
kr,tom
[libvirt] [PATCH 0/12] Standardized device addressing & SCSI controller/disk hotplug
by Daniel P. Berrange
This patch series is a combination of a series done by
Wolfgang Mauerer to support proper SCSI drive hotplug
and new work by myself to introduce generic addressing
for all devices.
Wolfgang's most recent postings were:
http://www.redhat.com/archives/libvir-list/2009-November/msg00574.html
http://www.redhat.com/archives/libvir-list/2009-November/msg00701.html
When testing that series I came across a few minor issues,
but more importantly it made me realize how important it is
that we introduce explicit device addressing in our XML format.
Wolfgang's series had added a new element for SCSI controllers,
with PCI address info about the controller:
<controller type='ide' index='0'>
  <address domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
It had also extended the <disk> element to include a SCSI
controller name and bus/unit IDs, e.g.:
<disk>
  ...
  <controller name="<identifier>" pci_addr="<addr>" bus="<number>" unit="<number>"/>
</disk>
I then remembered that, to support NIC/VirtIO disk/hostdev unplug,
Mark M had previously added PCI address information to the internal
XML state files for <interface>, <disk> and <hostdev> elements.
All of these places using PCI addresses suffered from the fact
that we only knew the addresses of devices we'd hotplugged, and had
no idea of the addresses of devices present at boot time.
A further issue with the addition of <controller> to the <disk>
element is that not all disk types have a concept of a controller.
For example, every VirtIO disk is in fact a full PCI device, so
having a <controller> element in <disk> is not meaningful for
VirtIO.
The solution that I believe solves all our problems, is to add a
generic <address> element to every single device. This address
element contains info about how the device is associated with the
logical parent device. There will be three types of address in
this series of patches, though we could imagine adding more later.
- PCI address - domain, bus, slot, function
- USB address - bus, device
- Drive address - controller, bus, unit
Anything that is a PCI device will obviously use PCI addresses.
- PCI: sound, audio, video, all virtio, watchdog, disk controllers
- USB: all usb devices
- Drive: SCSI, IDE & Floppy disks, but *not* VirtIO/Xen disks
Xen paravirt devices aren't really covered in this scheme. I
could imagine adding a fourth address type for Xen. This would in
fact let us handle driver domains - ie a backend outside dom0.
I won't deal with Xen in this series though.
The XML for each address type looks like
<address type='pci' mode='static' domain='0x0000' bus='0x1e' slot='0x07' function='0x0'/>
<address type='usb' mode='dynamic' bus='007' dev='003'/>
<address type='drive' mode='dynamic' controller='1' bus='0' unit='5'/>
The 'mode' attribute for any of them is allowed to be either
'static' or 'dynamic'. A static address is one specified by
the end user when defining the XML, while a dynamic address is
one automatically chosen by libvirt/QEMU every time a guest
boots. The idea of static addresses is to allow management
apps to guarantee that PCI device & drive numbering never
changes. This series does not actually implement static
addressing for PCI yet, because it requires that we change
the way we generate QEMU command line arguments. It does
do static addressing for disks.
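As a purely illustrative example of static drive addressing (the file path
and unit number below are made up, following the <address> syntax proposed
above):

  <disk type='file' device='disk'>
    <source file='/var/lib/libvirt/images/data.img'/>
    <target dev='sdb' bus='scsi'/>
    <address type='drive' mode='static' controller='0' bus='0' unit='1'/>
  </disk>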
libvirt itself will auto-assign all drive addresses, and QEMU
will auto-assign PCI addresses in dynamic mode. When starting
a guest VM we run 'info pci' to get a list of PCI vendor/product
IDs and matching PCI addresses. We then attempt to match those
up with the devices we specified to QEMU. It sounds nasty, but
it actually works fairly well. This means we also now make it
possible to hot-unplug any device, even those the VM was initially
booted with.
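The matching step is roughly the following (illustrative C only, not the
actual libvirt code; the struct and field names are made up):

    /* Match the vendor/product IDs reported by "info pci" against the
     * devices we asked QEMU to create, recording the slot QEMU picked. */
    #include <stddef.h>

    struct pci_addr { int domain, bus, slot, function; };
    struct pci_dev  { unsigned vendor, product; struct pci_addr addr; };
    struct vm_dev   { unsigned vendor, product; int has_addr; struct pci_addr addr; };

    static void
    assign_dynamic_addresses(struct vm_dev *defs, size_t ndefs,
                             const struct pci_dev *found, size_t nfound)
    {
        for (size_t i = 0; i < nfound; i++) {
            for (size_t j = 0; j < ndefs; j++) {
                if (!defs[j].has_addr &&
                    defs[j].vendor == found[i].vendor &&
                    defs[j].product == found[i].product) {
                    defs[j].addr = found[i].addr;   /* remember QEMU's choice */
                    defs[j].has_addr = 1;
                    break;                          /* one address per device */
                }
            }
        }
    }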
There are two ways I can envisage mgmt apps using this address
functionality (a rough virsh sketch of the first follows the list):
- Boot a guest with no addresses specified, grab the XML and
change all 'dynamic' attrs to 'static' and then define the
persistent config with this. The addresses will then be
unchanged forever more
- Explicitly give a full list of addresses the very first time
a guest is created.
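In virsh terms the first approach might look roughly like this (the domain
name is made up; the edit step is manual):

  $ virsh dumpxml plain > plain.xml
    ... edit plain.xml, changing mode='dynamic' to mode='static' on each <address> ...
  $ virsh define plain.xml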
There is one small issue with all this: we need to know every
PCI device in the guest. It turns out there are a handful of
devices in QEMU we don't represent in XML yet:
- Virtio balloon device
- Virtio console (this is easy to address with matt's patches)
- ISA bridge
- USB controller
- Some kind of PCI bridge (not clear what this is; it has
PCI ID 8086:7113)
If a management application is to be able to fully control
static PCI addressing, we need to represent these somehow,
so apps can give them addresses. Technically we could get
away with not representing the ISA/PCI bridge since QEMU
always gives them the first PCI slot no matter what, but we
still need the VirtIO & USB devices dealt with.
Finally, here is an example of a guest running with a huge
number of devices. Notice how we've auto-detected the PCI
address of every device and every disk. In particular,
notice how VirtIO disks got a PCI address, while SCSI
disks got a drive address.
<domain type='kvm' id='2'>
  <name>plain</name>
  <uuid>c7a1edbd-edaf-9455-926a-d65c16db1809</uuid>
  <memory>219200</memory>
  <currentMemory>219136</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='i686' machine='pc-0.11'>hvm</type>
    <kernel>/home/berrange/vmlinuz-PAE</kernel>
    <initrd>/home/berrange/initrd-PAE.img</initrd>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/berrange/VirtualMachines/plain.qcow'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/home/berrange/gpxe.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' mode='dynamic' controller='0' bus='1' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/home/berrange/output.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' mode='dynamic' controller='0' bus='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/home/berrange/output.img'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' mode='dynamic' controller='0' bus='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/home/berrange/output.img'/>
      <target dev='sdf' bus='scsi'/>
      <address type='drive' mode='dynamic' controller='0' bus='0' unit='5'/>
    </disk>
    <controller type='scsi' index='0'>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='fdc' index='0'/>
    <interface type='user'>
      <mac address='52:54:00:5b:ef:21'/>
      <model type='ne2k_pci'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='user'>
      <mac address='52:54:00:1c:dc:98'/>
      <model type='virtio'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='user'>
      <mac address='52:54:00:f7:c5:0e'/>
      <model type='e1000'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='user'>
      <mac address='52:54:00:56:6c:55'/>
      <model type='pcnet'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='user'>
      <mac address='52:54:00:ca:0d:58'/>
      <model type='rtl8139'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/5'/>
      <target port='0'/>
    </serial>
    <console type='pty' tty='/dev/pts/5'>
      <source path='/dev/pts/5'/>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes'/>
    <sound model='ac97'>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </sound>
    <sound model='es1370'>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <watchdog model='i6300esb' action='reset'>
      <address type='pci' mode='dynamic' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </watchdog>
  </devices>
  <seclabel type='dynamic' model='selinux'>
    <label>system_u:system_r:svirt_t:s0:c181,c286</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c181,c286</imagelabel>
  </seclabel>
</domain>
Daniel
[libvirt] First approximation of creating volumes directly with desired uid
by Laine Stump
I've made an attempt to create storage volumes directly with the
desired uid/gid (by forking a new process, calling setuid/setgid in that
process, and then creating the file). Since it's sure to get ripped
apart, I've put it up on gitorious rather than sending patches to the list.
The repository is:
git://gitorious.org/~laine/libvirt/laine-staging.git
and the branch is (in quite a non-sequitur fashion) "xml2xmltest"
Only the last 3 commits on the branch are related to this topic.
The first adds uid and gid args to virRun (and all related functions) so
that new processes can be run as a different user. This is necessary for
the cases where we call an external program to create the image
(qemu-img, for example).
The second commit adds two new functions to util.c: virFileCreate and
virDirCreate. In the case that the current process is running as root,
and the caller has requested a different uid or gid for the new
file/directory, these functions do the proper fork dance to get this
done and return proper status to the caller (roughly sketched below).
The third commit uses the enhanced virRun and the two new functions to
change the way that storage volumes are created.
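The "fork dance" mentioned above is, in rough outline, something like the
following (an illustrative sketch only, not the actual virFileCreate code):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int
    create_file_as(const char *path, mode_t mode, uid_t uid, gid_t gid)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {                      /* child: drop privileges, create file */
            if (setgid(gid) < 0 || setuid(uid) < 0)
                _exit(1);
            int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, mode);
            if (fd < 0)
                _exit(1);
            close(fd);
            _exit(0);
        }
        int status;                          /* parent: wait and relay the result */
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }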
I've noted some of my concerns about doing things this way in a bugzilla
report about the problem I'm trying to fix:
https://bugzilla.redhat.com/show_bug.cgi?id=547543
[libvirt] virsh -c xen:/// list: Connection refused
by Gerry Reno
I'm running the 2.6.31.6 pv_ops dom0 kernel, libvirt 0.7.0 and Xen 3.4.1.
When I try connecting to the Xen hypervisor using virsh it gives me a
"Connection refused":
root@grp-01-23-02:~# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1020     4     r-----     858.3
root@grp-01-23-02:~# virsh -c xen:/// list
Connecting to uri: xen:///
error: unable to connect to 'localhost:8000': Connection refused
error: failed to connect to the hypervisor
How can I get virsh to connect with Xen?
-Gerry