[libvirt] [PATCH] Fix python error reporting for some storage operations
by Cole Robinson
In the python bindings, all vir* classes expect to be
passed a virConnect object when instantiated. Before
the storage stuff, these classes were only instantiated
in virConnect methods, so the generator is hardcoded to
pass 'self' as the connection instance to these classes.
The problem is that some methods return pool or vol
instances but aren't called from virConnect: you can
look up a storage volume's associated pool, and you can look up
volumes from a pool. In these cases passing 'self' doesn't
give the vir* instance a connection, so when it comes time
to raise an exception crap hits the fan.
Rather than rework the generator to accommodate this edge
case, I just fixed the init functions for virStorage* to
pull the associated connection out of the passed value
if it's not a virConnect instance.
Thanks,
Cole
diff --git a/python/generator.py b/python/generator.py
index 01a17da..c706b19 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -962,8 +962,12 @@ def buildWrappers():
list = reference_keepers[classname]
for ref in list:
classes.write(" self.%s = None\n" % ref[1])
- if classname in [ "virDomain", "virNetwork", "virStoragePool", "virStorageVol" ]:
+ if classname in [ "virDomain", "virNetwork" ]:
classes.write(" self._conn = conn\n")
+ elif classname in [ "virStorageVol", "virStoragePool" ]:
+ classes.write(" self._conn = conn\n" + \
+ " if not isinstance(conn, virConnect):\n" + \
+ " self._conn = conn._conn\n")
classes.write(" if _obj != None:self._o = _obj;return\n")
classes.write(" self._o = None\n\n");
destruct=None
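To illustrate the effect of the change, the generated __init__ for
virStorageVol would then look roughly like this (a minimal sketch of the
generated code, not the generator's literal output):

class virStorageVol:
    def __init__(self, conn, _obj=None):
        # 'conn' may be a virConnect, or another vir* wrapper such as a
        # virStoragePool when a volume is looked up through a pool.
        self._conn = conn
        if not isinstance(conn, virConnect):
            # Pull the real connection out of the wrapping object so that
            # error reporting has a valid virConnect to work with.
            self._conn = conn._conn
        if _obj != None:
            self._o = _obj
            return
        self._o = None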
[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people that are who can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu but I expect it will be soon
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This
behaves in much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user-visible switch to use vhost or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
The more interesting invocation of vhost-net though is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there is a small number of network devices so
this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can be exposed as many
virtual functions (VFs). There are a few restrictions here though. The
biggest is that currently, you can only change the number of VFs by
reloading a kernel module so it's really a parameter that must be set at
startup time.
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
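For example, the guest interface element might then read something like the
following (purely illustrative; the exact schema is not settled):

<interface type='network'>
  <source network='eth0'/>
</interface>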
Another model would be to have libvirt see an SR-IOV adapter as a
network pool where it handles all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] [PATCH 0/4] Multiple problems with saving to block devices
by Daniel P. Berrange
This patch series makes it possible to save to a block device,
instead of a plain file. There were multiple problems
- When save failed, we might de-reference a NULL pointer
- When save failed, we unlinked the device node!!
- The approach of using >> to append doesn't work with block devices
- CGroups was blocking QEMU access to the block device when enabled
One remaining problem is not in libvirt, but rather QEMU. The QEMU
exec: based migration often fails to detect failure of the command
and will thus hang forever attempting a migration that'll never
succeed! Fortunately you can now work around this in libvirt using
the virsh domjobabort command
[libvirt] qemu-namespace handling?
by Philipp Hahn
Hello,
some time ago I had to manipulate the domain XML description using Python's
ElementTree XML implementation, which had problems generating the right format
for libvirt: ElementTree only supports adding QName elements (that
is "{http://libvirt.org/schemas/domain/qemu/1.0}commandline"), which
internally creates a temporary binding of this namespace to the "ns0"
prefix.
My work-around for ElementTree was to add the namespace mapping for "qemu"
to "http://libvirt.org/schemas/domain/qemu/1.0" to ET's internal mapping table
and to add an "xmlns:qemu" attribute by hand:
ET._namespace_map[QEMU_URI] = 'qemu'
domain.attrib['xmlns:qemu'] = QEMU_URI
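Put together, a runnable sketch of the workaround might look like this
(QEMU_URI and the element names are taken from above; ET._namespace_map is an
ElementTree internal, so exact serialization behaviour can vary between
versions, and the qemu:arg value is just an arbitrary example):

import xml.etree.ElementTree as ET

QEMU_URI = "http://libvirt.org/schemas/domain/qemu/1.0"

# Teach ElementTree to use the "qemu" prefix instead of "ns0" for this URI.
ET._namespace_map[QEMU_URI] = 'qemu'

domain = ET.Element('domain', {'type': 'kvm'})
# Declare the namespace on the root node by hand, where libvirt expects it.
domain.attrib['xmlns:qemu'] = QEMU_URI

cmdline = ET.SubElement(domain, '{%s}commandline' % QEMU_URI)
ET.SubElement(cmdline, '{%s}arg' % QEMU_URI, {'value': '-snapshot'})

print(ET.tostring(domain))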
libvirt, on the other hand, expects the prefix to be "qemu" and only checks
that this prefix is bound to the URI mentioned above at the root node.
The following examples would be valid XML, but are not accepted by libvirt:
<domain>...
  <qemu:commandline xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  ...</qemu:commandline>
</domain>

<domain xmlns:ns0="http://libvirt.org/schemas/domain/qemu/1.0">...
  <ns0:commandline>
  ...</ns0:commandline>
</domain>
The following (esoteric) example might be wrongly accepted by libvirt
(untested):
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <qemu:commandline xmlns:qemu="urn:foo">
  ...</qemu:commandline>
</domain>
I don't know if this is worth fixing, but I encountered the first two
problems myself and had to spend some time figuring out what I did wrong. So
at least I want to share my findings with others, so they don't make the same
mistake.
Sincerely
Philipp Hahn
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 D-28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/
[libvirt] LPC2011 Virtualization Micro Conf
by Jes Sorensen
Hi,
Following the success of last year's Virtualization micro-conference track
at Linux Plumbers 2010, I have agreed to organize a similar track
for Linux Plumbers 2011 in Santa Rosa. Please see the official Linux
Plumbers 2011 website for full details about the conference:
http://www.linuxplumbersconf.org/2011/
The Linux Plumbers 2011 Virtualization track focuses on general
free software Linux virtualization. It is not reserved for a specific
hypervisor, but will cover general virtualization issues and in
particular collaboration amongst projects. This includes KVM,
Xen, QEMU, containers, etc.
Deadline:
---------
The deadline for submissions is April 30th. Please visit the following
link to submit your proposal:
http://www.linuxplumbersconf.org/2011/ocw/events/LPC2011MC/proposals
Example topics:
---------------
- Kernel and Hypervisor KVM/QEMU/Xen interaction
- QEMU integration, sharing of code between the different projects
- IO Performance and scalability
- Live Migration
- Managing and supporting enterprise storage
- Support for new hardware features, and/or providing guest access to
these features.
- Guest agents
- Virtualization management tools, libvirt, etc.
- Desktop integration
- Consumer Electronics device emulation
- Custom platform configuration and coordination with the kernel
Audience:
---------
Virtualization hypervisor developers, developers of virtualization
management tools and applications, embedded virtualization developers,
vendors and others.
Best regards,
Jes
[libvirt] libvirt(-java): virDomainMigrateSetMaxDowntime
by Thomas Treutner
Hi,
I'm facing some trouble with virDomainMigrate and
virDomainMigrateSetMaxDowntime. The core problem is that KVM's default
value for the maximum allowed downtime is 30ms (max_downtime in
migration.c, where it's in nanoseconds; 0.12.3), which is too low for my VMs
when they're busy (~50% CPU utilization and above). Migrations then take
literally forever; I had to abort them after 15 minutes or so. I'm using
GBit Ethernet, so plenty of bandwidth should be available. Increasing the
allowed downtime to 50ms seems to help, but I have not tested situations
where the VM is completely utilized. Anyway, the default value is too
low for me, so I tried virDomainMigrateSetMaxDowntime, or rather its Java
wrapper function.
Here I'm facing a problem I can overcome only with a quite crude hack:
org.libvirt.Domain.migrate(..) blocks until the migration is done, which
is of course reasonable. So I tried calling migrateSetMaxDowntime(..)
before migrating, causing an error:
"Requested operation is not valid: domain is not being migrated"
This tells me that calling migrateSetMaxDowntime is only allowed during
migrations. As I'm migrating VMs automatically and without any user
intervention, I'd need to create some glue code that runs in an extra
thread, waiting "some time" and hoping that the migration has already been
kicked off in the main thread, and then calling migrateSetMaxDowntime. I'd
like to avoid such quirks in the long run, if possible.
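In Python-binding terms (the report uses the Java wrapper, but the shape of
the hack is the same), that glue code would look roughly like the sketch
below; the URIs, the domain name and the 5-second guess at when the migration
has started are of course placeholders:

import threading
import time
import libvirt

def bump_downtime(dom, downtime_ms, delay):
    # Crude hack: wait and hope the migration has been kicked off by the
    # main thread, then raise the allowed downtime (in milliseconds).
    time.sleep(delay)
    try:
        dom.migrateSetMaxDowntime(downtime_ms, 0)
    except libvirt.libvirtError as e:
        print("setting max downtime failed: %s" % e)

src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://target/system")
dom = src.lookupByName("myguest")

t = threading.Thread(target=bump_downtime, args=(dom, 50, 5))
t.start()
# migrate() blocks until the migration has finished.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
t.join()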
So my question: would it be possible to extend the migrate() method
and the underlying virDomainMigrate() function with an optional maxDowntime parameter
that is passed down as QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME so that
qemuDomainWaitForMigrationComplete would set the value? Or are there
easier ways?
Thanks and regards,
-t
Re: [libvirt] migration of vnlink VMs
by Oved Ourfalli
----- Original Message -----
> From: "Laine Stump" <lstump(a)redhat.com>
> To: "Oved Ourfalli" <ovedo(a)redhat.com>
> Cc: "Ayal Baron" <abaron(a)redhat.com>, "Barak Azulay" <bazulay(a)redhat.com>, "Shahar Havivi" <shaharh(a)redhat.com>,
> "Itamar Heim" <iheim(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>
> Sent: Thursday, April 28, 2011 10:20:35 AM
> Subject: Re: migration of vnlink VMs
> Oved,
>
> Would it be okay to repost this message to the thread on libvir-list
> so
> that other parties can add their thoughts?
>
Of course. I'm sending my answer to the libvirt list.
> On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
> > Laine, hello.
> >
> > We read your proposal for abstraction of guest<--> host network
> > connection in libvirt.
> >
> > You have an open issue there regarding the vepa/vnlink attributes:
> > "3) What about the parameters in the<virtualport> element that are
> > currently used by vepa/vnlink. Do those belong with the host, or
> > with the guest?"
> >
> > The parameters for the virtualport element should be on the guest,
> > and not the host, because a specific interface can run multiple
> > profiles,
>
> Are you talking about host interface or guest interface? If you mean
> that multiple different profiles can be used when connecting to a
> particular switch - as long as there are only a few different
> profiles,
> rather than each guest having its own unique profile, then it still
> seems better to have the port profile live with the network definition
> (and just define multiple networks, one for each port profile).
>
The profile names can change regularly, so it looks like it will be better to put them at the guest level, so that the host's network definition won't have to be changed on all hosts once something has changed in the profiles.
Also, you would duplicate data by writing all the profile names on all the hosts that are connected to the vn-link/vepa switch.
>
> > so it would be a mistake to define a profile as interface-specific
> > on the host. Moreover, putting it at the guest level will
> > enable us in the future (if supported by libvirt/qemu) to migrate
> > a vm from a host with vepa/vnlink interfaces, to another host with
> > a bridge, for example.
>
> It seems to me like doing exactly the opposite would make it easier to
> migrate to a host that used a different kind of switching (from vepa
> to
> vnlink, or from a bridged interface to vepa, etc), since the port
> profile required for a particular host's network would be at the host
> waiting to be used.
You are right, but we would want the option to prevent that from happening in case we don't want to allow it.
We can make the ability to migrate between different network types configurable, and we would like an easy way to tell libvirt - "please allow/don't allow it".
>
> > So, in the networks at the host level you will have:
> > <network type='direct'>
> > <name>red-network</name>
> > <source mode='vepa'>
> > <pool>
> > <interface>
> > <name>eth0</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth4</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth18</name>
> > .....
> > </interface>
> > </pool>
> > </source>
> > </network>
> >
> > And in the guest you will have (for vepa):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbg">
> > <parameters managerid="11" typeid="1193047" typeidversion="2"
> > instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
> > </virtualport>
> > </interface>
> >
> > Or (for vnlink):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbh">
> > <parameters profile_name="profile1"/>
> > </virtualport>
> > </interface>
>
> This illustrates the problem I was wondering about - in your example
> it
> would not be possible for the guest to migrate from the host using a
> vepa switch to the host using a vnlink switch (and it would be
> possible
You are right. When trying to migrate between vepa and vnlink there will be missing attributes in each in case we leave it on the host.
> to migrate to a host using a standard bridge only if the virtualport
> element was ignored). If the virtualport element lived with the
> network
> definition of red-network on each host, it could be migrated without
> problem.
>
> The only problematic thing would be if any of the attributes within
> <parameters> was unique for each guest (I don't know anything about
> the
> individual attributes, but "instanceid" sounds like it might be
> different for each guest).
>
> > Then, when migrating from a vepa/vnlink host to another vepa/vnlink
> > host containing red-network, the profile attributes will be
> > available at the guest domain xml.
> > In case the target host has a red-network, which isn't vepa/vnlink,
> > we want to be able to choose whether to make the use of the profile
> > attributes optional (i.e., libvirt won't fail in case of migrating
> > to a network of another type), or mandatory (i.e., libvirt will fail
> > in case of migration to a non-vepa/vnlink network).
> >
> > We have something similar in CPU flags:
> > <cpu match="exact">
> > <model>qemu64</model>
> > <topology sockets="S" cores="C" threads="T"/>
> > <feature policy="require/optional/disable......"
> > name="sse2"/>
> > </cpu>
>
> In this analogy, does "CPU flags" == "mode (vepa/vnlink/bridge)" or
> does
> "CPU flags" == "virtualport parameters" ? It seems like what you're
> wanting can be satisfied by simply not defining "red-network" on the
> hosts that don't have the proper networking setup available (maybe
> what
> you *really* want to call it is "red-vnlink-network").
What I meant to say there is that we would like to have the ability to say whether an attribute must be used or not.
The issues you mention are indeed interesting. I'm cc-ing libvirt-list to see what other people think.
Putting it on the guest will indeed make it problematic to migrate between networks that need different parameters (vnlink/vepa for example).
Oved
[libvirt] [BUG] Xen->libvirt: localtime reported as UTC
by Philipp Hahn
Hello,
just a report, no fix for that bug yet.
If I create a domain and set <clock offset='localtime'/>, that information is
correctly translated to Xend's sxpr data, but on reading it back I get it
reported as 'utc':
# virsh dumpxml 85664d3f-68dd-a4c2-4d2f-be7f276b95f0 | grep clock
<clock offset='utc'/>
# gfind localtime
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (platform
((device_model /usr/lib64/xen/bin/qemu-dm) (localtime 1)))
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (localtime 1)
BYtE
Philipp
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 D-28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/