[libvirt] virStoragePoolGetXMLDesc - how to specify format type
by Sharadha Prabhakar (3P)
Hi,
I'm trying to write virStoragePoolGetXMLDesc() for XenAPI remote storage.
I'd like to produce XML similar to this:
<pool type="netfs">
<name>....</name>
<uuid>....</uuid>
<source>
<format type="nfs"/>
<host name="telos"/>
<dir path="/images"/>
</source>
</pool>
I'm trying to fill in the virStoragePoolDefPtr for this.
I need to know whether struct _virStoragePoolSource->format
is the one to fill in for format type="nfs".
It is seemingly an integer. Is there an enum of format types for
NFS and ext3? I couldn't find one in storage_conf.h.
My next query: when would I have to fill in the device path? What is it used for,
and which pool types use it for remote storage?
Could someone explain?
Regards,
Sharadha
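For reference, the sample XML above can be produced with a few lines of Python's ElementTree; the element and attribute names come straight from the sample, and the argument values are placeholders:

```python
import xml.etree.ElementTree as ET

def build_netfs_pool_xml(name, uuid, host, path):
    """Build a <pool type='netfs'> document shaped like the sample above."""
    pool = ET.Element("pool", type="netfs")
    ET.SubElement(pool, "name").text = name
    ET.SubElement(pool, "uuid").text = uuid
    source = ET.SubElement(pool, "source")
    # The <format type="nfs"/> element the question is about:
    ET.SubElement(source, "format", type="nfs")
    ET.SubElement(source, "host", name=host)
    ET.SubElement(source, "dir", path=path)
    return ET.tostring(pool, encoding="unicode")

print(build_netfs_pool_xml("mypool", "0000-0000", "telos", "/images"))
```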
14 years, 8 months
[libvirt] Assigning Static IP through libvirt.
by Kumar L Srikanth-B22348
Hi,
I want to assign a static IP address to one of the interfaces created
through libvirt. Can anyone please let me know the network XML format for this?
I explored a lot of sites on this, but I only found how to assign an IP
address through DHCP rather than statically.
Can you please help me?
Regards,
Srikanth.
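For what it's worth, the closest network-XML mechanism is not a truly DHCP-free static assignment but a fixed DHCP reservation keyed on the guest's MAC address. A Python sketch that builds such a definition (all addresses and names here are illustrative, not from the thread):

```python
import xml.etree.ElementTree as ET

def build_network_xml(name, bridge, gw, netmask, mac, fixed_ip):
    """Build a <network> definition with a fixed DHCP reservation for one MAC."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    ET.SubElement(net, "bridge", name=bridge)
    ip = ET.SubElement(net, "ip", address=gw, netmask=netmask)
    dhcp = ET.SubElement(ip, "dhcp")
    ET.SubElement(dhcp, "range", start="192.168.100.2", end="192.168.100.254")
    # The reservation: this MAC always receives fixed_ip from the DHCP server.
    ET.SubElement(dhcp, "host", mac=mac, ip=fixed_ip)
    return ET.tostring(net, encoding="unicode")

print(build_network_xml("statnet", "virbr1", "192.168.100.1",
                        "255.255.255.0", "52:54:00:12:34:56", "192.168.100.10"))
```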
[libvirt] [PATCH] docs: <pre> cannot be nested in <p>
by Matthias Bolte
xsltproc complained about this.
---
docs/hacking.html.in | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/docs/hacking.html.in b/docs/hacking.html.in
index f5ec635..03a1bee 100644
--- a/docs/hacking.html.in
+++ b/docs/hacking.html.in
@@ -128,20 +128,19 @@
<p>
For variadic macros, stick with C99 syntax:
+ </p>
<pre>
#define vshPrint(_ctl, ...) fprintf(stdout, __VA_ARGS__)
</pre>
- </p>
<p>Use parenthesis when checking if a macro is defined, and use
indentation to track nesting:
-
+ </p>
<pre>
#if defined(HAVE_POSIX_FALLOCATE) && !defined(HAVE_FALLOCATE)
# define fallocate(a,ignored,b,c) posix_fallocate(a,b,c)
#endif
</pre>
- </p>
<h2><a href="types">C types</a></h2>
@@ -552,9 +551,11 @@
also rebuild locally, run 'make check syntax-check', and make sure you
don't raise errors. Try to look for warnings too; for example,
configure with
+ </p>
<pre>
--enable-compile-warnings=error
</pre>
+ <p>
which adds -Werror to compile flags, so no warnings get missed
</p>
--
1.6.3.3
Re: [libvirt] virsh stucked in virDomainCreate
by Matthias Bolte
[Let's keep it on the list, so others could help too]
2010/3/15 Anthony Lannuzel <lannuzel(a)gmail.com>:
> On Sat, Mar 13, 2010 at 6:51 PM, Matthias Bolte
> <matthias.bolte(a)googlemail.com> wrote:
>> 2010/3/13 Anthony Lannuzel <lannuzel(a)gmail.com>:
>>> Hi,
>>>
>>> I'm having an issue when trying to start a (previously created) vbox
>>> guest: virsh gets stuck in virDomainCreate and I see no error
>>> message (LIBVIRT_DEBUG=1).
>>> Is there any way to get more information on what is happening?
>>>
>>> Moreover, now that virsh has stalled, running "connect
>>> vbox:///session" on another virsh instance gets an error from the
>>> "secret driver". Is this normal? Am I only supposed to have one
>>> connection at a time?
>>>
>>> Thanks
>>> Regards
>>>
>>> Anthony
>>>
>>
>> I tested starting a vbox guest using virsh and it works as expected, and
>> I can use multiple connections at the same time, also as expected.
>>
>> What libvirt and vbox versions do you use?
>>
>> Can you give some more details about what you did to trigger this problem?
>>
>> Matthias
>>
>
> Hi,
>
> I'm using virsh 0.7.6 with VirtualBox 3.1.4 on a 2.6.24 kernel with
> openVZ, unionfs and squashfs patches.
>
> Here is what I do:
>
> The domain is created with virDomainDefineXML. Here is the XML file content:
>
> <domain type="vbox">
> <name>c7c0ab9f-9a19-4a45-92a1-799367789509</name>
> <uuid>c7c0ab9f-9a19-4a45-92a1-799367789509</uuid>
> <vcpu>1</vcpu>
> <memory>196608</memory>
> <on_poweroff>destroy</on_poweroff>
> <on_reboot>destroy</on_reboot>
> <on_crash>destroy</on_crash>
> <os>
> <type>hvm</type>
> </os>
> <features>
> <pae></pae>
> <acpi></acpi>
> <apic></apic>
> </features>
> <devices>
> <disk device="disk" type="file">
> <source file="/data/c7c0ab9f-9a19-4a45-92a1-799367789509/ubuntuDesktop-8.04.vdi"/>
> <target bus="ide" dev="hda"/>
> </disk>
> <interface type="bridge">
> <source bridge="hnsTap0_tap"/>
> <mac address="00:22:94:ae:a1:86"/>
> <model type="82543gc"/>
> </interface>
> <graphics port="12000" type="rdp"/>
> </devices>
> </domain>
>
>
>
> Then I try to start it with virsh:
>
> root@hynesim-live:/home/hynesim# virsh
> 10:09:06.685: debug : virInitialize:336 : register drivers
> 10:09:06.685: debug : virRegisterDriver:837 : registering Test as driver 0
> 10:09:06.685: debug : virRegisterNetworkDriver:675 : registering Test
> as network driver 0
> 10:09:06.685: debug : virRegisterInterfaceDriver:706 : registering
> Test as interface driver 0
> 10:09:06.685: debug : virRegisterStorageDriver:737 : registering Test
> as storage driver 0
> 10:09:06.685: debug : virRegisterDeviceMonitor:768 : registering Test
> as device driver 0
> 10:09:06.685: debug : virRegisterSecretDriver:799 : registering Test
> as secret driver 0
> 10:09:06.685: debug : virRegisterDriver:837 : registering OPENVZ as driver 1
> 10:09:06.709: debug : vboxRegister:87 : VBoxCGlueInit found API
> version: 3.1.4 (3001004)
> 10:09:06.718: debug : vboxRegister:104 : VirtualBox API version: 3.1
> 10:09:06.718: debug : virRegisterDriver:837 : registering VBOX as driver 2
> 10:09:06.718: debug : virRegisterNetworkDriver:675 : registering VBOX
> as network driver 1
> 10:09:06.718: debug : virRegisterStorageDriver:737 : registering VBOX
> as storage driver 1
> 10:09:06.718: debug : virRegisterDriver:837 : registering remote as driver 3
> 10:09:06.718: debug : virRegisterNetworkDriver:675 : registering
> remote as network driver 2
> 10:09:06.719: debug : virRegisterInterfaceDriver:706 : registering
> remote as interface driver 1
> 10:09:06.719: debug : virRegisterStorageDriver:737 : registering
> remote as storage driver 2
> 10:09:06.719: debug : virRegisterDeviceMonitor:768 : registering
> remote as device driver 1
> 10:09:06.719: debug : virRegisterSecretDriver:799 : registering remote
> as secret driver 1
> 10:09:06.719: debug : virConnectOpenAuth:1355 : name=(null),
> auth=0x2b78d7561c80, flags=0
> 10:09:06.719: debug : do_open:1112 : no name, allowing driver auto-select
> 10:09:06.719: debug : do_open:1120 : trying driver 0 (Test) ...
> 10:09:06.719: debug : do_open:1126 : driver 0 Test returned DECLINED
> 10:09:06.719: debug : do_open:1120 : trying driver 1 (OPENVZ) ...
> 10:09:06.759: debug : virExecWithHook:637 : LC_ALL=C /usr/sbin/vzctl --help
> 10:09:06.771: debug : do_open:1126 : driver 1 OPENVZ returned SUCCESS
> 10:09:06.771: debug : do_open:1146 : network driver 0 Test returned DECLINED
> 10:09:06.771: debug : do_open:1146 : network driver 1 VBOX returned DECLINED
> 10:09:06.772: debug : doRemoteOpen:564 : proceeding with name = openvz:///system
> 10:09:06.774: debug : remoteIO:8429 : Do proc=66 serial=0 length=28 wait=(nil)
> 10:09:06.774: debug : remoteIO:8491 : We have the buck 66
> 0x2b78dbcf2010 0x2b78dbcf2010
> 10:09:06.776: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 64 total (60 more)
> 10:09:06.776: debug : remoteIOEventLoop:8355 : Giving up the buck 66
> 0x2b78dbcf2010 (nil)
> 10:09:06.776: debug : remoteIO:8522 : All done with our call 66 (nil)
> 0x2b78dbcf2010
> 10:09:06.776: debug : remoteIO:8429 : Do proc=1 serial=1 length=56 wait=(nil)
> 10:09:06.776: debug : remoteIO:8491 : We have the buck 1 0x63fe50 0x63fe50
> 10:09:06.848: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:06.848: debug : remoteIOEventLoop:8355 : Giving up the buck 1
> 0x63fe50 (nil)
> 10:09:06.848: debug : remoteIO:8522 : All done with our call 1 (nil) 0x63fe50
> 10:09:06.848: debug : doRemoteOpen:891 : Adding Handler for remote events
> 10:09:06.848: debug : doRemoteOpen:898 : virEventAddHandle failed: No
> addHandleImpl defined. continuing without events.
> 10:09:06.848: debug : do_open:1146 : network driver 2 remote returned SUCCESS
> 10:09:06.848: debug : do_open:1165 : interface driver 0 Test returned DECLINED
> 10:09:06.849: debug : doRemoteOpen:564 : proceeding with name = openvz:///system
> 10:09:06.850: debug : remoteIO:8429 : Do proc=66 serial=0 length=28 wait=(nil)
> 10:09:06.850: debug : remoteIO:8491 : We have the buck 66 0x67ff00 0x67ff00
> 10:09:06.852: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 64 total (60 more)
> 10:09:06.852: debug : remoteIOEventLoop:8355 : Giving up the buck 66
> 0x67ff00 (nil)
> 10:09:06.852: debug : remoteIO:8522 : All done with our call 66 (nil) 0x67ff00
> 10:09:06.853: debug : remoteIO:8429 : Do proc=1 serial=1 length=56 wait=(nil)
> 10:09:06.853: debug : remoteIO:8491 : We have the buck 1 0x67ff00 0x67ff00
> 10:09:06.901: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:06.901: debug : remoteIOEventLoop:8355 : Giving up the buck 1
> 0x67ff00 (nil)
> 10:09:06.901: debug : remoteIO:8522 : All done with our call 1 (nil) 0x67ff00
> 10:09:06.901: debug : doRemoteOpen:891 : Adding Handler for remote events
> 10:09:06.901: debug : doRemoteOpen:898 : virEventAddHandle failed: No
> addHandleImpl defined. continuing without events.
> 10:09:06.901: debug : do_open:1165 : interface driver 1 remote returned SUCCESS
> 10:09:06.901: debug : do_open:1185 : storage driver 0 Test returned DECLINED
> 10:09:06.901: debug : do_open:1185 : storage driver 1 VBOX returned DECLINED
> 10:09:06.901: debug : do_open:1185 : storage driver 2 remote returned SUCCESS
> 10:09:06.901: debug : do_open:1205 : node driver 0 Test returned DECLINED
> 10:09:06.901: debug : do_open:1205 : node driver 1 remote returned SUCCESS
> 10:09:06.901: debug : do_open:1232 : secret driver 0 Test returned DECLINED
> 10:09:06.901: debug : do_open:1232 : secret driver 1 remote returned SUCCESS
> Bienvenue dans virsh, le terminal de virtualisation interactif.
>
> Taper : « help » pour l'aide ou « help » avec la commande
> « quit » pour quitter
>
> virsh # connect vbox:///session
> 10:09:13.993: debug : virConnectClose:1381 : conn=0x63b060
> 10:09:13.994: debug : virUnrefConnect:259 : unref connection 0x63b060 1
> 10:09:13.994: debug : remoteIO:8429 : Do proc=2 serial=2 length=28 wait=(nil)
> 10:09:13.994: debug : remoteIO:8491 : We have the buck 2 0x680170 0x680170
> 10:09:13.995: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:13.995: debug : remoteIOEventLoop:8355 : Giving up the buck 2
> 0x680170 (nil)
> 10:09:13.995: debug : remoteIO:8522 : All done with our call 2 (nil) 0x680170
> 10:09:13.996: debug : remoteIO:8429 : Do proc=2 serial=2 length=28 wait=(nil)
> 10:09:13.996: debug : remoteIO:8491 : We have the buck 2 0x680170 0x680170
> 10:09:13.997: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:13.997: debug : remoteIOEventLoop:8355 : Giving up the buck 2
> 0x680170 (nil)
> 10:09:13.997: debug : remoteIO:8522 : All done with our call 2 (nil) 0x680170
> 10:09:13.997: debug : virDomainObjUnref:680 : obj=0x63ebf0 refs=0
> 10:09:13.997: debug : virDomainObjFree:658 : obj=0x63ebf0
> 10:09:13.997: debug : virReleaseConnect:216 : release connection 0x63b060
> 10:09:13.997: debug : virConnectOpenAuth:1355 : name=vbox:///session,
> auth=0x2b78d7561c80, flags=0
> 10:09:13.997: debug : do_open:1110 : name "vbox:///session" to URI components:
> scheme vbox
> opaque (null)
> authority (null)
> server (null)
> user (null)
> port 0
> path /session
>
> 10:09:13.997: debug : do_open:1120 : trying driver 0 (Test) ...
> 10:09:13.997: debug : do_open:1126 : driver 0 Test returned DECLINED
> 10:09:13.997: debug : do_open:1120 : trying driver 1 (OPENVZ) ...
> 10:09:13.997: debug : do_open:1126 : driver 1 OPENVZ returned DECLINED
> 10:09:13.997: debug : do_open:1120 : trying driver 2 (VBOX) ...
> 10:09:14.021: debug : vboxOpen:836 : in vboxOpen
> 10:09:14.021: debug : do_open:1126 : driver 2 VBOX returned SUCCESS
> 10:09:14.021: debug : do_open:1146 : network driver 0 Test returned DECLINED
> 10:09:14.021: debug : vboxNetworkOpen:5434 : network initialized
> 10:09:14.021: debug : do_open:1146 : network driver 1 VBOX returned SUCCESS
> 10:09:14.022: debug : do_open:1165 : interface driver 0 Test returned DECLINED
> 10:09:14.022: debug : doRemoteOpen:564 : proceeding with name = vbox:///session
> 10:09:14.024: debug : remoteIO:8429 : Do proc=66 serial=0 length=28 wait=(nil)
> 10:09:14.024: debug : remoteIO:8491 : We have the buck 66 0x6eaa90 0x6eaa90
> 10:09:14.025: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 64 total (60 more)
> 10:09:14.025: debug : remoteIOEventLoop:8355 : Giving up the buck 66
> 0x6eaa90 (nil)
> 10:09:14.025: debug : remoteIO:8522 : All done with our call 66 (nil) 0x6eaa90
> 10:09:14.026: debug : remoteIO:8429 : Do proc=1 serial=1 length=56 wait=(nil)
> 10:09:14.026: debug : remoteIO:8491 : We have the buck 1 0x6eaa90 0x6eaa90
> 10:09:14.027: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:14.027: debug : remoteIOEventLoop:8355 : Giving up the buck 1
> 0x6eaa90 (nil)
> 10:09:14.027: debug : remoteIO:8522 : All done with our call 1 (nil) 0x6eaa90
> 10:09:14.027: debug : doRemoteOpen:891 : Adding Handler for remote events
> 10:09:14.027: debug : doRemoteOpen:898 : virEventAddHandle failed: No
> addHandleImpl defined. continuing without events.
> 10:09:14.027: debug : do_open:1165 : interface driver 1 remote returned SUCCESS
> 10:09:14.027: debug : do_open:1185 : storage driver 0 Test returned DECLINED
> 10:09:14.027: debug : vboxStorageOpen:6186 : vbox storage initialized
> 10:09:14.027: debug : do_open:1185 : storage driver 1 VBOX returned SUCCESS
> 10:09:14.028: debug : do_open:1205 : node driver 0 Test returned DECLINED
> 10:09:14.028: debug : doRemoteOpen:564 : proceeding with name = vbox:///session
> 10:09:14.030: debug : remoteIO:8429 : Do proc=66 serial=0 length=28 wait=(nil)
> 10:09:14.037: debug : remoteIO:8491 : We have the buck 66 0x72ab40 0x72ab40
> 10:09:14.038: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 64 total (60 more)
> 10:09:14.038: debug : remoteIOEventLoop:8355 : Giving up the buck 66
> 0x72ab40 (nil)
> 10:09:14.038: debug : remoteIO:8522 : All done with our call 66 (nil) 0x72ab40
> 10:09:14.038: debug : remoteIO:8429 : Do proc=1 serial=1 length=56 wait=(nil)
> 10:09:14.039: debug : remoteIO:8491 : We have the buck 1 0x72ab40 0x72ab40
> 10:09:14.040: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:14.040: debug : remoteIOEventLoop:8355 : Giving up the buck 1
> 0x72ab40 (nil)
> 10:09:14.040: debug : remoteIO:8522 : All done with our call 1 (nil) 0x72ab40
> 10:09:14.040: debug : doRemoteOpen:891 : Adding Handler for remote events
> 10:09:14.040: debug : doRemoteOpen:898 : virEventAddHandle failed: No
> addHandleImpl defined. continuing without events.
> 10:09:14.040: debug : do_open:1205 : node driver 1 remote returned SUCCESS
> 10:09:14.040: debug : do_open:1232 : secret driver 0 Test returned DECLINED
> 10:09:14.040: debug : doRemoteOpen:564 : proceeding with name = vbox:///session
> 10:09:14.041: debug : remoteIO:8429 : Do proc=66 serial=0 length=28 wait=(nil)
> 10:09:14.041: debug : remoteIO:8491 : We have the buck 66 0x76abf0 0x76abf0
> 10:09:14.042: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 64 total (60 more)
> 10:09:14.042: debug : remoteIOEventLoop:8355 : Giving up the buck 66
> 0x76abf0 (nil)
> 10:09:14.042: debug : remoteIO:8522 : All done with our call 66 (nil) 0x76abf0
> 10:09:14.043: debug : remoteIO:8429 : Do proc=1 serial=1 length=56 wait=(nil)
> 10:09:14.043: debug : remoteIO:8491 : We have the buck 1 0x76abf0 0x76abf0
> 10:09:14.044: debug : remoteIODecodeMessageLength:7913 : Got length,
> now need 56 total (52 more)
> 10:09:14.044: debug : remoteIOEventLoop:8355 : Giving up the buck 1
> 0x76abf0 (nil)
> 10:09:14.044: debug : remoteIO:8522 : All done with our call 1 (nil) 0x76abf0
> 10:09:14.044: debug : doRemoteOpen:891 : Adding Handler for remote events
> 10:09:14.044: debug : doRemoteOpen:898 : virEventAddHandle failed: No
> addHandleImpl defined. continuing without events.
> 10:09:14.044: debug : do_open:1232 : secret driver 1 remote returned SUCCESS
>
> virsh # list --all
> 10:09:19.563: debug : virConnectNumOfDomains:1749 : conn=0x63ebf0
> 10:09:19.568: debug : virConnectNumOfDefinedDomains:4609 : conn=0x63ebf0
> 10:09:19.568: debug : virConnectListDefinedDomains:4648 :
> conn=0x63ebf0, names=0x692430, maxnames=1
> ID Nom État
> ----------------------------------
> 10:09:19.570: debug : virDomainLookupByName:2010 : conn=0x63ebf0,
> name=c7c0ab9f-9a19-4a45-92a1-799367789509
> 10:09:19.571: debug : virGetDomain:345 : New hash entry 0x6a6000
> 10:09:19.571: debug : virDomainGetInfo:2867 : domain=0x6a6000,
> info=0x7fffd3a519e0
> - c7c0ab9f-9a19-4a45-92a1-799367789509 closed
> 10:09:19.573: debug : virDomainFree:2098 : domain=0x6a6000
> 10:09:19.573: debug : virUnrefDomain:422 : unref domain 0x6a6000
> c7c0ab9f-9a19-4a45-92a1-799367789509 1
> 10:09:19.573: debug : virReleaseDomain:376 : release domain 0x6a6000
> c7c0ab9f-9a19-4a45-92a1-799367789509
> 10:09:19.573: debug : virReleaseDomain:392 : unref connection 0x63ebf0 2
>
> virsh # start c7c0ab9f-9a19-4a45-92a1-799367789509
> 10:09:26.108: debug : virDomainLookupByName:2010 : conn=0x63ebf0,
> name=c7c0ab9f-9a19-4a45-92a1-799367789509
> 10:09:26.112: debug : virGetDomain:345 : New hash entry 0x68bd70
> 10:09:26.112: debug : virDomainGetID:2658 : domain=0x68bd70
> 10:09:26.112: debug : virDomainCreate:4690 : domain=0x68bd70
> 10:09:26.112: debug : virDomainCreate:4693 : 1 0x68bd70
> 10:09:26.112: debug : virDomainCreate:4700 : 2 0x68bd70
> 10:09:26.112: debug : virDomainCreate:4707 : 3 0x68bd70
> 10:09:26.112: debug : virDomainCreate:4710 : 3.1 0x68bd70
> 10:09:26.223: debug : vboxDomainCreate:3321 : before close 0x68bd70
>
>
> I have added some debug lines, that lead to:
>
> in libvirt.c:
> function int virDomainCreate(virDomainPtr domain)
> ret = conn->driver->domainCreate (domain); // does not return
>
> in vbox/vbox_tmpl.c:
> function static int vboxDomainCreate(virDomainPtr dom)
> data->vboxSession->vtbl->Close(data->vboxSession);
> // does not return
>
> Thanks
>
At that point the domain should already be running. I wonder why
closing the session handle would block.
You could try variations of the domain XML config, assuming that a
specific part of the config triggers this problem. For example, use your
posted domain XML config but leave out the graphics part, the
interface part, or the disk part, to see if that makes a difference.
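That bisection can be automated; a rough Python sketch that emits one XML variant per removed device (the trimmed-down domain XML is illustrative, and nothing here is libvirt-specific):

```python
import xml.etree.ElementTree as ET

DOMAIN_XML = """<domain type="vbox">
  <name>test</name>
  <devices>
    <disk device="disk" type="file"/>
    <interface type="bridge"/>
    <graphics port="12000" type="rdp"/>
  </devices>
</domain>"""

def variants_without_one_device(xml_text):
    """Yield (removed_tag, xml) pairs, each missing one <devices> child."""
    n_devices = len(ET.fromstring(xml_text).find("devices"))
    for i in range(n_devices):
        root = ET.fromstring(xml_text)   # fresh copy per variant
        devices = root.find("devices")
        removed = devices[i]
        devices.remove(removed)
        yield removed.tag, ET.tostring(root, encoding="unicode")

# Each variant could then be fed to virDomainDefineXML and started in turn.
for tag, xml in variants_without_one_device(DOMAIN_XML):
    print("variant without <%s>" % tag)
```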
Matthias
[libvirt] Userdata for libvirt?
by Yushu Yao
Hi Experts,
Forgive me if this is not clear enough; I will try to describe what I need.
What I need: start a VM on a remote libvirtd server based on an existing image (which might or might not exist on the server), plus some userdata.
It is pretty much the same as EC2 or Eucalyptus, but we need to run on our private cluster with only libvirtd. (Eucalyptus doesn't support using physical disks in VMs, which is what we need.)
I went through the Remote libvirt control section of the documentation, but couldn't find a way to pass userdata to the started VM.
Does libvirtd already include such a feature, or do I need to build my own?
Thanks
-Yushu
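A sketch of the usual workaround (an assumption about the setup, not a libvirtd feature): pack the userdata into a small image, attach it to the domain as an extra disk, and have the guest read it at boot. In Python:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def make_userdata_disk_xml(userdata, directory):
    """Write userdata to a raw file and return a <disk> element for it.

    A real setup would build a proper filesystem or ISO image; a bare
    file is enough to show the XML plumbing.
    """
    path = os.path.join(directory, "userdata.img")
    with open(path, "w") as f:
        f.write(userdata)
    disk = ET.Element("disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=path)
    ET.SubElement(disk, "target", dev="hdb", bus="ide")
    return ET.tostring(disk, encoding="unicode")

with tempfile.TemporaryDirectory() as d:
    print(make_userdata_disk_xml("key=value\n", d))
```

The returned `<disk>` fragment would be spliced into the domain XML before defining it.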
[libvirt] Just pushed a few small fixes
by Cole Robinson
Hi all,
I've just pushed three small fixes:
$ git log -3 --pretty=short
commit 2ef091efcc5cd02bbd496972b141cf253f713fdd
Author: Philip Hahn <hahn(a)univention.de>
python: Fix networkLookupByUUID
commit 0ef58c315506d094a0a4d64f022119a04a1916c0
Author: Cole Robinson <crobinso(a)redhat.com>
.gitignore: Ignore generated daemon/libvirtd.logrotate
commit 89d8cdfc7e0ba7b2d9cf16a34b8e4a035c851d28
Author: Cole Robinson <crobinso(a)redhat.com>
Fix make dist with XenAPI changes
The last chronological change was from:
https://bugzilla.redhat.com/show_bug.cgi?id=574366
Apparently networkLookupByUUID has always been busted in the python
bindings, but since we don't use it in virtinst/virt-manager, it wasn't
noticed.
- Cole
[libvirt] disk XML for error policy
by Dave Allan
Hi Dan,
What do you think of this for the disk XML specifying what to do on error:
<disk type='file' device='disk'>
<source file='/storage/guest_disks/libgdu'/>
<target dev='vda' bus='virtio'/>
<on_err rerror='stop' werror='stop'/>
</disk>
Dave
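For what it's worth, the proposed element is easy to consume; a Python sketch of how a parser might pick up the two attributes (the 'report' default is my assumption, not part of the proposal):

```python
import xml.etree.ElementTree as ET

DISK_XML = """<disk type='file' device='disk'>
  <source file='/storage/guest_disks/libgdu'/>
  <target dev='vda' bus='virtio'/>
  <on_err rerror='stop' werror='stop'/>
</disk>"""

def parse_error_policy(disk_xml):
    """Return (rerror, werror), defaulting to 'report' when absent."""
    on_err = ET.fromstring(disk_xml).find("on_err")
    if on_err is None:
        return "report", "report"
    return (on_err.get("rerror", "report"),
            on_err.get("werror", "report"))

print(parse_error_policy(DISK_XML))
```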
[libvirt] [PATCH] do not require two ./autogen.sh runs to permit "make"
by Jim Meyering
I tracked down the source of the two-autogen.sh-runs-required bug.
Here's the fix:
From 0583de2de2038d4f8875268391fd4994329d39bf Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Tue, 16 Mar 2010 21:08:31 +0100
Subject: [PATCH] do not require two ./autogen.sh runs to permit "make"
* autogen.sh (bootstrap_hash): New function.
Running bootstrap may update the gnulib SHA1, yet we were computing
t=$(git submodule status ...) *prior* to running bootstrap, and
then recording that sometimes-stale value in the stamp file upon
a successful bootstrap run. That would require two (lengthy!)
bootstrap runs to update the stamp file.
---
autogen.sh | 21 ++++++++++++++-------
1 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/autogen.sh b/autogen.sh
index ff94678..b93cdba 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -62,20 +62,27 @@ else
fi
fi
+# Compute the hash we'll use to determine whether rerunning bootstrap
+# is required. The first is just the SHA1 that selects a gnulib snapshot.
+# The second ensures that whenever we change the set of gnulib modules used
+# by this package, we rerun bootstrap to pull in the matching set of files.
+bootstrap_hash()
+{
+ git submodule status | sed 's/^[ +-]//;s/ .*//'
+ git hash-object bootstrap.conf
+}
+
# Ensure that whenever we pull in a gnulib update or otherwise change to a
# different version (i.e., when switching branches), we also rerun ./bootstrap.
curr_status=.git-module-status
-t=$(git submodule status|sed 's/^[ +-]//;s/ .*//'; \
- git hash-object bootstrap.conf)
+t=$(bootstrap_hash)
if test "$t" = "$(cat $curr_status 2>/dev/null)"; then
: # good, it's up to date, all we need is autoreconf
autoreconf -if
else
- echo running bootstrap...
- ./bootstrap && echo "$t" > $curr_status || {
- echo "Failed to bootstrap gnulib, please investigate."
- exit 1;
- }
+ echo running bootstrap...
+ ./bootstrap && bootstrap_hash > $curr_status \
+ || { echo "Failed to bootstrap gnulib, please investigate."; exit 1; }
fi
cd "$THEDIR"
--
1.7.0.1
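The bug and the fix reduce to a general stamp-file rule: record the hash computed after the expensive step, not before. A Python sketch of both variants (the hash and bootstrap functions are simplified stand-ins for bootstrap_hash and ./bootstrap, not the real scripts):

```python
import hashlib

state = {"submodule": "aaa"}   # stands in for the gnulib checkout

def bootstrap_hash():
    """Stands in for `git submodule status` + `git hash-object bootstrap.conf`."""
    return hashlib.sha1(state["submodule"].encode()).hexdigest()

def bootstrap():
    """Running bootstrap may update the submodule SHA1 (the crux of the bug)."""
    state["submodule"] = "bbb"

def run_buggy():
    t = bootstrap_hash()       # computed BEFORE bootstrap: may go stale
    bootstrap()
    return t                   # stale value would be written to the stamp file

def run_fixed():
    bootstrap()
    return bootstrap_hash()    # recomputed AFTER bootstrap: always fresh

stale = run_buggy()
state["submodule"] = "aaa"     # reset for a fair comparison
fresh = run_fixed()
print(stale == bootstrap_hash(), fresh == bootstrap_hash())
```

With the buggy variant, the recorded stamp no longer matches the post-bootstrap state, so the next run needlessly bootstraps again.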
[libvirt] [PATCH] Allow suspend during live migration
by Jiri Denemark
Currently no command can be sent to a qemu process while another job is
active. This patch adds support for signaling long-running jobs (such as
migration) so that other threads may request predefined operations to be
done during such jobs. Two signals are defined so far:
- QEMU_JOB_SIGNAL_CANCEL
- QEMU_JOB_SIGNAL_SUSPEND
The first one is used by qemuDomainAbortJob.
The second one is used by qemudDomainSuspend for suspending a domain
during migration, which allows for changing live migration into offline
migration. However, there is a small issue in the way qemudDomainSuspend
is currently implemented for migrating domains. The API calls returns
immediately after signaling migration job which means it is asynchronous
in this specific case.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_driver.c | 149 ++++++++++++++++++++++++++++++++---------------
1 files changed, 101 insertions(+), 48 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f8ab545..5b2c26a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -87,14 +87,26 @@
#define VIR_FROM_THIS VIR_FROM_QEMU
+/* Only 1 job is allowed at any time
+ * A job includes *all* monitor commands, even those just querying
+ * information, not merely actions */
+enum qemuDomainJob {
+ QEMU_JOB_NONE = 0, /* Always set to 0 for easy if (jobActive) conditions */
+ QEMU_JOB_UNSPECIFIED,
+ QEMU_JOB_MIGRATION,
+};
+
+enum qemuDomainJobSignals {
+ QEMU_JOB_SIGNAL_CANCEL = 1 << 0, /* Request job cancellation */
+ QEMU_JOB_SIGNAL_SUSPEND = 1 << 1, /* Request VM suspend to finish live migration offline */
+};
+
typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate;
typedef qemuDomainObjPrivate *qemuDomainObjPrivatePtr;
struct _qemuDomainObjPrivate {
virCond jobCond; /* Use in conjunction with main virDomainObjPtr lock */
- unsigned int jobActive : 1; /* Non-zero if a job is active. Only 1 job is allowed at any time
- * A job includes *all* monitor commands, even those just querying
- * information, not merely actions */
- unsigned int jobCancel : 1; /* Non-zero if a cancel request from client has arrived */
+ enum qemuDomainJob jobActive; /* Currently running job */
+ unsigned int jobSignals; /* Signals for running job */
virDomainJobInfo jobInfo;
unsigned long long jobStart;
@@ -338,8 +350,8 @@ static int qemuDomainObjBeginJob(virDomainObjPtr obj)
return -1;
}
}
- priv->jobActive = 1;
- priv->jobCancel = 0;
+ priv->jobActive = QEMU_JOB_UNSPECIFIED;
+ priv->jobSignals = 0;
priv->jobStart = (now.tv_sec * 1000ull) + (now.tv_usec / 1000);
memset(&priv->jobInfo, 0, sizeof(priv->jobInfo));
@@ -385,8 +397,8 @@ static int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
return -1;
}
}
- priv->jobActive = 1;
- priv->jobCancel = 0;
+ priv->jobActive = QEMU_JOB_UNSPECIFIED;
+ priv->jobSignals = 0;
priv->jobStart = (now.tv_sec * 1000ull) + (now.tv_usec / 1000);
memset(&priv->jobInfo, 0, sizeof(priv->jobInfo));
@@ -410,8 +422,8 @@ static int ATTRIBUTE_RETURN_CHECK qemuDomainObjEndJob(virDomainObjPtr obj)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
- priv->jobActive = 0;
- priv->jobCancel = 0;
+ priv->jobActive = QEMU_JOB_NONE;
+ priv->jobSignals = 0;
priv->jobStart = 0;
memset(&priv->jobInfo, 0, sizeof(priv->jobInfo));
virCondSignal(&priv->jobCond);
@@ -3560,6 +3572,7 @@ static int qemudDomainSuspend(virDomainPtr dom) {
virDomainObjPtr vm;
int ret = -1;
virDomainEventPtr event = NULL;
+ qemuDomainObjPrivatePtr priv;
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -3571,30 +3584,48 @@ static int qemudDomainSuspend(virDomainPtr dom) {
_("no domain with matching uuid '%s'"), uuidstr);
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
- goto cleanup;
-
if (!virDomainObjIsActive(vm)) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
- goto endjob;
+ goto cleanup;
}
- if (vm->state != VIR_DOMAIN_PAUSED) {
- qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorStopCPUs(priv->mon) < 0) {
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+
+ priv = vm->privateData;
+
+ if (priv->jobActive == QEMU_JOB_MIGRATION) {
+ if (vm->state != VIR_DOMAIN_PAUSED) {
+ VIR_DEBUG("Requesting domain pause on %s",
+ vm->def->name);
+ priv->jobSignals |= QEMU_JOB_SIGNAL_SUSPEND;
+ }
+ ret = 0;
+ goto cleanup;
+ } else {
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
+ if (!virDomainObjIsActive(vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
goto endjob;
}
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- vm->state = VIR_DOMAIN_PAUSED;
- event = virDomainEventNewFromObj(vm,
- VIR_DOMAIN_EVENT_SUSPENDED,
- VIR_DOMAIN_EVENT_SUSPENDED_PAUSED);
+ if (vm->state != VIR_DOMAIN_PAUSED) {
+ int rc;
+
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ rc = qemuMonitorStopCPUs(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ if (rc < 0)
+ goto endjob;
+ vm->state = VIR_DOMAIN_PAUSED;
+ event = virDomainEventNewFromObj(vm,
+ VIR_DOMAIN_EVENT_SUSPENDED,
+ VIR_DOMAIN_EVENT_SUSPENDED_PAUSED);
+ }
+ if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
+ goto endjob;
+ ret = 0;
}
- if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
- goto endjob;
- ret = 0;
endjob:
if (qemuDomainObjEndJob(vm) == 0)
@@ -3946,6 +3977,35 @@ cleanup:
}
+/** qemuDomainMigrateOffline:
+ * Pause domain for non-live migration.
+ */
+static int
+qemuDomainMigrateOffline(struct qemud_driver *driver,
+ virDomainObjPtr vm)
+{
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ int ret;
+
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ret = qemuMonitorStopCPUs(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+
+ if (ret == 0) {
+ virDomainEventPtr event;
+
+ vm->state = VIR_DOMAIN_PAUSED;
+ event = virDomainEventNewFromObj(vm,
+ VIR_DOMAIN_EVENT_SUSPENDED,
+ VIR_DOMAIN_EVENT_SUSPENDED_MIGRATED);
+ if (event)
+ qemuDomainEventQueue(driver, event);
+ }
+
+ return ret;
+}
+
+
static int
qemuDomainWaitForMigrationComplete(struct qemud_driver *driver, virDomainObjPtr vm)
{
@@ -3964,8 +4024,8 @@ qemuDomainWaitForMigrationComplete(struct qemud_driver *driver, virDomainObjPtr
struct timeval now;
int rc;
- if (priv->jobCancel) {
- priv->jobCancel = 0;
+ if (priv->jobSignals & QEMU_JOB_SIGNAL_CANCEL) {
+ priv->jobSignals ^= QEMU_JOB_SIGNAL_CANCEL;
VIR_DEBUG0("Cancelling migration at client request");
qemuDomainObjEnterMonitorWithDriver(driver, vm);
rc = qemuMonitorMigrateCancel(priv->mon);
@@ -3973,6 +4033,11 @@ qemuDomainWaitForMigrationComplete(struct qemud_driver *driver, virDomainObjPtr
if (rc < 0) {
VIR_WARN0("Unable to cancel migration");
}
+ } else if (priv->jobSignals & QEMU_JOB_SIGNAL_SUSPEND) {
+ priv->jobSignals ^= QEMU_JOB_SIGNAL_SUSPEND;
+ VIR_DEBUG0("Pausing domain for non-live migration");
+ if (qemuDomainMigrateOffline(driver, vm) < 0)
+ VIR_WARN0("Unable to pause domain");
}
qemuDomainObjEnterMonitorWithDriver(driver, vm);
@@ -8979,7 +9044,7 @@ qemudDomainMigratePerform (virDomainPtr dom,
virDomainObjPtr vm;
virDomainEventPtr event = NULL;
int ret = -1;
- int paused = 0;
+ int resume = 0;
qemuDomainObjPrivatePtr priv;
qemuDriverLock(driver);
@@ -8995,6 +9060,7 @@ qemudDomainMigratePerform (virDomainPtr dom,
if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
goto cleanup;
+ priv->jobActive = QEMU_JOB_MIGRATION;
if (!virDomainObjIsActive(vm)) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
@@ -9005,23 +9071,10 @@ qemudDomainMigratePerform (virDomainPtr dom,
memset(&priv->jobInfo, 0, sizeof(priv->jobInfo));
priv->jobInfo.type = VIR_DOMAIN_JOB_UNBOUNDED;
+ resume = vm->state == VIR_DOMAIN_RUNNING;
if (!(flags & VIR_MIGRATE_LIVE) && vm->state == VIR_DOMAIN_RUNNING) {
- /* Pause domain for non-live migration */
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorStopCPUs(priv->mon) < 0) {
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ if (qemuDomainMigrateOffline(driver, vm) < 0)
goto endjob;
- }
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- paused = 1;
-
- vm->state = VIR_DOMAIN_PAUSED;
- event = virDomainEventNewFromObj(vm,
- VIR_DOMAIN_EVENT_SUSPENDED,
- VIR_DOMAIN_EVENT_SUSPENDED_MIGRATED);
- if (event)
- qemuDomainEventQueue(driver, event);
- event = NULL;
}
if ((flags & (VIR_MIGRATE_TUNNELLED | VIR_MIGRATE_PEER2PEER))) {
@@ -9035,7 +9088,7 @@ qemudDomainMigratePerform (virDomainPtr dom,
/* Clean up the source domain. */
qemudShutdownVMDaemon(driver, vm);
- paused = 0;
+ resume = 0;
event = virDomainEventNewFromObj(vm,
VIR_DOMAIN_EVENT_STOPPED,
@@ -9049,7 +9102,7 @@ qemudDomainMigratePerform (virDomainPtr dom,
ret = 0;
endjob:
- if (paused) {
+ if (resume && vm->state == VIR_DOMAIN_PAUSED) {
/* we got here through some sort of failure; start the domain again */
qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorStartCPUs(priv->mon, dom->conn) < 0) {
@@ -9425,7 +9478,7 @@ static int qemuDomainAbortJob(virDomainPtr dom) {
if (virDomainObjIsActive(vm)) {
if (priv->jobActive) {
VIR_DEBUG("Requesting cancellation of job on vm %s", vm->def->name);
- priv->jobCancel = 1;
+ priv->jobSignals |= QEMU_JOB_SIGNAL_CANCEL;
} else {
qemuReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("no job is active on the domain"));
--
1.7.0.2
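The signaling scheme in the patch reduces to a bitmask that one thread sets and the long-running job loop consumes. A minimal Python sketch of that protocol (the constant names mirror the patch; the Job class and its methods are invented for illustration):

```python
QEMU_JOB_SIGNAL_CANCEL  = 1 << 0   # request job cancellation
QEMU_JOB_SIGNAL_SUSPEND = 1 << 1   # request VM suspend mid-migration

class Job:
    def __init__(self):
        self.signals = 0
        self.log = []

    def request(self, sig):
        """Set a signal bit; e.g. called from qemudDomainSuspend."""
        self.signals |= sig

    def poll(self):
        """One iteration of the migration wait loop from the patch."""
        if self.signals & QEMU_JOB_SIGNAL_CANCEL:
            self.signals ^= QEMU_JOB_SIGNAL_CANCEL   # consume, as the patch does
            self.log.append("cancel")
        elif self.signals & QEMU_JOB_SIGNAL_SUSPEND:
            self.signals ^= QEMU_JOB_SIGNAL_SUSPEND
            self.log.append("suspend")

job = Job()
job.request(QEMU_JOB_SIGNAL_SUSPEND)
job.poll()
print(job.log)
```

Note the `elif` ordering: when both bits are set, cancellation is handled on one loop iteration and suspend on the next, matching the patch's wait-loop structure.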