[Libvir] PATCH: support Xen 3.0.5

I've been doing some testing with current xen-unstable (ie what will very shortly be 3.0.5) and came across a whole bunch of things which needed fixing - some expected, others not. The attached patch addresses the following issues:

- Many of the hypercalls have had their structs changed so that int64_t or 'foo *' members are always 64-bit aligned, even on 32-bit platforms. This is part of the work to allow a 32-bit Dom0/DomU to work on a 64-bit hypervisor. For the int64_t types I had to annotate with __attribute__((aligned(8))). This did not work for pointer data types, so for those I had to do a more complex hack with

      union { foo *v; int64_t pad __attribute__((aligned(8))) }

  This matches what is done in the public (BSD licensed) Xen HV header files. We already had ways to deal with v0 vs v2 hypercall structs. This change is still technically v2, just a minor revision of the domctl or sysctl interfaces. Thus I have named the extra structs v2d5 and v2s3 to indicate hypercall version 2 / domctl version 5 and hypercall version 2 / sysctl version 3 respectively.

- The 'flags' field in the getdomaininfo hypercall now has an extra flag defined, (1<<1), which was previously unused and now indicates that the guest is HVM. Thus when fetching domain state we have to mask out that flag, otherwise we'll never match the correct paused/running/blocked/etc states.

- In the xenHypervisorNumOfDomains method, under certain scenarios we retry the hypercall with a bigger memory buffer. Due to the ABI alignment changes we hit that scenario every time, and ended up allocating a multi-GB buffer :-) The fixed structs sort this out, but as a preventative measure against any future HV changes the patch breaks out of the loop at the 10,000 guest mark to avoid allocating GBs of memory.

- The unified Xen driver broke the GetVCPUs method - it was mistakenly checking for return value == 0, instead of > 0. Trivial fix.

- The method to open the XenD connection was calling xenDaemonGetVersion to test whether the connection succeeded, but then also calling xend_detect_config_version, which does pretty much the same thing. So I removed the former; we now only do the latter as a 'ping' test when opening. This removes one HTTP GET, which is a worthwhile performance boost given how horrifically slow XenD is.

- The HVM SEXPR for configuring the VNC / SDL graphics is no longer part of the (image) block. It now matches the PVFB graphics config and is an explicit (vfb) block within the (devices) block. So if xend_config_format >= 4 we use the new style config - this assumes upstream XenD is patched to increment xend_config_format from 3 to 4; I sent a patch and am confident it will be applied very shortly.

- The QEMU device model allows a user to specify multiple devices for the boot order, e.g. 'andc' to indicate 'floppy', 'network', 'cdrom', 'disk'. We assumed it was a single letter only. I now serialize this into multiple <boot dev='XXX'/> elements, ordered according to priority. The XML -> SEXPR conversion allows the same.

I've tested all this on a 32-bit Dom0 running on a 32-bit HV and a 64-bit HV, but have not tested a 64-bit Dom0 on a 64-bit HV. I'm pretty sure it'll work, but if anyone is running 64-on-64 please test this patch.

Regards, Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Hi, Dan

Thank you for submitting the patch! I am pleased to see this, since I can test on current xen-unstable. Anyway, I will test with this patch.

Thanks
Atsushi SAKAI

"Daniel P. Berrange" <berrange@redhat.com> wrote:
I've been doing some testing with current xen-unstable (ie what will very shortly be 3.0.5) and came across a whole bunch of things which needed fixing - some expected, others not expected. The attached patch addresses the following issues:
- Many of the hypercalls have their structs changed so that int64_t or 'foo *' members are always 64-bit aligned even on 32-bit platforms. This is part of the work to allow 32-bit Dom0/DomU to work on 64-bit hypervisor.
For the int64_t types I had to annotate with __attribute__((aligned(8))). This did not work for pointer data types, so for those I had to do a more complex hack with
union { foo *v; int64_t pad __attribute__((aligned(8))) }
This matches what is done in the public (BSD licensed) Xen HV header files.
We already had ways to deal with v0 vs v2 hypercall structs. This change is still technically v2, just a minor revision of the domctl or sysctl interfaces. Thus I have named the extra structs v2d5 and v2s3 to indicate hypercall version 2 / domctl version 5 and hypercall version 2 / sysctl version 3 respectively.
- The 'flags' field in the getdomaininfo hypercall now has an extra flag defined, (1<<1), which was previously unused and now indicates that the guest is HVM. Thus when fetching domain state we have to mask out that flag, otherwise we'll never match the correct paused/running/blocked/etc states.
- In the xenHypervisorNumOfDomains method, under certain scenarios we retry the hypercall with a bigger memory buffer. Due to the ABI alignment changes we hit that scenario every time, and ended up allocating a multi-GB buffer :-) The fixed structs sort this out, but as a preventative measure against any future HV changes the patch breaks out of the loop at the 10,000 guest mark to avoid allocating GBs of memory.
- The unified Xen driver broke the GetVCPUs method - it was mistakenly checking for return value == 0, instead of > 0. Trivial fix.
- The method to open the XenD connection was calling xenDaemonGetVersion to test whether the connection succeeded, but then also calling xend_detect_config_version, which does pretty much the same thing. So I removed the former; we now only do the latter as a 'ping' test when opening. This removes one HTTP GET, which is a worthwhile performance boost given how horrifically slow XenD is.
- The HVM SEXPR for configuring the VNC / SDL graphics is no longer part of the (image) block. It now matches the PVFB graphics config and is an explicit (vfb) block within the (devices) block. So if xend_config_format >= 4 we use the new style config - this assumes upstream XenD is patched to increment xend_config_format from 3 to 4; I sent a patch and am confident it will be applied very shortly.
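For concreteness, the shape of that change is roughly the following. This is an illustrative sketch only - the exact SEXPR keys shown here are assumptions, not copied from XenD:

```
; old style (xend_config_format < 4): graphics keys live inside (image)
(image (hvm (device_model ...) (vnc 1)))

; new style (xend_config_format >= 4): an explicit (vfb) device block
(device (vfb (type vnc)))
```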
- The QEMU device model allows a user to specify multiple devices for the boot order, e.g. 'andc' to indicate 'floppy', 'network', 'cdrom', 'disk'. We assumed it was a single letter only. I now serialize this into multiple <boot dev='XXX'/> elements, ordered according to priority. The XML -> SEXPR conversion allows the same.
I've tested all this on a 32-bit Dom0 running on a 32-bit HV and a 64-bit HV, but have not tested a 64-bit Dom0 on a 64-bit HV. I'm pretty sure it'll work, but if anyone is running 64-on-64 please test this patch.
Regards, Dan.

On Thu, Apr 12, 2007 at 10:56:32AM +0900, Atsushi SAKAI wrote:
Hi, Dan
Thank you for submitting the patch! I am pleased to see this, since I can test on current xen-unstable. Anyway, I will test with this patch.
Great. FYI - we're actively working on trying to get Fedora 7 rawhide updated to xen-unstable, with the plan to have Fedora 7 GA on Xen 3.0.5. The timing is really tight, but I'm hopeful we'll manage it. Xen 3.0.4 is proving very unstable, particularly in its userspace, so we really want 3.0.5 if possible.

Dan.

A Man Without A Country Daniel P. Berrange wrote:
I've been doing some testing with current xen-unstable (ie what will very shortly be 3.0.5) and came across a whole bunch of things which needed fixing - some expected, others not expected. The attached patch addresses the following issues:
As a general question about policy, are we going to support every rev of the HV interface? If Xen changes the interface again before 3.0.5, will we support both this "pre-3.0.5" interface and the final one?
- Many of the hypercalls have their structs changed so that int64_t or 'foo *' members are always 64-bit aligned even on 32-bit platforms. This is part of the work to allow 32-bit Dom0/DomU to work on 64-bit hypervisor.
For the int64_t types I had to annotate with __attribute__((aligned(8))). This did not work for pointer data types, so for those I had to do a more complex hack with
union { foo *v; int64_t pad __attribute__((aligned(8))) }
What is the problem here? I did some tests and it seems to work just fine:

/* Test alignment of pointers.
 * Richard Jones <rjones~at~redhat~dot~com>
 */
#include <stdio.h>

#undef offsetof
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

struct s {
    char c;
    //int v;
    //int v __attribute__((aligned(16)));
    int *v __attribute__((aligned(16)));
    //struct s *v __attribute__((aligned(16)));
};

int main ()
{
    printf ("offset = %d\n", offsetof (struct s, v));
    return 0;
}

$ gcc -Wall -Werror align.c -o align
$ ./align
offset = 16
$ arch
i686
$ gcc --version
gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
Copyright (C) 2003 Free Software Foundation, Inc.
[etc]
- The unified Xen driver broke the GetVCPUs method - it was mistakenly checking for return value == 0, instead of > 0. Trivial fix.
Ooops.
I've tested all this on a 32-bit Dom0 running on a 32-bit HV and a 64-bit HV, but have not tested a 64-bit Dom0 on a 64-bit HV. I'm pretty sure it'll work, but if anyone is running 64-on-64 please test this patch.
I don't have 3.0.5 to test here, but I can possibly test this on Monday. In the meantime I eyeballed the patch and from what I can tell it seems fine.

Rich.

--
Emerging Technologies, Red Hat  http://et.redhat.com/~rjones/
64 Baker Street, London, W1U 7DF  Mobile: +44 7866 314 421
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom.
Registered in England and Wales under Company Registration No. 3798903
Directors: Michael Cunningham (USA), Charlie Peters (USA) and David Owens (Ireland)

On Thu, Apr 12, 2007 at 10:02:45AM +0100, Richard W.M. Jones wrote:
A Man Without A Country Daniel P. Berrange wrote:
I've been doing some testing with current xen-unstable (ie what will very shortly be 3.0.5) and came across a whole bunch of things which needed fixing - some expected, others not expected. The attached patch addresses the following issues:
As a general question about policy, are we going to support every rev of the HV interface? If Xen changes the interface again before 3.0.5, will we support both this "pre-3.0.5" interface and the final one?
Yeah, we aim to support pretty much all HV versions in 3.0.x. It isn't actually as bad as it sounds: in v2 of the hypercall ABI there have been 3 revisions of the sysctl ABI and 5 of the domctl ABI. For the limited set of hypercalls that libvirt makes, though, there were no changes in domctl 1->4, or in sysctl 1->2. So we only need 2 code branches to deal with those 5 revisions of the ABI.
- Many of the hypercalls have their structs changed so that int64_t or 'foo *' members are always 64-bit aligned even on 32-bit platforms. This is part of the work to allow 32-bit Dom0/DomU to work on 64-bit hypervisor.
For the int64_t types I had to annotate with __attribute__((aligned(8))). This did not work for pointer data types, so for those I had to do a more complex hack with
union { foo *v; int64_t pad __attribute__((aligned(8))) }
What is the problem here? I did some tests and it seems to work just fine:
I'm not sure - all I know is that it didn't work - all I got back from the hypervisor was complete garbage. I will try it again, but I'm not hopeful, and it's near impossible to debug.

Dan.

On Thu, Apr 12, 2007 at 02:46:46AM +0100, Daniel P. Berrange wrote:
I've been doing some testing with current xen-unstable (ie what will very shortly be 3.0.5) and came across a whole bunch of things which needed fixing - some expected, others not expected. The attached patch addresses the following issues:
Okay, thanks a lot for this !
- Many of the hypercalls have their structs changed so that int64_t or 'foo *' members are always 64-bit aligned even on 32-bit platforms. This is part of the work to allow 32-bit Dom0/DomU to work on 64-bit hypervisor.
For the int64_t types I had to annotate with __attribute__((aligned(8))). This did not work for pointer data types, so for those I had to do a more complex hack with
union { foo *v; int64_t pad __attribute__((aligned(8))) }
This matches what is done in the public (BSD licensed) Xen HV header files.
We already had ways to deal with v0 vs v2 hypercall structs. This change is still technically v2, just a minor revision of the domctl or sysctl interfaces. Thus I have named the extra structs v2d5 and v2s3 to indicate hypercall version 2 / domctl version 5 and hypercall version 2 / sysctl version 3 respectively.
Though not completely - see the xen_op_v2_dom remark below.
- The 'flags' field in the getdomaininfo hypercall now has an extra flag defined, (1<<1), which was previously unused and now indicates that the guest is HVM. Thus when fetching domain state we have to mask out that flag, otherwise we'll never match the correct paused/running/blocked/etc states.
<grin/>
- In the xenHypervisorNumOfDomains method, under certain scenarios we retry the hypercall with a bigger memory buffer. Due to the ABI alignment changes we hit that scenario every time, and ended up allocating a multi-GB buffer :-) The fixed structs sort this out, but as a preventative measure against any future HV changes the patch breaks out of the loop at the 10,000 guest mark to avoid allocating GBs of memory.
That was a bug on our side :-)
- The unified Xen driver broke the GetVCPUs method - it was mistakenly
That too !
- The method to open the XenD connection was calling xenDaemonGetVersion to test whether the connection succeeded, but then also calling xend_detect_config_version, which does pretty much the same thing. So I removed the former; we now only do the latter as a 'ping' test when opening. This removes one HTTP GET, which is a worthwhile performance boost given how horrifically slow XenD is.
Good catch. I guess the detection was originally done in only one of the pre-driver code paths; then it was cleaned up and the test was added at connection open time, but unfortunately the old one wasn't removed. bug++
- The HVM SEXPR for configuring the VNC / SDL graphics is no longer part of the (image) block. It now matches the PVFB graphics config and is an explicit (vfb) block within the (devices) block. So if xend_config_format >= 4 we use the new style config - this assumes upstream XenD is patched to increment xend_config_format from 3 to 4; I sent a patch and am confident it will be applied very shortly.
you mean the patch will be in before 3.0.5 ?
- The QEMU device model allows a user to specify multiple devices for the boot order, e.g. 'andc' to indicate 'floppy', 'network', 'cdrom', 'disk'. We assumed it was a single letter only. I now serialize this into multiple <boot dev='XXX'/> elements, ordered according to priority. The XML -> SEXPR conversion allows the same.
I've tested all this on a 32-bit Dom0 running on a 32-bit HV and a 64-bit HV, but have not tested a 64-bit Dom0 on a 64-bit HV. I'm pretty sure it'll work, but if anyone is running 64-on-64 please test this patch.
cool, thanks. A few comments on the patch below. I suggest committing this, waiting for xen-3.0.5 to officially roll out, and then making a new libvirt release.

The painful thing is regression tests - we don't really have a good answer. Some of the entry points are tested by virt-manager, but for example the CPU affinity stuff is really uncommon; actually it took months before we found an error in the last change of hypercalls. From a patch perspective I feel relatively safe that this won't break with the older hypervisor, but actually testing that doesn't look fun at all.

[...]
+
+/* As of Hypervisor Call v2, DomCtl v5 we are now 8-byte aligned
+   even on 32-bit archs when dealing with uint64_t */
+#define ALIGN_64 __attribute__((aligned(8)))
I'm wondering, should we test for the GCC version here and #error if not, so that people who compile with a different compiler have a chance to catch a potential problem here?
@@ -415,10 +508,14 @@ struct xen_op_v2_dom {
     domid_t domain;
     union {
         xen_v2_setmaxmem setmaxmem;
+        xen_v2d5_setmaxmem setmaxmemd5;
         xen_v2_setmaxvcpu setmaxvcpu;
         xen_v2_setvcpumap setvcpumap;
+        xen_v2d5_setvcpumap setvcpumapd5;
         xen_v2_vcpuinfo getvcpuinfo;
+        xen_v2d5_vcpuinfo getvcpuinfod5;
         xen_v2_getvcpumap getvcpumap;
+        xen_v2d5_getvcpumap getvcpumapd5;
         uint8_t padding[128];
     } u;
 };
I was a bit surprised by that; somehow I was expecting distinct struct xen_op_v2_dom and struct xen_op_v2d5_dom. But this minimizes the change - only the fields in the union are impacted - so that's probably better, yes.
@@ -1802,10 +1949,18 @@ xenHypervisorNumOfDomains(virConnectPtr
         return (-1);
     nbids = ret;
+    /* Can't possibly have more than 10,000 concurrent guests
+     * so limit how many times we try, to avoid increasing
+     * without bound & thus allocating all of system memory !
+     * XXX I'll regret this comment in a few years time ;-)
+     */
hehe - now if the Xen headers exported a maximum number of domains, that would be clean. I would be surprised if there wasn't a hardcoded limit, but I was unable to find one under the /usr/include/xen headers ...
     if (nbids == maxids) {
-        last_maxids *= 2;
-        maxids *= 2;
-        goto retry;
+        if (maxids < 10000) {
+            last_maxids *= 2;
+            maxids *= 2;
+            goto retry;
+        }
+        nbids = -1;
     }
     if ((nbids < 0) || (nbids > maxids))
         return(-1);
I tried to look for other places where we may grow/realloc data like that in the code, but apparently that's the only place - maybe except some buffer handling, but that looked safe, not a loop like this!
@@ -1994,7 +2149,8 @@ xenHypervisorGetDomInfo(virConnectPtr co
         return (-1);
     domain_flags = XEN_GETDOMAININFO_FLAGS(dominfo);
-    domain_state = domain_flags & 0xFF;
+    domain_flags &= ~DOMFLAGS_HVM; /* Mask out HVM flags */
+    domain_state = domain_flags & 0xFF; /* Mask out high bits */
     switch (domain_state) {
     case DOMFLAGS_DYING:
         info->state = VIR_DOMAIN_SHUTDOWN;
<regrin/>

thanks again, please apply,

Daniel

--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

On Thu, Apr 12, 2007 at 12:33:14PM -0400, Daniel Veillard wrote:
- The HVM SEXPR for configuring the VNC / SDL graphics is no longer part of the (image) block. It now matches the PVFB graphics config and is an explicit (vfb) block within the (devices) block. So if xend_config_format >= 4 we use the new style config - this assumes upstream XenD is patched to increment xend_config_format from 3 to 4; I sent a patch and am confident it will be applied very shortly.
you mean the patch will be in before 3.0.5 ?
It is already in xen-unstable staging tree.
- The QEMU device model allows a user to specify multiple devices for the boot order, e.g. 'andc' to indicate 'floppy', 'network', 'cdrom', 'disk'. We assumed it was a single letter only. I now serialize this into multiple <boot dev='XXX'/> elements, ordered according to priority. The XML -> SEXPR conversion allows the same.
I've tested all this on a 32-bit Dom0 running on a 32-bit HV and a 64-bit HV, but have not tested a 64-bit Dom0 on a 64-bit HV. I'm pretty sure it'll work, but if anyone is running 64-on-64 please test this patch.
cool, thanks. A few comments on the patch below. I suggest committing this, waiting for xen-3.0.5 to officially roll out, and then making a new libvirt release.
I'd like to see a release sooner than that - there are a number of nasty bugs in the networking stuff which cause the daemon to SEGV, and the iptables ruleset changes are pretty important to get out. For Fedora we're planning to ship a xen-3.0.5 pre-release in the next Fedora 7 test, in anticipation of the Xen 3.0.5 GA being available RSN. It'd be nice to have a real libvirt build in that, rather than applying a huge number of patches.
The painful thing is regression tests - we don't really have a good answer. Some of the entry points are tested by virt-manager, but for example the CPU affinity stuff is really uncommon; actually it took months before we found an error in the last change of hypercalls.
It's possible to test the VCPU stuff with virsh - that's how I tested it against the new HV. I can test it against 3.0.3 in the same way when I have a minute.
+
+/* As of Hypervisor Call v2, DomCtl v5 we are now 8-byte aligned
+   even on 32-bit archs when dealing with uint64_t */
+#define ALIGN_64 __attribute__((aligned(8)))
I'm wondering, should we test for the GCC version here and #error if not, so that people who compile with a different compiler have a chance to catch a potential problem here?
In theory yes, but in practice the user is doomed anyway, because we already have

#include <xen/dom0_ops.h>
#include <xen/version.h>
#include <xen/xen.h>
#include <xen/linux/privcmd.h>

which are littered with __attribute__((aligned(8))) with no check for GCC.
     nbids = ret;
+    /* Can't possibly have more than 10,000 concurrent guests
+     * so limit how many times we try, to avoid increasing
+     * without bound & thus allocating all of system memory !
+     * XXX I'll regret this comment in a few years time ;-)
+     */
hehe - now if the Xen headers exported a maximum number of domains, that would be clean. I would be surprised if there wasn't a hardcoded limit, but I was unable to find one under the /usr/include/xen headers ...
Well, domid_t is a uint16_t, so that's 65,536 guests total.

Dan.

On Thu, Apr 12, 2007 at 05:51:04PM +0100, Daniel P. Berrange wrote:
On Thu, Apr 12, 2007 at 12:33:14PM -0400, Daniel Veillard wrote:
- The HVM SEXPR for configuring the VNC / SDL graphics is no longer part of the (image) block. It now matches the PVFB graphics config and is an explicit (vfb) block within the (devices) block. So if xend_config_format >= 4 we use the new style config - this assumes upstream XenD is patched to increment xend_config_format from 3 to 4; I sent a patch and am confident it will be applied very shortly.
you mean the patch will be in before 3.0.5 ?
It is already in xen-unstable staging tree.
good :-)
cool, thanks. A few comments on the patch below. I suggest committing this, waiting for xen-3.0.5 to officially roll out, and then making a new libvirt release.
I'd like to see a release sooner than that - there are a number of nasty bugs in the networking stuff which cause the daemon to SEGV, and the iptables ruleset changes are pretty important to get out. For Fedora we're planning to ship a xen-3.0.5 pre-release in the next Fedora 7 test, in anticipation of the Xen 3.0.5 GA being available RSN. It'd be nice to have a real libvirt build in that, rather than applying a huge number of patches.
Okay, so before the Fedora 7 deadline, which was like a few weeks ago ... I will do my best :-)
The painful thing is regression tests - we don't really have a good answer. Some of the entry points are tested by virt-manager, but for example the CPU affinity stuff is really uncommon; actually it took months before we found an error in the last change of hypercalls.
It's possible to test the VCPU stuff with virsh - that's how I tested it against the new HV. I can test it against 3.0.3 in the same way when I have a minute.
I was thinking more in terms of a framework we could use for regression testing... Keeping every version of Xen/KVM/... around may not be possible, but if, when 'make tests' is run on a machine, we could autodetect what is available and test that subset, this would ease global coverage.
+
+/* As of Hypervisor Call v2, DomCtl v5 we are now 8-byte aligned
+   even on 32-bit archs when dealing with uint64_t */
+#define ALIGN_64 __attribute__((aligned(8)))
I'm wondering, should we test for the GCC version here and #error if not, so that people who compile with a different compiler have a chance to catch a potential problem here?
In theory yes, but in practice the user is doomed anyway, because we already have
#include <xen/dom0_ops.h>
#include <xen/version.h>
#include <xen/xen.h>
#include <xen/linux/privcmd.h>
which are littered with __attribute__((aligned(8))) with no check for GCC.
Okay :-)
hehe - now if the Xen headers exported a maximum number of domains, that would be clean. I would be surprised if there wasn't a hardcoded limit, but I was unable to find one under the /usr/include/xen headers ...
Well, domid_t is a uint16_t, so that's 65,536 guests total.
good point. A bit high as a default, though 65,000 * sizeof(xen_getdomaininfolist) is not that much even in a weird error case.

Daniel

Hi, Dan

I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.)

virsh dominfo 0
libvir: error : no support for hypervisor (null)
lt-virsh: error: failed to connect to the hypervisor

Thanks
Atsushi SAKAI

On Fri, Apr 13, 2007 at 02:37:03PM +0900, Atsushi SAKAI wrote:
Hi, Dan
I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.) virsh dominfo 0
libvir: error : no support for hypervisor (null) lt-virsh: error: failed to connect to the hypervisor
Hi Atsushi,

could you help us a bit with the debugging there, by running lt-virsh as root under gdb and adding a breakpoint in xenHypervisorInit(), around line 1233 in xen_internal.c, where the version detection starts. We test a couple of hypervisor calls there (done via ioctl); could you see if all the ioctls failed? Then, assuming it gets through, we look up the version later in that function (at detect_v2) and call virXen_getdomaininfo() to do so; could you trace through it and virXen_getdomaininfolist() to see what actually happens?

thanks a lot!

Daniel

Hi, Daniel

I will investigate this issue. But next week I am away (for the Xen Summit); in that case, please ask Nobuhiro (Ito).

Thanks
Atsushi SAKAI

Daniel Veillard <veillard@redhat.com> wrote:
On Fri, Apr 13, 2007 at 02:37:03PM +0900, Atsushi SAKAI wrote:
Hi, Dan
I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.) virsh dominfo 0
libvir: error : no support for hypervisor (null) lt-virsh: error: failed to connect to the hypervisor
Hi Atsushi,
could you help us a bit with the debugging there, by running lt-virsh as root under gdb and adding a breakpoint in xenHypervisorInit(), around line 1233 in xen_internal.c, where the version detection starts. We test a couple of hypervisor calls there (done via ioctl); could you see if all the ioctls failed? Then, assuming it gets through, we look up the version later in that function (at detect_v2) and call virXen_getdomaininfo() to do so; could you trace through it and virXen_getdomaininfolist() to see what actually happens?
thanks a lot !
Daniel

On Fri, Apr 13, 2007 at 02:37:03PM +0900, Atsushi SAKAI wrote:
Hi, Dan
I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.) virsh dominfo 0
libvir: error : no support for hypervisor (null) lt-virsh: error: failed to connect to the hypervisor
Hmm, Fedora 7 test3 is still on Xen 3.0.4, so it shouldn't have been impacted. I'll double-check on my own box too.

Dan.

Hi, Dan

This is my mistake - the Fedora 7 problem does not exist. (I was using a newly set up machine and did not notice that xen-devel was not installed.)

Anyway, I found a bug in GetVersion (it is related to xen_unified.c). I will send it later.

Thanks
Atsushi SAKAI

"Daniel P. Berrange" <berrange@redhat.com> wrote:
On Fri, Apr 13, 2007 at 02:37:03PM +0900, Atsushi SAKAI wrote:
Hi, Dan
I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.) virsh dominfo 0
libvir: error : no support for hypervisor (null) lt-virsh: error: failed to connect to the hypervisor
Hmm, Fedora 7 test3 is still on Xen 3.0.4, so it shouldn't have been impacted. I'll double-check on my own box too.
Dan.

Hi, Dan

Just on the 64-bit issue: Tomohiro (Takahashi) checked virsh (libvirt) on xen-ia64-unstable 14828, and virsh works fine! So we can test libvirt on current xen-unstable!

Thanks
Atsushi SAKAI

"Daniel P. Berrange" <berrange@redhat.com> wrote:
On Fri, Apr 13, 2007 at 02:37:03PM +0900, Atsushi SAKAI wrote:
Hi, Dan
I tested on Fedora 7 test3 x86_64 but it failed. (The old version, which ships with test3, works fine.) virsh dominfo 0
libvir: error : no support for hypervisor (null) lt-virsh: error: failed to connect to the hypervisor
Hmm, Fedora 7 test3 is still on Xen 3.0.4, so it shouldn't have been impacted. I'll double-check on my own box too.
Dan.
participants (4)
-
Atsushi SAKAI
-
Daniel P. Berrange
-
Daniel Veillard
-
Richard W.M. Jones