[Libvir] virsh - vcpuinfo cmd not working in 0.2.2

Cheers libvirt,

I've tested the CLI virsh with my recently compiled version of libvirt and with version 0.1.9. In version 0.2.2, vcpuinfo provides wrong information. In version 0.1.9, only the values for VCPU and CPU are wrong. Do you have any idea why? See the test results below.

Jan

> xm vcpu-list
Name        ID  VCPUs  CPU  State  Time(s)  CPU Affinity
Domain-0     0      0    0  r--    95463.9  any cpu
Domain-0     0      1    -  --p       55.6  any cpu
stornode     6      0    1  -b-     2036.1  1
worknode     5      0    1  -b-     2073.9  1

> virsh vcpuinfo (0,5,6) in version 0.2.2
- Domain-0
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   --

- worknode
virsh # vcpuinfo 5
VCPU:           0
CPU:            0
State:          blocked
CPU time:       4.3s
CPU Affinity:   --

- stornode
VCPU:           0
CPU:            0
State:          blocked
CPU time:       4.3s
CPU Affinity:   --

> virsh vcpuinfo (0,5,6) in version 0.1.9
- Domain-0
VCPU:           0
CPU:            0
State:          running
CPU time:       95466.7s
CPU Affinity:   yy

- worknode
VCPU:           0
CPU:            1
State:          blocked
CPU time:       2074.2s
CPU Affinity:   -y

- stornode
VCPU:           0
CPU:            1
State:          blocked
CPU time:       2036.5s
CPU Affinity:   -y

On Fri, May 04, 2007 at 07:51:51PM +0200, Jan Michael wrote:
Cheers libvirt,
I've tested the CLI virsh with my recently compiled version of libvirt and with version 0.1.9. In version 0.2.2, vcpuinfo provides wrong information. In version 0.1.9, only the values for VCPU and CPU are wrong. Do you have any idea why?
Can you tell me what architecture you are running, and what Xen HV version is being used?

Regards, Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Hi Daniel,

Daniel P. Berrange wrote:
On Fri, May 04, 2007 at 07:51:51PM +0200, Jan Michael wrote:
Cheers libvirt,
I've tested the CLI virsh with my recently compiled version of libvirt and with version 0.1.9. In version 0.2.2, vcpuinfo provides wrong information. In version 0.1.9, only the values for VCPU and CPU are wrong. Do you have any idea why?
Can you tell me what architecture you are running, and what Xen HV version is being used?
I'm running Xen 3.0.3-rc5-1.2835.s on Linux 2.6.18-1.2835 SMP Wed Nov 29 21:05:58 CET 2006 i686 i686 i386 GNU/Linux. The box has 2 GB of memory and dual Intel Xeon 2.8 GHz CPUs. What else do you need?

Cheers, Jan

Hi, Jan

I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.

Thanks
Atsushi SAKAI

Jan Michael <jan.michael@cern.ch> wrote:
> [Jan's original report and test results, quoted in full]

Hi Atsushi,

Atsushi SAKAI wrote:
> I think you should use 0.2.1 at this moment.
> libvirt cannot handle Xen-hypervisor-domctl correctly on 0.2.2.
> But Xen-hypervisor-sysctl works fine.

As you advised, I tried 0.2.1, but I had no success. If I use the wrapper for virsh from the src directory, I get the following error:

<error>
[root@xen-machine src]# ./virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>

And when using virsh directly from src/.libs, the result is the same as with 0.2.2:

<virsh 0.2.1>
virsh # vcpuinfo 0
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   --

virsh # vcpuinfo 5
VCPU:           0
CPU:            0
State:          blocked
CPU time:       4.3s
CPU Affinity:   --
</virsh 0.2.1>

Any ideas? Does virsh use the libraries from its own directory, /opt/libvirt-0.2.1/ in this case, or does it use the installed libraries from /usr/lib?

I hope you can help me.

Thanks,
Jan

On Mon, May 07, 2007 at 10:13:57AM +0900, Atsushi SAKAI wrote:
Hi, Jan
I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.
I've been trying to reproduce / diagnose the problems you reported too, but have not had much luck so far. Every way I look at it, the code looks to be using the correct hypercall numbers, operation numbers & structs. Until I just noticed this:

    xenHypervisorDoV2Dom(int handle, xen_op_v2_dom* op)
    {
        ....
        if (mlock(op, sizeof(dom0_op_t)) < 0) {

Notice that it is doing sizeof(dom0_op_t) instead of sizeof(xen_op_v2_dom). There is the same typo in xenHypervisorDoV2Sys.

Now, dom0_op_t is defined as:

    struct dom0_op {
        uint32_t cmd;
        uint32_t interface_version; /* DOM0_INTERFACE_VERSION */
        union {
            struct dom0_msr               msr;
            struct dom0_settime           settime;
            struct dom0_add_memtype       add_memtype;
            struct dom0_del_memtype       del_memtype;
            struct dom0_read_memtype      read_memtype;
            struct dom0_microcode         microcode;
            struct dom0_platform_quirk    platform_quirk;
            struct dom0_memory_map_entry  physical_memory_map;
            uint8_t                       pad[128];
        } u;
    };

which is 4 + 4 + 128 bytes == 136.

Next, xen_sysctl is defined as:

    struct xen_sysctl {
        uint32_t cmd;
        uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
        union {
            struct xen_sysctl_readconsole        readconsole;
            struct xen_sysctl_tbuf_op            tbuf_op;
            struct xen_sysctl_physinfo           physinfo;
            struct xen_sysctl_sched_id           sched_id;
            struct xen_sysctl_perfc_op           perfc_op;
            struct xen_sysctl_getdomaininfolist  getdomaininfolist;
            uint8_t                              pad[128];
        } u;
    };

which is also 4 + 4 + 128 bytes == 136.

Finally, xen_domctl is defined as:

    struct xen_domctl {
        uint32_t cmd;
        uint32_t interface_version; /* XEN_DOMCTL_INTERFACE_VERSION */
        domid_t  domain;
        union {
            struct xen_domctl_createdomain       createdomain;
            struct xen_domctl_getdomaininfo      getdomaininfo;
            struct xen_domctl_getmemlist         getmemlist;
            struct xen_domctl_getpageframeinfo   getpageframeinfo;
            struct xen_domctl_getpageframeinfo2  getpageframeinfo2;
            struct xen_domctl_vcpuaffinity       vcpuaffinity;
            struct xen_domctl_shadow_op          shadow_op;
            struct xen_domctl_max_mem            max_mem;
            struct xen_domctl_vcpucontext        vcpucontext;
            struct xen_domctl_getvcpuinfo        getvcpuinfo;
            struct xen_domctl_max_vcpus          max_vcpus;
            struct xen_domctl_scheduler_op       scheduler_op;
            struct xen_domctl_setdomainhandle    setdomainhandle;
            struct xen_domctl_setdebugging       setdebugging;
            struct xen_domctl_irq_permission     irq_permission;
            struct xen_domctl_iomem_permission   iomem_permission;
            struct xen_domctl_ioport_permission  ioport_permission;
            struct xen_domctl_hypercall_init     hypercall_init;
            struct xen_domctl_arch_setup         arch_setup;
            struct xen_domctl_settimeoffset      settimeoffset;
            uint8_t                              pad[128];
        } u;
    };

which is crucially different: 4 + 4 + 2 + 128 bytes == 138.

So the buffer we're mlock()ing is 2 bytes too small for domctl hypercalls. This may or may not explain the bugs, but it's a worthwhile fix to try if you have a system where you can reliably reproduce the vcpu problems.

The second thing is that we've just discovered a bug in the Fedora Xen 2.6.20 kernels with respect to SMP which could cause random bad things to happen. So if you're using a Fedora 2.6.20 kernel, it is also worth seeing whether the problem persists with an older Fedora 2.6.19/18 kernel, or with the vanilla upstream Xen.

Dan.

Hi, Dan

Thanks for the information. I am also wondering about the mlock data (I guess the old variable is there for historical reasons). I will try to check it.

Anyway, I changed the struct (xen_v2_vcpuinfo) ordering, so that vcpu goes last and the others go to the front, in the Dom0-vcpu0 case. Then I successfully got the information. (This is just temporary; I will investigate this issue as a side task.)

Thanks
Atsushi SAKAI

"Daniel P. Berrange" <berrange@redhat.com> wrote:
On Mon, May 07, 2007 at 10:13:57AM +0900, Atsushi SAKAI wrote:
Hi, Jan
I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.
I've been trying to reproduce / diagnose the problems you reported too but not had much luck so far. Every way I look at it the code looks to be using the correct hypercall numbers, operation numbers & structs. Until I just noticed this:
xenHypervisorDoV2Dom(int handle, xen_op_v2_dom* op)
{
    ....
    if (mlock(op, sizeof(dom0_op_t)) < 0) {
Notice that it is doing sizeof(dom0_op_t) instead of sizeof(xen_op_v2_dom)
There is the same typo with xenHypervisorDoV2Sys.
Dan.

Hi Daniel,

On 07.05.2007, at 22:06, Daniel P. Berrange wrote:
On Mon, May 07, 2007 at 10:13:57AM +0900, Atsushi SAKAI wrote:
Hi, Jan
I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.
I've been trying to reproduce / diagnose the problems you reported too but not had much luck so far. Every way I look at it the code looks to be using the correct hypercall numbers, operation numbers & structs. Until I just noticed this:
Thank you for trying to track down this error. Even though I get the following error from the virsh command line interface,

<error>
[root@xen-machine libvirt-0.2.1]# ./src/virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>

my own program works fine with the 0.2.1 libraries. The reason I had no success with 0.2.1 before is that my program was still compiled against the old libraries. After I cleaned up my system and built 0.2.1 again, I could use the library. But now the virsh command is not working, and I don't know how to figure out which library version it was built against.

To get to the point: I will use version 0.2.1 for now and will test one of the newer versions once it can handle Xen-hypervisor-domctl correctly, as Atsushi said.
The second thing is that we've just discovered a bug in the Fedora Xen kernels 2.6.20 wrt to SMP which could cause random bad things to happen So if you're using a Fedora 2.6.20 kernel it is also worth seeing if it is still a problem with an older Fedora 2.6.19/18 kernel, or with the vanilla upstream Xen
I'm not really sure about my kernel version and its origin, because I didn't compile it myself. But it is very likely that my kernel is not a Fedora one, because I use Scientific Linux.

Cheers, Jan

On Thu, May 10, 2007 at 02:55:29PM +0200, Jan Michael wrote:
Hi Daniel,
On 07.05.2007, at 22:06, Daniel P. Berrange wrote:
On Mon, May 07, 2007 at 10:13:57AM +0900, Atsushi SAKAI wrote:
Hi, Jan
I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.
I've been trying to reproduce / diagnose the problems you reported too but not had much luck so far. Every way I look at it the code looks to be using the correct hypercall numbers, operation numbers & structs. Until I just noticed this:
Thank you for trying to track down this error. Even though I get the following error from the virsh command line interface,
<error>
[root@xen-machine libvirt-0.2.1]# ./src/virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>
Can you re-run with

    strace -f -o virsh.log ./src/virsh

and send the resulting log file?

Also, can you run 'lsof -p <pid>' for the PID which matches xenstored, and for the 2nd python xend process?

    # ps -axuwf | grep xen
    Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
    root        13  0.0  0.0      0     0 ?     S<  May08   0:00  \_ [xenwatch]
    root        14  0.0  0.0      0     0 ?     S<  May08   0:00  \_ [xenbus]
    root     30138  0.0  0.0  69080   712 pts/2 S+  09:23   0:00  \_ grep xen
    root      3106  0.3  0.0   8396   900 ?     S   May08  10:08 xenstored --pid-file /var/run/xenstore.pid
    root      3111  0.0  0.4 172968  9252 ?     S   May08   0:00 python /usr/sbin/xend start
    root      3112  0.5  1.4 359856 29728 ?     Sl  May08  15:33  \_ python /usr/sbin/xend start
    root      3114  0.0  0.0  20520   548 ?     Sl  May08   0:00 xenconsoled

You'd need to run

    lsof -p 3106
    lsof -p 3112

as root.
The second thing is that we've just discovered a bug in the Fedora Xen kernels 2.6.20 wrt to SMP which could cause random bad things to happen So if you're using a Fedora 2.6.20 kernel it is also worth seeing if it is still a problem with an older Fedora 2.6.19/18 kernel, or with the vanilla upstream Xen
I'm not really sure about my kernel version and its origin, because I didn't compile it myself. But it is very likely that my kernel is not a Fedora one, because I use Scientific Linux.
At the very least, the output from 'xm info' will be useful.

Regards, Dan.

Hi Daniel,

On 10.05.2007, at 15:05, Daniel P. Berrange wrote:
On Thu, May 10, 2007 at 02:55:29PM +0200, Jan Michael wrote:
Hi Daniel,
On 07.05.2007, at 22:06, Daniel P. Berrange wrote:
On Mon, May 07, 2007 at 10:13:57AM +0900, Atsushi SAKAI wrote:
Hi, Jan
I think you should use 0.2.1 at this moment. libvirt cannot handle Xen-hypervisor-domctl correctly in 0.2.2, but Xen-hypervisor-sysctl works fine. This problem was recognized two weeks ago, but I have had no time to investigate the issue.
I've been trying to reproduce / diagnose the problems you reported too but not had much luck so far. Every way I look at it the code looks to be using the correct hypercall numbers, operation numbers & structs. Until I just noticed this:
Thank you for trying to track down this error. Even though I get the following error from the virsh command line interface,
<error>
[root@xen-machine libvirt-0.2.1]# ./src/virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>
Can you re-run with
strace -f -o virsh.log ./src/virsh
And send the resulting log file.
Done. Please find attached virsh.log.
Also, can you run 'lsof -p <pid>' for the PID which matches xenstored, and the 2nd python XendD process.
# ps -axuwf | grep xen
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
root        13  0.0  0.0      0     0 ?     S<  May08   0:00  \_ [xenwatch]
root        14  0.0  0.0      0     0 ?     S<  May08   0:00  \_ [xenbus]
root     30138  0.0  0.0  69080   712 pts/2 S+  09:23   0:00  \_ grep xen
root      3106  0.3  0.0   8396   900 ?     S   May08  10:08 xenstored --pid-file /var/run/xenstore.pid
root      3111  0.0  0.4 172968  9252 ?     S   May08   0:00 python /usr/sbin/xend start
root      3112  0.5  1.4 359856 29728 ?     Sl  May08  15:33  \_ python /usr/sbin/xend start
root      3114  0.0  0.0  20520   548 ?     Sl  May08   0:00 xenconsoled
You'd need to run
lsof -p 3106
lsof -p 3112
..as root.
Done. You'll find the lsof output for xenstored attached as lsof.3713.xenstored, and the lsof output for the xend python process attached as lsof.3719.python.xend.
The second thing is that we've just discovered a bug in the Fedora Xen kernels 2.6.20 wrt to SMP which could cause random bad things to happen So if you're using a Fedora 2.6.20 kernel it is also worth seeing if it is still a problem with an older Fedora 2.6.19/18 kernel, or with the vanilla upstream Xen
I'm not really sure about my kernel version and its origin, because I didn't compile it myself. But it is very likely that my kernel is not a Fedora one, because I use Scientific Linux.
At the very least, the output from 'xm info' will be useful.
As you wish. Please find it attached as xm.info. Hope that helps.

Cheers, Jan

Jan Michael wrote:
On 10.05.2007, at 15:05, Daniel P. Berrange wrote:
On Thu, May 10, 2007 at 02:55:29PM +0200, Jan Michael wrote:
<error>
[root@xen-machine libvirt-0.2.1]# ./src/virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>
Can you re-run with
strace -f -o virsh.log ./src/virsh
And send the resulting log file.
Done. Please find attached virsh.log.
    32266 stat64("/var/run/xenstored/socket", {st_mode=S_IFSOCK|0600, st_size=0, ...}) = 0
    32266 socket(PF_FILE, SOCK_STREAM, 0)  = 4
    32266 fcntl64(4, F_GETFD)              = 0
    32266 fcntl64(4, F_SETFD, FD_CLOEXEC)  = 0
    32266 connect(4, {sa_family=AF_FILE, path="/var/run/xenstored/socket"}, 110) = 0
    32266 write(2, "libvir: error : no support for h"..., 42) = 42

It looks like the last thing it does is connect to xenstored (successfully) and then fail.

Did you get any further on this? I would be tempted to try libvirt from CVS.

Rich.

--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom.
Registered in England and Wales under Company Registration No. 03798903

Hi, Rich

I have only a little time to spend on this; if you have time, please go ahead.

Anyway, another way to work around this is to turn off the function in xen_internal.c and use the one in xend_internal.c instead. (This is just temporary.)

Thanks
Atsushi SAKAI

"Richard W.M. Jones" <rjones@redhat.com> wrote:
Jan Michael wrote:
On 10.05.2007, at 15:05, Daniel P. Berrange wrote:
On Thu, May 10, 2007 at 02:55:29PM +0200, Jan Michael wrote:
<error>
[root@xen-machine libvirt-0.2.1]# ./src/virsh
libvir: error : no support for hypervisor
lt-virsh: error: failed to connect to the hypervisor
</error>
Can you re-run with
strace -f -o virsh.log ./src/virsh
And send the resulting log file.
Done. Please find attached virsh.log.
32266 stat64("/var/run/xenstored/socket", {st_mode=S_IFSOCK|0600, st_size=0, ...}) = 0
32266 socket(PF_FILE, SOCK_STREAM, 0)  = 4
32266 fcntl64(4, F_GETFD)              = 0
32266 fcntl64(4, F_SETFD, FD_CLOEXEC)  = 0
32266 connect(4, {sa_family=AF_FILE, path="/var/run/xenstored/socket"}, 110) = 0
32266 write(2, "libvir: error : no support for h"..., 42) = 42
It looks like the last thing it does is connect to xenstored (successfully) and then fail.
Did you get any further on this? I would be tempted to try libvirt from CVS.
Rich.

On Tue, May 15, 2007 at 09:26:25AM +0900, Atsushi SAKAI wrote:
Hi, Rich
I have only a little time to spend on this; if you have time, please go ahead.
Anyway, another way to work around this is to turn off the function in xen_internal.c and use the one in xend_internal.c instead. (This is just temporary.)
IIRC, you previously mentioned you were testing on Fedora rawhide, which (at the time) was based on Xen 3.0.4. In the past week we've discovered that the SMP support in rawhide was broken on x86_64, that there was a very serious memory corruption bug in the i386 kernel, and that there was an uninitialized variable in more SMP code on both arches. So unless this problem can be reproduced on the upstream Xen kernels, I'd put it down to a Fedora kernel bug. Thus for further testing I'd recommend:

- Test against the vanilla 2.6.16 kernel from the Xen 3.0.4 release
- Test against current rawhide, which has fixed the kernel bugs and also updated to Xen 3.0.5

Regards, Dan.

Hi Richard,

On 14.05.2007, at 18:00, Richard W.M. Jones wrote:
Jan Michael wrote:
32266 stat64("/var/run/xenstored/socket", {st_mode=S_IFSOCK|0600, st_size=0, ...}) = 0
32266 socket(PF_FILE, SOCK_STREAM, 0)  = 4
32266 fcntl64(4, F_GETFD)              = 0
32266 fcntl64(4, F_SETFD, FD_CLOEXEC)  = 0
32266 connect(4, {sa_family=AF_FILE, path="/var/run/xenstored/socket"}, 110) = 0
32266 write(2, "libvir: error : no support for h"..., 42) = 42
It looks like the last thing it does is connect to xenstored (successfully) and then fail.
Did you get any further on this? I would be tempted to try libvirt from CVS.
No, not with virsh in libvirt 0.2.1. As you suggested, I tried the latest libvirt from CVS. I configured it with

    ./autogen.sh --disable-bridge-params --with-test=no --with-qemu=no

because otherwise it won't compile on my installation. virsh can then be run successfully from the libvirt/src directory. It reports the following:

<virsh version>
virsh # version
Compiled against library: libvir 0.2.2
Using library: libvir 0.2.2
Using API: Xen 3.0.1
Running hypervisor: Xen 3.0.0
</virsh version>

But it still has a broken vcpuinfo command:

<virsh vcpuinfo>
virsh # vcpuinfo 0
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   --
</virsh vcpuinfo>

I have no problem with that so far, since I can use libvirt 0.2.1 in my own programs without restriction of any kind. I use virsh only rarely, for some testing purposes.

Cheers, Jan
participants (4)
-
Atsushi SAKAI
-
Daniel P. Berrange
-
Jan Michael
-
Richard W.M. Jones