[libvirt-users] help
by vishal upadhyay
Hi All,
I am a very new user of the libvirt API.
--
Thanks and Regards
vishal Upadhyay
Voice:-08904713858
[libvirt-users] Out of Memory Error
by varun bhatnagar
Hi,
I am trying to attach a USB device to my VirtualBox node using libvirt. My
node is already running; I stop the node and then try to add the USB
device.
I have a separate XML file defined for the USB device. It looks like this:
<device>
  <hostdev mode='subsystem' type='usb'>
    <source>
      <vendor id='0x4321'/>
      <product id='0xfeeb'/>
    </source>
  </hostdev>
</device>
I am trying to attach it using this code:
import sys
import libvirt

conn = libvirt.open("vbox:///session")
if conn is None:
    print 'Failed to open connection to the hypervisor'
    sys.exit(1)
print 'connected to vbox hypervisor driver'

domainInstance = conn.lookupByName('SampleNode')
filed = open('/root/testFolder/usbSharedFolder.xml', 'r')
config_str = filed.read()
# Attach the device described by the XML to the domain.
domainInstance.attachDevice(config_str)
When it is executed I am getting an error message saying:
libvir: VirtualBox Driver error : out of memory
Even with virsh it gives the same error. I am using the command below:
attach-device SampleNode /root/testFolder/usbSharedFolder.xml
Can anyone tell me what is going wrong? It is really very important, so
please do reply.
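In case it helps, here is a variant I have not tried yet, using
attachDeviceFlags with an explicit persistence scope (whether the
VirtualBox driver honours this flag is an assumption on my part):

import libvirt

conn = libvirt.open("vbox:///session")
dom = conn.lookupByName("SampleNode")

with open("/root/testFolder/usbSharedFolder.xml", "r") as f:
    xml = f.read()

# The node is stopped, so explicitly target the persistent
# definition rather than the live state.
dom.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)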
Thanks in advance.
[libvirt-users] Changing QoS on-the-fly
by Davide Guerri
Hi all,
is it possible to change/add QoS settings for a running domain?
I edited the domain definition with virsh but it seems to have no effect until I shut down the domain.
libvirt version 0.9.13
kvm-qemu version 1.2.0
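For reference, this is the kind of live update I was expecting to work; a
minimal sketch, assuming updateDeviceFlags with VIR_DOMAIN_AFFECT_LIVE can
apply <bandwidth> changes on the fly (the domain name, MAC address, and
rates below are made up):

import libvirt

# Interface XML carrying the new QoS settings; it must identify the
# existing interface (here by MAC address). Rates are in KiB/s.
iface_xml = """
<interface type='network'>
  <mac address='52:54:00:00:00:01'/>
  <source network='default'/>
  <bandwidth>
    <inbound average='1000' peak='2000' burst='1024'/>
    <outbound average='1000'/>
  </bandwidth>
</interface>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("mydomain")

# VIR_DOMAIN_AFFECT_LIVE asks libvirt to apply the change to the running
# domain instead of only the stored definition.
dom.updateDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)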
Best,
Davide.
[libvirt-users] [Q] how to manage Infiniband disk (SRP) volumes with libvirt
by Hiroyuki Sato
Dear members.
I'm looking for the best practice for administering Infiniband SRP volumes
with libvirt (virsh).
How should I manage these volumes?
* SRP Disk is /dev/disk/by-id/scsi-2766f6c3030303037 or /dev/sdi
Currently I edit the guest domain file with the ``virsh edit XXXX'' command
and append the following lines:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/scsi-2766f6c3030303037'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
It seems to work fine; however, I feel it is a little bit slow.
Can I attach this volume in another (better-performing) way,
i.e. attach it as a SCSI disk?
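For context, this is the sort of thing I have in mind; a rough sketch,
assuming the guest has (or can get) a SCSI controller, with illustrative
domain and device names:

import libvirt

# Attach the SRP LUN as a SCSI device instead of virtio-blk.
# device='lun' passes the LUN through; bus='scsi' requires a SCSI
# controller (e.g. virtio-scsi) in the guest.
disk_xml = """
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/scsi-2766f6c3030303037'/>
  <target dev='sdc' bus='scsi'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("XXXX")
dom.attachDeviceFlags(disk_xml,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE |
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)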
Thank you for your advice.
Sincerely.
--
Hiroyuki Sato
[libvirt-users] OSX rpcgen problem
by Brian Candler
Trying to build the virtualbox-4.2 branch from git://pipo.sk/pipo/libvirt.git
under OSX 10.7, I get the following build error:
...
CC timegm.lo
CC vasnprintf.lo
CCLD libgnu.la
/usr/bin/ranlib: file: .libs/libgnu.a(fd-hook.o) has no symbols
/usr/bin/ranlib: file: .libs/libgnu.a(threadlib.o) has no symbols
ranlib: file: .libs/libgnu.a(fd-hook.o) has no symbols
ranlib: file: .libs/libgnu.a(threadlib.o) has no symbols
GEN charset.alias
GEN ref-add.sed
GEN ref-del.sed
Making all in include
Making all in libvirt
make[3]: Nothing to be done for `all'.
make[3]: Nothing to be done for `all-am'.
Making all in src
GEN util/virkeymaps.h
GEN locking/lock_protocol.h
GEN locking/lock_protocol.c
GEN locking/lock_daemon_dispatch_stubs.h
GEN lxc/lxc_protocol.h
unsigned hyper initpid;
^^^^^^^^^^^^^^^^^^^^^^^^^^
lxc/lxc_protocol.x, line 18: expected ';'
cannot shutdown /usr/bin/rpcgen: at ./rpc/genprotocol.pl line 124.
make[2]: *** [lxc/lxc_protocol.h] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
I built using the following command line:
PATH=/usr/local/Cellar/gettext/0.18.2/bin:$PATH ./autogen.sh --prefix=$HOME/usr
and gettext was installed using homebrew.
/usr/bin/rpcgen is from Xcode 4.6.2, and the problem is visible just by
running this by itself:
$ rpcgen -h src/lxc/lxc_protocol.x
/*
* Please do not edit this file.
* It was generated using rpcgen.
*/
#ifndef _LXC_PROTOCOL_H_RPCGEN
#define _LXC_PROTOCOL_H_RPCGEN
#define RPCGEN_VERSION 199506
#include <rpc/rpc.h>
enum virLXCProtocolExitStatus {
VIR_LXC_PROTOCOL_EXIT_STATUS_ERROR = 0,
VIR_LXC_PROTOCOL_EXIT_STATUS_SHUTDOWN = 1,
VIR_LXC_PROTOCOL_EXIT_STATUS_REBOOT = 2,
};
typedef enum virLXCProtocolExitStatus virLXCProtocolExitStatus;
#ifdef __cplusplus
extern "C" bool_t xdr_virLXCProtocolExitStatus(XDR *, virLXCProtocolExitStatus*);
#elif __STDC__
extern bool_t xdr_virLXCProtocolExitStatus(XDR *, virLXCProtocolExitStatus*);
#else /* Old Style C */
bool_t xdr_virLXCProtocolExitStatus();
#endif /* Old Style C */
struct virLXCProtocolExitEventMsg {
enum virLXCProtocolExitStatus status;
};
typedef struct virLXCProtocolExitEventMsg virLXCProtocolExitEventMsg;
#ifdef __cplusplus
extern "C" bool_t xdr_virLXCProtocolExitEventMsg(XDR *, virLXCProtocolExitEventMsg*);
#elif __STDC__
extern bool_t xdr_virLXCProtocolExitEventMsg(XDR *, virLXCProtocolExitEventMsg*);
#else /* Old Style C */
bool_t xdr_virLXCProtocolExitEventMsg();
#endif /* Old Style C */
unsigned hyper initpid;
^^^^^^^^^^^^^^^^^^^^^^^^^^
src/lxc/lxc_protocol.x, line 18: expected ';'
Perhaps Apple's version of rpcgen is broken? If so, I guess I'll have to
wait for a tarball before I can try this out. Unfortunately I upgraded to
VirtualBox 4.2, which means I've lost the ability to control it using
virsh :-(
Regards,
Brian.
[libvirt-users] There seems to be a deadlock in libvirt
by Chun-Hung Chen
Hi, all,
We were running OpenStack on Ubuntu with libvirt 0.9.10, and found that the
libvirt monitor command was not working well.
There were a lot of errors in libvirtd.log like this:
2013-02-07 06:07:39.000+0000: 18112: error :
qemuDomainObjBeginJobInternal:773 : Timed out during operation: cannot
acquire state change lock
We dug into libvirtd with strace and found that one of the threads was stuck
on the following call:
futex(0x7f69ac0ec0ec, FUTEX_WAIT_PRIVATE, 2717, NULL
It seems this thread is waiting for a reply that never comes back, so the
other threads end up waiting on it. We also saw there is a function called
virCondWaitUntil(). Is it safe for us to change the code from virCondWait()
to virCondWaitUntil() to prevent such a deadlock scenario? Thanks.
Following is the output of gdb -p 'libvirtd pid', selecting the stuck
thread, and running 'bt full':
#0 0x00007f69c8c1dd84 in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib/x86_64-linux-gnu/libpthread.so.0
No symbol table info available.
#1 0x00007f69c9ee884a in virCondWait (c=<optimized out>, m=<optimized
out>) at util/threads-pthread.c:117
ret = <optimized out>
#2 0x000000000049c749 in qemuMonitorSend (mon=0x7f69ac0ec0c0,
msg=<optimized out>) at qemu/qemu_monitor.c:826
ret = -1
__func__ = "qemuMonitorSend"
__FUNCTION__ = "qemuMonitorSend"
#3 0x00000000004ac8ed in qemuMonitorJSONCommandWithFd (mon=0x7f69ac0ec0c0,
cmd=0x7f6998028280, scm_fd=-1, reply=0x7f69c57829f8)
at qemu/qemu_monitor_json.c:230
ret = -1
msg = {txFD = -1, txBuffer = 0x7f69980e9b00
"{\"execute\":\"query-balloon\",\"id\":\"libvirt-1359\"}\r\n", txOffset =
49, txLength = 49,
rxBuffer = 0x0, rxLength = 0, rxObject = 0x0, finished = false,
passwordHandler = 0, passwordOpaque = 0x0}
cmdstr = 0x7f69980ef2f0
"{\"execute\":\"query-balloon\",\"id\":\"libvirt-1359\"}"
id = 0x7f69980b0a20 "libvirt-1359"
exe = <optimized out>
__FUNCTION__ = "qemuMonitorJSONCommandWithFd"
__func__ = "qemuMonitorJSONCommandWithFd"
#4 0x00000000004ae794 in qemuMonitorJSONGetBalloonInfo
(mon=0x7f69ac0ec0c0, currmem=0x7f69c5782a48) at
qemu/qemu_monitor_json.c:1190
ret = <optimized out>
cmd = 0x7f6998028280
reply = 0x0
__FUNCTION__ = "qemuMonitorJSONGetBalloonInfo"
#5 0x0000000000457451 in qemudDomainGetInfo (dom=<optimized out>,
info=0x7f69c5782b50) at qemu/qemu_driver.c:2181
priv = 0x7f69a0093b00
driver = 0x7f69b80ca8e0
vm = 0x7f69a0093370
ret = -1
err = <optimized out>
balloon = <optimized out>
__FUNCTION__ = "qemudDomainGetInfo"
#6 0x00007f69c9f63eda in virDomainGetInfo (domain=0x7f69980e3650,
info=0x7f69c5782b50) at libvirt.c:4230
ret = <optimized out>
conn = <optimized out>
__func__ = "virDomainGetInfo"
__FUNCTION__ = "virDomainGetInfo"
#7 0x0000000000439bca in remoteDispatchDomainGetInfo (ret=0x7f6998000c20,
args=<optimized out>, rerr=0x7f69c5782c50, client=0x157e730,
server=<optimized out>, msg=<optimized out>) at remote_dispatch.h:1640
rv = -1
tmp = {state = 1 '\001', maxMem = 2097152, memory = 0, nrVirtCpu =
0, cpuTime = 5981880000000}
dom = 0x7f69980e3650
priv = <optimized out>
#8 remoteDispatchDomainGetInfoHelper (server=<optimized out>,
client=0x157e730, msg=<optimized out>, rerr=0x7f69c5782c50, args=<optimized
out>,
ret=0x7f6998000c20) at remote_dispatch.h:1616
__func__ = "remoteDispatchDomainGetInfoHelper"
#9 0x00007f69c9fbb915 in virNetServerProgramDispatchCall (msg=0x1689cc0,
client=0x157e730, server=0x1577c90, prog=0x15825d0)
at rpc/virnetserverprogram.c:416
ret = 0x7f6998000c20 ""
rv = -1
i = <optimized out>
arg = 0x7f6998027950 "\360e\n\230i\177"
dispatcher = 0x73de40
rerr = {code = 0, domain = 0, message = 0x0, level = 0, dom = 0x0,
str1 = 0x0, str2 = 0x0, str3 = 0x0, int1 = 0, int2 = 0, net = 0x0}
#10 virNetServerProgramDispatch (prog=0x15825d0, server=0x1577c90,
client=0x157e730, msg=0x1689cc0) at rpc/virnetserverprogram.c:289
ret = -1
rerr = {code = 0, domain = 0, message = 0x0, level = 0, dom = 0x0,
str1 = 0x0, str2 = 0x0, str3 = 0x0, int1 = 0, int2 = 0, net = 0x0}
__func__ = "virNetServerProgramDispatch"
__FUNCTION__ = "virNetServerProgramDispatch"
#11 0x00007f69c9fb6461 in virNetServerHandleJob (jobOpaque=<optimized out>,
opaque=0x1577c90) at rpc/virnetserver.c:164
srv = 0x1577c90
job = 0x155dfa0
__func__ = "virNetServerHandleJob"
#12 0x00007f69c9ee8e3e in virThreadPoolWorker (opaque=<optimized out>) at
util/threadpool.c:144
data = 0x0
pool = 0x1577d80
cond = 0x1577de0
priority = false
job = 0x162dd20
#13 0x00007f69c9ee84e6 in virThreadHelper (data=<optimized out>) at
util/threads-pthread.c:161
args = 0x0
local = {func = 0x7f69c9ee8d00 <virThreadPoolWorker>, opaque =
0x1559f90}
#14 0x00007f69c8c19e9a in start_thread () from
/lib/x86_64-linux-gnu/libpthread.so.0
No symbol table info available.
#15 0x00007f69c89474bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#16 0x0000000000000000 in ?? ()
No symbol table info available.
Regards,
Chun-Hung
[libvirt-users] Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
by Yin Olivia-R63875
Hi,
I tried to build and run libvirt-1.0.1 on an FSL PowerPC platform.
The connection to '/var/run/libvirt/libvirt-sock' is always refused.
# libvirtd -d
# export LIBVIRT_DEBUG=1
# export LIBVIRT_LOG_OUTPUTS="1:file:virsh.log"
# virsh -c qemu:///system list
2013-01-10 04:55:20.409+0000: 2574: info : libvirt version: 1.0.1
2013-01-10 04:55:20.409+0000: 2574: debug : virLogParseOutputs:1288 : outputs=1:file:virsh.log
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
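For what it's worth, here is a quick sanity check, independent of libvirt,
that confirms whether anything is listening on that socket path (a small
sketch):

import socket

# Try a raw connect to the daemon socket from the error message above.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect("/var/run/libvirt/libvirt-sock")
    print("something is listening")
except socket.error as e:
    print("connect failed: %s" % e)
finally:
    s.close()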
The same setup works with libvirt-0.10.1. Below are the debug messages from both versions.
# cat virsh.log
-------------------------
1). libvirt-1.0.1
-------------------------
<cut>
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1145 : name "qemu:///system" to URI components:
scheme qemu
server (null)
user (null)
port 0
path /system
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1191 : trying driver 0 (Test) ...
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1197 : driver 0 Test returned DECLINED
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1191 : trying driver 1 (OPENVZ) ...
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1197 : driver 1 OPENVZ returned DECLINED
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1191 : trying driver 2 (VMWARE) ...
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1197 : driver 2 VMWARE returned DECLINED
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1191 : trying driver 3 (VBOX) ...
2013-01-10 06:12:12.107+0000: 2340: debug : do_open:1197 : driver 3 VBOX returned DECLINED
2013-01-10 06:12:12.108+0000: 2340: debug : do_open:1191 : trying driver 4 (remote) ...
2013-01-10 06:12:12.108+0000: 2340: debug : doRemoteOpen:586 : proceeding with name = qemu:///system
2013-01-10 06:12:12.108+0000: 2340: debug : doRemoteOpen:595 : Connecting with transport 1
2013-01-10 06:12:12.108+0000: 2340: debug : doRemoteOpen:671 : Proceeding with sockname /var/run/libvirt/libvirt-sock
2013-01-10 06:12:12.108+0000: 2340: error : virNetSocketNewConnectUNIX:570 : Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
2013-01-10 06:12:12.108+0000: 2340: debug : virFileClose:72 : Closed fd 7
2013-01-10 06:12:12.108+0000: 2340: debug : virNetClientCloseInternal:698 : client=(nil) wantclose=0
2013-01-10 06:12:12.108+0000: 2340: debug : do_open:1197 : driver 4 remote returned ERROR
2013-01-10 06:12:12.108+0000: 2340: debug : virObjectUnref:135 : OBJECT_UNREF: obj=0x48900648
2013-01-10 06:12:12.108+0000: 2340: debug : virObjectUnref:137 : OBJECT_DISPOSE: obj=0x48900648
2013-01-10 06:12:12.108+0000: 2340: debug : virEventPollAddTimeout:225 : Used 0 timeout slots, adding at least 10 more
2013-01-10 06:12:12.108+0000: 2340: debug : virEventPollInterruptLocked:716 : Interrupting
2013-01-10 06:12:12.108+0000: 2340: debug : virEventPollAddTimeout:248 : EVENT_POLL_ADD_TIMEOUT: timer=1 frequency=0 cb=0x10008624 opa
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollRunOnce:640 : Poll got 1 event(s)
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollDispatchTimeouts:425 : Dispatch 1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollDispatchTimeouts:448 : EVENT_POLL_DISPATCH_TIMEOUT: timer=1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollDispatchHandles:470 : Dispatch 1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollDispatchHandles:484 : i=0 w=1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollDispatchHandles:498 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollCleanupTimeouts:516 : Cleanup 1
2013-01-10 06:12:12.108+0000: 2341: debug : virEventPollCleanupHandles:564 : Cleanup 1
2013-01-10 06:12:12.108+0000: 2340: debug : virEventPollRemoveTimeout:300 : EVENT_POLL_REMOVE_TIMEOUT: timer=1
2013-01-10 06:12:12.108+0000: 2340: debug : virEventPollInterruptLocked:712 : Skip interrupt, 0 1216382096
<cut>
-------------------------
2). libvirt-0.10.1
-------------------------
<cut>
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1127 : name "qemu:///system" to URI components:
scheme qemu
server (null)
user (null)
port 0
path /system
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1174 : trying driver 0 (Test) ...
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1180 : driver 0 Test returned DECLINED
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1174 : trying driver 1 (OPENVZ) ...
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1180 : driver 1 OPENVZ returned DECLINED
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1174 : trying driver 2 (VMWARE) ...
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1180 : driver 2 VMWARE returned DECLINED
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1174 : trying driver 3 (VBOX) ...
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1180 : driver 3 VBOX returned DECLINED
2013-01-10 06:06:27.275+0000: 2393: debug : do_open:1174 : trying driver 4 (remote) ...
2013-01-10 06:06:27.275+0000: 2393: debug : doRemoteOpen:576 : proceeding with name = qemu:///system
2013-01-10 06:06:27.276+0000: 2393: debug : doRemoteOpen:585 : Connecting with transport 1
2013-01-10 06:06:27.276+0000: 2393: debug : doRemoteOpen:661 : Proceeding with sockname /var/run/libvirt/libvirt-sock
2013-01-10 06:06:27.277+0000: 2393: debug : virNetSocketNew:146 : localAddr=0xbff169d4 remoteAddr=0xbff16a58 fd=7 errfd=-1 pid=0
2013-01-10 06:06:27.277+0000: 2393: debug : virObjectNew:110 : OBJECT_NEW: obj=0x48900dc8 classname=virNetSocket
2013-01-10 06:06:27.277+0000: 2393: debug : virNetSocketNew:203 : RPC_SOCKET_NEW: sock=0x48900dc8 fd=7 errfd=-1 pid=0 localAddr=127.0.
2013-01-10 06:06:27.277+0000: 2393: debug : virObjectNew:110 : OBJECT_NEW: obj=0x48900ff8 classname=virNetClient
2013-01-10 06:06:27.277+0000: 2393: debug : virNetClientNew:342 : RPC_CLIENT_NEW: client=0x48900ff8 sock=0x48900dc8
2013-01-10 06:06:27.277+0000: 2393: debug : virObjectRef:168 : OBJECT_REF: obj=0x48900ff8
2013-01-10 06:06:27.277+0000: 2393: debug : virObjectRef:168 : OBJECT_REF: obj=0x48900dc8
2013-01-10 06:06:27.277+0000: 2393: debug : virEventPollInterruptLocked:701 : Interrupting
2013-01-10 06:06:27.277+0000: 2393: debug : virEventPollAddHandle:136 : EVENT_POLL_ADD_HANDLE: watch=2 fd=7 events=1 cb=0xf7bad00 opaq
2013-01-10 06:06:27.277+0000: 2394: debug : virEventPollRunOnce:625 : Poll got 1 event(s)
2013-01-10 06:06:27.277+0000: 2393: debug : virKeepAliveNew:212 : client=0x48900ff8, interval=-1, count=0
<cut>
I checked the source code in src/rpc/virnetsocket.c and there is no change to the virNetSocketNewConnectUNIX() function.
Best Regards,
Olivia
[libvirt-users] Live migration: Xen to KVM
by Анатолий Степанов
Hello!
Is there some way to migrate a guest OS from a Xen host to a KVM host
using libvirt? (Live migration is highly desirable.)
(Both hosts have the same hardware; Xen is running in full-virtualized
mode.)
[libvirt-users] VMM and puppet/kickstart/vagrant
by Jay Vyas
Hi guys:
I'd like to "scriptify" my virtual machine manager (VMM) deployment. In
particular I'd like to:
1) Configure a 3-node cluster from an ISO
2) Assign static IPs at the VM level, and have those cascade into /etc/hosts
3) Add storage to each machine in the cluster.
4) Set the memory/CPU count for each machine.
Is there a way I can automate these tasks using VMM, or maybe using VMM in
conjunction with puppet/kickstart/vagrant? (A rough sketch of what I mean
for item 4 is below.)
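To make item 4 concrete, here is a rough sketch of driving libvirt's Python
bindings directly; the names, sizes, and minimal XML are placeholders, not a
complete bootable definition:

import libvirt

DOMAIN_TEMPLATE = """
<domain type='kvm'>
  <name>%(name)s</name>
  <memory unit='MiB'>%(mem_mib)d</memory>
  <vcpu>%(vcpus)d</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <!-- disks, NICs, etc. omitted for brevity -->
</domain>
"""

conn = libvirt.open("qemu:///system")
for name in ("node1", "node2", "node3"):
    xml = DOMAIN_TEMPLATE % {"name": name, "mem_mib": 2048, "vcpus": 2}
    conn.defineXML(xml)  # persist the definition; start it later with create()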
Thanks
--
Jay Vyas
http://jayunit100.blogspot.com
[libvirt-users] Managing Live Snapshots with Libvirt 1.0.1
by Andrew Martin
Hello,
I recently compiled libvirt 1.0.1 and qemu 1.3.0 on Ubuntu 12.04. I have performed live snapshots on VMs using "virsh snapshot-create-as" and then later re-merged the images with "virsh blockpull". I am wondering how I can do a couple of other operations on these images while the VM is running. For example, VM1 is running from the snap3 image, with the following snapshot history (backing files):
[orig] <-- [snap1] <-- [snap2] <-- [snap3]
1. Can I revert VM1 to use snap2 while it is live, or must it be shut down? After shutting it down, is the best way to revert to snap2 simply to edit the XML file and change the block device to point to snap2? Afterwards, I believe snap3 would become unusable and should be deleted?
2. If I would like to start a new VM from snap1, is there a way to extract a copy of this snapshot from the chain into an independent image file? I tried to use "virsh blockcopy" but it returned this error:
# virsh blockcopy VM1 vda snap1.qcow2 --wait --verbose
error: Requested operation is not valid: domain is not transient
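For reference, the workaround I am considering; a sketch, assuming it is
acceptable to temporarily undefine the running domain (the error suggests
blockcopy only works on transient domains):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("VM1")

# Save the persistent definition so it can be restored afterwards.
xml = dom.XMLDesc(0)

# Undefining a running domain makes it transient without stopping it.
dom.undefine()

# Equivalent of 'virsh blockcopy VM1 vda ...'; the destination path is
# illustrative.
dom.blockRebase("vda", "/var/lib/libvirt/images/copy.qcow2", 0,
                libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY)

# ... wait for the block job to reach the mirror phase, then end it with
# blockJobAbort() (optionally pivoting to the copy) ...

# Restore the persistent definition.
conn.defineXML(xml)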
Thanks,
Andrew