[libvirt-users] Metadata accessible within guests
by Renich Bon Ciric
Hello,
I'd like to know if there is any way to make metadata, for example
the public part of an SSH key, accessible within a Guest?
This would be very useful in order to provide access to Guests in an
automated way.
For example, let's suppose I have an implementation that gives a user
the power to create Guests. The user has stored his public SSH key in
my implementation.
After the user creates a Guest, I can inject his/her SSH key properly
(with an rc.local script or something of the sort).
This would be useful for a million things. Another cool feature would
be that I could sync the root password with the SPICE/VNC password.
Anyway, if there is a way to store metadata in the domain XML and, maybe,
access it with some tool from within the Guest, that would be awesome.
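For reference, newer libvirt releases do support a <metadata> element in the domain XML for storing arbitrary, namespaced application data; whether a guest can read it back from the inside is exactly the open question here. A minimal sketch (the namespace and key value are invented for illustration):
  <metadata>
    <myapp:access xmlns:myapp="http://example.com/myapp/1">
      <myapp:ssh-key>ssh-rsa AAAA... user@host</myapp:ssh-key>
    </myapp:access>
  </metadata>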
Thank you for your time.
--
It's hard to be free... but I love to struggle. Love isn't asked for;
it's just given. Respect isn't asked for; it's earned!
Renich Bon Ciric
http://www.woralelandia.com/
http://www.introbella.com/
[libvirt-users] Why the Xen VM memory won't balloon
by Tony Huang
Hello,
I have a VM whose startup XML file has currentMemory = 256M and
memory = 1024M. However, once the VM has started up, the memory allocation
doesn't change: xentop shows that the memory stays at 256M. How can I
make the VM allocate more memory when it is needed?
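For what it's worth, the balloon is normally grown from the host side rather than automatically; a sketch, assuming the domain is called vm1, that the guest has a working balloon driver, and that the size is given in KiB:
  virsh setmem vm1 1048576   # raise the current allocation to 1 GiB, within the 1024M maximum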
--
Tony
[libvirt-users] CPU usage by libvirtd
by Zhihua Che
Hi,
I use libvirt to monitor 50 domains on one node (host). My
monitor asks libvirtd for every stat (CPU, memory, disk) every
10 seconds, and all my requests are made in one thread.
I found that libvirtd may consume 30% or more CPU time. Is that
normal in my case?
I tried to reduce this CPU consumption, looked at some variables in
libvirtd.conf, and got a little confused by variables like max_clients,
max_workers and max_requests.
In my understanding, max_clients limits the number of virConnect connections created by an app,
and max_workers limits the number of threads that handle requests from
clients. Does that mean requests from one client may be handled by
multiple threads if max_workers is greater than max_clients?
max_requests limits the number of concurrent RPC calls, according to the comments
in libvirtd.conf. I wonder whether giving it a higher value can improve the
response time, given that all my requests are made in one
thread.
I've tried several different configurations and they didn't reduce
the cpu usage effectively.
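For reference, the knobs in question live in /etc/libvirt/libvirtd.conf and take effect after restarting libvirtd; the values below are purely illustrative, not tuning advice:
  max_clients = 20          # simultaneous client connections
  max_workers = 20          # worker threads serving client requests
  max_requests = 20         # global limit on concurrent RPC calls
  max_client_requests = 5   # concurrent RPC calls allowed per client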
[libvirt-users] Problem halting/restarting an LXC container from within
by david@dit.upm.es
Hi,
I've been doing some tests with libvirt and LXC and found some problems
when halting/restarting an LXC container from within.
Basically, on a Ubuntu 12.04 system with libvirt installed as package
(0.9.8), I've created a basic container image with:
lxc-create -t ubuntu -n lxc
And started it using the libvirt XML listed below and the following command:
virsh -c lxc:// create lxc.xml
I can access the container console normally, but when I issue a halt or
reboot command from inside, the container initiates the halt/reboot
but does not finish it properly.
For example:
$ sudo halt -p
[sudo] password for ubuntu:
Broadcast message from ubuntu@my-container
(/dev/pts/0) at 23:19 ...
The system is going down for power off NOW!
ubuntu@my-container:~$ acpid: exiting
* Asking all remaining processes to terminate... [ OK ]
* All processes ended within 1 seconds.... [ OK ]
* Deconfiguring network interfaces... [ OK ]
* Deactivating swap... [fail]
* Unmounting weak filesystems... [ OK ]
umount: /run/lock: not mounted
mount: / is busy
* Will now halt
The container seems to finish the shutdown process but libvirt does not
seem to be signaled about it (virsh shows the container is still
executing). Something similar happens with reboot.
However, if I start that container with:
lxc-start -n lxc
and do the same test, it works perfectly:
# sudo halt -p
[sudo] password for ubuntu:
Broadcast message from ubuntu@my-container
(/dev/lxc/console) at 23:17 ...
The system is going down for power off NOW!
ubuntu@my-container:~$ acpid: exiting
* Asking all remaining processes to terminate... [ OK ]
* All processes ended within 1 seconds.... [ OK ]
* Deconfiguring network interfaces... [ OK ]
* Deactivating swap... [fail]
umount: /run/lock: not mounted
mount: cannot mount block device
/dev/disk/by-uuid/9b50a43d-98c3-45ad-a540-7fcbc629a418 read-only
* Will now halt
#
Any idea how to investigate or solve this problem?
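One way to gather more information (a sketch, assuming the stock log location; log_level = 1 means debug) is to turn up libvirtd's logging and watch what the LXC driver sees when the container shuts down:
  # /etc/libvirt/libvirtd.conf, then restart libvirtd
  log_level = 1
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
  # afterwards, check what libvirt thinks the domain state is
  virsh -c lxc:// list --all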
Thanks in advance,
David Fernandez
-- lxc.xml file --
<domain type='lxc'>
  <name>lxc</name>
  <memory>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <vcpu>1</vcpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib/libvirt/libvirt_lxc</emulator>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/lxc/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
[libvirt-users] Rebooting a KVM guest via virsh failed!
by GaoYi
Hi,
I have tried to reboot a KVM guest with this command line: virsh reboot
vm1
However, virsh reports: reboot is not supported without json
monitor. I am using libvirt-0.9.9 and qemu-0.14.0. Can anyone provide some
help?
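A possible workaround until the monitor issue is sorted out (a sketch, assuming the guest responds to the ACPI shutdown request):
  virsh shutdown vm1   # graceful shutdown via ACPI
  virsh start vm1      # boot it again once it is off
Upgrading QEMU to a release that libvirt drives through the QMP (JSON) monitor should also make virsh reboot itself work, though I have not checked the exact version cut-off.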
Thanks,
Yi
[libvirt-users] using vmchannel between 6.x host/guests
by Michael MacDonald
Hi all. Having trouble figuring out the magic to set up VMChannel comms between EL6.x host/guests. My end goal is to enable fence_virt in the guest to talk to fence_virtd on the host via VMChannel. I'd prefer to use that instead of multicast because it is supposed to work even if networking in the guest is down/borked.
My analysis is that there is a mismatch between what libvirt is feeding qemu-kvm and what qemu-kvm is willing to accept:
virt-install option:
--channel unix,path=/var/run/cluster/fence/foobar,mode=bind,target_type=guestfwd,target_address=10.0.2.179:1229
turns into this XML:
<channel type='unix'>
  <source mode='bind' path='/var/run/cluster/fence/foobar'/>
  <target type='guestfwd' address='10.0.2.179' port='1229'/>
</channel>
Which then gets fed to qemu-kvm as this:
-chardev socket,id=charchannel0,path=/var/run/cluster/fence/foobar,server,nowait -netdev user,guestfwd=tcp:10.0.2.179:1229,chardev=charchannel0,id=user-channel0
And then qemu-kvm barfs like so:
qemu-kvm: -netdev user,guestfwd=tcp:10.0.2.179:1229,chardev=charchannel0,id=user-channel0: Invalid parameter 'chardev'
Versions:
libvirt-0.9.10-21.el6.1
qemu-kvm-0.12.1.2-2.295.el6
NB: I did try this with the versions in 6.2, got the same result. Upgraded to these versions to see if the problem went away, but no joy.
NB2: The fence_virt.conf manpage lists the following as example XML for defining a channel device:
<channel type=’unix’>
<source mode=’bind’ path=’/sandbox/guests/fence_molly_vmchannel’/>
<target type=’guestfwd’ address=’10.0.2.179’ port=’1229’/>
</serial>
Any advice would be greatly appreciated. I can fall back to multicast, of course, but I'd like to make this work if possible.
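One avenue that might be worth trying (a sketch only; the channel name is invented, and it assumes fence_virt/fence_virtd can be pointed at a virtio-serial channel instead of the user-mode guestfwd network): define the channel with a virtio target, which this qemu-kvm build does accept:
  <channel type='unix'>
    <source mode='bind' path='/var/run/cluster/fence/foobar'/>
    <target type='virtio' name='org.example.fence0'/>
  </channel>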
Thanks!
Mike
[libvirt-users] idle connections (?)
by Evaggelos Balaskas
Hello,
I have noticed that, for some reason, my Windows 2003 and Windows 2008 VMs
drop mapped drives because of idle connections.
I am not a Windows admin and have no experience troubleshooting
this, beyond reading support.microsoft.com.
The Windows admin insists that this is a Fedora 17 libvirt problem and
that I need to fix it.
I have two different network cards and I am using macvtap in VEPA mode
for the VMs.
Is there any possibility that could be true?
Can I somehow test or prove that this is not related to libvirt?
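One way to collect evidence either way (a sketch; the interface name and guest address are placeholders) is to capture the SMB traffic on the host while a mapped drive sits idle, and see whether the keep-alives stop flowing at the macvtap/VEPA layer:
  tcpdump -n -i eth0 host 192.0.2.10 and port 445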
Thanks in advance for any help.
--
Evaggelos Balaskas - Unix System Engineer
http://gr.linkedin.com/in/evaggelosbalaskas
[libvirt-users] virsh iface-list () - function is not supported
by Ananth
Hi,
I am using the KVM hypervisor on Ubuntu 12.04 (libvirt-0.9.8). The call to list
the interfaces seems to be failing on this version of libvirt, as seen below.
virsh # iface-list
error: Failed to list active interfaces
error: this function is not supported by the connection driver:
virConnectNumOfInterfaces
I have seen this issue with libvirt 0.9.2 as well; it seemed to
work in version 0.9.4.
Is there any workaround for this, or any alternative way of querying the
interfaces and bridges through libvirt?
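As far as I know, the iface-* commands rely on libvirt being built with the netcf interface driver, which the Ubuntu package may lack. A fallback sketch that queries the host directly instead of going through that driver:
  brctl show             # bridges and their enslaved interfaces
  ip link show           # all host interfaces
  virsh net-list --all   # libvirt-defined networks (served by a different driver)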
--
Regards
Ananth
[libvirt-users] qemu-kvm fails on RHEL6
by sumit sengupta
Hi,
When I try to run the qemu-kvm command below on RHEL6 (Linux kernel 2.6.32), I get the following errors, which I think are related to the tap devices in my setup. Any idea why that is?
bash$ LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000027 -uuid a93aeed9-15f7-4ded-b6b3-34c8d2c101a8 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000027.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /home/sumitsen/openstack/nova/instances/instance-00000027/kernel -initrd /home/sumitsen/openstack/nova/instances/instance-00000027/ramdisk -append root=/dev/vda console=ttyS0 -drive file=/home/sumitsen/openstack/nova/instances/instance-00000027/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:15:84:3e,bus=pci.0,addr=0x3 -chardev
file,id=charserial0,path=/home/sumitsen/openstack/nova/instances/instance-00000027/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/1
qemu-kvm: -netdev tap,fd=26,id=hostnet0: TUNGETIFF ioctl() failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor
qemu-kvm: -append root=/dev/vda: could not open disk image console=ttyS0: No such file or directory
[sumitsen@sorrygate-dr ~]$ rpm -qa qemu-kvm
qemu-kvm-0.12.1.2-2.209.el6.x86_64
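Two details stand out when this libvirt-generated command line is run by hand (an observation, not a verified fix): fd=26 refers to a tap device file descriptor that only exists inside the process that created it (libvirtd), which would explain the TUNGETIFF "Bad file descriptor" error, and the unquoted -append is split by the shell, so qemu-kvm treats console=ttyS0 as an extra disk image, hence "could not open disk image console=ttyS0". A sketch of the hand-run equivalents (tap0 stands for a pre-created tap device):
  -append "root=/dev/vda console=ttyS0"
  -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no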
Let me know if you need any other info.
Thanks,
Sumit
[libvirt-users] How does libvirt interact with KVM to create a VM?
by Dennis Chen
All,
These days I am trying to understand the interaction
between libvirt and the KVM kernel module, e.g. kvm_intel.ko.
We know that the KVM kernel module exposes an entry point in the form of the device file
"/dev/kvm", which user-space applications can access via ioctl to,
for example, create a VM using KVM_CREATE_VM.
Now take the tool virsh, which is based upon libvirt; we can create a guest
domain with a command that looks like:
# virsh create guest.xml
Obviously, the above command will create a VM. But when I
investigate the libvirt code, I can't find any code that plays with
"/dev/kvm" to send the KVM_CREATE_VM ioctl to the KVM kernel module. However,
I did find that the reference count of kvm_intel.ko changes between before
the virsh create command is launched and after.
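A quick way to see which process actually holds /dev/kvm open (a sketch, assuming lsof and psmisc are installed):
  lsof /dev/kvm        # lists the processes that have the device open
  fuser -v /dev/kvm    # the same information via psmisc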
So my question is: how does libvirt interact with KVM to create a
VM? Can anybody give me some tips about that, e.g. the corresponding
code in libvirt?
BRs.
Dennis