[libvirt-users] Questions and a blockpull issue
by Abbas
Coming off of xen environment; still testing with kvm, just a few
questions:
1. What is the roadmap for release of qemu-kvm 1.1 and libvirt 0.10.2 for
el6, cause I had to compile from upstream to get the latest stuff.
2. Should not virt-manager show the sparsed disk size instead of actual
reserved size of a vm?
3. Where is the virsh bash_completion conf.d file from upstream; since el6
rpm for it seems have tab completion built right into virsh shell.
4. Created a disk-only snapshot of a vm CO1 called capture1 but the syntax
of blockpulling seems to be opposite from what has been advertised on
Fedora and other relative wikis. Check below and the error with first
blockpull command.
[root@KVM libvirt]# virsh snapshot-create-as CO1 capture1 "CO1s first
snapshot test" --disk-only --atomic
[root@KVM libvirt]# virsh -d 0 blockpull CO1 --path /home/vms/co1.img
--bandwidth 500 --base /home/vms/co1.capture1
blockpull: domain(optdata): CO1
blockpull: path(optdata): /home/vms/co1.img
blockpull: bandwidth(optdata): 500
blockpull: base(optdata): /home/vms/co1.capture1
blockpull: found option <domain>: CO1
blockpull: <domain> trying as domain NAME
error: invalid argument: No device found for specified path
[root@KVM libvirt]# virsh -d 0 blockpull --domain CO1 --path
/home/vms/co1.capture1 --base /home/vms/co1.img --verbose --wait
blockpull: domain(optdata): CO1
blockpull: path(optdata): /home/vms/co1.capture1
blockpull: base(optdata): /home/vms/co1.img
blockpull: verbose(bool): (none)
blockpull: wait(bool): (none)
blockpull: found option <domain>: CO1
blockpull: <domain> trying as domain NAME
Block Pull: [100 %]
Pull complete
Best,
Abbas.
12 years, 2 months
[libvirt-users] cpuset not affecting real pid placement
by Andrey Korolyov
Hi,
At least on 0.10.2 setting a cpuset doesn`t match a real process
placement - VM still consumes all available cores.
VM config:
.snip.
<vcpu placement='static' cpuset='0-5,12-17'>12</vcpu>
.snip.
for cpuset in $(find /cgroup/cpuset/libvirt/qemu/vmid/ -name
cpuset.cpus) ; do grep 0-5 $cpuset ; done
got: empty responce, e.g. 0-23 in my setup
expected: at least bounded vcpuX threads
In neighbor 'cpu' group libvirt sets cfq weights just fine, so this is
clearly a bug.
12 years, 2 months
[libvirt-users] Using libvirt to monitor virtual environment.
by Vinicius Gonçalves Braga
Hey,
I am new with libvirt. I am working to create a program to monitor a
virtual environment. I have seen some solutions that use libvirt. Is this
API good to make monitoring operations? I didn't see operations that show
the CPU or memory usage, at least in Java binding. Are there operations
like these, or we have to calculate based on lower level informations?
12 years, 2 months
[libvirt-users] Create and revert to snapshot fails silently after device update on running domain
by Jasper Spit
Hi list,
I'm having an issue with snapshot creation. Scenario:
qemu 1.1
libvirt 0.9.12
I create a domain, and start it. The domain has 1 IDE cdrom device
defined (see below).
When started, I want to mount an ISO file to it. So I do
updateDeviceFlags in libvirt-python or update-device in virsh (both have
the same problem).
This works fine, the ISO image becomes available to the domain. Now I
create a snapshot on the still running domain using snapshotCreateXML in
libvirt-python or snapshot-create in virsh. The command returns
immediately without error (a normal snapshot takes several seconds to
complete). If I revert to this snapshot this command also returns
immediately without error, but the snapshot is not actually reverted to,
the domain remains running in the same state as if nothing had happened
(I test this by verifying console output and checking if a testfile is
present on the domain). If I do not use the update device commands prior
to creating a snapshot, all is well. If I remove the source from the
cdrom device using update device, the snapshots work properly again.
Any idea what causes this?
Steps to reproduce using virsh:
virsh # start 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain 4d5d722b-864c-657e-0f39-55d1bafc760e started
virsh # snapshot-create 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain snapshot 1348653920 created
virsh # snapshot-revert 4d5d722b-864c-657e-0f39-55d1bafc760e 1348653920
All is good, the snapshot is reverted to properly. Now I update the
cdrom device:
virsh # update-device 4d5d722b-864c-657e-0f39-55d1bafc760e deb.xml
Device updated successfully
virsh # snapshot-create 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain snapshot 1348654116 created
Command returns instantly.
virsh # snapshot-revert 4d5d722b-864c-657e-0f39-55d1bafc760e 1348654116
No errors, but the snapshot is not reverted to. Now I remove the
<source .../> again and do an update-device.
virsh # update-device 4d5d722b-864c-657e-0f39-55d1bafc760e deb-off.xml
Device updated successfully
virsh # snapshot-create 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain snapshot 1348654540 created
virsh # snapshot-revert 4d5d722b-864c-657e-0f39-55d1bafc760e 1348654540
Snapshot is created and reverted to properly.
virsh # dumpxml 4d5d722b-864c-657e-0f39-55d1bafc760e
<domain type='kvm' id='135'>
<name>4d5d722b-864c-657e-0f39-55d1bafc760e</name>
<uuid>4d5d722b-864c-657e-0f39-55d1bafc760e</uuid>
<memory unit='KiB'>786432</memory>
<currentMemory unit='KiB'>786432</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='pc-1.1'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source
file='/data/images/debian-live-6.0.4-amd64-standard.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0'
unit='0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source
file='/data/domains/2f8baacd-563c-b747-b621-c0ddb4aa84bd'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:39:86:29'/>
<source bridge='br0'/>
<target dev='vnet2'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<input type='tablet' bus='usb'>
<alias name='input0'/>
</input>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='5902' autoport='yes'/>
<video>
<model type='vga' vram='9216' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</memballoon>
</devices>
<seclabel type='none'/>
</domain>
deb.xml:
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/data/images/debian-live-6.0.4-amd64-standard.iso'/>
<target dev='hdc' bus='ide'/>
</disk>
deb-off.xml:
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
</disk>
Thanks much,
- Jasper
12 years, 2 months
[libvirt-users] R: Using libvirt to monitor virtual environment.
by robmail16@libero.it
Hello
I need a solution for monitoring and automatic migration of Guest vm machines. A good approach would be to determine which guests to migrate based on "trend usage algorithm". There is something in java and libvirt that could be used?
RegardsRoberto
----Messaggio originale----
Da: vinicius.braga(a)lupa.inf.ufg.br
Data: 18/10/2012 16.02
A: <libvirt-users(a)redhat.com>
Ogg: [libvirt-users] Using libvirt to monitor virtual environment.
Hey,
I am new with libvirt. I am working to create a program to monitor a virtual environment. I have seen some solutions that use libvirt. Is this API good to make monitoring operations? I didn't see operations that show the CPU or memory usage, at least in Java binding. Are there operations like these, or we have to calculate based on lower level informations?
12 years, 2 months
[libvirt-users] 0.10.x incorrectly reporting currentMemory size
by Andrey Korolyov
Hi,
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>1394380</currentMemory>
<memtune>
<hard_limit unit='KiB'>1594380</hard_limit>
<soft_limit unit='KiB'>1494380</soft_limit>
</memtune>
results to:
0.10.x, .dominfo. or dumpxml | grep -i currentmemory
Max memory: 16777216 KiB
Used memory: 13977292 KiB
0.9.11-13:
Max memory: 16777216 KiB
Used memory: 1394380 KiB
12 years, 2 months
[libvirt-users] Migrating a LV backed guest
by Corey Quinn
I have a KVM VM that's backed by a logical volume on local disk.
I'd like to copy / move it to an identically configured host.
[root@virt2 cquinn]# virsh migrate --copy-storage-all --verbose --persistent node1.www qemu+ssh://10.102.1.11/system
error: Unable to read from monitor: Connection reset by peer
How should I effectively troubleshoot this? Am I misunderstanding how virsh migrate is supposed to work?
Regards,
Corey Quinn
12 years, 2 months
[libvirt-users] cgroup blkio.weight working, but not for KVM guests
by Ben Clay
I'm running libvirt 0.10.2 and qemu-kvm-1.2.0, both compiled from source, on
CentOS 6. I've got a working blkio cgroup hierarchy which I'm attaching
guests to using the following XML guest configs:
VM1 (foreground):
<cputune>
<shares>2048</shares>
</cputune>
<blkiotune>
<weight>1000</weight>
</blkiotune>
VM2 (background):
<cputune>
<shares>2</shares>
</cputune>
<blkiotune>
<weight>100</weight>
</blkiotune>
I've tested write throughput on the host using cgexec and dd, demonstrating
that libvirt has correctly set up the cgroups:
cgexec -g blkio:libvirt/qemu/foreground time dd if=/dev/zero of=trash1.img
oflag=direct bs=1M count=4096 & cgexec -g blkio:libvirt/qemu/background time
dd if=/dev/zero of=trash2.img oflag=direct bs=1M count=4096 &
Snap from iotop, showing an 8:1 ratio (should be 10:1, but 8:1 is
acceptable):
Total DISK READ: 0.00 B/s | Total DISK WRITE: 91.52 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
9602 be/4 root 0.00 B/s 10.71 M/s 0.00 % 98.54 % dd if=/dev/zero
of=trash2.img oflag=direct bs=1M count=4096
9601 be/4 root 0.00 B/s 80.81 M/s 0.00 % 97.76 % dd if=/dev/zero
of=trash1.img oflag=direct bs=1M count=4096
Further, checking the task list inside each cgroup shows the guest's main
PID, plus those of the virtio kernel threads. It's hard to tell if all the
virtio kernel threads are listed, but all the ones I've hunted down appear
to be there.
However, when running the same dd commands inside the guests, I get
roughly-equal performance - nowhere near the ~8:1 relative bandwidth
enforcement I get from the host: (background ctrl-c'd right after foreground
finishes, both started within 1s of each other)
[ben@foreground ~]$ dd if=/dev/zero of=trash1.img oflag=direct bs=1M
count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 104.645 s, 41.0 MB/s
[ben@background ~]$ dd if=/dev/zero of=trash2.img oflag=direct bs=1M
count=4096
^C4052+0 records in
4052+0 records out
4248829952 bytes (4.2 GB) copied, 106.318 s, 40.0 MB/s
I thought based on this statement: "Currently, the Block I/O subsystem does
not work for buffered write operations. It is primarily targeted at direct
I/O, although it works for buffered read operations." from this page:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/ht
ml/Resource_Management_Guide/ch-Subsystems_and_Tunable_Parameters.html that
this problem might be due to host-side buffering, but I have that explicitly
disabled in my guest configs:
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type="file" device="disk">
<driver name="qemu" type="raw" cache="none"/>
<source file="/path/to/disk.img"/>
<target dev="vda" bus="virtio"/>
<alias name="virtio-disk0"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x04"
function="0x0"/>
</disk>
Here is the qemu line from ps, showing that it's clearly being passed
through from the guest XML config:
root 5110 20.8 4.3 4491352 349312 ? Sl 11:58 0:38
/usr/bin/qemu-kvm -name background -S -M pc-1.2 -enable-kvm -m 2048 -smp
2,sockets=2,cores=1,threads=1 -uuid ea632741-c7be-36ab-bd69-da3cbe505b38
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/background.monitor,server,n
owait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/path/to/disk.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virti
o-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=22
-device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,bus=pci.0,addr=
0x3 -chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc
127.0.0.1:1 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
For fun I tried a few different cache options to try to force a bypass the
host buffercache, including writethough and directsync, but the number of
virtio kernel threads appeared to explode (especially for directsync) and
the throughput dropped quite low: ~50% of "none" for writethrough and ~5%
for directsync.
With cache=none, when I generate write loads inside the VMs, I do see growth
in the host's buffer cache. Further, if I use non-direct I/O inside the
VMs, and inflate the balloon (forcing the guest's buffer cache to flush), I
don't see a corresponding drop in background throughput. Is it possible
that the cache="none" directive is not being respected?
Since cgroups is working for host-side processes I think my blkio subsystem
is correctly set up (using cfq, group_isolation=1 etc). Maybe I miscompiled
qemu, without some needed direct I/O support? Has anyone seen this before?
Ben Clay
rbclay(a)ncsu.edu
12 years, 2 months
[libvirt-users] Migrating fails with "Timed out during operation: cannot acquire state change lock"
by Guido Winkelmann
Hi,
After recently upgrading to 0.9.11 (as shipped with Fedora 17), I was just
trying to migrate a qemu domain from one host to another (using
virDomainMigrateToURI()), when I got this error message:
Unsafe migration: Migration may lead to data corruption if disks use cache !=
none
Okay, this is explainable (if a bit disappointing - I would have hoped that
qemu would be able to disable disk caches before migration and then reenable
them afterwards...). However, the next time I tried migrating the same domain,
I got this error message after about 30 seconds:
Timed out during operation: cannot acquire state change lock
This looks like a bug to me - granted not a high profile one if it only
happens when someone is retrying a failed migration, but still...
Guido
12 years, 2 months
[libvirt-users] Connection using Java
by Felipe Oliveira Gutierrez
Hi,
I am using java to connect to the xen, but my class is throwing a
exception. Does anyone know what is happening?
public class TestConnection {
public static void main(String[] args) {
Connect conn = null;
try {
conn = new Connect("xen+ssh://root@192.XXX.XXX.XX/", true);
libvir: RPC error : End of file while reading data: nc: invalid option --
'U'
nc -h for help: Input/output error
exception caught:org.libvirt.LibvirtException: End of file while reading
data: nc: invalid option -- 'U'
nc -h for help: Input/output error
level:VIR_ERR_ERROR
code:VIR_ERR_SYSTEM_ERROR
domain:VIR_FROM_RPC
hasConn:false
hasDom:false
hasNet:false
message:End of file while reading data: nc: invalid option -- 'U'
nc -h for help: Input/output error
str1:%s
str2:End of file while reading data: nc: invalid option -- 'U'
nc -h for help: Input/output error
str3:null
int1:-1
int2:-1
Thanks,
Felipe
--
*-- Felipe Oliveira Gutierrez*
*-- **lipe.82(a)gmail.com* <lipe.82(a)gmail.com>
*-- https://sites.google.com/site/lipe82/Home/diaadia*
12 years, 2 months