[libvirt-users] libvirt 1.0.3 Vs 1.0.4 / cgroup devices
by mohamed amine Larabi
Hi there,
I am using libvirt with LXC to create Fedora 16 and 18 containers on a Fedora
18 host.
First I did the setup with libvirt 1.0.3 and everything worked fine. Then,
after upgrading to libvirt 1.0.4, I could no longer create character devices
in the guests:
Test on guest1:
# ls -l /dev
total 0
lrwxrwxrwx. 1 root root 10 Apr 17 21:18 console -> /dev/pts/0
lrwxrwxrwx. 1 root root 11 Apr 17 21:18 core -> /proc/kcore
lrwxrwxrwx. 1 root root 13 Apr 17 21:18 fd -> /proc/self/fd
crw-rw-rw-. 1 root root 1, 7 Apr 17 21:18 full
drwxr-xr-x. 2 root root 0 Apr 17 21:18 hugepages
prw-------. 1 root root 0 Apr 17 21:18 initctl
srw-rw-rw-. 1 root root 0 Apr 17 21:18 log
drwxrwxrwt. 2 root root 40 Apr 17 21:18 mqueue
crw-rw-rw-. 1 root root 1, 3 Apr 17 21:18 null
crw-rw-rw-. 1 root root 5, 2 Apr 18 10:31 ptmx
drwxr-xr-x. 2 root root 0 Apr 17 21:18 pts
crw-r--r--. 1 root root 1, 8 Apr 17 21:19 random
drwxrwxrwt. 2 root root 40 Apr 17 21:18 shm
lrwxrwxrwx. 1 root root 15 Apr 17 21:18 stderr -> /proc/self/fd/2
lrwxrwxrwx. 1 root root 15 Apr 17 21:18 stdin -> /proc/self/fd/0
lrwxrwxrwx. 1 root root 15 Apr 17 21:18 stdout -> /proc/self/fd/1
lrwxrwxrwx. 1 root root 10 Apr 17 21:18 tty1 -> /dev/pts/0
crw-rw-rw-. 1 root root 1, 9 Apr 17 21:18 urandom
crw-rw-rw-. 1 root root 1, 5 Apr 17 21:18 zero
# rm -f /dev/random (successful)
# mknod random c 1 8
mknod: `random': Operation not permitted
Config on the host:
Note that SELinux is set to permissive, and c 1:8 rwm is in the cgroup
devices list of guest1:
# cat /sys/fs/cgroup/devices/libvirt/lxc/guest1/devices.list
c 1:3 rwm
c 1:5 rwm
c 1:7 rwm
c 1:8 rwm
c 1:9 rwm
c 5:0 rwm
c 5:2 rwm
c 10:229 rwm
c 136:* rwm
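Reading that list back: a quick toy matcher (plain Python, my own sketch of the cgroup-v1 devices.list rule syntax, not libvirt code) agrees that mknod of c 1:8 should be permitted by the devices controller, so the denial presumably comes from some other layer:

```python
def device_allowed(rules, dev_type, major, minor, access):
    """Match a device against cgroup-v1 devices.list rules like 'c 1:8 rwm'."""
    for rule in rules:
        rtype, nums, acc = rule.split()
        rmaj, rmin = nums.split(":")
        if rtype not in ("a", dev_type):        # 'a' matches all device types
            continue
        if rmaj != "*" and int(rmaj) != major:
            continue
        if rmin != "*" and int(rmin) != minor:
            continue
        if all(ch in acc for ch in access):     # every requested access bit
            return True
    return False

guest1 = ["c 1:3 rwm", "c 1:5 rwm", "c 1:7 rwm", "c 1:8 rwm", "c 1:9 rwm",
          "c 5:0 rwm", "c 5:2 rwm", "c 10:229 rwm", "c 136:* rwm"]
print(device_allowed(guest1, "c", 1, 8, "m"))   # True: mknod c 1:8 is allowed
```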
Is this a change that was introduced intentionally in 1.0.4? If yes, how can
I make it work?
Please advise.
Thank you in advance
Amine
11 years, 8 months
[libvirt-users] question about process power which has MCSx
by yue
hi, all
A qemu-kvm process and its disk (image file) have the same MCS label
(s0:c111,c555). This expresses that the process has access to this image.
What I do not know is whether access to its image file is the maximum or the
minimum of the power this process is granted.
What other powers does this process (domain) have, and how much?
I want to know the exact power a qemu-kvm process has besides access to its
image file: other kinds of files, directories, etc.
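My rough mental model of the check (just a toy sketch of the usual MCS dominance rule, not actual SELinux policy code): a process may access a file only when the file's category set is a subset of the process's categories.

```python
def mcs_allows(process_cats, file_cats):
    """Toy MCS dominance check: file categories must be a subset of the
    process's categories for access to be granted."""
    return file_cats <= process_cats

qemu = {111, 555}                    # process labeled s0:c111,c555
print(mcs_allows(qemu, {111, 555}))  # True: image with the matching label
print(mcs_allows(qemu, {222, 333}))  # False: image relabelled to other categories
```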
My test case:
After starting a guest VM (its disk XML has cache='none' and
error_policy='stop'), I make some modifications to its files and save them.
Then I go to the hypervisor and modify the MCS label of the guest VM's image
file.
1. I can still read those files (with cache=none)? It should not be so. Why?
2. When I then modify files and save, the guest VM hangs; it shows as paused
in the UI. It is right that the qemu process can no longer write, but why
does the guest VM hang, and why can it not be resumed?
3. Looking at the audit info: denied { write } for pid=52162 comm="qemu-kvm".
That pid is 52162, which is not my qemu-kvm's pid? Why?
Thanks so much.
[libvirt-users] libvirt support for qcow2 rebase?
by Skardal, Harald
I have not found support in libvirt (nor virsh) for doing the equivalent
of "qemu-img rebase ....".
The use case:
You have copied a qcow2 stack and the new files have different names or
reside in a different directory. Therefore you need to change the
backing file.
Is there a way to do this? Is this a planned addition to libvirt?
Harald
[libvirt-users] Shouldn't vol-upload / virStorageVolUpload() be doing some format conversion?
by Guido Winkelmann
Hi,
I just tried using vol-upload to copy an image file to a storage pool, and I
noticed that, at least in my configuration (qemu backend, storage pool of type
"dir"), this command does not appear to do any kind of format conversion. It
simply copied the source file as-is over the file in which the target volume
was stored, with no check whether that even makes sense.
The help text for this command says it supports offset and length, which -
kinda, sorta - implies that I can write a block of bytes at offset n to a
volume, attach that volume to a domain, and then the OS in the guest domain
will find exactly the bytes I just wrote at exactly offset n in its newly
attached harddisk/CDROM/whatever. Instead, trying to do this with anything but
raw images would probably just break the image file.
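To make the raw-only case concrete, here is a self-contained toy (plain Python on an ordinary file, nothing libvirt-specific): for a raw volume the host-file offset and the guest-visible offset are the same mapping, which is exactly what offset/length writes rely on.

```python
import os
import tempfile

# Create a 1 KiB "raw volume" filled with zeros.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)

# Write a payload at a chosen offset, as an offset/length upload would.
payload = b"BOOTSIG"
offset = 512
with open(path, "r+b") as f:
    f.seek(offset)
    f.write(payload)

# In a raw image, reading back at the same offset yields the same bytes --
# the identity mapping a guest would also see. A qcow2 file interposes its
# own cluster metadata, so this property does not hold there.
with open(path, "rb") as f:
    f.seek(offset)
    data = f.read(len(payload))
print(data)
os.unlink(path)
```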
AFAICS, in its current form, this command and this API call are only useful in
exactly two scenarios:
- Both the source and target image are in raw format
- Source and target image are in the same non-raw format (like qcow2 or vmdk)
AND you are copying the entire volume at once.
In all other cases - source and target are in different formats, or source and
target are in a non-raw format and you are trying to write only part of a
volume - this can only lead to a broken image file.
As I see it, at the layer of abstraction libvirt offers, "volume" should not
be synonymous with "the file the volume is stored in on disk", but rather
with "the volume as a guest machine will get to see it".
Regards,
Guido
[libvirt-users] network under session
by Evaggelos Balaskas
Perhaps a stupid question, but I will give it a try:
I have two machines on a libvirtd session, with networking in user mode.
Can I (somehow) connect these two machines to each other?
machineA:
ip addr add 10.10.10.101/24 dev eth0
ip route add default via 10.10.10.102 dev eth0
machineB:
ip addr add 10.10.10.102/24 dev eth0
ip route add default via 10.10.10.101 dev eth0
--
Evaggelos Balaskas - Unix System Engineer
http://gr.linkedin.com/in/evaggelosbalaskas
[libvirt-users] using transport protocol in live migration
by digvijay chauhan
Hello,
I am working on live migration of virtual machines using Xen and KVM.
If I use qemu+ssh:///system, is the transport protocol used during live
migration TCP or SSH? I want to evaluate the performance of the transport
protocol during live migration using Wireshark and the netperf tool, so
will using this command show TCP performance?
Or else will I have to use qemu+tcp:/// ?
[libvirt-users] after snapshot-delete, the qcow2 image file size doesn't decrease
by me,apporc
After snapshot-delete, the qcow2 image file size doesn't decrease; isn't
that a waste of disk space?
Would someone please tell me how to decrease the file size on
snapshot-delete, if that's possible?
The image file of my virtual machine is d0.qcow.
As follows:
[root@test1 ]# virsh list
Id Name State
----------------------------------------------------
32 bfbe8ca8-8579-11e2-844a-001018951f48 running
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.6G
cluster_size: 65536
[root@test1 ]# ls -lh
total 3.6G
-rw------- 1 qemu qemu 3.6G Apr 12 17:13 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-list bfbe8ca8-8579-11e2-844a-001018951f48
Name Creation Time State
------------------------------------------------------------
[root@test1 8]# virsh snapshot-create bfbe8ca8-8579-11e2-844a-001018951f48
Domain snapshot 1365758005 created
[root@test1 ]# virsh snapshot-create bfbe8ca8-8579-11e2-844a-001018951f48
Domain snapshot 1365758022 created
[root@test1 bfbe8ca8-8579-11e2-844a-001018951f48]# virsh snapshot-list
bfbe8ca8-8579-11e2-844a-001018951f48
Name Creation Time State
------------------------------------------------------------
1365758005 2013-04-12 17:13:25 +0800 running
1365758022 2013-04-12 17:13:42 +0800 running
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.8G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1365758005 127M 2013-04-12 17:13:25 00:00:53.141
2 1365758022 127M 2013-04-12 17:13:42 00:01:09.508
[root@test1 ]# ls -lh
total 3.9G
-rw------- 1 qemu qemu 3.9G Apr 12 17:14 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-delete
bfbe8ca8-8579-11e2-844a-001018951f48 1365758022
Domain snapshot 1365758022 deleted
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.8G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
0 1970-01-01 08:00:00 00:00:00.000
[root@test1 ]# ls -lh
total 3.9G
-rw------- 1 qemu qemu 3.9G Apr 12 17:14 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-delete
bfbe8ca8-8579-11e2-844a-001018951f48 1365758005
Domain snapshot 1365758005 deleted
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.8G
cluster_size: 65536
[root@test1 ]# ls -lh
total 3.9G
-rw------- 1 qemu qemu 3.9G Apr 12 17:14 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-create bfbe8ca8-8579-11e2-844a-001018951f48
Domain snapshot 1365758311 created
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 3.8G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1365758311 252M 2013-04-12 17:18:32 00:05:58.136
[root@test1 ]# ls -lh
total 3.9G
-rw------- 1 qemu qemu 3.9G Apr 12 17:18 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-create bfbe8ca8-8579-11e2-844a-001018951f48
Domain snapshot 1365758338 created
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.1G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1365758311 252M 2013-04-12 17:18:32 00:05:58.136
2 1365758338 252M 2013-04-12 17:18:58 00:06:24.272
[root@test1 ]# ls -lh
total 4.1G
-rw------- 1 qemu qemu 4.1G Apr 12 17:19 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-list bfbe8ca8-8579-11e2-844a-001018951f48
Name Creation Time State
------------------------------------------------------------
1365758311 2013-04-12 17:18:31 +0800 running
1365758338 2013-04-12 17:18:58 +0800 running
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.1G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1365758311 252M 2013-04-12 17:18:32 00:05:58.136
2 1365758338 252M 2013-04-12 17:18:58 00:06:24.272
[root@test1 ]# virsh snapshot-delete
bfbe8ca8-8579-11e2-844a-001018951f48 1365758338
Domain snapshot 1365758338 deleted
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.1G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1365758311 252M 2013-04-12 17:18:32 00:05:58.136
[root@test1 ]# ls -lh
total 4.1G
-rw------- 1 qemu qemu 4.1G Apr 12 17:20 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-delete
bfbe8ca8-8579-11e2-844a-001018951f48 1365758311
Domain snapshot 1365758311 deleted
[root@test1 ]# qemu-img info d0.qcow
image: d0.qcow
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.1G
cluster_size: 65536
[root@test1 ]# ls -lh
total 4.1G
-rw------- 1 qemu qemu 4.1G Apr 12 17:20 d0.qcow
drwx------ 2 root root 4.0K Apr 12 17:12 held
[root@test1 ]# virsh snapshot-list bfbe8ca8-8579-11e2-844a-001018951f48
Name Creation Time State
------------------------------------------------------------
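For what it's worth, the behaviour looks like the usual high-water-mark effect: freeing space inside a container file makes it reusable internally but does not shrink the file itself. A tiny toy (plain Python on an ordinary file, no qemu involved, purely an analogy) showing the same pattern:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# "Allocate" 4 KiB inside the container file (like taking a snapshot).
with open(path, "wb") as f:
    f.write(b"x" * 4096)
grown = os.path.getsize(path)

# "Free" the contents in place (like deleting the snapshot): the space can
# be reused inside the file, but the file's own size never goes back down.
with open(path, "r+b") as f:
    f.write(b"\x00" * 4096)
after = os.path.getsize(path)

print(grown, after)
os.unlink(path)
```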