[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually.
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my RBD disks?
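For reference, this is roughly what I expected to work, based on the
storage pool docs (untested sketch; the volume name and size are
placeholders):
# create a 10G volume inside the rbd pool
virsh vol-create-as myrbdpool kvm01-storage 10G
# then reference the pool volume at install time
virt-install --name ceph-test.powercraft.nl \
  --disk vol=myrbdpool/kvm01-storage,bus=virtio \
  ... (remaining options as usual)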
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate
MAC addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source for a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
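To illustrate how fragile this is: a host that boots one second after
another, with a libvirtd PID that differs only in the low bit, computes
the identical seed (a contrived shell example, but the collision is real):
$ echo $(( 1433700000 ^ 2001 ))   # host A: even timestamp t, pid 2001
1433698673
$ echo $(( 1433700001 ^ 2000 ))   # host B: timestamp t+1, pid 2000
1433698673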
[libvirt-users] unable to dissect libvirt rpc packets using wireshark plugin
by gowrishankar
Hi,
I am trying the libvirt plugin in wireshark to dissect RPC payload in TCP, but
the dissector code does not really seem to work.
My env is Fedora 21 (x86_64) and the installed packages are as follows:
wireshark-1.12.6-1.fc21.x86_64
libvirt-wireshark-1.2.9.3-2.fc21.x86_64
Just after installation, I noticed that libvirt.so was available only in
/usr/lib64/wireshark/plugins/1.12.5/, so Wireshark could not load the
libvirt plugin.
I copied that .so into 1.12.6/ under the same plugins folder, after which
wireshark could list libvirt as a supported protocol.
tshark -G protocols | grep libvirt
Libvirt libvirt libvirt
However, when checking some pcaps which have libvirt RPC calls captured on
the wire, wireshark does not list any libvirt RPC packets when I search for
the "libvirt" protocol in the pcap.
Has anyone experienced this before? If you have any pointer on what I could
check in my env, that would be very helpful.
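For completeness, this is how I am forcing the dissection when checking a
capture (the pcap name is a placeholder; 16509 is libvirt's default TCP port):
tshark -r libvirt-rpc.pcap -d tcp.port==16509,libvirt -Y libvirt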
--
Regards,
Gowrishankar M
[libvirt-users] Questions about qcow2 file size management
by Jérôme
Hi all.
I have a few questions regarding the qcow2 format.
1/ Allocated size vs. file size
When creating a VM, I indicated a size of 10 G.
$ ls -lsh
7,7G -rw------- 1 libvirt-qemu libvirt-qemu 11G oct. 14 10:04 prod.qcow2
The allocated size is less than the max size. Alright.
I think I more or less grasp the difference between allocated size and
file size, but I'm not sure I get the point of the auto-grow feature of
qcow2.
Can someone confirm that if I have, say, a 60 GB partition, I can create
more than six 10 GB VMs, as long as each doesn't actually use its full
10 GB? Sort of like airlines selling more tickets than there are seats on
the plane, assuming a few people won't show up.
In other words, can I have this on a 60 GB drive?
total 53G
7,4G -rw------- 1 root root 11G oct. 8 06:34 prod_151008.qcow2
7,4G -rw------- 1 root root 11G oct. 9 06:37 prod_151009.qcow2
7,5G -rw------- 1 root root 11G oct. 10 06:41 prod_151010.qcow2
7,5G -rw------- 2 root root 11G oct. 11 06:49 prod_151011.qcow2
7,5G -rw------- 1 root root 11G oct. 12 06:44 prod_151012.qcow2
7,5G -rw------- 1 root root 11G oct. 13 06:27 prod_151013.qcow2
7,7G -rw------- 1 root root 11G oct. 14 06:55 prod_151014.qcow2
(Total allocated is 53 < 60 but sum of 11G sizes is 77 G.)
If so, then I guess I get the point.
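(A simple way to convince myself, I think, is to compare what qemu-img
reports against ls; test.qcow2 is just a throwaway name:)
# "virtual size" is the max, "disk size" is what is actually allocated
qemu-img info prod.qcow2
# a freshly created 10G image allocates almost nothing up front
qemu-img create -f qcow2 test.qcow2 10G
ls -lsh test.qcow2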
2/ Backup size
Is there a simple way to do backups that only take the actual allocated
space (7,7 G in my example) instead of the max (10 G)?
I'm using snapshots and blockcommits to back up a VM, thanks to help on
this list. The scripts are on GitHub for anyone interested [1].
Basically, what the script does is take a snapshot, then cp it as a local
backup file, then rsync that to another backup partition. Both source and
destination partitions are ext4.
Source dir (where I cp the snapshots):
# ls -lsh vmbackup/daily/
total 53G
7,4G -rw------- 1 root root 11G oct. 8 06:34 prod_151008.qcow2
7,4G -rw------- 1 root root 11G oct. 9 06:37 prod_151009.qcow2
7,5G -rw------- 1 root root 11G oct. 10 06:41 prod_151010.qcow2
7,5G -rw------- 2 root root 11G oct. 11 06:49 prod_151011.qcow2
7,5G -rw------- 1 root root 11G oct. 12 06:44 prod_151012.qcow2
7,5G -rw------- 1 root root 11G oct. 13 06:27 prod_151013.qcow2
7,7G -rw------- 1 root root 11G oct. 14 06:55 prod_151014.qcow2
Destination (where rsync copies the files):
# ls -lsh /mnt/usb_hd/vm/vmbackup/daily/
total 71G
11G -rw------- 1 root root 11G oct. 8 06:34 prod_151008.qcow2
11G -rw------- 1 root root 11G oct. 9 06:37 prod_151009.qcow2
11G -rw------- 1 root root 11G oct. 10 06:41 prod_151010.qcow2
11G -rw------- 2 root root 11G oct. 11 06:49 prod_151011.qcow2
11G -rw------- 1 root root 11G oct. 12 06:44 prod_151012.qcow2
11G -rw------- 1 root root 11G oct. 13 06:27 prod_151013.qcow2
11G -rw------- 1 root root 11G oct. 14 06:55 prod_151014.qcow2
Allocated size is as big as file size. This kind of defeats the point of
having qcow2 grow as needed.
Ideally, I would like to create VMs with a huge max file size, but only
back up the actually used space. How may I achieve this?
I just tested
qemu-img convert -O qcow2 prod_151008.qcow2 prod_151008_resize.qcow2
and I get
7,2G -rw-r--r-- 1 root root 7,2G oct. 14 10:51 prod_151008_resize.qcow2
Seems to work. I could do this before rsyncing.
Is this the recommended way? Is there a higher-level libvirt command to
use instead of qemu-img?
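One thing I plan to try for the rsync step is its sparse option, which is
supposed to recreate holes instead of writing zeros at the destination
(untested on my setup):
# -S / --sparse: handle sparse files efficiently
rsync -avS vmbackup/daily/ /mnt/usb_hd/vm/vmbackup/daily/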
3/ Playing with allocated space
My VM apparently has an allocated size of 7.7 G:
7,7G -rw------- 1 libvirt-qemu libvirt-qemu 11G oct. 14 10:04 prod.qcow2
However, logging into it shows that much less than 7.7 G is used:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 9.8G 2.0G 7.3G 22% /
udev 10M 0 10M 0% /dev
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 4.9G 4.0K 4.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
tmpfs 1003M 0 1003M 0% /run/user/1000
I understand that once qcow2 has grown to a given allocated size, it won't
de-allocate space automatically (auto-allocation is one-way only).
Perhaps I ended up loading 7 GB of files which were then deleted, so that
now only 2 GB are used but the file still has 7 GB allocated. Could be,
but I don't see when this might have happened. Could there be another
explanation?
Say I want to minimize allocated space because it is a pity to use 2 GB
and keep 7 GB backups (I know disk space is cheap, this is just an
example); how may I shrink the allocated space down to the currently used
space?
I'm not talking about shrinking the file, just minimizing the allocated
size to keep small backups as per former paragraph, still keeping max
file size big to let the VM grow further when needed.
In other words, I'm aiming at
2,7G -rw------- 1 libvirt-qemu libvirt-qemu 11G oct. 14 10:04 prod.qcow2
not at
2,7G -rw------- 1 libvirt-qemu libvirt-qemu 2,7G oct. 14 10:04 prod.qcow2
The guest is a Linux, BTW.
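(The closest tool I have found so far is virt-sparsify from libguestfs,
which rewrites the image with unused guest blocks dropped; untested here,
and the VM must be shut down while it runs:)
virt-sparsify prod.qcow2 prod_sparse.qcow2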
4/ Playing with qcow2 file sizes
Say I've got this file
7,7G -rw------- 1 libvirt-qemu libvirt-qemu 11G oct. 14 10:04 prod.qcow2
I'd like to make it bigger, like
7,7G -rw------- 1 libvirt-qemu libvirt-qemu 100G oct. 14 10:04 prod.qcow2
AFAIU, the steps would be
- power off the VM
- expand the qcow2 file size
- from the host, extend the volume
- power on the VM
- from the guest, extend the filesystem
I've seen various posts on the internet, but I'm a bit confused, and some
posts may be too old to account for newer libvirt/qemu capabilities, so
I'd appreciate a safe pointer. (I'm also interested in reducing file size.)
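For the record, this is the concrete sequence I have pieced together so
far (untested; the device name matches my guest's /dev/vda1, and tool
availability may vary by distro):
# 1. with the VM powered off, grow the image's virtual size
qemu-img resize prod.qcow2 100G
# 2. boot the guest, then grow the partition and the filesystem inside it
growpart /dev/vda 1
resize2fs /dev/vda1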
Thanks for any hint.
[1] https://github.com/Jerome-github/vmbackup
--
Jérôme
[libvirt-users] "Failed to start domain..."
by Ken D'Ambrosio
Sadly, I'm back with another issue. I can do a "system list --all" just
fine; however, if I attempt to start the machines, I get back:
maas@Bill-MAAS-cc:~$ strace -s 1024 -f -o /tmp/asdfasdf.log virsh -c
vbox+ssh://gbadmin@10.20.0.1/system start PXE-client-07
error: Failed to start domain PXE-client-07
error: An error occurred, but the cause is unknown
Log files on both client and server are pretty sparse on details of any
sort... as, again, is Google. Any ideas?
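For what it's worth, I am also trying to get more than "cause is unknown"
out of the client side (LIBVIRT_DEBUG is the client-side logging knob, if
I read the docs right):
LIBVIRT_DEBUG=1 virsh -c vbox+ssh://gbadmin@10.20.0.1/system \
  start PXE-client-07 2> /tmp/virsh-debug.log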
Thanks yet again,
-Ken
[libvirt-users] enabling virtio-scsi-data-plane in libvirt
by Vasiliy Tolstov
Does somebody know how to enable virtio-scsi-data-plane in libvirt for a
specific domain?
I know that I need to replace "-device virtio-scsi-pci" with "-object
iothread,id=io1 -device virtio-scsi-pci,iothread=io1" in qemu, but how
can I do this in libvirt?
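For reference, the closest domain XML I have found so far (untested
sketch; I am not sure which libvirt/qemu versions accept the iothread
attribute on the controller):
<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  <devices>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
    </controller>
  </devices>
</domain>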
--
Vasiliy Tolstov,
e-mail: v.tolstov(a)selfip.ru
[libvirt-users] How to disable kvm_steal_time feature
by Piotr Rybicki
Hi.
I would like to work around a bug where, after live migration of a KVM
guest, 100% steal time is shown in the guest.
I've read that disabling the 'kvm_steal_time' feature should work around
this bug, but I can't find a way to disable it in libvirt's domain XML file.
I tried this in the <cpu> section:
<feature policy='disable' name='kvm_steal_time'/>
but that doesn't work.
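The only other route I can think of is the qemu command-line passthrough
namespace (untested sketch; I am wary of how the extra -cpu argument
interacts with the one libvirt already generates):
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,-kvm_steal_time'/>
  </qemu:commandline>
</domain>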
Also, I couldn't find any related information in the libvirt
documentation, and Google doesn't help either.
How can I disable this feature?
Thanks in advance.
Piotr Rybicki
[libvirt-users] virsh uses internally qemu-img ?
by Lentes, Bernd
Hi,
I read that virsh internally uses qemu-img (http://serverfault.com/questions/692435/qemu-img-snapshot-on-live-vm).
Is that true? So is snapshotting a running VM with virsh the same as doing it with qemu-img?
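For context, these are the two operations I am comparing (names and paths
are placeholders; as far as I understand, qemu-img must never be pointed
at an image while the VM is running, which is what makes me doubt they
are the same):
# libvirt-managed snapshot of a (possibly running) domain
virsh snapshot-create-as mydomain snap1
# qemu-img acts directly on the image file (offline use only)
qemu-img snapshot -c snap1 /path/to/disk.qcow2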
Bernd
--
Bernd Lentes
Systemadministration
institute of developmental genetics
Gebäude 35.34 - Raum 208
HelmholtzZentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 (0)89 3187 1241
fax: +49 (0)89 3187 2294
"Those who have visions should go see a doctor."
Helmut Schmidt
[libvirt-users] libvirtd doesn't attach Sheepdog storage VDI disk correctly
by Adolf Augustin
Hi,
I am trying to use libvirt with sheepdog.
I am using Debian 8 stable with libvirt V1.21.0.
I am encountering a problem which has already been reported.
=================================================================
See here: http://www.spinics.net/lists/virt-tools/msg08363.html
=================================================================
qemu/libvirtd is not setting the path variable correctly.
Every time I convert an image to sheepdog
=============================================
qemu-img convert /home/user/temp/debian-8.2.0-amd64-CD-1.iso
sheepdog:debian.iso
============================================
the dump of the image says
=============================================
virsh # vol-dumpxml --pool herd debian.iso
<volume type='network'>
  <name>debian.iso</name>
  <key>herd/debian.iso</key>
  <source>
  </source>
  <capacity unit='bytes'>657457152</capacity>
  <allocation unit='bytes'>658505728</allocation>
  <target>
    <path>debian.iso</path>
    <format type='unknown'/>
  </target>
</volume>
==============================================
It should read
===============================================
...
<path>sheepdog:debian.iso</path>
...
===============================================
I upgraded the system to Debian testing, now running:
================================================
virsh # version
Compiled against library: libvirt 1.2.21
Using library: libvirt 1.2.21
Using API: QEMU 1.2.21
Running hypervisor: QEMU 2.4.0
================================================
The problem still exists.
It should have been solved in libvirt 1.2.17.
See here: https://libvirt.org/news.html
=====================================================
...
update sheepdog client path (Vasiliy Tolstov)
...
=====================================================
Nevertheless, it doesn't work in Debian testing (libvirt 1.2.21).
--
Best regards
Adolf Augustin
E-mail: adolf.augustin(a)zettamail.de
PGP-Key: 0xC4709AFE
Fingerprint: 1806 35FA CAE8 0202 B7AF 12B9 5956 5BC0 C470 9AFE