[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit on ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my RBD disks?
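For clarity, below is the sort of invocation I would have expected to work; the volume name and size are examples, and whether vol-create-as supports the rbd backend in these versions is exactly what I am unsure about:

# create a 10 GiB raw volume in the pool
virsh vol-create-as myrbdpool kvm01-storage 10G --format raw
# attach it at install time via the vol=pool/volume syntax
virt-install --name ceph-test --ram 1024 --import \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio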
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate
MAC addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple of seconds of
each other.
2) All the host machines had fairly similar libvirtd PIDs (within ~100
PIDs of each other).
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'.
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
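As a rough illustration (plain bash arithmetic, not libvirt code), the seed described above amounts to:

# same arithmetic as the libvirt seed: epoch seconds XOR current PID
echo $(( $(date +%s) ^ $$ ))

Hosts rebooted within the same second whose libvirtd PIDs are within ~100 of each other therefore get seeds differing only in a few low bits.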
[libvirt-users] virDomainCoreDumpWithFormat files created as root
by NoxDaFox
Greetings,
I am dumping guest VM memory for inspection using the API call
virDomainCoreDumpWithFormat, and the created files appear to be owned
by root (both user and group).
I have searched around but haven't found an answer. Is there a way to
instruct QEMU to create those files under a different user?
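For context, the virsh equivalent of what I am doing is roughly the following (domain name and output path are examples):

# memory-only dump in ELF format, comparable to the API call above
virsh dump --memory-only --format elf mydomain /tmp/mydomain.dump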
Thank you.
[libvirt-users] PCI passthrough fails in virsh: iommu group is not viable
by Alex Holst
I would really appreciate some pointers on what I am doing wrong here.
I need to run multiple virtual guests that each have their own GPU and
some USB controllers passed through. I am able to run one of the guests
like this (assuming the vfio setup has happened elsewhere), but I would
prefer to use virsh:
kvm -M q35 -m 8192 -cpu host,kvm=off \
    -smp 4,sockets=1,cores=4,threads=1 \
    -bios /usr/share/seabios/bios.bin -vga none \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
    -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
    -device vfio-pci,host=00:1d.0,bus=pcie.0 \
    -device vfio-pci,host=00:1a.0,bus=pcie.0 \
    -nographic -boot menu=on /vm2/foo.img
I found the hardware addresses using lspci. When I start the same
machine with virsh, using what I believe are the same addresses, I get:
virsh # start foo
error: Failed to start domain foo
error: internal error: process exited while connecting to monitor: 2015-08-12T18:24:10.651720Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x4: vfio: error, group 18 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
2015-08-12T18:24:10.651752Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x4: vfio: failed to get group 18
2015-08-12T18:24:10.651766Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x4: Device initialization failed.
2015-08-12T18:24:10.651781Z qemu-system-x86_64: -device vfio-pci,host=02:00.0,id=hostdev0,bus=pci.0,addr=0x4: Device 'vfio-pci' could not be initialized
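For reference, the other devices sharing an IOMMU group can be listed through sysfs (group number and device address taken from the error and lspci output above):

# all devices that must be bound to vfio-pci together with the GPU
ls /sys/kernel/iommu_groups/18/devices/
ls /sys/bus/pci/devices/0000:02:00.0/iommu_group/devices/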
I have included dumpxml output below -- is the hostdev section wrong?
<domain type='kvm'>
  <name>foo</name>
  <uuid>51f57655-11be-41bf-b925-2e6aef01f9c4</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static' current='1'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/vm2/foo.img'/>
      <target dev='sda' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>
    <interface type='direct'>
      <mac address='52:54:00:f0:47:f5'/>
      <source dev='p5p1' mode='bridge'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'/>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>
--
Alex Holst
[libvirt-users] VM locking
by Prof. Dr. Michael Schefczyk
Dear All,
I am trying to use VM (disk) locking on a two-node CentOS 7 KVM cluster. Unfortunately, I am not successful.
Using virtlockd (https://libvirt.org/locking-lockd.html), I get each host to write the zero-length file with a hashed filename to the shared folder specified. Regardless of which host I start a VM (domain) on, both hosts produce the identical filename per VM. What does not work, however, is preventing the second host from starting a VM that is already running on the first host.
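For reference, the virtlockd configuration from the linked page looks like this (the shared lockspace path here is an example):

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf -- directory on the shared folder
file_lockspace_dir = "/shared/virtlockd"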
Using sanlock (https://libvirt.org/locking-sanlock.html), no domains start at all. However, the file "__LIBVIRT__DISKS__" is also not written to the shared folder. Hence, nothing works.
Can someone please point me in the right direction?
My system is current CentOS 7 using a Gluster storage setup as suggested for oVirt (https://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/), based on the oVirt 3.5 repo but without an engine. I do this because I want to retain VM files in qcow2 format, with human-readable names, for live backups. Live migration, for example, does work. A locking mechanism to prevent starting a VM twice would be good.
Please note that this configuration - both according to Red Hat and to my own trial and error - requires Lock=False in /etc/nfsmount.conf. Is there a connection with my findings? My issues occur regardless of whether the files are in NFS or Gluster folders. KVM must access the Gluster storage indirectly via NFS rather than directly via Gluster, as Gluster storage does not seem to fully work, at least in virt-manager.
My software versions are:
libvirt 1.2.8-16.el7_1.3.x86_64
qemu-kvm-ev 2.1.2-23.el7_1.3.1.x86_64
Regards,
Michael
[libvirt-users] Live migration & storage copy broken since 1.2.17
by Antoine Millet
Hi,
It seems that live migration using storage copy is broken since libvirt
1.2.17.
Here is the command line used to do the migration using virsh:
virsh migrate --live --p2p --persistent --undefinesource \
    --copy-storage-all d2b545d3-db32-48d3-b7fa-f62ff3a7fa18 \
    qemu+tcp://dest/system
XML dump of my storage:
<pool type='logical'>
  <name>local</name>
  <uuid>276bda97-d6c2-4681-bc3f-0c8c221bd1b1</uuid>
  <capacity unit='bytes'>1024207093760</capacity>
  <allocation unit='bytes'>4294967296</allocation>
  <available unit='bytes'>1019912126464</available>
  <source>
    <name>hkvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/hkvm</path>
  </target>
</pool>
When it fails, I get the following error:
error: Storage volume not found: no storage vol with matching name
'402aea08-26aa-45b6-a46e-a3b02137ff26'
Where 402aea08-26aa-45b6-a46e-a3b02137ff26 is the volume bound to my
VM:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source pool='local' volume='402aea08-26aa-45b6-a46e-a3b02137ff26'/>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
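For reference, one way to check whether the volume name resolves in the pool (standard virsh commands; I would run them on both source and destination):

# list volumes in the 'local' pool and query the one in question
virsh vol-list local
virsh vol-info --pool local 402aea08-26aa-45b6-a46e-a3b02137ff26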
Here are the test results (the version applies to both source and destination):
- 1.2.15 -> ok
- 1.2.16 -> ok
- 1.2.17 -> fails
- 1.2.18 -> fails
Is this a bug? Am I missing something?
Thanks,
Antoine
[libvirt-users] CPU feature 'svm' not reported inside guest without 'host-passthrough'
by Sladjan Ristic
Hi,
I am confused by the behaviour of the CPU description in my VM. I have a host with an AMD CPU with
the SVM feature, and I want to try nested virtualization in a Fedora 22 guest. The host is Fedora 22 as well.
libvirtd (libvirt) 1.2.13.1
kernel 4.1.6-200.fc22.x86_64
1. I tried 'custom' mode with model qemu64 and the 'svm' feature required, but '/proc/cpuinfo' in the
guest doesn't show svm.
2. However, if I forbid the 'svm' feature, the guest can't be created at all, and libvirt complains that
the host CPU has SVM.
3. The only way to have svm reported in '/proc/cpuinfo' inside the guest is using 'host-passthrough'
mode, but I want to avoid that to allow migration of the VM.
Is this an issue with AMD CPUs?
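For what it's worth, host-side nested SVM support can be checked like this (standard sysfs path for the kvm_amd module):

# '1' (or 'Y' on newer kernels) means nested SVM is enabled on the host
cat /sys/module/kvm_amd/parameters/nested
# host CPU features as libvirt sees them
virsh capabilities | grep -w svm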
This is part of the description of the guest:
<domain type='kvm'>
  <name>vagrant-fedora-libvirt-nested_default</name>
  <uuid>4a379890-b891-4971-aad6-12fad45eaebc</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
    <feature policy='require' name='svm'/>
  </cpu>
Regards,
Sladjan