[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating the
disk definition manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
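What I was hoping for was something along these lines (a sketch only; the volume
name, size and guest options are made up, and I have not verified that the vol=
syntax works with virt-install 1.0.1 and rbd pools):

# create a volume inside the rbd pool
virsh vol-create-as myrbdpool kvm01-storage 20G --format raw

# reference the pool volume from virt-install instead of a plain path
virt-install --name kvm01.powercraft.nl --ram 2048 --vcpus 2 --import \
  --disk vol=myrbdpool/kvm01-storage,bus=virtio --noautoconsole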
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
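To illustrate the collision (this is not libvirt's actual generator, just a
demonstration that identical seeds produce identical output):

# Two hosts that reboot in the same second and start libvirtd with the same
# PID end up with the same seed, and therefore the same "random" MAC bytes.
import random
import time

boot_second = int(time.time())   # both hosts restarted within the same second
pid_host_a = 612                 # identical PIDs after reboot are common --
pid_host_b = 612                 # 43 of our 60 hosts shared a PID

random.seed(boot_second ^ pid_host_a)
mac_a = [random.randrange(256) for _ in range(3)]

random.seed(boot_second ^ pid_host_b)
mac_b = [random.randrange(256) for _ in range(3)]

print(mac_a == mac_b)            # True: same seed, same MAC suffix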
[libvirt-users] stream finish throws exception via python API
by Shahar Havivi
Hi,
The following snippet works fine for receiving the data, but when calling
stream.finish() we get the following error:
stream = con.newStream()
vol.download(stream, 0, 0, 0)
buf = stream.recv(1024)
stream.finish()
libvirt: I/O Stream Utils error : internal error: I/O helper exited abnormally
Traceback (most recent call last):
  File "./helpers/kvm2ovirt", line 149, in <module>
    download_volume(vol, item[1], diskno, disksitems, pksize)
  File "./helpers/kvm2ovirt", line 102, in download_volume
    stream.finish()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 5501, in finish
    if ret == -1: raise libvirtError ('virStreamFinish() failed')
libvirt.libvirtError: internal error: I/O helper exited abnormally
Am I doing something wrong?
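As far as I understand, recv() is supposed to be called in a loop until it returns
no more data before calling finish(); a minimal sketch of that pattern (the
connection URI, pool and volume names are placeholders):

import libvirt

conn = libvirt.open('qemu:///system')  # placeholder URI
pool = conn.storagePoolLookupByName('default')
vol = pool.storageVolLookupByName('disk.img')

stream = conn.newStream()
vol.download(stream, 0, 0, 0)

with open('disk.img', 'wb') as out:
    while True:
        buf = stream.recv(64 * 1024)
        if not buf:              # empty result means end of stream
            break
        out.write(buf)

stream.finish()
conn.close()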
Thank you,
Shahar.
[libvirt-users] systemctl libvirt-guests.service fails to start during host boot
by Benoit
Hi,
I have been using qemu and kvm for a while, but I am a newbie to libvirt
(I really like it, though :).
I am on Parabola (a fork of Arch Linux, using systemd).
My only issue is with the libvirt-guests service: when my host boots,
about 7 times out of 10 the service fails.
------------------------------------------------------------------------
systemctl status libvirt-guests.service
● libvirt-guests.service - Suspend Active Libvirt Guests
   Loaded: loaded (/usr/lib/systemd/system/libvirt-guests.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2016-08-11 10:55:35 CEST; 36s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 347 ExecStart=/usr/lib/libvirt/libvirt-guests.sh start (code=exited, status=1/FAILURE)
 Main PID: 347 (code=exited, status=1/FAILURE)

Aug 11 10:55:35 oms_apex_plaisir systemd[1]: Starting Suspend Active Libvirt Guests...
Aug 11 10:55:35 oms_apex_plaisir libvirt-guests.sh[347]: Resuming guests on default URI...
Aug 11 10:55:35 oms_apex_plaisir libvirt-guests.sh[347]: Resuming guest : error: failed to connect to the hypervisor
Aug 11 10:55:35 oms_apex_plaisir libvirt-guests.sh[347]: error: no valid connection
Aug 11 10:55:35 oms_apex_plaisir libvirt-guests.sh[347]: error: Failed to connect socket to '/var/run/libvirt/libvirt-sock' no such file
Aug 11 10:55:35 oms_apex_plaisir systemd[1]: libvirt-guests.service: Main process exited, code=exited, status=1/FAILURE
Aug 11 10:55:35 oms_apex_plaisir systemd[1]: Failed to start Suspend Active Libvirt Guests.
Aug 11 10:55:35 oms_apex_plaisir systemd[1]: libvirt-guests.service: Unit entered failed state.
Aug 11 10:55:35 oms_apex_plaisir systemd[1]: libvirt-guests.service: Failed with result 'exit-code'.
------------------------------------------------------------------------
It looks like the service starts too early.
When I restart the service manually everything is fine, and on shutdown
it sends the ACPI message correctly to my guests.
I have already seen a lot of discussion about this but have not been able
to find a working solution. Any idea to help me figure out the issue?
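If it is an ordering problem, would a drop-in forcing the guests service to wait
for the daemon be the right fix? A sketch (assuming the daemon unit is named
libvirtd.service on Parabola, which I have not verified):

# systemctl edit libvirt-guests.service
# (creates /etc/systemd/system/libvirt-guests.service.d/override.conf)
[Unit]
Requires=libvirtd.service
After=libvirtd.service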
many thanks
belette
[libvirt-users] Help With Nested Virtualization
by Brandon Golway
(Copied from my post on the Arch Linux forums:
https://bbs.archlinux.org/viewtopic.php?pid=1650650#p1650650)
I have a FreeNAS 10 KVM guest set up via libvirt on my Arch server, and I'd like
to be able to test out the virtualization features in the nightly FreeNAS 10
builds, but the problem is that I can't seem to get VT-x to pass through to the
guest correctly. I have followed the Nested Virtualization section of the KVM
wiki (https://wiki.archlinux.org/index.php/KVM#Nested_virtualization) and I'm
sure it's supported and enabled.
Here's proof:
[bran@nas ~]$ sudo systool -m kvm_intel -v | grep nested
    nested = "Y"
[bran@nas ~]$ lscpu | grep Virtualization
Virtualization:  VT-x
So the host/hardware isn't the problem, I believe the problem lies within
libvirt.
Red Hat says to use "copy host CPU configuration" or host-passthrough, with the
latter being preferred. If I use the former I get a "CMT not supported" error;
host-passthrough, according to this post
(https://bbs.archlinux.org/viewtopic.php?id=214539), should work. When I set
host-passthrough the VM boots up, but when I try to start a guest inside it,
FreeNAS gives me the error that VT-x instructions aren't available. I have no
idea how to check for them from inside FreeNAS either, since the /proc
pseudo-filesystem doesn't exist in BSD. I know for a fact that this isn't a
problem with the FreeNAS builds, because I've been testing them for months on my
Windows 10 desktop via VMware and nested virtualization works without issue, so
it must be an issue with KVM/libvirt.
Can someone clue me in on what the issue is?
Here's the entire XML config for the FreeNAS VM:
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made
using:
virsh edit FreeNAS_10
or other application using the libvirt API.
-->
<domain type='kvm'>
<name>FreeNAS_10</name>
<uuid>ea816b85-7685-495a-bc97-28a882f190d7</uuid>
<title>FreeNAS v10</title>
<description>Nightly Alpha Test Releases</description>
<memory unit='KiB'>6340608</memory>
<currentMemory unit='KiB'>6340608</currentMemory>
<vcpu placement='static'>4</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.6'>hvm</type>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough'/>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/sbin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/mnt/storage/vm-storage/FreeNAS_Disk1.img'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/mnt/storage/vm-storage/FreeNAS_Disk2.img'/>
<target dev='vdc' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/mnt/storage/vm-storage/FreeNAS_Disk3.img'/>
<target dev='vdd' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='directsync' io='native'/>
<source file='/var/lib/libvirt/images/FreeNAS_10.img'/>
<target dev='vde' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:24:5c:08'/>
<source bridge='vmbridge'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'
primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='1'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a'
function='0x0'/>
</memballoon>
</devices>
</domain>
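For what it's worth, an alternative <cpu> element that requires the vmx flag
explicitly instead of passing the whole host CPU through would look roughly like
this (a sketch based on the libvirt domain XML format; untested here):

<cpu mode='host-model'>
  <feature policy='require' name='vmx'/>
</cpu>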
Thanks,
Brandon Golway
[libvirt-users] attaching storage pool error
by Johan Kragsterman
Hi!
System centos7, system default libvirt version.
I've succeeded in creating an NPIV storage pool, which I could start without problems, but I couldn't attach it to the VM; it threw errors when I tried. I want to boot from it, so I need it working from the start. I read one of Daniel Berrange's old (2010) blog posts about attaching an iSCSI pool and drew my conclusions from that. Other documentation I haven't found. Can someone point me to more recent documentation of this?
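What I was attempting looked roughly like this (a sketch only; the pool and
volume names are placeholders, and I am not sure device='lun' is the right
choice for booting):

<disk type='volume' device='lun'>
  <driver name='qemu' type='raw'/>
  <source pool='npiv-pool' volume='unit:0:0:1'/>
  <target dev='sda' bus='scsi'/>
</disk>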
Are there other mailing lists in the libvirt/KVM communities that are more focused on storage? If so, I'd like to know about them, since I'm a storage guy and fiddle around a lot with these things...
There are quite a few things I'd like to know about that I doubt this list cares about, or has knowledge of, like multipath devices/pools, virtio-scsi in combination with an NPIV storage pool, etc.
So anyone that can point me further....?
Rgrds Johan
[libvirt-users] active commit not supported with this QEMU binary
by Ishmael Tsoaela
Hi All,
I am creating a snapshot, and while running blockcommit I am coming across
this error:
virsh blockcommit Node-A vda --verbose --pivot
error: unsupported configuration: active commit not supported with this
QEMU binary
Does anyone know what this could be and how to fix it? I tried compiling the
latest libvirt and the latest qemu.
libvirtd -V
libvirtd (libvirt) 2.1.0
kvm -version
QEMU emulator version 2.6.93, Copyright
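For what it's worth, checking which binary the domain actually uses should look
something like this (the path below is just an example; whatever <emulator>
shows in the domain XML is what matters):

# which binary the domain is defined to use
virsh dumpxml Node-A | grep '<emulator>'
# version of that specific binary
/usr/bin/qemu-system-x86_64 --version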
[libvirt-users] IDE vs SATA vs VirtIO
by William Kern
OK, from googling it's clear that VirtIO is consistently recommended for the
best performance.
However, for some legacy VMs that we are bringing into the system it isn't an
option.
Is there a performance difference between IDE and SATA?
For that matter, how much BETTER is VirtIO than the other two?
Note, these are QCOW2 images if that makes a difference.
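For reference, the three buses are selected per disk via the <target> element
(a sketch; the device names are illustrative):

<target dev='hda' bus='ide'/>      <!-- emulated IDE -->
<target dev='sda' bus='sata'/>     <!-- emulated AHCI/SATA -->
<target dev='vda' bus='virtio'/>   <!-- paravirtual virtio-blk -->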
-bill
[libvirt-users] couple of questions about virt-v2v
by cmc
Hi,
I have two questions about virt-v2v that I hope someone could help with.
1. I have a host that I use to export VMs from VMware to oVirt, as the RHEL
7 version of virt-v2v does not support W2012. When I run the
conversion/export (-o ovirt -os ovirt-srv:/mnt/export) it looks for a
bridge on the host I'm running the export on. What does it need the bridge
on the local host for?
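The invocation is roughly this (a sketch; the vCenter URI and guest name are
placeholders, and -b simply maps the guest's network to a bridge on the output
side):

virt-v2v -ic vpx://administrator@vcenter.example.com/Datacenter/esxi01?no_verify=1 \
  -o ovirt -os ovirt-srv:/mnt/export \
  -b ovirtmgmt Guest-W2012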
2. When I run a conversion of a W2012 host, it reports:
"virt-v2v: warning: Neither rhev-apt.exe nor vmdp.exe can be found. Unable
to install one of them.
virt-v2v: warning: there is no QXL driver for this version of Windows (6.2
x86_64). virt-v2v looks for this driver in /usr/share/virtio-win"
I can't find these two exe files. I've installed the ovirt-guest-tools-iso
for FC23, and symlinked the mounted ISO to /usr/share/virtio-win, and
though the two files it is looking for are not on that ISO, I thought it
might help (it doesn't).
Can virt-v2v install such drivers, or should this just be done manually
from within the guest?
Thanks,
C