[libvirt-users] libvirt and virt-manager - Unable to complete install: 'internal error: unsupported input bus usb'
by Eduardo Lúcio Amorim Costa
I'm trying to create a virtual machine using Xen as the hypervisor and
virt-manager (libvirt) as the management module. When trying to create the
virtual machine I get the following error:
"
Unable to complete install: 'internal error: unsupported input bus usb'
"
ERROR DETAILS:
Unable to complete install: 'internal error: unsupported input bus usb'
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 2276, in _do_async_install
    guest.start_install(meter=meter)
  File "/usr/share/virt-manager/virtinst/guest.py", line 461, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/guest.py", line 397, in _create_guest
    domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/usr/lib/python3.7/site-packages/libvirt.py", line 3717, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: internal error: unsupported input bus usb
NOTE I: The deployment process follows the instructions here
https://www.youtube.com/watch?v=BwkmDM-Gpzc and here
https://wiki.centos.org/HowTos/Xen/Xen4QuickStart .
NOTE II: The Xen hypervisor uses CentOS 7 as the "domU".
MORE INFORMATION ABOUT THE PROBLEM HERE:
https://unix.stackexchange.com/q/464186/61742
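(My current guess is that the USB tablet device virt-manager adds by default is what the Xen driver rejects; if so, a change worth trying in the generated XML, purely a sketch on my part, would be:)
<!-- instead of the default USB tablet: -->
<input type='tablet' bus='usb'/>
<!-- use input devices on a bus the Xen driver supports: -->
<input type='mouse' bus='xen'/>
<input type='keyboard' bus='xen'/>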
Thanks! =D
[libvirt-users] Guest startup delay options ignored
by Inception Hosting
Hi Folks,
I have been searching around on this for a while and see similar issues reported going back a number of years without a solution.
The START_DELAY=(number) setting seems to be completely ignored in /etc/sysconfig/libvirt-guests, which is unfortunate, as it means every guest starts at once with no control.
This is fine on NVMe-based servers; however, on older spinning disks the I/O load is creating significant issues.
Is anyone aware of any other method to achieve a start-up delay for guest virtual machines on boot?
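For reference, the relevant part of my /etc/sysconfig/libvirt-guests looks like this (values illustrative):

# /etc/sysconfig/libvirt-guests
ON_BOOT=start
ON_SHUTDOWN=shutdown
START_DELAY=30    # seconds to wait between starting each guest

One thing I am unsure about is whether the delay is even meant to apply to domains marked autostart in libvirtd itself, since those appear to be started by libvirtd directly rather than by the libvirt-guests service.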
Thanks.
Anthony.
[libvirt-users] Efficacy of jitterentropy RNG on qemu-kvm Guests
by procmem
Hello. I'm a distro maintainer and was wondering about the efficacy of
entropy daemons like haveged and jitterentropyd under qemu-kvm. One of the
authors of haveged [0] pointed out that if the hardware cycle counter is
emulated, it is deterministic and thus predictable. He therefore does not
recommend using HAVEGE on those systems. Is this the case with KVM's
counters?
PS. I will be setting VM CPU settings to host-passthrough.
Bonus: if anyone also knows the answer to this question for Xen, please
let me know, because it's the other main platform we support and it
doesn't have the luxury of virtio-rng in PVH mode.
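(For context, on the qemu-kvm side we at least have the option of a host-fed virtio-rng device, configured along these lines:)

<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>

That sidesteps the cycle-counter question entirely, but it is exactly what PVH on Xen lacks.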
Thanks.
[0]
https://github.com/BetterCrypto/Applied-Crypto-Hardening/commit/cf7cef7a8...
[libvirt-users] qemu guest agent
by Cobin Bluth
Hello Libvirt-Users!
I have a quick question about the qemu guest agent.
Is it possible to use the guest agent from inside the guest in order to
query the name of its own domain?
For example, I use a base-image.qcow2 with a baked-in hostname. I would
like to include a little tool in my guest image to change the hostname to
the name of the domain.
Is this sort of thing possible?
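(The fallback I have been considering, in case the agent cannot do this, is to bake the domain name into SMBIOS when defining the guest and read it back from inside; this is only a sketch, and the 'serial' entry is just an arbitrary carrier for the name:)

<os>
  <!-- existing type/boot elements unchanged -->
  <smbios mode='sysinfo'/>
</os>
<sysinfo type='smbios'>
  <system>
    <entry name='serial'>my-domain-name</entry>
  </system>
</sysinfo>

Inside the guest, the value can then be read with:

dmidecode -s system-serial-number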
-Cobin
[libvirt-users] New Application to add to web site
by Matthew
Hello,
Could you please add the OpenVM Dashboard to the list of applications that use libvirt? The URL and description are below.
Also, to all the developers who put libvirt-php together...THANK YOU!
Thanks,
Matthew Penning
URL: https://openvm.tech
Description: The OpenVM Dashboard is an open-source HTML5- and PHP-based web interface for the KVM/QEMU hypervisor. It is designed to be an easy-to-use management platform that lets users create and manage domains (virtual machines) using the libvirt-php module.
Re: [libvirt-users] Windows Guest I/O performance issues (already using virtio) (Matt Schumacher)
by Allence
I think performance is not just about your XML; the host system will have a bigger impact. Maybe you can look at this link:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/...
>Date: Wed, 8 Aug 2018 17:35:11 +0000
>From: Matt Schumacher <matt.s(a)aptalaska.com>
>To: "libvirt-users(a)redhat.com" <libvirt-users(a)redhat.com>
>Subject: [libvirt-users] Windows Guest I/O performance issues (already using virtio)
>
>[quoted message trimmed; the full post appears later in this digest]
[libvirt-users] Mount URL as cdrom/iso KVM/QEMU
by Inception Hosting
Hi Folks,
According to the examples in http://libvirt.org/formatdomain.html#elementsDisks it should be possible to mount an ISO from a URL as a cdrom; the example given is:
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol="http" name="url_path">
    <host name="hostname" port="80"/>
  </source>
  <target dev='hde' bus='ide' tray='open'/>
  <readonly/>
</disk>
I am unable to get this to work at all and wondered if I am just missing something obvious:
Boot failed: Could not read from CDROM (code 0003)
Snippet of the actual XML in use:
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol="http" name="/debian-cd/current/amd64/iso-cd/debian-9.5.0-amd64-netinst.iso">
    <host name="mirror.bytemark.co.uk" port="80"/>
  </source>
  <target dev='hdb' bus='ide' tray='open'/>
  <readonly/>
</disk>
libvirtd (libvirt) 3.9.0, QEMU emulator version 2.10.0
Is anyone able to offer any assistance or tips?
I have tried putting the complete path, including the FQDN, in the url_path as well, with and without http://.
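(The URL itself can be sanity-checked by hand; a redirect or a truncated response here would explain the boot failure:)

curl -I http://mirror.bytemark.co.uk/debian-cd/current/amd64/iso-cd/debian-9.5.0-amd64-netinst.iso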
Many Thanks.
Anthony.
[libvirt-users] Windows Guest I/O performance issues (already using virtio)
by Matt Schumacher
List,
I have a number of Windows 2016 servers I am deploying, but I’m having some I/O performance issues. I have done all of the obvious things like virtio drivers, but am finding there is more performance to be had from hyper-v extensions, from how we virtualize the hardware clock, and from iothreads. I’m using ZVOLs to back the VMs, with 4k block sizes, which seems to offer the best 4k random read/write performance (mail and database workloads), but maybe I’m missing something at this layer too.
Questions:
1. Does my VM config look reasonable for the latest releases of Windows? Are there features I should be using that will help performance?
2. Why does the hypervclock timer make so much performance difference in Windows VMs?
3. Does my virtualized CPU model make sense? I defined Haswell-noTSX-IBRS and libvirt added the features.
4. Which kernel branch offers the best stability and performance?
5. Are there performance gains in using UEFI to boot the Windows guest and defining “<blockio logical_block_size='4096' physical_block_size='4096'/>”? Perhaps better block-size consistency through to the zvol? (A sketch of what I mean follows this list.)
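(For question 5, what I have in mind is attaching the blockio element to the disk definition, roughly like this:)

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <blockio logical_block_size='4096' physical_block_size='4096'/>
  <source dev='/dev/zvol/datastore/vm/testvm-vda'/>
  <target dev='vda' bus='virtio'/>
</disk>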
Here is my setup:
48-core Haswell CPU
192G RAM
Linux 4.14.61 or 4.9.114 (testing both)
ZFS file system on an Optane SSD drive, or ZFS on a dumb HBA with 8 spindles of 15k disks (testing both)
4k block size zvol for virtual machines
32G ARC cache
Here is my VM:
<domain type='kvm' id='12'>
<name>testvm</name>
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<vcpu placement='static'>12</vcpu>
<iothreads>1</iothreads>
<os>
<type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<reset state='on'/>
<vendor_id state='on' value='KVM Hv'/>
</hyperv>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<topology sockets='1' cores='6' threads='2'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='disable' name='spec-ctrl'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='abm'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='yes'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native' ioeventfd='on' iothread='1'/>
<source dev='/dev/zvol/datastore/vm/testvm-vda'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
<source bridge='lan'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-12-testvm/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5901' autoport='no' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='none'/>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+100</label>
<imagelabel>+0:+100</imagelabel>
</seclabel>
</domain>
[libvirt-users] Copy volume from one storage to another
by Shashwat shagun
Hi,
I want to copy a volume from one pool to another through the libvirt Go API, but I’m unaware of any such functions. Can anybody guide me a little bit here?
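For what it's worth, here is the rough shape of what I'm after; a minimal sketch assuming the libvirt-go bindings, where the pool and volume names are placeholders and I have not verified the capacity handling:

package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go" // bindings import path at time of writing
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// look up the source volume in its pool ("default" and "disk.qcow2" are placeholders)
	srcPool, err := conn.LookupStoragePoolByName("default")
	if err != nil {
		log.Fatal(err)
	}
	defer srcPool.Free()
	srcVol, err := srcPool.LookupStorageVolByName("disk.qcow2")
	if err != nil {
		log.Fatal(err)
	}
	defer srcVol.Free()

	// destination pool ("backup" is a placeholder)
	dstPool, err := conn.LookupStoragePoolByName("backup")
	if err != nil {
		log.Fatal(err)
	}
	defer dstPool.Free()

	// minimal XML for the new volume; capacity 0 here on the assumption
	// that libvirt sizes the copy from the source volume when cloning
	volXML := `<volume>
	  <name>disk-copy.qcow2</name>
	  <capacity unit='bytes'>0</capacity>
	  <target><format type='qcow2'/></target>
	</volume>`

	// wraps virStorageVolCreateXMLFrom(), which copies the contents across pools
	newVol, err := dstPool.StorageVolCreateXMLFrom(volXML, srcVol, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer newVol.Free()
	fmt.Println("volume copied")
}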
Best Regards,
Shashwat Shagun
me(a)shashwat.tech
[libvirt-users] LIBVIRT-4.6.0 can't work with QEMU 3.0.0
by Holger Schranz
Hello,
if I try to use libvirt-4.6.0 together with qemu-3.0.0-rc4,
I run into an issue. Please see the following memo.
Best regards
Holger
================================================================================
Build QEMU-3.0.0-RC4 and LIBVIRT-4.6.0:
QEMU-3.0.0-RC4:
make clean
./configure --enable-libiscsi --enable-libusb --enable-bzip2 \
  --enable-libnfs --enable-spice --enable-user --enable-virtfs \
  --enable-opengl --enable-sdl --enable-gtk --enable-virglrenderer \
  --disable-seccomp --prefix=/usr
make -j$(grep -c ^processor /proc/cpuinfo)
su
make install
LIBVIRT-4.6.0
make clean
./configure -q --with-lxc --with-storage-iscsi --with-storage-scsi \
  --with-interface --with-storage-lvm --with-storage-fs --with-udev \
  --with-vmware --with-storage-mpath --prefix=/usr
make -j$(grep -c ^processor /proc/cpuinfo)
su
systemctl stop libvirtd.service
cd libvirt-4.5.0
make uninstall
cd ..
cd libvirt-4.6.0
make install
systemctl status libvirtd.service -l
systemctl daemon-reload
systemctl start libvirtd.service
systemctl status libvirtd.service -l
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2018-08-08 09:05:26 CEST; 3s ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 137948 (libvirtd)
Tasks: 19 (limit: 32768)
   CGroup: /system.slice/libvirtd.service
           ├─  4725 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt_leaseshelper
           ├─  4726 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt_leaseshelper
           └─137948 /usr/sbin/libvirtd
Aug 08 09:05:26 etcsvms5 systemd[1]: Starting Virtualization daemon...
Aug 08 09:05:26 etcsvms5 systemd[1]: Started Virtualization daemon.
Aug 08 09:05:26 etcsvms5 libvirtd[137948]: 2018-08-08 07:05:26.876+0000: 137964: info : libvirt version: 4.6.0
Aug 08 09:05:26 etcsvms5 libvirtd[137948]: 2018-08-08 07:05:26.876+0000: 137964: info : hostname: etcsvms5
Aug 08 09:05:26 etcsvms5 libvirtd[137948]: 2018-08-08 07:05:26.876+0000: 137964: error : virNetworkObjAssignDefLocked:589 : operation failed: network 'default' already exists with uuid 52abe250-4f23-40fe-875e-0808d57ce72d
Aug 08 09:05:27 etcsvms5 dnsmasq[4725]: read /etc/hosts - 8 addresses
Aug 08 09:05:27 etcsvms5 dnsmasq[4725]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Aug 08 09:05:27 etcsvms5 dnsmasq-dhcp[4725]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Aug 08 09:05:27 etcsvms5 libvirtd[137948]: 2018-08-08 07:05:27.711+0000: 137964: warning : virLXCDriverCapsInit:82 : Failed to get host CPU cache info
Aug 08 09:05:27 etcsvms5 libvirtd[137948]: 2018-08-08 07:05:27.719+0000: 137964: warning : umlCapsInit:73 : Failed to get host CPU cache info
etcsvms5:/home/shl/Install/libvirt-4.6.0 #
---> Test 1: Start virt-manager
etcsvms5:/home/shl/Install/libvirt-4.6.0 # virsh version
Compiled against library: libvirt 4.6.0
Using library: libvirt 4.6.0
Using API: LXC 4.6.0
Running hypervisor: LXC 4.4.140
etcsvms5:/home/shl/Install/libvirt-4.6.0 #
Information shown in the virt-manager window:
Unable to connect to libvirt qemu:///system.
no connection driver available for qemu:///system
Libvirt URI is: qemu:///system
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/connection.py", line 1036, in _do_open
    self._backend.open(self._do_creds_password)
  File "/usr/share/virt-manager/virtinst/connection.py", line 144, in open
    open_flags)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 104, in openAuth
    if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: no connection driver available for qemu:///system
---> Test 2: Start VM via virsh create
etcsvms5:/home/shl/Install/libvirt-4.6.0 # cd /kvm/CS8400/M4
etcsvms5:/kvm/CS8400/M4 # cd VLP0
etcsvms5:/kvm/CS8400/M4/VLP0 # virsh create M4-VLP0.xml
error: Failed to create domain from M4-VLP0.xml
error: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm
etcsvms5:/kvm/CS8400/M4/VLP0 #
-----
Switching back to libvirt-4.5.0, everything runs OK again.
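One thing I still need to verify is whether the QEMU driver was built into this libvirtd at all; my libvirt configure line above never enables it explicitly, so the next attempt will be the same options with the QEMU driver forced on (just a guess on my part, and I also need to double-check that the yajl JSON development files are installed, since I understand configure can disable the QEMU driver without them):

./configure -q --with-qemu --with-lxc --with-storage-iscsi --with-storage-scsi \
  --with-interface --with-storage-lvm --with-storage-fs --with-udev \
  --with-vmware --with-storage-mpath --prefix=/usr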