[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), the detach never takes effect: the command reports
success, but the interface is still present in the domain XML. I'm not sure
whether there is an existing bug for this. I have confirmed with someone
that disks show similar behavior; is this considered acceptable as well?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has finished booting (expanding the sleep time to 10 seconds), it succeeds.
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
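For reference, this is the kind of check I use to tell whether the detach really
took effect, since "Interface detached successfully" only means the unplug request
was sent and the guest still has to acknowledge it (domain name and MAC address
are just my test values):
# poll the live XML until the interface disappears, or give up after 30 seconds
for i in $(seq 1 30); do
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || { echo "interface removed"; break; }
    sleep 1
done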
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 2 months
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in
my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible
values on and off can be used to turn off host offloading options. By
default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU
only)* The mrg_rxbuf attribute can be used to control mergeable rx buffers
on the host side. Possible values are on (default) and off. *Since 1.2.13
(QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible
values on and off can be used to turn off guest offloading options. By
default, the supported offloads are enabled by QEMU.
*Since 1.2.9 (QEMU only)*
Then I disabled UFO on my guest's vNIC with the following configuration:
<devices>
<interface type='network'>
<source network='default'/>
<target dev='vnet1'/>
<model type='virtio'/>
<driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
queues='5' rx_queue_size='256' tx_queue_size='256'>
<host gso='off' ufo='off'/>
<guest ufo='off'/>
</driver>
</interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However,
can I disable UFO without touching the host side, or does it always have to be
disabled on both host and guest like this?
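For what it's worth, this is how I verify inside the guest that the offload
really ended up disabled (eth0 is just the interface name in my guest):
# inside the guest: show the UDP fragmentation offload state of the virtio NIC
ethtool -k eth0 | grep udp-fragmentation-offload
# look for: udp-fragmentation-offload: off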
Thanks,
Brs,
Natsu
4 years, 3 months
[libvirt-users] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
by Li Feng
Hi Guys,
I want to add vhost-user-scsi-pci/vhost-user-blk-pci support to libvirt.
The usage in QEMU looks like this:
Vhost-SCSI
-chardev socket,id=char0,path=/var/tmp/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
Vhost-BLK
-chardev socket,id=char1,path=/var/tmp/vhost.1
-device vhost-user-blk-pci,id=blk0,chardev=char1
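If I understand correctly, vhost-user also needs the guest RAM to be shared with
the external backend, so the command lines above would additionally carry something
like the following (size and path are just example values):
-object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0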
Which of the following types should I add to libvirt?
Type1:
<hostdev mode='subsystem' type='vhost-user'>
<source protocol='vhost-user-scsi' path='/tmp/vhost-scsi.sock'></source>
<alias name="vhost-user-scsi-disk1"/>
</hostdev>
Type2:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source protocol='vhost-user' path='/tmp/vhost-scsi.sock'>
</source>
<target dev='sdb' bus='vhost-user-scsi'/>
<boot order='3'/>
<alias name='scsi0-0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source protocol='vhost-user' path='/tmp/vhost-blk.sock'>
</source>
<target dev='vda' bus='vhost-user-blk'/>
<boot order='1'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
Could anyone give some suggestions?
Thanks,
Feng Li
--
The SmartX email address is only for business purpose. Any sent message
that is not related to the business is not authorized or permitted by
SmartX.
This mailbox is the work mailbox of Beijing Zhiling Haina Technology Co., Ltd. (SmartX). Any mail sent from this mailbox that is unrelated to work has not received any express or implied authorization from the company.
4 years, 11 months
[libvirt-users] Internal error reported by libvirt while creating a VM
by Ajay Kumar
Hi, I am Ajay Kumar.
I am trying to create a virtual machine on a remote host (where libvirt
5.8.0 is installed) using virt-manager running on my local Ubuntu machine.
The specifications of my remote host are:
Hypervisor: KVM
QEMU version: QEMU emulator version 2.4.0.1, Copyright (c) 2003-2008
Fabrice Bellard
The error below occurs when I try to install KVM-VMI (
https://github.com/KVM-VMI/kvm-vmi/tree/kvmi), which involves a modified
QEMU installed at /usr/local/bin/qemu-system-x86_64.
The error is raised when I try to create a virtual machine using the
virt-manager GUI:
Unable to complete install: 'internal error: process exited while
connecting to monitor:
(process:7400): GLib-WARNING **: 18:19:45.044: ../../../../glib/gmem.c:489:
custom memory allocation vtable not supported
2019-09-30T18:19:45.046714Z qemu-system-x86_64: -msg timestamp=on:
Unsupported machine type
Use -machine help to list supported machines!'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in
cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/create.py", line 2553, in
_do_async_install
guest.start_install(meter=meter)
File "/usr/share/virt-manager/virtinst/guest.py", line 498, in
start_install
doboot, transient)
File "/usr/share/virt-manager/virtinst/guest.py", line 434, in
_create_guest
domain = self.conn.createXML(install_xml or final_xml, 0)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3603, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: internal error: process exited while connecting to monitor:
(process:7400): GLib-WARNING **: 18:19:45.044: ../../../../glib/gmem.c:489:
custom memory allocation vtable not supported
2019-09-30T18:19:45.046714Z qemu-system-x86_64: -msg timestamp=on:
Unsupported machine type
Use -machine help to list supported machines!
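For reference, I assume the mismatch can be narrowed down by comparing what the
modified QEMU binary supports with what the domain XML asks for, roughly like this
(the domain name 'kvm-vmi-guest' is just a placeholder):
# list machine types the modified QEMU actually supports
/usr/local/bin/qemu-system-x86_64 -machine help
# show the machine type requested in the domain XML
virsh dumpxml kvm-vmi-guest | grep "machine="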
--
*With Best Regards:*
Ajay
5 years, 1 month
[libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails
by info@layer7.net
Hi folks!
I created an LXC container with this XML file:
<domain type='lxc'>
<name>lxctest1</name>
<uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
<metadata>
<libosinfo:libosinfo
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://centos.org/centos/6.9"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>1024000</memory>
<currentMemory unit='KiB'>1024000</currentMemory>
<vcpu>2</vcpu>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64'>exe</type>
<init>/sbin/init</init>
</os>
<idmap>
<uid start='0' target='200000' count='65535'/>
<gid start='0' target='200000' count='65535'/>
</idmap>
<features>
<privnet/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/libvirt_lxc</emulator>
<filesystem type='mount' accessmode='mapped'>
<source dir='/mnt'/>
<target dir='/'/>
</filesystem>
<interface type='network'>
<mac address='00:16:3e:3e:3e:bb'/>
<source network='Public Network'/>
</interface>
<console type='pty'>
<target type='lxc' port='0'/>
</console>
</devices>
</domain>
I would expect it to have 2 CPU cores and 1 GB of RAM.
The RAM config works.
The CPU config does not:
[root@lxctest1 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
Stepping: 4
CPU MHz: 2399.950
BogoMIPS: 4205.88
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23
It shows me all the CPUs of the host.
I also tried it with
<cpu>
<topology sockets='1' cores='2' threads='1'/>
</cpu>
That didn't help either.
I tried to modify the vcpus through virsh:
#virsh -c lxc:/// setvcpus lxctest1 2
error: this function is not supported by the connection driver:
virDomainSetVcpus
That didn't work either.
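What I'm wondering now is whether I have to fall back on pinning instead, i.e.
restricting the container to specific host CPUs via the cpuset attribute, roughly
like this (the CPU numbers are just an example; I haven't verified that the LXC
driver honors this, nor that lscpu would then report fewer CPUs, since
/proc/cpuinfo is not namespaced):
<vcpu placement='static' cpuset='0-1'>2</vcpu>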
This happens on:
CentOS 7
Kernel: 5.1.15-1.el7.elrepo.x86_64
#virsh -V
Virsh command line tool of libvirt 4.5.0
See web site at https://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC ESX Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Gluster ZFS
Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline
and also on
Fedora 30
Kernel: 5.2.9-200.fc30.x86_64
# virsh -V
Virsh command line tool of libvirt 5.1.0
See web site at https://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC LibXL OpenVZ VMware PHYP VirtualBox ESX
Hyper-V Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog
Gluster ZFS
Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline
-----------
Can anyone please tell me what I am doing wrong here?
Thank you !
Greetings
Oliver
5 years, 2 months
[libvirt-users] experience with balloon memory ?
by Lentes, Bernd
Hi ML,
I'm thinking about using memory ballooning for our domains. We have about 15 domains running concurrently,
and I think it would be nice if a domain could grab more RAM when it needs it and release it again when it no longer does.
But I have no experience with it, so I have some questions (a rough sketch of the setup I have in mind follows after them):
- is live migration possible with ballooning?
- is it stable?
- the domain needs an appropriate driver, I think?
- are there drivers for Windows 7 and 10?
- are there drivers for Linux, and which kernel version do I need?
- does someone have experience with it? Is there a kind of "best practice"?
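For context, this is roughly the configuration and the commands I understand to be
involved; the sizes and the domain name are just example values:
<memory unit='GiB'>8</memory>                 <!-- upper limit the balloon can grow to -->
<currentMemory unit='GiB'>2</currentMemory>   <!-- what the domain gets at boot -->
<devices>
  <memballoon model='virtio'/>
</devices>
# resize the balloon of a running domain
virsh setmem mydomain 4G --live
# show the memory statistics reported by the guest balloon driver
virsh dommemstat mydomain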
Thanks.
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Building 35.34 - Room 208
Helmholtz Zentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
Perfect is he who makes no mistakes
So the dead are perfect
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Chair of the Supervisory Board: MinDir'in Prof. Dr. Veronika von Messling
Management: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Kerstin Guenther
Registration court: Amtsgericht Muenchen, HRB 6466
VAT ID: DE 129521671
5 years, 2 months
[libvirt-users] Privacy Extension not working in VM
by Thomas Luening
Hello @ all
While rebuilding my server from Debian 9 to Debian 10, I also
switched from VirtualBox to libvirt/KVM. Due to new requirements for the
VMs, I now have a problem which I unfortunately cannot solve.
The problem has already been discussed in the German Debian forum,
unfortunately also without success.
The facts:
- ISP = dual stack, with a forced reconnect once a day
- Host and VM = Debian 10
- The VMs are regular LAN clients attached via macvtap devices
- IPv4 = DHCP and NAT by DSL-Router
- IPv6 = GUA via RA and SLAAC (2003::/3)
- IPv4 works fine in the VM
- IPv6 (NDP, RA, SLAAC) basically also works fine in the VM
The problem in the VM:
- The MAC-based GUA (2000::/3) is OK, both inbound and outbound.
- Outbound traffic via the second, PE-based GUA is apparently filtered
somewhere, but not by packet filtering; I don't know where. There are no
error messages. From the kernel and the IPv6 stack inside the VM everything
looks completely OK, except that outbound traffic from the PE address is
silently dropped. The MAC-based IPv6 address keeps working without error,
as before.
My questions:
1. Is there a special setting for the VM that allows unrestricted use of
IPv6 Privacy Extensions?
2. Or is this possibly a known and currently unsolved problem?
3. Or is this an intended limitation of the virtualization?
Can anyone help me with a solution or a hint? Thank you.
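For reference, this is how I check inside the VM that privacy extensions are
enabled and which source address the kernel would actually pick; eth0 and the
destination address are just examples:
# 2 = prefer temporary (privacy) addresses for outgoing connections
sysctl net.ipv6.conf.eth0.use_tempaddr
# show which source address would be chosen for an outbound route
ip -6 route get 2001:db8::1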
BR, Tom
5 years, 2 months
[libvirt-users] Certificate checking on TLS migrations to an IP address
by Milan Zamazal
Hi, I'm trying to add TLS migrations to oVirt, but I've hit a problem
with certificate checking.
oVirt uses the destination host IP address, rather than the host name,
in the migration URI passed to virDomainMigrateToURI3. One reason for
doing that is that a separate migration network may be used for
migrations, while the host name resolves to the management network
interface.
But it causes a problem with certificate checking. The destination IP
address is checked against the name, which is a host name, given in the
destination certificate. That means there is mismatch and the migration
fails. I don't think it'd be a very good idea to avoid the problem by
putting IP addresses into server certificates.
Is there any way to make TLS migrations work under these
circumstances? For instance, SPICE remote-viewer allows the client to
specify the certificate subject to expect on the host when connecting to
it using an IP address. Can (or could) libvirt do something similar?
Or is there any other mechanism to handle this problem?
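For reference, this is how I inspect which names the destination's server
certificate actually presents, using the default libvirt TLS certificate path,
so the reason for the mismatch is visible:
# on the destination host: show the subject and any subjectAltName entries
openssl x509 -in /etc/pki/libvirt/servercert.pem -noout -text | \
    grep -A1 -E 'Subject:|Subject Alternative Name'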
Thanks,
Milan
5 years, 2 months
[libvirt-users] VM cloning with memory state
by Benny Zlotnik
Hi,
We are trying to implement VM cloning with snapshots for oVirt. For
the most part, everything seems to work fine. However, when
saved state comes into play it gets complicated. Since it seems
impossible to restore a saved-state file for a different VM, due to the
mismatch in UUID and name, I was wondering if it is possible to
somehow work around this?
Is it a sensible thing to do? (These are essentially the same VMs in
terms of configuration and disk content.)
Is there a better approach to cloning a VM entirely (with snapshots and memory)?
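For reference, the mechanism I was thinking of trying is to edit the XML embedded
in the save file, though I don't know whether a changed name/UUID would pass the
compatibility check on restore; the file names below are just examples:
# dump the domain XML embedded in the save file
virsh save-image-dumpxml /var/lib/libvirt/qemu/save/vm1.save > vm1.xml
# ... edit name/uuid in vm1.xml ...
# restore the saved state using the modified XML
virsh restore /var/lib/libvirt/qemu/save/vm1.save --xml vm1.xml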
Thanks!
5 years, 2 months