[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), the command reports success but the interface is never
actually removed. I'm not sure whether there is an existing bug for this.
I have confirmed with someone that disks show similar behavior; is this
also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (expanding the sleep to 10
seconds), it succeeds and the interface is gone from the dumpxml output:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
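For what it's worth, a sketch of the workaround I am considering, assuming
the guest runs qemu-guest-agent (that is an assumption about the guest, not
something shown above):
# poll the guest agent until it answers, i.e. the guest has finished
# booting, then detach; guest-ping is a standard qemu-guest-agent command
until virsh qemu-agent-command rhel7.2 '{"execute":"guest-ping"}' >/dev/null 2>&1; do
    sleep 1
done
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0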
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on the virtio vNIC in
my guest. I have read the document at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values
on and off can be used to turn off host offloading options. By default,
the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
The mrg_rxbuf attribute can be used to control mergeable rx buffers on
the host side. Possible values are on (default) and off. *Since 1.2.13
(QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on
and off can be used to turn off guest offloading options. By default,
the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC of my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works.
However, can I disable UFO without touching the host side, or does it
always have to be disabled on both host and guest as above?
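For reference, this is how I check inside the guest whether the offload was
actually disabled (a sketch; eth0 stands in for whatever the virtio NIC is
called in my guest):
# inside the guest: list the offload settings of the virtio NIC and
# look at the UDP fragmentation offload state
ethtool -k eth0 | grep -i udp-fragmentation-offload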
Thanks,
Brs,
Natsu
[libvirt-users] GVT-g - suboptimal user experience
by Alex Ivanov
Hi.
In the current state of GVT-g the user experience is suboptimal.
So my question is: what are the ETAs for the following features?
1. Accelerated virt-manager console using gvt-g device
2. Custom resolutions or dynamic resolution
3. UEFI VMs support (Windows guest)
Thanks.
[libvirt-users] Autodetecting backing file properties when using vol-create-as
by Gionatan Danti
Hi all,
experimenting with vol-create-as, I think it should autodetect some data,
specifically the file size and the backing file format. However, the
current implementation requires us to specify both.
Considering that qemu-img already autodetects these data, is there any
reason for the lack of autodetection in libvirt? Should I open a bugzilla
issue?
Below you can find a practical example of what I mean. The system is CentOS
Linux release 7.6.1810 (Core) with libvirt-4.5.0-10.el7_6.4.x86_64.
Please let me know if I am missing something.
Thanks.
# create base file
[root@singularity images]# qemu-img create base.qcow2 8G -f qcow2
Formatting 'base.qcow2', fmt=qcow2 size=8589934592 cluster_size=65536
lazy_refcounts=off refcount_bits=16
[root@singularity images]# qemu-img info base.qcow2
image: base.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 17K
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# create overlay1.qcow2 via qemu-img; note how the backing file format is
# autodetected ("-f qcow2" applies to the overlay file itself)
[root@singularity images]# qemu-img create -b /var/lib/libvirt/images/base.qcow2 overlay1.qcow2 -f qcow2
Formatting 'overlay1.qcow2', fmt=qcow2 size=8589934592
backing_file=/var/lib/libvirt/images/base.qcow2 cluster_size=65536
lazy_refcounts=off refcount_bits=16
[root@singularity images]# qemu-img info overlay1.qcow2
image: overlay1.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 17K
cluster_size: 65536
backing file: /var/lib/libvirt/images/base.qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# try the same with virsh vol-create-as; note how you must specify the
# file size, and that the backing file format is *wrong* (leading to an
# unusable overlay disk)
[root@singularity images]# virsh vol-create-as default overlay2.qcow2 8G --format qcow2 --backing-vol /var/lib/libvirt/images/base.qcow2
Vol overlay2.qcow2 created
[root@singularity images]# qemu-img info overlay2.qcow2
image: overlay2.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 17K
cluster_size: 65536
backing file: /var/lib/libvirt/images/base.qcow2
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
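For the record, the wrong backing format can be avoided by stating it
explicitly (a sketch; overlay3.qcow2 is just a fresh name for illustration,
and --backing-vol-format tells libvirt the backing volume's format instead
of letting it default to raw):
# same command as above, but with the backing file format given explicitly
virsh vol-create-as default overlay3.qcow2 8G --format qcow2 \
    --backing-vol /var/lib/libvirt/images/base.qcow2 \
    --backing-vol-format qcow2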
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] Running all my virtual machines with a low priority
by R. Diez
Hi all:
I have an Ubuntu 18.04 system. What is the easiest way to run all of my virtual machines with a low priority, say a "nice" level of 15?
I just do not want my virtual machines to have too much of an impact on any other processes on the system.
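A minimal sketch of the kind of thing I have in mind, assuming the QEMU
processes can simply be re-niced after they start (the qemu-system pattern
is a guess for my setup):
# bump every running QEMU process to nice level 15;
# pgrep -f matches against the full command line
for pid in $(pgrep -f qemu-system); do
    renice -n 15 -p "$pid"
done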
Thanks in advance,
rdiez
[libvirt-users] Cloning a volume via storage XML
by Weller, Lennart
Hello everyone,
After the bug in libvirt was found and fixed by Michal, I am now looking
for a way to actually do the task I intended to do. I cannot find any
information on whether it is possible to clone a base volume, a la
vol-clone, for Ceph RBD.
As I posted in my addendum to the first post here, I thought something
like this volume XML would do the job:
<volume>
  <name>coreos00.disk</name>
  <capacity unit="bytes">9116319744</capacity>
  <target>
    <format type="raw"/>
    <permissions>
      <mode>644</mode>
    </permissions>
  </target>
  <backingStore>
    <path>vmdisks/coreos_2023</path>
    <format type="raw"/>
  </backingStore>
</volume>
But even with the fix applied, this simply creates an empty volume of the
given size. Granted, the XML carries no information about the snapshot, so
I was sceptical that it would be enough in the first place.
If vol-clone-style behavior is not an option right now via volume XML,
that's not a big deal, as I could just patch the terraform-libvirt-provider
to do the task beforehand. But I wanted to know whether there exists an
option for libvirt to do it automatically when provided with an XML. I
can't find any such option in the storage XML documentation, so my guess
would be no.
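For completeness, the fallback I would script outside libvirt looks roughly
like this (a sketch; my_rbd_pool is a hypothetical libvirt pool name, and
pool-refresh just makes libvirt rescan the pool for the new image):
# clone the protected snapshot with the rbd CLI, then let libvirt pick it up
rbd clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
virsh pool-refresh my_rbd_pool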
Kind regards,
Lennart Weller
[libvirt-users] libvirtd via unix socket using system uri
by lameventanas@gmail.com
I want to run libvirtd as a special user, and allow users that belong to a
special group to connect via qemu+unix:///system (i.e. over the unix socket).
I did everything necessary to do so: created a libvirt user and group,
added the libvirt user to the kvm group, added my normal user to the
libvirt group, and made sure the socket is owned by libvirt:libvirt with
permissions set to 770.
libvirtd starts successfully, but when I try to connect as the normal
user I get this error:
bash$ virsh --connect qemu+unix://system
error: failed to connect to the hypervisor
error: invalid argument: using unix socket and remote server 'system' is
not supported.
A trace shows that virsh is not even trying to open the socket.
I want to use the socket because I only need local connectivity and don't
want to set up SASL and certificates for this, but at the same time I want
to run libvirtd as a dedicated user.
Is there any reason to prevent libvirt from being used like this?
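For reference, the invocation I expect to work looks like this (a sketch,
assuming the default socket location; note the three slashes, so that
'system' is a path component rather than a remote host name):
# the socket= URI parameter points the remote driver at an explicit socket
virsh --connect 'qemu+unix:///system?socket=/var/run/libvirt/libvirt-sock' list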
[libvirt-users] Error connecting to hypervisor
by Sukrit Bhatnagar
Hi,
I have compiled and installed libvirt from a git checkout and started the
libvirtd service. The version is 5.3.0, and I have done a system-wide
installation.
When I do a `virsh list`, I get the following error:
error: failed to connect to the hypervisor
error: Unable to encode message header
Similarly, the libvirtd instance shows the following error:
2019-04-23 21:36:57.777+0000: 1807: info : libvirt version: 5.3.0
2019-04-23 21:36:57.777+0000: 1807: info : hostname: dell
2019-04-23 21:36:57.777+0000: 1807: error : virNetSocketReadWire:1803
: End of file while reading data: Input/output error
Furthermore, virnetmessagetest shows the following errors:
$ VIR_TEST_DEBUG=1 ./virnetmessagetest
TEST: virnetmessagetest
1) Message Header Encode
... libvirt: XML-RPC error : Unable to encode message header
FAILED
2) Message Header Decode
... libvirt: XML-RPC error : Unable to decode message length
FAILED
3) Message Payload Encode
... libvirt: XML-RPC error : Unable to encode message header
FAILED
4) Message Payload Decode
... libvirt: XML-RPC error : Unable to decode message length
FAILED
5) Message Payload Stream Encode
... libvirt: XML-RPC error : Unable to encode message header
FAILED
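In case it helps, this is how I collect more client-side detail while
reproducing the failure (a sketch using libvirt's standard LIBVIRT_DEBUG
environment variable):
# dump full client-side debug logging and keep the tail for inspection
LIBVIRT_DEBUG=1 virsh list 2>&1 | tail -n 40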
I am also compiling qemu from source, and the virsh commands ran fine when
I was using libvirt 4.7.0 from the Fedora repository.
Any ideas how I can solve this issue?
Thanks,
Sukrit
[libvirt-users] Libvirt pool cannot see or create rbd clones
by Weller, Lennart
Hello everyone,
To increase my odds of finding an answer, I also wanted to ask here.
This is my post from serverfault[1], verbatim:
While trying to get a cloned disk running from my OS snapshot, I ran into
the problem that libvirt cannot see existing images cloned from a snapshot.
The clone was created via:
# rbd -p vmdisks clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
The base image has the one snapshot 'base' and is protected. The cloned
disk is created just fine:
# rbd -p vmdisks info coreos00.disk
rbd image 'coreos00.disk':
        size 8.49GiB in 2174 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.48a99c6b8b4567
        format: 2
        features: layering
        flags:
        create_timestamp: Thu Apr 25 14:46:52 2019
        parent: vmdisks/coreos_2023@base
        overlap: 8.49GiB
I temporarily have libvirt configured with an RBD pool that uses the Ceph
admin user. But I cannot see the cloned disk, just the parent:
virsh # vol-list --pool rbd_image_root
 Name          Path
----------------------------------------
 coreos_2023   vmdisks/coreos_2023
If I try to create the cloned image from within virsh, I run into the
following issue:
virsh # vol-clone --pool rbd_image_root coreos_2023 coreos00.disk
error: Failed to clone vol from coreos_2023
error: failed to iterate RBD snapshot coreos_2023@base: Operation not
permitted
Note that this pool uses the Ceph admin user, which makes the 'Operation
not permitted' a tad odd.
Am I missing a configuration option here that would allow the pool to use
clones? I can't find any information on this in the documentation so far,
and the source code of libvirt looks like it should support both features.
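For reference, this is how I checked which capabilities the pool's user
actually has (a sketch, assuming the admin keyring is in its default
location):
# show the capabilities Ceph grants to the admin user the pool is using
ceph auth get client.admin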
Versions:
Libvirt Machine: Ubuntu 18.04
Compiled against library: libvirt 4.0.0
Using library: libvirt 4.0.0
Using API: QEMU 4.0.0
Running hypervisor: QEMU 2.11.1
Ceph Machine: openSUSE Leap 42.3
Ceph 12.2.5
[1]
https://serverfault.com/questions/964586/libvirt-pool-cannot-see-or-creat...