[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails: virsh reports success, but the interface is still present. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
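As a possible workaround, something like the retry loop below avoids relying on a fixed sleep: keep re-checking the live XML until the interface is actually gone (a sketch only, not a proper fix; domain name and MAC are the ones from my test above):

for i in $(seq 1 10); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    # stop as soon as the interface really disappears from the live XML
    if ! virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; then
        echo "interface detached"
        break
    fi
    # note: re-issuing the detach may report an error while the first
    # request is still pending in the guest
    sleep 2
done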
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
Need more doc for libvirt-console-proxy
by Guy Godfroy
Hello,
I'm making a web app for my company that will let different teams manage their own VMs. I want to make it possible to interact with each VM's console, so I plan to use xterm.js over WebSockets.
That's how I discovered libvirt-console-proxy [1] while looking for something to expose a libvirt console over a WebSocket. It seems like the right tool for the job.
The only documentation I found is an article from 2017 [2]. After trying to understand it and the --help output, I still have many questions. I am really bad at reading code, so I can't get answers from the sources either.
My main concern is: how is a client supposed to talk to the proxy? It is said that a security token must be provided. How? An HTTP header? Which one? Am I missing something in the WebSocket protocol? I think an example client implementation would help a lot.
Also, I tried to use virtconsoleresolveradm to set up metadata on my domains as explained in the article [2]:
./virtconsoleresolveradm enable milou
Enabled access to domain 'milou'
But that doesn't seem to do anything (except defining the metadata
namespace in the XML):
virsh metadata milou http://libvirt.org/schemas/console-proxy/1.0
<consoles/>
Note that I already have this in my XML:
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
Should I remove that? Should I edit that?
Thanks for your help.
Guy Godfroy
[1] https://gitlab.com/libvirt/libvirt-console-proxy
[2]
https://www.berrange.com/posts/2017/01/26/announce-new-libvirt-console-pr...
Fail to do blockcopy to network dest xml with --reuse-external
by Han Han
Hello, libvirt developers,
Recently I found an issue with libvirt blockcopy:
Versions:
libvirt-7.4.0
qemu-kvm-6.0.0
Steps:
1. Create an NBD server
# qemu-img create -f qcow2 /var/lib/libvirt/images/fedora-1.qcow2 10G \
    -o preallocation=full
# qemu-nbd -e 10 /var/lib/libvirt/images/fedora-1.qcow2 -p 10001
2. Prepare a running VM
# virsh list
Id Name State
------------------------
3 fedora running
# virsh dumpxml 3 | xmllint --xpath //disk -
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/var/lib/libvirt/images/fedora.qcow2" index="1"/>
  <backingStore/>
  <target dev="hda" bus="ide"/>
  <alias name="ide0-0-0"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
3. Blockcopy to an NBD dest XML with --reuse-external
# cat /tmp/copy.xml
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol="nbd">
    <host name="localhost" port="10001"/>
  </source>
  <backingStore/>
  <target dev='hda' bus='ide'/>
</disk>
# virsh blockcopy fedora hda --xml /tmp/copy.xml --transient-job --wait \
    --verbose --finish --reuse-external
error: unsupported configuration: reused mirror destination format must be specified
But it works without --reuse-external:
# virsh blockcopy fedora hda --xml /tmp/copy.xml --transient-job --wait \
    --verbose --finish
Block Copy: [100 %]
Successfully copied
Since it is clear that the format of the destination image is qcow2 (it is given in the <driver> element), the error message "reused mirror destination format must be specified" is wrong. Blockcopy with a network disk plus --reuse-external should either be supported or produce a better error message.
I am not sure whether the --reuse-external flag is meant only for file-type disks. The description of VIR_DOMAIN_BLOCK_COPY_REUSE_EXT (
https://github.com/libvirt/libvirt/blob/7c08141f906e20e730c4b6407bc638e74...)
seems to indicate this flag is for files only:
 * VIR_DOMAIN_BLOCK_COPY_REUSE_EXT flag is present stating that the file
 * was pre-created with the correct format and metadata and sufficient
 * size to hold the copy. In case the VIR_DOMAIN_BLOCK_COPY_SHALLOW flag
 * is used the pre-created file has to exhibit the same guest visible
 * contents as the backing file of the original image. This allows a
 * management app to pre-create files with relative backing file names,
 * rather than the default of absolute backing file names.
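For comparison, this is the file-destination usage that the documentation above seems to assume: pre-create the destination with the right format and size, then pass the format explicitly (a sketch only; the path is an example and I have not re-verified this exact command with these versions):

# pre-create a destination of sufficient size and the desired format
# qemu-img create -f qcow2 /var/lib/libvirt/images/fedora-copy.qcow2 10G
# virsh blockcopy fedora hda /var/lib/libvirt/images/fedora-copy.qcow2 \
    --format qcow2 --reuse-external --transient-job --wait --verbose --finish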
Please help confirm whether this is a bug.
issue when not using acpi indices in libvirt 7.4.0 and qemu 6.0.0
by Riccardo Ravaioli
Hi everyone,
We have an issue with how network interfaces are presented in the VM with the latest libvirt 7.4.0 and qemu 6.0.0. Previously, we were on libvirt 7.0.0 and qemu 5.2.0, and we used increasing virtual PCI addresses for every type of network interface (virtio, PCI passthrough, SR-IOV) to control the interface order inside the VM. For instance, the following snippet yields ens1, ens2 and ens3 in a Debian Buster VM:
<interface type="ethernet">
  <target dev="0.vSrv"/>
  <mac address="52:54:00:aa:cc:05"/>
  <address bus="0x01" domain="0x0000" function="0x0" slot="0x01" type="pci"/>
  <model type="virtio"/>
  <driver>
    <host csum="off"/>
  </driver>
</interface>
<interface type="ethernet">
  <target dev="1.vSrv"/>
  <mac address="52:54:00:aa:bb:81"/>
  <address bus="0x01" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
  <model type="virtio"/>
  <driver>
    <host csum="off"/>
  </driver>
</interface>
<hostdev managed="yes" mode="subsystem" type="pci">
  <source>
    <address bus="0x0d" domain="0x0000" function="0x0" slot="0x00"/>
  </source>
  <address bus="0x01" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
</hostdev>
After upgrading to libvirt 7.4.0 and qemu 6.0.0, the XML snippet above
yielded:
- ens1 for the first virtio interface => OK
- rename4 for the second virtio interface => **KO**
- ens3 for the PCI passthrough interface => OK
Argh! What happened to ens2? By running udevadm inside the VM, I see that "rename4" is the result of a conflict between the ID_NET_NAME_SLOT of the second and the third interface, both appearing as ID_NET_NAME_SLOT=ens3. In theory rename4 should show ID_NET_NAME_SLOT=ens2. What happened?
# udevadm info -q all /sys/class/net/rename4
P: /devices/pci0000:00/0000:00:03.0/0000:01:02.0/virtio4/net/rename4
L: 0
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:01:02.0/virtio4/net/rename4
E: INTERFACE=rename4
E: IFINDEX=4
E: SUBSYSTEM=net
E: USEC_INITIALIZED=94191911
E: ID_NET_NAMING_SCHEME=v240
E: ID_NET_NAME_MAC=enx525400aabba1
E: ID_NET_NAME_PATH=enp1s2
E: ID_NET_NAME_SLOT=ens3
E: ID_BUS=pci
E: ID_VENDOR_ID=0x1af4
E: ID_MODEL_ID=0x1000
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Red Hat, Inc.
E: ID_MODEL_FROM_DATABASE=Virtio network device
E: ID_PATH=pci-0000:01:02.0
E: ID_PATH_TAG=pci-0000_01_02_0
E: ID_NET_DRIVER=virtio_net
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/rename4
E: TAGS=:systemd:
# udevadm info -q all /sys/class/net/ens3
P: /devices/pci0000:00/0000:00:03.0/0000:01:03.0/net/ens3
L: 0
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:01:03.0/net/ens3
E: INTERFACE=ens3
E: IFINDEX=2
E: SUBSYSTEM=net
E: USEC_INITIALIZED=3600940
E: ID_NET_NAMING_SCHEME=v240
E: ID_NET_NAME_MAC=enx00900b621235
E: ID_OUI_FROM_DATABASE=LANNER ELECTRONICS, INC.
E: ID_NET_NAME_PATH=enp1s3
E: ID_NET_NAME_SLOT=ens3
E: ID_BUS=pci
E: ID_VENDOR_ID=0x8086
E: ID_MODEL_ID=0x1533
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Intel Corporation
E: ID_MODEL_FROM_DATABASE=I210 Gigabit Network Connection
E: ID_PATH=pci-0000:01:03.0
E: ID_PATH_TAG=pci-0000_01_03_0
E: ID_NET_DRIVER=igb
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/ens3
E: TAGS=:systemd:
Is there anything we can do in the XML definition of the VM to fix this? The PCI tree from within the VM is the following, if it helps (with libvirt 7.0.0 and qemu 5.2.0 it was the same):
# lspci -tv
-[0000:00]-+-00.0 Intel Corporation 440FX - 82441FX PMC [Natoma]
           +-01.0 Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
           +-01.1 Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
           +-01.2 Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II]
           +-01.3 Intel Corporation 82371AB/EB/MB PIIX4 ACPI
           +-02.0 Cirrus Logic GD 5446
           +-03.0-[01]--+-01.0 Red Hat, Inc. Virtio network device
           |            +-02.0 Red Hat, Inc. Virtio network device
           |            \-03.0 Intel Corporation I210 Gigabit Network Connection
           +-04.0-[02]--
           +-05.0-[03]--
           +-06.0-[04]--
           +-07.0-[05]--
           +-08.0-[06]----01.0 Red Hat, Inc. Virtio block device
           +-09.0 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]
           +-0a.0 Red Hat, Inc. Virtio console
           +-0b.0 Red Hat, Inc. Virtio memory balloon
           \-0c.0 Red Hat, Inc. Virtio RNG
I see that a new feature in qemu and libvirt is the ability to assign ACPI indices, so that network interfaces appear as *onboard* devices and are ordered by this index rather than by virtual PCI address. This is great. I see that in this case interfaces appear as eno1, eno2, etc.
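If it helps others, my understanding is that the index is set per interface in the domain XML with an <acpi> subelement, something like the sketch below (based on the first interface above; I have not verified the exact resulting name):

<interface type="ethernet">
  <target dev="0.vSrv"/>
  <mac address="52:54:00:aa:cc:05"/>
  <model type="virtio"/>
  <!-- assumption: the guest should name this NIC eno1 -->
  <acpi index="1"/>
</interface>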
However, for the sake of backward compatibility, is there a way to keep the previous behaviour, where interfaces were named after their PCI slot number (ens1, ens2, etc.)? If I move to the new naming yielded by ACPI indices, I am mostly worried about possible changes in interface names across VMs running different OSes, with respect to what we had with libvirt 7.0.0 and qemu 5.2.0.
Thanks!
Best,
Riccardo Ravaioli
Libvirt hits an issue when doing VM migration
by 梁朝军
Hi All,
Can anyone help me? I hit another issue when doing VM migration. Libvirt throws an error like the one below:
"libvirt: Domain Config error : unsupported configuration: vcpu enable order of vCPU '0' differs between source and destination definitions"
The CPU information defined in the VM domain on the source host is:
<vcpu placement='static'>2</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
</vcpus>
<cputune>
  <vcpupin vcpu='0' cpuset='42'/>
  <vcpupin vcpu='1' cpuset='43'/>
</cputune>
The XML previously generated for the VM on the destination host is:
<vcpu placement='static'>2</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
</vcpus>
<cputune>
  <vcpupin vcpu='0' cpuset='42'/>
  <vcpupin vcpu='1' cpuset='43'/>
</cputune>
What's wrong with it? Or am I missing some configuration steps for the migration?
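For what it's worth, one way to narrow this down (just a diagnostic sketch; VM_NAME is a placeholder) is to compare the XML that is actually transmitted during migration, which can differ from the persistent definition:

# on the source host, dump the migratable XML that will be sent
virsh dumpxml VM_NAME --migratable > /tmp/src-migratable.xml
# then compare its <vcpus> block with what the destination generates
grep -A 4 '<vcpus>' /tmp/src-migratable.xml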
Thanks a lot!
unaccessible directory in lxcContainerResolveSymlinks
by Priyanka Gupta
Hi,
Could someone please let me know when this condition could possibly arise?
lxcContainerResolveSymlinks:621 : Skipped unaccessible '/flash/dir'
The code seems to call access('/flash/dir', F_OK), which should only check for the existence of the directory '/flash/dir'. I have this directory created on my host. Is there anything I am missing?
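One guess on my part (not a confirmed diagnosis): access() is evaluated with the privileges of the calling process, so it can fail for an existing path if any parent directory is not traversable (missing 'x' permission) for that user, or if the check happens in a different mount namespace than my shell. util-linux's namei prints the owner and mode of every path component:

# namei -l /flash/dir
# (a parent component without 'x' for the user running the check would
#  make access() fail even though /flash/dir exists)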
Thanks
Priyanka
KVM Virtual Machine Network - Guest-guest/VM-VM only network (no host/hypervisor access, no outbound connectivity)
by Eduardo Lúcio Amorim Costa
I know that with the *virsh* command I can create several types of networks
(a "NAT network", for example) as we can see in these URLs...
KVM network management <https://programmersought.com/article/52213715009/>
KVM default NAT-based networking
<https://www.ibm.com/downloads/cas/ZVJGQX8E> (page 33)
*QUESTION:* How can I create a network (*lan_n*) where only guests/VMs have
connectivity, with no outbound connectivity and no host/hypervisor
connectivity?
*NOTE:* Connectivity to other resources will be provided by a *pfSense* firewall server that will have access to another network (*wan_n*) with outbound connectivity and other resources. See the sketch after the network layout below.
Network layout...
[N]wan_n
↕
[I]wan_n
[V]pfsense_vm
[I]lan_n
↕
[N]lan_n
↕
.............................
↕ ↕ ↕
[V]some_vm_0 [V]some_vm_1 [V]some_vm_4
[V]some_vm_2 [V]some_vm_5
[V]some_vm_3
_ [N] - Network;
_ [I] - Network Interface;
_ [V] - Virtual Machine.
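My current best guess (unverified) for the *lan_n* side is an isolated network definition like the sketch below: with no <forward> element the network gets no outbound routing, and with no <ip> element the host gets no address on the bridge (the bridge name is just an example). Is this the right approach?

<network>
  <name>lan_n</name>
  <!-- no <forward/>: guest-to-guest only; no <ip/>: no host address -->
  <bridge name='virbr-lan' stp='on' delay='0'/>
</network>

# virsh net-define lan_n.xml
# virsh net-start lan_n
# virsh net-autostart lan_n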
*Thanks! =D*
*ORIGINAL QUESTION:* https://serverfault.com/q/1066478/276753
--
*Eduardo Lúcio*
Technology, Development and Free Software
LightBase Consultoria em Software Público
eduardo.lucio(a)lightbase.com.br
*+55-61-3347-1949* - http://brlight.org - *Brasil-DF*
*Free software! Embrace this idea!*
*"Those who deny freedom to others deserve it not for themselves."*
*Abraham Lincoln*