[libvirt-users] Error starting domain: internal error: Unable to add port vnet0 to OVS bridge br0
by Harsh Gondaliya
I have installed OVS from source using the installation steps described at
this link: http://docs.openvswitch.org/en/latest/intro/install/general/
I had installed libvirt, KVM, QEMU and all the other necessary packages using
apt-get. My KVM-QEMU hypervisor has been running well.
To add a VM with a port attached to the OVS bridge, I changed the domain XML
file as per the instructions on this page:
http://docs.openvswitch.org/en/latest/howto/libvirt/
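The relevant interface section of the domain XML, following that howto, is of
this form (a sketch of the documented pattern rather than my exact XML; br0 is
the bridge shown further below):
    <interface type='bridge'>
      <source bridge='br0'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
    </interface>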
But when I start the VM using the Virtual Machine Manager, I get the
following error:
*Error starting domain: internal error: Unable to add port vnet0 to OVS
bridge br0*
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1402, in startup
    self._backend.create()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1035, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: Unable to add port vnet0 to OVS bridge br0
My output for ovs-vsctl show:
3c28f516-dd5c-43cf-bea1-7c068668d1f6
    Bridge "br0"
        Port "enp0s31f6"
            Interface "enp0s31f6"
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.11.90"
*However, when OVS is installed using apt-get rather than from source or a
tarball, all of these steps work fine.*
Please guide me as to why this error is occurring. I am using Ubuntu 16.04 LTS
as my host machine. Many users have run into this issue and reported it on the
OVS and other mailing lists, but no one has been able to give a satisfactory
solution.
Regards,
Harsh
[libvirt-users] use dirty bitmap to backup raw images
by Zhan Adlun
Dear Sir,
I use dirty bitmaps to back up KVM raw images. I did not shut down or suspend
the VM during the backup.
I tried exporting the backup as either raw or qcow2 images when doing full
backups (the original disk is in raw format), and all backup jobs finish
successfully. But only the raw image works and its guest OS runs well; the
qcow2 backup has data corruption and its guest OS ends up in rescue mode. This
happens every time, and no operations are performed on the VM while the backup
is running.
export as raw disk
{ "type": "drive-backup", "data": {"device": "drive-virtio-disk0", "target": "/datastore/centos.raw","sync":"full" ,"format": "raw"}}
export as qcow2 disk
{ "type": "drive-backup", "data": {"device": "drive-virtio-disk0", "target": "/datastore/centos.raw","sync":"full" ,"format": "qcow2"}}
After I suspend the VM and then export the qcow2 disk, the disk works.
Are there any limits when using bitmaps to back up raw disks? For example,
should the backup disk be kept in the same format as the original while the VM
is running?
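For context, this is roughly how a dirty bitmap is created alongside the first
full backup in a single QMP transaction (a sketch only; the bitmap name
"bitmap0" is a placeholder, and the device/target values are taken from the
commands above):
    { "execute": "transaction",
      "arguments": {
        "actions": [
          { "type": "block-dirty-bitmap-add",
            "data": { "node": "drive-virtio-disk0", "name": "bitmap0" } },
          { "type": "drive-backup",
            "data": { "device": "drive-virtio-disk0",
                      "target": "/datastore/centos.raw",
                      "sync": "full", "format": "raw" } }
        ]
      }
    }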
[libvirt-users] KVM - full system ( disk+memory) snapshot by excluding the raw disks
by Suresh Babu Kandukuru
Hi There ,
I have a KVM VM with 4 qcow2 disks and 2 raw disks. When I try to take a full
system snapshot while excluding the raw disks, it gives the error below. Can
you help me fix this? Or is it even possible to take a full system snapshot in
this case?
When I change the XML to use internal snapshots for the raw disks, it throws a
message that snapshots are not supported on raw disks.
[root@localhost oscgpkg-7eee963550ba9e5b]# virsh snapshot-create --domain oscg2 --xmlfile oscgsnapshot.xml
error: unsupported configuration: disk 'vda' must use snapshot mode 'internal'
[root@localhost oscgpkg-7eee963550ba9e5b]#
[root@localhost oscgpkg-7eee963550ba9e5b]# cat oscgsnapshot.xml
<domainsnapshot>
  <description>oscg snapshot</description>
  <memory snapshot='internal'/>
  <disks>
    <disk name='hda' snapshot='internal'/>
    <disk name='hdb' snapshot='internal'/>
    <disk name='hdc' snapshot='internal'/>
    <disk name='hdd' snapshot='internal'/>
    <disk name='vda' snapshot='no'/>
    <disk name='vdb' snapshot='no'/>
  </disks>
</domainsnapshot>
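For completeness, the same request expressed with snapshot-create-as (the
snapshot name 'oscg-snap' is just a placeholder); as far as I can tell it is
equivalent to the XML above:
    virsh snapshot-create-as oscg2 oscg-snap "oscg snapshot" \
        --memspec snapshot=internal \
        --diskspec hda,snapshot=internal --diskspec hdb,snapshot=internal \
        --diskspec hdc,snapshot=internal --diskspec hdd,snapshot=internal \
        --diskspec vda,snapshot=no --diskspec vdb,snapshot=no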
[libvirt-users] ANNOUNCE: Oz 0.17.0 release
by Chris Lalancette
All,
I'm pleased to announce release 0.17.0 of Oz. Oz is a program for
doing automated installation of guest operating systems with limited
input from the user. Release 0.17.0 switches Oz to be python3 only,
since Python 2 support is ending soon. There are also some minor fixes
in here, along with the addition of support for some new OSs.
A tarball and zipfile of this release are available on the Github
releases page: https://github.com/clalancette/oz/releases . Packages for
Fedora-30 and Rawhide will be built in Koji and will eventually make
their way to stable. Instructions on how to get and use Oz are
available at http://github.com/clalancette/oz/wiki .
If you have questions or comments about Oz, please feel free to contact
me at clalancette at gmail.com, or open up an issue on the github page:
http://github.com/clalancette/oz/issues .
Thanks to everyone who contributed to this release through bug reports,
patches, and suggestions for improvement.
Chris Lalancette
[libvirt-users] vlan tagging for openVSwitch
by lejeczek
hi everyone,
I'm trying to get VLANs tagged in libvirt, as my switch end (yes, traffic will
be leaving the host and going into the network switches) allows only tagged
VLANs.
But with the network defined as:
...
  <portgroup name='vlan-55'>
    <vlan trunk='yes'>
      <tag id='55'/>
    </vlan>
  </portgroup>
</network>
and guest as:
<interface type='network'>
  <mac address='52:54:00:15:00:26'/>
  <source network='ovsbr0' portgroup='vlan-55'/>
  <model type='virtio'/>
</interface>
When the guest is fully initialized vSwitch shows:
...
_uuid : b3c130db-fa84-49f8-9cf5-824ec8cf3b81
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [35c0a914-a21a-43d7-9f63-adacffbb62bc]
lacp : []
mac : []
name : "ovsbr0"
other_config : {}
qos : []
statistics : {}
status : {}
tag : []
trunks : []
vlan_mode : []
No tags, no trunks, no vlan mode???
Is there something I missed (in the docs, though I have searched extensively)?
I also tried adding mode='trunk' to <tag id='55'/>; virsh does not complain,
but the next time I edit the guest the mode attribute is gone.
My vSwitch's bridge has only one physical interface (towards the network
switch). I tried setting that interface with and without a tag, and with and
without vlan_mode, but if the guest comes up with the above libvirt vSwitch
initialization, the guest cannot ping the network switch no matter what the
physical interface is set to.
I'm on CentOS 7.6 with libvirt-4.5.0-10.el7_6.4.x86_64 &
openvswitch-2.0.0-7.el7.x86_64.
What can be the problem here?
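For reference, this is the plain access-tag form of the portgroup (no
trunk='yes'), which, as far as I understand the libvirt network XML
documentation, should translate into a simple tag on the OVS port; I include
it only as a sketch of what I would expect to use:
    <portgroup name='vlan-55'>
      <vlan>
        <tag id='55'/>
      </vlan>
    </portgroup>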
many thanks, L.
[libvirt-users] KVM-Docker-Networking using TAP and MACVLAN
by Lars Lindstrom
Hi everyone!
I have the following requirement: I need to connect a set of Docker
containers to a KVM. The containers shall be isolated in a way that they
cannot communicate with each other without going through the KVM, which
will act as a router/firewall. For this, I thought about the following
simple setup (as opposed to a more complex one involving a bridge with
vlan_filtering and a separate VLAN for each container):
+------------------------------------------------------------------+
| Host |
| +-------------+ +----------------------+---+
| | KVM | | Docker +-> | a |
| | +----------+ +----------+ +--------------+ +---+
| | | NIC lan0 | <-> | DEV tap0 | <-> | NET macvlan0 | <-+-> | b |
| | +----------+ +----------+ +--------------+ +---+
| | | | +-> | c |
| +-------------+ +----------------------+---+
| |
+------------------------------------------------------------------+
NIC lan0:
<interface type='direct'>
  <source dev='tap0' mode='vepa'/>
  <model type='virtio'/>
</interface>
*** Welcome to pfSense 2.4.4-RELEASE-p1 (amd64) on pfSense ***
LAN (lan) -> vtnet0 -> v4: 10.0.20.1/24
DEV tap0:
[root@server ~]# ip tuntap add tap0 mode tap
[root@server ~]# ip l set tap0 up
[root@server ~]# ip l show tap0
49: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether ce:9e:95:89:33:5f brd ff:ff:ff:ff:ff:ff
[root@server ~]# virsh start pfsense
[root@server opt]# ip l show tap0
49: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether ce:9e:95:89:33:5f brd ff:ff:ff:ff:ff:ff
NET macvlan0:
[root@server ~]# docker network create --driver macvlan
--subnet=10.0.20.0/24 --gateway=10.0.20.1 --opt parent=tap0 macvlan0
CNT a:
[root@server ~]# docker run --network macvlan0 --ip=10.0.20.2 -it
alpine /bin/sh
/ # ping -c 4 10.0.20.1
PING 10.0.20.1 (10.0.20.1): 56 data bytes
--- 10.0.20.1 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:0A:00:14:02
inet addr:10.0.20.2 Bcast:10.0.20.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:448 (448.0 B) TX bytes:448 (448.0 B)
/ # ip r
default via 10.0.20.1 dev eth0
10.0.20.0/24 dev eth0 scope link src 10.0.20.2
CNT b:
[root@server ~]# docker run --network macvlan0 --ip=10.0.20.2 -it
alpine /bin/ping 10.0.20.1
PING 10.0.20.1 (10.0.20.1): 56 data bytes
CNT c:
[root@server ~]# docker run --network macvlan0 --ip=10.0.20.2 -it
alpine /bin/ping 10.0.20.1
PING 10.0.20.1 (10.0.20.1): 56 data bytes
The KVM is not reachable from within a Docker container (firewalld was
disabled during the test) and vice versa. The first thing I noticed is that
tap0 remains NO-CARRIER and DOWN even though the KVM has been started.
Shouldn't the link come up as soon as the KVM is started (and thus connected
to the tap0 device)? The next thing that looked strange to me: even though the
interface and routing configuration within the container looks OK, there are 0
packets TX/RX on eth0 after pinging the KVM (but 4 on lo instead).
Any idea on how to proceed from here? Is this a valid setup and a valid
libvirt configuration for that setup?
Thanks and br, Lars
[libvirt-users] How to insert a dummy NIC
by wferi@niif.hu
Hi,
I have to host (with KVM) an appliance which does not use its second and
third NIC. They have to be present in the guest, but they'd better stay
totally disconnected from anything in the host. "Second" and "third"
apparently means bus order. Let's consider virtio devices only. I think
the best technical solution is adding -device virtio-net-pci,addr=0x3 and
similar options to the KVM command line, without any corresponding
-netdev options (better ideas welcome). QEMU emits "Warning: nic
virtio-net-pci.2 has no peer" messages, but that's expected. I can even
do this much using the <qemu:commandline> element, but libvirt assigns
the 0x3 address to other virtio devices, leading to collision. Is there
a way to "reserve" a bus address for such manually added devices without
assigning explicit addresses to all other devices in the configuration?
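For reference, the <qemu:commandline> fragment I mean looks roughly like this
(one peerless device shown; note the qemu XML namespace declaration on the
<domain> element, which is required for it):
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      ...
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='virtio-net-pci,addr=0x3'/>
      </qemu:commandline>
    </domain>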
Things I also tried (and found inadequate):
* Using "generic ethernet connection" for the dummy NICs. Close, but
requires extra permissions for accessing /dev/net/tun, and technically
feels a little inferior to using a peerless network device like above.
* TCP tunnel server. Even more inferior, does not require extra
permissions but leaves even looser ends (listening sockets). Also, the
RelaxNG grammar does not let me specify a model for this interface
type, so maintaining bus order with respect to the virtio interfaces is
impossible. A grammar bug?
* Using a dummy VLAN in the bridge. This is what I temporarily settled
for, but this requires global agreement and still technically inferior,
so I'd like to move away.
* A <network> without forwarding. Still inferior, and also requires
configuration sharing across the host cluster.
--
Thanks,
Feri
[libvirt-users] Obtaining the PID of a domain's QEMU process from C
by Shawn Anastasio
Hello all,
I'm currently writing a C program that uses the libvirt API and I need a
way to obtain the pid of a given domain's QEMU process.
Specifically, I'm writing an ivshmem server that uses SO_PEERCRED to get
the pid of clients that connect to it, and I would like to use that pid
to look up the domain in libvirt to determine the proper domain ID to
return to the client.
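The SO_PEERCRED part itself is straightforward; roughly like this (a sketch,
where client_fd is the accepted ivshmem client socket):
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Return the PID of the process on the other end of a connected
     * AF_UNIX socket via SO_PEERCRED, or -1 on error. */
    pid_t peer_pid(int client_fd)
    {
        struct ucred cred;
        socklen_t len = sizeof(cred);

        if (getsockopt(client_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0) {
            perror("getsockopt(SO_PEERCRED)");
            return (pid_t)-1;
        }
        return cred.pid;
    }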
As far as I can tell, libvirt doesn't expose this information in an easy
to access manner. Of course it is possible to call `ps` and grep for the
information I'm looking for, but I was hoping for a cleaner solution.
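By calling `ps` and grepping I mean something along these lines ("mydomain" is
an example domain name; the -name argument libvirt passes to QEMU contains it):
    ps -eo pid,cmd | grep '[q]emu.*mydomain'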
If anybody knows how to do this, advice would be greatly appreciated.
Thanks in advance,
Shawn
[libvirt-users] why attach-disk can't be effective when the guest is booting
by Jianan Gao
Hi,
When i use "virsh attach-device rhel disk.xml" when the guest is
booting,then i can't find the disk in the guest after booting.But i can
find it by "virsh domblklist rhel",and when i want to detach it from the
guest and use "virsh detach-disk rhel vdb", i still can find the disk by
"virsh domblklist rhel"
The disk.xml is like:
<disk type="file" device="disk">
<driver name='qemu' type='raw' cache='none'/>
<source file="/var/lib/libvirt/images/foo.img"/>
<target dev="vdb" bus="virtio"/>
</disk>
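For reference, the attach call also takes flags that control whether the
change applies to the live guest, the persistent configuration, or both; as
shown above I used none of them:
    virsh attach-device rhel disk.xml --live       # hotplug into the running guest only
    virsh attach-device rhel disk.xml --config     # add to the persistent definition only
    virsh attach-device rhel disk.xml --persistent # apply to both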
The libvirt release is 5.0.0.4 and the qemu-kvm release is 3.1.0.
So I want to know why attach-disk fails to take effect when the guest is
booting, and why it is designed this way. Maybe it would be better to return a
failure if the disk cannot be attached at that point.