On Wed, Mar 31, 2021 at 02:11:40PM +0200, Michal Privoznik wrote:
> On 3/31/21 1:36 PM, Daniel P. Berrangé wrote:
> On Wed, Mar 31, 2021 at 01:09:32PM +0200, Michal Privoznik wrote:
> > On 3/30/21 11:35 AM, Waleed Musa wrote:
> > > Hi all,
> > >
> > > I see that libvirt supports attaching/detaching devices to an existing
> > > domain XML using the attachDeviceFlags and detachDeviceFlags APIs.
> > > We are adding some qemu command-line arguments to the domain XML,
> > > referring to interfaces by alias name before starting the VM, but we
> > > will face an issue when hot-plugging such devices, so I have two
> > > questions here:
> > >
> > > 1. Is it possible to set alias names for interfaces? I saw the alias
> > > is ignored when I add it to the domain XML before starting the VM.
> > > 2. Is there a way or API to attach qemu command-line arguments to a
> > > running domain, as is done when attaching a device with
> > > attachDeviceFlags?
> > >
> > > Example of my xml:
> > >
> > > <domain type='kvm' id='5' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
> > >   <devices>
> > >     <interface type='vhostuser'>
> > >       <mac address='fa:16:3e:ac:12:4c'/>
> > >       <source type='unix' path='/var/lib/vhost_sockets/sockbbb6bbe9-eb5' mode='server'/>
> > >       <target dev='tapbbb6bbe9-eb'/>
> > >       <model type='virtio'/>
> > >       <driver queues='4' rx_queue_size='512' tx_queue_size='512'/>
> > >       <alias name='net0'/>
> > >       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> > >     </interface>
> > >   </devices>
> > >   <qemu:commandline>
> > >     <qemu:arg value='-set'/>
> > >     <qemu:arg value='device.net0.page-per-vq=on'/>
> > >     <qemu:arg value='-set'/>
> > >     <qemu:arg value='device.net0.host_mtu=8942'/>
> > >   </qemu:commandline>
> > > </domain>
> >
> > Is this perhaps related to the following bug?
> >
> > "interface type='vhostuser' libvirtError: Cannot set interface MTU"
> > https://bugzilla.redhat.com/show_bug.cgi?id=1940559
> >
> > Are you trying to work around it? I am discussing with Moshe how
> > libvirt can help, but honestly, I don't like the solution I proposed.
> >
> > Long story short, the interface is in a different container (along
> > with the OVS bridge), and thus when we query ovs-vsctl it connects to
> > the system instance and doesn't find that interface. What I proposed
> > was to allow specifying a path to the OVS db.socket, but this would
> > need to be done on a per-domain basis.
>
> If it is possible to specify an OVS db.socket path in the XML, then
> why can't this path simply be bind-mounted to the right location
> in the first place?
> Because then you'd override the path for the system-wide OVS. I mean, by
> default the socket is under /var/run/openvswitch/db.sock, and if you have
> another OVS running inside a container you can expose it under
> /prefix/var/run/openvswitch/db.sock. ovs-vsctl allows this via its
> --db=$path argument.
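For concreteness, a sketch of that --db override, reusing the hypothetical
/prefix path from above (the containerized OVS is reached explicitly, while
the system-wide one keeps the default socket location):

```shell
# Hypothetical container filesystem prefix; --db points ovs-vsctl at
# the containerized OVS database instead of the system-wide default
# /var/run/openvswitch/db.sock.
PREFIX=/prefix
SOCK=$PREFIX/var/run/openvswitch/db.sock
echo "ovs-vsctl --db=unix:$SOCK list-br"
```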
Oh, right, I was misunderstanding the scenario.

I've always thought that if we wanted to support use of resources from
other namespaces, then that would involve a namespace="$PID" attribute
to identify the namespace:

  <source type='unix' namespace='523532'
          path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>

IOW, libvirt could enter the namespace, and then talk to the OVS
db.sock at its normal location.

It would be a bit strange having namespace refer to a different
NS for the OVS, but using the path=/var/lib/..... from the
current namespace. But as long as we define the semantics
clearly it's not the end of the world.
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|