[Libvirt] Attach qemu commands to a running xml domain

Hi all,

I see in libvirt you support attaching/detaching devices to an existing xml domain using the attachDeviceFlags and detachDeviceFlags APIs. Now we are adding some qemu commands to the xml domain, related to some interfaces, using alias names before starting the VM, but we will face an issue with hot plugging such devices, so I have two questions here:

1. Is it applicable to set alias names for interfaces? I saw the alias is ignored when I add it to the xml domain before starting the VM.
2. Is there a way or API to attach qemu commands to a running domain, as you do when attaching a device using attachDeviceFlags?

Example of my xml:

<domain type='kvm' id='5' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <devices>
    <interface type='vhostuser'>
      <mac address='fa:16:3e:ac:12:4c'/>
      <source type='unix' path='/var/lib/vhost_sockets/sockbbb6bbe9-eb5' mode='server'/>
      <target dev='tapbbb6bbe9-eb'/>
      <model type='virtio'/>
      <driver queues='4' rx_queue_size='512' tx_queue_size='512'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net0.page-per-vq=on'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net0.host_mtu=8942'/>
  </qemu:commandline>
</domain>

Regards,
Waleed Mousa
Software Engineer, Nvidia <https://www.nvidia.com/en-me/geforce/>

On Tue, Mar 30, 2021 at 09:35:48 +0000, Waleed Musa wrote:
Hi all,
I see in libvirt you support attaching/detaching devices to an existing xml domain using the attachDeviceFlags and detachDeviceFlags APIs. Now we are adding some qemu commands to the xml domain, related to some interfaces, using alias names before starting the VM, but we will face an issue with hot plugging such devices, so I have two questions here:
1. Is it applicable to set alias names for interfaces? I saw the alias is ignored when I add it to the xml domain before starting the VM.
User specified aliases are possible with 'ua-' prefix: https://www.libvirt.org/formatdomain.html#devices
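A minimal sketch of what that would look like for the interface in question (assuming only the 'ua-' prefix rule from the linked documentation; the other elements mirror the XML posted above):

```xml
<interface type='vhostuser'>
  <mac address='fa:16:3e:ac:12:4c'/>
  <model type='virtio'/>
  <!-- user-specified aliases must start with 'ua-'; a bare 'net0' is ignored -->
  <alias name='ua-net0'/>
</interface>
```

Presumably the <qemu:arg> values would then need to target device.ua-net0 instead of device.net0.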
2. Is there a way or API to attach qemu commands to running domain as you are doing in attaching the device using attachDeviceFlags?
No. For VMs which use <qemu:commandline> we don't formally provide support. I suggest that if you have a good use case for the attributes below, you send patches adding the functionality to libvirt if you want to have it on device hotplug.
[...]

________________________________
From: Peter Krempa <pkrempa@redhat.com>
Sent: Tuesday, March 30, 2021 01:42 PM
To: Waleed Musa <waleedm@nvidia.com>
Cc: libvir-list@redhat.com; Moshe Levi <moshele@nvidia.com>; Adrian Chiris <adrianc@nvidia.com>
Subject: Re: [Libvirt] Attach qemu commands to a running xml domain

On Tue, Mar 30, 2021 at 09:35:48 +0000, Waleed Musa wrote:
Hi all,
I see in libvirt you support attaching/detaching devices to an existing xml domain using the attachDeviceFlags and detachDeviceFlags APIs. Now we are adding some qemu commands to the xml domain, related to some interfaces, using alias names before starting the VM, but we will face an issue with hot plugging such devices, so I have two questions here:
1. Is it applicable to set alias names for interfaces? I saw the alias is ignored when I add it to the xml domain before starting the VM.
User specified aliases are possible with 'ua-' prefix: https://www.libvirt.org/formatdomain.html#devices
Thanks, I'll try it
2. Is there a way or API to attach qemu commands to running domain as you are doing in attaching the device using attachDeviceFlags?
No. For VMs which use <qemu:commandline> we don't formally provide support. I suggest that if you have a good use case for the attributes below, you send patches adding the functionality to libvirt if you want to have it on device hotplug.
We can't implement hot-plug
[...]

On Tue, Mar 30, 2021 at 11:49:51 +0000, Waleed Musa wrote: Your reply email looks very broken, but I think I can find the relevant parts:
________________________________
From: Peter Krempa <pkrempa@redhat.com>
Sent: Tuesday, March 30, 2021 01:42 PM
To: Waleed Musa <waleedm@nvidia.com>
Cc: libvir-list@redhat.com; Moshe Levi <moshele@nvidia.com>; Adrian Chiris <adrianc@nvidia.com>
Subject: Re: [Libvirt] Attach qemu commands to a running xml domain
On Tue, Mar 30, 2021 at 09:35:48 +0000, Waleed Musa wrote:
[...]
2. Is there a way or API to attach qemu commands to running domain as you are doing in attaching the device using attachDeviceFlags?
No. For VMs which use <qemu:commandline> we don't formally provide support. I suggest that if you have a good use case for the attributes below, you send patches adding the functionality to libvirt if you want to have it on device hotplug.
We can't implement hot-plug
Can you elaborate why? If the options are justifiably useful and qemu upstream supports them, there's nothing preventing that.

On 3/30/21 11:35 AM, Waleed Musa wrote:
Hi all,
I see in libvirt you support attaching/detaching devices to an existing xml domain using the attachDeviceFlags and detachDeviceFlags APIs. Now we are adding some qemu commands to the xml domain, related to some interfaces, using alias names before starting the VM, but we will face an issue with hot plugging such devices, so I have two questions here:
1. Is it applicable to set alias names for interfaces? I saw the alias is ignored when I add it to the xml domain before starting the VM.
2. Is there a way or API to attach qemu commands to a running domain, as you do when attaching a device using attachDeviceFlags?
[...]
Is this perhaps related to the following bug?

"interface type='vhostuser' libvirtError: Cannot set interface MTU"
https://bugzilla.redhat.com/show_bug.cgi?id=1940559

Are you trying to work around it? I am discussing with Moshe how libvirt can help, but honestly, I don't like the solution I proposed.

Long story short, the interface is in a different container (along with the OVS bridge) and thus when we query ovs-vsctl it connects to the system one and doesn't find that interface. What I proposed was to allow specifying the path to the OVS db.socket, but this would need to be done on a per-domain basis.

What I particularly don't like about this solution is that while it may fix this one use case, it opens the gate for a whole lot of other requests. For instance, consider interfaces of other types living in a different network namespace: "Hey, I want to use eth0 from that namespace, but run QEMU in another one". Also, we don't really like exposing paths in libvirt XML unless necessary.

Michal
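For illustration only, a purely hypothetical shape such a per-domain setting could take (no such attribute exists in libvirt; this is exactly the kind of path-exposing knob being argued against):

```xml
<!-- Hypothetical sketch: libvirt has no per-domain OVS db path today -->
<interface type='vhostuser'>
  <source type='unix' path='/var/lib/vhost_sockets/sockbbb6bbe9-eb5' mode='server'/>
  <mtu size='8942'/>
  <!-- hypothetical 'db' attribute pointing MTU queries at the containerized OVS -->
  <virtualport type='openvswitch' db='/prefix/var/run/openvswitch/db.sock'/>
</interface>
```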

On Wed, Mar 31, 2021 at 01:09:32PM +0200, Michal Privoznik wrote:
[...]
Is this perhaps related to the following bug?
"interface type='vhostuser' libvirtError: Cannot set interface MTU" https://bugzilla.redhat.com/show_bug.cgi?id=1940559
Are you trying to work around it? I am discussing with Moshe how libvirt can help, but honestly, I don't like the solution I proposed.
Long story short, the interface is in a different container (along with the OVS bridge) and thus when we query ovs-vsctl it connects to the system one and doesn't find that interface. What I proposed was to allow specifying the path to the OVS db.socket, but this would need to be done on a per-domain basis.
If it is possible to specify an OVS db.socket path in the XML, then why can't this path simply be bind-mounted to the right location in the first place?

Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On 3/31/21 1:36 PM, Daniel P. Berrangé wrote:
[...]
If it is possible to specify an OVS db.socket path in the XML, then why can't this path simply be bind-mounted to the right location in the first place?
Because then you'd override the path for the system-wide OVS. I mean, by default the socket is under /var/run/openvswitch/db.sock, and if you have another OVS running inside a container you can expose it under /prefix/var/run/openvswitch/db.sock. ovs-vsctl allows this via the --db=$path argument.

Michal
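As a concrete sketch of that ovs-vsctl behavior (the container-side socket path below is illustrative, not a fixed convention):

```sh
# Talks to the system-wide OVS via the default /var/run/openvswitch/db.sock:
ovs-vsctl list-br

# Talks to an OVS instance whose database socket a container exposes on the
# host (illustrative path):
ovs-vsctl --db=unix:/prefix/var/run/openvswitch/db.sock list-br
```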

On Wed, Mar 31, 2021 at 02:11:40PM +0200, Michal Privoznik wrote:
[...]
Because then you'd override the path for the system-wide OVS. I mean, by default the socket is under /var/run/openvswitch/db.sock, and if you have another OVS running inside a container you can expose it under /prefix/var/run/openvswitch/db.sock. ovs-vsctl allows this via the --db=$path argument.
Oh, right, I was misunderstanding the scenario.

I've always thought that if we wanted to support use of resources from other namespaces, it would involve a namespace="$PID" attribute to identify the namespace:

  <source type='unix' namespace="523532" path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>

IOW, libvirt could enter the namespace and then talk to the OVS db.sock at its normal location.

It would be a bit strange having namespace refer to a different NS for the OVS but using the path=/var/lib/..... from the current namespace; as long as we define the semantics clearly, though, it's not the end of the world.

Regards,
Daniel

-----Original Message-----
From: Daniel P. Berrangé <berrange@redhat.com>
Sent: Wednesday, March 31, 2021 3:23 PM
To: Michal Privoznik <mprivozn@redhat.com>
Cc: Waleed Musa <waleedm@nvidia.com>; libvir-list@redhat.com; Moshe Levi <moshele@nvidia.com>; Adrian Chiris <adrianc@nvidia.com>
Subject: Re: [Libvirt] Attach qemu commands to a running xml domain

[...]

Just to better explain the use case: we have a user-space vDPA solution which requires the following qemu flags to work:

1. page-per-vq=on
2. host_mtu

I understand that adding page-per-vq is a feature request, so I am putting that aside.

Regarding host_mtu, we want to use the libvirt xml to set the MTU like in [1] so we can set the MTU on hot plug. Basically we just want the libvirt xml mtu to set the qemu host_mtu. The problem is that using mtu in the libvirt xml will query OVS (it assumes that if OVS is installed on the host it needs to set its MTU). In our case we have OVS running on the host for switching and OVS in a container to connect the VF netdevice with vhostuser to qemu (to just forward traffic from the VF to virtio using vDPA).

1. One option is to allow disabling the MTU set on OVS so libvirt will just add the qemu host_mtu flag (we can set the MTU on OVS via OpenStack).
2. A second option is to change how we query OVS, adding the --db flag so the query can target the OVS in the container.

Both options are fine with me.

[1]
<interface type='vhostuser'>
  <mac address='fa:16:3e:92:6d:79'/>
  <source type='unix' path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>
  <model type='virtio'/>
  <driver rx_queue_size='512' tx_queue_size='512'/>
  <mtu size='8942'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

On 3/31/21 4:01 PM, Moshe Levi wrote:
-----Original Message-----
From: Daniel P. Berrangé <berrange@redhat.com>
Sent: Wednesday, March 31, 2021 3:23 PM
To: Michal Privoznik <mprivozn@redhat.com>
Cc: Waleed Musa <waleedm@nvidia.com>; libvir-list@redhat.com; Moshe Levi <moshele@nvidia.com>; Adrian Chiris <adrianc@nvidia.com>
Subject: Re: [Libvirt] Attach qemu commands to a running xml domain
[...]
Just to better explain the use case: we have a user-space vDPA solution which requires the following qemu flags to work: 1. page-per-vq=on 2. host_mtu
I understand that adding page-per-vq is a feature request, so I am putting that aside.
Correct.
Regarding host_mtu, we want to use the libvirt xml to set the MTU like in [1] so we can set the MTU on hot plug. Basically we just want the libvirt xml mtu to set the qemu host_mtu. The problem is that using mtu in the libvirt xml will query OVS (it assumes that if OVS is installed on the host it needs to set its MTU). In our case we have OVS running on the host for switching and OVS in a container to connect the VF netdevice with vhostuser to qemu (to just forward traffic from the VF to virtio using vDPA). 1. One option is to allow disabling the MTU set on OVS so libvirt will just add the qemu host_mtu flag (we can set the MTU on OVS via OpenStack).
Don't you need to set the MTU also on the TAP device? Or will it be inherited from the OVS bridge?
2. A second option is to change how we query OVS, adding the --db flag so the query can target the OVS in the container.
Both options are fine with me.
Michal

-----Original Message-----
From: Michal Privoznik <mprivozn@redhat.com>
Sent: Wednesday, March 31, 2021 5:49 PM
To: Moshe Levi <moshele@nvidia.com>; Daniel P. Berrangé <berrange@redhat.com>
Cc: Waleed Musa <waleedm@nvidia.com>; libvir-list@redhat.com; Adrian Chiris <adrianc@nvidia.com>
Subject: Re: [Libvirt] Attach qemu commands to a running xml domain
On 3/31/21 4:01 PM, Moshe Levi wrote:
[...]
Regarding host_mtu, we want to use the libvirt xml to set the MTU like in [1] so we can set the MTU on hot plug. Basically we just want the libvirt xml mtu to set the qemu host_mtu. The problem is that using mtu in the libvirt xml will query OVS (it assumes that if OVS is installed on the host it needs to set its MTU). In our case we have OVS running on the host for switching and OVS in a container to connect the VF netdevice with vhostuser to qemu (to just forward traffic from the VF to virtio using vDPA). 1. One option is to allow disabling the MTU set on OVS so libvirt will just add the qemu host_mtu flag (we can set the MTU on OVS via OpenStack).
Don't you need to set the MTU also on the TAP device? Or will it be inherited from the OVS bridge?

In our case it is the MTU of the VF netdevice, and we set it via OVS.
[...]
participants (5)
- Daniel P. Berrangé
- Michal Privoznik
- Moshe Levi
- Peter Krempa
- Waleed Musa