Hi Ashish,
In my opinion, yes: there is no way to increase tx_queue_size for a direct type interface.
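
If you genuinely need a larger tx ring, the only interface type that accepts tx_queue_size is vhostuser. A minimal sketch follows (the socket path and MAC are only examples, and it assumes a vhost-user server such as OVS-DPDK listening on that socket):

<interface type='vhostuser'>
  <mac address='52:54:00:3b:83:1a'/>
  <!-- example socket path; must be served by OVS-DPDK or similar -->
  <source type='unix' path='/var/run/openvswitch/vhost-user0' mode='client'/>
  <model type='virtio'/>
  <driver queues='5' rx_queue_size='512' tx_queue_size='512'/>
</interface>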
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
On Thu, Oct 26, 2017 at 3:38 PM, Ashish Kurian <ashishbnv(a)gmail.com> wrote:
Hi Yalan,
In the previous email you mentioned "tx_queue_size='512' will not work in
the guest with direct type interface, in fact, no matter what you set, it
will not work and guest will get the default '256'. "
So if I am using macvtap for my interfaces, the device type will always be
direct. Does this mean there is no way I can increase the buffer size with
macvtap interfaces?
Best Regards,
Ashish Kurian
On Thu, Oct 26, 2017 at 9:04 AM, Ashish Kurian <ashishbnv(a)gmail.com>
wrote:
> Hi Yalan,
>
> Thank you for your comment on qemu-kvm-rhev
>
> I am waiting for a response to my previous email with the logs
> attached. I do not understand what the problem is.
>
>
> On Oct 26, 2017 8:58 AM, "Yalan Zhang" <yalzhang(a)redhat.com> wrote:
>
> Hi Ashish,
>
> Please disregard the note about qemu-kvm-rhev.
> Any qemu built on the 2.10.0 code base will support tx_queue_size and
> rx_queue_size.
>
> Thank you~
>
> -------
> Best Regards,
> Yalan Zhang
> IRC: yalzhang
> Internal phone: 8389413
>
> On Thu, Oct 26, 2017 at 2:22 PM, Yalan Zhang <yalzhang(a)redhat.com> wrote:
>
>> Hi Ashish,
>>
>> Are these packages available for free? How can I install them?
>> => You do have the vhost backend driver. Do not set <driver
>> name='qemu'...>; by default vhost will be used as the backend driver.
>>
>> Is it possible to have my interfaces with an IP address inside the VM
>> to be bridged to the physical interfaces on the host?
>> => Yes, you can create a Linux bridge with the physical interface
>> attached to it and use a bridge type interface. Refer to
>> https://libvirt.org/formatdomain.html#elementsNICSBridge
>> A direct type interface is also ok (but then the host and guest have no
>> network access to each other).
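>>
>> For example, a minimal sketch (br0 and eno1 are placeholder names for
>> the bridge and the physical NIC; move the host's IP from eno1 to br0 if
>> the host needs connectivity):
>>
>> # ip link add name br0 type bridge   # create the bridge
>> # ip link set dev eno1 master br0    # attach the physical NIC
>> # ip link set dev br0 up
>>
>> <interface type='bridge'>
>>   <source bridge='br0'/>
>>   <model type='virtio'/>
>> </interface>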
>>
>> Is it also possible for me to change the rx and tx buffers on the
>> physical interface of the host so that the change is automatically
>> reflected inside the VM, since you said it will always receive the
>> host's default value?
>> => No, it does not receive the default value from the host. The default
>> is determined by the virtual device driver in the guest.
>> A hostdev type interface passes the physical interface or a VF of the
>> host through to the guest; in that case the guest sees the device's own
>> rx and tx buffer parameters.
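>>
>> For reference, a hostdev interface sketch (the PCI address and MAC are
>> illustrative; the address must point at a real VF on your host):
>>
>> <interface type='hostdev' managed='yes'>
>>   <source>
>>     <!-- hypothetical VF address -->
>>     <address type='pci' domain='0x0000' bus='0x07' slot='0x10' function='0x0'/>
>>   </source>
>>   <mac address='52:54:00:6d:90:02'/>
>> </interface>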
>>
>>
>>
>> -------
>> Best Regards,
>> Yalan Zhang
>> IRC: yalzhang
>> Internal phone: 8389413
>>
>> On Thu, Oct 26, 2017 at 1:30 PM, Ashish Kurian <ashishbnv(a)gmail.com>
>> wrote:
>>
>>> Hi Yalan,
>>>
>>> Thank you for your response. I do not have the following packages
>>> installed
>>>
>>> vhost backend driver
>>> qemu-kvm-rhev package
>>>
>>> Are these packages available for free? How can I install them?
>>>
>>> In my KVM VM, I must have an IP address on the interfaces whose
>>> buffers I am trying to increase. That is the reason I was using macvtap
>>> (a direct type interface). Is it possible for my interfaces with an IP
>>> address inside the VM to be bridged to the physical interfaces on the host?
>>>
>>> Is it also possible for me to change the rx and tx buffers on the
>>> physical interface of the host so that the change is automatically
>>> reflected inside the VM, since you said it will always receive the
>>> host's default value?
>>>
>>>
>>> Best Regards,
>>> Ashish Kurian
>>>
>>> On Thu, Oct 26, 2017 at 6:45 AM, Yalan Zhang <yalzhang(a)redhat.com>
>>> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> I have tested with the xml from your first mail, and it works for
>>>> rx_queue_size (see below).
>>>> Multiqueue needs the vhost backend driver to work, and when you set
>>>> "queues=1" the setting is ignored.
>>>>
>>>> Please check your qemu-kvm-rhev package; it should be newer than
>>>> qemu-kvm-rhev-2.9.0-16.el7_4.2.
>>>> And what about the logs?
>>>>
>>>> tx_queue_size='512' will not work in the guest with a direct type
>>>> interface; in fact, no matter what you set, it will not take effect
>>>> and the guest will get the default '256'.
>>>> Only the vhost-user backend supports more than 256. Refer to
>>>> https://libvirt.org/formatdomain.html#elementsNICSEthernet
>>>>
>>>> tx_queue_size
>>>> The optional tx_queue_size attribute controls the size of virtio ring
>>>> for each queue as described above. The default value is hypervisor
>>>> dependent and may change across its releases. Moreover, some hypervisors
>>>> may pose some restrictions on actual value. For instance, QEMU v2.9
>>>> requires value to be a power of two from [256, 1024] range. In addition
>>>> to that, this may work only for a subset of interface types, e.g.
>>>> aforementioned QEMU enables this option only for vhostuser type.
>>>> Since 3.7.0 (QEMU and KVM only)
>>>> Multiqueue only works with vhost as the backend driver.
>>>>
>>>> # rpm -q libvirt qemu-kvm-rhev
>>>> libvirt-3.2.0-14.el7_4.3.x86_64
>>>> qemu-kvm-rhev-2.9.0-16.el7_4.9.x86_64
>>>>
>>>> 1. the xml as below
>>>> <interface type='direct'>
>>>>   <mac address='52:54:00:00:b5:99'/>
>>>>   <source dev='eno1' mode='vepa'/>
>>>>   <model type='virtio'/>
>>>>   <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>   </driver>
>>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>>>> </interface>
>>>>
>>>> 2. after starting the vm, check the qemu command line:
>>>> *-netdev tap,fds=26:28:29:30:31,id=hostnet0,vhost=on,vhostfds=32:33:34:35:36*
>>>> -device virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,
>>>> host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,
>>>> guest_tso6=off,guest_ecn=off,guest_ufo=off,
>>>> *mq=on,vectors=12,rx_queue_size=512,tx_queue_size=512*,
>>>> netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3
>>>>
>>>> 3. check in the guest
>>>> # ethtool -g eth0
>>>> Ring parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: *512 ==> rx_queue_size works*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: *256 ===> no change*
>>>> Current hardware settings:
>>>> RX: *512 **==> rx_queue_size works*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: *256 ===> no change*
>>>>
>>>> # ethtool -l eth0
>>>> Channel parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: *5 ==> the queues value we set*
>>>> Current hardware settings:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1
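>>>>
>>>> Note that "Current hardware settings: Combined: 1" means the guest is
>>>> still using a single queue. To actually use all 5 queues, enable them
>>>> inside the guest with the standard ethtool command (eth0 is the
>>>> guest-side name):
>>>> # ethtool -L eth0 combined 5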
>>>>
>>>>
>>>> If we change the driver to qemu:
>>>> # virsh edit rhel7
>>>> ..
>>>> <interface type='direct'>
>>>>   <mac address='52:54:00:00:b5:99'/>
>>>>   <source dev='eno1' mode='vepa'/>
>>>>   <model type='virtio'/>
>>>>   <driver name='qemu' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>   </driver>
>>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>>>> </interface>
>>>> ..
>>>> Domain rhel7 XML configuration edited. ==> the xml validates and saves
>>>>
>>>> # virsh start rhel7
>>>> Domain rhel7 started
>>>>
>>>>
>>>> # virsh dumpxml rhel7 | grep /interface -B9
>>>>   <source dev='eno1' mode='vepa'/>
>>>>   <target dev='macvtap0'/>
>>>>   <model type='virtio'/>
>>>>   *<driver name='qemu' queues='5' rx_queue_size='512' tx_queue_size='512'>*
>>>>     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>   </driver>
>>>>   <alias name='net0'/>
>>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>>>> </interface>
>>>>
>>>>
>>>> *-netdev tap,fds=26:28:29:30:31*,id=hostnet0 -device
>>>> virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,
>>>> host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,
>>>> guest_tso6=off,guest_ecn=off,guest_ufo=off,
>>>> *rx_queue_size=512,tx_queue_size=512*,
>>>> netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3
>>>>
>>>> *"mq=on,vectors=12" is missing*, indicates there is no
multiqueue
>>>>
>>>> and check in the guest
>>>>
>>>> # ethtool -l eth0
>>>> Channel parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1 ==> no multiqueue
>>>> Current hardware settings:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1
>>>>
>>>> # ethtool -g eth0
>>>> Ring parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: *512*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 256
>>>> Current hardware settings:
>>>> RX: *512*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 256
>>>>
>>>> -------
>>>> Best Regards,
>>>> Yalan Zhang
>>>> IRC: yalzhang
>>>> Internal phone: 8389413
>>>>
>>>> On Thu, Oct 26, 2017 at 2:33 AM, Ashish Kurian <ashishbnv(a)gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Michal,
>>>>>
>>>>> An update to what I have already said: when I tried adding <driver
>>>>> name='qemu' txmode='iothread' ioeventfd='on' event_idx='off' queues='1'
>>>>> rx_queue_size='512' tx_queue_size='512'>, although it showed me the
>>>>> error mentioned, when I checked the xml again I saw that <driver
>>>>> name='qemu' txmode='iothread' ioeventfd='on' event_idx='off'> had been
>>>>> added to the interface.
>>>>>
>>>>> The missing parameters are: queues='1' rx_queue_size='512'
>>>>> tx_queue_size='512'
>>>>>
>>>>> Best Regards,
>>>>> Ashish Kurian
>>>>>
>>>>> On Wed, Oct 25, 2017 at 5:07 PM, Ashish Kurian <ashishbnv(a)gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Michal,
>>>>>>
>>>>>> What I found was that when I restarted the machine and ran virsh
>>>>>> edit to look at the xml config, it had not actually been changed.
>>>>>> That explains why I saw 256 again after restarting.
>>>>>>
>>>>>> So now I tried again to edit the xml via the virsh edit command and
>>>>>> used the following to set the parameters.
>>>>>>
>>>>>> <driver name='qemu' txmode='iothread' ioeventfd='on' event_idx='off'
>>>>>>         queues='1' rx_queue_size='512' tx_queue_size='512'>
>>>>>> </driver>
>>>>>>
>>>>>> It was not accepted and I got the following error:
>>>>>>
>>>>>>
>>>>>> error: XML document failed to validate against schema: Unable to
>>>>>> validate doc against /usr/share/libvirt/schemas/domain.rng
>>>>>> Extra element devices in interleave
>>>>>> Element domain failed to validate content
>>>>>>
>>>>>> What does this imply? I have two other interfaces; do I have to do
>>>>>> the same to them as well?
>>>>>>
>>>>>> Btw, there are no logs generated now in the domain log or the
>>>>>> libvirtd log.
>>>>>>
>>>>>> Best Regards,
>>>>>> Ashish Kurian
>>>>>>
>>>>>> On Wed, Oct 25, 2017 at 2:50 PM, Michal Privoznik
>>>>>> <mprivozn(a)redhat.com> wrote:
>>>>>>
>>>>>>> On 10/25/2017 01:53 PM, Ashish Kurian wrote:
>>>>>>> > Dear Users/Developers,
>>>>>>> >
>>>>>>> > I am using a KVM Ubuntu VM as a degrader to apply specific delays
>>>>>>> > to incoming packets. As the delay for my packets can be higher than
>>>>>>> > 7.5 seconds, there is not enough buffer on my interface to buffer
>>>>>>> > all the packets. Therefore those overflowing packets are dropped in
>>>>>>> > the machine and not forwarded.
>>>>>>> >
>>>>>>> > When I tried to use the command ethtool -G ens8 rx 512 to
>>>>>>> > increase the buffer size, I got the following error:
>>>>>>> >
>>>>>>> > Cannot set device ring parameters: Operation not permitted
>>>>>>> >
>>>>>>> > I have kept the VM xml files as specified in the link:
>>>>>>> > https://libvirt.org/formatdomain.html. The value that I kept in my
>>>>>>> > xml file is as follows.
>>>>>>> >
>>>>>>> > <interface type='direct'>
>>>>>>> >   <mac address='52:54:00:72:f9:eb'/>
>>>>>>> >   <source dev='enp7s0f0' mode='vepa'/>
>>>>>>> >   <model type='virtio'/>
>>>>>>> >   <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>>>> >     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>>>> >     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>>>> >   </driver>
>>>>>>> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
>>>>>>> > </interface>
>>>>>>> > <interface type='direct'>
>>>>>>> >   <mac address='52:54:00:00:b5:99'/>
>>>>>>> >   <source dev='enp7s0f1' mode='vepa'/>
>>>>>>> >   <model type='virtio'/>
>>>>>>> >   <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>>>> >     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>>>> >     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>>>> >   </driver>
>>>>>>> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
>>>>>>>
>>>>>>> So what does the qemu command line look like? You can find it in
>>>>>>> either the libvirtd log or the domain log.
>>>>>>>
>>>>>>>
>>>>>>> http://wiki.libvirt.org/page/DebugLogs
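>>>>>>>
>>>>>>> For example, the settings from that page go into
>>>>>>> /etc/libvirt/libvirtd.conf, roughly like this (the filter list is a
>>>>>>> sketch; adjust it per the wiki), followed by a libvirtd restart:
>>>>>>>
>>>>>>> log_filters="1:qemu 1:libvirt 3:security 3:event"
>>>>>>> log_outputs="1:file:/var/log/libvirt/libvirtd.log"
>>>>>>> # systemctl restart libvirtd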
>>>>>>>
>>>>>>> Michal
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> libvirt-users mailing list
>>>>> libvirt-users(a)redhat.com
>>>>>
>>>>> https://www.redhat.com/mailman/listinfo/libvirt-users
>>>>>