[PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface

---
diff to v2:
- Remove passing "actualType" argument; get it inside the function
- Format code.

diff to v1:
- Move qemuDomainDefIsOvsport from src/qemu/qemu_domain.c to src/conf/domain_conf.c
- Call virCommandFree(cmd) to free cmd before reusing it.
- Add g_autofree to variables.
- Reduce usage of virReportError(), and couple it with return -1.
- Fix an error when removing port qos.
- Optimise code structure.

Thanks to Michal Privoznik for helping review these patches and solve problems. I am really sorry to bring extra review work; I will continue to learn and become familiar with the submission process.

Currently libvirt uses tc rules to manage an interface's qos. But when an interface is created by ovs, no qos setting ends up in the ovs database. Therefore, qos of an ovs port should be set via the ovs management command. We add a function to tell whether a port definition is an ovs-managed virtual port. We change the default qdisc rules to return 0 directly if the port is ovs managed (when the ovs port is set to noqueue, qos configuration on this port will not work). We add ovs management functions for setting and clearing qos. Then we check whether the port is an ovs-managed port during its life cycle, and call the ovs management functions to set or clear qos settings.

zhangjl02 (4):
  virDomain: interface: add virDomainNetDefIsOvsport
  virDomain: interface: add virNetDevOpenvswitchInterfaceSetQos and
    virNetDevOpenvswitchInterfaceClearQos
  qemu: interface: remove setting noqueue for ovs port
  qemu: interface: check and use ovs command to set qos of ovs managed
    port

 src/conf/domain_conf.c          |  11 ++
 src/conf/domain_conf.h          |   2 +
 src/libvirt_private.syms        |   3 +
 src/qemu/qemu_command.c         |  10 +-
 src/qemu/qemu_domain.c          |   3 +-
 src/qemu/qemu_driver.c          |  23 ++-
 src/qemu/qemu_hotplug.c         |  35 ++--
 src/qemu/qemu_process.c         |   7 +-
 src/util/virnetdevopenvswitch.c | 274 ++++++++++++++++++++++++++++++++
 src/util/virnetdevopenvswitch.h |  11 ++
 10 files changed, 364 insertions(+), 15 deletions(-)

-- 
2.30.2.windows.1

From: zhangjl02 <zhangjl02@inspur.com>

Tell whether a port definition is an ovs managed virtual port.

---
diff to v2:
- Delete actualType argument, get it in the function.
- Format code.
Thanks to Michal Privoznik's advice.
---
 src/conf/domain_conf.c   | 11 +++++++++++
 src/conf/domain_conf.h   |  2 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 14 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 04c10df0a9..5a27cd9d7d 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -29126,6 +29126,17 @@ virDomainNetGetActualVirtPortProfile(const virDomainNetDef *iface)
     }
 }
 
+/* Check whether the port is an ovs managed port */
+bool
+virDomainNetDefIsOvsport(const virDomainNetDef *net)
+{
+    const virNetDevVPortProfile *vport = virDomainNetGetActualVirtPortProfile(net);
+    virDomainNetType actualType = virDomainNetGetActualType(net);
+
+    return (actualType == VIR_DOMAIN_NET_TYPE_BRIDGE) && vport &&
+        vport->virtPortType == VIR_NETDEV_VPORT_PROFILE_OPENVSWITCH;
+}
+
 const virNetDevBandwidth *
 virDomainNetGetActualBandwidth(const virDomainNetDef *iface)
 {
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 4d9d499b16..2a36c5acf1 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3610,6 +3610,8 @@ int virDomainNetGetActualDirectMode(const virDomainNetDef *iface);
 virDomainHostdevDef *virDomainNetGetActualHostdev(virDomainNetDef *iface);
 const virNetDevVPortProfile *
 virDomainNetGetActualVirtPortProfile(const virDomainNetDef *iface);
+bool
+virDomainNetDefIsOvsport(const virDomainNetDef *net);
 const virNetDevBandwidth *
 virDomainNetGetActualBandwidth(const virDomainNetDef *iface);
 const virNetDevVlan *virDomainNetGetActualVlan(const virDomainNetDef *iface);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 43e6398ae5..110b243e28 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -518,6 +518,7 @@ virDomainNetDefActualFromNetworkPort;
 virDomainNetDefActualToNetworkPort;
 virDomainNetDefFormat;
 virDomainNetDefFree;
+virDomainNetDefIsOvsport;
 virDomainNetDefNew;
 virDomainNetDefToNetworkPort;
 virDomainNetDHCPInterfaces;
-- 
2.30.2.windows.1
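For reference, this is the kind of interface definition the new check matches — an openvswitch virtualport on a bridge-type interface. The snippet is taken from the test configuration later in this thread; the bridge name `ovsbr0` and the bandwidth values are just example values:

```xml
<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <bandwidth>
    <inbound average='100' peak='200' burst='256'/>
  </bandwidth>
</interface>
```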

From: zhangjl02 <zhangjl02@inspur.com>

Introduce qos setting and cleaning methods. Use the ovs command to set
qos parameters on a specific interface of a qemu virtual machine. When
an ovs port is created, we add 'ifname' to its external-ids. When
setting qos on an ovs port, query its qos and queue; if found, change
the qos on the queried queue and qos records, otherwise create a new
queue and qos. When cleaning qos, query and clean the queues and qos in
the ovs table, looked up by 'ifname' and 'vmid'.

---
diff to v2:
- format and optimize code structure
---
 src/libvirt_private.syms        |   2 +
 src/util/virnetdevopenvswitch.c | 274 ++++++++++++++++++++++++++++++++
 src/util/virnetdevopenvswitch.h |  11 ++
 3 files changed, 287 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 110b243e28..36322d03b3 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -2807,8 +2807,10 @@ virNetDevMidonetUnbindPort;
 virNetDevOpenvswitchAddPort;
 virNetDevOpenvswitchGetMigrateData;
 virNetDevOpenvswitchGetVhostuserIfname;
+virNetDevOpenvswitchInterfaceClearQos;
 virNetDevOpenvswitchInterfaceGetMaster;
 virNetDevOpenvswitchInterfaceParseStats;
+virNetDevOpenvswitchInterfaceSetQos;
 virNetDevOpenvswitchInterfaceStats;
 virNetDevOpenvswitchMaybeUnescapeReply;
 virNetDevOpenvswitchRemovePort;
diff --git a/src/util/virnetdevopenvswitch.c b/src/util/virnetdevopenvswitch.c
index eac68d9556..7a64a8dbe6 100644
--- a/src/util/virnetdevopenvswitch.c
+++ b/src/util/virnetdevopenvswitch.c
@@ -30,6 +30,7 @@
 #include "virlog.h"
 #include "virjson.h"
 #include "virfile.h"
+#include "virutil.h"
 
 #define VIR_FROM_THIS VIR_FROM_NONE
 
@@ -140,6 +141,7 @@ int virNetDevOpenvswitchAddPort(const char *brname, const char *ifname,
     g_autofree char *ifaceid_ex_id = NULL;
     g_autofree char *profile_ex_id = NULL;
     g_autofree char *vmid_ex_id = NULL;
+    g_autofree char *ifname_ex_id = NULL;
 
     virMacAddrFormat(macaddr, macaddrstr);
     virUUIDFormat(ovsport->interfaceID, ifuuidstr);
@@ -149,6 +151,7 @@ int virNetDevOpenvswitchAddPort(const char *brname, const char *ifname,
                                     macaddrstr);
     ifaceid_ex_id = g_strdup_printf("external-ids:iface-id=\"%s\"", ifuuidstr);
     vmid_ex_id = g_strdup_printf("external-ids:vm-id=\"%s\"", vmuuidstr);
+    ifname_ex_id = g_strdup_printf("external-ids:ifname=\"%s\"", ifname);
     if (ovsport->profileID[0] != '\0') {
         profile_ex_id = g_strdup_printf("external-ids:port-profile=\"%s\"",
                                         ovsport->profileID);
@@ -174,6 +177,7 @@ int virNetDevOpenvswitchAddPort(const char *brname, const char *ifname,
                                "--", "set", "Interface", ifname, ifaceid_ex_id,
                                "--", "set", "Interface", ifname, vmid_ex_id,
                                "--", "set", "Interface", ifname, profile_ex_id,
+                               "--", "set", "Interface", ifname, ifname_ex_id,
                                "--", "set", "Interface", ifname,
                                "external-ids:iface-status=active",
                                NULL);
@@ -614,3 +618,273 @@ int virNetDevOpenvswitchUpdateVlan(const char *ifname,
 
     return 0;
 }
+
+
+/**
+ * virNetDevOpenvswitchInterfaceSetQos:
+ * @ifname: on which interface
+ * @bandwidth: rates to set (may be NULL)
+ * @swapped: true if IN/OUT should be set contrariwise
+ *
+ * Update qos configuration of an OVS port.
+ *
+ * If @swapped is set, the IN part of @bandwidth is set on
+ * @ifname's TX, and vice versa. If it is not set, IN is set on
+ * RX and OUT on TX. This is because for some types of interfaces
+ * domain and the host live on the same side of the interface (so
+ * domain's RX/TX is host's RX/TX), and for some it's swapped
+ * (domain's RX/TX is hosts's TX/RX).
+ *
+ * Return 0 on success, -1 otherwise.
+ */
+int
+virNetDevOpenvswitchInterfaceSetQos(const char *ifname,
+                                    const virNetDevBandwidth *bandwidth,
+                                    const unsigned char *vmid,
+                                    bool swapped)
+{
+    virNetDevBandwidthRate *rx = NULL; /* From domain POV */
+    virNetDevBandwidthRate *tx = NULL; /* From domain POV */
+
+    if (!bandwidth) {
+        /* nothing to be enabled */
+        return 0;
+    }
+
+    if (geteuid() != 0) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("Network bandwidth tuning is not available"
+                         " in session mode"));
+        return -1;
+    }
+
+    if (!ifname) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("Unable to set bandwidth for interface because "
+                         "device name is unknown"));
+        return -1;
+    }
+
+    if (swapped) {
+        rx = bandwidth->out;
+        tx = bandwidth->in;
+    } else {
+        rx = bandwidth->in;
+        tx = bandwidth->out;
+    }
+
+    if (!bandwidth->out && !bandwidth->in) {
+        if (virNetDevOpenvswitchInterfaceClearQos(ifname, vmid) < 0) {
+            VIR_WARN("Clean qos for interface %s failed", ifname);
+        }
+        return 0;
+    }
+
+    if (tx && tx->average) {
+        char vmuuidstr[VIR_UUID_STRING_BUFLEN];
+        g_autoptr(virCommand) cmd = NULL;
+        g_autofree char *vmid_ex_id = NULL;
+        g_autofree char *ifname_ex_id = NULL;
+        g_autofree char *average = NULL;
+        g_autofree char *peak = NULL;
+        g_autofree char *burst = NULL;
+        g_autofree char *qos_uuid = NULL;
+        g_autofree char *queue_uuid = NULL;
+
+        average = g_strdup_printf("%llu", tx->average * 8192);
+        if (tx->burst)
+            burst = g_strdup_printf("%llu", tx->burst * 8192);
+        if (tx->peak)
+            peak = g_strdup_printf("%llu", tx->peak * 8192);
+
+        /* find queue */
+        cmd = virNetDevOpenvswitchCreateCmd();
+        virUUIDFormat(vmid, vmuuidstr);
+        vmid_ex_id = g_strdup_printf("external-ids:vm-id=\"%s\"", vmuuidstr);
+        ifname_ex_id = g_strdup_printf("external-ids:ifname=\"%s\"", ifname);
+        virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "find", "queue",
+                             vmid_ex_id, ifname_ex_id, NULL);
+        virCommandSetOutputBuffer(cmd, &queue_uuid);
+        if (virCommandRun(cmd, NULL) < 0) {
+            VIR_WARN("Unable to find queue on port %s", ifname);
+        }
+
+        /* find qos */
+        virCommandFree(cmd);
+        cmd = virNetDevOpenvswitchCreateCmd();
+        virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "find", "qos",
+                             vmid_ex_id, ifname_ex_id, NULL);
+        virCommandSetOutputBuffer(cmd, &qos_uuid);
+        if (virCommandRun(cmd, NULL) < 0) {
+            VIR_WARN("Unable to find qos on port %s", ifname);
+        }
+
+        /* create qos and set */
+        virCommandFree(cmd);
+        cmd = virNetDevOpenvswitchCreateCmd();
+        if (queue_uuid && *queue_uuid) {
+            g_auto(GStrv) lines = g_strsplit(queue_uuid, "\n", 0);
+            virCommandAddArgList(cmd, "set", "queue", lines[0], NULL);
+        } else {
+            virCommandAddArgList(cmd, "set", "port", ifname, "qos=@qos1",
+                                 vmid_ex_id, ifname_ex_id,
+                                 "--", "--id=@qos1", "create", "qos", "type=linux-htb", NULL);
+            virCommandAddArgFormat(cmd, "other_config:min-rate=%s", average);
+            if (burst) {
+                virCommandAddArgFormat(cmd, "other_config:burst=%s", burst);
+            }
+            if (peak) {
+                virCommandAddArgFormat(cmd, "other_config:max-rate=%s", peak);
+            }
+            virCommandAddArgList(cmd, "queues:0=@queue0", vmid_ex_id, ifname_ex_id,
+                                 "--", "--id=@queue0", "create", "queue", NULL);
+        }
+        virCommandAddArgFormat(cmd, "other_config:min-rate=%s", average);
+        if (burst) {
+            virCommandAddArgFormat(cmd, "other_config:burst=%s", burst);
+        }
+        if (peak) {
+            virCommandAddArgFormat(cmd, "other_config:max-rate=%s", peak);
+        }
+        virCommandAddArgList(cmd, vmid_ex_id, ifname_ex_id, NULL);
+        if (virCommandRun(cmd, NULL) < 0) {
+            if (*queue_uuid) {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               _("Unable to set queue configuration on port %s"), ifname);
+            } else {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               _("Unable to create and set qos configuration on port %s"), ifname);
+            }
+            return -1;
+        }
+
+        if (qos_uuid && *qos_uuid) {
+            g_auto(GStrv) lines = g_strsplit(qos_uuid, "\n", 0);
+
+            virCommandFree(cmd);
+            cmd = virNetDevOpenvswitchCreateCmd();
+            virCommandAddArgList(cmd, "set", "qos", lines[0], NULL);
+            virCommandAddArgFormat(cmd, "other_config:min-rate=%s", average);
+            if (burst) {
+                virCommandAddArgFormat(cmd, "other_config:burst=%s", burst);
+            }
+            if (peak) {
+                virCommandAddArgFormat(cmd, "other_config:max-rate=%s", peak);
+            }
+            virCommandAddArgList(cmd, vmid_ex_id, ifname_ex_id, NULL);
+            if (virCommandRun(cmd, NULL) < 0) {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               _("Unable to set qos configuration on port %s"), ifname);
+                return -1;
+            }
+        }
+    }
+
+    if (rx) {
+        g_autoptr(virCommand) cmd = NULL;
+
+        cmd = virNetDevOpenvswitchCreateCmd();
+        virCommandAddArgList(cmd, "set", "Interface", ifname, NULL);
+        virCommandAddArgFormat(cmd, "ingress_policing_rate=%llu", rx->average * 8);
+        if (rx->burst)
+            virCommandAddArgFormat(cmd, "ingress_policing_burst=%llu", rx->burst * 8);
+
+        if (virCommandRun(cmd, NULL) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Unable to set vlan configuration on port %s"), ifname);
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+int
+virNetDevOpenvswitchInterfaceClearQos(const char *ifname,
+                                      const unsigned char *vmid)
+{
+    char vmuuidstr[VIR_UUID_STRING_BUFLEN];
+    g_autoptr(virCommand) cmd = NULL;
+    g_autofree char *vmid_ex_id = NULL;
+    g_autofree char *qos_uuid = NULL;
+    g_autofree char *queue_uuid = NULL;
+    g_autofree char *port_qos = NULL;
+    size_t i;
+
+    /* find qos */
+    cmd = virNetDevOpenvswitchCreateCmd();
+    virUUIDFormat(vmid, vmuuidstr);
+    vmid_ex_id = g_strdup_printf("external-ids:vm-id=\"%s\"", vmuuidstr);
+    virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "find", "qos", vmid_ex_id, NULL);
+    virCommandSetOutputBuffer(cmd, &qos_uuid);
+    if (virCommandRun(cmd, NULL) < 0) {
+        VIR_WARN("Unable to find qos on port %s", ifname);
+    }
+
+    /* find queue */
+    virCommandFree(cmd);
+    cmd = virNetDevOpenvswitchCreateCmd();
+    vmid_ex_id = g_strdup_printf("external-ids:vm-id=\"%s\"", vmuuidstr);
+    virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "find", "queue", vmid_ex_id, NULL);
+    virCommandSetOutputBuffer(cmd, &queue_uuid);
+    if (virCommandRun(cmd, NULL) < 0) {
+        VIR_WARN("Unable to find queue on port %s", ifname);
+    }
+
+    if (qos_uuid && *qos_uuid) {
+        g_auto(GStrv) lines = g_strsplit(qos_uuid, "\n", 0);
+
+        /* destroy qos */
+        for (i = 0; lines[i] != NULL; i++) {
+            const char *line = lines[i];
+            if (!*line) {
+                continue;
+            }
+            virCommandFree(cmd);
+            cmd = virNetDevOpenvswitchCreateCmd();
+            virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "--if-exists",
+                                 "list", "port", ifname, "qos", NULL);
+            virCommandSetOutputBuffer(cmd, &port_qos);
+            if (virCommandRun(cmd, NULL) < 0) {
+                VIR_WARN("Unable to remove port qos on port %s", ifname);
+            }
+            if (port_qos && *port_qos) {
+                virCommandFree(cmd);
+                cmd = virNetDevOpenvswitchCreateCmd();
+                virCommandAddArgList(cmd, "remove", "port", ifname, "qos", line, NULL);
+                if (virCommandRun(cmd, NULL) < 0) {
+                    VIR_WARN("Unable to remove port qos on port %s", ifname);
+                }
+            }
+            virCommandFree(cmd);
+            cmd = virNetDevOpenvswitchCreateCmd();
+            virCommandAddArgList(cmd, "destroy", "qos", line, NULL);
+            if (virCommandRun(cmd, NULL) < 0) {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               _("Unable to destroy qos on port %s"), ifname);
+                return -1;
+            }
+        }
+    }
+    /* destroy queue */
+    if (queue_uuid && *queue_uuid) {
+        g_auto(GStrv) lines = g_strsplit(queue_uuid, "\n", 0);
+
+        for (i = 0; lines[i] != NULL; i++) {
+            const char *line = lines[i];
+            if (!*line) {
+                continue;
+            }
+            virCommandFree(cmd);
+            cmd = virNetDevOpenvswitchCreateCmd();
+            virCommandAddArgList(cmd, "destroy", "queue", line, NULL);
+            if (virCommandRun(cmd, NULL) < 0) {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               _("Unable to destroy queue on port %s"), ifname);
+                return -1;
+            }
+        }
+    }
+
+    return 0;
+}
diff --git a/src/util/virnetdevopenvswitch.h b/src/util/virnetdevopenvswitch.h
index 7525376855..2dcd1aec6b 100644
--- a/src/util/virnetdevopenvswitch.h
+++ b/src/util/virnetdevopenvswitch.h
@@ -21,6 +21,7 @@
 #pragma once
 
 #include "internal.h"
+#include "virnetdevbandwidth.h"
 #include "virnetdevvportprofile.h"
 #include "virnetdevvlan.h"
 
@@ -69,3 +70,13 @@ int virNetDevOpenvswitchGetVhostuserIfname(const char *path,
 int virNetDevOpenvswitchUpdateVlan(const char *ifname,
                                    const virNetDevVlan *virtVlan)
    ATTRIBUTE_NONNULL(1) G_GNUC_WARN_UNUSED_RESULT;
+
+int virNetDevOpenvswitchInterfaceSetQos(const char *ifname,
+                                        const virNetDevBandwidth *bandwidth,
+                                        const unsigned char *vmid,
+                                        bool swapped)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3) G_GNUC_WARN_UNUSED_RESULT;
+
+int virNetDevOpenvswitchInterfaceClearQos(const char *ifname,
+                                          const unsigned char *vmid)
+    ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT;
-- 
2.30.2.windows.1
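The ovs-vsctl command sequence that virNetDevOpenvswitchInterfaceSetQos composes can be sketched in shell. This is an illustration only, not part of the patch: it prints the (condensed) invocations instead of running them, the interface name `vnet0` and the vm UUID are hypothetical, and the per-record external-ids arguments are trimmed for brevity. The rate conversions match the patch: `other_config:min-rate` is in bit/s (libvirt KiB/s × 8192) and `ingress_policing_rate` is in kbit/s (libvirt KiB/s × 8).

```shell
#!/bin/sh
ifname=vnet0                                     # hypothetical tap device
vmid="11111111-2222-3333-4444-555555555555"      # hypothetical vm UUID
avg_kib=100                                      # <outbound average='100'/>, KiB/s

# 1) look up an existing queue/qos owned by this vm+ifname
printf 'ovs-vsctl --no-heading --columns=_uuid find queue external-ids:vm-id="%s" external-ids:ifname="%s"\n' \
    "$vmid" "$ifname"

# 2) none found: create a linux-htb qos with one queue and attach it to the port
printf 'ovs-vsctl set port %s qos=@qos1 -- --id=@qos1 create qos type=linux-htb other_config:min-rate=%s queues:0=@queue0 -- --id=@queue0 create queue other_config:min-rate=%s\n' \
    "$ifname" "$((avg_kib * 8192))" "$((avg_kib * 8192))"

# 3) inbound (from the host's view: ingress) is plain policing on the Interface
printf 'ovs-vsctl set Interface %s ingress_policing_rate=%s\n' \
    "$ifname" "$((avg_kib * 8))"
```

With `avg_kib=100` the printed min-rate is 819200 bit/s and the policing rate is 800 kbit/s, which is what the later test mails in this thread observe.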

From: zhangjl02 <zhangjl02@inspur.com>

Return 0 directly if the port is ovs managed. When the ovs port is set
to noqueue, qos configuration on this port will not work.

---
diff to v2:
- remove "actualType" argument
---
 src/qemu/qemu_domain.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 8488f58e09..bb529cc987 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11527,7 +11527,8 @@ qemuDomainInterfaceSetDefaultQDisc(virQEMUDriver *driver,
         actualType == VIR_DOMAIN_NET_TYPE_NETWORK ||
         actualType == VIR_DOMAIN_NET_TYPE_BRIDGE ||
         actualType == VIR_DOMAIN_NET_TYPE_DIRECT) {
-        if (virNetDevBandwidthSetRootQDisc(net->ifname, "noqueue") < 0)
+        if (!virDomainNetDefIsOvsport(net) &&
+            virNetDevBandwidthSetRootQDisc(net->ifname, "noqueue") < 0)
             return -1;
     }
-- 
2.30.2.windows.1

From: zhangjl02 <zhangjl02@inspur.com>

When qos is set or deleted, we have to check whether the port is an ovs
managed port. If true, call virNetDevOpenvswitchInterfaceSetQos when qos
is set, and call virNetDevOpenvswitchInterfaceClearQos when the
interface is to be destroyed.

---
diff to v2:
- remove "actualType" argument
- optimize code structure
Thanks to Michal Privoznik for helping solve these problems.
---
 src/qemu/qemu_command.c | 10 ++++++++--
 src/qemu/qemu_driver.c  | 23 +++++++++++++++++++++--
 src/qemu/qemu_hotplug.c | 35 ++++++++++++++++++++++++++---------
 src/qemu/qemu_process.c |  7 ++++++-
 4 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ea513693f7..522394bb74 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8610,9 +8610,15 @@ qemuBuildInterfaceCommandLine(virQEMUDriver *driver,
     actualBandwidth = virDomainNetGetActualBandwidth(net);
     if (actualBandwidth) {
         if (virNetDevSupportsBandwidth(actualType)) {
-            if (virNetDevBandwidthSet(net->ifname, actualBandwidth, false,
-                                      !virDomainNetTypeSharesHostView(net)) < 0)
+            if (virDomainNetDefIsOvsport(net)) {
+                if (virNetDevOpenvswitchInterfaceSetQos(net->ifname, actualBandwidth,
+                                                        def->uuid,
+                                                        !virDomainNetTypeSharesHostView(net)) < 0)
+                    goto cleanup;
+            } else if (virNetDevBandwidthSet(net->ifname, actualBandwidth, false,
+                                             !virDomainNetTypeSharesHostView(net)) < 0) {
                 goto cleanup;
+            }
         } else {
             VIR_WARN("setting bandwidth on interfaces of "
                      "type '%s' is not implemented yet",
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 235f575901..72f550bf8d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -10231,6 +10231,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
     bool inboundSpecified = false, outboundSpecified = false;
     int actualType;
     bool qosSupported = true;
+    bool ovsType = false;
 
     virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
                   VIR_DOMAIN_AFFECT_CONFIG, -1);
@@ -10277,6 +10278,7 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
     if (net) {
         actualType = virDomainNetGetActualType(net);
         qosSupported = virNetDevSupportsBandwidth(actualType);
+        ovsType = virDomainNetDefIsOvsport(net);
     }
 
     if (qosSupported && persistentNet) {
@@ -10366,8 +10368,25 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
             }
         }
 
-        if (virNetDevBandwidthSet(net->ifname, newBandwidth, false,
-                                  !virDomainNetTypeSharesHostView(net)) < 0) {
+        if (ovsType) {
+            if (virNetDevOpenvswitchInterfaceSetQos(net->ifname, newBandwidth,
+                                                    vm->def->uuid,
+                                                    !virDomainNetTypeSharesHostView(net)) < 0) {
+                virErrorPtr orig_err;
+
+                virErrorPreserveLast(&orig_err);
+                ignore_value(virNetDevOpenvswitchInterfaceSetQos(net->ifname, newBandwidth,
+                                                                 vm->def->uuid,
+                                                                 !virDomainNetTypeSharesHostView(net)));
+                if (net->bandwidth) {
+                    ignore_value(virDomainNetBandwidthUpdate(net,
+                                                             net->bandwidth));
+                }
+                virErrorRestore(&orig_err);
+                goto endjob;
+            }
+        } else if (virNetDevBandwidthSet(net->ifname, newBandwidth, false,
+                                         !virDomainNetTypeSharesHostView(net)) < 0) {
             virErrorPtr orig_err;
 
             virErrorPreserveLast(&orig_err);
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index d2a354d026..cb6a4e4ea5 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -1409,9 +1409,15 @@ qemuDomainAttachNetDevice(virQEMUDriver *driver,
     actualBandwidth = virDomainNetGetActualBandwidth(net);
     if (actualBandwidth) {
         if (virNetDevSupportsBandwidth(actualType)) {
-            if (virNetDevBandwidthSet(net->ifname, actualBandwidth, false,
-                                      !virDomainNetTypeSharesHostView(net)) < 0)
+            if (virDomainNetDefIsOvsport(net)) {
+                if (virNetDevOpenvswitchInterfaceSetQos(net->ifname, actualBandwidth,
+                                                        vm->def->uuid,
+                                                        !virDomainNetTypeSharesHostView(net)) < 0)
+                    goto cleanup;
+            } else if (virNetDevBandwidthSet(net->ifname, actualBandwidth, false,
+                                             !virDomainNetTypeSharesHostView(net)) < 0) {
                 goto cleanup;
+            }
         } else {
             VIR_WARN("setting bandwidth on interfaces of "
                      "type '%s' is not implemented yet",
@@ -3914,9 +3920,15 @@ qemuDomainChangeNet(virQEMUDriver *driver,
         const virNetDevBandwidth *newb = virDomainNetGetActualBandwidth(newdev);
 
         if (newb) {
-            if (virNetDevBandwidthSet(newdev->ifname, newb, false,
-                                      !virDomainNetTypeSharesHostView(newdev)) < 0)
+            if (virDomainNetDefIsOvsport(newdev)) {
+                if (virNetDevOpenvswitchInterfaceSetQos(newdev->ifname, newb,
+                                                        vm->def->uuid,
+                                                        !virDomainNetTypeSharesHostView(newdev)) < 0)
+                    goto cleanup;
+            } else if (virNetDevBandwidthSet(newdev->ifname, newb, false,
+                                             !virDomainNetTypeSharesHostView(newdev)) < 0) {
                 goto cleanup;
+            }
         } else {
             /*
              * virNetDevBandwidthSet() doesn't clear any existing
@@ -4665,11 +4677,16 @@ qemuDomainRemoveNetDevice(virQEMUDriver *driver,
     if (!(charDevAlias = qemuAliasChardevFromDevAlias(net->info.alias)))
         return -1;
 
-    if (virDomainNetGetActualBandwidth(net) &&
-        virNetDevSupportsBandwidth(virDomainNetGetActualType(net)) &&
-        virNetDevBandwidthClear(net->ifname) < 0)
-        VIR_WARN("cannot clear bandwidth setting for device : %s",
-                 net->ifname);
+    if (virNetDevSupportsBandwidth(virDomainNetGetActualType(net))) {
+        if (virDomainNetDefIsOvsport(net)) {
+            if (virNetDevOpenvswitchInterfaceClearQos(net->ifname, vm->def->uuid) < 0)
+                VIR_WARN("cannot clear bandwidth setting for ovs device : %s",
+                         net->ifname);
+        } else if (virNetDevBandwidthClear(net->ifname) < 0) {
+            VIR_WARN("cannot clear bandwidth setting for device : %s",
+                     net->ifname);
+        }
+    }
 
     /* deactivate the tap/macvtap device on the host, which could also
      * affect the parent device (e.g. macvtap passthrough mode sets
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2b03b0ab98..3693796b06 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -7966,7 +7966,6 @@ void qemuProcessStop(virQEMUDriver *driver,
     for (i = 0; i < def->nnets; i++) {
         virDomainNetDef *net = def->nets[i];
         vport = virDomainNetGetActualVirtPortProfile(net);
-
         switch (virDomainNetGetActualType(net)) {
         case VIR_DOMAIN_NET_TYPE_DIRECT:
             ignore_value(virNetDevMacVLanDeleteWithVPortProfile(
@@ -8023,6 +8022,12 @@ void qemuProcessStop(virQEMUDriver *driver,
             else
                 VIR_WARN("Unable to release network device '%s'", NULLSTR(net->ifname));
         }
+
+        if (virDomainNetDefIsOvsport(net) &&
+            virNetDevOpenvswitchInterfaceClearQos(net->ifname, vm->def->uuid) < 0) {
+            VIR_WARN("cannot clear bandwidth setting for ovs device : %s",
+                     net->ifname);
+        }
     }
 
  retry:
-- 
2.30.2.windows.1

On 7/7/21 11:18 AM, zhangjl02 wrote:
> [...]
Patches look good. However, you forgot to add a Signed-off-by line to each patch (sorry for not realizing earlier). We require it per:

https://libvirt.org/hacking.html#developer-certificate-of-origin

I can fix that before pushing. Just reply to this e-mail with your S-o-b and I will amend it to each commit.

Michal

Here is my signed-off-by line:

Signed-off-by: zhangjl02@inspur.com

Thanks again for reminding :) .

zhangjl02
On 9/7/21 3:44 PM, Michal Prívozník <mprivozn@redhat.com> wrote:
> On 7/7/21 11:18 AM, zhangjl02 wrote:
>> [...]
>
> Patches look good. However, you forgot to add a Signed-off-by line to
> each patch (sorry for not realizing earlier). We require it per:
>
> https://libvirt.org/hacking.html#developer-certificate-of-origin
>
> I can fix that before pushing. Just reply to this e-mail with your
> S-o-b and I will amend it to each commit.
>
> Michal

On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
> Here is my signed-off-by line
> Signed-off-by: zhangjl02@inspur.com
> Thanks again for reminding :) .

Perfect.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

and pushed. Congratulations on your first libvirt contribution!

Michal

Hi Jinsheng,

I have tested the patch and have some questions, could you please help to confirm?

1) For inbound, how to check it from the openvswitch side? tc will still show the statistics, is that expected?
2) For outbound, the peak is ignored. I just can not understand the "ingress_policing_burst: 2048"; how can it come from the setting "outbound.burst: 256"?
3) Is the output from the tc command expected?

Test inbound:

1. Start the vm with a setting as below:

    <interface type='bridge'>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <bandwidth>
        <inbound average='100' peak='200' burst='256'/>
      </bandwidth>
      ...
    </interface>

2. Check the result:

# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak   : 200
inbound.burst  : 256
inbound.floor  : 0
outbound.average: 0
outbound.peak  : 0
outbound.burst : 0

# ip l
17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff

# ovs-vsctl show interface
…...
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
…...
name : vnet5

# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7

# tc -d filter show dev vnet5 parent ffff:
(no outputs)

For outbound:

# virsh dumpxml rhel | grep /bandwidth -B2
    <bandwidth>
      <outbound average='100' peak='200' burst='256'/>
    </bandwidth>

# virsh domiftune rhel vnet9
inbound.average: 0
inbound.peak   : 0
inbound.burst  : 0
inbound.floor  : 0
outbound.average: 100
outbound.peak  : 200
outbound.burst : 256

# ovs-vsctl list interface
ingress_policing_burst: *2048*
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: *800*
...

# tc -d filter show dev vnet9 parent ffff:
filter protocol all pref 49 basic chain 0
filter protocol all pref 49 basic chain 0 handle 0x1
        action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb action drop/pipe overhead 0b
        linklayer unspec ref 1 bind 1

# tc -d class show dev vnet9
(no outputs)

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn@redhat.com> wrote:
> On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
>> Here is my signed-off-by line
>> Signed-off-by: zhangjl02@inspur.com
>> Thanks again for reminding :) .
>
> Perfect.
>
> Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
>
> and pushed. Congratulations on your first libvirt contribution!
>
> Michal

Hi Yalan,

1) For inbound, we can use `ovs-vsctl list qos` and `ovs-vsctl list queue` to check them from the openvswitch side; the values can be found in other_config. Inbound is in kbytes when qos is set with `virsh domiftune ...`, while it is in bits in ovs. Therefore, when inbound.average is set to 100, the corresponding value in ovs will be set to 819200.
2) For outbound, it is in kbytes in libvirt, while ingress_policing_XX in the ovs interface is in kbits.
3) Ovs uses tc to set qos, so we can see output from the tc command. This patch is to unify qos control and query on ovs ports.

The conversion explanation is added in this patch:
https://listman.redhat.com/archives/libvir-list/2021-August/msg00422.html
And there are 6 following patches to fix some bugs. See
https://listman.redhat.com/archives/libvir-list/2021-August/msg00423.html

-------
Best Regards,
Jinsheng Zhang

From: Yalan Zhang [mailto:yalzhang@redhat.com]
Sent: October 25, 2021 17:54
To: Michal Prívozník; Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list@redhat.com; Norman Shen(申嘉童); zhangjl02
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface

> [...]
# virsh domiftune rhel vnet5 inbound.average: 100 inbound.peak : 200 inbound.burst : 256 inbound.floor : 0 outbound.average: 0 outbound.peak : 0 outbound.burst : 0 # ip l 17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000 link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff # ovs-vsctl show interface …... ingress_policing_burst: 0 ingress_policing_kpkts_burst: 0 ingress_policing_kpkts_rate: 0 ingress_policing_rate: 0 …... name : vnet5 # tc -d class show dev vnet5 class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0 class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7 # tc -d filter show dev vnet5 parent ffff: (no outputs) For outbound: # virsh dumpxml rhel | grep /bandwidth -B2 <bandwidth> <outbound average='100' peak='200' burst='256'/> </bandwidth> # virsh domiftune rhel vnet9 inbound.average: 0 inbound.peak : 0 inbound.burst : 0 inbound.floor : 0 outbound.average: 100 outbound.peak : 200 outbound.burst : 256 # ovs-vsctl list interface ingress_policing_burst: 2048 ingress_policing_kpkts_burst: 0 ingress_policing_kpkts_rate: 0 ingress_policing_rate: 800 ... # tc -d filter show dev vnet9 parent ffff: filter protocol all pref 49 basic chain 0 filter protocol all pref 49 basic chain 0 handle 0x1 action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb action drop/pipe overhead 0b linklayer unspec ref 1 bind 1 # tc -d class show dev vnet9 (no outputs) ------- Best Regards, Yalan Zhang IRC: yalzhang On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn@redhat.com<mailto:mprivozn@redhat.com>> wrote: On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
Here is my signed-off-by line
Signed-off-by: zhangjl02@inspur.com<mailto:zhangjl02@inspur.com>
Thanks again for reminding:) .
Perfect. Reviewed-by: Michal Privoznik <mprivozn@redhat.com<mailto:mprivozn@redhat.com>> and pushed. Congratulations on your first libvirt contribution! Michal

From: Yalan Zhang
Sent: October 27, 2021 18:35
To: Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list@redhat.com; Norman Shen(申嘉童)
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface

Hi Jinsheng,

Thank you for the explanation. From the statistics above, the tc output for outbound matches. But I'm confused about the inbound statistics:

# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak : 200
inbound.burst : 256
...
# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7

As the value in the libvirt XML is in KB, inbound.average: 100 KB cannot match "rate 819200bit" in the tc output; I supposed it should be 800Kbit. Please help to confirm. The same goes for "ceil 1638Kbit" (maybe it should be 1600Kbit, as inbound.peak is 200).

I have run netperf to test the actual rate, and the result passes. Two VMs were connected to the same bridge, with QoS set on one of them; see the test results below:

# virsh domiftune rhel vnet0
inbound.average: 400
inbound.peak : 500
inbound.burst : 125
inbound.floor : 0
outbound.average: 100
outbound.peak : 200
outbound.burst : 256

Throughput for inbound: 3.92 * 10^6 bits/sec
Throughput for outbound: 0.93 * 10^6 bits/sec

These patches fixed bug [1], which had been closed with deferred resolution. Thank you! And this reminds me of another OVS QoS related bug [2], which was about networks. I tried the scenarios in [2]; there are no changes (not fixed). Just for information. :-)

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1510237
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1826168

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

On Tue, Oct 26, 2021 at 3:23 PM Jinsheng Zhang (张金生)-云服务集团 <zhangjl02@inspur.com> wrote:
Hi Yalan,
1) For inbound, we can use `ovs-vsctl list qos` and `ovs-vsctl list queue` to check them from the openvswitch side. The values can be found in other_config. Inbound is in kilobytes when QoS is set with `virsh domiftune …`, while it is in bits in OVS. Therefore, when inbound.average is set to 100, the corresponding value set in OVS will be 819200.
2) For outbound, the value is in kilobytes in libvirt, while the ingress_policing_* fields of the OVS interface are in kilobits.
3) OVS uses tc to set QoS, so we can still see output from the tc command.
This patch series unifies QoS control and query on OVS ports.
The conversion explanation is added in this patch: https://listman.redhat.com/archives/libvir-list/2021-August/msg00422.html
And there are 6 follow-up patches to fix some bugs. See https://listman.redhat.com/archives/libvir-list/2021-August/msg00423.html
-------
Best Regards,
Jinsheng Zhang
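[The unit conversions described in points 1) and 2) above can be sketched as follows. This is a minimal illustration based only on the numbers quoted in this thread; the helper names are hypothetical and not part of the libvirt code.]

```python
# Hypothetical helpers illustrating the conversions discussed above;
# not the actual libvirt implementation.

def inbound_kb_to_ovs_bits(kilobytes):
    # libvirt inbound values are in kilobytes; the OVS qos record's
    # other_config values are in bits per second (K taken as 1024).
    return kilobytes * 1024 * 8

def outbound_kb_to_ovs_kbit(kilobytes):
    # libvirt outbound values are in kilobytes; the OVS interface
    # ingress_policing_* fields are in kilobits.
    return kilobytes * 8

# inbound.average = 100 -> 819200 in the OVS qos other_config
assert inbound_kb_to_ovs_bits(100) == 819200
# outbound.average = 100 -> ingress_policing_rate: 800
assert outbound_kb_to_ovs_kbit(100) == 800
# outbound.burst = 256 -> ingress_policing_burst: 2048
assert outbound_kb_to_ovs_kbit(256) == 2048
```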
From: Yalan Zhang [mailto:yalzhang@redhat.com]
Sent: October 25, 2021 17:54
To: Michal Prívozník; Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list@redhat.com; Norman Shen(申嘉童); zhangjl02
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface
Hi Jinsheng,
I have tested the patch and have some questions, could you please help to confirm?
1) For inbound, how to check it from the openvswitch side? tc will still show the statistics, is that expected?
2) For outbound, the peak is ignored. I just can not understand the "ingress_policing_burst: 2048", how can it come from the setting "outbound.burst : 256"?
3) Is the output from tc command expected?
Test inbound:
1. Start the VM with the settings below:
<interface type='bridge'>
<source bridge='ovsbr0'/>
<virtualport type='openvswitch'/>
<bandwidth>
<inbound average='100' peak='200' burst='256'/>
</bandwidth>
...
</interface>
2.
# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak : 200
inbound.burst : 256
inbound.floor : 0
outbound.average: 0
outbound.peak : 0
outbound.burst : 0
# ip l
17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff
# ovs-vsctl list interface
...
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
...
name : vnet5
# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7
# tc -d filter show dev vnet5 parent ffff:
(no outputs)
For outbound:
# virsh dumpxml rhel | grep /bandwidth -B2
<bandwidth>
<outbound average='100' peak='200' burst='256'/>
</bandwidth>
# virsh domiftune rhel vnet9
inbound.average: 0
inbound.peak : 0
inbound.burst : 0
inbound.floor : 0
outbound.average: 100
outbound.peak : 200
outbound.burst : 256
# ovs-vsctl list interface
ingress_policing_burst: 2048
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 800
...
# tc -d filter show dev vnet9 parent ffff:
filter protocol all pref 49 basic chain 0
filter protocol all pref 49 basic chain 0 handle 0x1
action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb action drop/pipe overhead 0b linklayer unspec
ref 1 bind 1
# tc -d class show dev vnet9
(no outputs)
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn@redhat.com> wrote:
On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
Here is my signed-off-by line
Signed-off-by: zhangjl02@inspur.com
Thanks again for reminding :).
Perfect.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
and pushed. Congratulations on your first libvirt contribution!
Michal


Hi Jinsheng,

Got it. Thank you for the explanation!

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

On Thu, Oct 28, 2021 at 4:20 PM Jinsheng Zhang (张金生)-云服务集团 <zhangjl02@inspur.com> wrote:
Hi Yalan,
It seems that there is no output error about the inbound settings in your statistics. 100KB is short for 100 kilobytes, and 1 byte is 8 bits; therefore 100 kilobytes is 800 kilobits, which is 800*1024 bits, i.e. 819200 bits, or 800 Kbit for short. Similarly, 200 KB is equal to 1600 Kbit.
From your test results, inbound.average is set to 400 KB, which is 400 * 1024 * 8 bits (approximately 3.2*10^6 bits), and outbound.average is set to 100 KB, which is approximately 0.8*10^6 bits. Considering that peak and burst are larger than average, the netperf test results are meaningful.
For the second bug mentioned: after creating the ovs network, the tc rules are created. But when attaching an interface to an instance, the QoS settings are not added to the port, neither in the XML nor in tc. It is a bug, I think. I will think about fixing it.
-------
Best Regards,
Jinsheng Zhang
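[As a quick arithmetic check of the numbers above, using K = 1024 for kilobytes as this code path does; note that tc itself prints Kbit using K = 1000:]

```python
# 100 KB (K = 1024) expressed in bits:
assert 100 * 1024 * 8 == 819200      # matches "rate 819200bit" in the tc output
assert 819200 == 800 * 1024          # i.e. "800 Kbit" when K is taken as 1024
# 200 KB in bits; tc displays Kbit with K = 1000:
assert (200 * 1024 * 8) // 1000 == 1638   # hence "ceil 1638Kbit"
```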

From: Yalan Zhang
Sent: October 29, 2021 16:52
To: Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list@redhat.com; Norman Shen(申嘉童)
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface

Hi Jinsheng,

I asked because I had mixed up the "K" as 1000. Thank you for the explanation, I'm clear now.

And I found that "inbound.peak : 200" was calculated to "ceil 1638Kbit"; maybe 1600Kbit would be more reasonable? For other interface types, such as nat, it was 1600Kbit in the tc output. Please help to confirm, thank you!

# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak : 200
inbound.burst : 256
...
# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

Hi Yalan, You are right about it. For other interface type, values in tc rules are calculated by multiply 8*1000 instead of 8*1024. I didn’t notice it. To make them uniform, I will fix it in next patch. Really thanks for your help. Best Regards, Jinsheng Zhang 发件人: Yalan Zhang [mailto:yalzhang@redhat.com] 发送时间: 2021年10月29日 16:52 收件人: Jinsheng Zhang (张金生)-云服务集团 抄送: libvir-list@redhat.com; Norman Shen(申嘉童) 主题: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface Hi Jinsheng, I asked as I have mixed the "K" as 1000. Thank you for the explanation, I'm clear now. And I found the "inbound.peak : 200" was calculated to "ceil 1638Kbit", maybe 1600Kbit is more reasonable? as I found for other interface type like nat, it was 1600Kbit from tc output. Please help to confirm, Thank you! # virsh domiftune rhel vnet5 inbound.average: 100 inbound.peak : 200 inbound.burst : 256 ... # tc -d class show dev vnet5 class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0 class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7 ------- Best Regards, Yalan Zhang IRC: yalzhang On Thu, Oct 28, 2021 at 4:20 PM Jinsheng Zhang (张金生)-云服务集团 <zhangjl02@inspur.com<mailto:zhangjl02@inspur.com>> wrote: Hi Yalan, It seems that there is no output error abount inbound settings from your statistics. 100KB is short for 100 kilobytes, and 1 byte is 8 bit, therefore 100 kilobytes is 800 kilobit and is also 1024*800 bit which is 819200 bit or 800 Kbit for short. Similarly, 200 KB is equal to 1600Kbit. From your test results, inbound.average is set to 400 KB which is 400 * 1024 * 8 bit(approximately 3.2*10^6 bits). outbound.average is set to 100 KB which is approximately 0.8*10^6 bits. Considering peek and burst is larger than average. The netperf test result is meaningful. 
For the second bug mentioned, after create the ovs-net, tc rules are created. But when attach an interface to an instance, qos settings is not add to port neither in xml or tc . It is a bug, I think. I will think about fixing this. ------- Best Regards, Jinsheng Zhang 发件人: Yalan Zhang [mailto:yalzhang@redhat.com<mailto:yalzhang@redhat.com>] 发送时间: 2021年10月27日 18:35 收件人: Jinsheng Zhang (张金生)-云服务集团 抄送: libvir-list@redhat.com<mailto:libvir-list@redhat.com>; Norman Shen(申嘉童) 主题: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface Hi Jinsheng, Thank you for the explanation. From the statistics above, the tc outputs for outbound matches. But I'm confused about the inbound statistics: # virsh domiftune rhel vnet5 inbound. approximately 3.2*10^6 bits: 100 inbound.peak : 200 inbound.burst : 256 ... # tc -d class show dev vnet5 class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0 class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7 As the value in libvirt xml is KB, inbound.average: *100 KB* can not match with *"rate 819200bit"* in tc outputs, I supposed it should be 800Kbit. Please help to confirm. And so does "ceil 1638Kbit" (may be it should be 1600Kbit as "inbound.peak : 200"). I have run netperf to test the actual rate, the result is pass. 2 vm connected to the same bridge, set one vm with Qos, see test results below: # virsh domiftune rhel vnet0 inbound.average: 400 inbound.peak : 500 inbound.burst : 125 inbound.floor : 0 outbound.average: 100 outbound.peak : 200 outbound.burst : 256 Throughput for inbound: 3.92 * 10^6bits/sec Throughput for outbound: 0.93 * 10^6bits/sec These patches fixed the bug [1] which closed with deferred resolution. Thank you! And this reminds me of another ovs Qos related bug [2], which was about network. 
And I tried the scenarios in [2]; there are no changes (not fixed). Just for information. :-)

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1510237
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1826168

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

On Tue, Oct 26, 2021 at 3:23 PM Jinsheng Zhang (张金生)-云服务集团 <zhangjl02@inspur.com> wrote:

Hi Yalan,

1) For inbound, we can use `ovs-vsctl list qos` and `ovs-vsctl list queue` to check them from the openvswitch side. The values can be found in other_config. Inbound is in kilobytes when setting qos with `virsh domiftune ...`, while it is in bits in ovs. Therefore, when inbound.average is set to 100, the corresponding value in ovs will be set to 819200.
2) For outbound, it is in kilobytes in libvirt, and ingress_policing_XX on the ovs interface is in kilobits.
3) Ovs uses tc to set qos, so we can see output from the tc command. This patch unifies qos control and query on ovs ports.

The conversion explanation is added in this patch: https://listman.redhat.com/archives/libvir-list/2021-August/msg00422.html
And there are 6 follow-up patches to fix some bugs. See https://listman.redhat.com/archives/libvir-list/2021-August/msg00423.html

-------
Best Regards,
Jinsheng Zhang

From: Yalan Zhang [mailto:yalzhang@redhat.com]
Sent: October 25, 2021 17:54
To: Michal Prívozník; Jinsheng Zhang (张金生)-云服务集团
Cc: libvir-list@redhat.com; Norman Shen (申嘉童); zhangjl02
Subject: Re: [PATCH v3 0/4] Add qemu support setting qos via ovs on ovs interface

Hi Jinsheng,

I have tested the patch and have some questions; could you please help to confirm?
1) For inbound, how to check it from the openvswitch side? tc will still show the statistics; is that expected?
2) For outbound, the peak is ignored. And I just cannot understand the "ingress_policing_burst: 2048"; how can it come from the setting "outbound.burst : 256"?
3) Is the output from the tc command expected?

Test inbound:
1.
start vm with the settings below:

<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <bandwidth>
    <inbound average='100' peak='200' burst='256'/>
  </bandwidth>
  ...
</interface>

2.
# virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak   : 200
inbound.burst  : 256
inbound.floor  : 0
outbound.average: 0
outbound.peak  : 0
outbound.burst : 0

# ip l
17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff

# ovs-vsctl list interface
......
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
......
name                : vnet5

# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7

# tc -d filter show dev vnet5 parent ffff:
(no outputs)

For outbound:

# virsh dumpxml rhel | grep /bandwidth -B2
      <bandwidth>
        <outbound average='100' peak='200' burst='256'/>
      </bandwidth>

# virsh domiftune rhel vnet9
inbound.average: 0
inbound.peak   : 0
inbound.burst  : 0
inbound.floor  : 0
outbound.average: 100
outbound.peak  : 200
outbound.burst : 256

# ovs-vsctl list interface
ingress_policing_burst: 2048
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 800
...
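The ingress_policing values above can be reproduced with a small unit-conversion sketch (the helper name is hypothetical; it assumes, as explained elsewhere in the thread, that libvirt outbound values are in kilobytes while the OVS ingress_policing fields are in kilobits):

```python
def outbound_to_ovs(average_kb, burst_kb):
    """Map libvirt outbound settings (kilobytes) to OVS ingress
    policing fields (kilobits): multiply by 8 bits per byte."""
    return {
        "ingress_policing_rate": average_kb * 8,   # kbit/s
        "ingress_policing_burst": burst_kb * 8,    # kbit
    }

# outbound average='100' burst='256' from the test above
print(outbound_to_ovs(100, 256))
# {'ingress_policing_rate': 800, 'ingress_policing_burst': 2048}
```

This matches "ingress_policing_rate: 800" and "ingress_policing_burst: 2048" in the `ovs-vsctl list interface` output, and answers question 2) above.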
# tc -d filter show dev vnet9 parent ffff:
filter protocol all pref 49 basic chain 0
filter protocol all pref 49 basic chain 0 handle 0x1
        action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb action drop/pipe overhead 0b linklayer unspec
        ref 1 bind 1

# tc -d class show dev vnet9
(no outputs)

-------
Best Regards,
Yalan Zhang
IRC: yalzhang

On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn@redhat.com> wrote:
On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote:
Here is my signed-off-by line
Signed-off-by: zhangjl02@inspur.com
Thanks again for the reminder :)
Perfect.

Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

and pushed. Congratulations on your first libvirt contribution!

Michal
participants (4)
- Jinsheng Zhang (张金生)-云服务集团
- Michal Prívozník
- Yalan Zhang
- zhangjl02