[libvirt] [PATCH 0/6] tests: misc collection of improvements
by Cole Robinson
Small collection of test suite improvements that were getting mixed
in with other patches I'm working on.
Patch #1 is a REGENERATE tweak for qemu argvs (this is already on list)
Patch #2 fixes running schema tests from top git dir
Patch #3-4 simplify the domain xml2xml tests
Patch #5 adds a new driver agnostic genericxml2xml test (to be used later)
Patch #6 adds QEMUCaps plumbing to qemuxml2xml (to be used later)
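For a sense of what the driver-agnostic test in patch #5 looks like, below is a
rough sketch of a test case built on the shared compare helper from patch #3.
The helper name testCompareDomXML2XMLFiles(), its signature, and the file-scope
caps/xmlopt variables are assumptions based on the series description, not a
verbatim copy of the patch:

static int
testXML2XMLHelper(const void *opaque)
{
    /* Sketch only: assumes generic caps/xmlopt are set up in mymain()
     * and that patch #3 exposes testCompareDomXML2XMLFiles() roughly
     * as used here. */
    const char *name = opaque;
    char *xml_in = NULL;
    int ret = -1;

    if (virAsprintf(&xml_in, "%s/genericxml2xmlindata/generic-%s.xml",
                    abs_srcdir, name) < 0)
        goto cleanup;

    /* Parse with the generic driver, format back out, and diff against
     * the expected output (here simply the input file itself). */
    ret = testCompareDomXML2XMLFiles(caps, xmlopt, xml_in, xml_in, false);

 cleanup:
    VIR_FREE(xml_in);
    return ret;
}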
Cole Robinson (6):
tests: Add newlines with VIR_TEST_REGENERATE_OUTPUT
tests: Fix running schematests directly from topdir
tests: Share domain XML2XML compare helper
tests: qemuxml2xml: drop early file loading
tests: add genericxml2xmltest
tests: qemuxml2xml: Wire up QEMUCaps usage
tests/Makefile.am | 9 ++
tests/bhyvexml2xmltest.c | 30 +------
tests/capabilityschematest | 2 +-
tests/domaincapsschematest | 2 +-
tests/domainschematest | 4 +-
tests/domainsnapshotschematest | 2 +-
tests/genericxml2xmlindata/generic-disk-virtio.xml | 45 ++++++++++
.../genericxml2xmloutdata/generic-disk-virtio.xml | 45 ++++++++++
tests/genericxml2xmltest.c | 83 +++++++++++++++++++
tests/interfaceschematest | 2 +-
tests/lxcxml2xmltest.c | 50 +----------
tests/networkschematest | 2 +-
tests/nodedevschematest | 2 +-
tests/nwfilterschematest | 2 +-
tests/qemuxml2xmltest.c | 96 ++++++----------------
tests/secretschematest | 2 +-
tests/storagepoolschematest | 2 +-
tests/storagevolschematest | 2 +-
tests/test-lib.sh | 9 +-
tests/testutils.c | 47 ++++++++++-
tests/testutils.h | 6 ++
21 files changed, 281 insertions(+), 163 deletions(-)
create mode 100644 tests/genericxml2xmlindata/generic-disk-virtio.xml
create mode 100644 tests/genericxml2xmloutdata/generic-disk-virtio.xml
create mode 100644 tests/genericxml2xmltest.c
--
2.5.0
[libvirt] [PATCH] Add missing virxdrdefs.h include to log_protocol
by Roman Bogorodskiy
Commit 2b6f6ad introduced the virxdrdefs.h header with
common definitions to be included in the protocol files,
but logging/log_protocol.x was missed, so add it there as well.
Hopefully this fixes build on OS X.
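For context, virxdrdefs.h centralizes XDR portability definitions so that each
protocol file does not have to repeat them. The snippet below is only an
illustration of the kind of shim it provides; the exact contents and macro
names are assumptions, not quoted from the header:

/* Illustrative only -- the real virxdrdefs.h may differ. Some platforms
 * (e.g. OS X, Cygwin) provide xdr_u_int64_t rather than xdr_uint64_t,
 * so a shared header can map one name to the other for every protocol
 * file that includes it. */
#if defined(HAVE_XDR_U_INT64_T) && !defined(HAVE_XDR_UINT64_T)
# define xdr_uint64_t xdr_u_int64_t
#endif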
---
src/logging/log_protocol.x | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/logging/log_protocol.x b/src/logging/log_protocol.x
index a07334f..b0ac31b 100644
--- a/src/logging/log_protocol.x
+++ b/src/logging/log_protocol.x
@@ -2,6 +2,7 @@
*/
%#include "internal.h"
+%#include "virxdrdefs.h"
typedef opaque virLogManagerProtocolUUID[VIR_UUID_BUFLEN];
--
2.4.6
[libvirt] [PATCH V2 0/3] Xen: Support vif outgoing bandwidth QoS
by Jim Fehlig
Happy New Year!
This small series adds support for specifying vif outgoing rate limits
in Xen. The first patch adds support for converting rate limits between
sexpr config and domXML. The second patch does the same for xl/xm config.
The third patch adds outgoing rate limiting to the libxl driver.
V1 here
https://www.redhat.com/archives/libvir-list/2015-December/msg00899.html
In V2 I've extended support to include the sexpr config format
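To illustrate the configuration being converted (MAC, bridge name and numbers
are made up for the example, and the exact rate syntax accepted by the parser
is assumed to follow Xen's usual vif rate format): the domain XML carries the
limit in the interface's <bandwidth> element, while the xl/xm side expresses
it as a rate= key in the vif string.

<!-- domain XML: outbound limit on the vif (average is in kilobytes/sec) -->
<interface type='bridge'>
  <mac address='00:16:3e:66:92:9c'/>
  <source bridge='xenbr0'/>
  <bandwidth>
    <outbound average='10240'/>
  </bandwidth>
</interface>

# roughly the same limit in xl config (values illustrative)
vif = [ 'mac=00:16:3e:66:92:9c,bridge=xenbr0,rate=10MB/s' ]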
Jim Fehlig (3):
xenconfig: support vif bandwidth in sexpr parser and formatter
xenconfig: support vif bandwidth in xm and xl parser and formatter
libxl: support vif outgoing bandwidth QoS
src/libvirt_xenconfig.syms | 1 +
src/libxl/libxl_conf.c | 39 +++++++++++++
src/xenconfig/xen_common.c | 30 ++++++++++
src/xenconfig/xen_sxpr.c | 74 +++++++++++++++++++++++++
src/xenconfig/xen_sxpr.h | 2 +
tests/sexpr2xmldata/sexpr2xml-vif-rate.sexpr | 11 ++++
tests/sexpr2xmldata/sexpr2xml-vif-rate.xml | 51 +++++++++++++++++
tests/sexpr2xmltest.c | 2 +
tests/xlconfigdata/test-vif-rate.cfg | 26 +++++++++
tests/xlconfigdata/test-vif-rate.xml | 57 +++++++++++++++++++
tests/xlconfigtest.c | 1 +
tests/xml2sexprdata/xml2sexpr-fv-net-rate.sexpr | 10 ++++
tests/xml2sexprdata/xml2sexpr-fv-net-rate.xml | 34 ++++++++++++
tests/xml2sexprtest.c | 1 +
14 files changed, 339 insertions(+)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-vif-rate.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-vif-rate.xml
create mode 100644 tests/xlconfigdata/test-vif-rate.cfg
create mode 100644 tests/xlconfigdata/test-vif-rate.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-net-rate.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-net-rate.xml
--
2.1.4
[libvirt] [PATCH v3 00/14] Use macros for more common virsh command options
by John Ferlan
v2:
http://www.redhat.com/archives/libvir-list/2015-December/msg00766.html
Changes since v2:
Use VIRSH_COMMON_OPT_<optname> for option prefix instead of
VIRSH_<optname>_OPT_COMMON
Patches have a few thumbs up already; figured I'd post it one last time
for perusal and checks of the naming 'algorithm'.
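As a concrete illustration of the pattern (field values here are assumptions;
compare with the VIRSH_COMMON_OPT_CONFIG definition touched by a later patch
in this digest), each common option becomes a macro in virsh.h that a
command's vshCmdOptDef table can drop in directly:

/* Sketch of a common-option macro; exact type/flags are assumed. */
# define VIRSH_COMMON_OPT_DOMAIN(_helpstr)                    \
    {.name = "domain",                                        \
     .type = VSH_OT_DATA,                                     \
     .flags = VSH_OFLAG_REQ,                                  \
     .help = _helpstr                                         \
    }

/* Hypothetical command option table using it: */
static const vshCmdOptDef opts_example[] = {
    VIRSH_COMMON_OPT_DOMAIN(N_("domain name, id or uuid")),
    {.name = NULL}
};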
John Ferlan (14):
virsh: Convert VSH_POOL_ macro to VIRSH_COMMON_OPT_
virsh: Move VIRSH_COMMON_OPT_POOL to virsh.h
virsh: Create macro for common "domain" option
virsh: Create macro for common "persistent" option
virsh: Create macro for common "config" option
virsh: Create macro for common "live" option
virsh: Create macro for common "current" option
virsh: Create macro for common "file" option
virsh: Create macros for common "pool" options
virsh: Create macros for common "vol" options
virsh: Have domain-monitor use common "domain" option
virsh: have snapshot use common "domain" option
virsh: Create macro for common "network" option
virsh: Create macro for common "interface" option
po/POTFILES.in | 1 +
tools/virsh-domain-monitor.c | 77 +---
tools/virsh-domain.c | 911 +++++++++----------------------------------
tools/virsh-interface.c | 37 +-
tools/virsh-network.c | 61 +--
tools/virsh-pool.c | 71 ++--
tools/virsh-snapshot.c | 60 +--
tools/virsh-volume.c | 148 ++-----
tools/virsh.h | 17 +
9 files changed, 334 insertions(+), 1049 deletions(-)
--
2.5.0
Re: [libvirt] [PATCH 1/4] cgroup: Fix possible bug as a result of code motion for vcpu cgroup setup
by Henning Schild
On Mon, 11 Jan 2016 13:50:32 -0500
John Ferlan <jferlan(a)redhat.com> wrote:
> Commit id '90b721e43' moved the virCgroupAddTask call until after the
> vcpupin checks. However, in doing so it missed the case where, if the
> cpumap didn't exist, the code would continue back to the top of the
> current vcpu loop. The result was that virCgroupAddTask wasn't called.
>
> Signed-off-by: John Ferlan <jferlan(a)redhat.com>
> ---
> src/qemu/qemu_cgroup.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
> index 1c406ce..91b3328 100644
> --- a/src/qemu/qemu_cgroup.c
> +++ b/src/qemu/qemu_cgroup.c
> @@ -1079,10 +1079,7 @@ qemuSetupCgroupForVcpu(virDomainObjPtr vm)
> }
> }
>
> - if (!cpumap)
> - continue;
> -
> - if (qemuSetupCgroupCpusetCpus(cgroup_vcpu, cpumap) < 0)
> +        if (cpumap && qemuSetupCgroupCpusetCpus(cgroup_vcpu, cpumap) < 0)
>              goto cleanup;
>      }
>
Good catch, should be applied!
Henning
Re: [libvirt] Fwd: DNS for IPv6 addresses?
by hongming
Hi Yaniv
Please refer to the following.
[root@localhost images]# virsh net-dumpxml dhcp
<network connections='2'>
<name>dhcp</name>
<uuid>066f0d89-67a6-42c8-bdeb-ed9420aaaf4f</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr4' stp='on' delay='0'/>
<mac address='52:54:00:c8:29:9f'/>
<ip address='192.168.123.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.123.2' end='192.168.123.254'/>
</dhcp>
</ip>
<ip family='ipv6' address='2001:db8:ca2:2::1' prefix='64'>
<dhcp>
<range start='2001:db8:ca2:2:1::10' end='2001:db8:ca2:2:1::ff'/>
</dhcp>
</ip>
</network>
[root@localhost images]# virsh dumpxml rhel7.1|grep /interface -B7
<interface type='network'>
<mac address='52:54:00:82:49:b1'/>
<source network='dhcp' bridge='virbr4'/>
<target dev='vnet1'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
[root@localhost images]# virsh domifaddr rhel7.1 --source lease
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet1 52:54:00:82:49:b1 ipv4 192.168.123.171/24
- - ipv6 2001:db8:ca2:2:1::1a/64
[root@localhost images]# cat /var/lib/libvirt/dnsmasq/virbr4.status
[
{
"ip-address": "192.168.123.171",
"mac-address": "52:54:00:82:49:b1",
"expiry-time": 1452594968
},
{
"iaid": "8538545",
"ip-address": "2001:db8:ca2:2:1::1a",
"mac-address": "52:54:00:82:49:b1",
"client-id":
"00:04:ee:53:b7:a8:c7:46:ab:95:d0:86:88:ee:6e:51:a0:2a",
"server-duid": "",
"expiry-time": 1452594971
}
]
Log in to the guest and check the IPv6 address; it is the same as the result
returned by "domifaddr".
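If the guest runs the qemu guest agent, the same addresses can also be queried
from inside the guest rather than from the dnsmasq leases (domain name taken
from the example above):
[root@localhost images]# virsh domifaddr rhel7.1 --source agent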
Thanks
Hongming
On 01/11/2016 03:24 PM, Min Zhan wrote:
> @shyu,
>
> Could you have a look and reply to this question?
>
> Regards,
> Min Zhan
>
> ----- Forwarded Message -----
>> From: "Yaniv Kaul" <ykaul(a)redhat.com>
>> To: libvirt-list(a)redhat.com
>> Sent: Friday, January 8, 2016 5:40:09 PM
>> Subject: [libvirt] DNS for IPv6 addresses?
>>
>> Is there a way to define DNS for IPv6 addresses?
>> Something like:
>> <dns forwardPlainNames='yes'>
>>   <host ip='192.168.200.4'>
>>     <hostname>lago_basic_suite_3_6_storage-iscsi</hostname>
>>   </host>
>> </dns>
>>
>>
>>
>> Only for IPv6?
>> I reckon I can't just use an IPv6 address in the 'IP' attribute?
>>
>> TIA,
>> Y.
>>
>> --
>> libvir-list mailing list
>> libvir-list(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] Software Offloading Performance Issue on increasing VM's (TSO enabled ) pushing traffic
by Piyush R Srivastava1
Hi,
Problem-
Software offloading for VM-generated packets (TSO enabled in the VMs)
degrades severely as the number of VMs on a host increases.
As more VMs (all pushing traffic simultaneously) run on a compute node-
- the % of offloaded packets coming out of the VMs (TSO enabled) on the
tap port / veth pair decreases significantly
- the size of offloaded packets coming out of the VMs (TSO enabled) on the
tap port / veth pair decreases significantly
We are using an OpenStack setup. Throughput for the SNAT test (iperf client
in the VM, server on an external network machine) is SIGNIFICANTLY less than
for the DNAT test (server in the VM, client on an external network machine).
For 50 VMs (25 VMs on each compute node of a 2-compute-node setup), SNAT
throughput is 30% less than DNAT throughput.
I was hoping to get community feedback on what controls the software
offloading of VM packets and how we can improve it.
NOTE- This seems to be one of the bottlenecks in SNAT, affecting throughput
on the TX side of the compute node. Improving it would help SNAT test
network performance.
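Whether segmentation stays offloaded along that path depends on the offload
features active at each hop (guest virtio NIC, tap/vhost, OVS, physical NIC).
A quick way to check the host side is shown below; the interface names are
illustrative and should be replaced with the ones from this setup:
# per-VM tap/veth device
ethtool -k vnet1 | grep -E 'segmentation|scatter-gather|receive-offload'
# physical NIC the traffic eventually leaves through
ethtool -k eth0 | grep -E 'segmentation-offload|generic-receive-offload'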
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Description-
We have a testbed OpenStack deployment. We boot 1, 10 and 25 VM's on a
single compute node and start iperf traffic. ( VM's are iperf client ).
We then simultaneously do tcpdump at the veth-pair connecting the VM to the
OVS Bridges.
Tcpdump data shows that on increasing the VM's on a host, the % of
offloaded packets degrades severely
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Host configuration- 12 cores ( 24 vCPU ), 40 GB RAM
[root@rhel7-25 ~]# uname -a
Linux rhel7-25.in.ibm.com 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38
EST 2015 x86_64 x86_64 x86_64 GNU/Linux
VM MTU is set to 1450
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Analysis-
Following is the % of non-offloaded packets observed at the tap ports /
veth pairs (connecting the VMs to the OVS bridge)
|------------------------|--------------------------|
| VMs on 1 Compute Node  | % Non-Offloaded packets  |
|------------------------|--------------------------|
| 1                      | 11.11%                   |
| 10                     | 71.78%                   |
| 25                     | 80.44%                   |
|------------------------|--------------------------|
Thus we see significant degradation in offloaded packets when 10 or 25 VMs
(TSO enabled) are sending iperf data simultaneously.
"Non-offloaded" here means an Ethernet frame of size 1464 (VM MTU is 1450).
In other words, as the number of VMs on a host increases, the majority of
packets coming out of the (TSO-enabled) VMs are non-offloaded.
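For reference, a rough way to recompute the non-offloaded share from one of
the per-port tcpdump logs (how the original numbers were derived is not
stated, so this simply counts data-carrying frames toward the iperf server
whose payload is at most the 1398-byte single segment seen in the captures;
the threshold and log file name come from this mail):
awk '/> 1\.1\.1\.34\.5001/ && $NF + 0 > 0 { total++; if ($NF + 0 <= 1398) nonoff++ }
     END { printf "non-offloaded %d/%d = %.2f%%\n", nonoff, total, 100 * nonoff / total }' qvoed7aa38d-22.log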
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Tcpdump details-
Iperf Server IP- 1.1.1.34
For 1 VM, we see mostly offloaded packets, and large offloaded frames-
[piyush@rhel7-34 25]$ cat qvoed7aa38d-22.log | grep "> 1.1.1.34.5001" |
head -n 30
14:36:26.331073 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 74:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 0
14:36:26.331917 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 66:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 0
14:36:26.331946 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 90:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 24
14:36:26.331977 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.332018 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 5658:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 5592
14:36:26.332527 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.332560 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 9852:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 9786
14:36:26.333024 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 8454:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 8388
14:36:26.333054 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 7056:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 6990
14:36:26.333076 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 4260:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 4194
14:36:26.333530 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 16842:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 16776
14:36:26.333568 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 4260:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 4194
14:36:26.333886 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 21036:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 20970
14:36:26.333925 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 2862:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 2796
14:36:26.334303 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 21036:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 20970
14:36:26.334349 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 2862:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 2796
14:36:26.334741 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 22434:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 22368
14:36:26.335118 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 25230:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 25164
14:36:26.335566 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 25230:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 25164
14:36:26.336007 fa:16:3e:98:41:8b > fa:16:3e:ef:5f:16, IPv4, length 23832:
10.20.7.3.50395 > 1.1.1.34.5001: tcp 23766
For 10 VMs, we see a reduction in the number of offloaded packets, and the
size of the offloaded packets is also reduced. Tcpdump for one of the 10 VMs
(similar characterization for all 10 VMs)-
[piyush@rhel7-34 25]$ cat qvo255d8cdd-90.log | grep "> 1.1.1.34.5001" |
head -n 30
15:09:25.024790 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 74:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 0
15:09:25.026834 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 66:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 0
15:09:25.026870 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 90:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 24
15:09:25.027186 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.027213 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 5658:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 5592
15:09:25.032500 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 5658:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 5592
15:09:25.032539 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 1464:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 1398
15:09:25.032567 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035122 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035631 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.035661 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.038508 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.038904 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
15:09:25.039300 fa:16:3e:b9:f8:ec > fa:16:3e:c1:de:cc, IPv4, length 7056:
10.20.18.3.36798 > 1.1.1.34.5001: tcp 6990
For 25 VMs, we see very few offloaded packets, and the size of the offloaded
packets is also reduced. Tcpdump for one of the 25 VMs (similar
characterization for all 25 VMs)-
15:52:31.544316 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.544340 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545034 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545066 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 5658:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 5592
15:52:31.545474 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545501 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 2862:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 2796
15:52:31.545539 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 2862:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 2796
15:52:31.545572 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 7056:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 6990
15:52:31.545736 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545807 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545813 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545934 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545956 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.545974 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
15:52:31.546012 fa:16:3e:3c:7d:78 > fa:16:3e:aa:af:d5, IPv4, length 1464:
10.20.10.3.45892 > 1.1.1.34.5001: tcp 1398
Thanks and regards,
Piyush Raman
Mail: pirsriva(a)in.ibm.com
[libvirt] [PATCH] virsh: Fix alignment in VIRSH_COMMON_OPT_CONFIG definition
by Andrea Bolognani
---
Pushed as trivial and safe for freeze.
tools/virsh.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/virsh.h b/tools/virsh.h
index 94d012a..8b5e5ba 100644
--- a/tools/virsh.h
+++ b/tools/virsh.h
@@ -78,8 +78,8 @@
# define VIRSH_COMMON_OPT_CONFIG(_helpstr) \
{.name = "config", \
- .type = VSH_OT_BOOL, \
- .help = _helpstr \
+ .type = VSH_OT_BOOL, \
+ .help = _helpstr \
} \
# define VIRSH_COMMON_OPT_LIVE(_helpstr) \
--
2.5.0
[libvirt] Bug in RPC code causes failure to start LXC container using virDomainCreateXMLWithFiles
by Ben Gray
Hi,
Occasionally when trying to start LXC containers with fds I get the
following error:
virNetMessageDupFD:562 : Unable to duplicate FD -1: Bad file
descriptor
I tracked it down to the code that handles EAGAIN errors from
recvfd. In such cases the virNetMessageDecodeNumFDs function may be
called multiple times from virNetServerClientDispatchRead and each time
it overwrites the msg->fds array. In the best case (when msg->donefds
== 0) this results in a memory leak, in the worse case it will leak any
fd's already in msg->fds and cause subsequent failures when dup is called.
A very similar problem is mention here:
https://www.redhat.com/archives/libvir-list/2012-December/msg01306.html
Below is my patch to fix the issue.
--- a/src/rpc/virnetserverclient.c 2015-01-23 11:46:24.000000000 +0000
+++ b/src/rpc/virnetserverclient.c 2015-11-26 15:30:51.214462290 +0000
@@ -1107,36 +1107,40 @@
/* Now figure out if we need to read more data to get some
* file descriptors */
- if (msg->header.type == VIR_NET_CALL_WITH_FDS &&
- virNetMessageDecodeNumFDs(msg) < 0) {
- virNetMessageQueueServe(&client->rx);
- virNetMessageFree(msg);
- client->wantClose = true;
- return; /* Error */
- }
+ if (msg->header.type == VIR_NET_CALL_WITH_FDS) {
+ size_t i;
- /* Try getting the file descriptors (may fail if blocking) */
- for (i = msg->donefds; i < msg->nfds; i++) {
- int rv;
-        if ((rv = virNetSocketRecvFD(client->sock, &(msg->fds[i]))) < 0) {
+ if (msg->nfds == 0 &&
+ virNetMessageDecodeNumFDs(msg) < 0) {
virNetMessageQueueServe(&client->rx);
virNetMessageFree(msg);
client->wantClose = true;
- return;
+ return; /* Error */
}
- if (rv == 0) /* Blocking */
- break;
- msg->donefds++;
- }
- /* Need to poll() until FDs arrive */
- if (msg->donefds < msg->nfds) {
- /* Because DecodeHeader/NumFDs reset bufferOffset, we
- * put it back to what it was, so everything works
- * again next time we run this method
- */
- client->rx->bufferOffset = client->rx->bufferLength;
- return;
+ /* Try getting the file descriptors (may fail if blocking) */
+ for (i = msg->donefds; i < msg->nfds; i++) {
+ int rv;
+            if ((rv = virNetSocketRecvFD(client->sock, &(msg->fds[i]))) < 0) {
+ virNetMessageQueueServe(&client->rx);
+ virNetMessageFree(msg);
+ client->wantClose = true;
+ return;
+ }
+ if (rv == 0) /* Blocking */
+ break;
+ msg->donefds++;
+ }
+
+ /* Need to poll() until FDs arrive */
+ if (msg->donefds < msg->nfds) {
+ /* Because DecodeHeader/NumFDs reset bufferOffset, we
+ * put it back to what it was, so everything works
+ * again next time we run this method
+ */
+ client->rx->bufferOffset = client->rx->bufferLength;
+ return;
+ }
}
/* Definitely finished reading, so remove from queue */