[libvirt] [RFC PATCH 0/2] Add new mdev type for aggregated resources
by Zhenyu Wang
The current mdev device create interface depends on fixed mdev types, taking a uuid
from the user to create an mdev device instance. If a user wants a customized
amount of resources for an mdev device, the only option is to create a new mdev
type for it, which is not flexible.
To allow creating user-defined resources for mdev, this RFC tries
to extend the mdev create interface by adding a new "instances=xxx" parameter
after the uuid. For a target mdev type that supports aggregation, this
creates a new mdev device whose resources combine the given number of
instances, e.g.
echo "<uuid>,instances=10" > create
A VM manager, e.g. libvirt, can check an mdev type for the "aggregation"
attribute to see whether it supports this setting. A new sysfs attribute
"instances" is created for each mdev device to show the allocated number.
A default value of 1, or the absence of the "instances" file, can be used
as a compatibility check.
This RFC also creates a new KVMGT type with minimal vGPU resources, which can be
combined with the "instances=x" setting to allocate the exact resources the user
wants; see the sketch below.
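For illustration, a rough sketch of the flow a VM manager could follow under this proposal (the "aggregation" and "instances" attributes are introduced by this RFC, not yet upstream; the parent device path, type name and uuid are made-up examples):
# check whether the target type supports aggregation
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/aggregation
# create an mdev device combining 10 instances of the minimal type
echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001,instances=10" > \
    /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create
# read back the allocated number; a value of 1 or a missing file means no aggregation
cat /sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1001/instances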
Zhenyu Wang (2):
vfio/mdev: Add new instances parameters for mdev create
drm/i915/gvt: Add new aggregation type
drivers/gpu/drm/i915/gvt/gvt.c | 26 ++++++++++++---
drivers/gpu/drm/i915/gvt/gvt.h | 14 +++++---
drivers/gpu/drm/i915/gvt/kvmgt.c | 9 +++--
drivers/gpu/drm/i915/gvt/vgpu.c | 56 ++++++++++++++++++++++++++++----
drivers/s390/cio/vfio_ccw_ops.c | 3 +-
drivers/vfio/mdev/mdev_core.c | 11 ++++---
drivers/vfio/mdev/mdev_private.h | 6 +++-
drivers/vfio/mdev/mdev_sysfs.c | 42 ++++++++++++++++++++----
include/linux/mdev.h | 3 +-
samples/vfio-mdev/mbochs.c | 3 +-
samples/vfio-mdev/mdpy.c | 3 +-
samples/vfio-mdev/mtty.c | 3 +-
12 files changed, 141 insertions(+), 38 deletions(-)
--
2.18.0.rc2
[libvirt] [PATCH] conf: Fix typos in pcie controllers' name
by Han Han
Signed-off-by: Han Han <hhan@redhat.com>
---
src/conf/domain_addr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/conf/domain_addr.c b/src/conf/domain_addr.c
index 442e6aab94..e4ed143b76 100644
--- a/src/conf/domain_addr.c
+++ b/src/conf/domain_addr.c
@@ -158,9 +158,9 @@ virDomainPCIAddressFlagsCompatible(virPCIDeviceAddressPtr addr,
} else if (devFlags & VIR_PCI_CONNECT_TYPE_PCIE_ROOT_PORT) {
connectStr = "pcie-root-port";
} else if (devFlags & VIR_PCI_CONNECT_TYPE_PCIE_SWITCH_UPSTREAM_PORT) {
- connectStr = "pci-switch-upstream-port";
+ connectStr = "pcie-switch-upstream-port";
} else if (devFlags & VIR_PCI_CONNECT_TYPE_PCIE_SWITCH_DOWNSTREAM_PORT) {
- connectStr = "pci-switch-downstream-port";
+ connectStr = "pcie-switch-downstream-port";
} else if (devFlags & VIR_PCI_CONNECT_TYPE_DMI_TO_PCI_BRIDGE) {
connectStr = "dmi-to-pci-bridge";
} else if (devFlags & VIR_PCI_CONNECT_TYPE_PCIE_TO_PCI_BRIDGE) {
--
2.19.1
[libvirt] [PATCHv5 00/19] Introduce x86 Cache Monitoring Technology (CMT)
by Wang Huaqiang
This series of patches, together with the series already merged, introduces
the x86 Cache Monitoring Technology (CMT) to libvirt by interacting
with the kernel resource control (resctrl) interface. CMT is one of the
Intel(R) x86 CPU features belonging to the Resource Director
Technology (RDT). CMT reports the occupancy of the last level cache,
which is shared by all CPU cores.
The v1 series introduced an original and complete feature for CMT.
The v2 and v3 patches addressed the host capability reporting for CMT.
v4 addressed monitoring the cache occupancy of a VM vcpu thread set and
reporting it through a virsh command.
We have had several discussions about enabling CMT; please refer to the
following links for the RFCs.
RFCv3
https://www.redhat.com/archives/libvir-list/2018-August/msg01213.html
RFCv2
https://www.redhat.com/archives/libvir-list/2018-July/msg00409.html
https://www.redhat.com/archives/libvir-list/2018-July/msg01241.html
RFCv1
https://www.redhat.com/archives/libvir-list/2018-June/msg00674.html
The merged commits, for the host capability of CMT, are listed below.
6af8417415508c31f8ce71234b573b4999f35980
8f6887998bf63594ae26e3db18d4d5896c5f2cb4
58fcee6f3a2b7e89c21c1fb4ec21429c31a0c5b8
12093f1feaf8f5023dcd9d65dff111022842183d
a5d293c18831dcf69ec6195798387fbb70c9f461
1. Why is CMT necessary in libvirt?
The perf events for 'CMT, MBML, MBMT' have been phased out since Linux
kernel commit c39a0e2c8850f08249383f2425dbd8dbe4baad69, so libvirt's
perf-based cmt and mbm no longer work with the latest Linux kernel. These
patches add the CMT feature to libvirt through the kernel resctrl
filesystem interface.
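For background, this is roughly what the resctrl interface looks like when used by hand; a minimal sketch assuming a kernel with resctrl and CMT support (the monitoring group name and PID are made up for illustration):
# mount the resctrl filesystem and create a monitoring group
mount -t resctrl resctrl /sys/fs/resctrl
mkdir /sys/fs/resctrl/mon_groups/example_group
# add a vcpu thread PID to the monitoring group
echo 12345 > /sys/fs/resctrl/mon_groups/example_group/tasks
# read the last level cache occupancy, in bytes, for cache bank 0
cat /sys/fs/resctrl/mon_groups/example_group/mon_data/mon_L3_00/llc_occupancy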
2. Creating a cache monitoring group (cache monitor).
The main interface for creating a monitoring group is the domain XML file.
The proposed configuration looks like:
<cputune>
<cachetune vcpus='1'>
<cache id='0' level='3' type='code' size='7680' unit='KiB'/>
<cache id='1' level='3' type='data' size='3840' unit='KiB'/>
+ <monitor level='3' vcpus='1'/>
</cachetune>
<cachetune vcpus='4-7'>
+ <monitor level='3' vcpus='4-6'/>
</cachetune>
</cputune>
The XML above creates 2 resctrl cache allocation groups and 2 resctrl
monitoring groups.
Changes to a cache monitor take effect on the next boot of the VM, as
sketched below.
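A usage sketch (the domain name is taken from the domstats example below):
# edit the domain XML to add <monitor> elements under <cachetune>
virsh edit ubuntu16.04-base
# restart the VM so the new monitoring groups are created
virsh shutdown ubuntu16.04-base
virsh start ubuntu16.04-base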
3. Showing CMT results through the 'domstats' command.
This adds an interface in the qemu driver to report this information for
resource monitor groups through the command 'virsh domstats --cpu-total'.
Below is a typical output:
# virsh domstats 1 --cpu-total
Domain: 'ubuntu16.04-base'
...
cpu.cache.monitor.count=2
cpu.cache.0.name=vcpus_1
cpu.cache.0.vcpus=1
cpu.cache.0.bank.count=2
cpu.cache.0.bank.0.id=0
cpu.cache.0.bank.0.bytes=4505600
cpu.cache.0.bank.1.id=1
cpu.cache.0.bank.1.bytes=5586944
cpu.cache.1.name=vcpus_4-6
cpu.cache.1.vcpus=4,5,6
cpu.cache.1.bank.count=2
cpu.cache.1.bank.0.id=0
cpu.cache.1.bank.0.bytes=17571840
cpu.cache.1.bank.1.id=1
cpu.cache.1.bank.1.bytes=29106176
Changes in v5:
- qemu: Setting up vcpu and adding pids to resctrl monitor groups during
re-connection.
- Add the document for domain configuration related to resctrl monitor.
Changes in v4:
v4 addressed monitoring the cache occupancy of a VM vcpu thread set
and reporting it through a virsh command.
- Introduced resctrl default allocation
- Introduced resctrl monitor and default monitor
Changes in v3:
- Addressed John Ferlan's review.
- Typo fixed.
- Removed VIR_ENUM_DECL(virMonitor);
Changes in v2:
- Introduced MBM capability.
- Capability layout changed
* Moved <monitor> from cache <bank> to <cache>
* Renamed <Threshold> to <reuseThreshold>
- Document for 'reuseThreshold' changed.
- Introduced API virResctrlInfoGetMonitorPrefix
- Added more tests, covering standalone CMT, fake new
feature.
- Creating the CMT resource control group will be a
subsequent job.
Wang Huaqiang (19):
docs: Refactor schemas to support default allocation
util: Introduce resctrl monitor for CMT
util: Refactor code for adding PID to the resource group
util: Add interface for adding PID to monitor
util: Refactor code for determining allocation path
util: Add monitor interface to determine path
util: Refactor code for creating resctrl group
util: Add interface for creating monitor group
util: Add more interfaces for resctrl monitor
util: Introduce default monitor
conf: Refactor code for matching existing resctrls
conf: Refactor virDomainResctrlAppend
conf: Add resctrl monitor configuration
Util: Add function for checking if monitor is running
qemu: enable resctrl monitor in qemu
conf: Add a 'id' to virDomainResctrlDef
qemu: refactor qemuDomainGetStatsCpu
qemu: Report cache occupancy (CMT) with domstats
qemu: Setting up vcpu and adding pids to resctrl monitor groups during
reconnection
docs/formatdomain.html.in | 30 +-
docs/schemas/domaincommon.rng | 14 +-
src/conf/domain_conf.c | 327 ++++++++++--
src/conf/domain_conf.h | 12 +
src/libvirt-domain.c | 9 +
src/libvirt_private.syms | 12 +
src/qemu/qemu_driver.c | 271 +++++++++-
src/qemu/qemu_process.c | 52 +-
src/util/virresctrl.c | 562 ++++++++++++++++++++-
src/util/virresctrl.h | 49 ++
tests/genericxml2xmlindata/cachetune-cdp.xml | 3 +
.../cachetune-colliding-monitor.xml | 30 ++
tests/genericxml2xmlindata/cachetune-small.xml | 7 +
tests/genericxml2xmltest.c | 2 +
14 files changed, 1277 insertions(+), 103 deletions(-)
create mode 100644 tests/genericxml2xmlindata/cachetune-colliding-monitor.xml
--
2.7.4
[libvirt] [PATCH] network: honor the ipv6 network option
by Ryan Goodfellow
According to the documentation for the ipv6 network attribute
https://libvirt.org/formatnetwork.html
"When set to yes, the optional parameter ipv6 enables a network
definition with no IPv6 gateway addresses specified to have
guest-to-guest communications."
But this is not the current behavior: the ipv6 attribute is ignored, and
the resulting /proc/sys/net/ipv6/conf/<virbrX>/disable_ipv6 gets set to
1 even when ipv6="yes".
This commit fixes that by checking the ipv6 network attribute during
bridge setup, as sketched below.
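A quick way to verify the behavior (a sketch; the network name 'ipv6net' and bridge 'virbr1' are made-up examples, assuming a <network ipv6='yes'> definition with no IPv6 addresses):
# start the network, then inspect the bridge's IPv6 sysctl
virsh net-start ipv6net
cat /proc/sys/net/ipv6/conf/virbr1/disable_ipv6
# without this fix the value is 1; with it the value is 0, so guest-to-guest
# link-local IPv6 traffic works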
Signed-off-by: Ryan C Goodfellow <rgoodfel@isi.edu>
---
src/network/bridge_driver.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index 4bbc4f5a6d..69022fbfbb 100644
--- a/src/network/bridge_driver.c
+++ b/src/network/bridge_driver.c
@@ -2221,7 +2221,8 @@ networkSetIPv6Sysctls(virNetworkObjPtr obj)
virNetworkDefPtr def = virNetworkObjGetDef(obj);
char *field = NULL;
int ret = -1;
- bool enableIPv6 = !!virNetworkDefGetIPByIndex(def, AF_INET6, 0);
+ bool enableIPv6 = !!virNetworkDefGetIPByIndex(def, AF_INET6, 0) ||
+ def->ipv6nogw;
/* set disable_ipv6 if there are no ipv6 addresses defined for the
* network. But also unset it if there *are* ipv6 addresses, as we
--
2.17.1
[libvirt] [PATCH] qemu: Put format=raw onto cmd line for SCSI passthrough
by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1632833
When doing SCSI passthrough we don't put format= onto the
command line. This causes qemu to probe the format automatically,
which results in a warning in the domain log and possibly in qemu
disabling writes to the first block (according to the warning
message).
Based-on-work-of: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/qemu/qemu_command.c | 2 +-
tests/qemuxml2argvdata/hostdev-scsi-lsi.args | 2 +-
tests/qemuxml2argvdata/hostdev-scsi-readonly.args | 2 +-
tests/qemuxml2argvdata/hostdev-scsi-virtio-scsi.args | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 269276f2f9..1ff593c657 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -4841,7 +4841,7 @@ qemuBuildSCSIHostdevDrvStr(virDomainHostdevDefPtr dev,
} else {
if (!(source = qemuBuildSCSIHostHostdevDrvStr(dev)))
goto error;
- virBufferAsprintf(&buf, "file=/dev/%s,if=none", source);
+ virBufferAsprintf(&buf, "file=/dev/%s,if=none,format=raw", source);
}
VIR_FREE(source);
diff --git a/tests/qemuxml2argvdata/hostdev-scsi-lsi.args b/tests/qemuxml2argvdata/hostdev-scsi-lsi.args
index d05e2a8bf8..f2048fe920 100644
--- a/tests/qemuxml2argvdata/hostdev-scsi-lsi.args
+++ b/tests/qemuxml2argvdata/hostdev-scsi-lsi.args
@@ -25,6 +25,6 @@ server,nowait \
-drive file=/dev/HostVG/QEMUGuest2,format=raw,if=none,id=drive-ide0-0-0 \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,\
bootindex=1 \
--drive file=/dev/sg0,if=none,id=drive-hostdev0 \
+-drive file=/dev/sg0,if=none,format=raw,id=drive-hostdev0 \
-device scsi-generic,bus=scsi0.0,scsi-id=7,drive=drive-hostdev0,id=hostdev0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
diff --git a/tests/qemuxml2argvdata/hostdev-scsi-readonly.args b/tests/qemuxml2argvdata/hostdev-scsi-readonly.args
index c6336ca441..0d5a0d327d 100644
--- a/tests/qemuxml2argvdata/hostdev-scsi-readonly.args
+++ b/tests/qemuxml2argvdata/hostdev-scsi-readonly.args
@@ -25,7 +25,7 @@ server,nowait \
-drive file=/dev/HostVG/QEMUGuest2,format=raw,if=none,id=drive-ide0-0-0 \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,\
bootindex=1 \
--drive file=/dev/sg0,if=none,id=drive-hostdev0,readonly=on \
+-drive file=/dev/sg0,if=none,format=raw,id=drive-hostdev0,readonly=on \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=4,lun=8,\
drive=drive-hostdev0,id=hostdev0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
diff --git a/tests/qemuxml2argvdata/hostdev-scsi-virtio-scsi.args b/tests/qemuxml2argvdata/hostdev-scsi-virtio-scsi.args
index 4bf4ce7f82..13a1e9fe95 100644
--- a/tests/qemuxml2argvdata/hostdev-scsi-virtio-scsi.args
+++ b/tests/qemuxml2argvdata/hostdev-scsi-virtio-scsi.args
@@ -25,7 +25,7 @@ server,nowait \
-drive file=/dev/HostVG/QEMUGuest2,format=raw,if=none,id=drive-ide0-0-0 \
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,\
bootindex=1 \
--drive file=/dev/sg0,if=none,id=drive-hostdev0 \
+-drive file=/dev/sg0,if=none,format=raw,id=drive-hostdev0 \
-device scsi-generic,bus=scsi0.0,channel=0,scsi-id=4,lun=8,\
drive=drive-hostdev0,id=hostdev0 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
--
2.18.1
[libvirt] [PATCH v2 00/11] Implement alternative metadata locking
by Michal Privoznik
v2 of:
https://www.redhat.com/archives/libvir-list/2018-October/msg00162.html
diff to v1 (all of this happened in 2/11 only):
- Moved virFileIsDir() and related checks into virSecurityManagerMetadataLock
- Use VIR_APPEND_ELEMENT_COPY_INPLACE() to properly fill FD array
- Lock sockets iff open() succeeds
Michal Prívozník (11):
security: Always spawn process for transactions
security_manager: Rework metadata locking
Revert "security_manager: Load lock plugin on init"
Revert "qemu_conf: Introduce metadata_lock_manager"
Revert "lock_manager: Allow disabling configFile for
virLockManagerPluginNew"
Revert "lock_driver: Introduce VIR_LOCK_MANAGER_ACQUIRE_ROLLBACK"
Revert "lock_driver: Introduce
VIR_LOCK_MANAGER_RESOURCE_TYPE_METADATA"
Revert "_virLockManagerLockDaemonPrivate: Move @hasRWDisks into dom
union"
Revert "lock_driver: Introduce new
VIR_LOCK_MANAGER_OBJECT_TYPE_DAEMON"
Revert "lock_driver_lockd: Introduce
VIR_LOCK_SPACE_PROTOCOL_ACQUIRE_RESOURCE_METADATA flag"
Revert "virlockspace: Allow caller to specify start and length offset
in virLockSpaceAcquireResource"
cfg.mk | 4 +-
src/locking/lock_daemon_dispatch.c | 11 +-
src/locking/lock_driver.h | 12 -
src/locking/lock_driver_lockd.c | 421 ++++++++++-------------------
src/locking/lock_driver_lockd.h | 1 -
src/locking/lock_driver_sanlock.c | 44 +--
src/locking/lock_manager.c | 10 +-
src/lxc/lxc_controller.c | 3 +-
src/lxc/lxc_driver.c | 2 +-
src/qemu/qemu_conf.c | 1 -
src/qemu/qemu_conf.h | 1 -
src/qemu/qemu_driver.c | 3 -
src/security/security_dac.c | 22 +-
src/security/security_manager.c | 233 +++++++---------
src/security/security_manager.h | 19 +-
src/security/security_selinux.c | 21 +-
src/util/virlockspace.c | 15 +-
src/util/virlockspace.h | 4 -
tests/seclabeltest.c | 2 +-
tests/securityselinuxlabeltest.c | 2 +-
tests/securityselinuxtest.c | 2 +-
tests/testutilsqemu.c | 2 +-
tests/virlockspacetest.c | 29 +-
23 files changed, 305 insertions(+), 559 deletions(-)
--
2.18.0
[libvirt] [RFC PATCH auto partition NUMA guest domains v1 0/2] auto partition guests providing the host NUMA topology
by Wim Ten Have
From: Wim ten Have <wim.ten.have@oracle.com>
This patch extends guest domain administration, adding support
to automatically advertise the host NUMA node architecture, obtained
from the host capabilities, under a guest by creating a vNUMA copy.
The mechanism is enabled by setting the check='numa' attribute under
the CPU 'host-passthrough' topology:
<cpu mode='host-passthrough' check='numa' .../>
When enabled, the mechanism automatically renders the NUMA architecture
provided by the host capabilities, evenly balances the guest's reserved
vcpus and memory amongst its composed vNUMA cells, and pins each cell's
allocated vcpus to the physical cpusets of the matching host NUMA node.
This way the host NUMA topology is still in effect under the partitioned
guest domain.
The example below auto-partitions the physical NUMA detail listed by the
host's 'lscpu' into a guest domain vNUMA description.
[root@host ]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 240
On-line CPU(s) list: 0-239
Thread(s) per core: 2
Core(s) per socket: 15
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E7-8895 v2 @ 2.80GHz
Stepping: 7
CPU MHz: 3449.555
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 5586.28
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 38400K
NUMA node0 CPU(s): 0-14,120-134
NUMA node1 CPU(s): 15-29,135-149
NUMA node2 CPU(s): 30-44,150-164
NUMA node3 CPU(s): 45-59,165-179
NUMA node4 CPU(s): 60-74,180-194
NUMA node5 CPU(s): 75-89,195-209
NUMA node6 CPU(s): 90-104,210-224
NUMA node7 CPU(s): 105-119,225-239
Flags: ...
Without the auto partition rendering enabled, the guest 'anuma'
reads: "<cpu mode='host-passthrough' check='none'/>"
<domain type='kvm'>
<name>anuma</name>
<uuid>3f439f5f-1156-4d48-9491-945a2c0abc6d</uuid>
<memory unit='KiB'>67108864</memory>
<currentMemory unit='KiB'>67108864</currentMemory>
<vcpu placement='static'>16</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='none'/>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/anuma.qcow2'/>
With auto partitioning enabled, the guest 'anuma' XML is rewritten
as listed below: "<cpu mode='host-passthrough' check='numa'>"
<domain type='kvm'>
<name>anuma</name>
<uuid>3f439f5f-1156-4d48-9491-945a2c0abc6d</uuid>
<memory unit='KiB'>67108864</memory>
<currentMemory unit='KiB'>67108864</currentMemory>
<vcpu placement='static'>16</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0-14,120-134'/>
<vcpupin vcpu='1' cpuset='15-29,135-149'/>
<vcpupin vcpu='2' cpuset='30-44,150-164'/>
<vcpupin vcpu='3' cpuset='45-59,165-179'/>
<vcpupin vcpu='4' cpuset='60-74,180-194'/>
<vcpupin vcpu='5' cpuset='75-89,195-209'/>
<vcpupin vcpu='6' cpuset='90-104,210-224'/>
<vcpupin vcpu='7' cpuset='105-119,225-239'/>
<vcpupin vcpu='8' cpuset='0-14,120-134'/>
<vcpupin vcpu='9' cpuset='15-29,135-149'/>
<vcpupin vcpu='10' cpuset='30-44,150-164'/>
<vcpupin vcpu='11' cpuset='45-59,165-179'/>
<vcpupin vcpu='12' cpuset='60-74,180-194'/>
<vcpupin vcpu='13' cpuset='75-89,195-209'/>
<vcpupin vcpu='14' cpuset='90-104,210-224'/>
<vcpupin vcpu='15' cpuset='105-119,225-239'/>
</cputune>
<os>
<type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-passthrough' check='numa'>
<topology sockets='8' cores='1' threads='2'/>
<numa>
<cell id='0' cpus='0,8' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='10'/>
<sibling id='1' value='21'/>
<sibling id='2' value='31'/>
<sibling id='3' value='21'/>
<sibling id='4' value='21'/>
<sibling id='5' value='31'/>
<sibling id='6' value='31'/>
<sibling id='7' value='31'/>
</distances>
</cell>
<cell id='1' cpus='1,9' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='21'/>
<sibling id='1' value='10'/>
<sibling id='2' value='21'/>
<sibling id='3' value='31'/>
<sibling id='4' value='31'/>
<sibling id='5' value='21'/>
<sibling id='6' value='31'/>
<sibling id='7' value='31'/>
</distances>
</cell>
<cell id='2' cpus='2,10' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='31'/>
<sibling id='1' value='21'/>
<sibling id='2' value='10'/>
<sibling id='3' value='21'/>
<sibling id='4' value='31'/>
<sibling id='5' value='31'/>
<sibling id='6' value='21'/>
<sibling id='7' value='31'/>
</distances>
</cell>
<cell id='3' cpus='3,11' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='21'/>
<sibling id='1' value='31'/>
<sibling id='2' value='21'/>
<sibling id='3' value='10'/>
<sibling id='4' value='31'/>
<sibling id='5' value='31'/>
<sibling id='6' value='31'/>
<sibling id='7' value='21'/>
</distances>
</cell>
<cell id='4' cpus='4,12' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='21'/>
<sibling id='1' value='31'/>
<sibling id='2' value='31'/>
<sibling id='3' value='31'/>
<sibling id='4' value='10'/>
<sibling id='5' value='21'/>
<sibling id='6' value='21'/>
<sibling id='7' value='31'/>
</distances>
</cell>
<cell id='5' cpus='5,13' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='31'/>
<sibling id='1' value='21'/>
<sibling id='2' value='31'/>
<sibling id='3' value='31'/>
<sibling id='4' value='21'/>
<sibling id='5' value='10'/>
<sibling id='6' value='31'/>
<sibling id='7' value='21'/>
</distances>
</cell>
<cell id='6' cpus='6,14' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='31'/>
<sibling id='1' value='31'/>
<sibling id='2' value='21'/>
<sibling id='3' value='31'/>
<sibling id='4' value='21'/>
<sibling id='5' value='31'/>
<sibling id='6' value='10'/>
<sibling id='7' value='21'/>
</distances>
</cell>
<cell id='7' cpus='7,15' memory='8388608' unit='KiB'>
<distances>
<sibling id='0' value='31'/>
<sibling id='1' value='31'/>
<sibling id='2' value='31'/>
<sibling id='3' value='21'/>
<sibling id='4' value='31'/>
<sibling id='5' value='21'/>
<sibling id='6' value='21'/>
<sibling id='7' value='10'/>
</distances>
</cell>
</numa>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/anuma.qcow2'/>
Finally, the virtual vNUMA detail listed by 'lscpu' inside the auto-partitioned guest 'anuma':
[root@anuma ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E7-8895 v2 @ 2.80GHz
Stepping: 7
CPU MHz: 2793.268
BogoMIPS: 5586.53
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0,8
NUMA node1 CPU(s): 1,9
NUMA node2 CPU(s): 2,10
NUMA node3 CPU(s): 3,11
NUMA node4 CPU(s): 4,12
NUMA node5 CPU(s): 5,13
NUMA node6 CPU(s): 6,14
NUMA node7 CPU(s): 7,15
Flags: ...
Wim ten Have (2):
domain: auto partition guests providing the host NUMA topology
qemuxml2argv: add tests that exercise vNUMA auto partition topology
docs/formatdomain.html.in | 7 +
docs/schemas/cputypes.rng | 1 +
src/conf/cpu_conf.c | 3 +-
src/conf/cpu_conf.h | 1 +
src/conf/domain_conf.c | 166 ++++++++++++++++++
.../cpu-host-passthrough-nonuma.args | 25 +++
.../cpu-host-passthrough-nonuma.xml | 18 ++
.../cpu-host-passthrough-numa.args | 29 +++
.../cpu-host-passthrough-numa.xml | 18 ++
tests/qemuxml2argvtest.c | 2 +
10 files changed, 269 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/cpu-host-passthrough-nonuma.args
create mode 100644 tests/qemuxml2argvdata/cpu-host-passthrough-nonuma.xml
create mode 100644 tests/qemuxml2argvdata/cpu-host-passthrough-numa.args
create mode 100644 tests/qemuxml2argvdata/cpu-host-passthrough-numa.xml
--
2.17.1
[libvirt] ANNOUNCE: virt-manager 2.0.0 released
by Cole Robinson
I'm happy to announce the release of virt-manager 2.0.0!
virt-manager is a desktop application for managing KVM, Xen, and LXC
virtualization via libvirt.
The release can be downloaded from:
http://virt-manager.org/download/
The 2.0.0 version number isn't hugely significant here; the app will
largely look the same to most people. Internally we had the big change
of python3 support, which definitely bumps up the minimum supported
host version virt-manager can run on.
I also took the opportunity to remove some uncommonly used features from
virt-manager's UI, chief among them the host interface management UI. In
practice I don't think this will really impact anyone because it didn't
work that well to begin with. More details are in this thread:
https://www.redhat.com/archives/virt-tools-list/2018-October/msg00032.html
The big changes in this release include:
- Finish port to Python 3 (Radostin Stoyanov, Cole Robinson)
- Improved VM defaults for supported OS: q35 PCIe, usb3, CPU host-model
- Search based OS selection UI for new VMs (Daniel P. Berrangé,
Cole Robinson)
- Track OS name for lifetime of domain in <metadata> XML
- Host interface management UI has been completely removed
- Show domain IP on interface details page (Lin Ma, Cole Robinson)
- More efficient stats polling with AllDomainStats (Simon Kobyda,
Cole Robinson)
- TPM device model and backend UI (Marc-André Lureau, Stefan Berger)
- Show <channel> connection state in UI (Lin Ma)
- Show attached devices in <controller> UI (Lin Ma)
- UI option to plug/unplug VM nic link (Simon Kobyda)
- UI support for disk discard and detect_zeroes (Povilas Kanapickas,
Lin Ma)
- Improved SUSE --location URL/ISO detection (Charles Arnold)
- cli and UI support for SCSI persistent reservations (Lin Ma)
- cli: Add --network mtu.size= option (Anya Harter)
- cli: Add --disk driver.copy_on_read (Anya Harter)
- cli: Add --disk geometry support (Anya Harter)
- cli: Add --sound codec support (Anya Harter)
- cli: Add --hostdev net/char/block for LXC (Lubomir Rintel)
- cli: Add --memorybacking access_mode and source_type (Marc-André
Lureau)
- cli: Add --boot rebootTimout (Yossi Ovadia)
- cli: Add --destroy-on-exit
Thanks to everyone who has contributed to this release through testing,
bug reporting, submitting patches, and otherwise sending in feedback!
Thanks,
Cole
[libvirt] [PATCH] conf: Fix bug in finding alloc through matching vcpus
by Wang Huaqiang
The @alloc object returned by virDomainResctrlVcpuMatch is not
properly referenced and unreferenced in virDomainCachetuneDefParse.
This patch fixes that.
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
---
src/conf/domain_conf.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index cad84b9..eb73eaf 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -18833,7 +18833,7 @@ virDomainResctrlVcpuMatch(virDomainDefPtr def,
* Just updating memory allocation information of that group
*/
if (virBitmapEqual(def->resctrls[i]->vcpus, vcpus)) {
- *alloc = def->resctrls[i]->alloc;
+ *alloc = virObjectRef(def->resctrls[i]->alloc);
break;
}
if (virBitmapOverlaps(def->resctrls[i]->vcpus, vcpus)) {
@@ -19224,8 +19224,6 @@ virDomainMemorytuneDefParse(virDomainDefPtr def,
if (!alloc)
goto cleanup;
new_alloc = true;
- } else {
- alloc = virObjectRef(alloc);
}
for (i = 0; i < n; i++) {
--
2.7.4
[libvirt] [ocaml PATCH] doc: invoke ocamldoc with -colorize-code
by Pino Toscano
This way, the OCaml snippets are colorized.
The OCaml version required is higher than the first version shipping
ocamldoc with this option, so that can be done unconditionally.
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
---
Makefile.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile.in b/Makefile.in
index ad5a036..f119dbc 100644
--- a/Makefile.in
+++ b/Makefile.in
@@ -23,7 +23,7 @@ INSTALL = @INSTALL@
MAKENSIS = @MAKENSIS@
OCAMLDOC = @OCAMLDOC@
-OCAMLDOCFLAGS := -html -sort
+OCAMLDOCFLAGS := -html -sort -colorize-code
SUBDIRS = libvirt examples
--
2.17.2