[libvirt] [PATCH v2 0/4] target-i386: Implement query-cpu-model-expansion
by Eduardo Habkost
This series implements query-cpu-model-expansion on target-i386.
Changes v1 -> v2:
-----------------
This version is highly simplified compared to v1. It contains
only an implementation that will return a limited set of
properties. I have a follow-up series that will extend type=full
expansion to return every single QOM property, but this version
will return the same data for type=static and type=full expansion
for simplicity (except that type=static expansion will use the
"base" CPU model as its base).
This means this version also won't include "pmu" and
"host-cache-info" in full expansion, and won't require special
code for those properties.
The unit test code was also removed in this version, to keep the
series simple and easier to review. Most of the patches in the
previous series were changes just to make the test case work. I
will send the test-case-related changes as a follow-up series.
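For readers unfamiliar with the command, the QMP exchange can be sketched as follows. This is a hypothetical illustration: the model name and the properties shown are examples, not output taken from this series.

```python
import json

# Hypothetical query-cpu-model-expansion request; the client asks for a
# static expansion of a named CPU model plus explicit property overrides.
request = {
    "execute": "query-cpu-model-expansion",
    "arguments": {
        "type": "static",
        "model": {"name": "Haswell", "props": {"pcid": False}},
    },
}

# A static expansion reports an equivalent model built on the "base"
# CPU model, with the relevant properties made explicit.  The property
# set below is an illustrative subset, not the real target-i386 output.
example_response = {
    "return": {
        "model": {
            "name": "base",
            "props": {"pcid": False, "vme": True},
        }
    }
}

wire = json.dumps(request)   # what actually travels over the QMP socket
decoded = json.loads(wire)
```
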
---
Cc: Cornelia Huck <cornelia.huck(a)de.ibm.com>
Cc: Christian Borntraeger <borntraeger(a)de.ibm.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: libvir-list(a)redhat.com
Cc: Jiri Denemark <jdenemar(a)redhat.com>
Cc: "Jason J. Herne" <jjherne(a)linux.vnet.ibm.com>
Cc: Markus Armbruster <armbru(a)redhat.com>
Cc: Richard Henderson <rth(a)twiddle.net>
Cc: Igor Mammedov <imammedo(a)redhat.com>
Cc: Eric Blake <eblake(a)redhat.com>
Eduardo Habkost (4):
target-i386: Reorganize and document CPUID initialization steps
qapi-schema: Comment about full expansion of non-migration-safe models
target-i386: Define static "base" CPU model
target-i386: Implement query-cpu-model-expansion QMP command
qapi-schema.json | 9 ++
target/i386/cpu-qom.h | 2 +
monitor.c | 4 +-
target/i386/cpu.c | 317 ++++++++++++++++++++++++++++++++++++++++++++------
4 files changed, 298 insertions(+), 34 deletions(-)
--
2.11.0.259.g40922b1
[libvirt] Performance about x-data-plane
by Weiwei Jia
Hi,
With QEMU x-data-plane, I find the performance has not improved
very much. Please see the following two settings.
Setting 1: an I/O thread in the host OS (VMM) reads 4KB at a time from
a disk (8GB in total). The I/O thread is pinned to pCPU 5, which
serves it exclusively. I find the throughput is around 250 MB/s.
Setting 2: an I/O thread in the guest OS (VM) reads 4KB at a time from
a virtual disk (8GB in total). The I/O thread is pinned to vCPU 5, and
the vCPU 5 thread is pinned to pCPU 5, so that vCPU 5 handles this I/O
thread exclusively and pCPU 5 serves vCPU 5 exclusively. To keep
vCPU 5 from going idle, I also pin one CPU-intensive thread
(while (1) {i++}) to vCPU 5 so that the I/O thread on it can be served
without delay. With this setting, I find the throughput for this I/O
thread is around 190 MB/s.
NOTE: For setting 2, I also pin the dedicated QEMU I/O thread
(x-data-plane) in the host OS to a pCPU so that it handles I/O
requests from the guest OS exclusively.
I think for setting 2, the performance of the I/O thread should be
almost the same as in setting 1. I cannot understand why it is
60 MB/s lower than setting 1. I am wondering whether there is
something wrong with my x-data-plane setting or the virtio setting
for the VM. Would you please give me some hints? Thank you.
Libvirt version: 2.4.0
QEMU version: 2.3.0
The libvirt XML configuration is as follows (I only start
one VM, with the config below).
<domain type='kvm' id='1'
xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>vm</name>
<uuid>3290a8d0-9d9f-b2c4-dd46-5d0d8a730cd6</uuid>
<memory unit='KiB'>8290304</memory>
<currentMemory unit='KiB'>8290304</currentMemory>
<vcpu placement='static'>15</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='3'/>
<vcpupin vcpu='4' cpuset='4'/>
<vcpupin vcpu='5' cpuset='5'/>
<vcpupin vcpu='6' cpuset='6'/>
<vcpupin vcpu='7' cpuset='7'/>
<vcpupin vcpu='8' cpuset='8'/>
<vcpupin vcpu='9' cpuset='9'/>
<vcpupin vcpu='10' cpuset='10'/>
<vcpupin vcpu='11' cpuset='11'/>
<vcpupin vcpu='12' cpuset='12'/>
<vcpupin vcpu='13' cpuset='13'/>
<vcpupin vcpu='14' cpuset='14'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-2.2'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source file='/var/lib/libvirt/images/vm.img'/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<controller type='scsi' index='0'>
<alias name='scsi0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:8e:3d:06'/>
<source network='default'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/8'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/8'>
<source path='/dev/pts/8'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
</memballoon>
</devices>
<seclabel type='none'/>
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.scsi=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.config-wce=off'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
</qemu:commandline>
</domain>
Thank you,
Weiwei Jia
[libvirt] [PATCH for 3.0.0] Revert "perf: Add cache_l1d perf event support"
by Daniel P. Berrange
This reverts commit ae16c95f1bb5591c27676c5de8d383e5612c3568.
The data was calculated incorrectly and the event name needs
to be changed.
---
docs/formatdomain.html.in | 7 -------
docs/schemas/domaincommon.rng | 1 -
include/libvirt/libvirt-domain.h | 11 -----------
src/libvirt-domain.c | 2 --
src/qemu/qemu_driver.c | 1 -
src/util/virperf.c | 6 +-----
src/util/virperf.h | 1 -
tests/genericxml2xmlindata/generic-perf.xml | 1 -
tools/virsh.pod | 5 +----
9 files changed, 2 insertions(+), 33 deletions(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 30cb196..3f7f875 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1937,7 +1937,6 @@
<event name='stalled_cycles_frontend' enabled='no'/>
<event name='stalled_cycles_backend' enabled='no'/>
<event name='ref_cpu_cycles' enabled='no'/>
- <event name='cache_l1d' enabled='no'/>
</perf>
...
</pre>
@@ -2016,12 +2015,6 @@
by applications running on the platform</td>
<td><code>perf.ref_cpu_cycles</code></td>
</tr>
- <tr>
- <td><code>cache_l1d</code></td>
- <td>the count of total level 1 data cache by applications running on
- the platform</td>
- <td><code>perf.cache_l1d</code></td>
- </tr>
</table>
<h3><a name="elementsDevices">Devices</a></h3>
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index be0a609..4d76315 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -433,7 +433,6 @@
<value>stalled_cycles_frontend</value>
<value>stalled_cycles_backend</value>
<value>ref_cpu_cycles</value>
- <value>cache_l1d</value>
</choice>
</attribute>
<attribute name="enabled">
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 1e0e74c..e303140 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -2188,17 +2188,6 @@ void virDomainStatsRecordListFree(virDomainStatsRecordPtr *stats);
*/
# define VIR_PERF_PARAM_REF_CPU_CYCLES "ref_cpu_cycles"
-/**
- * VIR_PERF_PARAM_CACHE_L1D:
- *
- * Macro for typed parameter name that represents cache_l1d
- * perf event which can be used to measure the count of total
- * level 1 data cache by applications running on the platform.
- * It corresponds to the "perf.cache_l1d" field in the
- * *Stats APIs.
- */
-# define VIR_PERF_PARAM_CACHE_L1D "cache_l1d"
-
int virDomainGetPerfEvents(virDomainPtr dom,
virTypedParameterPtr *params,
int *nparams,
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 3023f30..5b3e842 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -11250,8 +11250,6 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
* CPU frequency scaling by applications running
* as unsigned long long. It is produced by the
* ref_cpu_cycles perf event.
- * "perf.cache_l1d" - The count of total level 1 data cache as unsigned
- * long long. It is produced by cache_l1d perf event.
*
* Note that entire stats groups or individual stat fields may be missing from
* the output in case they are not supported by the given hypervisor, are not
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 42f9889..d4422f3 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9877,7 +9877,6 @@ qemuDomainSetPerfEvents(virDomainPtr dom,
VIR_PERF_PARAM_STALLED_CYCLES_FRONTEND, VIR_TYPED_PARAM_BOOLEAN,
VIR_PERF_PARAM_STALLED_CYCLES_BACKEND, VIR_TYPED_PARAM_BOOLEAN,
VIR_PERF_PARAM_REF_CPU_CYCLES, VIR_TYPED_PARAM_BOOLEAN,
- VIR_PERF_PARAM_CACHE_L1D, VIR_TYPED_PARAM_BOOLEAN,
NULL) < 0)
return -1;
diff --git a/src/util/virperf.c b/src/util/virperf.c
index 8554723..f64692b 100644
--- a/src/util/virperf.c
+++ b/src/util/virperf.c
@@ -43,8 +43,7 @@ VIR_ENUM_IMPL(virPerfEvent, VIR_PERF_EVENT_LAST,
"cache_references", "cache_misses",
"branch_instructions", "branch_misses",
"bus_cycles", "stalled_cycles_frontend",
- "stalled_cycles_backend", "ref_cpu_cycles",
- "cache_l1d");
+ "stalled_cycles_backend", "ref_cpu_cycles");
struct virPerfEvent {
int type;
@@ -113,9 +112,6 @@ static struct virPerfEventAttr attrs[] = {
.attrConfig = 0,
# endif
},
- {.type = VIR_PERF_EVENT_CACHE_L1D,
- .attrType = PERF_TYPE_HW_CACHE,
- .attrConfig = PERF_COUNT_HW_CACHE_L1D},
};
typedef struct virPerfEventAttr *virPerfEventAttrPtr;
diff --git a/src/util/virperf.h b/src/util/virperf.h
index 4c562af..1f43c92 100644
--- a/src/util/virperf.h
+++ b/src/util/virperf.h
@@ -47,7 +47,6 @@ typedef enum {
the backend of the instruction
processor pipeline */
VIR_PERF_EVENT_REF_CPU_CYCLES, /* Count of ref cpu cycles */
- VIR_PERF_EVENT_CACHE_L1D, /* Count of level 1 data cache*/
VIR_PERF_EVENT_LAST
} virPerfEventType;
diff --git a/tests/genericxml2xmlindata/generic-perf.xml b/tests/genericxml2xmlindata/generic-perf.xml
index d1418d0..437cd65 100644
--- a/tests/genericxml2xmlindata/generic-perf.xml
+++ b/tests/genericxml2xmlindata/generic-perf.xml
@@ -26,7 +26,6 @@
<event name='stalled_cycles_frontend' enabled='yes'/>
<event name='stalled_cycles_backend' enabled='yes'/>
<event name='ref_cpu_cycles' enabled='yes'/>
- <event name='cache_l1d' enabled='yes'/>
</perf>
<devices>
</devices>
diff --git a/tools/virsh.pod b/tools/virsh.pod
index cfa7a24..0e434c0 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -945,8 +945,7 @@ I<--perf> returns the statistics of all enabled perf events:
"perf.bus_cycles" - the count of bus cycles,
"perf.stalled_cycles_frontend" - the count of stalled frontend cpu cycles,
"perf.stalled_cycles_backend" - the count of stalled backend cpu cycles,
-"perf.ref_cpu_cycles" - the count of ref cpu cycles,
-"perf.cache_l1d" - the count of level 1 data cache
+"perf.ref_cpu_cycles" - the count of ref cpu cycles
See the B<perf> command for more details about each event.
@@ -2311,8 +2310,6 @@ B<Valid perf event names>
ref_cpu_cycles - Provides the count of total cpu cycles
not affected by CPU frequency scaling by
applications running on the platform.
- cache_l1d - Provides the count of total level 1 data cache
- by applications running on the platform.
B<Note>: The statistics can be retrieved using the B<domstats> command using
the I<--perf> flag.
--
2.9.3
[libvirt] [PATCH] qemu: Copy SELinux labels for namespace too
by Michal Privoznik
When creating new /dev/* nodes for qemu, we do chown() and copy ACLs
to create an exact copy of the original /dev. I thought that
copying SELinux labels was not necessary, as SELinux would choose
sane defaults. Surprisingly, it does not, leaving the namespace with
the following labels:
crw-rw-rw-. root root system_u:object_r:tmpfs_t:s0 random
crw-------. root root system_u:object_r:tmpfs_t:s0 rtc0
drwxrwxrwt. root root system_u:object_r:tmpfs_t:s0 shm
crw-rw-rw-. root root system_u:object_r:tmpfs_t:s0 urandom
As a result, the domain is unable to start:
error: internal error: process exited while connecting to monitor: Error in GnuTLS initialization: Failed to acquire random data.
qemu-kvm: cannot initialize crypto: Unable to initialize GNUTLS library: Failed to acquire random data.
The solution is to copy the SELinux labels as well.
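The approach can be sketched outside of libvirt with a rough Python analogue. On Linux, an SELinux label is stored in the `security.selinux` extended attribute, so copying the label amounts to copying that xattr while ignoring the same "unsupported filesystem / no label" errors the patch ignores. `copy_selinux_label` is an illustrative helper, not libvirt code.

```python
import errno
import os

def copy_selinux_label(src: str, dst: str) -> bool:
    """Copy the SELinux label (security.selinux xattr) from src to dst.

    Mirrors the patch's error handling: a missing label (ENODATA) or an
    unsupported filesystem (ENOTSUP/EOPNOTSUPP) is silently tolerated;
    any other error propagates to the caller.
    """
    try:
        label = os.getxattr(src, "security.selinux")
    except OSError as e:
        if e.errno in (errno.ENOTSUP, errno.ENODATA):
            return False  # nothing to copy; not an error
        raise
    try:
        os.setxattr(dst, "security.selinux", label)
    except OSError as e:
        if e.errno in (errno.EOPNOTSUPP, errno.ENOTSUP):
            return False
        raise
    return True
```

On a non-SELinux host the source file simply has no label and the helper returns False; the real patch behaves the same way by skipping setfilecon_raw() when getfilecon_raw() finds nothing.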
Reported-by: Andrea Bolognani <abologna(a)redhat.com>
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_domain.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 1399dee0d..a29866673 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -63,6 +63,9 @@
#if defined(HAVE_SYS_MOUNT_H)
# include <sys/mount.h>
#endif
+#ifdef WITH_SELINUX
+# include <selinux/selinux.h>
+#endif
#include <libxml/xpathInternals.h>
@@ -6958,6 +6961,9 @@ qemuDomainCreateDevice(const char *device,
char *canonDevicePath = NULL;
struct stat sb;
int ret = -1;
+#ifdef WITH_SELINUX
+ char *tcon = NULL;
+#endif
if (virFileResolveAllLinks(device, &canonDevicePath) < 0) {
if (errno == ENOENT && allow_noent) {
@@ -7023,10 +7029,34 @@ qemuDomainCreateDevice(const char *device,
goto cleanup;
}
+#ifdef WITH_SELINUX
+ if (getfilecon_raw(canonDevicePath, &tcon) < 0 &&
+ (errno != ENOTSUP && errno != ENODATA)) {
+ virReportSystemError(errno,
+ _("Unable to get SELinux label on %s"), canonDevicePath);
+ goto cleanup;
+ }
+
+ if (tcon &&
+ setfilecon_raw(devicePath, (VIR_SELINUX_CTX_CONST char *) tcon) < 0) {
+ VIR_WARNINGS_NO_WLOGICALOP_EQUAL_EXPR
+ if (errno != EOPNOTSUPP && errno != ENOTSUP) {
+ VIR_WARNINGS_RESET
+ virReportSystemError(errno,
+ _("Unable to set SELinux label on %s"),
+ devicePath);
+ goto cleanup;
+ }
+ }
+#endif
+
ret = 0;
cleanup:
VIR_FREE(canonDevicePath);
VIR_FREE(devicePath);
+#ifdef WITH_SELINUX
+ freecon(tcon);
+#endif
return ret;
}
@@ -7472,6 +7502,9 @@ struct qemuDomainAttachDeviceMknodData {
const char *file;
struct stat sb;
void *acl;
+#ifdef WITH_SELINUX
+ char *tcon;
+#endif
};
@@ -7515,6 +7548,19 @@ qemuDomainAttachDeviceMknodHelper(pid_t pid ATTRIBUTE_UNUSED,
goto cleanup;
}
+#ifdef WITH_SELINUX
+ if (setfilecon_raw(data->file, (VIR_SELINUX_CTX_CONST char *) data->tcon) < 0) {
+ VIR_WARNINGS_NO_WLOGICALOP_EQUAL_EXPR
+ if (errno != EOPNOTSUPP && errno != ENOTSUP) {
+ VIR_WARNINGS_RESET
+ virReportSystemError(errno,
+ _("Unable to set SELinux label on %s"),
+ data->file);
+ goto cleanup;
+ }
+ }
+#endif
+
switch ((virDomainDeviceType) data->devDef->type) {
case VIR_DOMAIN_DEVICE_DISK: {
virDomainDiskDefPtr def = data->devDef->data.disk;
@@ -7571,6 +7617,9 @@ qemuDomainAttachDeviceMknodHelper(pid_t pid ATTRIBUTE_UNUSED,
cleanup:
if (ret < 0 && delDevice)
unlink(data->file);
+#ifdef WITH_SELINUX
+ freecon(data->tcon);
+#endif
virFileFreeACLs(&data->acl);
return ret;
}
@@ -7605,6 +7654,15 @@ qemuDomainAttachDeviceMknod(virQEMUDriverPtr driver,
return ret;
}
+#ifdef WITH_SELINUX
+ if (getfilecon_raw(file, &data.tcon) < 0 &&
+ (errno != ENOTSUP && errno != ENODATA)) {
+ virReportSystemError(errno,
+ _("Unable to get SELinux label on %s"), file);
+ goto cleanup;
+ }
+#endif
+
if (virSecurityManagerPreFork(driver->securityManager) < 0)
goto cleanup;
@@ -7619,6 +7677,9 @@ qemuDomainAttachDeviceMknod(virQEMUDriverPtr driver,
ret = 0;
cleanup:
+#ifdef WITH_SELINUX
+ freecon(data.tcon);
+#endif
virFileFreeACLs(&data.acl);
return 0;
}
--
2.11.0
[libvirt] [PATCH 0/2] Add perf event support for Level 1 Data Cache
by Nitesh Konkar
Compute the config value for the cache_l1d event (renamed to
cache_l1dra) correctly, and add support for another perf
event, cache_l1drm.
Nitesh Konkar (2):
perf: Compute cache_l1d config value correctly
perf: add cache_l1drm perf event support
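For reference, perf_event_open(2) encodes a PERF_TYPE_HW_CACHE event's attr.config as `cache_id | (op_id << 8) | (result_id << 16)`. A small sketch of the two configs this series deals with (constants copied from linux/perf_event.h; how the series itself names them in virperf.c is not shown here):

```python
# Hardware cache event constants from linux/perf_event.h (stable ABI).
PERF_COUNT_HW_CACHE_L1D = 0
PERF_COUNT_HW_CACHE_OP_READ = 0
PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0
PERF_COUNT_HW_CACHE_RESULT_MISS = 1

def cache_config(cache_id: int, op_id: int, result_id: int) -> int:
    """Encode attr.config for a PERF_TYPE_HW_CACHE event,
    per the formula documented in perf_event_open(2)."""
    return cache_id | (op_id << 8) | (result_id << 16)

# cache_l1dra: L1 data cache read accesses
l1d_read_access = cache_config(PERF_COUNT_HW_CACHE_L1D,
                               PERF_COUNT_HW_CACHE_OP_READ,
                               PERF_COUNT_HW_CACHE_RESULT_ACCESS)

# cache_l1drm: L1 data cache read misses
l1d_read_miss = cache_config(PERF_COUNT_HW_CACHE_L1D,
                             PERF_COUNT_HW_CACHE_OP_READ,
                             PERF_COUNT_HW_CACHE_RESULT_MISS)
```

Note that the reverted cache_l1d event used the bare cache id (0) as its config, which under this encoding happens to mean "L1D read accesses" — consistent with the rename to cache_l1dra here.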
docs/formatdomain.html.in | 19 +++++++++++++------
docs/news.xml | 5 +++--
docs/schemas/domaincommon.rng | 3 ++-
include/libvirt/libvirt-domain.h | 23 +++++++++++++++++------
src/libvirt-domain.c | 8 ++++++--
src/qemu/qemu_driver.c | 3 ++-
src/util/virperf.c | 13 ++++++++++---
src/util/virperf.h | 3 ++-
tests/genericxml2xmlindata/generic-perf.xml | 3 ++-
tools/virsh.pod | 9 ++++++---
10 files changed, 63 insertions(+), 26 deletions(-)
--
1.9.3
[libvirt] [PATCH] news: Add support for guest CPU configuration on s390
by Jiri Denemark
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
docs/news.xml | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 50c3b3ea2..a076836ed 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -73,6 +73,11 @@
volumes when building a new logical pool on target volume(s).
</description>
</change>
+ <change>
+ <summary>
+ qemu: Add support for guest CPU configuration on s390(x)
+ </summary>
+ </change>
</section>
<section title="Improvements">
<change>
--
2.11.0
[libvirt] [PATCH] qemu: document snapshot-related fixes in the release notes
by Andrea Bolognani
---
docs/news.xml | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 50c3b3e..043d1fe 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -225,6 +225,15 @@
for x86_64 HVM domains.
</description>
</change>
+ <change>
+ <summary>
+ qemu: snapshot-related fixes
+ </summary>
+ <description>
+ Properly handle image locking and stop looking for a compression
+ program when the memory image format is "raw".
+ </description>
+ </change>
</section>
</release>
<release version="v2.5.0" date="2016-12-04">
--
2.7.4
[libvirt] [PATCH] docs: add entry for aggregation of pcie-root-ports to news.xml
by Laine Stump
---
docs/news.xml | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 50c3b3e..18006e8 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -140,6 +140,19 @@
the storage pool XML.
</description>
</change>
+ <change>
+ <summary>
+ qemu: aggregate pcie-root-ports onto multiple functions of a slot
+ </summary>
+ <description>
+ When pcie-root-ports are added to pcie-root in order to
+ provide a place to connect PCI Express endpoint devices,
+ libvirt now aggregates multiple root-ports together onto the
+ same slot (up to 8 per slot) in order to conserve
+ slots. Using this method, it's possible to connect more than
+ 200 endpoint devices to a guest that uses PCIe without
+ requiring setup of any PCIe switches.
+ </description>
</section>
<section title="Bug fixes">
<change>
--
2.7.4
[libvirt] [PATCH 0/3] More adjustments for recent storage probe logic
by John Ferlan
Adjustments based on recent activity...
The !writelabel path of patch 3 is probably a moot point, since the
patches that checked for valid data on the device(s) at FS and
Logical pool startup were reverted.
John Ferlan (3):
storage: Alter logic when both BLKID and PARTED unavailable
storage: Clean up return value checking
storage: Alter error message in probe/empty checks
src/storage/storage_backend.c | 44 ++++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 19 deletions(-)
--
2.7.4