[libvirt] [RFC] libvirt vGPU QEMU integration
by Neo Jia
Hi libvirt experts,
I am starting this email thread to discuss a potential solution / proposal for
integrating vGPU support into libvirt for QEMU.
Some quick background: NVIDIA is implementing a VFIO-based mediated device
framework to allow people to virtualize their devices without SR-IOV, for
example NVIDIA vGPU and Intel KVMGT. Within this framework, we are reusing the
VFIO API to handle memory and interrupts just as QEMU does today with a
passthrough device.
The difference here is that we are introducing a set of new sysfs files for
virtual device discovery and life-cycle management, owing to the devices'
virtual nature.
Here is a summary of those sysfs files, when they are created, and how they
should be used:
1. Discover mediated device
As part of the physical device initialization process, the vendor driver
registers its physical devices with the mediated device framework; these are
then used to create virtual devices (mediated devices, aka mdevs).
The sysfs file "mdev_supported_types" then becomes available under the
physical device's sysfs path. It indicates the supported mdev types and
configurations for this particular physical device, and its content may change
dynamically based on the system's current configuration, so libvirt needs to
query this file each time before creating an mdev.
Note: different vendors might have their own vendor-specific configuration
sysfs files as well, if they don't have pre-defined types.
For example, with an NVIDIA Tesla M60 registered at 86:00.0, querying
"mdev_supported_types" on an idle system gives:
cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types
# vgpu_type_id, vgpu_type, max_instance, num_heads, frl_config, framebuffer, max_resolution
11, "GRID M60-0B", 16, 2, 45, 512M,  2560x1600
12, "GRID M60-0Q", 16, 2, 60, 512M,  2560x1600
13, "GRID M60-1B",  8, 2, 45, 1024M, 2560x1600
14, "GRID M60-1Q",  8, 2, 60, 1024M, 2560x1600
15, "GRID M60-2B",  4, 2, 45, 2048M, 2560x1600
16, "GRID M60-2Q",  4, 4, 60, 2048M, 2560x1600
17, "GRID M60-4Q",  2, 4, 60, 4096M, 3840x2160
18, "GRID M60-8Q",  1, 4, 60, 8192M, 3840x2160
2. Create/destroy mediated device
Two sysfs files are available under the physical device's sysfs path:
mdev_create and mdev_destroy.
The syntax for creating an mdev is:
echo "$mdev_UUID:vendor_specific_argument_list" >
/sys/bus/pci/devices/.../mdev_create
The syntax for destroying an mdev is:
echo "$mdev_UUID:vendor_specific_argument_list" >
/sys/bus/pci/devices/.../mdev_destroy
The $mdev_UUID is a unique identifier for the mdev device to be created; it
must be unique per system.
For NVIDIA vGPU, we require a vGPU type identifier (shown as vgpu_type_id in
the above Tesla M60 output) and a VM UUID to be passed as the
"vendor_specific_argument_list".
If there is no vendor specific arguments required, either "$mdev_UUID" or
"$mdev_UUID:" will be acceptable as input syntax for the above two commands.
To create an M60-4Q device (vgpu_type_id 17 in the table above), libvirt
needs to do:
echo "$mdev_UUID:vgpu_type_id=17,vm_uuid=$VM_UUID" >
/sys/bus/pci/devices/0000\:86\:00.0/mdev_create
Then a virtual device shows up at:
/sys/bus/mdev/devices/$mdev_UUID/
For NVIDIA, when a VM needs multiple virtual devices, they all have to be
created up front, before bringing any of them online.
Regarding error reporting and detection: on failure, a write() to these sysfs
files returns an error code, and writing to them from a shell prints the
string corresponding to that error code.
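A minimal sketch of surfacing such an error from a shell script (the UUID
generation and the type id are illustrative):

MDEV_UUID=$(uuidgen)
if ! echo "$MDEV_UUID:vgpu_type_id=17,vm_uuid=$VM_UUID" > \
        /sys/bus/pci/devices/0000:86:00.0/mdev_create; then
    echo "mdev_create failed for $MDEV_UUID" >&2
    exit 1
fi
ls /sys/bus/mdev/devices/"$MDEV_UUID"/    # appears here on success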
3. Start/stop mediated device
Under the virtual device sysfs, you will see a new "online" sysfs file.
you can do cat /sys/bus/mdev/devices/$mdev_UUID/online to get the current status
of this virtual device (0 or 1), and to start a virtual device or stop a virtual
device you can do:
echo "1|0" > /sys/bus/mdev/devices/$mdev_UUID/online
libvirt needs to query the current state before changing state.
Note: if you have multiple devices, you need to write to the "online" file
individually.
For NVIDIA, if there are multiple mdevs per VM, libvirt needs to bring all of
them "online" before starting QEMU.
4. Launch QEMU/VM
Pass the mdev sysfs path to QEMU as vfio-pci device:
-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$mdev_UUID,id=vgpu0
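A fuller invocation might look like this; everything besides the vfio-pci
stanza is an assumed, minimal guest configuration:

qemu-system-x86_64 -name vgpu-guest -uuid "$VM_UUID" -m 4096 -smp 2 \
    -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$MDEV_UUID,id=vgpu0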
5. Shutdown sequence
libvirt needs to shut down QEMU, bring the virtual device offline, and then
destroy the virtual device.
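As a sketch, with $MDEV_UUID as created above:

# 1. shut down the guest so QEMU exits, then:
echo 0 > /sys/bus/mdev/devices/$MDEV_UUID/online                      # offline
echo "$MDEV_UUID" > /sys/bus/pci/devices/0000:86:00.0/mdev_destroy    # destroy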
6. VM Reset
No change or new requirement for libvirt, as this is handled via the VFIO
reset API, and the QEMU process keeps running as before.
7. Hot-plug
Hot-plug support is optional for vendors.
Creating a virtual device for hot-plug uses the same syntax as above.
For hot-unplug, after executing QEMU monitor "device del" command, libvirt needs
to write to "destroy" sysfs to complete hot-unplug process.
Since hot-plug is optional, then mdev_create or mdev_destroy operations may
return an error if it is not supported.
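An illustrative hot-unplug sequence (the domain name and device id are
assumptions):

virsh qemu-monitor-command --hmp vgpu-guest 'device_del vgpu0'
echo "$MDEV_UUID" > /sys/bus/pci/devices/0000:86:00.0/mdev_destroy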
Thanks,
Neo
[libvirt] [PATCH v1] qemu-migration: Disallow migration of read only disk
by Corey S. McQuay
From: "Corey S. McQuay" <csmcquay(a)linux.vnet.ibm.com>
Currently libvirt allows attempts to migrate read-only disks. QEMU cannot handle this, as read-only
disks cannot be written to on the destination system. The end result is a cryptic error message
and a failed migration.
This patch causes the migration to fail earlier and provides a meaningful error message stating that
migrating read-only disks is not supported.
Signed-off-by: Corey S. McQuay <csmcquay(a)linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne(a)linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy(a)linux.vnet.ibm.com>
---
src/qemu/qemu_migration.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d018add..7d0a78f 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2387,6 +2387,28 @@ qemuMigrationIsSafe(virDomainDefPtr def,
return true;
}
+static bool
+qemuMigrationAreAllDisksRW(virDomainDefPtr def,
+ size_t nmigrate_disks,
+ const char **migrate_disks)
+{
+ size_t i;
+
+ for (i = 0; i < def->ndisks; i++) {
+ virDomainDiskDefPtr disk = def->disks[i];
+
+ if (qemuMigrateDisk(disk, nmigrate_disks, migrate_disks) &&
+ disk->src->readonly) {
+ virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+ _("Cannot migrate read-only disk %s"),
+ disk->dst);
+ return false;
+ }
+ }
+
+ return true;
+}
+
/** qemuMigrationSetOffline
* Pause domain for non-live migration.
*/
@@ -3132,6 +3154,9 @@ qemuMigrationBeginPhase(virQEMUDriverPtr driver,
!qemuMigrationIsSafe(vm->def, nmigrate_disks, migrate_disks))
goto cleanup;
+ if (!qemuMigrationAreAllDisksRW(vm->def, nmigrate_disks, migrate_disks))
+ goto cleanup;
+
if (flags & VIR_MIGRATE_POSTCOPY &&
(!(flags & VIR_MIGRATE_LIVE) ||
flags & VIR_MIGRATE_PAUSED)) {
--
1.8.3.1
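For illustration, with this patch an attempted migration of a guest with a
read-only disk (say vdb) should now fail up front, roughly like this (domain
name and URI are hypothetical):

$ virsh migrate --live guest1 qemu+ssh://dst.example.com/system
error: Operation not supported: Cannot migrate read-only disk vdb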
[libvirt] [PATCH 1/2] vz: implicitly support additional migration flags
by Pavel Glushchak
* Added VIR_MIGRATE_LIVE, VIR_MIGRATE_UNDEFINE_SOURCE and
VIR_MIGRATE_PERSIST_DEST to supported migration flags
Signed-off-by: Pavel Glushchak <pglushchak(a)virtuozzo.com>
---
src/vz/vz_driver.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index b34fe33..7a12632 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -2887,8 +2887,11 @@ vzEatCookie(const char *cookiein, int cookieinlen, unsigned int flags)
goto cleanup;
}
-#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED | \
- VIR_MIGRATE_PEER2PEER)
+#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED | \
+ VIR_MIGRATE_PEER2PEER | \
+ VIR_MIGRATE_LIVE | \
+ VIR_MIGRATE_UNDEFINE_SOURCE | \
+ VIR_MIGRATE_PERSIST_DEST)
#define VZ_MIGRATION_PARAMETERS \
VIR_MIGRATE_PARAM_DEST_XML, VIR_TYPED_PARAM_STRING, \
--
2.7.4
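For illustration, the newly allowed flags map to a virsh invocation along
these lines (domain name and destination are hypothetical):

$ virsh -c vz:///system migrate --live --p2p --persistent --undefinesource \
      mydom vz+ssh://dst.example.com/system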
[libvirt] [PATCH] Bypass caching in saving VM memory upon external memory snapshot.
by fuweiwei
From: Fuweiwei <fuweiwei2(a)huawei.com>
Currently on the qemu-kvm platform, the process of taking an external memory
snapshot is based on the "migration-to-file" scheme. It uses the system cache
to speed up dumping. However, external disk snapshots are taken afterwards,
and they must wait for the completion of flushing the dirty pages to the
snapshot file: in virFileWrapperFdClose(), after qemuMigrationToFile(), we
wait until the libvirt_iohelper thread finishes fdatasync and exits. During
this time the VM is paused (it is suspended from the last iteration of
migration-to-file until the disk snapshots complete).
Assuming 4GB of dirty memory is saved at 200MB/s of fdatasync throughput, the
VM pauses for up to 20s, which is unfriendly to guests.
So I propose that it may be better to bypass caching for external memory
snapshots, via the VIR_DOMAIN_SAVE_BYPASS_CACHE flag. As a result, it may
avoid the long fdatasync in the libvirt_iohelper thread and achieve a
seamless VM suspend.
Signed-off-by: Fuweiwei <fuweiwei2(a)huawei.com>
---
src/qemu/qemu_driver.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2089359..f954c23 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14117,7 +14117,8 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
goto cleanup;
if ((ret = qemuDomainSaveMemory(driver, vm, snap->def->file,
- xml, compressed, resume, 0,
+ xml, compressed, resume,
+ VIR_DOMAIN_SAVE_BYPASS_CACHE,
QEMU_ASYNC_JOB_SNAPSHOT)) < 0)
goto cleanup;
--
1.8.3.1
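The pause estimate in the commit message is straightforward arithmetic:

# 4096 MB of dirty memory / 200 MB/s of fdatasync throughput:
echo $((4096 / 200))    # => 20 seconds of VM pause, roughly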
[libvirt] [PATCH 0/7] fspool: backend directory
by Olga Krishtal
Hi everyone, we would like to propose a first implementation of fspools
with a directory backend.
Filesystem pools are a facility to manage filesystem resources, similar to
how storage pools manage volume resources. Furthermore, the new API follows
the storage API closely where it makes sense. Upload/download operations are
not defined yet, as it is not obvious how to implement them properly. I guess
we can use some kind of tar to make a stream from a filesystem. Please share
your thoughts on this particular issue.
The patchset provides a 'dir' backend, which simply exposes subdirectories of
some directory on the host filesystem. The virsh commands are provided too,
so it is ready to play with; just replace 'pool' with 'fspool' and 'volume'
with 'item' in the XML descriptions and virsh commands.
Example usage:
Define:
virsh -c qemu:///system fspool-define-as fs_pool_name dir --target /path/on/host
Build
virsh -c qemu:///system fspool-build fs_pool_name
Start
virsh -c qemu:///system fspool-start fs_pool_name
Look inside
virsh -c qemu:///system fspool-list (--all) fspool_name
An fspool called POOL uses /fs_driver on the host filesystem to hold items.
virsh -c qemu:///system fspool-dumpxml POOL
<fspool type='dir'>
  <name>POOL</name>
  <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
  <capacity unit='bytes'>733722615808</capacity>
  <allocation unit='bytes'>1331486720</allocation>
  <available unit='bytes'>534810800128</available>
  <source>
  </source>
  <target>
    <path>/fs_driver</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</fspool>
virsh -c qemu:///system fspool-info POOL
Name: POOL
UUID: c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
State: running
Persistent: yes
Autostart: no autostart
Capacity: 683.33 GiB
Allocation: 1.24 GiB
Available: 498.08 GiB
virsh -c qemu+unix:///system item-list POOL
Name Path
------------------------------------------------------------------------------
item1 /fs_driver/item1
item10 /fs_driver/item10
item11 /fs_driver/item11
item12 /fs_driver/item12
item15 /fs_driver/item15
An fspool of directory type is a directory on the host filesystem that holds
items (subdirectories).
Example usage for items:
virsh -c vz+unix:///system item-create-as POOL item1 1g - create item
virsh -c qemu+unix:///system item-dumpxml item1 POOL
<fsitem>
  <name>item1</name>
  <key>/fs_driver/item1</key>
  <source>
    <fspool>POOL</fspool>
  </source>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <target>
    <format type='dir'/>
  </target>
</fsitem>
virsh -c qemu+unix:///system item-info item1 POOL
Name: item1
Type: dir
Capacity: 683.33 GiB
Allocation: 634.87 MiB
Autostart: no autostart
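Teardown is not shown above; assuming the API keeps mirroring the storage
pool API, it would presumably look like this (command names are guesses by
analogy, not confirmed by this patchset):

virsh -c qemu:///system item-delete item1 POOL
virsh -c qemu:///system fspool-destroy POOL
virsh -c qemu:///system fspool-undefine POOL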
Olga Krishtal (7):
fspool: introduce filesystem pools API
fspool: usual driver based implementation of filesystem pools API
fspools: configuration and internal representation
fspools: acl support for filesystem pools
remote: filesystem pools driver implementation
fspool: default implementation of filesystem pools
virsh: filesystem pools commands
configure.ac | 33 +
daemon/Makefile.am | 4 +
daemon/libvirtd.c | 10 +
daemon/remote.c | 35 +
include/libvirt/libvirt-fs.h | 273 +++++
include/libvirt/libvirt.h | 1 +
include/libvirt/virterror.h | 8 +
po/POTFILES.in | 6 +
src/Makefile.am | 46 +
src/access/viraccessdriver.h | 12 +
src/access/viraccessdrivernop.c | 19 +
src/access/viraccessdriverpolkit.c | 47 +
src/access/viraccessdriverstack.c | 49 +
src/access/viraccessmanager.c | 31 +
src/access/viraccessmanager.h | 11 +
src/access/viraccessperm.c | 15 +-
src/access/viraccessperm.h | 124 +++
src/conf/fs_conf.c | 1624 +++++++++++++++++++++++++++
src/conf/fs_conf.h | 310 ++++++
src/datatypes.c | 154 +++
src/datatypes.h | 94 ++
src/driver-fs.h | 210 ++++
src/driver.h | 3 +
src/fs/fs_backend.h | 85 ++
src/fs/fs_backend_dir.c | 334 ++++++
src/fs/fs_backend_dir.h | 8 +
src/fs/fs_driver.c | 2164 ++++++++++++++++++++++++++++++++++++
src/fs/fs_driver.h | 10 +
src/libvirt-fs.c | 1715 ++++++++++++++++++++++++++++
src/libvirt.c | 28 +
src/libvirt_private.syms | 53 +
src/libvirt_public.syms | 46 +
src/remote/remote_driver.c | 72 +-
src/remote/remote_protocol.x | 522 ++++++++-
src/rpc/gendispatch.pl | 19 +-
src/util/virerror.c | 37 +
tools/Makefile.am | 4 +
tools/virsh-fspool.c | 1728 ++++++++++++++++++++++++++++
tools/virsh-fspool.h | 36 +
tools/virsh-item.c | 1274 +++++++++++++++++++++
tools/virsh-item.h | 37 +
tools/virsh.c | 4 +
tools/virsh.h | 9 +
43 files changed, 11294 insertions(+), 10 deletions(-)
create mode 100644 include/libvirt/libvirt-fs.h
create mode 100644 src/conf/fs_conf.c
create mode 100644 src/conf/fs_conf.h
create mode 100644 src/driver-fs.h
create mode 100644 src/fs/fs_backend.h
create mode 100644 src/fs/fs_backend_dir.c
create mode 100644 src/fs/fs_backend_dir.h
create mode 100644 src/fs/fs_driver.c
create mode 100644 src/fs/fs_driver.h
create mode 100644 src/libvirt-fs.c
create mode 100644 tools/virsh-fspool.c
create mode 100644 tools/virsh-fspool.h
create mode 100644 tools/virsh-item.c
create mode 100644 tools/virsh-item.h
--
1.8.3.1
[libvirt] [PATCH] qemu_migration: remove dead code
by JieWang
Signed-off-by: JieWang <wangjie88(a)huawei.com>
---
src/qemu/qemu_migration.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 6a683f7..759e15a 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1602,7 +1602,6 @@ qemuMigrationPrecreateDisk(virConnectPtr conn,
_("cannot precreate storage for disk type '%s'"),
virStorageTypeToString(disk->src->type));
goto cleanup;
- break;
}
if ((vol = virStorageVolLookupByName(pool, volName))) {
--
1.9.5.msysgit.1
[libvirt] [PATCH v3 00/24] qemu: Add support for new vcpu hotplug and unplug
by Peter Krempa
v3 fixes issues pointed out in the reviews of v2:
- the more-than-10-vcpus problem (patch 8, and new patch 10 adding tests)
- a few typos and other problems
and issues found while testing:
- the ordering function for qsort being broken (patch 21)
You can fetch the changes at:
git fetch git://pipo.sk/pipo/libvirt.git vcpu-unplug-3
Peter Krempa (24):
qemu: monitor: Return structures from qemuMonitorGetCPUInfo
qemu: monitor: Return struct from qemuMonitor(Text|Json)QueryCPUs
qemu: caps: Add capability for query-hotpluggable-cpus command
qemu: Forbid config when topology based cpu count doesn't match the
config
qemu: capabilities: Extract availability of new cpu hotplug for
machine types
qemu: monitor: Extract QOM path from query-cpus reply
qemu: monitor: Add support for calling query-hotpluggable-cpus
qemu: monitor: Add algorithm for combining query-(hotpluggable-)-cpus
data
tests: Add test infrastructure for qemuMonitorGetCPUInfo
tests: cpu-hotplug: Add data for x86 hotplug with 11+ vcpus
tests: cpu-hotplug: Add data for ppc64 platform including hotplug
tests: cpu-hotplug: Add data for ppc64 out-of-order hotplug
tests: cpu-hotplug: Add data for ppc64 without threads enabled
qemu: domain: Extract cpu-hotplug related data
qemu: domain: Prepare for VCPUs vanishing while libvirt is not running
util: Extract and rename qemuDomainDelCgroupForThread to
virCgroupDelThread
conf: Add XML for individual vCPU hotplug
qemu: migration: Prepare for non-contiguous vcpu configurations
qemu: command: Add helper to convert vcpu definition to JSON props
qemu: process: Copy final vcpu order information into the vcpu
definition
qemu: command: Add support for sparse vcpu topologies
qemu: Use modern vcpu hotplug approach if possible
qemu: hotplug: Allow marking unplugged devices by alias
qemu: hotplug: Add support for VCPU unplug
docs/formatdomain.html.in | 45 +++
docs/schemas/domaincommon.rng | 25 ++
src/conf/domain_conf.c | 154 +++++++++-
src/conf/domain_conf.h | 6 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 31 +-
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_command.c | 50 +++-
src/qemu/qemu_command.h | 3 +
src/qemu/qemu_domain.c | 312 +++++++++++++++++----
src/qemu/qemu_domain.h | 19 +-
src/qemu/qemu_driver.c | 247 +++++++++-------
src/qemu/qemu_hotplug.c | 124 +++++++-
src/qemu/qemu_hotplug.h | 7 +
src/qemu/qemu_migration.c | 16 +-
src/qemu/qemu_monitor.c | 268 +++++++++++++++++-
src/qemu/qemu_monitor.h | 58 +++-
src/qemu/qemu_monitor_json.c | 266 +++++++++++++++---
src/qemu/qemu_monitor_json.h | 8 +-
src/qemu/qemu_monitor_text.c | 41 +--
src/qemu/qemu_monitor_text.h | 3 +-
src/qemu/qemu_process.c | 187 +++++++++++-
src/util/vircgroup.c | 20 ++
src/util/vircgroup.h | 4 +
.../generic-vcpus-individual.xml | 23 ++
tests/genericxml2xmltest.c | 2 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 55 ++--
.../qemumonitorjson-cpuinfo-ppc64-basic-cpus.json | 77 +++++
...emumonitorjson-cpuinfo-ppc64-basic-hotplug.json | 27 ++
.../qemumonitorjson-cpuinfo-ppc64-basic.data | 40 +++
...mumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json | 149 ++++++++++
...onitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json | 28 ++
.../qemumonitorjson-cpuinfo-ppc64-hotplug-1.data | 51 ++++
...mumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json | 221 +++++++++++++++
...onitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json | 29 ++
.../qemumonitorjson-cpuinfo-ppc64-hotplug-2.data | 62 ++++
...mumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json | 221 +++++++++++++++
...onitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json | 29 ++
.../qemumonitorjson-cpuinfo-ppc64-hotplug-4.data | 62 ++++
...umonitorjson-cpuinfo-ppc64-no-threads-cpus.json | 77 +++++
...nitorjson-cpuinfo-ppc64-no-threads-hotplug.json | 125 +++++++++
.../qemumonitorjson-cpuinfo-ppc64-no-threads.data | 72 +++++
...nitorjson-cpuinfo-x86-basic-pluggable-cpus.json | 50 ++++
...orjson-cpuinfo-x86-basic-pluggable-hotplug.json | 82 ++++++
...emumonitorjson-cpuinfo-x86-basic-pluggable.data | 39 +++
.../qemumonitorjson-cpuinfo-x86-full-cpus.json | 104 +++++++
.../qemumonitorjson-cpuinfo-x86-full-hotplug.json | 115 ++++++++
.../qemumonitorjson-cpuinfo-x86-full.data | 76 +++++
tests/qemumonitorjsontest.c | 184 +++++++++++-
.../qemuxml2argv-cpu-hotplug-startup.args | 20 ++
.../qemuxml2argv-cpu-hotplug-startup.xml | 29 ++
tests/qemuxml2argvtest.c | 2 +
tests/testutils.c | 4 +-
53 files changed, 3677 insertions(+), 276 deletions(-)
create mode 100644 tests/genericxml2xmlindata/generic-vcpus-individual.xml
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-basic.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-1.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-2.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-hotplug-4.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-ppc64-no-threads.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-basic-pluggable.data
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-full-cpus.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-full-hotplug.json
create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-cpuinfo-x86-full.data
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-hotplug-startup.xml
--
2.8.2
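Patch 17 ("conf: Add XML for individual vCPU hotplug") implies per-vCPU state
in the domain XML. A purely illustrative snippet (attribute names are my
guess, not verified against the series):

<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='no' hotpluggable='yes'/>
</vcpus>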
[libvirt] [PATCH] storage_backend_rbd: fix typos
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)gmail.com>
s/failed/failed to
Signed-off-by: Chen Hanxiao <chenhanxiao(a)gmail.com>
---
src/storage/storage_backend_rbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/storage/storage_backend_rbd.c b/src/storage/storage_backend_rbd.c
index 9665fbc..4dd4b24 100644
--- a/src/storage/storage_backend_rbd.c
+++ b/src/storage/storage_backend_rbd.c
@@ -894,7 +894,7 @@ virStorageBackendRBDSnapshotProtect(rbd_image_t image,
VIR_DEBUG("Querying if RBD snapshot %s@%s is protected", imgname, snapname);
if ((r = rbd_snap_is_protected(image, snapname, &protected)) < 0) {
- virReportSystemError(-r, _("failed verify if RBD snapshot %s@%s "
+ virReportSystemError(-r, _("failed to verify if RBD snapshot %s@%s "
"is protected"), imgname, snapname);
goto cleanup;
}
@@ -904,7 +904,7 @@ virStorageBackendRBDSnapshotProtect(rbd_image_t image,
imgname, snapname);
if ((r = rbd_snap_protect(image, snapname)) < 0) {
- virReportSystemError(-r, _("failed protect RBD snapshot %s@%s"),
+ virReportSystemError(-r, _("failed to protect RBD snapshot %s@%s"),
imgname, snapname);
goto cleanup;
}
--
1.8.3.1
[libvirt] Plans for next release
by Daniel Veillard
So if we want a 2.2.0 release around Sep 1st, I suggest entering freeze this
Friday, probably at the end of the day (European time), then pushing rc2 next
Tuesday for a release on Thursday.
Hope this works for everybody,
Daniel
--
Daniel Veillard | Open Source and Standards, Red Hat
veillard(a)redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH 0/3] virsh: Option completers and small improvements/fixes for autocomplete
by Nishith Shah
This series introduces option completers and adds some minor improvements
and fixes (not bugs per se, just better/saner behavior) in vshReadlineParse.
The first patch introduces the use of option completers to auto-complete
arguments for a particular option.
The second and third patches provide small improvements, such as completing
options of type VSH_OT_ARGV or VSH_OT_DATA, and completing further options
once a previous option that requires an argument has been given one.
Nishith Shah (3):
virsh: Introduce usage of option completers to auto-complete arguments
virsh: Allow data or argument options to be completed as well
virsh: Complete multiple options when any one option requires data
tools/vsh.c | 75 ++++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 49 insertions(+), 26 deletions(-)
--
2.1.4
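For illustration, the end goal is tab completion of option arguments in the
interactive shell, e.g. (hypothetical session, hypothetical domain names):

virsh # start --domain <TAB>
fedora24   rhel7-test   win10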