Re: [libvirt] question about rdma migration
by Michael R. Hines
Hi Roy,
On 02/09/2016 03:57 AM, Roy Shterman wrote:
> Hi,
>
> I tried to understand the rdma-migration code in qemu and I have two
> questions about it:
>
> 1. I'm working with qemu-kvm using libvirt and I'm getting
>
> MEMLOCK max locked-in-memory address space 65536 65536 bytes
>
> in the qemu process, so I don't understand how you can use rdma-pin-all
> with such a low MEMLOCK.
>
> I found a solution in libvirt to lock all VM memory in advance and to
> enlarge MEMLOCK.
> It uses <memoryBacking> locking and the memory tuning hard_limit of VM
> memory, but I couldn't find any usage of this in the rdma-migration code.
>
You're absolutely right, the RDMA migration code itself doesn't set this
lock limit explicitly, because there are system-wide restrictions in
AppArmor, /etc/security, and SELinux that prevent applications from
arbitrarily raising their maximum memory lock limits.
The other problem is cgroups: if someone sets a cgroup control for
maximum memory and forgets about the mlock() limits, then
there will be a conflict.
So, libvirt must have a policy to deal with all of these possibilities,
not just handle a special case for RDMA migration.
The only "simple" way (without patching the problems above) to apply
a higher lock limit to QEMU is to set the ulimit for libvirt
(or for QEMU, if starting QEMU manually) in your environment or on the
command line with $ ulimit before attempting the migration;
the RDMA subsystem will then be able to lock the memory successfully.
The other option is to use /etc/security/limits.conf and set the limit
for the specific user libvirt runs as, making sure your libvirt/QEMU
processes are not running as root.
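For example, something like this (the user name and limit values here are
placeholders, not a recommendation):
In /etc/security/limits.conf, assuming libvirtd/QEMU run as user 'qemu':
    qemu    soft    memlock    unlimited
    qemu    hard    memlock    unlimited
Or, in the shell that will start libvirtd/QEMU, before the migration:
    $ ulimit -l unlimited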
QEMU itself also has an "mlock" option built into the command line, but
it suffers from the same problem: you currently have to find
a way to increase the limit before using the option.
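For reference, the libvirt-side approach you found (locking all guest memory
and raising the memory hard limit) looks roughly like this in the domain XML;
the hard_limit value is only illustrative:
    <memoryBacking>
      <locked/>
    </memoryBacking>
    <memtune>
      <hard_limit unit='KiB'>9437184</hard_limit>
    </memtune>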
> 2. Do you have any comparison of IOPS and bandwidth between TCP
> migration and rdma migration?
>
Yes, lots of comparisons.
http://wiki.qemu.org/Features/RDMALiveMigration
http://www.canturkisci.com/ETC/papers/IBMJRD2011/preprint.pdf
> Regards,
> Roy
>
>
8 years, 1 month
[libvirt] [PATCH 0/2] vz: add serial number to disk devices
by Nikolay Shirokovskiy
Only the first patch is really on the subject. The second one is a
bugfix that is included mainly because it touches the same place
(and it is nice to have for testing the first patch, too...)
Nikolay Shirokovskiy (2):
vz: add serial number to disk devices
vz: set something in disk driver name
src/vz/vz_sdk.c | 16 ++++++++++++++++
src/vz/vz_utils.c | 6 +++---
2 files changed, 19 insertions(+), 3 deletions(-)
--
1.8.3.1
8 years, 1 month
[libvirt] Qemu: create empty cdrom
by Gromak Yuriy
Hello.
QEMU is the latest from the master branch.
Trying to start a domain which is connected to a blank cdrom:
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='sdb' bus='scsi'/>
  <readonly/>
  <address type='drive' controller='0' target='1' bus='0' unit='0'/>
</disk>
But I get an error:
qemu-system-x86_64: -drive
if=none,id=drive-scsi0-0-1-0,readonly=on,format=raw: Can't use 'raw' as
a block driver for the protocol level.
8 years, 1 month
[libvirt] qemu-guest-agent windows
by Umar Draz
Hello All,
I have installed the qemu guest agent on Windows 10, but I am unable to get
the IP address using this command:
virsh qemu-agent-command myvm '{ "execute": "guest-network-get-interfaces"
}'
I am getting the following error for the above command:
libvirt: QEMU Driver error : internal error: unable to execute QEMU agent
command 'guest-network-get-interfaces': this feature or command is not
currently supported
but the same command works successfully on Linux VMs.
Could you please help: is there any other way to get the interface IPs of a
Windows VM?
Br.
Umar
8 years, 1 month
[libvirt] [PATCH rfc v2 0/8] fspool: backend directory
by Olga Krishtal
Hi everyone, we would like to propose the first implementation of an fspool
with a directory backend.
Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. Furthermore, the new API follows
the storage API closely where it makes sense. Uploading/downloading operations
are not defined yet, as it is not obvious how to implement them properly. I guess
we can use some kind of tar to make a stream from a filesystem (a rough sketch
follows below). Please share your thoughts on this particular issue.
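For instance, the stream could be produced and consumed roughly like this
(the item path and the transport are placeholders, not part of the patchset):
    $ tar -C /fs_driver/item1 -cf - . | <send the stream via the upload API>
    $ <receive the stream via the download API> | tar -C /fs_driver/item1 -xf -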
The patchset provides a 'dir' backend which simply exposes directories inside
some directory on the host filesystem. The virsh commands are provided too, so
it is ready to play with: just replace 'pool' with 'fspool' and 'volume' with
'item' in the XML descriptions and virsh commands.
Example and usage:
Define:
virsh -c qemu:///system fspool-define-as fs_pool_name dir --target /path/on/host
Build
virsh -c qemu:///system fspool-build fs_pool_name
Start
virsh -c qemu:///system fspool-start fs_pool_name
Look inside
virsh -c qemu:///system fspool-list (--all) fspool_name
An fspool called POOL uses /fs_driver on the host fs to hold items.
virsh -c qemu:///system fspool-dumpxml POOL
<fspool type='dir'>
  <name>POOL</name>
  <uuid>c57c9d7c-b1d5-4c45-ba9c-67f03d4da160</uuid>
  <capacity unit='bytes'>733722615808</capacity>
  <allocation unit='bytes'>1331486720</allocation>
  <available unit='bytes'>534810800128</available>
  <source>
  </source>
  <target>
    <path>/fs_driver</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</fspool>
virsh -c qemu:///system fspool-info POOL
Name: POOL
UUID: c57c9d7c-b1d5-4c45-ba9c-67f03d4da160
State: running
Persistent: yes
Autostart: no autostart
Capacity: 683.33 GiB
Allocation: 1.24 GiB
Available: 498.08 GiB
virsh -c qemu+unix:///system item-list POOL
Name Path
------------------------------------------------------------------------------
item1 /fs_driver/item1
item10 /fs_driver/item10
item11 /fs_driver/item11
item12 /fs_driver/item12
item15 /fs_driver/item15
An fspool of directory type is a directory on the host fs that holds items (subdirectories).
Example usage for items:
virsh -c vz+unix:///system item-create-as POOL item1 1g - create item
virsh -c qemu+unix:///system item-dumpxml item1 POOL
<fsitem>
  <name>item1</name>
  <key>/fs_driver/item1</key>
  <source>
  </source>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <target>
    <format type='dir'/>
  </target>
</fsitem>
virsh -c qemu+unix:///system item-info item1 POOL
Name: item1
Type: dir
Capacity: 683.33 GiB
Allocation: 634.87 MiB
Autostart: no autostart
Capacity: 683.33 GiB
Allocation: 1.24 GiB
Available: 498.08 GiB
virsh -c qemu+unix:///system item-list POOL
Name Path
------------------------------------------------------------------------------
item1 /fs_driver/item1
item10 /fs_driver/item10
item11 /fs_driver/item11
item12 /fs_driver/item12
item15 /fs_driver/item15
v2:
- renamed Fs to FS
- the m4 macro is used in the configure.ac script
- updated docs
- created simple tests
- updated virsh.pod
- added information about fspool in formatfs.html
Olga Krishtal (8):
fspool: introduce filesystem pools API
fspool: usual driver based implementation of filesystem pools API
fspools: configuration and internal representation
fspools: acl support for filesystem pools
remote: filesystem pools driver implementation
fspool: default implementation of filesystem pools
virsh: filesystem pools commands
fspools: docs and tests for fspool directory backend
configure.ac | 38 +
daemon/Makefile.am | 4 +
daemon/libvirtd.c | 10 +
daemon/remote.c | 35 +
docs/formatfs.html.in | 208 ++
docs/fspool.html.in | 41 +
docs/schemas/fsitem.rng | 66 +
docs/schemas/fspool.rng | 82 +
docs/sitemap.html.in | 4 +
include/libvirt/libvirt-fs.h | 260 +++
include/libvirt/libvirt.h | 1 +
include/libvirt/virterror.h | 8 +
m4/virt-driver-fspool.m4 | 52 +
po/POTFILES.in | 6 +
src/Makefile.am | 46 +
src/access/viraccessdriver.h | 12 +
src/access/viraccessdrivernop.c | 19 +
src/access/viraccessdriverpolkit.c | 47 +
src/access/viraccessdriverstack.c | 49 +
src/access/viraccessmanager.c | 31 +
src/access/viraccessmanager.h | 11 +
src/access/viraccessperm.c | 15 +-
src/access/viraccessperm.h | 124 ++
src/check-driverimpls.pl | 2 +
src/conf/fs_conf.c | 1637 ++++++++++++++++
src/conf/fs_conf.h | 323 +++
src/datatypes.c | 154 ++
src/datatypes.h | 94 +
src/driver-fs.h | 192 ++
src/driver.h | 3 +
src/fs/fs_backend.h | 107 +
src/fs/fs_backend_dir.c | 355 ++++
src/fs/fs_backend_dir.h | 8 +
src/fs/fs_driver.c | 2058 ++++++++++++++++++++
src/fs/fs_driver.h | 10 +
src/libvirt-fs.c | 1556 +++++++++++++++
src/libvirt.c | 28 +
src/libvirt_private.syms | 53 +
src/libvirt_public.syms | 42 +
src/remote/remote_driver.c | 66 +
src/remote/remote_protocol.x | 466 ++++-
src/remote_protocol-structs | 165 ++
src/rpc/gendispatch.pl | 23 +-
src/util/virerror.c | 37 +
tests/Makefile.am | 12 +
tests/fsitemxml2xmlin/item.xml | 13 +
tests/fsitemxml2xmlout/item.xml | 13 +
tests/fsitemxml2xmltest.c | 105 +
.../dir-missing-target-path-invalid.xml | 12 +
tests/fspoolxml2xmlin/fspool-dir.xml | 16 +
tests/fspoolxml2xmlout/fspool-dir.xml | 16 +
tests/fspoolxml2xmltest.c | 81 +
tools/Makefile.am | 2 +
tools/virsh-fsitem.c | 1292 ++++++++++++
tools/virsh-fsitem.h | 39 +
tools/virsh-fspool.c | 1586 +++++++++++++++
tools/virsh-fspool.h | 38 +
tools/virsh.c | 4 +
tools/virsh.h | 9 +
tools/virsh.pod | 252 ++-
60 files changed, 12028 insertions(+), 10 deletions(-)
create mode 100644 docs/formatfs.html.in
create mode 100644 docs/fspool.html.in
create mode 100644 docs/schemas/fsitem.rng
create mode 100644 docs/schemas/fspool.rng
create mode 100644 include/libvirt/libvirt-fs.h
create mode 100644 m4/virt-driver-fspool.m4
create mode 100644 src/conf/fs_conf.c
create mode 100644 src/conf/fs_conf.h
create mode 100644 src/driver-fs.h
create mode 100644 src/fs/fs_backend.h
create mode 100644 src/fs/fs_backend_dir.c
create mode 100644 src/fs/fs_backend_dir.h
create mode 100644 src/fs/fs_driver.c
create mode 100644 src/fs/fs_driver.h
create mode 100644 src/libvirt-fs.c
create mode 100644 tests/fsitemxml2xmlin/item.xml
create mode 100644 tests/fsitemxml2xmlout/item.xml
create mode 100644 tests/fsitemxml2xmltest.c
create mode 100644 tests/fspoolschemadata/dir-missing-target-path-invalid.xml
create mode 100644 tests/fspoolxml2xmlin/fspool-dir.xml
create mode 100644 tests/fspoolxml2xmlout/fspool-dir.xml
create mode 100644 tests/fspoolxml2xmltest.c
create mode 100644 tools/virsh-fsitem.c
create mode 100644 tools/virsh-fsitem.h
create mode 100644 tools/virsh-fspool.c
create mode 100644 tools/virsh-fspool.h
--
1.8.3.1
8 years, 1 month
[libvirt] [PATCH 0/8] IVSHMEM -- third time's the charm
by Martin Kletzander
In this version we bring back the model, but disable migration for now.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1347049
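For context, the shmem device XML that gains a model with this series looks
roughly like the sketch below (names, sizes and the server path are only
illustrative):
    <shmem name='shmem0'>
      <model type='ivshmem-plain'/>
      <size unit='M'>4</size>
    </shmem>
    <shmem name='shmem1'>
      <model type='ivshmem-doorbell'/>
      <server path='/var/lib/libvirt/shmem-shmem1-sock'/>
      <msi vectors='32'/>
    </shmem>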
Martin Kletzander (8):
conf: Fix virDomainShmemDefFind
qemu: Disable migration with ivshmem
conf, qemu: Add support for shmem model
conf, qemu: Add newer shmem models
qemu: Add capabilities for ivshmem-{plain,doorbell}
qemu: Save various defaults for shmem
qemu: Support newer ivshmem device variants
qemu: Add support for hot/cold-(un)plug of shmem devices
docs/schemas/domaincommon.rng | 11 +
src/conf/domain_conf.c | 50 +++--
src/conf/domain_conf.h | 10 +
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 2 +
src/qemu/qemu_command.c | 100 ++++++++-
src/qemu/qemu_command.h | 10 +
src/qemu/qemu_domain.c | 49 +++-
src/qemu/qemu_driver.c | 39 +++-
src/qemu/qemu_hotplug.c | 247 ++++++++++++++++++++-
src/qemu/qemu_hotplug.h | 6 +
src/qemu/qemu_migration.c | 6 +
.../caps_2.6.0-gicv2.aarch64.xml | 2 +
.../caps_2.6.0-gicv3.aarch64.xml | 2 +
tests/qemucapabilitiesdata/caps_2.6.0.ppc64le.xml | 2 +
tests/qemucapabilitiesdata/caps_2.6.0.x86_64.xml | 2 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 2 +
tests/qemuhotplugtest.c | 21 ++
.../qemuhotplug-ivshmem-doorbell-detach.xml | 7 +
.../qemuhotplug-ivshmem-doorbell.xml | 4 +
.../qemuhotplug-ivshmem-plain-detach.xml | 6 +
.../qemuhotplug-ivshmem-plain.xml | 3 +
...muhotplug-base-live+ivshmem-doorbell-detach.xml | 1 +
.../qemuhotplug-base-live+ivshmem-doorbell.xml | 65 ++++++
.../qemuhotplug-base-live+ivshmem-plain-detach.xml | 1 +
.../qemuhotplug-base-live+ivshmem-plain.xml | 58 +++++
.../qemuxml2argv-shmem-plain-doorbell.args | 43 ++++
...m.xml => qemuxml2argv-shmem-plain-doorbell.xml} | 11 +-
tests/qemuxml2argvdata/qemuxml2argv-shmem.args | 2 +-
tests/qemuxml2argvdata/qemuxml2argv-shmem.xml | 2 +
tests/qemuxml2argvtest.c | 3 +
tests/qemuxml2xmloutdata/qemuxml2xmlout-shmem.xml | 9 +
33 files changed, 760 insertions(+), 22 deletions(-)
create mode 100644 tests/qemuhotplugtestdevices/qemuhotplug-ivshmem-doorbell-detach.xml
create mode 100644 tests/qemuhotplugtestdevices/qemuhotplug-ivshmem-doorbell.xml
create mode 100644 tests/qemuhotplugtestdevices/qemuhotplug-ivshmem-plain-detach.xml
create mode 100644 tests/qemuhotplugtestdevices/qemuhotplug-ivshmem-plain.xml
create mode 120000 tests/qemuhotplugtestdomains/qemuhotplug-base-live+ivshmem-doorbell-detach.xml
create mode 100644 tests/qemuhotplugtestdomains/qemuhotplug-base-live+ivshmem-doorbell.xml
create mode 120000 tests/qemuhotplugtestdomains/qemuhotplug-base-live+ivshmem-plain-detach.xml
create mode 100644 tests/qemuhotplugtestdomains/qemuhotplug-base-live+ivshmem-plain.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-shmem-plain-doorbell.args
copy tests/qemuxml2argvdata/{qemuxml2argv-shmem.xml => qemuxml2argv-shmem-plain-doorbell.xml} (82%)
--
2.10.0
8 years, 1 month
[libvirt] [PATCH 0/2] support qemu drive cache.* parameters
by Nikolay Shirokovskiy
Nikolay Shirokovskiy (2):
conf: add disk cache tuning parameters after qemu
qemu: support <cachetune> in domain disk xml
.gnulib | 2 +-
docs/schemas/domaincommon.rng | 22 ++++++
src/conf/domain_conf.c | 90 ++++++++++++++++++++++
src/conf/domain_conf.h | 7 ++
src/qemu/qemu_capabilities.c | 12 ++-
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_command.c | 30 ++++++++
tests/qemucapabilitiesdata/caps_1.6.0.x86_64.xml | 3 +
tests/qemucapabilitiesdata/caps_1.7.0.x86_64.xml | 3 +
tests/qemucapabilitiesdata/caps_2.1.1.x86_64.xml | 3 +
tests/qemucapabilitiesdata/caps_2.4.0.x86_64.xml | 3 +
tests/qemucapabilitiesdata/caps_2.5.0.x86_64.xml | 3 +
.../caps_2.6.0-gicv2.aarch64.xml | 3 +
.../caps_2.6.0-gicv3.aarch64.xml | 3 +
tests/qemucapabilitiesdata/caps_2.6.0.ppc64le.xml | 3 +
tests/qemucapabilitiesdata/caps_2.6.0.x86_64.xml | 3 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 3 +
17 files changed, 194 insertions(+), 2 deletions(-)
--
1.8.3.1
8 years, 1 month
[libvirt] [PATCH 0/4] vbox: address thread-safety issues.
by Dawid Zamirski
This patch series solves (at least in my testing) vbox driver
thread-safety issues that were also outlined on the libvirt-users ML [1]
and that I was affected by. These patches try to follow the suggestions
made by Matthias [2] in that thread as closely as possible. Here's where
my patches differ from those suggestions:
* vboxGlobalData - still needs to keep a reference to the ISession and
IVirtualBox instances because it's apparently not possible to have
multiple instances created/destroyed safely with pfnComInitialize
and pfnComUninitialize calls on a per-connection basis.
* as such, vboxPrivate (the new struct introduced here) also has
references to ISession and IVirtualBox (which are just references to
the ones from the global), mainly to imitate an ISession instance
per connection. Apparently newer VBOX SDKs introduced
pfnClientInitialize, which can allegedly create multiple ISessions, and
we might want to take advantage of that in the future, hopefully
without making additional changes all over the driver code like this
patch did.
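A rough sketch of the shape described above (the field and type names here
are my assumptions, not necessarily what the patches use):
    typedef struct _vboxPrivate vboxPrivate;
    struct _vboxPrivate {
        vboxGlobalData *gdata;   /* single, globally initialized driver state */
        IVirtualBox *vboxObj;    /* borrowed reference to the global IVirtualBox */
        ISession *vboxSession;   /* borrowed reference to the global ISession,
                                    imitating a per-connection session */
    };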
The gist of the change is in patch 3, which also contains a more
in-depth explanation of how the issue is being resolved. Also, please
note that patch 4 should be squashed into 3; it was kept separate
only for code-review purposes, and the 3rd patch won't compile without 4
applied on top.
[1] https://www.redhat.com/archives/libvirt-users/2012-April/msg00122.html
[2] https://www.redhat.com/archives/libvirt-users/2012-April/msg00125.html
Dawid Zamirski (4):
vbox: add vboxPrivate struct.
vbox: replace vboxGlobalData with vboxPrivate.
vbox: change API (un)initialization logic.
vbox: update rest of the code to for prior changes.
src/vbox/vbox_common.c | 275 ++++++++++++---------------
src/vbox/vbox_common.h | 32 ++--
src/vbox/vbox_network.c | 51 +++--
src/vbox/vbox_storage.c | 20 +-
src/vbox/vbox_tmpl.c | 433 +++++++++++++++++++++---------------------
src/vbox/vbox_uniformed_api.h | 128 +++++++------
6 files changed, 460 insertions(+), 479 deletions(-)
--
2.7.4
8 years, 1 month
[libvirt] Cpu Modeling
by Jason J. Herne
Hi Jiri & Eduardo,
You might remember a discussion with David Hildenbrand of IBM on the Qemu
mailing list regarding a new Qemu<->Libvirt interface for cpu modeling. I am
picking up this work from David and I wanted to confirm that we are still on
the same page as to the direction of that interface.
For your reference:
https://www.redhat.com/archives/libvir-list/2016-June/thread.html#01413
https://lists.gnu.org/archive/html/qemu-devel/2016-09/threads.html#00489
The first link is to the discussion you were directly involved in. The second
link is to the final patch set posted to qemu-devel. The cover letter gives a
good overview of the interface added to Qemu and the proposed use-case for
Libvirt to use this new cpu modeling support. I'll paste in the most relevant
section for your convenience:
--------------------------------Libvirt usecase----------------------------
Testing for runnability:
- Simply try to start QEMU with KVM, compat machine, CPU model
- Could be done using query-cpu-model-comparison in the future.
Identifying host model, e.g. "virsh capabilities"
- query-cpu-model-expansion on "host" with "-M none --enable-kvm"
<cpu mode='host-model'>:
- simply copy the identified host model
<cpu mode='host-passthrough'>:
- "-cpu host"
"virsh cpu-baseline":
- query-cpu-model-baseline on two models with "-M none"
"virsh cpu-compare":
- query-cpu-model-comparison on two models with "-M none"
There might be some scenarios where libvirt wants to convert another CPU
model to a static variant; this can be done using query-cpu-model-expansion.
---------------------------------------------------------------------------
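For example, identifying the host model for "virsh capabilities" as described
above could boil down to a QMP exchange roughly like this (the exact binary
name and invocation details are my assumption):
    $ qemu-system-s390x -M none --enable-kvm -nodefaults -nographic -qmp stdio
    { "execute": "qmp_capabilities" }
    { "execute": "query-cpu-model-expansion",
      "arguments": { "type": "static", "model": { "name": "host" } } }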
Now that I've hopefully refreshed your memory :) I just want to make sure that
you are still on board with this plan. I realize that this new approach does
things differently than Libvirt does today for other platforms, especially
x86_64. The big differences are as follows:
1. We will invoke qemu to gather the host cpu data used for virsh capabilities.
Today this data seems to be collected directly from the host hardware for most
(all?) architectures.
2. virsh cpu-models {arch} will also use a Qemu invocation to gather cpu model
data.
3. Most architectures seem to use a "map" (xml file containing cpu model data
that ships with Libvirt) to satisfy #1 and #2 above. Our new method does not use
this map as it gets all of the data directly from Qemu.
4. virsh cpu-baseline and cpu-compare will now invoke qemu directly as well.
Note: I'm not sure exactly how much of this will apply only to s390, with other
architectures moving over to the new interface if/when they want to, or whether
we will want to change all architectures to this new interface at the same time.
Any guidance?
Thanks for your time and consideration.
--
-- Jason J. Herne (jjherne(a)linux.vnet.ibm.com)
8 years, 1 month
[libvirt] [PATCH] allow snapshots of network sheepdog disks
by Vasiliy Tolstov
Some time ago, in f7c1410b0ee5b878e81f2eddf86c609947a9b27c, the ability to
snapshot sheepdog disks was removed. But sheepdog has the ability to store VM
state inside a special object type.
Vasiliy Tolstov (1):
sheepdog: allow snapshot
src/qemu/qemu_driver.c | 6 ++++++
1 file changed, 6 insertions(+)
--
2.7.4
8 years, 1 month