Re: [libvirt] VirtIO-SCSI disks limitation
by Osier Yang
[[ TO libvir-list ]]
Hi, Daniel,
I'm going to share this thread with the public list for further discussion.
Hope you don't mind.
On 26/11/13 02:37, Daniel Erez wrote:
> Hi Osier,
>
> It seems there's a limitation in libvirt that allows up to six disks in a
> virtio-scsi controller. I.e. when sending more than six disks, libvirt
> automatically creates a new controller but of type virtual LSI Logic SCSI.
> Is this behavior a known issue?
For a narrow SCSI bus, we indeed allow only 6 disks.
For a wide SCSI bus, we allow 15 disks (not counting the controller
itself on unit 7).
I suspect we may have a problem detecting whether a bus is wide,
though, since as far as I can see from the user cases it is always
treated as a narrow SCSI bus.
> Shouldn't libvirt allow up to 256 disks
> per controller or at least create a new controller of type virtio-scsi when needed?
The model for auto-generated SCSI controllers is lsilogic, which we can't
simply change, since that might affect existing guests.
There was a similar discussion on libvir-list before [1].
The controller auto-generation code is quite old, though, and I'm not
entirely clear about it either. I'd like to see another discussion to clarify
whether we should do some work for upper-layer apps (e.g. oVirt).
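For reference, an upper-layer app can already avoid the lsilogic default by defining the controller and the disk's address explicitly in the domain XML. A minimal sketch (the file path, device name, and unit numbers are illustrative):

```xml
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/data.qcow2'/>
  <target dev='sdb' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
```

With an explicit <address>, libvirt attaches the disk to the given controller instead of auto-generating a new one.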
Basically, two points:
* Should we change the maximum number of units for a SCSI controller,
  i.e. should 7 (narrow bus) / 16 (wide bus) be changed to other numbers?
  I'm afraid such changes could affect existing guests, though.
* Do we really want to put the burden on users, i.e. make them create the
  controller explicitly? For use cases where someone wants to add many
  disks to a guest, they first need to know whether the bus is narrow or
  wide (which we don't expose), and then do the calculation to know when
  to create a new SCSI controller.
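The calculation that second point pushes onto users can be sketched as follows (a hypothetical helper, using the 6/15 usable-unit limits quoted above):

```python
def controllers_needed(num_disks, wide_bus=False):
    """Return how many SCSI controllers are needed for num_disks disks,
    assuming 6 usable units on a narrow bus and 15 on a wide bus
    (the controller itself occupies one unit)."""
    units_per_controller = 15 if wide_bus else 6
    # Ceiling division without floating point.
    return -(-num_disks // units_per_controller)
```

For example, 20 disks require 4 controllers on a narrow bus but only 2 on a wide one; the user can't make this calculation correctly without the bus-width knowledge that libvirt does not currently expose.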
@Daniel, have I captured your problems correctly? Please comment if this
doesn't cover all your thoughts.
[1] http://www.redhat.com/archives/libvir-list/2012-November/msg00537.html
Regards,
Osier
>
> [the issue has been discussed as part of: http://gerrit.ovirt.org/#/c/20630]
>
> Thanks,
> Daniel
>
>
> ----- Original Message -----
>> From: "Dave Allan" <dallan(a)redhat.com>
>> To: "Daniel Erez" <derez(a)redhat.com>
>> Cc: "Ayal Baron" <abaron(a)redhat.com>, "Osier Yang" <jyang(a)redhat.com>, "John Ferlan" <jferlan(a)redhat.com>
>> Sent: Monday, November 25, 2013 8:19:42 PM
>> Subject: Re: VirtIO-SCSI disks limitation
>>
>> Hi Daniel,
>>
>> Talk to Osier Yang and John Ferlan (cc'd).
>>
>> Dave
>>
>>
>> On Mon, Nov 25, 2013 at 12:48:45PM -0500, Daniel Erez wrote:
>>> Hi Dave,
>>>
>>> I'm an engineer at oVirt team and I'm working on VirtIO-SCSI integration.
>>> I would appreciate it if you could refer me to a point of contact at
>>> libvirt.
>>> In specific, I need to know if there's any hardcoded limitation for the
>>> number of disks per VirtIO-SCSI controller.
>>>
>>> Best Regards,
>>> Daniel
[libvirt] [PATCH v3] sasl: Fix authentication when using PLAIN mechanism
by Christophe Fergeau
With some authentication mechanism (PLAIN for example), sasl_client_start()
can return SASL_OK, which translates to virNetSASLSessionClientStart()
returning VIR_NET_SASL_COMPLETE.
The cyrus-sasl documentation is a bit vague as to what to do in such a
situation, but upstream clarified this a bit in
http://asg.andrew.cmu.edu/archive/message.php?mailbox=archive.cyrus-sasl&...
When we get VIR_NET_SASL_COMPLETE from virNetSASLSessionClientStart() and
the remote also tells us that authentication is complete, we should end
the authentication procedure rather than forcing a call to
virNetSASLSessionClientStep(). Without this patch, when trying to use SASL
PLAIN, I get:
error :authentication failed : Failed to step SASL negotiation: -1
(SASL(-1): generic failure: Unable to find a callback: 32775)
This patch is based on a spice-gtk patch by Dietmar Maurer.
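The decision the patch adds can be sketched in Python, with hypothetical stand-in constants for libvirt's virNetSASL status codes (the real logic lives in remoteAuthSASL() in C):

```python
# Hypothetical stand-ins for libvirt's virNetSASLSession status codes.
VIR_NET_SASL_COMPLETE = 0
VIR_NET_SASL_CONTINUE = 1

def negotiation_finished(client_status, server_complete):
    """True when SASL auth can stop right after the client-start call.

    With mechanisms such as PLAIN, sasl_client_start() may already report
    completion; if the server agrees, forcing another client step would
    fail because no further callbacks are registered for the mechanism."""
    return server_complete and client_status == VIR_NET_SASL_COMPLETE
```

Previously the client always entered the step loop even in this case, producing the "Unable to find a callback" failure quoted above.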
---
Change since v2:
- move the added test out of the for(;;) loop
src/remote/remote_driver.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index df7558b..f9fd915 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -4121,10 +4121,18 @@ remoteAuthSASL(virConnectPtr conn, struct private_data *priv,
VIR_DEBUG("Client step result complete: %d. Data %zu bytes %p",
complete, serverinlen, serverin);
+ /* Previous server call showed completion & sasl_client_start() told us
+ * we are locally complete too */
+ if (complete && err == VIR_NET_SASL_COMPLETE)
+ goto done;
+
/* Loop-the-loop...
- * Even if the server has completed, the client must *always* do at least one step
- * in this loop to verify the server isn't lying about something. Mutual auth */
+ * Even if the server has completed, the client must loop until sasl_client_start() or
+ * sasl_client_step() return SASL_OK to verify the server isn't lying
+ * about something. Mutual auth
+ * */
for (;;) {
+
restep:
if ((err = virNetSASLSessionClientStep(sasl,
serverin,
@@ -4195,6 +4203,7 @@ remoteAuthSASL(virConnectPtr conn, struct private_data *priv,
priv->is_secure = 1;
}
+done:
VIR_DEBUG("SASL authentication complete");
virNetClientSetSASLSession(priv->client, sasl);
ret = 0;
--
1.8.4.2
[libvirt] [PATCH 0/3] SASL valgrind fixes
by Christophe Fergeau
Hey,
While running virsh through valgrind for some SASL tests, I triggered some
leaks/invalid reads, this patch series fixes these.
Christophe
[libvirt] [PATCHv6 0/5] Write separate module for hostdev passthrough
by Chunyan Liu
These patches implement a separate module for hostdev passthrough so that it
can be shared by different drivers and maintain the global state of a host
device. They also add passthrough support to the libxl driver, and change the
qemu and lxc drivers to use the common hostdev library instead of their own
hostdev APIs.
---
Changes to v5:
* To handle upgrades, check for the netconfig file in the old stateDir
  (xxx/qemu/) if it's not found in the new location (xxx/hostdevmgr/): if a
  VM already exists, then after an upgrade NetConfigRestore should still find
  its netconfig file in the old stateDir.
* Rebase onto the new qemu_hostdev changes, e.g. prefer VFIO, and add
  CheckSupport.
* Split adding a PCI backend type for Xen and the other hostdev common
  library changes into separate patches.
* Other fixes according to Daniel's comments.
v5 is here:
http://www.redhat.com/archives/libvir-list/2013-September/msg00745.html
Chunyan Liu (5):
Add a hostdev PCI backend type
Add hostdev passthrough common library
Add pci passthrough to libxl driver
Change qemu driver to use hostdev common library
Change lxc driver to use hostdev common library
docs/schemas/domaincommon.rng | 1 +
po/POTFILES.in | 3 +-
src/Makefile.am | 3 +-
src/conf/domain_conf.c | 3 +-
src/conf/domain_conf.h | 1 +
src/libvirt_private.syms | 20 +
src/libxl/libxl_conf.c | 63 ++
src/libxl/libxl_conf.h | 4 +
src/libxl/libxl_domain.c | 9 +
src/libxl/libxl_driver.c | 448 +++++++++++-
src/lxc/lxc_conf.h | 4 -
src/lxc/lxc_driver.c | 47 +-
src/lxc/lxc_hostdev.c | 413 ----------
src/lxc/lxc_hostdev.h | 43 --
src/lxc/lxc_process.c | 24 +-
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.h | 9 +-
src/qemu/qemu_domain.c | 22 +
src/qemu/qemu_driver.c | 77 +--
src/qemu/qemu_hostdev.c | 1453 -----------------------------------
src/qemu/qemu_hostdev.h | 74 --
src/qemu/qemu_hotplug.c | 133 ++--
src/qemu/qemu_process.c | 40 +-
src/util/virhostdev.c | 1671 +++++++++++++++++++++++++++++++++++++++++
src/util/virhostdev.h | 129 ++++
src/util/virpci.c | 30 +-
src/util/virpci.h | 9 +-
src/util/virscsi.c | 28 +-
src/util/virscsi.h | 8 +-
src/util/virusb.c | 29 +-
src/util/virusb.h | 8 +-
31 files changed, 2607 insertions(+), 2203 deletions(-)
delete mode 100644 src/lxc/lxc_hostdev.c
delete mode 100644 src/lxc/lxc_hostdev.h
delete mode 100644 src/qemu/qemu_hostdev.c
delete mode 100644 src/qemu/qemu_hostdev.h
create mode 100644 src/util/virhostdev.c
create mode 100644 src/util/virhostdev.h
[libvirt] Help required in simulating libvirt TLS server
by Arun Viswanath
Hi All,
Could someone explain how the libvirt TLS server is implemented? For
testing purposes I need to implement a similar TLS server in Java or
Python, capable of receiving all libvirt calls (getCapabilities, hostname,
etc.) and returning whatever responses I configure.
Basically, I need to simulate the libvirt TLS server. I have tried creating
several TLS servers, but none of my implementations can complete a
proper handshake with the Python libvirt client and serve successful calls.
Any ideas or help would be appreciated.
Thanks In Advance,
Arun V
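As a starting point, the server-side TLS setup can be sketched with Python's stdlib ssl module. This only covers the handshake: libvirtd listens on TCP port 16514 and requires client certificates signed by the same CA, so the simulator must load a matching CA; after the handshake the traffic is libvirt's own binary RPC protocol, which still has to be decoded and answered (not shown here). The certificate file names are illustrative.

```python
import socket
import ssl


def build_server_context(certfile=None, keyfile=None, cafile=None):
    """Build a TLS context resembling libvirtd's requirements: the server
    presents its own certificate and demands a client certificate signed
    by the shared CA. Pass e.g. certfile='servercert.pem',
    keyfile='serverkey.pem', cafile='cacert.pem' (illustrative names)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    if cafile:
        ctx.load_verify_locations(cafile=cafile)
    # Mutual authentication: reject clients without a valid certificate,
    # as libvirtd does by default.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx


def serve_once(ctx, port=16514):
    """Accept one TLS connection on libvirt's default TLS port (sketch)."""
    with socket.create_server(("", port)) as sock:
        conn, _addr = sock.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            # A real simulator would parse libvirt RPC packets here and
            # reply with configured responses for getCapabilities etc.
            return tls.recv(4)
```

If the handshake with the Python libvirt client fails, the first things to check are that both sides trust the same CA and that the server certificate's hostname matches the name the client connects to.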
[libvirt] [PATCHv4 0/8] glusterfs storage pool
by Eric Blake
v3: https://www.redhat.com/archives/libvir-list/2013-November/msg00348.html
Depends on:
https://www.redhat.com/archives/libvir-list/2013-November/msg00955.html
Changes since then, addressing review feedback:
- rebase to other improvements in the meantime
- New patches 4-7
- pool changed to require <name>volume</name> to have no slash,
with subdirectory within a volume selected by <dir path=.../>
which must begin with slash
- documentation improved to match actual testing
- directories, symlinks are handled
- volume owner and timestamps are handled
- volume xml tests added, with several bugs in earlier version
fixed along the way
- compared gluster pool with a netfs pool to ensure both can
see the same level of detail from the same gluster storage
If you think it will help review, ask me to provide an interdiff
from v3 (although I have not done it yet).
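Under the naming rules above, a gluster pool definition would look roughly like this (host name, volume name, and subdirectory are illustrative; the volume name carries no slash, while the <dir> path begins with one):

```xml
<pool type='gluster'>
  <name>myglusterpool</name>
  <source>
    <host name='gluster.example.com'/>
    <name>volume</name>
    <dir path='/subdir'/>
  </source>
</pool>
```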
Eric Blake (8):
storage: initial support for linking with libgfapi
storage: document gluster pool
storage: implement rudimentary glusterfs pool refresh
storage: add network-dir as new storage volume type
storage: improve directory support in gluster pool
storage: improve allocation stats reported on gluster files
storage: improve handling of symlinks in gluster
storage: probe qcow2 volumes in gluster pool
configure.ac | 21 ++
docs/formatstorage.html.in | 15 +-
docs/schemas/storagepool.rng | 26 +-
docs/storage.html.in | 91 +++++-
include/libvirt/libvirt.h.in | 2 +
libvirt.spec.in | 15 +
m4/virt-gluster.m4 | 28 ++
po/POTFILES.in | 1 +
src/Makefile.am | 10 +
src/conf/storage_conf.c | 28 +-
src/conf/storage_conf.h | 3 +-
src/qemu/qemu_command.c | 6 +-
src/qemu/qemu_conf.c | 4 +-
src/storage/storage_backend.c | 14 +-
src/storage/storage_backend.h | 6 +-
src/storage/storage_backend_fs.c | 5 +-
src/storage/storage_backend_gluster.c | 381 +++++++++++++++++++++++
src/storage/storage_backend_gluster.h | 29 ++
tests/storagepoolxml2xmlin/pool-gluster-sub.xml | 9 +
tests/storagepoolxml2xmlin/pool-gluster.xml | 8 +
tests/storagepoolxml2xmlout/pool-gluster-sub.xml | 12 +
tests/storagepoolxml2xmlout/pool-gluster.xml | 12 +
tests/storagepoolxml2xmltest.c | 2 +
tests/storagevolxml2xmlin/vol-gluster-dir.xml | 13 +
tests/storagevolxml2xmlout/vol-gluster-dir.xml | 18 ++
tests/storagevolxml2xmltest.c | 1 +
tools/virsh-volume.c | 5 +-
27 files changed, 740 insertions(+), 25 deletions(-)
create mode 100644 m4/virt-gluster.m4
create mode 100644 src/storage/storage_backend_gluster.c
create mode 100644 src/storage/storage_backend_gluster.h
create mode 100644 tests/storagepoolxml2xmlin/pool-gluster-sub.xml
create mode 100644 tests/storagepoolxml2xmlin/pool-gluster.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-gluster-sub.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-gluster.xml
create mode 100644 tests/storagevolxml2xmlin/vol-gluster-dir.xml
create mode 100644 tests/storagevolxml2xmlout/vol-gluster-dir.xml
--
1.8.3.1
[libvirt] [test-API][PATCH] Add blockjob related cases
by Jincheng Miao
Add:
* repos/domain/blkstatsflags.py
* repos/domain/block_iotune.py
* repos/domain/block_peek.py
* repos/domain/block_resize.py
* repos/domain/domain_blkio.py
* cases/basic_blockjob.conf
Modify: replace virsh commands with API calls in the test functions
* repos/domain/blkstats.py
* repos/domain/domain_blkinfo.py
---
cases/basic_blockjob.conf | 87 ++++++++++++++++++++++
repos/domain/blkstats.py | 2 -
repos/domain/blkstatsflags.py | 63 ++++++++++++++++
repos/domain/block_iotune.py | 118 +++++++++++++++++++++++++++++
repos/domain/block_peek.py | 69 +++++++++++++++++
repos/domain/block_resize.py | 88 ++++++++++++++++++++++
repos/domain/domain_blkinfo.py | 87 ++++++++++++----------
repos/domain/domain_blkio.py | 165 +++++++++++++++++++++++++++++++++++++++++
8 files changed, 639 insertions(+), 40 deletions(-)
create mode 100644 cases/basic_blockjob.conf
create mode 100644 repos/domain/blkstatsflags.py
create mode 100644 repos/domain/block_iotune.py
create mode 100644 repos/domain/block_peek.py
create mode 100644 repos/domain/block_resize.py
create mode 100644 repos/domain/domain_blkio.py
diff --git a/cases/basic_blockjob.conf b/cases/basic_blockjob.conf
new file mode 100644
index 0000000..65af2c3
--- /dev/null
+++ b/cases/basic_blockjob.conf
@@ -0,0 +1,87 @@
+domain:install_linux_cdrom
+ guestname
+ $defaultname
+ guestos
+ $defaultos
+ guestarch
+ $defaultarch
+ vcpu
+ $defaultvcpu
+ memory
+ $defaultmem
+ hddriver
+ $defaulthd
+ nicdriver
+ $defaultnic
+ macaddr
+ 54:52:00:45:c3:8a
+
+domain:install_linux_check
+ guestname
+ $defaultname
+ virt_type
+ $defaulthv
+ hddriver
+ $defaulthd
+ nicdriver
+ $defaultnic
+
+domain:block_iotune
+ guestname
+ $defaultname
+ bytes_sec
+ 100000
+ iops_sec
+ 0
+
+domain:block_iotune
+ guestname
+ $defaultname
+ bytes_sec
+ 0
+ iops_sec
+ 1000
+
+domain:block_peek
+ guestname
+ $defaultname
+
+domain:block_peek
+ guestname
+ $defaultname
+
+domain:block_resize
+ guestname
+ $defaultname
+ diskpath
+ /var/lib/libvirt/images/libvirt-test-api
+ disksize
+ 1G
+
+domain:blkstats
+ guestname
+ $defaultname
+
+domain:blkstatsflags
+ guestname
+ $defaultname
+ flags
+ 0
+
+domain:domain_blkinfo
+ guestname
+ $defaultname
+ blockdev
+ /var/lib/libvirt/images/libvirt-test-api
+
+domain:domain_blkio
+ guestname
+ $defaultname
+ weight
+ 500
+
+domain:undefine
+ guestname
+ $defaultname
+
+options cleanup=enable
diff --git a/repos/domain/blkstats.py b/repos/domain/blkstats.py
index 0254922..27c2a46 100644
--- a/repos/domain/blkstats.py
+++ b/repos/domain/blkstats.py
@@ -1,8 +1,6 @@
#!/usr/bin/evn python
# To test domain block device statistics
-import os
-import sys
import time
import libxml2
diff --git a/repos/domain/blkstatsflags.py b/repos/domain/blkstatsflags.py
new file mode 100644
index 0000000..4c84a18
--- /dev/null
+++ b/repos/domain/blkstatsflags.py
@@ -0,0 +1,63 @@
+#!/usr/bin/evn python
+# To test domain block device statistics with flags
+
+import time
+import libxml2
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname', 'flags')
+optional_params = {}
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
+
+def check_blkstats():
+ """Check block device statistic result"""
+ pass
+
+def blkstatsflags(params):
+ """Domain block device statistic"""
+ logger = params['logger']
+ guestname = params['guestname']
+ flags = int(params['flags'])
+
+ conn = sharedmod.libvirtobj['conn']
+
+ domobj = conn.lookupByName(guestname)
+
+ # Check domain block status
+ if check_guest_status(domobj):
+ pass
+ else:
+ domobj.create()
+ time.sleep(90)
+ try:
+ xml = domobj.XMLDesc(0)
+ doc = libxml2.parseDoc(xml)
+ cont = doc.xpathNewContext()
+ devs = cont.xpathEval("/domain/devices/disk/target/@dev")
+
+ for dev in devs:
+ path = dev.content
+ blkstats = domobj.blockStatsFlags(path, flags)
+ # check_blkstats()
+ logger.debug(blkstats)
+ for entry in blkstats.keys():
+ logger.info("%s %s %s" %(path, entry, blkstats[entry]))
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
+ return 1
+
+ return 0
diff --git a/repos/domain/block_iotune.py b/repos/domain/block_iotune.py
new file mode 100644
index 0000000..f92eaf6
--- /dev/null
+++ b/repos/domain/block_iotune.py
@@ -0,0 +1,118 @@
+#!/usr/bin/evn python
+# To test domain block device iotune
+
+import time
+import libxml2
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname', 'bytes_sec', 'iops_sec')
+optional_params = {}
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or \
+ state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
+
+def prepare_block_iotune(param, wbs, rbs, tbs, wis, ris, tis, logger):
+ """prepare the block iotune parameter
+ """
+ logger.info("write_bytes_sec : %s" % wbs)
+ param['write_bytes_sec'] = wbs
+ logger.info("read_bytes_sec : %s" % rbs)
+ param['read_bytes_sec'] = rbs
+ logger.info("total_bytes_sec : %s" % tbs)
+ param['total_bytes_sec'] = tbs
+ logger.info("write_iops_sec : %s" % wis)
+ param['write_iops_sec'] = wis
+ logger.info("read_iops_sec : %s" % ris)
+ param['read_iops_sec'] = ris
+ logger.info("total_iops_sec : %s\n" % tis)
+ param['total_iops_sec'] = tis
+ return 0
+
+def check_iotune(expected_param, result_param):
+ """check block iotune configuration
+ """
+ for k in expected_param.keys():
+ if expected_param[k] != result_param[k]:
+ return 1
+ return 0
+
+def block_iotune(params):
+ """Domain block device iotune"""
+ logger = params['logger']
+ guestname = params['guestname']
+ bytes_sec = int(params['bytes_sec'])
+ iops_sec = int(params['iops_sec'])
+ flag = 0
+
+ conn = sharedmod.libvirtobj['conn']
+
+ domobj = conn.lookupByName(guestname)
+
+ # Check domain block status
+ if check_guest_status(domobj):
+ pass
+ else:
+ domobj.create()
+ time.sleep(90)
+
+ try:
+ xml = domobj.XMLDesc(0)
+ doc = libxml2.parseDoc(xml)
+ cont = doc.xpathNewContext()
+ vdevs = cont.xpathEval("/domain/devices/disk/target/@dev")
+ vdev = vdevs[0].content
+
+ iotune_para = {'write_bytes_sec': 0L,
+ 'total_iops_sec': 0L,
+ 'read_iops_sec': 0L,
+ 'read_bytes_sec': 0L,
+ 'write_iops_sec': 0L,
+ 'total_bytes_sec': 0L
+ }
+
+ logger.info("prepare block iotune:")
+ prepare_block_iotune(iotune_para, bytes_sec, bytes_sec, 0,
+ iops_sec, iops_sec, 0, logger)
+
+ logger.info("start to set block iotune:")
+ domobj.setBlockIoTune(vdev, iotune_para, flag)
+
+ res = domobj.blockIoTune(vdev, flag)
+ ret = check_iotune(iotune_para, res)
+ if not ret:
+ logger.info("set pass")
+ else:
+ logger.error("fails to set")
+ return 1
+
+ logger.info("prepare block iotune:")
+ prepare_block_iotune(iotune_para, 0, 0, bytes_sec,
+ 0, 0, iops_sec, logger)
+
+ logger.info("start to set block iotune:")
+ domobj.setBlockIoTune(vdev, iotune_para, flag)
+
+ res = domobj.blockIoTune(vdev, flag)
+ ret = check_iotune(iotune_para, res)
+ if not ret:
+ logger.info("set pass")
+ else:
+ logger.error("fails to set")
+ return 1
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
+ return 1
+
+ return 0
\ No newline at end of file
diff --git a/repos/domain/block_peek.py b/repos/domain/block_peek.py
new file mode 100644
index 0000000..f159f48
--- /dev/null
+++ b/repos/domain/block_peek.py
@@ -0,0 +1,69 @@
+#!/usr/bin/evn python
+# To test domain block device peek
+
+import time
+import libxml2
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname',)
+optional_params = {}
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or \
+ state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
+
+def block_peek(params):
+ """domain block peek test function
+ """
+ logger = params['logger']
+ guestname = params['guestname']
+ flag = 0
+
+ conn = sharedmod.libvirtobj['conn']
+
+ domobj = conn.lookupByName(guestname)
+
+ # Check domain block status
+ if check_guest_status(domobj):
+ pass
+ else:
+ domobj.create()
+ time.sleep(90)
+
+ try:
+ xml = domobj.XMLDesc(0)
+ doc = libxml2.parseDoc(xml)
+ cont = doc.xpathNewContext()
+ vdevs = cont.xpathEval("/domain/devices/disk/target/@dev")
+ vdev = vdevs[0].content
+
+ logger.info("start to test block_peek.")
+ logger.info("get the MBR's last byte of domain %s %s is:"
+ % (guestname, vdev))
+
+ last_byte = domobj.blockPeek(vdev, 511, 1, flag)
+ logger.info(last_byte)
+
+ # compare with '\xaa'
+ if last_byte == '\xaa':
+ logger.info("Pass: the last byte is \\xaa")
+ else:
+ logger.error("Failed: the last byte is not \\xaa")
+ logger.error("please make sure the guest is bootable")
+ return 1
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
+ return 1
+
+ return 0
\ No newline at end of file
diff --git a/repos/domain/block_resize.py b/repos/domain/block_resize.py
new file mode 100644
index 0000000..1dc4b45
--- /dev/null
+++ b/repos/domain/block_resize.py
@@ -0,0 +1,88 @@
+#!/usr/bin/evn python
+# To test domain block device resize
+
+import time
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+from utils import utils
+
+required_params = ('guestname', 'diskpath', 'disksize',)
+optional_params = {}
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or \
+ state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
+
+def block_resize(params):
+ """domain block resize test function
+ """
+ logger = params['logger']
+ guestname = params['guestname']
+ diskpath = params['diskpath']
+ disksize = params['disksize']
+ flag = 0
+
+ out = utils.get_capacity_suffix_size(disksize)
+ if len(out) == 0:
+ logger.error("disksize parse error: \'%s\'" % disksize)
+ logger.error("disksize should be a number with capacity suffix")
+ return 1
+
+ if out['suffix'] == 'K':
+ flag = 0
+ disksize = long(out['capacity'])
+ elif out['suffix'] == 'B':
+ flag = 1
+ disksize = long(out['capacity_byte'])
+ elif out['suffix'] == 'M':
+ flag = 0
+ disksize = long(out['capacity']) * 1024
+ elif out['suffix'] == 'G':
+ flag = 0
+ disksize = long(out['capacity']) * 1024 * 1024
+ else:
+ logger.error("disksize parse error: with a unsupported suffix \'%s\'"
+ % out['suffix'])
+ logger.error("the available disksize suffix of block_resize is: ")
+ logger.error("B, K, M, G, T")
+ return 1
+
+ conn = sharedmod.libvirtobj['conn']
+
+ domobj = conn.lookupByName(guestname)
+
+ # Check domain block status
+ if check_guest_status(domobj):
+ pass
+ else:
+ domobj.create()
+ time.sleep(90)
+
+ try:
+ logger.info("resize domain disk to %s" % disksize)
+ domobj.blockResize(diskpath, disksize, flag)
+
+ # Currently, the units of disksize which get from blockInfo is byte.
+ block_info = domobj.blockInfo(diskpath, 0)
+
+ if block_info[0] == disksize * (1 + 1023 * (1 - flag)):
+ logger.info("domain disk resize success")
+ else:
+ logger.error("error: domain disk change into %s" % block_info[0])
+ return 1
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
+ return 1
+
+ return 0
+
\ No newline at end of file
diff --git a/repos/domain/domain_blkinfo.py b/repos/domain/domain_blkinfo.py
index b6051aa..4978c32 100644
--- a/repos/domain/domain_blkinfo.py
+++ b/repos/domain/domain_blkinfo.py
@@ -1,9 +1,6 @@
#!/usr/bin/env python
-# To test "virsh domblkinfo" command
+# To test domain's blockkinfo API
-import os
-import sys
-import re
import commands
import libvirt
@@ -11,10 +8,8 @@ from libvirt import libvirtError
from src import sharedmod
-GET_DOMBLKINFO_MAC = "virsh domblkinfo %s %s | awk '{print $2}'"
GET_CAPACITY = "du -b %s | awk '{print $1}'"
GET_PHYSICAL_K = " du -B K %s | awk '{print $1}'"
-VIRSH_DOMBLKINFO = "virsh domblkinfo %s %s"
required_params = ('guestname', 'blockdev',)
optional_params = {}
@@ -32,8 +27,8 @@ def check_domain_exists(conn, guestname, logger):
""" check if the domain exists, may or may not be active """
guest_names = []
ids = conn.listDomainsID()
- for id in ids:
- obj = conn.lookupByID(id)
+ for domain_id in ids:
+ obj = conn.lookupByID(domain_id)
guest_names.append(obj.name())
guest_names += conn.listDefinedDomains()
@@ -43,18 +38,28 @@ def check_domain_exists(conn, guestname, logger):
return False
else:
return True
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or \
+ state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
def check_block_data(blockdev, blkdata, logger):
""" check data about capacity,allocation,physical """
status, apparent_size = get_output(GET_CAPACITY % blockdev, logger)
if not status:
- if apparent_size == blkdata[0]:
- logger.info("the capacity of '%s' is %s, checking succeeded" % \
- (blockdev, apparent_size))
+ if apparent_size == str(blkdata[0]):
+ logger.info("the capacity of '%s' is %s, checking succeeded"
+ % (blockdev, apparent_size))
else:
- logger.error("apparent-size from 'du' is %s, \n\
- but from 'domblkinfo' is %s, checking failed" % \
- (apparent_size, blkdata[0]))
+ logger.error("apparent-size from 'du' is %s" % apparent_size)
+ logger.error("but from 'domain blockinfo' is %d, checking failed"
+ % blkdata[0])
return 1
else:
return 1
@@ -64,14 +69,15 @@ def check_block_data(blockdev, blkdata, logger):
block_size_b = int(block_size_k[:-1]) * 1024
# Temporarily, we only test the default case, assuming
# Allocation value is equal to Physical value
- if str(block_size_b) == blkdata[1] and str(block_size_b) == blkdata[2]:
- logger.info("the block size of '%s' is %s, same with \n\
- Allocation and Physical value, checking succeeded" % \
- (blockdev, block_size_b))
+ if block_size_b == blkdata[1] and block_size_b == blkdata[2]:
+ logger.info("the block size of '%s' is %s"
+ % (blockdev, block_size_b))
+ logger.info("Allocation and Physical value's checking succeeded")
else:
- logger.error("the block size from 'du' is %s, \n\
- the Allocation value is %s, Physical value is %s, \n\
- checking failed" % (block_size_b, blkdata[1], blkdata[2]))
+ logger.error("the block size from 'du' is %d" % block_size_b)
+ logger.error("the Allocation value is %d, Physical value is %d"
+ % (blkdata[1], blkdata[2]))
+ logger.error("checking failed")
return 1
return 0
@@ -79,7 +85,7 @@ def check_block_data(blockdev, blkdata, logger):
def domain_blkinfo(params):
""" using du command to check the data
- in the output of virsh domblkinfo
+ in the output of API blockinfo
"""
logger = params['logger']
guestname = params.get('guestname')
@@ -93,24 +99,29 @@ def domain_blkinfo(params):
if not check_domain_exists(conn, guestname, logger):
logger.error("need a defined guest")
return 1
-
- logger.info("the output of virsh domblkinfo is:")
- status, output = get_output(VIRSH_DOMBLKINFO % (guestname, blockdev), logger)
- if not status:
- logger.info("\n" + output)
- else:
+
+ domobj = conn.lookupByName(guestname)
+
+ if not check_guest_status(domobj):
+ logger.error("guest is not started.")
return 1
-
- status, data_str = get_output(GET_DOMBLKINFO_MAC % (guestname, blockdev), logger)
- if not status:
- blkdata = data_str.rstrip().split('\n')
- logger.info("capacity,allocation,physical list: %s" % blkdata)
- else:
+
+ try:
+ logger.info("the output of domain blockinfo is:")
+ block_info = domobj.blockInfo(blockdev, 0)
+ logger.info("Capacity : %d " % block_info[0])
+ logger.info("Allocation: %d " % block_info[1])
+ logger.info("Physical : %d " % block_info[2])
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
return 1
-
- if check_block_data(blockdev, blkdata, logger):
- logger.error("checking domblkinfo data FAILED")
+
+ if check_block_data(blockdev, block_info, logger):
+ logger.error("checking domain blockinfo data FAILED")
return 1
else:
- logger.info("checking domblkinfo data SUCCEEDED")
+ logger.info("checking domain blockinfo data SUCCEEDED")
+
return 0
diff --git a/repos/domain/domain_blkio.py b/repos/domain/domain_blkio.py
new file mode 100644
index 0000000..2603113
--- /dev/null
+++ b/repos/domain/domain_blkio.py
@@ -0,0 +1,165 @@
+#!/usr/bin/evn python
+# To test domain blkio parameters
+
+import os
+import time
+import libxml2
+import libvirt
+import commands
+from libvirt import libvirtError
+
+from src import sharedmod
+
+CGROUP_PATH = "/cgroup"
+BLKIO_PATH1 = "%s/blkio/libvirt/qemu/%s"
+BLKIO_PATH2 = "/sys/fs%s/blkio/machine/%s.libvirt-qemu"
+GET_PARTITION = "df -P %s | tail -1 | awk {'print $1'}"
+
+required_params = ('guestname', 'weight',)
+optional_params = {}
+
+def get_output(command, logger):
+ """execute shell command
+ """
+ status, ret = commands.getstatusoutput(command)
+ if status:
+ logger.error("executing "+ "\"" + command + "\"" + " failed")
+ logger.error(ret)
+ return status, ret
+
+def get_device(domobj, logger):
+ """get the disk device which domain image stored in
+ """
+ xml = domobj.XMLDesc(0)
+ doc = libxml2.parseDoc(xml)
+ cont = doc.xpathNewContext()
+ devs = cont.xpathEval("/domain/devices/disk/source/@file")
+ image_file = devs[0].content
+
+ status, output = get_output(GET_PARTITION % image_file, logger)
+ if not status:
+ return output[:-1]
+ else:
+ logger.error("get device error: ")
+ logger.error(GET_PARTITION % image_file)
+ return ""
+
+def check_blkio_paras(domain_blkio_path, domainname, blkio_paras, logger):
+ """check blkio parameters according to cgroup filesystem
+ """
+ logger.info("checking blkio parameters from cgroup")
+ if 'weight' in blkio_paras:
+ expected_weight = blkio_paras['weight']
+ status, output = get_output("cat %s/blkio.weight"
+ % domain_blkio_path, logger)
+ if not status:
+ logger.info("%s/blkio.weight is \"%s\""
+ % (domain_blkio_path, output))
+ else:
+ return 1
+
+ if int(output) == expected_weight:
+ logger.info("the weight matches with cgroup blkio.weight")
+ return 0
+ else:
+ logger.error("the weight mismatches with cgroup blkio.weight")
+ return 1
+
+ if 'device_weight' in blkio_paras:
+ expected_device_weight = blkio_paras['device_weight']
+ status, output = get_output("cat %s/blkio.weight_device"
+ % domain_blkio_path, logger)
+ if not status:
+ logger.info("%s/blkio.weight_device is \"%s\""
+ % (domain_blkio_path, output))
+ else:
+ return 1
+
+ if output.split(' ')[1] == expected_device_weight.split(',')[1]:
+ logger.info("the device_weight matches with cgroup \
+ blkio.weight_device")
+ return 0
+ else:
+ logger.error("the device_weight mismatches with cgroup \
+ blkio.weight_device")
+ return 1
+
+ return 0
+
+def check_guest_status(domobj):
+ """Check guest current status"""
+ state = domobj.info()[0]
+ if state == libvirt.VIR_DOMAIN_SHUTOFF or \
+ state == libvirt.VIR_DOMAIN_SHUTDOWN:
+ # add check function
+ return False
+ else:
+ return True
+
+def domain_blkio(params):
+ """domain blkio parameters test function"""
+ logger = params['logger']
+ guestname = params['guestname']
+ expected_weight = params['weight']
+ flag = 0
+
+ conn = sharedmod.libvirtobj['conn']
+
+ domobj = conn.lookupByName(guestname)
+
+ # Check domain block status
+ if check_guest_status(domobj):
+ pass
+ else:
+ domobj.create()
+ time.sleep(90)
+
+ if os.path.exists(CGROUP_PATH):
+ blkio_path = BLKIO_PATH1 % (CGROUP_PATH, guestname)
+ else:
+ blkio_path = BLKIO_PATH2 % (CGROUP_PATH, guestname)
+
+
+ try:
+ blkio_paras = domobj.blkioParameters(flag)
+
+
+ logger.info("the blkio weight of %s is: %d"
+ % (guestname, blkio_paras['weight']))
+
+ status = check_blkio_paras(blkio_path, guestname, blkio_paras,
+ logger)
+ if status != 0:
+ return 1
+
+ logger.info("start to set param weight to %s" % expected_weight)
+ blkio_paras = {'weight':int(expected_weight)}
+ status = domobj.setBlkioParameters(blkio_paras, flag)
+ if status != 0:
+ return 1
+
+ status = check_blkio_paras(blkio_path, guestname, blkio_paras,
+ logger)
+ if status != 0:
+ return 1
+
+ device = get_device(domobj, logger)
+ device_weight = "%s,%s" % (device, expected_weight)
+ logger.info("start to set param device_weight to %s"
+ % device_weight)
+ blkio_paras = {'device_weight':device_weight}
+ status = domobj.setBlkioParameters(blkio_paras, flag)
+ if status != 0:
+ return 1
+
+ status = check_blkio_paras(blkio_path, guestname, blkio_paras,
+ logger)
+ if status != 0:
+ return 1
+
+ except libvirtError, e:
+ logger.error("API error message: %s, error code is %s"
+ % (e.message, e.get_error_code()))
+ return 1
+
+ return 0
--
1.8.3.1
[libvirt] [PATCH RFC 0/6] Add support for snapshots on gluster.
by Peter Krempa
This series has to be applied on top of the refactoring series sent earlier today.
The first 3 patches are additional fixes that should be ready to be committed. The rest
is in a work-in-progress state, posted to gather comments.
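For context, the RFC patches target external active snapshots whose overlay image
lives on a gluster volume. A hypothetical snapshot XML along the lines this series
enables might look roughly as follows (the source element mirrors libvirt's existing
network disk source syntax; the host and volume names here are made up, and the exact
attribute set accepted by the final patches may differ):

```xml
<!-- hypothetical sketch: external snapshot with the overlay on gluster;
     host name and volume/image path are invented for illustration -->
<domainsnapshot>
  <name>snap1</name>
  <disks>
    <disk name='vda' snapshot='external' type='network'>
      <source protocol='gluster' name='gv0/snap-overlay.qcow2'>
        <host name='gluster.example.com' port='24007'/>
      </source>
    </disk>
  </disks>
</domainsnapshot>
```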
Peter Krempa (6):
qemu: snapshot: Touch up error message
qemu: snapshot: Add functions similar to disk source pool translation
qemu: snapshots: Declare supported and unsupported snapshot configs
RFC: snapshot: Add support for specifying snapshot disk backing type
RFC: conf: snapshot: Parse more snapshot information
RFC: qemu: snapshot: Add support for external active snapshots on
gluster
src/conf/snapshot_conf.c | 21 ++-
src/conf/snapshot_conf.h | 15 +-
src/qemu/qemu_command.c | 2 +-
src/qemu/qemu_command.h | 9 +
src/qemu/qemu_conf.c | 23 +++
src/qemu/qemu_conf.h | 6 +
src/qemu/qemu_driver.c | 426 ++++++++++++++++++++++++++++++++++++++++-------
7 files changed, 434 insertions(+), 68 deletions(-)
--
1.8.4.3
[libvirt] [PATCH 00/22] Misc refactors and cleanups leading to gluster snapshot support
by Peter Krempa
Peter Krempa (22):
conf: Implement virStorageVolType enum helper functions
test: Implement fake storage pool driver in qemuxml2argv test
qemuxml2argv: Add test to verify correct usage of disk type="volume"
qemuxml2argv: Add test for disk type='volume' with iSCSI pools
qemu: Refactor qemuTranslatePool source
qemu: Split out formatting of network disk source URI
qemu: Simplify call pattern of qemuBuildDriveURIString
qemu: Use qemuBuildNetworkDriveURI to handle http/ftp and friends
qemu: Migrate sheepdog source generation into common function
qemu: Split out NBD command generation
qemu: Unify formatting of RBD sources
qemu: Refactor disk source string formatting
conf: Support disk source formatting without needing a
virDomainDiskDefPtr
conf: Clean up virDomainDiskSourceDefFormatInternal
conf: Split out seclabel formating code for disk source
conf: Export disk source formatter and parser
snapshot: conf: Use common parsing and formatting functions for source
snapshot: conf: Fix NULL dereference when <driver> element is empty
conf: Add functions to copy and free network disk source definitions
qemu: snapshot: Detect internal snapshots also for sheepdog and RBD
conf: Add helper to clear disk source authentication struct
qemu: Clear old translated pool source
src/conf/domain_conf.c | 261 ++++++---
src/conf/domain_conf.h | 25 +
src/conf/snapshot_conf.c | 56 +-
src/conf/snapshot_conf.h | 1 +
src/conf/storage_conf.c | 4 +
src/conf/storage_conf.h | 2 +
src/libvirt_private.syms | 6 +
src/qemu/qemu_command.c | 650 +++++++++++----------
src/qemu/qemu_command.h | 6 +
src/qemu/qemu_conf.c | 129 ++--
src/qemu/qemu_conf.h | 2 +
src/qemu/qemu_driver.c | 3 +-
.../qemuxml2argv-disk-source-pool-mode.args | 10 +
.../qemuxml2argv-disk-source-pool-mode.xml | 4 +-
.../qemuxml2argv-disk-source-pool.args | 8 +
.../qemuxml2argv-disk-source-pool.xml | 2 +-
tests/qemuxml2argvtest.c | 166 ++++++
17 files changed, 874 insertions(+), 461 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-source-pool-mode.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-source-pool.args
--
1.8.4.3