[libvirt] [PATCH] Fix parted sector size assumption
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
Parted does not report disk extents in 512 byte units, but
rather in units of the disk's logical sector size, which on
modern drives may be 4k.
* src/storage/parthelper.c: Remove hardcoded 512 byte sector
size
---
src/storage/parthelper.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/storage/parthelper.c b/src/storage/parthelper.c
index acc9171..964aa78 100644
--- a/src/storage/parthelper.c
+++ b/src/storage/parthelper.c
@@ -157,17 +157,17 @@ int main(int argc, char **argv)
part->num, '\0',
type, '\0',
content, '\0',
- part->geom.start * 512llu, '\0',
- (part->geom.end + 1 ) * 512llu, '\0',
- part->geom.length * 512llu, '\0');
+ part->geom.start * dev->sector_size, '\0',
+ (part->geom.end + 1 ) * dev->sector_size, '\0',
+ part->geom.length * dev->sector_size, '\0');
} else {
printf("%s%c%s%c%s%c%llu%c%llu%c%llu%c",
"-", '\0',
type, '\0',
content, '\0',
- part->geom.start * 512llu, '\0',
- (part->geom.end + 1 ) * 512llu, '\0',
- part->geom.length * 512llu, '\0');
+ part->geom.start * dev->sector_size, '\0',
+ (part->geom.end + 1 ) * dev->sector_size, '\0',
+ part->geom.length * dev->sector_size, '\0');
}
part = ped_disk_next_partition(disk, part);
}
--
1.7.6
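The arithmetic behind the fix is worth spelling out. A minimal Python sketch — the geometry dict and sector_size argument are illustrative stand-ins for libparted's PedGeometry and PedDevice fields, not parthelper's actual code:

```python
# Sketch of the corrected offset calculation: byte values must be
# derived from the device's logical sector size, not a hardcoded 512.

def partition_bytes(geom, sector_size):
    """Convert a partition geometry given in sectors to byte values."""
    return {
        "start": geom["start"] * sector_size,
        "end": (geom["end"] + 1) * sector_size,
        "length": geom["length"] * sector_size,
    }

# The same sector geometry on a 512-byte and a 4096-byte sector disk:
geom = {"start": 2048, "end": 206847, "length": 204800}
print(partition_bytes(geom, 512)["length"])   # 104857600 (100 MiB)
print(partition_bytes(geom, 4096)["length"])  # 838860800 (800 MiB)
```

On a 4k-sector drive the hardcoded 512 would have under-reported every offset and size by a factor of eight.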
Re: [libvirt] The design choice for how to enable block I/O throttling function in libvirt
by Stefan Hajnoczi
On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke <agl(a)us.ibm.com> wrote:
> On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
>> On Tue, Aug 30, 2011 at 3:55 AM, Zhi Yong Wu <zwu.kernel(a)gmail.com> wrote:
>> > I am trying to enable the block I/O throttling function in libvirt,
>> > but I have run into some design questions and am not sure whether we
>> > should extend blkiotune to support block I/O throttling or introduce
>> > a new libvirt command "blkiothrottle" to cover it. If you
>> > have a better idea, please don't hesitate to drop your comments.
>>
>> A little bit of context: this discussion is about adding libvirt
>> support for QEMU disk I/O throttling.
>
> Thanks for the additional context Stefan.
>
>> Today libvirt supports the cgroups blkio-controller, which handles
>> proportional shares and throughput/iops limits on host block devices.
>> blkio-controller does not support network file systems (NFS) or other
>> QEMU remote block drivers (curl, Ceph/rbd, sheepdog) since they are
>> not host block devices. QEMU I/O throttling works with all types of
>> -drive and therefore complements blkio-controller.
>
> The first question that pops into my mind is: Should a user need to understand
> when to use the cgroups blkio-controller vs. the QEMU I/O throttling method? In
> my opinion, it would be nice if libvirt had a single interface for block I/O
> throttling and libvirt would decide which mechanism to use based on the type of
> device and the specific limits that need to be set.
Yes, I agree it would be simplest to pick the right mechanism,
depending on the type of throttling the user wants. More below.
>> I/O throttling can be applied independently to each -drive attached to
>> a guest and supports throughput/iops limits. For more information on
>> this QEMU feature and a comparison with blkio-controller, see Ryan
>> Harper's KVM Forum 2011 presentation:
>
>> http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-i...
>
> From the presentation, it seems that both the cgroups method and the qemu method
> offer comparable control (assuming a block device), so it might be possible to apply
> either method from the same API in a transparent manner. Am I correct, or are we
> suggesting that the Qemu throttling approach should always be used for Qemu
> domains?
QEMU I/O throttling does not provide a proportional share mechanism.
So you cannot assign weights to VMs and let them receive a fraction of
the available disk time. That is only supported by cgroups
blkio-controller because it requires a global view which QEMU does not
have.
So I think the two are complementary:
If proportional share should be used on a host block device, use
cgroups blkio-controller.
Otherwise use QEMU I/O throttling.
Stefan
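The selection logic Stefan describes can be sketched as a small decision function — hypothetical names, not a libvirt API:

```python
# Proportional weights require the cgroups blkio-controller, which only
# covers host block devices; throughput/iops caps on any -drive backend
# can fall to QEMU I/O throttling. Illustrative sketch only.

def pick_throttle_mechanism(wants_proportional_share, is_host_block_device):
    if wants_proportional_share:
        if not is_host_block_device:
            raise ValueError("proportional share needs cgroups, "
                             "which only covers host block devices")
        return "cgroups-blkio"
    return "qemu-io-throttling"

print(pick_throttle_mechanism(True, True))    # cgroups-blkio
print(pick_throttle_mechanism(False, False))  # qemu-io-throttling
```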
[libvirt] [test-API][PATCH] Fix a typo which blocks windows cdrom install
by Wayne Sun
---
repos/domain/install_windows_cdrom.py | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/repos/domain/install_windows_cdrom.py b/repos/domain/install_windows_cdrom.py
index 9cf9e3b..b8333e2 100644
--- a/repos/domain/install_windows_cdrom.py
+++ b/repos/domain/install_windows_cdrom.py
@@ -296,7 +296,7 @@ def install_windows_cdrom(params):
logger.debug("the uri to connect is %s" % uri)
if params.has_key('imagepath') and not params.has_key('volumepath'):
- imgfullpath = os..path.join(params.get('imagepath'), guestname)
+ imgfullpath = os.path.join(params.get('imagepath'), guestname)
elif not params.has_key('imagepath') and not params.has_key('volumepath'):
if hypervisor == 'xen':
imgfullpath = os.path.join('/var/lib/xen/images', guestname)
--
1.7.1
[libvirt] [test-API][PATCH v3] Add ownership_test.py test case
by Wayne Sun
* Save a domain to a file owned by qemu:qemu, then check the ownership
of the file after the save and restore operations. Depending on whether
use_nfs is enabled, the saved file is on a local dir or on a mounted
root_squash nfs dir.
---
repos/domain/ownership_test.py | 315 ++++++++++++++++++++++++++++++++++++++++
1 files changed, 315 insertions(+), 0 deletions(-)
create mode 100644 repos/domain/ownership_test.py
diff --git a/repos/domain/ownership_test.py b/repos/domain/ownership_test.py
new file mode 100644
index 0000000..1957428
--- /dev/null
+++ b/repos/domain/ownership_test.py
@@ -0,0 +1,315 @@
+#!/usr/bin/env python
+"""Setting the dynamic_ownership in /etc/libvirt/qemu.conf,
+ check the ownership of saved domain file. Test could be on
+ local or root_squash nfs. The default owner of the saved
+ domain file is qemu:qemu in this case.
+ domain:ownership_test
+ guestname
+ #GUESTNAME#
+ dynamic_ownership
+ 0|1
+ use_nfs
+ enable|disable
+
+ use_nfs is a flag that decides whether to use root_squash nfs or not
+"""
+
+__author__ = 'Wayne Sun: gsun(a)redhat.com'
+__date__ = 'Mon Jul 25, 2011'
+__version__ = '0.1.0'
+__credits__ = 'Copyright (C) 2011 Red Hat, Inc.'
+__all__ = ['ownership_test']
+
+import os
+import re
+import sys
+import commands
+
+QEMU_CONF = "/etc/libvirt/qemu.conf"
+SAVE_FILE = "/mnt/test.save"
+TEMP_FILE = "/tmp/test.save"
+
+from utils.Python import utils
+
+def append_path(path):
+ """Append root path of package"""
+ if path in sys.path:
+ pass
+ else:
+ sys.path.append(path)
+
+from lib import connectAPI
+from lib import domainAPI
+from utils.Python import utils
+from exception import LibvirtAPI
+
+pwd = os.getcwd()
+result = re.search('(.*)libvirt-test-API', pwd)
+append_path(result.group(0))
+
+def return_close(conn, logger, ret):
+ conn.close()
+ logger.info("closed hypervisor connection")
+ return ret
+
+def check_params(params):
+ """Verify inputing parameter dictionary"""
+ logger = params['logger']
+ keys = ['guestname', 'dynamic_ownership', 'use_nfs']
+ for key in keys:
+ if key not in params:
+ logger.error("%s is required" %key)
+ return 1
+ return 0
+
+def nfs_setup(util, logger):
+ """setup nfs on localhost
+ """
+ logger.info("set nfs service")
+ cmd = "echo /tmp *\(rw,root_squash\) > /etc/exports"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to config nfs export")
+ return 1
+
+ logger.info("start nfs service")
+ cmd = "service nfs start"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to start nfs service")
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ return 0
+
+def chown_file(util, filepath, logger):
+ """touch a file and setting the chown
+ """
+ if os.path.exists(filepath):
+ os.remove(filepath)
+
+ touch_cmd = "touch %s" % filepath
+ logger.info(touch_cmd)
+ ret, out = util.exec_cmd(touch_cmd, shell=True)
+ if ret:
+ logger.error("failed to touch a new file")
+ logger.error(out[0])
+ return 1
+
+ logger.info("set chown of %s as 107:107" % filepath)
+ chown_cmd = "chown 107:107 %s" % filepath
+ ret, out = util.exec_cmd(chown_cmd, shell=True)
+ if ret:
+ logger.error("failed to set the ownership of %s" % filepath)
+ return 1
+
+ logger.info("set %s mode as 664" % filepath)
+ cmd = "chmod 664 %s" % filepath
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("failed to set the mode of %s" % filepath)
+ return 1
+
+ return 0
+
+def prepare_env(util, guestname, dynamic_ownership, use_nfs, logger):
+ """configure dynamic_ownership in /etc/libvirt/qemu.conf,
+ set the ownership of the file to save
+ """
+ logger.info("set the dynamic ownership in %s as %s" % \
+ (QEMU_CONF, dynamic_ownership))
+ set_cmd = "echo dynamic_ownership = %s >> %s" % (dynamic_ownership, QEMU_CONF)
+ ret, out = util.exec_cmd(set_cmd, shell=True)
+ if ret:
+ logger.error("failed to set dynamic ownership")
+ return 1
+
+ logger.info("restart libvirtd")
+ restart_cmd = "service libvirtd restart"
+ ret, out = util.exec_cmd(restart_cmd, shell=True)
+ if ret:
+ logger.error("failed to restart libvirtd")
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ if use_nfs == 'enable':
+ filepath = TEMP_FILE
+ elif use_nfs == 'disable':
+ filepath = SAVE_FILE
+
+ ret = chown_file(util, filepath, logger)
+ if ret:
+ return 1
+
+ if use_nfs == 'enable':
+ ret = nfs_setup(util, logger)
+ if ret:
+ return 1
+
+ cmd = "setsebool virt_use_nfs 1"
+ logger.info(cmd)
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("Failed to setsebool virt_use_nfs")
+ return 1
+
+ logger.info("mount the nfs path to /mnt")
+ mount_cmd = "mount -o vers=3 127.0.0.1:/tmp /mnt"
+ ret, out = util.exec_cmd(mount_cmd, shell=True)
+ if ret:
+ logger.error("Failed to mount the nfs path")
+ for i in range(len(out)):
+ logger.info(out[i])
+ return 1
+
+ return 0
+
+def ownership_get(logger):
+ """check the ownership of file"""
+
+ statinfo = os.stat(SAVE_FILE)
+ uid = statinfo.st_uid
+ gid = statinfo.st_gid
+
+ logger.info("the uid and gid of %s is %s:%s" %(SAVE_FILE, uid, gid))
+
+ return 0, uid, gid
+
+def ownership_test(params):
+ """Save a domain to a file, check the ownership of
+ the file after save and restore
+ """
+ # Initiate and check parameters
+ params_check_result = check_params(params)
+ if params_check_result:
+ return 1
+
+ logger = params['logger']
+ guestname = params['guestname']
+ dynamic_ownership = params['dynamic_ownership']
+ use_nfs = params['use_nfs']
+ test_result = False
+
+ util = utils.Utils()
+
+ # set env
+ logger.info("prepare the environment")
+ ret = prepare_env(util, guestname, dynamic_ownership, use_nfs, logger)
+ if ret:
+ logger.error("failed to prepare the environment")
+ return 1
+
+ # Connect to local hypervisor connection URI
+ uri = util.get_uri('127.0.0.1')
+ conn = connectAPI.ConnectAPI()
+ virconn = conn.open(uri)
+
+ # save domain to the file
+ logger.info("save the domain to the file")
+ domobj = domainAPI.DomainAPI(virconn)
+ try:
+ domobj.save(guestname, SAVE_FILE)
+ logger.info("Success save domain to file")
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+ logger.error("Error: fail to save %s domain" %guestname)
+ return return_close(conn, logger, 1)
+
+ logger.info("check the ownership of %s after save" % SAVE_FILE)
+ ret, uid, gid = ownership_get(logger)
+ if use_nfs == 'enable':
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ elif use_nfs == 'disable':
+ if dynamic_ownership == '1':
+ if uid == 0 and gid == 0:
+ logger.info("As expected, the chown changed to root:root")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ elif dynamic_ownership == '0':
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, it's not as expected" % \
+ (SAVE_FILE, uid, gid))
+ return return_close(conn, logger, 1)
+ else:
+ logger.error("wrong dynamic_ownership value %s" % dynamic_ownership)
+ return return_close(conn, logger, 1)
+
+
+ # restore domain from file
+ logger.info("restore the domain from the file")
+ try:
+ domobj.restore(guestname, SAVE_FILE)
+ logger.info("check the ownership of %s after restore" % SAVE_FILE)
+ ret, uid, gid = ownership_get(logger)
+ if uid == 107 and gid == 107:
+ logger.info("As expected, the chown not change.")
+ test_result = True
+ else:
+ logger.error("The chown of %s is %s:%s, not change back as expected" % \
+ (SAVE_FILE, uid, gid))
+ test_result = False
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+ logger.error("Error: fail to restore %s domain" %guestname)
+ test_result = False
+
+ if test_result:
+ return return_close(conn, logger, 0)
+ else:
+ return return_close(conn, logger, 1)
+
+def ownership_test_clean(params):
+ """clean testing environment"""
+ logger = params['logger']
+ use_nfs = params['use_nfs']
+
+ util = utils.Utils()
+
+ if use_nfs == 'enable':
+ logger.info("umount the nfs path")
+ umount_cmd = "umount /mnt"
+ ret, out = util.exec_cmd(umount_cmd, shell=True)
+ if ret:
+ logger.error("Failed to mount the nfs path")
+ for i in range(len(out)):
+ logger.error(out[i])
+
+ logger.info("stop nfs service")
+ cmd = "service nfs stop"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret:
+ logger.error("Failed to stop nfs service")
+ for i in range(len(out)):
+ logger.error(out[i])
+
+ logger.info("clear the exports file")
+ cmd = ">/etc/exports"
+ if ret:
+ logger.error("Failed to clear exports file")
+
+ filepath = TEMP_FILE
+ elif use_nfs == 'disable':
+ filepath = SAVE_FILE
+
+ if os.path.exists(filepath):
+ logger.info("remove dump file from save %s" % filepath)
+ os.remove(filepath)
+
--
1.7.1
[libvirt] [test-API][PATCH 1/3] Remove API wrappers that have been flagged as internal use
by Guannan Ren
*lib/connectAPI.py
---
lib/connectAPI.py | 108 -----------------------------------------------------
1 files changed, 0 insertions(+), 108 deletions(-)
diff --git a/lib/connectAPI.py b/lib/connectAPI.py
index cfa4fea..9f2b728 100644
--- a/lib/connectAPI.py
+++ b/lib/connectAPI.py
@@ -248,114 +248,6 @@ class ConnectAPI(object):
code = e.get_error_code()
raise exception.LibvirtAPI(message, code)
- def dispatch_domain_event_callback(self, dom, event, detail):
- try:
- return self.conn.dispatchDomainEventCallbacks(dom, event, detail)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_generic_callback(self, dom, cbData):
- try:
- return self.conn.dispatchDomainEventGenericCallback(dom, cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_graphics_callback(self,
- dom,
- phase,
- localAddr,
- remoteAddr,
- authScheme,
- subject,
- cbData):
- try:
- return self.conn.dispatchDomainEventGraphicsCallback(dom,
- phase,
- localAddr,
- remoteAddr,
- authScheme,
- subject,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_IOError_callback(self,
- dom,
- srcPath,
- devAlias,
- action,
- cbData):
- try:
- return self.conn.dispatchDomainEventIOErrorCallback(dom,
- srcPath,
- devAlias,
- action,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_IOError_reason_callback(self,
- dom,
- srcPath,
- devAlias,
- action,
- reason,
- cbData):
- try:
- return self.conn.dispatchDomainEventIOErrorReasonCallback(dom,
- srcPath,
- devAlias,
- action,
- reason,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_lifecycle_callback(self,
- dom,
- event,
- detail,
- cbData):
- try:
- return self.conn.dispatchDomainEventLifecycleCallback(dom,
- event,
- detail,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_RTC_change_callback(self, dom, offset, cbData):
- try:
- return self.conn.dispatchDomainEventRTCChangeCallback(dom,
- offset,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
- def dispatch_domain_event_watchdog_callback(self, dom, action, cbData):
- try:
- return self.conn.dispatchDomainEventWatchdogCallback(dom,
- action,
- cbData)
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
def domain_event_deregister(self, cb):
try:
return self.conn.domainEventDeregister(cb)
--
1.7.1
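All of the removed wrappers repeated the same catch-and-translate boilerplate. That pattern can be captured once with a decorator; this sketch uses stand-in exception classes rather than the real libvirt and test-API types:

```python
import functools

class BackendError(Exception):          # stands in for libvirt.libvirtError
    def get_error_message(self): return "boom"
    def get_error_code(self): return 42

class LibvirtAPI(Exception):            # stands in for exception.LibvirtAPI
    def __init__(self, message, code):
        super().__init__(message)
        self.message, self.code = message, code

def translate_errors(func):
    """Re-raise backend errors as the framework's exception type."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BackendError as e:
            raise LibvirtAPI(e.get_error_message(), e.get_error_code())
    return wrapper

@translate_errors
def dispatch(dom):
    raise BackendError()

try:
    dispatch(None)
except LibvirtAPI as e:
    print(e.code)  # 42
```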
[libvirt] [PATCH] Remove bogus virSecurityManagerSetProcessFDLabel method
by Daniel P. Berrange
The virSecurityManagerSetProcessFDLabel method was introduced
after a misunderstanding in a conversation about SELinux
socket labelling. The virSecurityManagerSetSocketLabel method
should have been used for all such scenarios.
* src/security/security_apparmor.c, src/security/security_dac.c,
src/security/security_driver.h, src/security/security_manager.c,
src/security/security_manager.h, src/security/security_selinux.c,
src/security/security_stack.c: Remove SetProcessFDLabel driver
---
src/security/security_apparmor.c | 29 -----------------------------
src/security/security_dac.c | 9 ---------
src/security/security_driver.h | 4 ----
src/security/security_manager.c | 11 -----------
src/security/security_manager.h | 3 ---
src/security/security_selinux.c | 14 --------------
src/security/security_stack.c | 18 ------------------
7 files changed, 0 insertions(+), 88 deletions(-)
diff --git a/src/security/security_apparmor.c b/src/security/security_apparmor.c
index dbd1290..299dcc6 100644
--- a/src/security/security_apparmor.c
+++ b/src/security/security_apparmor.c
@@ -799,34 +799,6 @@ AppArmorSetImageFDLabel(virSecurityManagerPtr mgr,
return reload_profile(mgr, vm, fd_path, true);
}
-static int
-AppArmorSetProcessFDLabel(virSecurityManagerPtr mgr,
- virDomainObjPtr vm,
- int fd)
-{
- int rc = -1;
- char *proc = NULL;
- char *fd_path = NULL;
-
- const virSecurityLabelDefPtr secdef = &vm->def->seclabel;
-
- if (secdef->imagelabel == NULL)
- return 0;
-
- if (virAsprintf(&proc, "/proc/self/fd/%d", fd) == -1) {
- virReportOOMError();
- return rc;
- }
-
- if (virFileResolveLink(proc, &fd_path) < 0) {
- virSecurityReportError(VIR_ERR_INTERNAL_ERROR,
- "%s", _("could not find path for descriptor"));
- return rc;
- }
-
- return reload_profile(mgr, vm, fd_path, true);
-}
-
virSecurityDriver virAppArmorSecurityDriver = {
0,
SECURITY_APPARMOR_NAME,
@@ -863,5 +835,4 @@ virSecurityDriver virAppArmorSecurityDriver = {
AppArmorRestoreSavedStateLabel,
AppArmorSetImageFDLabel,
- AppArmorSetProcessFDLabel,
};
diff --git a/src/security/security_dac.c b/src/security/security_dac.c
index e5465fc..af02236 100644
--- a/src/security/security_dac.c
+++ b/src/security/security_dac.c
@@ -697,14 +697,6 @@ virSecurityDACSetImageFDLabel(virSecurityManagerPtr mgr ATTRIBUTE_UNUSED,
return 0;
}
-static int
-virSecurityDACSetProcessFDLabel(virSecurityManagerPtr mgr ATTRIBUTE_UNUSED,
- virDomainObjPtr vm ATTRIBUTE_UNUSED,
- int fd ATTRIBUTE_UNUSED)
-{
- return 0;
-}
-
virSecurityDriver virSecurityDriverDAC = {
sizeof(virSecurityDACData),
@@ -743,5 +735,4 @@ virSecurityDriver virSecurityDriverDAC = {
virSecurityDACRestoreSavedStateLabel,
virSecurityDACSetImageFDLabel,
- virSecurityDACSetProcessFDLabel,
};
diff --git a/src/security/security_driver.h b/src/security/security_driver.h
index 94f27f8..aea90b0 100644
--- a/src/security/security_driver.h
+++ b/src/security/security_driver.h
@@ -84,9 +84,6 @@ typedef int (*virSecurityDomainSecurityVerify) (virSecurityManagerPtr mgr,
typedef int (*virSecurityDomainSetImageFDLabel) (virSecurityManagerPtr mgr,
virDomainObjPtr vm,
int fd);
-typedef int (*virSecurityDomainSetProcessFDLabel) (virSecurityManagerPtr mgr,
- virDomainObjPtr vm,
- int fd);
struct _virSecurityDriver {
size_t privateDataLen;
@@ -124,7 +121,6 @@ struct _virSecurityDriver {
virSecurityDomainRestoreSavedStateLabel domainRestoreSavedStateLabel;
virSecurityDomainSetImageFDLabel domainSetSecurityImageFDLabel;
- virSecurityDomainSetProcessFDLabel domainSetSecurityProcessFDLabel;
};
virSecurityDriverPtr virSecurityDriverLookup(const char *name);
diff --git a/src/security/security_manager.c b/src/security/security_manager.c
index b2fd0d0..cae9b83 100644
--- a/src/security/security_manager.c
+++ b/src/security/security_manager.c
@@ -346,14 +346,3 @@ int virSecurityManagerSetImageFDLabel(virSecurityManagerPtr mgr,
virSecurityReportError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
return -1;
}
-
-int virSecurityManagerSetProcessFDLabel(virSecurityManagerPtr mgr,
- virDomainObjPtr vm,
- int fd)
-{
- if (mgr->drv->domainSetSecurityProcessFDLabel)
- return mgr->drv->domainSetSecurityProcessFDLabel(mgr, vm, fd);
-
- virSecurityReportError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
- return -1;
-}
diff --git a/src/security/security_manager.h b/src/security/security_manager.h
index 38342c2..12cd498 100644
--- a/src/security/security_manager.h
+++ b/src/security/security_manager.h
@@ -96,8 +96,5 @@ int virSecurityManagerVerify(virSecurityManagerPtr mgr,
int virSecurityManagerSetImageFDLabel(virSecurityManagerPtr mgr,
virDomainObjPtr vm,
int fd);
-int virSecurityManagerSetProcessFDLabel(virSecurityManagerPtr mgr,
- virDomainObjPtr vm,
- int fd);
#endif /* VIR_SECURITY_MANAGER_H__ */
diff --git a/src/security/security_selinux.c b/src/security/security_selinux.c
index cddbed5..ca54f9b 100644
--- a/src/security/security_selinux.c
+++ b/src/security/security_selinux.c
@@ -1321,19 +1321,6 @@ SELinuxSetImageFDLabel(virSecurityManagerPtr mgr ATTRIBUTE_UNUSED,
return SELinuxFSetFilecon(fd, secdef->imagelabel);
}
-static int
-SELinuxSetProcessFDLabel(virSecurityManagerPtr mgr ATTRIBUTE_UNUSED,
- virDomainObjPtr vm,
- int fd)
-{
- const virSecurityLabelDefPtr secdef = &vm->def->seclabel;
-
- if (secdef->label == NULL)
- return 0;
-
- return SELinuxFSetFilecon(fd, secdef->label);
-}
-
virSecurityDriver virSecurityDriverSELinux = {
0,
SECURITY_SELINUX_NAME,
@@ -1370,5 +1357,4 @@ virSecurityDriver virSecurityDriverSELinux = {
SELinuxRestoreSavedStateLabel,
SELinuxSetImageFDLabel,
- SELinuxSetProcessFDLabel,
};
diff --git a/src/security/security_stack.c b/src/security/security_stack.c
index f263f5b..3f601c1 100644
--- a/src/security/security_stack.c
+++ b/src/security/security_stack.c
@@ -402,23 +402,6 @@ virSecurityStackSetImageFDLabel(virSecurityManagerPtr mgr,
}
-static int
-virSecurityStackSetProcessFDLabel(virSecurityManagerPtr mgr,
- virDomainObjPtr vm,
- int fd)
-{
- virSecurityStackDataPtr priv = virSecurityManagerGetPrivateData(mgr);
- int rc = 0;
-
- if (virSecurityManagerSetProcessFDLabel(priv->secondary, vm, fd) < 0)
- rc = -1;
- if (virSecurityManagerSetProcessFDLabel(priv->primary, vm, fd) < 0)
- rc = -1;
-
- return rc;
-}
-
-
virSecurityDriver virSecurityDriverStack = {
sizeof(virSecurityStackData),
"stack",
@@ -455,5 +438,4 @@ virSecurityDriver virSecurityDriverStack = {
virSecurityStackRestoreSavedStateLabel,
virSecurityStackSetImageFDLabel,
- virSecurityStackSetProcessFDLabel,
};
--
1.7.4.4
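For reference, the removed AppArmor implementation recovered the FD's filesystem path via /proc/self/fd before reloading the profile. That lookup is simple to sketch on Linux — this shows the mechanism only, not libvirt's code:

```python
import os
import tempfile

# Resolve a file descriptor back to the path it refers to, the way the
# removed AppArmorSetProcessFDLabel did with virFileResolveLink().
# Linux-specific: relies on the /proc/self/fd symlinks.
def fd_to_path(fd):
    return os.readlink("/proc/self/fd/%d" % fd)

with tempfile.NamedTemporaryFile() as f:
    resolved = fd_to_path(f.fileno())
    print(os.path.basename(resolved) == os.path.basename(f.name))  # True
```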
[libvirt] [PATCH] snapshot: forbid snapshot on autodestroy domain
by Eric Blake
There is no reason to forbid pausing an autodestroy domain
(not to mention that 'virsh start --paused --autodestroy'
succeeds in creating a paused autodestroy domain).
Meanwhile, qemu was failing to enforce the API documentation that
autodestroy domains cannot be saved. And while the original
documentation only mentioned save/restore, snapshots are another
form of saving that are close enough in semantics as to make no
sense on one-shot domains.
* src/qemu/qemu_driver.c (qemudDomainSuspend): Drop bogus check.
(qemuDomainSaveInternal, qemuDomainSnapshotCreateXML): Forbid
saves of autodestroy domains.
* src/libvirt.c (virDomainCreateWithFlags, virDomainCreateXML):
Document snapshot interaction.
---
Sending this one separately for v1, but I'll insert it into the
front of my snapshot series (before 1/43) when I post v4 of that.
src/libvirt.c | 4 ++--
src/qemu/qemu_driver.c | 18 ++++++++++++------
2 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index 65a099b..711580e 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -1822,7 +1822,7 @@ virDomainGetConnect (virDomainPtr dom)
* object is finally released. This will also happen if the
* client application crashes / loses its connection to the
* libvirtd daemon. Any domains marked for auto destroy will
- * block attempts at migration or save-to-file
+ * block attempts at migration, save-to-file, or snapshots.
*
* Returns a new domain object or NULL in case of failure
*/
@@ -7073,7 +7073,7 @@ error:
* object is finally released. This will also happen if the
* client application crashes / loses its connection to the
* libvirtd daemon. Any domains marked for auto destroy will
- * block attempts at migration or save-to-file
+ * block attempts at migration, save-to-file, or snapshots.
*
* If the VIR_DOMAIN_START_BYPASS_CACHE flag is set, and there is a
* managed save file for this domain (created by virDomainManagedSave()),
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f21122d..737eec8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1360,12 +1360,6 @@ static int qemudDomainSuspend(virDomainPtr dom) {
goto cleanup;
}
- if (qemuProcessAutoDestroyActive(driver, vm)) {
- qemuReportError(VIR_ERR_OPERATION_INVALID,
- "%s", _("domain is marked for auto destroy"));
- goto cleanup;
- }
-
priv = vm->privateData;
if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) {
@@ -2225,6 +2219,12 @@ qemuDomainSaveInternal(struct qemud_driver *driver, virDomainPtr dom,
int directFlag = 0;
virFileDirectFdPtr directFd = NULL;
+ if (qemuProcessAutoDestroyActive(driver, vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is marked for auto destroy"));
+ return -1;
+ }
+
memset(&header, 0, sizeof(header));
memcpy(header.magic, QEMUD_SAVE_MAGIC, sizeof(header.magic));
header.version = QEMUD_SAVE_VERSION;
@@ -8471,6 +8471,12 @@ static virDomainSnapshotPtr qemuDomainSnapshotCreateXML(virDomainPtr domain,
goto cleanup;
}
+ if (qemuProcessAutoDestroyActive(driver, vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is marked for auto destroy"));
+ goto cleanup;
+ }
+
/* in a perfect world, we would allow qemu to tell us this. The problem
* is that qemu only does this check device-by-device; so if you had a
* domain that booted from a large qcow2 device, but had a secondary raw
--
1.7.4.4
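The guard the patch adds to both the save and snapshot paths follows a common shape: reject the operation early when the domain is marked for auto destroy, since such a domain dies with its client connection anyway. A hedged sketch with hypothetical names:

```python
class OperationInvalid(Exception):
    pass

def check_not_autodestroy(domain):
    """Refuse state-persisting operations on auto-destroy domains."""
    if domain.get("autodestroy"):
        raise OperationInvalid("domain is marked for auto destroy")

def save_domain(domain, path):
    check_not_autodestroy(domain)
    return "saved %s to %s" % (domain["name"], path)

print(save_domain({"name": "vm1", "autodestroy": False}, "/tmp/vm1.save"))
try:
    save_domain({"name": "vm2", "autodestroy": True}, "/tmp/vm2.save")
except OperationInvalid as e:
    print(e)  # domain is marked for auto destroy
```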
[libvirt] [PATCH] Fix sanlock socket security labelling
by Daniel P. Berrange
It is not possible to change the label of a TCP socket once it
has been opened. When creating a TCP socket, care must be taken
to ensure the socket creation label is set and then cleared.
Remove the bogus call to virSecurityManagerSetProcessFDLabel
from the lock driver guest setup code and instead make use of
virSecurityManagerSetSocketLabel
---
src/qemu/qemu_process.c | 19 ++++++++++++-------
1 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 58b4d36..c22974f 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2081,15 +2081,26 @@ static int qemuProcessHook(void *data)
h->vm->pid = getpid();
VIR_DEBUG("Obtaining domain lock");
+ /*
+ * Since we're going to leak the returned FD to QEMU,
+ * we need to make sure it gets a sensible label.
+ * This mildly sucks, because there could be other
+ * sockets the lock driver opens that we don't want
+ * labelled. So far we're ok though.
+ */
+ if (virSecurityManagerSetSocketLabel(h->driver->securityManager, h->vm) < 0)
+ goto cleanup;
if (virDomainLockProcessStart(h->driver->lockManager,
h->vm,
/* QEMU is always pased initially */
true,
&fd) < 0)
goto cleanup;
+ if (virSecurityManagerClearSocketLabel(h->driver->securityManager, h->vm) < 0)
+ goto cleanup;
if (qemuProcessLimits(h->driver) < 0)
- return -1;
+ goto cleanup;
/* This must take place before exec(), so that all QEMU
* memory allocation is on the correct NUMA node
@@ -2111,12 +2122,6 @@ static int qemuProcessHook(void *data)
if (virSecurityManagerSetProcessLabel(h->driver->securityManager, h->vm) < 0)
goto cleanup;
- if (fd != -1) {
- VIR_DEBUG("Setting up lock manager FD labelling");
- if (virSecurityManagerSetProcessFDLabel(h->driver->securityManager, h->vm, fd) < 0)
- goto cleanup;
- }
-
ret = 0;
cleanup:
--
1.7.4.4
[libvirt] [PATCH] Fix incorrect path length check in sanlock lockspace setup
by Daniel P. Berrange
The code for creating a sanlock lockspace accidentally used
SANLK_NAME_LEN instead of SANLK_PATH_LEN for a size check.
This meant disk paths were limited to 48 bytes !
* src/locking/lock_driver_sanlock.c: Fix disk path length
check
---
src/locking/lock_driver_sanlock.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/locking/lock_driver_sanlock.c b/src/locking/lock_driver_sanlock.c
index b85f1fa..b93fe01 100644
--- a/src/locking/lock_driver_sanlock.c
+++ b/src/locking/lock_driver_sanlock.c
@@ -159,10 +159,10 @@ static int virLockManagerSanlockSetupLockspace(void)
memcpy(ls.name, VIR_LOCK_MANAGER_SANLOCK_AUTO_DISK_LOCKSPACE, SANLK_NAME_LEN);
ls.host_id = 0; /* Doesn't matter for initialization */
ls.flags = 0;
- if (!virStrcpy(ls.host_id_disk.path, path, SANLK_NAME_LEN)) {
+ if (!virStrcpy(ls.host_id_disk.path, path, SANLK_PATH_LEN)) {
virLockError(VIR_ERR_INTERNAL_ERROR,
_("Lockspace path '%s' exceeded %d characters"),
- path, SANLK_NAME_LEN);
+ path, SANLK_PATH_LEN);
goto error;
}
ls.host_id_disk.offset = 0;
--
1.7.4.4
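The length check virStrcpy performs can be sketched as follows. SANLK_NAME_LEN is 48 per the commit message; the 1024 used for SANLK_PATH_LEN here is an assumption about the sanlock headers:

```python
SANLK_NAME_LEN = 48     # lockspace names
SANLK_PATH_LEN = 1024   # assumed value for disk paths

def copy_path(path, limit):
    """Return path if it fits (with its NUL) in a buffer of `limit` bytes,
    the way virStrcpy refuses an oversized source."""
    if len(path) + 1 > limit:
        raise ValueError("Lockspace path '%s' exceeded %d characters"
                         % (path, limit))
    return path

long_path = "/var/lib/libvirt/sanlock/" + "x" * 60   # 85 chars
try:
    copy_path(long_path, SANLK_NAME_LEN)             # the old, wrong limit
except ValueError:
    print("rejected at 48")
print(copy_path(long_path, SANLK_PATH_LEN) == long_path)  # True
```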
[libvirt] [PATCH] build: simplify use of verify
by Eric Blake
Back in 2008 when this line of util.h was written, gnulib's verify
module didn't allow the use of multiple verify() in one file
in combination with our choice of gcc -W options. But that has
since been fixed in gnulib, and newer gnulib even maps verify()
to the C1x feature of _Static_assert, which gives even nicer
diagnostics with a new enough compiler, so we might as well go
with the simpler verify().
* src/util/util.h (VIR_ENUM_IMPL): Use simpler verify, now that
gnulib module is smarter.
---
As pointed out here:
https://www.redhat.com/archives/libvir-list/2011-August/msg01348.html
src/util/util.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/util/util.h b/src/util/util.h
index 6e6265f..908ba7b 100644
--- a/src/util/util.h
+++ b/src/util/util.h
@@ -202,7 +202,7 @@ const char *virEnumToString(const char *const*types,
# define VIR_ENUM_IMPL(name, lastVal, ...) \
static const char *const name ## TypeList[] = { __VA_ARGS__ }; \
- extern int (* name ## Verify (void)) [verify_true (ARRAY_CARDINALITY(name ## TypeList) == lastVal)]; \
+ verify(ARRAY_CARDINALITY(name ## TypeList) == lastVal); \
const char *name ## TypeToString(int type) { \
return virEnumToString(name ## TypeList, \
ARRAY_CARDINALITY(name ## TypeList), \
--
1.7.4.4
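The invariant VIR_ENUM_IMPL enforces at compile time — that the string table has exactly lastVal entries — has a natural import-time analogue in Python (illustrative only; libvirt's macro is C):

```python
from enum import Enum

class DiskBus(Enum):
    IDE = 0
    SCSI = 1
    VIRTIO = 2
    LAST = 3          # sentinel, mirrors the C "lastVal" convention

DISK_BUS_NAMES = ["ide", "scsi", "virtio"]

# the analogue of verify(ARRAY_CARDINALITY(...) == lastVal): fail fast,
# at import time, if the table and the enum drift apart
assert len(DISK_BUS_NAMES) == DiskBus.LAST.value, "name table out of sync"

def disk_bus_to_string(value):
    return DISK_BUS_NAMES[value]

print(disk_bus_to_string(DiskBus.VIRTIO.value))  # virtio
```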