[libvirt] [PATCH 2/2] support for multiple graphics devices
by Pritesh Kothari
Hi All,
I have added support for multiple graphics devices; the patches are below.
I have checked them against current CVS head and they work fine.
PATCH 1/2: contains changes in libvirt for multiple graphics devices
PATCH 2/2: contains corresponding changes in qemu driver.
Regards,
Pritesh
15 years, 6 months
[libvirt] Authentication with virConnectAuthPtr
by Eduardo Otubo
Hello all,
I'll start using virConnectAuthPtr to handle the authentication process
in the phyp driver I'm writing
<https://www.redhat.com/archives/libvir-list/2009-April/msg00493.html>
and I have some doubts in my mind:
1) What are the types of authentication that 'int *credtype' can hold?
And who fills it with the information I'll need?
2) Once known the credential type, I need to use the function pointer
'virConnectAuthCallbackPtr cb' to get the information whatever it is,
right? I mean, it can be a password, a pubkey or anything else, right?
Is there a callback able to handle password or pubkeys?
3) And finally, I'll be able to use the 'void *cbdata' to manage the
authentication my own way, in my case using libssh. At the end of the
process, I'll just need a password or a key to get things working.
There is no driver authenticating against an SSH channel, which is why
I got so confused about this topic.
I don't know if my thoughts are clear here, but I would like to confirm
points 2 and 3, and check how 1 works. Could anyone help me?
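For what it's worth, the flow can be sketched with a self-contained mock. Everything below (the struct fields, the names, the CRED_PASSPHRASE constant) is a simplified stand-in for the real definitions in <libvirt/libvirt.h>, not the actual API:

```c
#include <stdio.h>
#include <string.h>

/* Simplified mimics of libvirt's virConnectCredential / virConnectAuth.
 * The real definitions live in <libvirt/libvirt.h>; names and fields
 * here are illustrative only. */
typedef struct {
    int type;            /* e.g. CRED_PASSPHRASE */
    const char *prompt;  /* shown to the user */
    char result[64];     /* the callback fills this in */
} cred;

enum { CRED_PASSPHRASE = 1 };

typedef int (*auth_cb)(cred *creds, unsigned int ncreds, void *cbdata);

typedef struct {
    int *credtypes;      /* credential types the driver may ask for */
    unsigned int ntypes;
    auth_cb cb;          /* invoked by the driver when it needs a secret */
    void *cbdata;        /* opaque pointer handed back to the callback */
} auth;

/* Driver side: ask the application's callback for a passphrase. */
int driver_get_passphrase(auth *a, char *buf, size_t len)
{
    cred c = { CRED_PASSPHRASE, "Enter passphrase:", "" };
    if (a->cb(&c, 1, a->cbdata) < 0)
        return -1;
    snprintf(buf, len, "%s", c.result);
    return 0;
}

/* Application side: this callback just copies a stored secret out of
 * cbdata instead of prompting interactively. */
int my_cb(cred *creds, unsigned int ncreds, void *cbdata)
{
    for (unsigned int i = 0; i < ncreds; i++)
        if (creds[i].type == CRED_PASSPHRASE)
            snprintf(creds[i].result, sizeof(creds[i].result),
                     "%s", (const char *)cbdata);
    return 0;
}
```

In the real API the driver fills the credential list with the types it needs (question 1), the application's callback fills in the results (question 2), and cbdata carries whatever state the application wants, e.g. a libssh session handle (question 3).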
Thanks in advance,
[]'s
--
Eduardo Otubo
Software Engineer
Linux Technology Center
IBM Systems & Technology Group
Mobile: +55 19 8135 0885
otubo(a)linux.vnet.ibm.com
[libvirt] discrepancies in issuing virsh capabilities with different user-accounts
by Gerrit Slomma
Hello
When running virsh capabilities as root I get different output than
when issuing the command with my own unprivileged account, roadrunner.
Furthermore, the first invocation of the command as roadrunner throws
an error message.
Restarting the libvirt daemon gives the correct output.
Sample:
Last login: Wed May 6 21:00:56 2009 from 192.168.1.120
[root@rr018 ~]# virsh capabilities|grep kvm
[root@rr018 ~]# su - roadrunner
[roadrunner@rr018 ~]$ virsh capabilities|grep kvm
libvir: Remote error : unable to connect to
'@/home/roadrunner/.libvirt/libvirt-sock': Connection refused
error: failed to connect to the hypervisor
Failed to bind socket to '@/home/roadrunner/.libvirt/libvirt-sock':
Address already in use
Failed to bind socket to '@/home/roadrunner/.libvirt/libvirt-sock':
Address already in use
Failed to bind socket to '@/home/roadrunner/.libvirt/libvirt-sock':
Address already in use
Failed to bind socket to '@/home/roadrunner/.libvirt/libvirt-sock':
Address already in use
^C
[roadrunner@rr018 ~]$ virsh capabilities|grep kvm
<domain type='kvm'>
<emulator>/usr/bin/kvm</emulator>
<domain type='kvm'>
[roadrunner@rr018 ~]$ exit
logout
[root@rr018 ~]# virsh capabilities|grep kvm
[root@rr018 ~]# /etc/init.d/libvirtd restart
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
[root@rr018 ~]# virsh capabilities|grep kvm
<domain type='kvm'>
<emulator>/usr/bin/kvm</emulator>
<domain type='kvm'>
[root@rr018 ~]#
Re: [libvirt] KVM processes -- should we be able to attach them to the libvirtd process?
by Daniel P. Berrange
On Wed, May 06, 2009 at 08:56:18PM +0200, Gerrit Slomma wrote:
> Daniel P. Berrange schrieb:
> >On Tue, May 05, 2009 at 11:38:13PM -0500, Matthew Farrellee wrote:
> >
> >>It doesn't appear to be the case that the libvirtd daemon can trivially
> >>restart and continue with no interruptions. Right now it loses track of
> >>VMs.
> >>
> >
> >That is a bug then. If you can reproduce it, please file a BZ ticket
> >so we can track it down & fix it.
> >
> >
> >>In a scenario where VMs are not deployed and locked to specific physical
> >>nodes, it can be highly valuable to have ways to ensure a VM is no
> >>longer running when a layer of its management stops functioning.
> >>
> >
> >IMHO this is a problem to be solved by clustering software. If the
> >clustering software detects a failure with the management service,
> >then it should power fence the entire node. Relying on management
> >service failure to kill the VMs will never be reliable enough.
> >
> I think he is pointing towards a VM that runs on a host where it isn't
> defined via a corresponding *.xml.
> If you restart libvirtd, it loses the connection to this or these
> specific VM(s).
That is a bug that needs fixing. Even if there is no persistent config,
we should not lose track of the running VM, because we always write
out the 'live' XML config to /var/run/libvirt explicitly so that it
is available at restart.
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] PATCH: Raise log level for dlopen() problems
by Daniel P. Berrange
Some of the problems when dlopen'ing a module really should be reported
to the user/admin more readily, so this raises the logging level of the
important failure messages so that they're visible by default.
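The pattern in question (probe with access(), then dlopen() and report dlerror() on failure) looks roughly like this standalone; MY_WARN/MY_ERROR are stand-ins for libvirt's VIR_WARN/VIR_ERROR logging macros:

```c
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>

/* Stand-ins for libvirt's logging macros; the real ones route through
 * the libvirt logging subsystem with per-level filtering. */
#define MY_WARN(...)  (fprintf(stderr, "warning: " __VA_ARGS__))
#define MY_ERROR(...) (fprintf(stderr, "error: "   __VA_ARGS__))

/* Returns the module handle, or NULL after logging why loading failed. */
void *load_module(const char *path)
{
    void *handle;

    /* A missing or unreadable module is only noteworthy: warn. */
    if (access(path, R_OK) < 0) {
        MY_WARN("module %s not accessible\n", path);
        return NULL;
    }

    handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        /* dlerror() describes the most recent dlopen/dlsym failure;
         * a module that exists but won't load is a real error. */
        MY_ERROR("failed to load module %s: %s\n", path, dlerror());
        return NULL;
    }
    return handle;
}
```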
Daniel
diff -r 15c4668d403b src/driver.c
--- a/src/driver.c Wed Apr 29 15:29:17 2009 +0100
+++ b/src/driver.c Thu Apr 30 14:49:27 2009 +0100
@@ -54,13 +54,13 @@ virDriverLoadModule(const char *name)
return NULL;
if (access(modfile, R_OK) < 0) {
- DEBUG("Moodule %s not accessible", modfile);
+ VIR_WARN("Module %s not accessible", modfile);
goto cleanup;
}
handle = dlopen(modfile, RTLD_NOW | RTLD_LOCAL);
if (!handle) {
- DEBUG("failed to load module %s %s", modfile, dlerror());
+ VIR_ERROR("failed to load module %s %s", modfile, dlerror());
goto cleanup;
}
@@ -70,12 +70,12 @@ virDriverLoadModule(const char *name)
regsym = dlsym(handle, regfunc);
if (!regsym) {
- DEBUG("Missing module registration symbol %s", regfunc);
+ VIR_ERROR("Missing module registration symbol %s", regfunc);
goto cleanup;
}
if ((*regsym)() < 0) {
- DEBUG("Failed module registration %s", regfunc);
+ VIR_ERROR("Failed module registration %s", regfunc);
goto cleanup;
}
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] PATCH: More robust name & UUID uniqueness checking for QEMU
by Daniel P. Berrange
When defining a VM config, we need to apply the following logic:
- If existing VM has same UUID
    - If name also matches => allow
    - Else => raise error
- Else
    - If name matches => raise error
    - Else => allow
When creating a live VM, or restoring a VM, we need to apply similar
but slightly different logic:
- If existing VM has same UUID
    - If name also matches
        - If existing VM is running => raise error
        - Else => allow
    - Else => raise error
- Else
    - If name matches => raise error
    - Else => allow
This patch applies those checks for the QEMU driver
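The create/restore case of that decision table can be sketched as a self-contained function; vm_rec and the return codes below are illustrative stand-ins, not libvirt's virDomainObj types:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical simplified record; libvirt's real virDomainObj carries
 * much more state (locks, live/persistent defs, etc.). */
typedef struct {
    const char *name;
    const char *uuid;
    int active;
} vm_rec;

enum { ALLOW, ERR_UUID_TAKEN, ERR_NAME_TAKEN, ERR_ALREADY_ACTIVE };

/* Create-live / restore check: same UUID with same name is allowed
 * only if the existing VM is not running; same UUID with a different
 * name, or same name with a different UUID, is refused. */
int check_create(const vm_rec *vms, size_t n,
                 const char *name, const char *uuid)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(vms[i].uuid, uuid) == 0) {
            if (strcmp(vms[i].name, name) != 0)
                return ERR_UUID_TAKEN;      /* UUID taken by another name */
            if (vms[i].active)
                return ERR_ALREADY_ACTIVE;  /* already running */
            return ALLOW;
        }
    }
    for (size_t i = 0; i < n; i++)
        if (strcmp(vms[i].name, name) == 0)
            return ERR_NAME_TAKEN;          /* name taken by another UUID */
    return ALLOW;
}
```

The define case is the same minus the ERR_ALREADY_ACTIVE branch, since redefining the config of a running VM is permitted.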
Daniel
diff -r 4e6a98395da5 src/qemu_driver.c
--- a/src/qemu_driver.c Thu Apr 30 14:50:14 2009 +0100
+++ b/src/qemu_driver.c Thu Apr 30 15:03:03 2009 +0100
@@ -2145,22 +2145,37 @@ static virDomainPtr qemudDomainCreate(vi
if (virSecurityDriverVerify(conn, def) < 0)
goto cleanup;
- vm = virDomainFindByName(&driver->domains, def->name);
- if (vm) {
- qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
- _("domain '%s' is already defined"),
- def->name);
- goto cleanup;
- }
+ /* See if a VM with matching UUID already exists */
vm = virDomainFindByUUID(&driver->domains, def->uuid);
if (vm) {
- char uuidstr[VIR_UUID_STRING_BUFLEN];
-
- virUUIDFormat(def->uuid, uuidstr);
- qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
- _("domain with uuid '%s' is already defined"),
- uuidstr);
- goto cleanup;
+ /* UUID matches, but if names don't match, refuse it */
+ if (STRNEQ(vm->def->name, def->name)) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ vm->def->name, uuidstr);
+ goto cleanup;
+ }
+
+ /* UUID & name match, but if VM is already active, refuse it */
+ if (virDomainIsActive(vm)) {
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain is already active as '%s'"), vm->def->name);
+ goto cleanup;
+ }
+ virDomainObjUnlock(vm);
+ } else {
+ /* UUID does not match, but if a name matches, refuse it */
+ vm = virDomainFindByName(&driver->domains, def->name);
+ if (vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ def->name, uuidstr);
+ goto cleanup;
+ }
}
if (!(vm = virDomainAssignDef(conn,
@@ -2348,6 +2363,11 @@ static int qemudDomainDestroy(virDomainP
_("no domain with matching uuid '%s'"), uuidstr);
goto cleanup;
}
+ if (!virDomainIsActive(vm)) {
+ qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
+ "%s", _("domain is not running"));
+ goto cleanup;
+ }
qemudShutdownVMDaemon(dom->conn, driver, vm);
event = virDomainEventNewFromObj(vm,
@@ -3258,17 +3278,36 @@ static int qemudDomainRestore(virConnect
goto cleanup;
}
- /* Ensure the name and UUID don't already exist in an active VM */
+ /* See if a VM with matching UUID already exists */
vm = virDomainFindByUUID(&driver->domains, def->uuid);
- if (!vm)
- vm = virDomainFindByName(&driver->domains, def->name);
if (vm) {
+ /* UUID matches, but if names don't match, refuse it */
+ if (STRNEQ(vm->def->name, def->name)) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ vm->def->name, uuidstr);
+ goto cleanup;
+ }
+
+ /* UUID & name match, but if VM is already active, refuse it */
if (virDomainIsActive(vm)) {
qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_INVALID,
_("domain is already active as '%s'"), vm->def->name);
goto cleanup;
- } else {
- virDomainObjUnlock(vm);
+ }
+ virDomainObjUnlock(vm);
+ } else {
+ /* UUID does not match, but if a name matches, refuse it */
+ vm = virDomainFindByName(&driver->domains, def->name);
+ if (vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ def->name, uuidstr);
+ goto cleanup;
}
}
@@ -3603,18 +3642,41 @@ static virDomainPtr qemudDomainDefine(vi
if (virSecurityDriverVerify(conn, def) < 0)
goto cleanup;
- vm = virDomainFindByName(&driver->domains, def->name);
+ /* See if a VM with matching UUID already exists */
+ vm = virDomainFindByUUID(&driver->domains, def->uuid);
if (vm) {
+ /* UUID matches, but if names don't match, refuse it */
+ if (STRNEQ(vm->def->name, def->name)) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ vm->def->name, uuidstr);
+ goto cleanup;
+ }
+
+ /* UUID & name match */
virDomainObjUnlock(vm);
newVM = 0;
+ } else {
+ /* UUID does not match, but if a name matches, refuse it */
+ vm = virDomainFindByName(&driver->domains, def->name);
+ if (vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(vm->def->uuid, uuidstr);
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("domain '%s' is already defined with uuid %s"),
+ def->name, uuidstr);
+ goto cleanup;
+ }
}
if (!(vm = virDomainAssignDef(conn,
&driver->domains,
def))) {
- virDomainDefFree(def);
- goto cleanup;
- }
+ goto cleanup;
+ }
+ def = NULL;
vm->persistent = 1;
if (virDomainSaveConfig(conn,
@@ -3636,6 +3698,7 @@ static virDomainPtr qemudDomainDefine(vi
if (dom) dom->id = vm->def->id;
cleanup:
+ virDomainDefFree(def);
if (vm)
virDomainObjUnlock(vm);
if (event)
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] PATCH: Don't reset / detach host PCI devices in test scripts !
by Daniel P. Berrange
The PCI passthrough patches made it so that qemudBuildCommandLine() would
actually try to detach your host devices & reset them. Most definitely not
what you want when running this via a test case!
This patch moves the host device management out into a separate method,
so that qemudBuildCommandLine() doesn't do anything except safely build
the command line.
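The detach-all-then-reset-all ordering the new method uses can be illustrated with a generic two-pass sketch; the device model below is hypothetical, not libvirt's pci.h API:

```c
#include <stddef.h>

/* Hypothetical device model: detaching flips a flag; resetting a device
 * is only safe once every device sharing its bus has been detached,
 * because a reset may affect the whole bus. */
typedef struct {
    int bus;
    int detached;
    int resets;
} dev;

static int bus_fully_detached(const dev *devs, size_t n, int bus)
{
    for (size_t i = 0; i < n; i++)
        if (devs[i].bus == bus && !devs[i].detached)
            return 0;
    return 1;
}

/* Pass 1: detach every device. Pass 2: reset them. Interleaving the
 * two passes could reset a bus while a sibling device on that bus was
 * still attached. */
int prepare_devices(dev *devs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        devs[i].detached = 1;

    for (size_t i = 0; i < n; i++) {
        if (!bus_fully_detached(devs, n, devs[i].bus))
            return -1;  /* unsafe; cannot happen after pass 1 */
        devs[i].resets++;
    }
    return 0;
}
```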
Daniel
qemu_conf.c | 46 -----------------------------------
qemu_driver.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+), 46 deletions(-)
Index: src/qemu_conf.c
===================================================================
RCS file: /data/cvs/libvirt/src/qemu_conf.c,v
retrieving revision 1.133
diff -u -p -u -p -r1.133 qemu_conf.c
--- src/qemu_conf.c 2 Mar 2009 20:22:35 -0000 1.133
+++ src/qemu_conf.c 2 Mar 2009 20:47:36 -0000
@@ -47,7 +47,6 @@
#include "datatypes.h"
#include "xml.h"
#include "nodeinfo.h"
-#include "pci.h"
#define VIR_FROM_THIS VIR_FROM_QEMU
@@ -1395,52 +1394,7 @@ int qemudBuildCommandLine(virConnectPtr
ADD_ARG_LIT("-pcidevice");
ADD_ARG_LIT(pcidev);
VIR_FREE(pcidev);
-
- if (hostdev->managed) {
- pciDevice *dev = pciGetDevice(conn,
- hostdev->source.subsys.u.pci.domain,
- hostdev->source.subsys.u.pci.bus,
- hostdev->source.subsys.u.pci.slot,
- hostdev->source.subsys.u.pci.function);
- if (!dev)
- goto error;
-
- if (pciDettachDevice(conn, dev) < 0) {
- pciFreeDevice(conn, dev);
- goto error;
- }
-
- pciFreeDevice(conn, dev);
- } /* else {
- XXX validate that non-managed device isn't in use, eg
- by checking that device is either un-bound, or bound
- to pci-stub.ko
- } */
}
-
- }
-
- /* Now that all the PCI hostdevs have be dettached, we can reset them */
- for (i = 0 ; i < vm->def->nhostdevs ; i++) {
- virDomainHostdevDefPtr hostdev = vm->def->hostdevs[i];
- pciDevice *dev;
-
- if (hostdev->mode != VIR_DOMAIN_HOSTDEV_MODE_SUBSYS ||
- hostdev->source.subsys.type != VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI)
- continue;
-
- dev = pciGetDevice(conn,
- hostdev->source.subsys.u.pci.domain,
- hostdev->source.subsys.u.pci.bus,
- hostdev->source.subsys.u.pci.slot,
- hostdev->source.subsys.u.pci.function);
- if (!dev)
- goto error;
-
- if (pciResetDevice(conn, dev) < 0)
- goto error;
-
- pciFreeDevice(conn, dev);
}
if (migrateFrom) {
Index: src/qemu_driver.c
===================================================================
RCS file: /data/cvs/libvirt/src/qemu_driver.c,v
retrieving revision 1.208
diff -u -p -u -p -r1.208 qemu_driver.c
--- src/qemu_driver.c 2 Mar 2009 17:39:43 -0000 1.208
+++ src/qemu_driver.c 2 Mar 2009 20:47:36 -0000
@@ -1133,6 +1133,79 @@ static int qemudNextFreeVNCPort(struct q
return -1;
}
+static int qemuPrepareHostDevices(virConnectPtr conn,
+ virDomainDefPtr def) {
+ int i;
+
+ /* We have to use 2 loops here. *All* devices must
+ * be detached before we reset any of them, because
+ * in some cases you have to reset the whole PCI bus,
+ * which impacts all devices on it
+ */
+
+ for (i = 0 ; i < def->nhostdevs ; i++) {
+ virDomainHostdevDefPtr hostdev = def->hostdevs[i];
+
+ if (hostdev->mode != VIR_DOMAIN_HOSTDEV_MODE_SUBSYS)
+ continue;
+ if (hostdev->source.subsys.type != VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI)
+ continue;
+
+ if (!hostdev->managed) {
+ pciDevice *dev = pciGetDevice(conn,
+ hostdev->source.subsys.u.pci.domain,
+ hostdev->source.subsys.u.pci.bus,
+ hostdev->source.subsys.u.pci.slot,
+ hostdev->source.subsys.u.pci.function);
+ if (!dev)
+ goto error;
+
+ if (pciDettachDevice(conn, dev) < 0) {
+ pciFreeDevice(conn, dev);
+ goto error;
+ }
+
+ pciFreeDevice(conn, dev);
+ } /* else {
+ XXX validate that non-managed device isn't in use, eg
+ by checking that device is either un-bound, or bound
+ to pci-stub.ko
+ } */
+ }
+
+ /* Now that all the PCI hostdevs have been detached, we can safely
+ * reset them */
+ for (i = 0 ; i < def->nhostdevs ; i++) {
+ virDomainHostdevDefPtr hostdev = def->hostdevs[i];
+ pciDevice *dev;
+
+ if (hostdev->mode != VIR_DOMAIN_HOSTDEV_MODE_SUBSYS)
+ continue;
+ if (hostdev->source.subsys.type != VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI)
+ continue;
+
+ dev = pciGetDevice(conn,
+ hostdev->source.subsys.u.pci.domain,
+ hostdev->source.subsys.u.pci.bus,
+ hostdev->source.subsys.u.pci.slot,
+ hostdev->source.subsys.u.pci.function);
+ if (!dev)
+ goto error;
+
+ if (pciResetDevice(conn, dev) < 0) {
+ pciFreeDevice(conn, dev);
+ goto error;
+ }
+
+ pciFreeDevice(conn, dev);
+ }
+
+ return 0;
+
+error:
+ return -1;
+}
+
static virDomainPtr qemudDomainLookupByName(virConnectPtr conn,
const char *name);
@@ -1210,6 +1283,9 @@ static int qemudStartVMDaemon(virConnect
return -1;
}
+ if (qemuPrepareHostDevices(conn, vm->def) < 0)
+ return -1;
+
vm->def->id = driver->nextvmid++;
if (qemudBuildCommandLine(conn, driver, vm,
qemuCmdFlags, &argv, &progenv,
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] [PATCH] Refresh QEMU driver caps in getCapabilities
by Cole Robinson
Hi all,
The attached patch fixes QEMU getCapabilities to refresh caps before
returning them to the user. Currently the capabilities are only
refreshed once (at daemon startup), which means libvirtd needs to be
restarted to pick up changes if QEMU or KVM are installed while the
daemon is running. See:
https://bugzilla.redhat.com/show_bug.cgi?id=460649
There are several things 'wrong' with this change:
- We reset/rescan fields that won't change (host arch, MAC address
prefix). This should be fixed at some point, but isn't a big deal since
the total performance impact is negligible (see below).
- We only refresh the capabilities when the user calls getCapabilities,
which means we are still carrying around stale caps prior to that, which
is what libvirt validates against. In practice, virt-manager and
virt-install both call getCapabilities often so this isn't an issue,
though the caps internal API should probably address this at some point.
To test the performance impact, I used a simple python script:
import libvirt
conn = libvirt.open("qemu:///system")
for i in range(0, 30):
    conn.getCapabilities()
The loop was on average .02 seconds slower, which I think is
negligible.
If at some point in the future capabilities generation becomes smarter
(searching PATH for emulators, scraping device list output, etc.), it
might be worth re-checking the time impact. But for now it doesn't seem
to be an issue.
Thanks,
Cole