[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people who are, and they can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu but I expect it will be soon
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This
behaves in very much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user-visible switch to use vhost, or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
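For concreteness, the two invocations being compared would look roughly like
this (illustrative only: the fd number and the virtio nic option are made up,
and the vhost syntax is my guess since nothing is merged yet):
  qemu-kvm ... -net nic,model=virtio -net tap,fd=42     (what libvirt does today)
  qemu-kvm ... -net nic,model=virtio -net vhost,fd=42   (speculative vhost-net variant)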
The more interesting invocation of vhost-net though is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there are only a small number of network devices, so
this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can be exposed as many
virtual functions (VFs). There are a few restrictions here, though. The
biggest is that currently, you can only change the number of VFs by
reloading a kernel module so it's really a parameter that must be set at
startup time.
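As an example of what that means in practice (assuming an Intel ixgbe-based
adapter; other SR-IOV drivers use similar module parameters), enabling 7 VFs
per port requires reloading the driver:
  # rmmod ixgbe
  # modprobe ixgbe max_vfs=7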
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
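To make that concrete, a guest interface under this scheme might look
something like the sketch below; the <source network='eth0'/> element is the
proposal above, and the rest is just today's interface syntax for illustration:
  <interface type='network'>
    <source network='eth0'/>
    <model type='virtio'/>
  </interface>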
Another model would be to have libvirt see an SR-IOV adapter as a
network pool, where libvirt handles all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] [PATCH 0/4] Multiple problems with saving to block devices
by Daniel P. Berrange
This patch series makes it possible to save to a block device,
instead of a plain file. There were multiple problems:
- When save failed, we might dereference a NULL pointer
- When save failed, we unlinked the device node!!
- The approach of using >> to append doesn't work with block devices
- CGroups was blocking QEMU access to the block device when enabled
One remaining problem is not in libvirt, but rather in QEMU. The QEMU
exec:-based migration often fails to detect failure of the command
and will thus hang forever, attempting a migration that'll never
succeed! Fortunately, you can now work around this in libvirt using
the virsh domjobabort command.
[libvirt] qemu-namespace handling?
by Philipp Hahn
Hello,
some time ago I had to manipulate the domain XML description using Python's
ElementTree XML implementation, which had problems generating the right format
for libvirt: ElementTree only supports adding QName elements (that
is "{http://libvirt.org/schemas/domain/qemu/1.0}commandline"), which
internally creates a temporary binding of this namespace to the "ns0"
prefix.
My work-around for ElementTree was to add the namespace mapping for "qemu"
to "http://libvirt.org/schemas/domain/qemu/1.0" to ET's internal mapping table
and to add an "xmlns:qemu" attribute by hand:
ET._namespace_map[QEMU_URI] = 'qemu'
domain.attrib['xmlns:qemu'] = QEMU_URI
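For reference, here is the whole workaround as a minimal, self-contained
sketch (Python 2 ElementTree; the qemu:arg content is only an illustration):
  import xml.etree.ElementTree as ET

  QEMU_URI = 'http://libvirt.org/schemas/domain/qemu/1.0'

  # Map the URI to the "qemu" prefix so ElementTree does not invent "ns0"
  # when serializing (a private API, but it is what ElementTree offers).
  ET._namespace_map[QEMU_URI] = 'qemu'

  domain = ET.fromstring("<domain type='kvm'><name>demo</name></domain>")
  # Bind the prefix on the root element by hand, because libvirt checks
  # for the binding there.
  domain.attrib['xmlns:qemu'] = QEMU_URI

  cmdline = ET.SubElement(domain, '{%s}commandline' % QEMU_URI)
  ET.SubElement(cmdline, '{%s}arg' % QEMU_URI, value='-snapshot')

  xml = ET.tostring(domain)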
libvirt, on the other hand, expects the prefix to be "qemu" and only checks
that this prefix is bound to the URI mentioned above at the root node.
The following examples would be XML valid, but are not accepted by libvirt:
<domain>...
<qemu:commandline xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
...</qemu:commandline>
</domain>
<domain xmlns:ns0="http://libvirt.org/schemas/domain/qemu/1.0">...
<ns0:commandline>
...</ns0:commandline>
</domain>
The following (esoteric) example might be wrongly accepted by libvirt
(untested):
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
<qemu:commandline xmlns:qemu="urn:foo">
...</qemu:commandline>
</domain>
I don't know if this is worth fixing, but I encountered the first two
problems myself and had to spend some time figuring out what I did wrong. So
at least I want to share my findings with others, so they don't make the
same mistake.
Sincerely
Philipp Hahn
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 D-28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/
[libvirt] LPC2011 Virtualization Micro Conf
by Jes Sorensen
Hi,
Following the success of last year's Virtualization micro-conference track
at Linux Plumbers 2010, I have agreed to organize a similar track
for Linux Plumbers 2011 in Santa Rosa. Please see the official Linux
Plumbers 2011 website for full details about the conference:
http://www.linuxplumbersconf.org/2011/
The Linux Plumbers 2011 Virtualization track focuses on free software
Linux virtualization in general. It is not reserved for a specific
hypervisor, but will cover general virtualization issues and, in
particular, collaboration amongst projects. This includes KVM,
Xen, QEMU, containers, etc.
Deadline:
---------
The deadline for submissions is April 30th. Please visit the following
link to submit your proposal:
http://www.linuxplumbersconf.org/2011/ocw/events/LPC2011MC/proposals
Example topics:
---------------
- Kernel and Hypervisor KVM/QEMU/Xen interaction
- QEMU integration, sharing of code between the different projects
- IO Performance and scalability
- Live Migration
- Managing and supporting enterprise storage
- Support for new hardware features, and/or providing guest access to
these features.
- Guest agents
- Virtualization management tools, libvirt, etc.
- Desktop integration
- Consumer Electronics device emulation
- Custom platform configuration and coordination with the kernel
Audience:
---------
Virtualization hypervisor developers, developers of virtualization
management tools and applications, embedded virtualization developers,
vendors and others.
Best regards,
Jes
[libvirt] libvirt(-java): virDomainMigrateSetMaxDowntime
by Thomas Treutner
Hi,
I'm facing some trouble with virDomainMigrate &
virDomainMigrateSetMaxDowntime. The core problem is that KVM's default
value for the maximum allowed downtime is 30ms (max_downtime in
migration.c, where it's in nanoseconds; qemu 0.12.3), which is too low for
my VMs when they're busy (~50% CPU util and above). Migrations then take
literally forever; I had to abort them after 15 minutes or so. I'm using
GBit Ethernet, so plenty of bandwidth should be available. Increasing the
allowed downtime to 50ms seems to help, but I have not tested situations
where the VM is completely utilized. Anyway, the default value is too
low for me, so I tried virDomainMigrateSetMaxDowntime, or rather its Java
wrapper function.
Here I'm facing a problem I can overcome only with a quite crude hack:
org.libvirt.Domain.migrate(..) blocks until the migration is done, which
is of course reasonable. So I tried calling migrateSetMaxDowntime(..)
before migrating, causing an error:
"Requested operation is not valid: domain is not being migrated"
This tells me that calling migrateSetMaxDowntime is only allowed during
migrations. As I'm migrating VMs automatically and without any user
intervention, I'd need to create some glue code that runs in an extra
thread, waits "some time" in the hope that the migration has already been
kicked off in the main thread, and then calls migrateSetMaxDowntime. I'd
like to avoid such quirks in the long run, if possible.
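To illustrate the kind of glue code I mean, here is a rough sketch (in Python
rather than Java, just to keep it short; the URIs, the guest name and the
2-second delay are made up, and 50 is the downtime in milliseconds):
  import threading
  import time
  import libvirt

  def bump_downtime(dom, downtime_ms, delay=2.0):
      # Wait "some time" and hope the migration job has started by then,
      # since migrateSetMaxDowntime fails while no migration is running.
      time.sleep(delay)
      try:
          dom.migrateSetMaxDowntime(downtime_ms, 0)
      except libvirt.libvirtError as e:
          print('setting max downtime failed: %s' % e)

  src = libvirt.open('qemu:///system')
  dst = libvirt.open('qemu+ssh://target/system')
  dom = src.lookupByName('myguest')

  threading.Thread(target=bump_downtime, args=(dom, 50)).start()
  # blocks until the migration finishes
  dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)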
So my question: would it be possible to extend the migrate() method
and the underlying virDomainMigrate() function with an optional maxDowntime
parameter that is passed down as QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME, so that
qemuDomainWaitForMigrationComplete would set the value? Or are there
easier ways?
Thanks and regards,
-t
Re: [libvirt] migration of vnlink VMs
by Oved Ourfalli
----- Original Message -----
> From: "Laine Stump" <lstump(a)redhat.com>
> To: "Oved Ourfalli" <ovedo(a)redhat.com>
> Cc: "Ayal Baron" <abaron(a)redhat.com>, "Barak Azulay" <bazulay(a)redhat.com>, "Shahar Havivi" <shaharh(a)redhat.com>,
> "Itamar Heim" <iheim(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>
> Sent: Thursday, April 28, 2011 10:20:35 AM
> Subject: Re: migration of vnlink VMs
> Oved,
>
> Would it be okay to repost this message to the thread on libvir-list
> so
> that other parties can add their thoughts?
>
Of course. I'm sending my answer to the libvirt list.
> On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
> > Laine, hello.
> >
> > We read your proposal for abstraction of guest<--> host network
> > connection in libvirt.
> >
> > You have an open issue there regarding the vepa/vnlink attributes:
> > "3) What about the parameters in the<virtualport> element that are
> > currently used by vepa/vnlink. Do those belong with the host, or
> > with the guest?"
> >
> > The parameters for the virtualport element should be on the guest,
> > and not the host, because a specific interface can run multiple
> > profiles,
>
> Are you talking about host interface or guest interface? If you mean
> that multiple different profiles can be used when connecting to a
> particular switch - as long as there are only a few different
> profiles,
> rather than each guest having its own unique profile, then it still
> seems better to have the port profile live with the network definition
> (and just define multiple networks, one for each port profile).
>
The profile names can be changed regularly, so it looks like it will be better to put them at the guest level, so that the network file on the host won't have to be changed on all hosts whenever something changes in the profiles.
Also, you would be duplicating data by writing all the profile names on all the hosts that are connected to the vn-link/vepa switch.
>
> > so it will be a mistake to define a profile to be interface
> > specific on the host. Moreover, putting it in the guest level will
> > enable us in the future (if supported by libvirt/qemu) to migrate
> > a vm from a host with vepa/vnlink interfaces, to another host with
> > a bridge, for example.
>
> It seems to me like doing exactly the opposite would make it easier to
> migrate to a host that used a different kind of switching (from vepa
> to
> vnlink, or from a bridged interface to vepa, etc), since the port
> profile required for a particular host's network would be at the host
> waiting to be used.
You are right, but we would want the option to prevent that from happening in cases where we don't want to allow it.
We can make the ability to migrate between different network types configurable, and we would like an easy way to tell libvirt - "please allow/don't allow it".
>
> > So, in the networks at the host level you will have:
> > <network type='direct'>
> > <name>red-network</name>
> > <source mode='vepa'>
> > <pool>
> > <interface>
> > <name>eth0</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth4</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth18</name>
> > .....
> > </interface>
> > </pool>
> > </source>
> > </network>
> >
> > And in the guest you will have (for vepa):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbg">
> > <parameters managerid="11" typeid="1193047" typeidversion="2"
> > instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
> > </virtualport>
> > </interface>
> >
> > Or (for vnlink):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbh">
> > <parameters profile_name="profile1"/>
> > </virtualport>
> > </interface>
>
> This illustrates the problem I was wondering about - in your example
> it
> would not be possible for the guest to migrate from the host using a
> vepa switch to the host using a vnlink switch (and it would be
> possible
You are right. When trying to migrate between vepa and vnlink there will be missing attributes in each case if we leave them on the host.
> to migrate to a host using a standard bridge only if the virtualport
> element was ignored). If the virtualport element lived with the
> network
> definition of red-network on each host, it could be migrated without
> problem.
>
> The only problematic thing would be if any of the attributes within
> <parameters> was unique for each guest (I don't know anything about
> the
> individual attributes, but "instanceid" sounds like it might be
> different for each guest).
>
> > Then, when migrating from a vepa/vnlink host to another vepa/vnlink
> > host containing red-network, the profile attributes will be
> > available at the guest domain xml.
> > In case the target host has a red-network, which isn't vepa/vnlink,
> > we want to be able to choose whether to make the use of the profile
> > attributes optional (i.e., libvirt won't fail in case of migrating
> > to a network of another type), or mandatory (i.e., libvirt will fail
> > in case of migration to a non-vepa/vnlink network).
> >
> > We have something similar in CPU flags:
> > <cpu match="exact">
> > <model>qemu64</model>
> > <topology sockets="S" cores="C" threads="T"/>
> > <feature policy="require/optional/disable......"
> > name="sse2"/>
> > </cpu>
>
> In this analogy, does "CPU flags" == "mode (vepa/vnlink/bridge)" or
> does
> "CPU flags" == "virtualport parameters" ? It seems like what you're
> wanting can be satisfied by simply not defining "red-network" on the
> hosts that don't have the proper networking setup available (maybe
> what
> you *really* want to call it is "red-vnlink-network").
What I meant to say is that we would like to have the ability to say whether an attribute must be used or not.
The issues you mention are indeed interesting. I'm cc-ing libvirt-list to see what other people think.
Putting it on the guest will indeed make it problematic to migrate between networks that need different parameters (vnlink/vepa for example).
Oved
[libvirt] [BUG] Xen->libvirt: localtime reported as UTC
by Philipp Hahn
Hello,
just a report, no fix for that bug yet.
If I create a domain and set <clock offset='localtime'/>, that information is
correctly translated to Xend's sxpr data, but on reading it back I get it
reported as 'utc':
# virsh dumpxml 85664d3f-68dd-a4c2-4d2f-be7f276b95f0 | grep clock
<clock offset='utc'/>
# gfind localtime
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (platform
((device_model /usr/lib64/xen/bin/qemu-dm) (localtime 1)))
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (localtime 1)
BYtE
Philipp
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 D-28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/
[libvirt] [PATCH] add sendevent command and related APIs
by Lai Jiangshan
Enable libvirt to send events to the guest.
This command currently only supports NMI and key events.
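Hypothetical usage of the new virsh sub-command, following the options
defined below (domain, event type, event content):
  virsh sendevent mydomain nmi 2
  virsh sendevent mydomain key ctrl-alt-delete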
Signed-off-by: Lai Jiangshan <laijs(a)cn.fujitsu.com>
---
daemon/remote.c | 52 +++++++++++++++++++++
daemon/remote_dispatch_args.h | 2
daemon/remote_dispatch_prototypes.h | 16 ++++++
daemon/remote_dispatch_table.h | 10 ++++
include/libvirt/libvirt.h.in | 3 +
src/driver.h | 7 ++
src/esx/esx_driver.c | 2
src/libvirt.c | 88 ++++++++++++++++++++++++++++++++++++
src/libvirt_public.syms | 2
src/libxl/libxl_driver.c | 2
src/lxc/lxc_driver.c | 2
src/openvz/openvz_driver.c | 2
src/phyp/phyp_driver.c | 4 +
src/qemu/qemu_driver.c | 86 +++++++++++++++++++++++++++++++++++
src/qemu/qemu_monitor.c | 27 +++++++++++
src/qemu/qemu_monitor.h | 3 +
src/qemu/qemu_monitor_json.c | 68 +++++++++++++++++++++++++++
src/qemu/qemu_monitor_json.h | 3 +
src/qemu/qemu_monitor_text.c | 56 ++++++++++++++++++++++
src/qemu/qemu_monitor_text.h | 2
src/remote/remote_driver.c | 50 ++++++++++++++++++++
src/remote/remote_protocol.c | 22 +++++++++
src/remote/remote_protocol.h | 19 +++++++
src/remote/remote_protocol.x | 14 +++++
src/test/test_driver.c | 2
src/uml/uml_driver.c | 2
src/vbox/vbox_tmpl.c | 2
src/vmware/vmware_driver.c | 2
src/xen/xen_driver.c | 2
src/xenapi/xenapi_driver.c | 2
tools/virsh.c | 56 ++++++++++++++++++++++
31 files changed, 608 insertions(+), 2 deletions(-)
diff --git a/daemon/remote.c b/daemon/remote.c
index 1700c2d..5f9e78a 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -2836,6 +2836,58 @@ remoteDispatchDomainGetBlkioParameters(struct qemud_server *server
}
static int
+remoteDispatchDomainSendEventNMI(struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_message_header *hdr ATTRIBUTE_UNUSED,
+ remote_error *rerr,
+ remote_domain_send_event_nmi_args *args,
+ void *ret ATTRIBUTE_UNUSED)
+{
+ virDomainPtr dom;
+
+ dom = get_nonnull_domain(conn, args->dom);
+ if (dom == NULL) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ if (virDomainSendEventNMI(dom, args->vcpu) == -1) {
+ virDomainFree(dom);
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+ virDomainFree(dom);
+ return 0;
+}
+
+static int
+remoteDispatchDomainSendEventKey(struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_message_header *hdr ATTRIBUTE_UNUSED,
+ remote_error *rerr,
+ remote_domain_send_event_key_args *args,
+ void *ret ATTRIBUTE_UNUSED)
+{
+ virDomainPtr dom;
+
+ dom = get_nonnull_domain(conn, args->dom);
+ if (dom == NULL) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ if (virDomainSendEventKey(dom, args->key) == -1) {
+ virDomainFree(dom);
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+ virDomainFree(dom);
+ return 0;
+}
+
+static int
remoteDispatchDomainSetVcpus (struct qemud_server *server ATTRIBUTE_UNUSED,
struct qemud_client *client ATTRIBUTE_UNUSED,
virConnectPtr conn,
diff --git a/daemon/remote_dispatch_args.h b/daemon/remote_dispatch_args.h
index f9537d7..289a42e 100644
--- a/daemon/remote_dispatch_args.h
+++ b/daemon/remote_dispatch_args.h
@@ -178,3 +178,5 @@
remote_domain_migrate_set_max_speed_args val_remote_domain_migrate_set_max_speed_args;
remote_storage_vol_upload_args val_remote_storage_vol_upload_args;
remote_storage_vol_download_args val_remote_storage_vol_download_args;
+ remote_domain_send_event_nmi_args val_remote_domain_send_event_nmi_args;
+ remote_domain_send_event_key_args val_remote_domain_send_event_key_args;
diff --git a/daemon/remote_dispatch_prototypes.h b/daemon/remote_dispatch_prototypes.h
index 18bf41d..f920193 100644
--- a/daemon/remote_dispatch_prototypes.h
+++ b/daemon/remote_dispatch_prototypes.h
@@ -1618,3 +1618,19 @@ static int remoteDispatchSupportsFeature(
remote_error *err,
remote_supports_feature_args *args,
remote_supports_feature_ret *ret);
+static int remoteDispatchDomainSendEventNMI(
+ struct qemud_server *server,
+ struct qemud_client *client,
+ virConnectPtr conn,
+ remote_message_header *hdr,
+ remote_error *err,
+ remote_domain_send_event_nmi_args *args,
+ void *ret);
+static int remoteDispatchDomainSendEventKey(
+ struct qemud_server *server,
+ struct qemud_client *client,
+ virConnectPtr conn,
+ remote_message_header *hdr,
+ remote_error *err,
+ remote_domain_send_event_key_args *args,
+ void *ret);
diff --git a/daemon/remote_dispatch_table.h b/daemon/remote_dispatch_table.h
index b39f7c2..a706b19 100644
--- a/daemon/remote_dispatch_table.h
+++ b/daemon/remote_dispatch_table.h
@@ -1052,3 +1052,13 @@
.args_filter = (xdrproc_t) xdr_remote_storage_vol_download_args,
.ret_filter = (xdrproc_t) xdr_void,
},
+{ /* DomainSendEventNmi => 210 */
+ .fn = (dispatch_fn) remoteDispatchDomainSendEventNMI,
+ .args_filter = (xdrproc_t) xdr_remote_domain_send_event_nmi_args,
+ .ret_filter = (xdrproc_t) xdr_void,
+},
+{ /* DomainSendEventKey => 211 */
+ .fn = (dispatch_fn) remoteDispatchDomainSendEventKey,
+ .args_filter = (xdrproc_t) xdr_remote_domain_send_event_key_args,
+ .ret_filter = (xdrproc_t) xdr_void,
+},
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index bd36015..adbe482 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -2517,6 +2517,9 @@ int virDomainOpenConsole(virDomainPtr dom,
virStreamPtr st,
unsigned int flags);
+int virDomainSendEventNMI(virDomainPtr domain, unsigned int vcpu);
+int virDomainSendEventKey(virDomainPtr domain, const char *key);
+
#ifdef __cplusplus
}
#endif
diff --git a/src/driver.h b/src/driver.h
index e5f91ca..6caf13f 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -515,6 +515,11 @@ typedef int
virStreamPtr st,
unsigned int flags);
+typedef int
+ (*virDrvDomainSendEventNMI)(virDomainPtr dom, unsigned int vcpu);
+
+typedef int
+ (*virDrvDomainSendEventKey)(virDomainPtr dom, const char *key);
/**
* _virDriver:
@@ -639,6 +644,8 @@ struct _virDriver {
virDrvDomainSnapshotDelete domainSnapshotDelete;
virDrvQemuDomainMonitorCommand qemuDomainMonitorCommand;
virDrvDomainOpenConsole domainOpenConsole;
+ virDrvDomainSendEventNMI domainSendEventNMI;
+ virDrvDomainSendEventKey domainSendEventKey;
};
typedef int
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index deda372..7167712 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -4675,6 +4675,8 @@ static virDriver esxDriver = {
esxDomainSnapshotDelete, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
diff --git a/src/libvirt.c b/src/libvirt.c
index 9bdb4c8..245247f 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -5245,6 +5245,94 @@ error:
}
/**
+ * virDomainSendEventNMI:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @vcpu: the virtual CPU id to send NMI to
+ *
+ * Send an NMI to a specific vcpu of the guest
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int virDomainSendEventNMI(virDomainPtr domain, unsigned int vcpu)
+{
+ virConnectPtr conn;
+ VIR_DOMAIN_DEBUG(domain, "vcpu=%u", vcpu);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+ virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+ virDispatchError(NULL);
+ return (-1);
+ }
+ if (domain->conn->flags & VIR_CONNECT_RO) {
+ virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ conn = domain->conn;
+
+ if (conn->driver->domainSendEventNMI) {
+ int ret;
+ ret = conn->driver->domainSendEventNMI(domain, vcpu);
+ if (ret < 0)
+ goto error;
+ return ret;
+ }
+
+ virLibConnError (VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ virDispatchError(domain->conn);
+ return -1;
+}
+
+/**
+ * virDomainSendEventKey:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @key: the string of key or key sequence
+ *
+ * Send a special key or key sequence to the guest
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int virDomainSendEventKey(virDomainPtr domain, const char *key)
+{
+ virConnectPtr conn;
+ VIR_DOMAIN_DEBUG(domain, "key=%s", key);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+ virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+ virDispatchError(NULL);
+ return (-1);
+ }
+ if (domain->conn->flags & VIR_CONNECT_RO) {
+ virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ conn = domain->conn;
+
+ if (conn->driver->domainSendEventKey) {
+ int ret;
+ ret = conn->driver->domainSendEventKey(domain, key);
+ if (ret < 0)
+ goto error;
+ return ret;
+ }
+
+ virLibConnError (VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ virDispatchError(domain->conn);
+ return -1;
+}
+
+/**
* virDomainSetVcpus:
* @domain: pointer to domain object, or NULL for Domain0
* @nvcpus: the new number of virtual CPUs for this domain
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index b4aed41..cd0f474 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -434,6 +434,8 @@ LIBVIRT_0.9.0 {
virEventRunDefaultImpl;
virStorageVolDownload;
virStorageVolUpload;
+ virDomainSendEventNMI;
+ virDomainSendEventKey;
} LIBVIRT_0.8.8;
# .... define new API here using predicted next version number ....
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index e996ff6..040fc16 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -2353,6 +2353,8 @@ static virDriver libxlDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
static virStateDriver libxlStateDriver = {
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index e905302..1284ab1 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -2906,6 +2906,8 @@ static virDriver lxcDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
lxcDomainOpenConsole, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
static virStateDriver lxcStateDriver = {
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index fb30c37..26ba0a5 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1654,6 +1654,8 @@ static virDriver openvzDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
int openvzRegister(void) {
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 51f9ff6..d5d0ea6 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -4054,7 +4054,9 @@ static virDriver phypDriver = {
NULL, /* domainRevertToSnapshot */
NULL, /* domainSnapshotDelete */
NULL, /* qemuMonitorCommand */
- NULL, /* domainOpenConsole */
+ NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
static virStorageDriver phypStorageDriver = {
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index dd12dc8..02af591 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1659,6 +1659,90 @@ static int qemudDomainSetMemory(virDomainPtr dom, unsigned long newmem) {
return qemudDomainSetMemoryFlags(dom, newmem, VIR_DOMAIN_MEM_LIVE);
}
+static int qemuDomainSendEventNMI(virDomainPtr domain, unsigned int vcpu)
+{
+ struct qemud_driver *driver = domain->conn->privateData;
+ virDomainObjPtr vm = NULL;
+ int ret = -1;
+ qemuDomainObjPrivatePtr priv;
+
+ qemuDriverLock(driver);
+ vm = virDomainFindByUUID(&driver->domains, domain->uuid);
+ if (!vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(domain->uuid, uuidstr);
+ qemuReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching uuid '%s'"), uuidstr);
+ goto cleanup;
+ }
+
+ if (!virDomainObjIsActive(vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
+ goto cleanup;
+ }
+
+ priv = vm->privateData;
+
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ret = qemuMonitorSendEventNMI(priv->mon, vcpu);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEndJob(vm) == 0) {
+ vm = NULL;
+ goto cleanup;
+ }
+
+cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ qemuDriverUnlock(driver);
+ return ret;
+}
+
+static int qemuDomainSendEventKey(virDomainPtr domain, const char *key)
+{
+ struct qemud_driver *driver = domain->conn->privateData;
+ virDomainObjPtr vm = NULL;
+ int ret = -1;
+ qemuDomainObjPrivatePtr priv;
+
+ qemuDriverLock(driver);
+ vm = virDomainFindByUUID(&driver->domains, domain->uuid);
+ if (!vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(domain->uuid, uuidstr);
+ qemuReportError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching uuid '%s'"), uuidstr);
+ goto cleanup;
+ }
+
+ if (!virDomainObjIsActive(vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID,
+ "%s", _("domain is not running"));
+ goto cleanup;
+ }
+
+ priv = vm->privateData;
+
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ret = qemuMonitorSendEventKey(priv->mon, key);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEndJob(vm) == 0) {
+ vm = NULL;
+ goto cleanup;
+ }
+
+cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ qemuDriverUnlock(driver);
+ return ret;
+}
+
static int qemudDomainGetInfo(virDomainPtr dom,
virDomainInfoPtr info) {
struct qemud_driver *driver = dom->conn->privateData;
@@ -6923,6 +7007,8 @@ static virDriver qemuDriver = {
qemuDomainSnapshotDelete, /* domainSnapshotDelete */
qemuDomainMonitorCommand, /* qemuDomainMonitorCommand */
qemuDomainOpenConsole, /* domainOpenConsole */
+ qemuDomainSendEventNMI, /* domainSendEventNMI */
+ qemuDomainSendEventKey, /* domainSendEventKey */
};
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 2d28f8d..bc2e269 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -2228,3 +2228,30 @@ int qemuMonitorArbitraryCommand(qemuMonitorPtr mon,
ret = qemuMonitorTextArbitraryCommand(mon, cmd, reply);
return ret;
}
+
+
+int qemuMonitorSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu)
+{
+ int ret;
+
+ VIR_DEBUG("mon=%p, vcpu=%u", mon, vcpu);
+
+ if (mon->json)
+ ret = qemuMonitorJSONSendEventNMI(mon, vcpu);
+ else
+ ret = qemuMonitorTextSendEventNMI(mon, vcpu);
+ return ret;
+}
+
+int qemuMonitorSendEventKey(qemuMonitorPtr mon, const char *key)
+{
+ int ret;
+
+ VIR_DEBUG("mon=%p, key sequence=%s", mon, key);
+
+ if (mon->json)
+ ret = qemuMonitorJSONSendEventKey(mon, key);
+ else
+ ret = qemuMonitorTextSendEventKey(mon, key);
+ return ret;
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index c90219b..fdc9859 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -423,6 +423,9 @@ int qemuMonitorArbitraryCommand(qemuMonitorPtr mon,
char **reply,
bool hmp);
+int qemuMonitorSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu);
+int qemuMonitorSendEventKey(qemuMonitorPtr mon, const char *key);
+
/**
* When running two dd process and using <> redirection, we need a
* shell that will not truncate files. These two strings serve that
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 20a78e1..5149d9e 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2513,3 +2513,71 @@ cleanup:
return ret;
}
+
+int qemuMonitorJSONSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu)
+{
+ int ret = -1;
+ virJSONValuePtr cmd;
+ virJSONValuePtr reply = NULL;
+ char *hmp_cmd;
+
+ /*
+ * FIXME: qmp nmi is not supported until qemu-0.16.0,
+ * use human-monitor-command instead temporarily.
+ */
+ if (virAsprintf(&hmp_cmd, "nmi %u", vcpu) < 0) {
+ virReportOOMError();
+ return -1;
+ }
+
+ cmd = qemuMonitorJSONMakeCommand("human-monitor-command",
+ "s:command-line", hmp_cmd,
+ NULL);
+ if (!cmd)
+ goto out_free_hmp_cmd;
+
+ ret = qemuMonitorJSONCommand(mon, cmd, &reply);
+
+ if (ret == 0)
+ ret = qemuMonitorJSONCheckError(cmd, reply);
+
+ virJSONValueFree(cmd);
+ virJSONValueFree(reply);
+out_free_hmp_cmd:
+ VIR_FREE(hmp_cmd);
+ return ret;
+}
+
+int qemuMonitorJSONSendEventKey(qemuMonitorPtr mon, const char *key_seq)
+{
+ int ret = -1;
+ virJSONValuePtr cmd;
+ virJSONValuePtr reply = NULL;
+ char *hmp_cmd;
+
+ /*
+ * FIXME: qmp sendkey is not supported until qemu-0.16.0,
+ * use human-monitor-command instead temporarily.
+ */
+ if (virAsprintf(&hmp_cmd, "sendkey %s", key_seq) < 0) {
+ virReportOOMError();
+ return -1;
+ }
+
+ cmd = qemuMonitorJSONMakeCommand("human-monitor-command",
+ "s:command-line", hmp_cmd,
+ NULL);
+ if (!cmd)
+ goto out_free_hmp_cmd;
+
+ ret = qemuMonitorJSONCommand(mon, cmd, &reply);
+
+ if (ret == 0)
+ ret = qemuMonitorJSONCheckError(cmd, reply);
+
+ virJSONValueFree(cmd);
+ virJSONValueFree(reply);
+out_free_hmp_cmd:
+ VIR_FREE(hmp_cmd);
+ return ret;
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 086f0e1..dc206df 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -204,4 +204,7 @@ int qemuMonitorJSONArbitraryCommand(qemuMonitorPtr mon,
char **reply_str,
bool hmp);
+int qemuMonitorJSONSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu);
+int qemuMonitorJSONSendEventKey(qemuMonitorPtr mon, const char *key);
+
#endif /* QEMU_MONITOR_JSON_H */
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index e0e3292..d3416a8 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -2627,3 +2627,59 @@ int qemuMonitorTextArbitraryCommand(qemuMonitorPtr mon, const char *cmd,
return ret;
}
+
+int qemuMonitorTextSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu)
+{
+ char *cmd;
+ char *reply = NULL;
+ int ret = -1;
+
+ if (virAsprintf(&cmd, "nmi %u", vcpu) < 0) {
+ virReportOOMError();
+ return -1;
+ }
+ if (qemuMonitorTextArbitraryCommand(mon, cmd, &reply)) {
+ qemuReportError(VIR_ERR_OPERATION_FAILED,
+ _("failed to send NMI using command '%s'"),
+ cmd);
+ goto cleanup;
+ }
+
+ ret = 0;
+
+cleanup:
+ VIR_FREE(cmd);
+ VIR_FREE(reply);
+ return ret;
+}
+
+int qemuMonitorTextSendEventKey(qemuMonitorPtr mon, const char *key)
+{
+ char *cmd;
+ char *reply = NULL;
+ int ret = -1;
+
+ if (virAsprintf(&cmd, "sendkey %s", key) < 0) {
+ virReportOOMError();
+ return -1;
+ }
+ if (qemuMonitorTextArbitraryCommand(mon, cmd, &reply)) {
+ qemuReportError(VIR_ERR_OPERATION_FAILED,
+ _("failed to send key using command '%s'"),
+ cmd);
+ goto cleanup;
+ }
+
+ if (strstr(reply, "unknown key") != NULL) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("sent unknown key"));
+ goto cleanup;
+ }
+
+ ret = 0;
+
+cleanup:
+ VIR_FREE(cmd);
+ VIR_FREE(reply);
+ return ret;
+}
diff --git a/src/qemu/qemu_monitor_text.h b/src/qemu/qemu_monitor_text.h
index 0838a2b..4a03c40 100644
--- a/src/qemu/qemu_monitor_text.h
+++ b/src/qemu/qemu_monitor_text.h
@@ -198,4 +198,6 @@ int qemuMonitorTextDeleteSnapshot(qemuMonitorPtr mon, const char *name);
int qemuMonitorTextArbitraryCommand(qemuMonitorPtr mon, const char *cmd,
char **reply);
+int qemuMonitorTextSendEventNMI(qemuMonitorPtr mon, unsigned int vcpu);
+int qemuMonitorTextSendEventKey(qemuMonitorPtr mon, const char *key);
#endif /* QEMU_MONITOR_TEXT_H */
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index bf94e70..676f473 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -2929,6 +2929,54 @@ done:
}
static int
+remoteDomainSendEventNMI(virDomainPtr domain, unsigned int vcpu)
+{
+ int rv = -1;
+ remote_domain_send_event_nmi_args args;
+ struct private_data *priv = domain->conn->privateData;
+
+ remoteDriverLock(priv);
+
+ make_nonnull_domain (&args.dom, domain);
+ args.vcpu = vcpu;
+
+ if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_SEND_EVENT_NMI,
+ (xdrproc_t) xdr_remote_domain_send_event_nmi_args, (char *) &args,
+ (xdrproc_t) xdr_void, (char *) NULL) == -1)
+ goto done;
+
+ rv = 0;
+
+done:
+ remoteDriverUnlock(priv);
+ return rv;
+}
+
+static int
+remoteDomainSendEventKey(virDomainPtr domain, const char *key)
+{
+ int rv = -1;
+ remote_domain_send_event_key_args args;
+ struct private_data *priv = domain->conn->privateData;
+
+ remoteDriverLock(priv);
+
+ make_nonnull_domain (&args.dom, domain);
+ args.key = (char *)key;
+
+ if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_SEND_EVENT_KEY,
+ (xdrproc_t) xdr_remote_domain_send_event_key_args, (char *) &args,
+ (xdrproc_t) xdr_void, (char *) NULL) == -1)
+ goto done;
+
+ rv = 0;
+
+done:
+ remoteDriverUnlock(priv);
+ return rv;
+}
+
+static int
remoteDomainSetVcpus (virDomainPtr domain, unsigned int nvcpus)
{
int rv = -1;
@@ -11296,6 +11344,8 @@ static virDriver remote_driver = {
remoteDomainSnapshotDelete, /* domainSnapshotDelete */
remoteQemuDomainMonitorCommand, /* qemuDomainMonitorCommand */
remoteDomainOpenConsole, /* domainOpenConsole */
+ remoteDomainSendEventNMI, /* domainSendEventNMI */
+ remoteDomainSendEventKey, /* domainSendEventKey */
};
static virNetworkDriver network_driver = {
diff --git a/src/remote/remote_protocol.c b/src/remote/remote_protocol.c
index 5604371..a829219 100644
--- a/src/remote/remote_protocol.c
+++ b/src/remote/remote_protocol.c
@@ -1463,6 +1463,28 @@ xdr_remote_domain_undefine_args (XDR *xdrs, remote_domain_undefine_args *objp)
}
bool_t
+xdr_remote_domain_send_event_nmi_args (XDR *xdrs, remote_domain_send_event_nmi_args *objp)
+{
+
+ if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+ return FALSE;
+ if (!xdr_u_int (xdrs, &objp->vcpu))
+ return FALSE;
+ return TRUE;
+}
+
+bool_t
+xdr_remote_domain_send_event_key_args (XDR *xdrs, remote_domain_send_event_key_args *objp)
+{
+
+ if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+ return FALSE;
+ if (!xdr_remote_nonnull_string (xdrs, &objp->key))
+ return FALSE;
+ return TRUE;
+}
+
+bool_t
xdr_remote_domain_set_vcpus_args (XDR *xdrs, remote_domain_set_vcpus_args *objp)
{
diff --git a/src/remote/remote_protocol.h b/src/remote/remote_protocol.h
index d9bf151..027ef88 100644
--- a/src/remote/remote_protocol.h
+++ b/src/remote/remote_protocol.h
@@ -2200,6 +2200,19 @@ struct remote_storage_vol_download_args {
u_int flags;
};
typedef struct remote_storage_vol_download_args remote_storage_vol_download_args;
+
+struct remote_domain_send_event_nmi_args {
+ remote_nonnull_domain dom;
+ u_int vcpu;
+};
+typedef struct remote_domain_send_event_nmi_args remote_domain_send_event_nmi_args;
+
+struct remote_domain_send_event_key_args {
+ remote_nonnull_domain dom;
+ remote_nonnull_string key;
+};
+typedef struct remote_domain_send_event_key_args remote_domain_send_event_key_args;
+
#define REMOTE_PROGRAM 0x20008086
#define REMOTE_PROTOCOL_VERSION 1
@@ -2413,6 +2426,8 @@ enum remote_procedure {
REMOTE_PROC_DOMAIN_MIGRATE_SET_MAX_SPEED = 207,
REMOTE_PROC_STORAGE_VOL_UPLOAD = 208,
REMOTE_PROC_STORAGE_VOL_DOWNLOAD = 209,
+ REMOTE_PROC_DOMAIN_SEND_EVENT_NMI = 210,
+ REMOTE_PROC_DOMAIN_SEND_EVENT_KEY = 211,
};
typedef enum remote_procedure remote_procedure;
@@ -2561,6 +2576,8 @@ extern bool_t xdr_remote_domain_create_with_flags_ret (XDR *, remote_domain_cre
extern bool_t xdr_remote_domain_define_xml_args (XDR *, remote_domain_define_xml_args*);
extern bool_t xdr_remote_domain_define_xml_ret (XDR *, remote_domain_define_xml_ret*);
extern bool_t xdr_remote_domain_undefine_args (XDR *, remote_domain_undefine_args*);
+extern bool_t xdr_remote_domain_send_event_nmi_args (XDR *, remote_domain_send_event_nmi_args*);
+extern bool_t xdr_remote_domain_send_event_key_args (XDR *, remote_domain_send_event_key_args*);
extern bool_t xdr_remote_domain_set_vcpus_args (XDR *, remote_domain_set_vcpus_args*);
extern bool_t xdr_remote_domain_set_vcpus_flags_args (XDR *, remote_domain_set_vcpus_flags_args*);
extern bool_t xdr_remote_domain_get_vcpus_flags_args (XDR *, remote_domain_get_vcpus_flags_args*);
@@ -2918,6 +2935,8 @@ extern bool_t xdr_remote_domain_create_with_flags_ret ();
extern bool_t xdr_remote_domain_define_xml_args ();
extern bool_t xdr_remote_domain_define_xml_ret ();
extern bool_t xdr_remote_domain_undefine_args ();
+extern bool_t xdr_remote_domain_send_event_nmi_args ();
+extern bool_t xdr_remote_domain_send_event_key_args ();
extern bool_t xdr_remote_domain_set_vcpus_args ();
extern bool_t xdr_remote_domain_set_vcpus_flags_args ();
extern bool_t xdr_remote_domain_get_vcpus_flags_args ();
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 675eccd..34600d7 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -817,6 +817,16 @@ struct remote_domain_undefine_args {
remote_nonnull_domain dom;
};
+struct remote_domain_send_event_nmi_args {
+ remote_nonnull_domain dom;
+ unsigned int vcpu;
+};
+
+struct remote_domain_send_event_key_args {
+ remote_nonnull_domain dom;
+ remote_nonnull_string key;
+};
+
struct remote_domain_set_vcpus_args {
remote_nonnull_domain dom;
int nvcpus;
@@ -2176,8 +2186,10 @@ enum remote_procedure {
REMOTE_PROC_DOMAIN_GET_BLKIO_PARAMETERS = 206,
REMOTE_PROC_DOMAIN_MIGRATE_SET_MAX_SPEED = 207,
REMOTE_PROC_STORAGE_VOL_UPLOAD = 208,
- REMOTE_PROC_STORAGE_VOL_DOWNLOAD = 209
+ REMOTE_PROC_STORAGE_VOL_DOWNLOAD = 209,
+ REMOTE_PROC_DOMAIN_SEND_EVENT_NMI = 210,
+ REMOTE_PROC_DOMAIN_SEND_EVENT_KEY = 211,
/*
* Notice how the entries are grouped in sets of 10 ?
* Nice isn't it. Please keep it this way when adding more.
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 17f5ad9..2163850 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -5447,6 +5447,8 @@ static virDriver testDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
static virNetworkDriver testNetworkDriver = {
diff --git a/src/uml/uml_driver.c b/src/uml/uml_driver.c
index e2bd5f2..756877d 100644
--- a/src/uml/uml_driver.c
+++ b/src/uml/uml_driver.c
@@ -2249,6 +2249,8 @@ static virDriver umlDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
umlDomainOpenConsole, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
static int
diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index 8bd27dd..73c5c87 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -8647,6 +8647,8 @@ virDriver NAME(Driver) = {
vboxDomainSnapshotDelete, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
virNetworkDriver NAME(NetworkDriver) = {
diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c
index b5e416b..eb64087 100644
--- a/src/vmware/vmware_driver.c
+++ b/src/vmware/vmware_driver.c
@@ -1007,6 +1007,8 @@ static virDriver vmwareDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
int
diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index 9f47722..bd82001 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -2141,6 +2141,8 @@ static virDriver xenUnifiedDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
xenUnifiedDomainOpenConsole, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
/**
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 27206a0..0f85ad8 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1885,6 +1885,8 @@ static virDriver xenapiDriver = {
NULL, /* domainSnapshotDelete */
NULL, /* qemuDomainMonitorCommand */
NULL, /* domainOpenConsole */
+ NULL, /* domainSendEventNMI */
+ NULL, /* domainSendEventKey */
};
/**
diff --git a/tools/virsh.c b/tools/virsh.c
index faeaf47..0b78c6d 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -2910,6 +2910,61 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
}
/*
+ * "sendevent" command
+ */
+static const vshCmdInfo info_sendevent[] = {
+ {"help", N_("send events to the guest")},
+ {"desc", N_("Send events (NMI or Keys) to the guest domain.")},
+ {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_sendevent[] = {
+ {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+ {"eventtype", VSH_OT_DATA, VSH_OFLAG_REQ, N_("the type of event (nmi or key)")},
+ {"eventcontent", VSH_OT_DATA, VSH_OFLAG_REQ, N_("content for the event.")},
+ {NULL, 0, 0, NULL}
+};
+
+
+static int
+cmdSendEvent(vshControl *ctl, const vshCmd *cmd)
+{
+ virDomainPtr dom;
+ const char *type;
+ int ret = TRUE;
+
+ if (!vshConnectionUsability(ctl, ctl->conn))
+ return FALSE;
+
+ if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+ return FALSE;
+
+ if (vshCommandOptString(cmd, "eventtype", &type) < 0)
+ return FALSE;
+
+ if (STREQ(type, "nmi")) {
+ int cpu;
+
+ if ((vshCommandOptInt(cmd, "eventcontent", &cpu) < 0)
+ || (virDomainSendEventNMI(dom, cpu) < 0))
+ ret = FALSE;
+ } else if (STREQ(type, "key")) {
+ const char *key;
+
+ if ((vshCommandOptString(cmd, "eventcontent", &key) < 0)
+ || (virDomainSendEventKey(dom, key) < 0))
+ ret = FALSE;
+ } else {
+ virDomainFree(dom);
+ vshError(ctl, _("Invalid event type: %s, only \"nmi\" or \"key\" supported currently."), type);
+ return FALSE;
+ }
+
+ virDomainFree(dom);
+ return ret;
+}
+
+/*
* "setmemory" command
*/
static const vshCmdInfo info_setmem[] = {
@@ -10693,6 +10748,7 @@ static const vshCmdDef domManagementCmds[] = {
{"setmaxmem", cmdSetmaxmem, opts_setmaxmem, info_setmaxmem},
{"setmem", cmdSetmem, opts_setmem, info_setmem},
{"setvcpus", cmdSetvcpus, opts_setvcpus, info_setvcpus},
+ {"sendevent", cmdSendEvent, opts_sendevent, info_sendevent},
{"shutdown", cmdShutdown, opts_shutdown, info_shutdown},
{"start", cmdStart, opts_start, info_start},
{"suspend", cmdSuspend, opts_suspend, info_suspend},