[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people that are, who can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu but I expect it will be soon
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This
behaves in very much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user visible switch to use vhost or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
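Concretely, the difference on the qemu command line might be as small
as this (a sketch only -- vhost-net is not merged, so the vhost syntax
is speculative):
  # today: tap backend, virtio-net emulated in qemu userspace
  qemu -net nic,model=virtio -net tap,fd=25 ...
  # with vhost-net: same pre-opened fd, backend moves into the kernel
  qemu -net nic,model=virtio -net vhost,fd=25 ...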
The more interesting invocation of vhost-net though is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there are only a small number of network devices,
so this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can be exposed as many
virtual functions (VFs). There are a few restrictions here though. The
biggest is that currently, you can only change the number of VFs by
reloading a kernel module, so it's really a parameter that must be set at
startup time.
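For example, with one of the Intel SR-IOV drivers (assuming its max_vfs
module parameter -- other drivers differ), the VF count is fixed at
module load time:
  # hypothetical: tear down and re-create the PF with 8 VFs;
  # this has to happen before any guests are started
  rmmod ixgbe
  modprobe ixgbe max_vfs=8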
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
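For instance, the domain XML might end up looking like this (a sketch
only, reusing today's interface syntax -- not a committed design):
  <interface type='network'>
    <!-- hypothetical: 'eth0' probed as a physical device (or SR-IOV VF)
         rather than a bridge, and handed to the guest via vhost-net -->
    <source network='eth0'/>
    <model type='virtio'/>
  </interface>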
Another model would be to have libvirt see an SR-IOV adapter as a
network pool, where it handles all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] [PATCH 0/4] Multiple problems with saving to block devices
by Daniel P. Berrange
This patch series makes it possible to save to a block device
instead of a plain file. There were multiple problems:
- When save failed, we might dereference a NULL pointer
- When save failed, we unlinked the device node !!
- The approach of using >> to append doesn't work with block devices
- CGroups was blocking QEMU access to the block device when enabled
One remaining problem is not in libvirt, but rather in QEMU. The QEMU
exec: based migration often fails to detect failure of the command
and will thus hang forever attempting a migration that'll never
succeed! Fortunately you can now work around this in libvirt using
the virsh domjobabort command.
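A minimal sketch of that workaround (the domain name is illustrative):
  # 'virsh save myguest /dev/vg/savevol' is hung because the exec:
  # migration command failed; abort the stuck job from another shell:
  virsh domjobabort myguest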
[libvirt] [PATCHv4 00/51] another round of snapshot patches
by Eric Blake
I think I've addressed most findings from round 3 - by implementing
the ability to redefine a snapshot, it becomes possible to restore
snapshot hierarchy when recreating a transient domain by the same
name. New goodies in this round: several bug fixes, add virsh
snapshot-edit, drop undefine --snapshots-full (you can only remove
snapshot metadata on undefine). I tested as I went, but this went
through so many rebases that there may be some nasties that snuck
in, but I wanted to get this posted now. I also know that I'm
missing at least one major feature requested in the v3 review:
namely, transient domains _should_ auto-remove snapshot metadata
files when they halt, but right now aren't doing that.
v3 was at:
https://www.redhat.com/archives/libvir-list/2011-August/msg01132.html
Also available here:
git fetch git://repo.or.cz/libvirt/ericb.git snapshot
or browse online at:
http://repo.or.cz/w/libvirt/ericb.git/shortlog/refs/heads/snapshot
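As a sketch of the redefine workflow enabled here (flag and command
names as proposed in this series):
  # dump a snapshot's XML, tweak it, and redefine it -- e.g. to restore
  # the snapshot hierarchy of a transient domain recreated by name
  virsh snapshot-dumpxml mydom snap1 > snap1.xml
  virsh snapshot-create mydom snap1.xml --redefine
  # the new snapshot-edit wraps the dumpxml/edit/redefine cycle
  virsh snapshot-edit mydom snap1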
I'm also trying to group things by the several bugzillas related to
various patches (looks like I still need to create a few):
Eric Blake (51):
https://bugzilla.redhat.com/show_bug.cgi?id=674537
snapshot: fix corner case on OOM during creation
https://bugzilla.redhat.com/show_bug.cgi?id=733762
snapshot: better events when starting paused
snapshot: fine-tune ability to start paused
snapshot: expose --running and --paused in virsh
snapshot: fine-tune qemu saved images starting paused
snapshot: improve reverting to qemu paused snapshots
snapshot: properly revert qemu to offline snapshots
snapshot: fine-tune qemu snapshot revert states
no bug filed yet... should be one about no stale metadata
snapshot: allow deletion of just snapshot metadata
snapshot: add snapshot-list --parent to virsh
https://bugzilla.redhat.com/show_bug.cgi?id=733529
snapshot: speed up snapshot location
snapshot: avoid crash when deleting qemu snapshots
snapshot: track current domain across deletion of children
snapshot: simplify acting on just children
no bug filed yet... should be one about no stale metadata
snapshot: let qemu discard only snapshot metadata
snapshot: identify which snapshots have metadata
snapshot: reflect new dumpxml and list options in virsh
snapshot: identify qemu snapshot roots
snapshot: allow recreation of metadata
snapshot: refactor virsh snapshot creation
snapshot: improve virsh snapshot-create, add snapshot-edit
snapshot: add qemu snapshot creation without metadata
no bug filed yet... should be one about snapshot migration
snapshot: add qemu snapshot redefine support
snapshot: prevent stranding snapshot data on domain destruction
snapshot: teach virsh about new undefine flags
snapshot: refactor some qemu code
snapshot: cache qemu-img location
snapshot: support new undefine flags in qemu
snapshot: prevent migration from stranding snapshot data
https://bugzilla.redhat.com/show_bug.cgi?id=638510
snapshot: refactor domain xml output
snapshot: allow full domain xml in snapshot
snapshot: correctly escape generated xml
snapshot: update rng to support full domain in xml
snapshot: store qemu domain details in xml
snapshot: additions to domain xml for disks
snapshot: reject transient disks where code is not ready
snapshot: introduce new deletion flag
snapshot: expose new delete flag in virsh
snapshot: allow halting after snapshot
snapshot: expose halt-after-creation in virsh
snapshot: wire up new qemu monitor command
snapshot: support extra state in snapshots
snapshot: add <disks> to snapshot xml
snapshot: also support disks by path
snapshot: add virsh domblklist command
snapshot: add flag for requesting disk snapshot
snapshot: wire up disk-only flag to snapshot-create
snapshot: reject unimplemented disk snapshot features
snapshot: make it possible to audit external snapshot
snapshot: wire up live qemu disk snapshots
snapshot: use SELinux and lock manager with external snapshots
docs/formatdomain.html.in | 40 +-
docs/formatsnapshot.html.in | 269 ++-
docs/schemas/Makefile.am | 1 +
docs/schemas/domain.rng | 2555 +-------------------
docs/schemas/{domain.rng => domaincommon.rng} | 32 +-
docs/schemas/domainsnapshot.rng | 84 +-
examples/domain-events/events-c/event-test.c | 37 +-
include/libvirt/libvirt.h.in | 66 +-
src/conf/domain_audit.c | 12 +-
src/conf/domain_audit.h | 4 +-
src/conf/domain_conf.c | 902 ++++++--
src/conf/domain_conf.h | 76 +-
src/esx/esx_driver.c | 38 +-
src/libvirt.c | 256 ++-
src/libvirt_private.syms | 8 +
src/libxl/libxl_conf.c | 5 +
src/libxl/libxl_driver.c | 11 +-
src/qemu/qemu_command.c | 5 +
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_driver.c | 1532 +++++++++---
src/qemu/qemu_hotplug.c | 18 +-
src/qemu/qemu_migration.c | 48 +-
src/qemu/qemu_migration.h | 2 -
src/qemu/qemu_monitor.c | 24 +
src/qemu/qemu_monitor.h | 4 +
src/qemu/qemu_monitor_json.c | 33 +
src/qemu/qemu_monitor_json.h | 4 +
src/qemu/qemu_monitor_text.c | 40 +
src/qemu/qemu_monitor_text.h | 4 +
src/qemu/qemu_process.c | 11 +-
src/uml/uml_driver.c | 56 +-
src/vbox/vbox_tmpl.c | 43 +-
src/xen/xend_internal.c | 12 +-
src/xenxs/xen_sxpr.c | 5 +
src/xenxs/xen_xm.c | 5 +
tests/domainsnapshotxml2xmlin/disk_snapshot.xml | 16 +
tests/domainsnapshotxml2xmlout/disk_snapshot.xml | 77 +
tests/domainsnapshotxml2xmlout/full_domain.xml | 35 +
.../qemuxml2argv-disk-snapshot.args | 7 +
.../qemuxml2argv-disk-snapshot.xml | 39 +
.../qemuxml2argv-disk-transient.xml | 27 +
tests/qemuxml2argvtest.c | 2 +
tests/virsh-optparse | 20 +
tools/virsh.c | 772 +++++-
tools/virsh.pod | 214 ++-
45 files changed, 3978 insertions(+), 3474 deletions(-)
copy docs/schemas/{domain.rng => domaincommon.rng} (98%)
create mode 100644 tests/domainsnapshotxml2xmlin/disk_snapshot.xml
create mode 100644 tests/domainsnapshotxml2xmlout/disk_snapshot.xml
create mode 100644 tests/domainsnapshotxml2xmlout/full_domain.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-snapshot.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-snapshot.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-transient.xml
--
1.7.4.4
[libvirt] [PATCHv4] add-vcpu-usage
by Hu Tao
show `vcpu usages' by `virt-top -1'
Before this patch, `virt-top -1' shows total cpu usages,
which equal `vcpu usages' + `hypervisor usages'. This
patch adds another column for domains showing `vcpu
usages'. An example is:
PHYCPU %CPU   example_domain
     0 10.4       10.4  0.8
     1  1.6        1.6  1.4
     2  2.6        2.6  2.6
     3  0.0        0.0  0.1
---
virt-top/virt_top.ml | 72 ++++++++++++++++++++++++++++++++++++-------------
1 files changed, 53 insertions(+), 19 deletions(-)
diff --git a/virt-top/virt_top.ml b/virt-top/virt_top.ml
index e2fe554..0dcb170 100644
--- a/virt-top/virt_top.ml
+++ b/virt-top/virt_top.ml
@@ -448,6 +448,7 @@ let collect, clear_pcpu_display_data =
(* Save pcpu_usages structures across redraws too (only for pCPU display). *)
let last_pcpu_usages = Hashtbl.create 13 in
+ let last_vcpu_usages = Hashtbl.create 13 in
let clear_pcpu_display_data () =
(* Clear out pcpu_usages used by PCPUDisplay display_mode
@@ -652,12 +653,17 @@ let collect, clear_pcpu_display_data =
(try
let domid = rd.rd_domid in
let maplen = C.cpumaplen nr_pcpus in
- let cpu_stats = D.get_cpu_stats rd.rd_dom nr_pcpus in
- let rec find_usages_from_stats = function
+ let cpu_stats = D.get_cpu_stats rd.rd_dom false in
+ let rec find_cpu_usages = function
| ("cpu_time", D.TypedFieldUInt64 usages) :: _ -> usages
- | _ :: params -> find_usages_from_stats params
+ | _ :: params -> find_cpu_usages params
| [] -> 0L in
- let pcpu_usages = Array.map find_usages_from_stats cpu_stats in
+ let rec find_vcpu_usages = function
+ | ("vcpu_time", D.TypedFieldUInt64 usages) :: _ -> usages
+ | _ :: params -> find_vcpu_usages params
+ | [] -> 0L in
+
+ let pcpu_usages = Array.map find_cpu_usages cpu_stats in
let maxinfo = rd.rd_info.D.nr_virt_cpu in
let nr_vcpus, vcpu_infos, cpumaps =
D.get_vcpus rd.rd_dom maxinfo maplen in
@@ -669,11 +675,19 @@ let collect, clear_pcpu_display_data =
(* Update last_pcpu_usages. *)
Hashtbl.replace last_pcpu_usages domid pcpu_usages;
- (match prev_pcpu_usages with
- | Some prev_pcpu_usages
+ (* vcpu usages *)
+ let vcpu_usages = Array.map find_vcpu_usages cpu_stats in
+ let prev_vcpu_usages =
+ try Some (Hashtbl.find last_vcpu_usages domid)
+ with Not_found -> None in
+ Hashtbl.replace last_vcpu_usages domid vcpu_usages;
+
+ (match prev_pcpu_usages, prev_vcpu_usages with
+ | Some prev_pcpu_usages, Some prev_vcpu_usages
when Array.length prev_pcpu_usages = Array.length pcpu_usages ->
- Some (domid, name, nr_vcpus, vcpu_infos, pcpu_usages,
- prev_pcpu_usages, cpumaps, maplen)
+ Some (domid, name, nr_vcpus, vcpu_infos, pcpu_usages,
+ prev_pcpu_usages, vcpu_usages, prev_vcpu_usages,
+ cpumaps, maplen)
| _ -> None (* ignore missing / unequal length prev_vcpu_infos *)
);
with
@@ -691,13 +705,24 @@ let collect, clear_pcpu_display_data =
List.iteri (
fun di (domid, name, nr_vcpus, vcpu_infos, pcpu_usages,
- prev_pcpu_usages, cpumaps, maplen) ->
+ prev_pcpu_usages, vcpu_usages, prev_vcpu_usages,
+ cpumaps, maplen) ->
(* Which pCPUs can this dom run on? *)
for p = 0 to Array.length pcpu_usages - 1 do
pcpus.(p).(di) <- pcpu_usages.(p) -^ prev_pcpu_usages.(p)
- done
+ done
) doms;
+ let vcpus = Array.make_matrix nr_pcpus nr_doms 0L in
+ List.iteri (
+ fun di (domid, name, nr_vcpus, vcpu_infos, pcpu_usages,
+ prev_pcpu_usages, vcpu_usages, prev_vcpu_usages,
+ cpumaps, maplen) ->
+ for p = 0 to Array.length vcpu_usages - 1 do
+ vcpus.(p).(di) <- vcpu_usages.(p) -^ prev_vcpu_usages.(p)
+ done
+ ) doms;
+
(* Sum the CPU time used by each pCPU, for the %CPU column. *)
let pcpus_cpu_time = Array.map (
fun row ->
@@ -709,7 +734,7 @@ let collect, clear_pcpu_display_data =
Int64.to_float !cpu_time
) pcpus in
- Some (doms, pcpus, pcpus_cpu_time)
+ Some (doms, pcpus, vcpus, pcpus_cpu_time)
) else
None in
@@ -913,7 +938,7 @@ let redraw =
loop domains_lineno doms
| PCPUDisplay -> (*---------- Showing physical CPUs ----------*)
- let doms, pcpus, pcpus_cpu_time =
+ let doms, pcpus, vcpus, pcpus_cpu_time =
match pcpu_display with
| Some p -> p
| None -> failwith "internal error: no pcpu_display data" in
@@ -922,9 +947,9 @@ let redraw =
let dom_names =
String.concat "" (
List.map (
- fun (_, name, _, _, _, _, _, _) ->
+ fun (_, name, _, _, _, _, _, _, _, _) ->
let len = String.length name in
- let width = max (len+1) 7 in
+ let width = max (len+1) 12 in
pad width name
) doms
) in
@@ -941,18 +966,27 @@ let redraw =
addch ' ';
List.iteri (
- fun di (domid, name, _, _, _, _, _, _) ->
+ fun di (domid, name, _, _, _, _, _, _, _, _) ->
let t = pcpus.(p).(di) in
+ let tv = vcpus.(p).(di) in
let len = String.length name in
- let width = max (len+1) 7 in
- let str =
+ let width = max (len+1) 12 in
+ let str_pcpu =
if t <= 0L then ""
else (
let t = Int64.to_float t in
let percent = 100. *. t /. total_cpu_per_pcpu in
- sprintf "%s " (Show.percent percent)
+ sprintf "%s" (Show.percent percent)
) in
- addstr (pad width str);
+ let str_vcpu =
+ if tv <= 0L then ""
+ else (
+ let tv = Int64.to_float tv in
+ let percent = 100. *. tv /. total_cpu_per_pcpu in
+ sprintf "%s" (Show.percent percent)
+ ) in
+ let str = sprintf "%s %s" str_pcpu str_vcpu in
+ addstr (pad width str);
()
) doms
) pcpus;
--
1.7.1
[libvirt] Bug report 826704 - sanlock releases all resources on virsh detach-disk
by Frido Roose
Hello,
I logged a bug about virsh detach-disk cleaning up all sanlock resources for the domain instead of only the device in question.
After a quick look into the code, I think a new method similar to virLockManagerSanlockAddResource is needed in case of detaching a disk from the domain, like e.g. virLockManagerSanlockDelResource (…).
Now it looks like virLockManagerSanlockRelease is called, which releases all resources:
if ((rv = sanlock_release(-1, priv->vm_pid, SANLK_REL_ALL, 0, NULL)) < 0) {
virsh detach-disk should then call virLockManagerSanlockDelResource for the given resource imo.
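A minimal sketch of what that could look like (hypothetical -- no such
function exists yet; it just uses the explicit resource-list form of
sanlock_release() instead of SANLK_REL_ALL):
  /* hypothetical counterpart to virLockManagerSanlockAddResource:
   * release exactly one resource instead of SANLK_REL_ALL */
  static int
  virLockManagerSanlockDelResource(virLockManagerPtr lock,
                                   struct sanlk_resource *res)
  {
      virLockManagerSanlockPrivatePtr priv = lock->privateData;
      int rv;

      if ((rv = sanlock_release(-1, priv->vm_pid, 0, 1, &res)) < 0)
          return -1; /* report rv/errno as the existing code does */
      return 0;
  }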
Any thoughts about this or why it is implemented like this?
--
Frido Roose
[libvirt] [RFC 0/5] block: File descriptor passing using -open-hook-fd
by Stefan Hajnoczi
Libvirt can take advantage of SELinux to restrict the QEMU process and prevent
it from opening files that it should not have access to. This improves
security because it prevents an attacker who manages to gain control of
QEMU from escaping the process.
NFS has been a pain point for SELinux because it does not support labels (which
I believe are stored in extended attributes). In other words, it's not
possible to use SELinux goodness on QEMU when image files are located on NFS.
Today we have to allow QEMU access to any file on the NFS export rather than
restricting specifically to the image files that the guest requires.
File descriptor passing is a solution to this problem and might also come in
handy elsewhere. Libvirt or another external process chooses files which QEMU
is allowed to access and provides just those file descriptors - QEMU cannot
open the files itself.
This series adds the -open-hook-fd command-line option. Whenever QEMU needs to
open an image file it sends a request over the given UNIX domain socket. The
response includes the file descriptor or an errno on failure. Please see the
patches for details on the protocol.
The -open-hook-fd approach allows QEMU to support file descriptor passing
without changing -drive. It also supports snapshot_blkdev and other commands
that re-open image files.
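For reference, the kernel mechanism underneath is ordinary SCM_RIGHTS
passing over a UNIX domain socket; here is a minimal sketch of the
receiving side (illustrative only -- the actual request/response
framing is defined in the patches):
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* receive one file descriptor over a connected UNIX socket;
   * returns the fd or -1 on error */
  static int recv_fd(int sock)
  {
      char dummy;
      struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
      char ctrl[CMSG_SPACE(sizeof(int))];
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
      };
      struct cmsghdr *cmsg;
      int fd = -1;

      if (recvmsg(sock, &msg, 0) <= 0)
          return -1;
      for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
          if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
              memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
              break;
          }
      }
      return fd;
  }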
Anthony Liguori <aliguori@us.ibm.com> wrote most of these patches. I added a
demo -open-hook-fd server and some small fixes. Since Anthony is
traveling right now I'm sending the RFC for discussion.
Anthony Liguori (3):
block: add open() wrapper that can be hooked by libvirt
block: add new command line parameter and protocol description
block: plumb up open-hook-fd option
Stefan Hajnoczi (2):
osdep: add qemu_recvmsg() wrapper
Example -open-hook-fd server
block.c | 107 ++++++++++++++++++++++++++++++++++++++
block.h | 2 +
block/raw-posix.c | 18 +++----
block/raw-win32.c | 2 +-
block/vdi.c | 2 +-
block/vmdk.c | 6 +--
block/vpc.c | 2 +-
block/vvfat.c | 4 +-
block_int.h | 12 +++++
osdep.c | 46 +++++++++++++++++
qemu-common.h | 2 +
qemu-options.hx | 42 +++++++++++++++
test-fd-passing.c | 147 +++++++++++++++++++++++++++++++++++++++++++++++++++++
vl.c | 3 ++
14 files changed, 378 insertions(+), 17 deletions(-)
create mode 100644 test-fd-passing.c
--
1.7.10
[libvirt] [PATCH 00/12] Fine grained access control for libvirt APIs
by Daniel P. Berrange
This is a repost of
https://www.redhat.com/archives/libvir-list/2012-January/msg00907.html
which got no comments last time out.
This series of patches is the minimal set required to get a working proof
of concept implementation of fine grained access control in libvirt.
This demonstrates:
- Obtaining a client identity from a socket
- Ensuring RPC calls are executed with the correct identity set
- A policykit access driver that checks based on access vector alone
- A SELinux access driver that checks based on access vector + object
- A set of hooks in the QEMU driver to protect virDomainObjPtr access
Things that are not done
- APIs for changing the real/effective identity post-connect
- A simple RBAC access driver for doing (Access vector, object)
checks
- SELinux policy for the SELinux driver
- Access control hooks on all other QEMU driver methods
- Access control hooks in LXC, UML, other libvirtd side drivers
- Access control hooks in storage, network, interface, etc drivers
- Document WTF todo to propagate SELinux contexts across TCP
sockets using IPSec. Any hints welcome...
- Lots more I can't think of right now
I should note that the policykit driver is mostly useless because it
is unable to let you do checks on anything other than permission name
and UNIX process ID at this time. So what I've implemented with the
polkit driver is really little more than a slightly more fine grained
version of the VIR_CONNECT_RO flag. In theory it is supposed to be
extendable to allow other types of identity information besides
the process ID, and to include some kind of object identifiers in
the permission check, but no one seems to be attacking this.
So I expect the simple RBAC driver to be the most used one in the
common case usage of libvirt, and of course the SELinux driver.
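To make the model concrete, here is a toy sketch of the check shape the
drivers implement (hypothetical types -- the real hooks live inside
libvirtd's drivers):
  #include <stdbool.h>
  #include <sys/types.h>

  /* toy model: a check takes (identity, permission, object).
   * polkit today only sees the permission name + process ID;
   * the SELinux/RBAC drivers can match on the object too. */
  typedef struct {
      uid_t uid;
      const char *selinux_context;
  } virIdentityToy;

  static bool
  checkPermission(const virIdentityToy *id,
                  const char *perm,     /* e.g. "domain:start" */
                  const char *object)   /* e.g. a domain name, or NULL */
  {
      /* a real driver would consult polkit, an RBAC table, or
       * the SELinux policy here; this toy allows root only */
      (void)perm; (void)object;
      return id->uid == 0;
  }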
[libvirt] [PATCH] Wire up <loader> to set the QEMU BIOS path
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
* src/qemu/qemu_command.c: Wire up -bios with <loader>
* tests/qemuxml2argvdata/qemuxml2argv-bios.args,
tests/qemuxml2argvdata/qemuxml2argv-bios.xml: Expand
existing BIOS test case to cover <loader>
---
src/qemu/qemu_command.c | 9 +++++++++
tests/qemuxml2argvdata/qemuxml2argv-bios.args | 3 ++-
tests/qemuxml2argvdata/qemuxml2argv-bios.xml | 1 +
3 files changed, 12 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ea9431f..c82f5bc 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -4052,6 +4052,11 @@ qemuBuildCommandLine(virConnectPtr conn,
if (enableKVM)
virCommandAddArg(cmd, "-enable-kvm");
+ if (def->os.loader) {
+ virCommandAddArg(cmd, "-bios");
+ virCommandAddArg(cmd, def->os.loader);
+ }
+
/* Set '-m MB' based on maxmem, because the lower 'memory' limit
* is set post-startup using the balloon driver. If balloon driver
* is not supported, then they're out of luck anyway. Update the
@@ -7581,6 +7586,10 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
WANT_VALUE();
if (!(def->os.kernel = strdup(val)))
goto no_memory;
+ } else if (STREQ(arg, "-bios")) {
+ WANT_VALUE();
+ if (!(def->os.loader = strdup(val)))
+ goto no_memory;
} else if (STREQ(arg, "-initrd")) {
WANT_VALUE();
if (!(def->os.initrd = strdup(val)))
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-bios.args b/tests/qemuxml2argvdata/qemuxml2argv-bios.args
index f9727c4..ac98000 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-bios.args
+++ b/tests/qemuxml2argvdata/qemuxml2argv-bios.args
@@ -1,5 +1,6 @@
LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test QEMU_AUDIO_DRV=none \
-/usr/bin/qemu -S -M pc -m 1024 -smp 1 -nodefaults -device sga \
+/usr/bin/qemu -S -M pc -bios /usr/share/seabios/bios.bin \
+-m 1024 -smp 1 -nodefaults -device sga \
-monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c \
-hda /dev/HostVG/QEMUGuest1 -serial pty \
-usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 \
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-bios.xml b/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
index cfc5587..ac15d45 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
@@ -6,6 +6,7 @@
<vcpu>1</vcpu>
<os>
<type arch='i686' machine='pc'>hvm</type>
+ <loader>/usr/share/seabios/bios.bin</loader>
<boot dev='hd'/>
<bootmenu enable='yes'/>
<bios useserial='yes'/>
--
1.7.7.6
[libvirt] [Patch v2] vmware: detect when a domain was shut down from the inside
by Jean-Baptiste Rouault
This patch adds an internal function vmwareUpdateVMStatus to
update the real state of the domain. This function is used in
various places in the driver, in particular to detect when
the domain has been shut down by the user with the "halt"
command.
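For context, the state detection parses `vmrun list' output, which
(assuming VMware Workstation/Server's format) looks roughly like:
  Total running VMs: 2
  /var/lib/vmware/guest1/guest1.vmx
  /var/lib/vmware/guest2/guest2.vmx
Lines that are not absolute paths are skipped, and a domain whose
resolved .vmx path is absent from the list is marked shut off.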
---
v2:
- Replace internal function vmwareGetVMStatus by vmwareUpdateVMStatus
- Improve vmrun list output parsing
- variable initialization and coding-style fixes
src/vmware/vmware_driver.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 95 insertions(+), 0 deletions(-)
diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c
index 8f9d922..53e28e7 100644
--- a/src/vmware/vmware_driver.c
+++ b/src/vmware/vmware_driver.c
@@ -28,6 +28,7 @@
#include "datatypes.h"
#include "virfile.h"
#include "memory.h"
+#include "util.h"
#include "uuid.h"
#include "command.h"
#include "vmx.h"
@@ -181,6 +182,64 @@ vmwareGetVersion(virConnectPtr conn, unsigned long *version)
}
static int
+vmwareUpdateVMStatus(struct vmware_driver *driver, virDomainObjPtr vm)
+{
+ virCommandPtr cmd;
+ char *outbuf = NULL;
+ char *vmxAbsolutePath = NULL;
+ char *parsedVmxPath = NULL;
+ char *str;
+ char *saveptr = NULL;
+ bool found = false;
+ int oldState = virDomainObjGetState(vm, NULL);
+ int newState;
+ int ret = -1;
+
+ cmd = virCommandNewArgList(VMRUN, "-T", vmw_types[driver->type],
+ "list", NULL);
+ virCommandSetOutputBuffer(cmd, &outbuf);
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+
+ if (virFileResolveAllLinks(((vmwareDomainPtr) vm->privateData)->vmxPath,
+ &vmxAbsolutePath) < 0)
+ goto cleanup;
+
+ for(str = outbuf ; (parsedVmxPath = strtok_r(str, "\n", &saveptr)) != NULL;
+ str = NULL) {
+
+ if (parsedVmxPath[0] != '/')
+ continue;
+
+ if (STREQ(parsedVmxPath, vmxAbsolutePath)) {
+ found = true;
+ /* If the vmx path is in the output, the domain is running or
+ * is paused but we have no way to detect if it is paused or not. */
+ if (oldState == VIR_DOMAIN_PAUSED)
+ newState = oldState;
+ else
+ newState = VIR_DOMAIN_RUNNING;
+ break;
+ }
+ }
+
+ if (!found) {
+ vm->def->id = -1;
+ newState = VIR_DOMAIN_SHUTOFF;
+ }
+
+ virDomainObjSetState(vm, newState, 0);
+
+ ret = 0;
+
+cleanup:
+ virCommandFree(cmd);
+ VIR_FREE(outbuf);
+ VIR_FREE(vmxAbsolutePath);
+ return ret;
+}
+
+static int
vmwareStopVM(struct vmware_driver *driver,
virDomainObjPtr vm,
virDomainShutoffReason reason)
@@ -331,6 +390,9 @@ vmwareDomainShutdownFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
vmwareError(VIR_ERR_INTERNAL_ERROR, "%s",
_("domain is not in running state"));
@@ -485,6 +547,8 @@ vmwareDomainReboot(virDomainPtr dom, unsigned int flags)
vmwareSetSentinal(cmd, vmw_types[driver->type]);
vmwareSetSentinal(cmd, vmxPath);
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
vmwareError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -596,6 +660,9 @@ vmwareDomainCreateWithFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
vmwareError(VIR_ERR_OPERATION_INVALID,
"%s", _("Domain is already running"));
@@ -645,6 +712,9 @@ vmwareDomainUndefineFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
vm->persistent = 0;
} else {
@@ -874,6 +944,21 @@ vmwareDomainXMLFromNative(virConnectPtr conn, const char *nativeFormat,
return xml;
}
+static void vmwareDomainObjListUpdateDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *data)
+{
+ struct vmware_driver *driver = data;
+ virDomainObjPtr vm = payload;
+ virDomainObjLock(vm);
+ vmwareUpdateVMStatus(driver, vm);
+ virDomainObjUnlock(vm);
+}
+
+static void
+vmwareDomainObjListUpdateAll(virDomainObjListPtr doms, struct vmware_driver *driver)
+{
+ virHashForEach(doms->objs, vmwareDomainObjListUpdateDomain, driver);
+}
+
static int
vmwareNumDefinedDomains(virConnectPtr conn)
{
@@ -881,6 +966,7 @@ vmwareNumDefinedDomains(virConnectPtr conn)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListNumOfDomains(&driver->domains, 0);
vmwareDriverUnlock(driver);
@@ -894,6 +980,7 @@ vmwareNumDomains(virConnectPtr conn)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListNumOfDomains(&driver->domains, 1);
vmwareDriverUnlock(driver);
@@ -908,6 +995,7 @@ vmwareListDomains(virConnectPtr conn, int *ids, int nids)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListGetActiveIDs(&driver->domains, ids, nids);
vmwareDriverUnlock(driver);
@@ -922,6 +1010,7 @@ vmwareListDefinedDomains(virConnectPtr conn,
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListGetInactiveNames(&driver->domains, names, nnames);
vmwareDriverUnlock(driver);
return n;
@@ -944,6 +1033,9 @@ vmwareDomainGetInfo(virDomainPtr dom, virDomainInfoPtr info)
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
info->state = virDomainObjGetState(vm, NULL);
info->cpuTime = 0;
info->maxMem = vm->def->mem.max_balloon;
@@ -979,6 +1071,9 @@ vmwareDomainGetState(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
*state = virDomainObjGetState(vm, reason);
ret = 0;
--
1.7.9.1