On Mon, Apr 18, 2011 at 05:50:01PM -0400, Cole Robinson wrote:
First, thanks a lot for the patch! I'm sure everyone will be glad to see
libguestfs integration in virt-manager.
> [This patch is incomplete and just for discussion]
>
> Using libguestfs inspection[1], we can find out a lot about a virtual
> machine, such as what operating system is installed, the OS version,
> the OS distro, the hostname, the (real) disk usage, what apps are
> installed, and lots more. It would be nice if virt-manager could use
> this information: for example we could change the generic "computer"
> icon into a Windows or Fedora logo if Windows or Fedora was installed
> in the virtual machine.
>
That icon bit was originally intended for the current design, but since we've
never really tracked OS info after install time, it never came about.
Particularly for OS type/version tracking, I think we need a few things:
- Everyone standardizes on some naming scheme, ideally libosinfo (though
  nothing is using it yet :/ ). libosinfo could also distribute the OS icons.
We sort of got bored of waiting for that train. We have a primitive
but rather effective system in libguestfs, which I explain at the end
of this email.
- Libvirt domain XML schema is extended with a field to track the
installed OS. For most libvirt drivers this would just be metadata
(vmware/esx the exception). Even though the data might be out of
date if the guest is upgraded, I think it has become clear that
there is a lot of value in tracking this in XML.
Yes, I agree. This also solves the persistence problem.
It's a bit of a shame that the <description> field in the libvirt XML
isn't structured, at least so that different applications could store
their own data there without it being trampled upon by users or other
applications. (CC'd to libvir-list in case they have any thoughts
about that).
- virt-manager can offer an option to set the XML OS value from libguestfs
  inspection. Maybe even some preference to do this automatically for VMs
  which have no XML value, or some warning if the inspected OS doesn't
  match the stored value.
None of this affects the validity of the patch btw.
> This patch adds a daemon thread running in the background which
> performs batch inspection on the virt-manager VMs. If inspection is
> successful, it then passes the results up to the UI (i.e. the
> vmmManager object) to be displayed.
>
> Currently I've just added the inspected hostname to the UI as you can
> see in this screenshot:
>
> http://oirase.annexia.org/tmp/vmm-hostnames.png
>
Ah, that's a nice idea, but I'd probably want to run it by the UX guys first.
I've removed that from the code already. It was just a quick hack to
show what could be done.
At the very least a first step would be showing the hostname under
Details->Overview, probably just add a new field under the Name:
Agreed.
> In future there is room for much greater customization of the UI,
> such as changing the icons, displaying disk usage and more.
>
> The background thread is transparent to the UI. Well, in fact because
> of a bug in libguestfs, it is currently NOT transparent because we
> weren't releasing the Python GIL across libguestfs calls. If you
> apply the following patch to libguestfs, then it *is* transparent and
> doesn't interrupt the UI:
>
> https://www.redhat.com/archives/libguestfs/2011-April/msg00076.html
>
Okay, so when this patch is committed we probably want to add a libguestfs
version check to make sure we aren't using bad bindings.
Agreed. Also I have pushed the Python fix into:
  stable branch 1.8:  version >= 1.8.6
  stable branch 1.10: version >= 1.10.1
  development branch: version >= 1.11.2
and these should be available in Fedora 14 onwards in a few weeks.
> I think that virt-manager should probably cache the inspection data
> across runs. However I don't know if virt-manager has a mechanism to
> do this already or if we'd need to add one.
Do you mean cache across runs of virt-manager? We could use gconf, we already
have examples of setting per-vm preferences like console scaling, we could use
the same to store hostname, os type/distro, etc. But then we have to decide
when to repoll this info, and if/how to allow the user to force repolling.
Or maybe the libvirt XML; see above.
> Also this patch doesn't yet change ./configure, so it'll just fail if
> the python-libguestfs package is not installed.
>
We don't really use configure for package checks actually. It's pretty easy in
the code to just try the import and disable the feature if the package isn't
available. Then distros just use rpm/dpkg deps to pull in the packages we want.
We should probably do the check entirely at runtime and make
python-libguestfs completely optional (but on Debian, map it to an
Enhances-type dependency). There's no need to have virt-manager
depending on libguestfs, since this is just an extra feature, not a
requirement.
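A purely runtime check along these lines would do it. This is a sketch, not
the actual virt-manager code; the helper name and the "started" sentinel are
invented for illustration:

```python
# Sketch of a runtime-only check: if the guestfs Python bindings are
# missing, quietly disable the inspection feature instead of making
# python-libguestfs a hard dependency.
try:
    import guestfs  # provided by the python-libguestfs package
    HAVE_LIBGUESTFS = True
except ImportError:
    HAVE_LIBGUESTFS = False

def create_inspection_thread_if_possible():
    # Hypothetical helper: only start the inspection thread when the
    # bindings are actually importable.
    if not HAVE_LIBGUESTFS:
        return None
    # ... start vmmInspection as in the patch ...
    return "started"
```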
> [1] http://libguestfs.org/virt-inspector.1.html
>
On 04/18/2011 01:01 PM, Richard W.M. Jones wrote:
> From 11278a7509e4edbcb28eac4c2c4a50fc8d68342e Mon Sep 17 00:00:00 2001
> From: Richard W.M. Jones <rjones(a)redhat.com>
> Date: Mon, 18 Apr 2011 17:44:51 +0100
> Subject: [PATCH] Add inspection thread to virt-manager.
>
> ---
> src/virtManager/engine.py | 14 ++++
> src/virtManager/inspection.py | 162 +++++++++++++++++++++++++++++++++++++++++
> src/virtManager/manager.py | 26 ++++++-
> 3 files changed, 200 insertions(+), 2 deletions(-)
> create mode 100644 src/virtManager/inspection.py
>
> diff --git a/src/virtManager/engine.py b/src/virtManager/engine.py
> index 383deb3..40d5b39 100644
> --- a/src/virtManager/engine.py
> +++ b/src/virtManager/engine.py
> @@ -45,6 +45,7 @@ from virtManager.create import vmmCreate
> from virtManager.host import vmmHost
> from virtManager.error import vmmErrorDialog
> from virtManager.systray import vmmSystray
> +from virtManager.inspection import vmmInspection
> import virtManager.uihelpers as uihelpers
> import virtManager.util as util
>
> @@ -239,6 +240,9 @@ class vmmEngine(vmmGObject):
> if not self.config.support_threading:
> logging.debug("Libvirt doesn't support threading, skipping.")
>
> + self.inspection_thread = None
> + self.create_inspection_thread()
> +
> # Counter keeping track of how many manager and details windows
> # are open. When it is decremented to 0, close the app or
> # keep running in system tray if enabled
> @@ -533,6 +537,16 @@ class vmmEngine(vmmGObject):
> logging.debug("Exiting app normally.")
> gtk.main_quit()
>
> + def create_inspection_thread(self):
> + if not self.config.support_threading:
> + logging.debug("No inspection thread because "
> + "libvirt is not thread-safe.")
> + return
> + self.inspection_thread = vmmInspection(self)
> + self.inspection_thread.daemon = True
> + self.inspection_thread.start()
> + return
> +
> def add_connection(self, uri, readOnly=None, autoconnect=False):
> conn = self._check_connection(uri)
> if conn:
> diff --git a/src/virtManager/inspection.py b/src/virtManager/inspection.py
> new file mode 100644
> index 0000000..70f2f52
> --- /dev/null
> +++ b/src/virtManager/inspection.py
> @@ -0,0 +1,162 @@
> +#
> +# Copyright (C) 2011 Red Hat, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
> +# MA 02110-1301 USA.
> +#
> +
> +import sys
> +import time
> +import traceback
> +from Queue import Queue
> +from threading import Thread
> +
> +from guestfs import GuestFS
> +
> +import logging
> +import util
> +
> +class vmmInspection(Thread):
> + _name = "inspection thread"
> + _wait = 60 # seconds
> +
> + def __init__(self, engine):
> + Thread.__init__(self, name=self._name)
> + self._q = Queue()
> + self._engine = engine
> + self._vmcache = dict()
> +
> + # Called by the main thread whenever a VM is added to vmlist.
> + def vm_added(self, connection, vmuuid):
> + obj = (connection, vmuuid)
> + self._q.put(obj)
> +
> + def run(self):
> + # Wait a few seconds before we do anything. This prevents
> + # inspection from being a burden for initial virt-manager
> + # interactivity (although it shouldn't affect interactivity at
> + # all).
> + logging.debug("%s: waiting" % self._name)
> + time.sleep(self._wait)
> +
> + logging.debug("%s: ready" % self._name)
> +
> + while True:
> + obj = self._q.get(True)
> + (connection, vmuuid) = obj
> +
> + logging.debug("%s: %s: processing started, connection = %s" %
> + (self._name, vmuuid, connection))
> + try:
> + self._process(connection, vmuuid)
> + except:
> + logging.debug("%s: %s: processing raised:\n%s" %
> + (self._name, vmuuid, traceback.format_exc()))
> +
> + self._q.task_done()
> + logging.debug("%s: %s: processing done" % (self._name, vmuuid))
> +
> + def _process(self, connection, vmuuid):
> + if vmuuid in self._vmcache:
> + self._update_ui(connection, vmuuid)
> + return
> +
> + if not connection or connection.is_remote():
> + logging.debug("%s: %s: no connection object or "
> + "connection is remote" % (self._name, vmuuid))
> + return
> + vm = connection.get_vm(vmuuid)
> + if not vm:
> + logging.debug("%s: %s: no vm object" % (self._name, vmuuid))
> + return
> +
> + xml = vm.get_xml()
> + if not xml:
> + logging.debug("%s: %s: cannot get domain XML" %
> + (self._name, vmuuid))
> + return
> +
These two calls won't ever return None; they'll either return something or
throw an exception.
OK, I will fix it.
> + g = GuestFS()
> + # One day we will be able to do ...
> + #g.add_libvirt_dom(...)
> + # but until that day comes ...
> + disks = self._get_disks_from_xml(xml)
> + for (disk, format) in disks:
> + g.add_drive_opts(disk, readonly=1, format=format)
> +
> + g.launch()
> +
> + # Inspect the operating system.
> + roots = g.inspect_os()
> + if len(roots) == 0:
> + logging.debug("%s: %s: no operating systems found" %
> + (self._name, vmuuid))
> + return
> +
> + # Arbitrarily pick the first root device.
> + root = roots[0]
> +
> + # Inspection.
> + name = g.inspect_get_type (root)
> + distro = g.inspect_get_distro (root)
> + hostname = g.inspect_get_hostname (root)
> +
> + self._vmcache[vmuuid] = (name, distro, hostname)
> +
> + # Force the libguestfs handle to close right now.
> + del g
> +
> + # Update the UI.
> + self._update_ui(connection, vmuuid)
> +
> + # Call back into the manager (if one is displayed) to update the
> + # inspection data for the particular VM.
> + def _update_ui(self, connection, vmuuid):
> + wm = self._engine.windowManager
> + if not wm: return
> +
> + name, distro, hostname = self._vmcache[vmuuid]
> +
> + wm.vm_set_inspection_data(connection, vmuuid, name, distro, hostname)
> +
The preferred way to do this would be with gtk signals. Have the
inspection thread do something like vm.set_hostname(), which will
trigger a self.emit("config-changed"), which is already wired up in
manager.py to refresh the row listing. This will let you drop
self._engine (generally if we are passing engine around there is
probably a cleaner way to do things, but we aren't too disciplined
on that at the moment).
OK, I will fix this too once I work out what gtk signals are ..
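For anyone else following along, the pattern Cole describes, stripped of the
actual GObject/gtk machinery, is roughly the observer pattern below. All
class, method, and signal names here are illustrative stand-ins, not the
real virt-manager or GObject API:

```python
# Minimal sketch of the suggested flow: the inspection thread calls a
# setter on the VM object, the setter emits "config-changed", and the
# manager's handler (already wired up in manager.py) refreshes the row.
class FakeVM(object):
    def __init__(self):
        self._handlers = {}
        self.hostname = None

    def connect(self, signal, handler):
        # Register a callback for a named signal (like gobject.connect).
        self._handlers.setdefault(signal, []).append(handler)

    def emit(self, signal):
        # Invoke every handler registered for the signal.
        for handler in self._handlers.get(signal, []):
            handler(self)

    def set_hostname(self, hostname):
        self.hostname = hostname
        self.emit("config-changed")  # triggers the UI refresh

refreshed = []

def refresh_row(vm):
    # In manager.py this would redraw the VM's row in the list.
    refreshed.append(vm.hostname)

vm = FakeVM()
vm.connect("config-changed", refresh_row)
vm.set_hostname("vm-f14x64")  # called from the inspection thread
```

The point of the design is that the inspection thread never touches the UI
or the engine directly; it only updates the model, and the signal fan-out
does the rest.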
> + # Get the list of disks belonging to this VM from the libvirt XML.
> + def _get_disks_from_xml(self, xml):
> + def f(doc, ctx):
> + nodes = ctx.xpathEval("//devices/disk")
> + disks = []
> + for node in nodes:
> + ctx.setContextNode(node)
> + types = ctx.xpathEval("./@type")
> +
> + if types and types[0].content:
> + t = types[0].content
> + disk = None
> +
> + if t == "file":
> + disk = ctx.xpathEval("./source/@file")
> + elif t == "block":
> + disk = ctx.xpathEval("./source/@dev")
> +
> + if disk and disk[0].content:
> + d = disk[0].content
> + types = ctx.xpathEval("./driver/@type")
> + if types and types[0].content:
> + disks.append((d, types[0].content))
> + else:
> + disks.append((d, None))
> +
> + return disks
This _get_disks_from_xml function isn't needed. You can just do:

    for disk in vm.get_disk_devices():
        path = disk.path
        driver_type = disk.driver_type
        disks.append((path, driver_type))

The 'disk' in this case is a virtinst.VirtualDisk instance, in case you
need other XML values in the future.
OK, that's a much better idea! I'll make these and other changes and
send an update in a few days.
Thanks for looking at this.
Rich.
----------------------------------------------------------------------
Operating systems are classified at two levels, 'type' and 'distro':
http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_type
http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_distro
The version number of the OS is classified separately:
http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_major_version
http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_minor_version
And finally in order to distinguish between certain flavours of
Windows and RHEL you also need the "product variant":
http://libguestfs.org/guestfs.3.html#guestfs_inspect_get_product_variant
In the patch I sent, 'type' and 'distro' fields are pushed up to the
vmmManager object already.
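As a sketch of how a consumer like virt-manager might fold these levels into
an icon choice, for example: prefer the specific distro, fall back to the OS
type, then to the generic icon. The mapping, icon names, and function name
below are invented for illustration; only the type/distro values come from
the inspection API:

```python
# Illustrative mapping from libguestfs inspection results ('type' and
# 'distro' as returned by inspect_get_type/inspect_get_distro) to an
# icon name. The icon names and fallback rules are invented here.
def pick_icon(os_type, distro):
    distro_icons = {
        "fedora": "icon-fedora",
        "rhel": "icon-rhel",
        "debian": "icon-debian",
        "ubuntu": "icon-ubuntu",
    }
    type_icons = {
        "linux": "icon-linux",
        "windows": "icon-windows",
    }
    # Most specific first: distro, then OS type, then the generic
    # "computer" icon virt-manager shows today.
    if distro in distro_icons:
        return distro_icons[distro]
    return type_icons.get(os_type, "icon-generic-computer")
```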
----------------------------------------------------------------------
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into Xen guests.
http://et.redhat.com/~rjones/virt-p2v