[Libvir] PATCH: Separate QEMU impl of nodeinfo API
by Daniel P. Berrange
While fixing the QEMU nodeinfo API to correctly deal with the case where CPU
sockets have sparse numbering (eg sockets 0 & 3 are populated), I realized
that OpenVZ doesn't have a nodeinfo API, and its requirements are basically
identical to the QEMU driver's. So this patch moves the impl of the nodeinfo
API into a nodeinfo.c file, and makes both the QEMU and OpenVZ drivers call
out to this shared impl. I also put #ifdef __linux__ around the impl, since
code reading /proc/cpuinfo is never going to work on any non-Linux platform.
For non-Linux platforms I just return -1, which'll get treated as not-implemented.
If the QEMU driver is ported to work on Solaris, the nodeinfo.c file can be
easily extended for their custom impl. Finally, I'm adding a testcase with
a bunch of example /proc/cpuinfo files to validate correctness.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] Release of libvirt-0.3.1
by Daniel Veillard
Quite a few things were fixed since 0.3.0, and it was looking like
a new release at this point would be a good idea. Available at
ftp://libvirt.org/libvirt/
* Documentation:
- index to remote page
- script to test certificates
- IPv6 remote support docs (Daniel Berrange)
- document VIRSH_DEFAULT_CONNECT_URI in virsh man page (David Lutterkort)
- Relax-NG early grammar for the network XML (David Lutterkort)
* Bug fixes:
- leaks in disk XML parsing (Masayuki Sunou)
- hypervisor alignment call problems on PPC64 (Christian Ehrhardt)
- dead client registration in daemon event loop (Daniel Berrange)
- double free in error handling (Daniel Berrange)
- close on exec for log file descriptors in the daemon (Daniel Berrange)
- avoid caching problem in remote daemon (Daniel Berrange)
- avoid crash after QEmu domain failure (Daniel Berrange)
* Improvements:
- checks of x509 certificates and keys (Daniel Berrange)
- error reports in the daemon (Daniel Berrange)
- checking of Ethernet MAC addresses in XML configs (Masayuki Sunou)
- support for a new clock switch between UTC and localtime (Daniel Berrange)
- early version of OpenVZ support (Shuveb Hussain)
- support for input devices on PS/2 and USB buses (Daniel Berrange)
- more tests especially the QEmu support (Daniel Berrange)
- range check in credit scheduler (with Saori Fukuta and Atsushi Sakai)
- add support for the listen VNC parameter in QEmu and fix command line arg (Daniel Berrange)
* Cleanups:
- debug tracing (Richard Jones)
- removal of --with-qemud-pid-file (Richard Jones)
- remove unused virDeviceMode
- new util module for code shared between drivers (Shuveb Hussain)
- xen header location detection (Richard Jones)
Thanks to everybody who helped with this release, with bug reports or patches!
Daniel
--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard(a)redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/
[Libvir] PATCH: Make QEMU driver honour 'listen' flag for VNC
by Daniel P. Berrange
In QEMU 0.9.0 or later it is possible to tell QEMU to only listen on a
particular IP address. This patch adapts the code so that it honours the
'listen' attribute on the <graphics> tag if using QEMU >= 0.9.0. It also
re-enables the tests for this capability that I had temporarily disabled.
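For context, a guest config using the attribute might look like the following sketch (port and address values are illustrative); with QEMU >= 0.9.0 the driver can then restrict the VNC server to that address instead of binding to all interfaces:

```xml
<graphics type='vnc' port='5900' listen='127.0.0.1'/>
```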
src/qemu_conf.c | 37 +++++++++++++-----
src/qemu_conf.h | 1
tests/qemuxml2argvdata/qemuxml2argv-graphics-sdl.args | 2
tests/qemuxml2argvtest.c | 4 -
tests/qemuxml2xmltest.c | 2
5 files changed, 32 insertions(+), 14 deletions(-)
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] PATCH: Fix crash in cleanup when VM creation fails
by Daniel P. Berrange
If using the 'virDomainCreateLinux' call to create a VM, a so-called
'transient' domain will be created - ie one without any config file.
There is special code in the qemudShutdownVMDaemon method to clean up
the resources associated with such domains, in particular freeing
the struct qemud_vm object. Unfortunately, in the virDomainCreateLinux
codepath this is a problem, because we still need the 'struct qemud_vm'
object in certain edge cases, and so the caller has to free it. We
currently have a double free() in that scenario. This patch removes
the call to qemudFreeVMDaemon from qemudShutdownVMDaemon. Instead it
is now always the caller's responsibility to clean up after transient
domains.
qemu_driver.c | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] Remote daemon & virDomainFree interaction
by Daniel P. Berrange
In looking at a problem with domain object cleanup in virt-manager I came
across a problem in the remote driver - well, in the internal driver API
itself, actually. Specifically, the implementation of virDomainFree() never
calls into the driver API - it simply uses virFreeDomain() to release the
memory associated with the virDomainPtr object.
Couple this with the remote driver, though, and virDomainPtr objects in the
remote daemon never get released, because the virDomainFree call is never
propagated over the wire to the server.
It's quite easy to see this in practice. Simply add a printf to the impl
of virDomainLookupByName which prints out the ref count, then run either
virsh or virt-manager for a while:
Get info QEMUGuest1 69 c7a5fdbd-edaf-9455-926a-d65c16db1809
Get info QEMUGuest1 70 c7a5fdbd-edaf-9455-926a-d65c16db1809
Get info QEMUGuest1 71 c7a5fdbd-edaf-9455-926a-d65c16db1809
We need to make virDomainFree call into the driver API, and also make sure
that the remote driver implements it.
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] [PATCH] virDomainMigrate version 4 (for discussion only!)
by Richard W.M. Jones
This is version 4 of the virDomainMigrate patch. It includes remote
support, but no qemu, virsh or python support yet. It also needs lots more
testing, which I intend to do tomorrow.
Interface
---------
The interface is now this:
virDomainPtr
virDomainMigrate (virDomainPtr domain, virConnectPtr dconn,
                  unsigned long flags, const char *dname,
                  const char *uri, unsigned long resource);
The caller may set dname, uri and resource to 0/NULL and forget about
them. Or else the caller may set, in particular, uri to allow for more
complicated migration strategies (especially for qemu).
https://www.redhat.com/archives/libvir-list/2007-July/msg00249.html
Driver support
--------------
As outlined in the diagram in this email:
https://www.redhat.com/archives/libvir-list/2007-July/msg00264.html
migration happens in two stages.
Firstly we send a "prepare" message to the destination host. The
destination host may reply with a cookie. It may also suggest a URI (in
the current Xen implementation it just returns gethostname). Secondly
we send a "perform" message to the source host.
Correspondingly, there are two new driver API functions:
typedef int
(*virDrvDomainMigratePrepare) (virConnectPtr dconn,
                               char **cookie,
                               int *cookielen,
                               const char *uri_in,
                               char **uri_out,
                               unsigned long flags,
                               const char *dname,
                               unsigned long resource);

typedef int
(*virDrvDomainMigratePerform) (virDomainPtr domain,
                               const char *cookie,
                               int cookielen,
                               const char *uri,
                               unsigned long flags,
                               const char *dname,
                               unsigned long resource);
Remote support
--------------
To make this work in the remote case I have had to export two private
functions from the API which only remote should call:
__virDomainMigratePrepare
__virDomainMigratePerform
The reason for this is that libvirtd is just linked to the regular
libvirt.so, so can only make calls into libvirt through exported symbols
in the dynamic symbol table.
There are two corresponding wire messages
(REMOTE_PROC_DOMAIN_MIGRATE_PREPARE and
REMOTE_PROC_DOMAIN_MIGRATE_PERFORM) but they just do dumb argument
shuffling, albeit rather complicated because of the number of arguments
passed in and out.
The complete list of messages which go across the wire during a
migration is:
client -- prepare --> destination host
client <-- prepare reply -- destination host
client -- perform --> source host
client <-- perform reply -- source host
client -- lookupbyname --> destination host
client <-- lookupbyname reply -- destination host
Xen URIs
--------
Xen recognises the following forms of URI:
hostname
hostname:port
tcp://hostname/
tcp://hostname:port/
Capabilities
------------
I have extended capabilities with <migration_features>. For Xen this is:
<capabilities>
  <host>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
  </host>
</capabilities>
Rich.
--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
Street, Windsor, Berkshire, SL4 1TE, United Kingdom. Registered in
England and Wales under Company Registration No. 03798903
[Libvir] OpenVZ XML format and VPS properties get/set interface [long]
by Shuveb Hussain
Hi,
I started a discussion on OpenVZ XML format a while ago. But let me do
it again with more explanation about OpenVZ this time, so that others
can understand how it is different and how this can best fit into the
libvirt model of doing things.
Terminology: Virtual Private Server (VPS), Virtual Environment (VE) and
Domain are all the same.
OpenVZ is largely about providing QoS to its users. About 20 carefully
chosen parameters govern various resources such as memory, CPU, disk
and network. These are then used to provide minimum guarantees on any
system running OpenVZ. Most of the time, these are limits that can be
set per Virtual Private Server (VPS).
In Xen or QEMU, if a disk image is available (Xen needs an additional
kernel), it is possible to run the domain, and then forget all about it
after the domain is shut off. This is not possible in OpenVZ. When a new
VPS/VE/Domain needs to be created, it needs a file system. This needs to
be created, along with its related configuration files, in specific
locations. Only after this can it be started. There is a "destroy"
command available in OpenVZ, which is different from the destroy in
libvirt: it will completely erase the file system and remove the
related config file as well.
Since there are many configurable parameters, the OpenVZ tools provide
two sample templates, or profiles, on which newly created Virtual
Environments (VEs) can be based. So, during VPS creation, rather than
taking a million parameters, the name of the profile is taken as an
argument and the variables in the file are used to create the VE. These
values can later be overridden, and can also optionally be stored in the
VE's private config file to ensure persistence across reboots.
Since there are many parameters needed during VE creation, using the
profile name is practical. So, in the proposed XML file, I'm using the
profile name.
OpenVZ has its own config file format. We are storing the UUID there in
a comment, since UUIDs are not used by OpenVZ. When a VE is created,
the easiest way to do it is using a so-called template cache. This is
just a tar file of a Linux distro FS that is used to create a new file
system for a VE. There are no disk images. The VE root fs resides on the
host file system as a bunch of files and directories. A few template
caches are usually available, say one based on Debian, one based on
Fedora Core and another based on Suse. The user can choose which one to
use while creating a new VE. However, the name of the template cache is
not stored anywhere once the VE filesystem is created. I think one more
comment is needed in the per-VE config file for this, just as we are
storing the UUID.
Here is a sample template. This one is called vps.basic and comes with
the OpenVZ tools:
-----------------------------------------------------------------
ONBOOT="no"
# UBC parameters (in form of barrier:limit)
# Primary parameters
AVNUMPROC="40:40"
NUMPROC="65:65"
NUMTCPSOCK="80:80"
NUMOTHERSOCK="80:80"
VMGUARPAGES="6144:2147483647"
# Secondary parameters
KMEMSIZE="2752512:2936012"
TCPSNDBUF="319488:524288"
TCPRCVBUF="319488:524288"
OTHERSOCKBUF="132096:336896"
DGRAMRCVBUF="132096:132096"
OOMGUARPAGES="6144:2147483647"
# Auxiliary parameters
LOCKEDPAGES="32:32"
SHMPAGES="8192:8192"
PRIVVMPAGES="49152:53575"
NUMFILE="2048:2048"
NUMFLOCK="100:110"
NUMPTY="16:16"
NUMSIGINFO="256:256"
DCACHESIZE="1048576:1097728"
PHYSPAGES="0:2147483647"
NUMIPTENT="128:128"
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153434"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair scheduler parameter
CPUUNITS="1000"
------------------------------------------------------------
Here is the proposed XML format:
<domain type='openvz'>
  <name>105</name>
  <uuid>8509a1d4-1569-4467-8b37-4e433a1ac7b1</uuid>
  <filesystem>
    <template>gentoo-20060317-i686-stage3</template>
    <quota level='first'>10737418240</quota>
    <quota level='second' uid='500'>5368709120</quota>
  </filesystem>
  <profile>vps.basic</profile>
  <devices>
    <interface>
      <ipaddress>192.168.1.105</ipaddress>
    </interface>
  </devices>
  <nameserver>192.168.1.1</nameserver>
  <hostname>fedora105</hostname>
</domain>
I don't think the "filesystem" tag can fit logically into "devices",
since it has quota and other information. The "template" is the name of
the template cache used to create the VE.
One of the main reasons many people (especially hosting providers) use
OpenVZ is that it can be used to provide service level agreements.
There must be a way to set/get various VPS parameters from libvirt. I
understand the concerns about driver-specific code in libvirt-based
clients like virt-manager. The capabilities paradigm will not fit here,
since this is simply about various properties of the VE/domain, not the
hardware or the VM capabilities. Please correct me if I am wrong. So,
how do we do it?
Thanks and Regards,
--
Shuveb Hussain
Unix is very user friendly. It is just a
little choosy about who its friends are
http://www.binarykarma.com