[libvirt] RFC New virDomainBlockPull API family to libvirt
by Adam Litke
Unfortunately, after committing the blockPull API to libvirt, the qemu
community decided to change the API somewhat and we needed to revert the
libvirt implementation. Now that the qemu API is settling out again, I would
like to propose an updated libvirt public API which has required only a minor
set of changes to sync back up to the qemu API.
Summary of changes:
- Qemu dropped incremental streaming, so remove the incremental libvirt
BlockPull() API
- Rename virDomainBlockPullAll() to virDomainBlockPull()
- Changes required to qemu monitor handlers for changed command names
Currently, qemu block device streaming completely flattens a disk image's
backing file chain. Consider the following chain: A->B->C where C is a leaf
image that is backed by B and B is backed by A. The current disk streaming
command will produce an independent image C with no backing file. Future
versions of qemu block streaming may support an option to specify a new base
image from the current chain. For example: stream --backing_file B C would
pull all blocks that are only in A to produce the chain: B->C (thus
eliminating the dependency on A but maintaining B as a backing image).
Do we want to create room in the BlockPull API to support this advanced usage
in the future? If so, then a new parameter must be added to BlockPull: const
char *backing_path. Even if we extend the API in this manner, the initial
implementation will not support it because qemu will not support it
immediately, and libvirt is missing core functionality to support it (no
public enumeration of the disk backing file chain). Thoughts?
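If we do reserve room, the extended entry point might look roughly like
this (a sketch only; the backing_path parameter is the hypothetical
extension discussed above and would not be accepted by the initial
implementation):

int virDomainBlockPull(virDomainPtr dom,
                       const char *path,
                       const char *backing_path, /* hypothetical: new base image, e.g. "B" */
                       unsigned int flags);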
--
To help speed the provisioning process for large domains, new QED disks are
created with a template image as a backing file. These disks are configured with
copy on read such that blocks that are read from the backing file are copied
to the new disk. This reduces I/O over a potentially costly path to the
backing image.
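For example, such a template-backed QED disk might be created along these
lines (an illustrative qemu-img invocation with placeholder paths; enabling
the copy-on-read behavior itself is a separate step that is not shown):

qemu-img create -f qed -b /templates/base.qed /var/lib/libvirt/images/guest1.qed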
In such a configuration, there is a desire to remove the dependency on the
backing image as the domain runs. To accomplish this, qemu will provide an
interface to perform sequential copy on read operations during normal VM
operation. Once all data has been copied, the disk image's link to the
backing file is removed.
The virDomainBlockPull API family brings this functionality to libvirt.
virDomainBlockPull() instructs the hypervisor to stream the entire device in
the background. Progress of this operation can be checked with the function
virDomainGetBlockPullInfo(). An ongoing stream can be cancelled with
virDomainBlockPullAbort().
An event (VIR_DOMAIN_EVENT_ID_BLOCK_PULL) will be emitted when a disk has been
fully populated or if a BlockPull() operation was terminated due to an error.
This event is useful to avoid polling on virDomainBlockPullInfo() for
completion and could also be used by the security driver to revoke access to
the backing file when it is no longer needed.
/*
* BlockPull API
*/
/* An iterator for initiating and monitoring block pull operations */
typedef unsigned long long virDomainBlockPullCursor;
typedef struct _virDomainBlockPullInfo virDomainBlockPullInfo;
struct _virDomainBlockPullInfo {
/*
* The following fields provide an indication of block pull progress. @cur
* indicates the current position and will be between 0 and @end. @end is
* the final cursor position for this operation and represents completion.
* To approximate progress, divide @cur by @end.
*/
virDomainBlockPullCursor cur;
virDomainBlockPullCursor end;
};
typedef virDomainBlockPullInfo *virDomainBlockPullInfoPtr;
/**
* virDomainBlockPull:
* @dom: pointer to domain object
* @path: Fully-qualified filename of disk
* @flags: currently unused, for future extension
*
* Populate a disk image with data from its backing image. Once all data from
* its backing image has been pulled, the disk no longer depends on a backing
* image. This function pulls data for the entire device in the background.
* Progress of the operation can be checked with virDomainGetBlockPullInfo() and
* the operation can be aborted with virDomainBlockPullAbort(). When finished,
* an asynchronous event is raised to indicate the final status.
*
* Returns 0 if the operation has started, -1 on failure.
*/
int virDomainBlockPull(virDomainPtr dom,
const char *path,
unsigned int flags);
/**
* virDomainBlockPullAbort:
* @dom: pointer to domain object
* @path: fully-qualified filename of disk
* @flags: currently unused, for future extension
*
* Cancel a pull operation previously started by virDomainBlockPull().
*
* Returns -1 in case of failure, 0 when successful.
*/
int virDomainBlockPullAbort(virDomainPtr dom,
const char *path,
unsigned int flags);
/**
* virDomainGetBlockPullInfo:
* @dom: pointer to domain object
* @path: fully-qualified filename of disk
* @info: pointer to a virDomainBlockPullInfo structure
* @flags: currently unused, for future extension
*
* Request progress information on a block pull operation that has been started
* with virDomainBlockPull(). If an operation is active for the given
* parameters, @info will be updated with the current progress.
*
* Returns -1 in case of failure, 0 when successful.
*/
int virDomainGetBlockPullInfo(virDomainPtr dom,
const char *path,
virDomainBlockPullInfoPtr info,
unsigned int flags);
/**
* virConnectDomainEventBlockPullStatus:
*
* The final status of a virDomainBlockPull() operation
*/
typedef enum {
VIR_DOMAIN_BLOCK_PULL_COMPLETED = 0,
VIR_DOMAIN_BLOCK_PULL_FAILED = 1,
} virConnectDomainEventBlockPullStatus;
/**
* virConnectDomainEventBlockPullCallback:
* @conn: connection object
* @dom: domain on which the event occurred
* @path: fully-qualified filename of the affected disk
* @status: final status of the operation (virConnectDomainEventBlockPullStatus)
*
* The callback signature to use when registering for an event of type
* VIR_DOMAIN_EVENT_ID_BLOCK_PULL with virConnectDomainEventRegisterAny()
*/
typedef void (*virConnectDomainEventBlockPullCallback)(virConnectPtr conn,
virDomainPtr dom,
const char *path,
int status,
void *opaque);
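A usage illustration built from the declarations above (a minimal sketch:
the image path is a placeholder, and event-loop setup and error handling
are omitted):

#include <libvirt/libvirt.h>
#include <stdio.h>
#include <unistd.h>

/* Called once per disk when streaming completes or fails */
static void
pullDone(virConnectPtr conn, virDomainPtr dom,
         const char *path, int status, void *opaque)
{
    printf("%s: %s\n", path,
           status == VIR_DOMAIN_BLOCK_PULL_COMPLETED ? "completed" : "failed");
}

static int
pullDisk(virConnectPtr conn, virDomainPtr dom, const char *path)
{
    virDomainBlockPullInfo info;

    /* Registering for the event avoids polling for completion */
    virConnectDomainEventRegisterAny(conn, dom, VIR_DOMAIN_EVENT_ID_BLOCK_PULL,
                                     VIR_DOMAIN_EVENT_CALLBACK(pullDone),
                                     NULL, NULL);

    if (virDomainBlockPull(dom, path, 0) < 0)
        return -1;

    /* Progress can also be sampled while the stream is active */
    while (virDomainGetBlockPullInfo(dom, path, &info, 0) == 0 &&
           info.cur < info.end) {
        printf("progress: %llu/%llu\n", info.cur, info.end);
        sleep(1);
    }
    return 0;
}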
--
Adam Litke
IBM Linux Technology Center
Re: [libvirt] Build libvirt 0.9.3 failed
by zhang xintao
config.log
38094 configure:54859: checking for LIBNL
38095 configure:54924: result: no
38096 configure:54924: error: libnl-devel >= 1.1 is required for macvtap support
I tried to build libvirt 0.9.3 without macvtap support, but it failed with
the same error message.
Can you help me?
On Wed, Jul 20, 2011 at 2:02 PM, zhang xintao <zhang.kvm(a)gmail.com> wrote:
> This is the config.log
>
> I tried to build libvirt 0.9.3 without macvtap support, but it failed
>
> with the same error message.
>
> Can you help me?
>
> On Wed, Jul 20, 2011 at 12:52 PM, Eric Blake <eblake(a)redhat.com> wrote:
>> On 07/19/2011 10:42 PM, zhang xintao wrote:
>>>
>>> I tried to build libvirt 0.9.3 but it failed.
>>> ==========================
>>> Host OS: Ubuntu server 11.04
>>> ==========================
>>> configure: error: libnl-devel >= 1.1 is required for macvtap support
>>>
>>> libnl-dev 1.1-6 is already installed
>>>
>>> Who can help me?
>>
>> What does config.log say in the areas where it was checking for libnl
>> support? Meanwhile, are you okay building without macvtap support?
>>
>> --
>> Eric Blake eblake(a)redhat.com +1-801-349-2682
>> Libvirt virtualization library http://libvirt.org
>>
>
[libvirt] [PATCH] Check that client is not NULL before locking it
by Guannan Ren
---
src/rpc/virnetclient.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/src/rpc/virnetclient.c b/src/rpc/virnetclient.c
index dfc4ed9..c100ef1 100644
--- a/src/rpc/virnetclient.c
+++ b/src/rpc/virnetclient.c
@@ -271,6 +271,9 @@ void virNetClientFree(virNetClientPtr client)
void virNetClientClose(virNetClientPtr client)
{
+ if (!client)
+ return;
+
virNetClientLock(client);
virNetSocketRemoveIOCallback(client->sock);
virNetSocketFree(client->sock);
--
1.7.1
[libvirt] [PATCH] virsh: add custom readline generator
by Lai Jiangshan
A custom readline generator will help with some use cases.
Also add a custom readline generator for the "help" command.
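For instance (illustrative only, not part of the patch), completion of the
argument to "help" can then offer command group keywords in addition to
command names:

virsh # help dom<TAB>
(completes to matching command names as well as group keywords such as "domain")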
Signed-off-by: Lai Jiangshan <laijs(a)cn.fujitsu.com>
---
diff --git a/tools/virsh.c b/tools/virsh.c
index fcd254d..51e43c1 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -13575,7 +13575,7 @@ vshCloseLogFile(vshControl *ctl)
* (i.e. STATE == 0), then we start at the top of the list.
*/
static char *
-vshReadlineCommandGenerator(const char *text, int state)
+vshReadlineCmdAndGrpGenerator(const char *text, int state, int grpname)
{
static int grp_list_index, cmd_list_index, len;
const char *name;
@@ -13604,8 +13604,13 @@ vshReadlineCommandGenerator(const char *text, int state)
return vshStrdup(NULL, name);
}
} else {
+ name = grp[grp_list_index].keyword;
cmd_list_index = 0;
grp_list_index++;
+
+ if (grpname && STREQLEN(name, text, len))
+ return vshStrdup(NULL, name);
+
}
}
@@ -13614,10 +13619,45 @@ vshReadlineCommandGenerator(const char *text, int state)
}
static char *
+vshReadlineCommandGenerator(const char *text, int state)
+{
+ return vshReadlineCmdAndGrpGenerator(text, state, 0);
+}
+
+static char *
+vshReadlineHelpOptionGenerator(const char *text, int state)
+{
+ return vshReadlineCmdAndGrpGenerator(text, state, 1);
+}
+
+struct vshCustomReadLine {
+ const char *name;
+ char *(*CustomReadLineOptionGenerator)(const char *text, int state);
+};
+
+struct vshCustomReadLine customReadLine[] = {
+ { "help", vshReadlineHelpOptionGenerator },
+ { NULL, NULL }
+};
+
+static struct vshCustomReadLine *vshCustomReadLineSearch(const char *name)
+{
+ struct vshCustomReadLine *ret = customReadLine;
+
+ for (ret = customReadLine; ret->name; ret++) {
+ if (STREQ(ret->name, name))
+ return ret;
+ }
+
+ return NULL;
+}
+
+static char *
vshReadlineOptionsGenerator(const char *text, int state)
{
static int list_index, len;
static const vshCmdDef *cmd = NULL;
+ static const struct vshCustomReadLine *rl = NULL;
const char *name;
if (!state) {
@@ -13632,6 +13672,7 @@ vshReadlineOptionsGenerator(const char *text, int state)
memcpy(cmdname, rl_line_buffer, p - rl_line_buffer);
cmd = vshCmddefSearch(cmdname);
+ rl = vshCustomReadLineSearch(cmdname);
list_index = 0;
len = strlen(text);
VIR_FREE(cmdname);
@@ -13640,6 +13681,9 @@ vshReadlineOptionsGenerator(const char *text, int state)
if (!cmd)
return NULL;
+ if (rl)
+ return rl->CustomReadLineOptionGenerator(text, state);
+
if (!cmd->opts)
return NULL;
[libvirt] [PATCH RFC v3 0/6] support cpu bandwidth in libvirt
by Wen Congyang
TODO:
1. We create a subdirectory for each vcpu in the cpu subsystem, so
we should recalculate cpu.shares for each vcpu.
Changelog:
v3: fix some small bugs
implement the simple way
v2: almost rewrote the patchset to support controlling each vcpu's
bandwidth.
Limit quota to [-1, 2^64/1000] at the schema level. We will
check it at the cgroup level.
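As a rough illustration (a sketch only: the element names follow the patch
titles below, while the values and the microsecond units, mirroring
cpu.cfs_period_us and cpu.cfs_quota_us, are assumptions), the new tunables
would appear under <cputune> along these lines:

<cputune>
  <period>100000</period>
  <quota>50000</quota>
</cputune>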
Wen Congyang (6):
Introduce the function virCgroupForVcpu
cgroup: Implement cpu.cfs_period_us and cpu.cfs_quota_us tuning API
Update XML Schema for new entries
qemu: Implement period and quota tunable XML configuration and
parsing
qemu: Implement cfs_period and cfs_quota's modification
doc: Add documentation for new cputune elements period and quota
docs/formatdomain.html.in | 19 ++
docs/schemas/domain.rng | 26 +++-
src/conf/domain_conf.c | 20 ++-
src/conf/domain_conf.h | 2 +
src/libvirt_private.syms | 5 +
src/qemu/qemu_cgroup.c | 127 +++++++++++
src/qemu/qemu_cgroup.h | 4 +
src/qemu/qemu_driver.c | 259 ++++++++++++++++++++---
src/qemu/qemu_process.c | 4 +
src/util/cgroup.c | 153 +++++++++++++-
src/util/cgroup.h | 11 +
tests/qemuxml2argvdata/qemuxml2argv-cputune.xml | 2 +
12 files changed, 596 insertions(+), 36 deletions(-)
[libvirt] [PATCH 00/10] network: physical device abstraction aka 'virtual switch'
by Laine Stump
This patch is in response to the following bug reports:
https://bugzilla.redhat.com/show_bug.cgi?id=643947 (RHEL)
https://bugzilla.redhat.com/show_bug.cgi?id=636106 (upstream)
It is functionally complete, and has gone through rudimentary testing
for bridge networks (host bridge) and direct networks in bridge mode
(macvtap). The patch series doesn't yet include updates to the domain
and network XML documentation, though, so it isn't ready to push.
I am sending it now to get feedback, both on the specifics of the
code, as well as on how it is designed and how it works. I will be
transferring info from my design document (at the end of this message)
into the libvirt doc files during this week, and will have them ready
for the V2 of the series that is sure to be requested.
*****************
(The working design document)
Network device abstraction aka virtual switch - V4
==================================================
The <interface> element of a guest's domain config in libvirt has a
<source> element that describes what resources on a host will be used
to connect the guest's network interface to the rest of the
world. This is very flexible, allowing several different types of
connection (virtual network, host bridge, direct macvtap connection to
physical interface, qemu usermode, user-defined via an external
script), but currently has the problem that unnecessary details of the
host resources are embedded into the guest's config; if the guest is
migrated to a different host, and that host has a different hardware
or network config (or possibly the same hardware, but that hardware is
currently in use by a different guest), the migration will fail.
This document outlines a change to libvirt's network XML that will
allow us to (optionally - old configs will remain valid) remove the
host details from the guest's domain XML (which can move around from
host to host) and place them in the network XML (which remains with a
single host); the domain XML will then use existing config elements to
associate each guest interface with a "network".
The motivating use case for this change is the "direct" connection
type (which uses macvtap for vepa and vnlink connections directly
between a guest and a physical interface, rather than through a
bridge), but it is applicable for all types of connection. (Another
hopeful side effect of this change will be to make libvirt's network
connection model easier to realize on non-Linux hypervisors (eg,
VMWare ESX) and for other network technologies, such as openvswitch,
VDE, and various VPN implementations).
Background
==========
(parts lifted from Dan Berrange's mail on this subject)
Currently <network> supports 3 connectivity modes
- Non-routed network, separate subnet (no <forward> element present)
- Routed network, separate subnet with NAT (<forward mode='nat'/>)
- Routed network, separate subnet (<forward mode='route'/>)
Each of these is implemented in the existing network driver by
creating a bridge device using brctl, and connecting the guest network
interfaces via tap devices (a detail which, now that I've stated it,
you should promptly forget!). All traffic between that bridge and the
outside network is done via the host's IP routing stack (ie, there is
no physical device directly connected to the bridge)
In the future, these two additional routed modes might be useful:
- Routed network, IP subnetting
- Routed network, separate subnet with VPN
The core goal of this proposal, though, is to replace type=bridge and
type=direct from the domain interface XML with new types of <network>
definitions so that the domain can just give "type='network'" and have
all the necessary details filled in at runtime. This basically means
we're adding several bridging modes (the submodes of "direct" have
been flattened out here):
- Bridged network, eth + bridge + tap
- Bridged network, eth + macvtap + vepa
- Bridged network, eth + macvtap + private
- Bridged network, eth + macvtap + passthrough
- Bridged network, eth + macvtap + bridge
Another "future expansion" could be to add:
- Bridged network, with VPN
Likewise, support for other technologies, such as openvswitch and VDE
would each be another entry on this list.
(Dan also listed each of the above "+sriov" separately, but that ends
up being handled in an orthogonal manner (by just specifying a pool of
interfaces for a single network), so I'm only giving the abbreviated
list)
I. Changes to domain <interface> element
========================================
In many cases, the <interface> element of the domain XML will be
identical to what is used now when connecting the interface to a
libvirt-style virtual network:
<interface type='network'>
<source network='red-network'/>
<mac address='xx:xx:xx:xx:xx:xx'/>
</interface>
Depending on the definition of the network "red-network" on the host
the guest was started on / migrated to, this could be either a direct
(macvtap) connection using one of the various direct modes
(vepa/private/bridge/passthrough), a bridge (again, pointed to by the
definition of 'red-network'), or a virtual network (using the current
network definition syntax). This way the same guest could be migrated
not only between macvtap-enabled hosts, but from there to a host using
a bridge, or maybe a host in a remote location that used a virtual
network with a secure tunnel to connect back to the rest of the
red-network.
(Part of the migration process would of course check that the
destination host had a network of the proper name with adequate
available resources, and fail if it didn't; management software at a
level above libvirt would probably filter a list of candidate
migration destinations based on available networks and any various
details of those networks (eg. it could search for only networks using
vepa for the connection), and only attempt migration to one that had
the matching network available).
<virtualport> element of <interface>
------------------------------------
Since many of the attributes/sub-elements of <virtualport> (used by
some modes of "direct" interface connections) are identical for all
interfaces connecting to any given switch, most of the information in
<virtualport> will be optional in the domain's interface definition -
it can be filled in from a similar <virtualport> element that will be
added to the <network> definition.
Some parameters in <virtualport> ("instanceid", for example) must be
unique for every interface, though, so those will still be specified
in the <interface> XML. The two <virtualport> elements will be OR'ed
at runtime to arrive at the actual set of parameters that are
used.
(Open Question: What should be the policy when a parameter is
specified in both places? Should one take precedence? Or should it be
considered an error?)
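To illustrate the merge (parameter values reused from the examples in this
document), the network might carry the switch-wide parameters:

<virtualport type='802.1Qbg'>
  <parameters managerid='11' typeid='1193047' typeidversion='2'/>
</virtualport>

while the interface supplies only its unique piece:

<virtualport type='802.1Qbg'>
  <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
</virtualport>

and the runtime result would be the union of all four parameters.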
portgroup attribute of <source>
-------------------------------
The <source> element of an interface definition will be able to
optionally specify a "portgroup" attribute. If portgroup is *NOT*
given, the default portgroup of the network will be used (if a default
is defined, otherwise no portgroup will be used). If portgroup *IS*
specified, the source network must have a portgroup by that name (or
the domain startup/migration will fail), and the attributes of that
portgroup will be used for the connection. Here is an example
<interface> definition that has both a reduced <virtualport> element,
as well as a portgroup attribute:
<interface type='network'>
<source network='red-network' portgroup='engineering'/>
<virtualport type="802.1Qbg">
<parameters managerid="11" typeid="1193047" typeidversion="2"
instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
</virtualport>
<mac address='de:ad:be:ef:ca:fe'/>
</interface>
(The specifics of what can be in a portgroup are given below)
storing the actual chosen/running config in state dir
-----------------------------------------------------
Note: the following additions to the XML will only ever be visible in
the statedir copy of the domain config, which is used to keep track of
the state of running domains in case libvirtd is restarted while
domains are still running. The described element cannot be used in a
user-generated config file, and will never be present in a domain
interface config produced by the libvirt public API, nor by "virsh
dumpxml".
In order to remind libvirt about which interfaces are actually in use
in the event that libvirtd is restarted while domains are still
running, the copy of the domain XML stored in "statedir"
(/var/lib/libvirt/qemu/*.xml) may have an extra element <actual>
stored as a subelement of each <interface>:
<interface type='network'>
<source network='red-network' portgroup='engineering'/>
<virtualport type="802.1Qbg">
<parameters managerid="11" typeid="1193047" typeidversion="2"
instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
</virtualport>
<mac address='de:ad:be:ef:ca:fe'/>
<actual type='direct'>
<source dev='eth1' mode='vepa'/>
</actual>
</interface>
In short, merging the <actual> element up into <interface> will yield
the full interface as it was actually instantiated for the domain. In
this case, the interface still has a mac address of de:ad:be:ef:ca:fe,
and will use the same <virtualport> parameters, but the actual type of
the interface will be 'direct' (so macvtap will be used to connect the
interface), and the connection will be via physical device eth1 in
vepa mode.
II. Changes to <network> definition
===================================
As Dan has pointed out, any additions to <network> must be designed so
that existing management applications (written to understand <network>
prior to these new additions) will at least recognize that the XML
they've been given is for something new that they don't fully
understand. At the same time, the new types of network definition
should attempt to re-use as much of the existing elements/attributes
as possible, both to make it easier to extend these applications, as
well as to make the status displays of un-updated applications make as
much sense as possible.
The new types of network will be specified by extending the choices
for <forward mode='....'>.
The current modes are:
<forward mode='route|nat'/>
(in addition to not listing any mode, which equates to "isolated")
Here are suggested new modes:
<forward mode='bridge|vepa|private|passthrough'/>
A description of each:
bridge - equivalent to "<interface type='bridge'>" in the
interface definition. The bridge device to use would be
given in the existing <bridge name='xxx'>.
or
<interface type='direct'> ... <source mode='bridge'/>
(ie, macvtap bridge mode) with the physical interface
name given in <forward dev='xxx'> or from the pool of
devices given as subelements of <forward> (see below)
vepa - same as "<interface type='direct'>..." with <source
mode='vepa'/>
private - <interface type='direct'> ... <source mode='private'/>
passthrough - <interface type='direct'> ... <source mode='passthrough'/>
Interface Pools
---------------
In many cases, a single host network may have multiple physical
network devices associated with it (especially in the case of an
SRIOV-capable ethernet card, which will have several "virtual
functions" associated with a single physical ethernet connection). The
host will at least want to balance the load of multiple guests between
these multiple devices, and may even require (in the case of
passthrough mode, for example) that only a single guest interface be
attached to each host device.
The current specification for <forward> only allows for a single "dev"
attribute, though. In order to support multiple device names, we will
extend <forward> to allow 0 or more <interface> subelements:
<forward mode='vepa'>
<interface dev='eth10'/>
<interface dev='eth11'/>
<interface dev='eth12'/>
<interface dev='eth13'/>
</forward>
Note that, as a convenience, *on output* the first of these elements
will always be a duplicate of the "dev" attribute in <forward>
itself. When sending XML definitions to libvirt, either a single
interface should be sent in <forward>, or a pool of them as
sub-elements, but not both (if you do this, and the first in the pool
matches the one given in <forward>, it will be ignored, but if they
don't match, that is an error).
In the case of mode='passthrough' (as well as mode='private' if the
virtPortProfile has a mode setting of 802.1Qbh), only one guest
interface can be connected to a device at a time. libvirt will keep
track of which devices are in use, and attempt to assign a free
device; failure to assign a device will result in a failure of the
domain to start/migrate. For the other direct modes, libvirt will
simply keep track of the number of guest interfaces currently using
each device, and attempt to keep them balanced.
Portgroups
-----------
A <portgroup> (subelement of <network>) is just a way of easily
putting connections to the network into different classes, with each
class having a different level/type of service. Each <network> can
have multiple <portgroup> elements, and each <portgroup> has a name,
as well as various attributes associated with it. If an interface
definition specifies a portgroup, that portgroup's info will be used
to modify the interface's setup. If no portgroup is given and one of
the network's portgroups has "default='yes'", that default portgroup
will be used. If no portgroup is given in the interface definition,
and there is no default portgroup, then none will be used.
The first thing we will use portgroups for is as an alternate place to
specify <virtualport> parameters:
<portgroup name='engineering' default='yes'>
<virtualport type="802.1Qbg">
<parameters managerid="11" typeid="1193047" typeidversion="2"/>
</virtualport>
</portgroup>
Anything that is valid in an interface's <virtualport> is also valid here.
The next thing to specify in a portgroup will be bandwidth limiting /
QoS configuration. Since I don't know exactly what's needed for that,
I won't specify it here.
If anything is specified both directly under <network> and in a
<portgroup>, the value in portgroup will take precedence. (Again -
what will the precedence of items specified in the <interface> be?)
EXAMPLES
--------
Examples of 'red-network' for different types of connections (all of
these would work with minor variations of the interface XML given
above; e.g. the 'vepa' version would require a <virtualport> in the
interface that specified an instanceid, and if the <interface>
specified a portgroup, it would also need to be in the <network>
definition, even if it was empty aside from its name):
<!-- Existing usage - a libvirt virtual network -->
<network>
<name>red-network</name>
<bridge name='virbr0'/>
<forward mode='route'/>
...
</network>
<!-- The simplest - an existing host bridge -->
<network>
<name>red-network</name>
<forward mode='bridge'/>
<bridge name='eth0'/>
</network>
<!-- A macvtap connection to a vepa bridge -->
<network>
<name>red-network</name>
<forward mode='vepa' dev='eth10'/>
<virtualport type='802.1Qbg'>
<parameters managerid='11' typeid='1193047' typeidversion='2'/>
</virtualport>
<!-- NB: if <interface> doesn't specify portgroup, -->
<!-- 'accounting' is assumed -->
<portgroup name='accounting' default='yes'>
<virtualport>
<parameters typeid='22'/>
</virtualport>
</portgroup>
<portgroup name='engineering'>
<virtualport>
<parameters typeid='33'/>
</virtualport>
</portgroup>
</network>
<!-- A macvtap passthrough connection (one guest interface per dev) -->
<network>
<name>red-network</name>
<forward mode='passthrough'>
<interface dev='eth10'/>
<interface dev='eth11'/>
<interface dev='eth12'/>
<interface dev='eth13'/>
<interface dev='eth14'/>
<interface dev='eth15'/>
<interface dev='eth16'/>
<interface dev='eth17'/>
</forward>
</network>
=============
Keeping Track of Interface Usage by Guests
==========================================
While libvirtd is running, each physical interface in a network's pool
will maintain a count of how many guest interfaces are using that
physical interface. Each guest interface will also maintain
information about which network, and which physical interface on that
network, it is using. The following situations could occur:
1) A guest is terminated while libvirtd is running.
libvirtd will notice this, and decrement the usage count for each
interface used by the guest, as maintained in the guest's state
info.
2) The host system is rebooted
When the libvirt network driver is restarted, no guests will yet be
running, so the usage count of each physical interface will be 0,
and get incremented as guests are started up.
3) libvirtd is restarted
When the network is restarted, the usage count for all physical
interfaces will be set to 0, just as if the entire system had
been rebooted. One of two situations might be encountered:
3a) The guest is still running when libvirtd is restarted. In this
case, the existing state information of the guest will be examined
to determine which physical interface usage count to increment.
3b) The guest has been terminated while libvirtd wasn't present. Since
the guest is no longer running, its state information will be thrown
away.
[libvirt] Build libvirt 0.9.3 failed
by zhang xintao
I tried to build libvirt 0.9.3 but it failed.
==========================
Host OS: Ubuntu server 11.04
==========================
configure: error: libnl-devel >= 1.1 is required for macvtap support
libnl-dev 1.1-6 is already installed
Who can help me?
[libvirt] [PATCH] build: fix broken build
by Eric Blake
* src/libxl/libxl_driver.c (libxlDomainUndefineFlags): Use correct
enum value.
---
Pushing under the build-breaker rule.
src/libxl/libxl_driver.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 2e7197c..d52a8b6 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -2766,7 +2766,7 @@ libxlDomainUndefineFlags(virDomainPtr dom,
if (virFileExists(name)) {
if (flags & VIR_DOMAIN_UNDEFINE_MANAGED_SAVE) {
if (unlink(name) < 0) {
- libxlError(VIR_ERR_INTERNAL_ERR,
+ libxlError(VIR_ERR_INTERNAL_ERROR,
_("Failed to remove domain managed save image"));
goto cleanup;
}
--
1.7.4.4
[libvirt] [PATCH v4 1/6] undefine: Define the new API
by Osier Yang
This introduces a new API virDomainUndefineFlags to control the
domain undefine process, as the existing API virDomainUndefine
doesn't support flags.
Currently only flag VIR_DOMAIN_UNDEFINE_MANAGED_SAVE is supported.
If the domain has a managed save image, including
VIR_DOMAIN_UNDEFINE_MANAGED_SAVE in @flags will also remove that
file, and omitting the flag will cause undefine process to fail.
This patch also changes the behavior of virDomainUndefine: if the
domain has a managed save image, the undefine will be refused.
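As an illustration (not part of the patch), a caller opting in to managed
save removal might do:

/* Sketch: undefine the domain, also removing any managed save image.
 * With flags == 0 the call would instead fail if such an image exists. */
if (virDomainUndefineFlags(dom, VIR_DOMAIN_UNDEFINE_MANAGED_SAVE) < 0)
    fprintf(stderr, "undefine failed\n");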
---
include/libvirt/libvirt.h.in | 10 +++++++
src/driver.h | 4 +++
src/libvirt.c | 60 +++++++++++++++++++++++++++++++++++++++++-
src/libvirt_public.syms | 5 +++
4 files changed, 78 insertions(+), 1 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 607b5bc..5f9f08a 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1200,6 +1200,16 @@ int virDomainMemoryPeek (virDomainPtr dom,
virDomainPtr virDomainDefineXML (virConnectPtr conn,
const char *xml);
int virDomainUndefine (virDomainPtr domain);
+
+typedef enum {
+ VIR_DOMAIN_UNDEFINE_MANAGED_SAVE = 1,
+
+ /* Future undefine control flags should come here. */
+} virDomainUndefineFlagsValues;
+
+
+int virDomainUndefineFlags (virDomainPtr domain,
+ unsigned int flags);
int virConnectNumOfDefinedDomains (virConnectPtr conn);
int virConnectListDefinedDomains (virConnectPtr conn,
char **const names,
diff --git a/src/driver.h b/src/driver.h
index 9d0d3de..4c4955f 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -219,6 +219,9 @@ typedef virDomainPtr
typedef int
(*virDrvDomainUndefine) (virDomainPtr dom);
typedef int
+ (*virDrvDomainUndefineFlags) (virDomainPtr dom,
+ unsigned int flags);
+typedef int
(*virDrvDomainSetVcpus) (virDomainPtr domain,
unsigned int nvcpus);
typedef int
@@ -733,6 +736,7 @@ struct _virDriver {
virDrvDomainCreateWithFlags domainCreateWithFlags;
virDrvDomainDefineXML domainDefineXML;
virDrvDomainUndefine domainUndefine;
+ virDrvDomainUndefineFlags domainUndefineFlags;
virDrvDomainAttachDevice domainAttachDevice;
virDrvDomainAttachDeviceFlags domainAttachDeviceFlags;
virDrvDomainDetachDevice domainDetachDevice;
diff --git a/src/libvirt.c b/src/libvirt.c
index 39e2041..2f5241a 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -6374,7 +6374,13 @@ error:
* virDomainUndefine:
* @domain: pointer to a defined domain
*
- * Undefine a domain but does not stop it if it is running
+ * Undefine a domain. If the domain is running, it's converted to a
+ * transient domain without stopping it. If the domain is inactive,
+ * the domain configuration is removed.
+ *
+ * If the domain has a managed save image (see
+ * virDomainHasManagedSaveImage()), then the undefine will fail. See
+ * virDomainUndefineFlags() for more control.
*
* Returns 0 in case of success, -1 in case of error
*/
@@ -6413,6 +6419,58 @@ error:
}
/**
+ * virDomainUndefineFlags:
+ * @domain: pointer to a defined domain
+ * @flags: bitwise-or of supported virDomainUndefineFlagsValues
+ *
+ * Undefine a domain. If the domain is running, it's converted to a
+ * transient domain without stopping it. If the domain is inactive,
+ * the domain configuration is removed.
+ *
+ * If the domain has a managed save image (see virDomainHasManagedSaveImage()),
+ * then including VIR_DOMAIN_UNDEFINE_MANAGED_SAVE in @flags will also remove
+ * that file, and omitting the flag will cause the undefine process to fail.
+ *
+ * Returns 0 in case of success, -1 in case of error
+ */
+int
+virDomainUndefineFlags(virDomainPtr domain,
+ unsigned int flags)
+{
+ virConnectPtr conn;
+
+ VIR_DOMAIN_DEBUG(domain, "flags=%x", flags);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+ virLibDomainError(VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+ virDispatchError(NULL);
+ return -1;
+ }
+ conn = domain->conn;
+ if (conn->flags & VIR_CONNECT_RO) {
+ virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ if (conn->driver->domainUndefineFlags) {
+ int ret;
+ ret = conn->driver->domainUndefineFlags (domain, flags);
+ if (ret < 0)
+ goto error;
+ return ret;
+ }
+
+ virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ virDispatchError(domain->conn);
+ return -1;
+}
+
+
+/**
* virConnectNumOfDefinedDomains:
* @conn: pointer to the hypervisor connection
*
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 5f2541a..5cc480e 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -466,4 +466,9 @@ LIBVIRT_0.9.3 {
virNodeGetMemoryStats;
} LIBVIRT_0.9.2;
+LIBVIRT_0.9.4 {
+ global:
+ virDomainUndefineFlags;
+} LIBVIRT_0.9.3;
+
# .... define new API here using predicted next version number ....
--
1.7.6