[libvirt] RFC: API additions for enhanced snapshot support
by Eric Blake
Right now, libvirt has a snapshot API via virDomainSnapshotCreateXML,
but for qemu domains it only works if all the guest disk images are
qcow2, and qemu rather than libvirt does all the work. It also has a
couple of drawbacks: it is inherently tied to domains (there is no way
to manage snapshots of storage volumes not tied to a domain, even though
libvirt already does that for qcow2 images associated with offline qemu
domains by using the qemu-img application), and it necessarily operates
on all of the images associated with a domain at once - if any disk
image is not qcow2, the snapshot fails, and there is no way to select a
subset of disks to save. On the plus side, it works on both active
domains (disk and memory state) and inactive domains (just disk state).
Upstream qemu is developing a 'live snapshot' feature, which allows a
snapshot to be created without the several seconds of downtime imposed
by the existing 'savevm' monitor command, as well as a means for
controlling applications (libvirt) to request that qemu pause I/O to a
particular disk, let the snapshot be performed externally, then tell
qemu to resume I/O (perhaps on a different file name or fd from the
host, but with no change to the contents seen by the guest). Eventually,
these changes will make it possible for libvirt to create fast snapshots
of LVM partitions or btrfs files used as guest disk images, as well as
to select which disks are saved in a snapshot (that is, save a
crash-consistent state of a subset of disks, without the corresponding
RAM state, rather than making a full system restore point); the latter
would work best with guest cooperation to quiesce disks before qemu
pauses I/O to them, but that is an orthogonal enhancement.
However, my first goal with these API enhancements is merely to prove
that libvirt can manage a live snapshot by using qemu-img on a qcow2
image rather than the current 'savevm' approach of qemu doing all the work.
Additionally, libvirt provides the virDomainSave command, which saves
just the state of the domain's memory and stops the guest. A crude
libvirt-only snapshot could thus already be done by using virDomainSave,
then externally snapshotting all disk images associated with the domain
by using virStorageVol APIs, except that such APIs don't yet exist.
Moreover, virDomainSave has no flags argument, so there is no way to
request that the guest be resumed after the snapshot completes.
Right now, I'm proposing the addition of virDomainSaveFlags, along with
a series of virStorageVolSnapshot* APIs that mirror the
virDomainSnapshot* APIs. This would mean adding:
/* Opaque type to manage a snapshot of a single storage volume. */
typedef struct _virStorageVolSnapshot virStorageVolSnapshot;
typedef virStorageVolSnapshot *virStorageVolSnapshotPtr;
/* Create a snapshot of a storage volume. XML is optional; if non-NULL,
it is a new top-level <volsnapshot> element, similar to the top-level
<domainsnapshot> used by virDomainSnapshotCreateXML, specifying name
and description. Flags is 0 for now. */
virStorageVolSnapshotPtr virStorageVolSnapshotCreateXML(virStorageVolPtr
vol, const char *xml, unsigned int flags);
[For qcow2, this would be implemented with 'qemu-img snapshot -c',
similar to what virDomainSnapshotCreateXML already does on inactive domains.
Later, we can add LVM and btrfs support, or even allow full file copies
of any file type. Also in the future, we could enhance XML to take a
new element that describes a relationship between the name of the
original and of the snapshot, in the case where a new filename has to be
created to complete the snapshot process.]
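To make the intended usage concrete, here is a minimal caller-side
sketch; none of this exists yet, so the virStorageVolSnapshot* calls
are only the proposal above, the <volsnapshot> body is an assumption,
and error handling is trimmed:
/* Sketch only: proposed API, not an existing libvirt interface. */
#include <libvirt/libvirt.h>

static void
snapshot_volume_example(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virStorageVolPtr vol =
        virStorageVolLookupByPath(conn, "/var/lib/libvirt/images/guest.qcow2");
    const char *xml =
        "<volsnapshot>"
        "  <name>pre-upgrade</name>"
        "  <description>state before package upgrade</description>"
        "</volsnapshot>";
    virStorageVolSnapshotPtr snap =
        virStorageVolSnapshotCreateXML(vol, xml, 0);

    if (snap)
        virStorageVolSnapshotFree(snap);
    virStorageVolFree(vol);
    virConnectClose(conn);
}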
/* Probe whether vol has a current snapshot. Returns 1 if true, 0 if false, -1 on error.
Flags is 0 for now. */
int virStorageVolHasCurrentSnapshot(virStorageVolPtr vol, unsigned int
flags);
[For qcow2 images, snapshots can be contained within the same file and
managed with 'qemu-img snapshot -l', but for other formats, this may mean that
libvirt has to start managing externally saved data associated with the
storage pool that associates snapshots with filenames. In fact, even
for qcow2 it might be useful to support creation of new files backed by
the previous snapshot rather than cramming multiple snapshots in one
file, so we may have a use for flags to filter out the presence of
single-file vs. multiple-file snapshot setups.]
/* Revert a volume back to the state of a snapshot, returning 0 on
success. Flags is 0 for now. */
int virStorageVolRevertToSnapshot(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
[For qcow2, this would involve qemu-img snapshot -a. Here, a useful
flag might be whether to delete any changes made after the point of the
snapshot; virDomainRevertToSnapshot should probably honor the same type
of flag.]
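A sketch of what a caller-side revert might look like, using the lookup
API declared further below; all of these calls are still only the
proposal, and 'vol' is assumed to be a virStorageVolPtr obtained as in
the earlier sketch:
/* Sketch only: revert a volume to a named snapshot with the proposed API. */
virStorageVolSnapshotPtr snap =
    virStorageVolSnapshotLookupByName(vol, "pre-upgrade", 0);
if (snap) {
    if (virStorageVolRevertToSnapshot(snap, 0) < 0) {
        /* inspect virGetLastError() and report the failure */
    }
    virStorageVolSnapshotFree(snap);
}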
/* Return the most recent snapshot of a volume, if one exists, or NULL
on failure. Flags is 0 for now. */
virStorageVolSnapshotPtr virStorageVolSnapshotCurrent(virStorageVolPtr
vol, unsigned int flags);
/* Delete the storage associated with a snapshot (although the opaque
snapshot object must still be independently freed). If flags is 0, any
child snapshots based off of this one are rebased onto the parent; if
flags is VIR_STORAGE_VOL_SNAPSHOT_DELETE_CHILDREN, then any child
snapshots based off of this one are also deleted. */
int virStorageVolSnapshotDelete(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
[For qcow2, this would involve qemu-img snapshot -d. For
multiple-file snapshots, this would also involve qemu-img commit.]
/* Free the object returned by
virStorageVolSnapshot{Current,CreateXML,LookupByName}. The storage
snapshot associated with this object still exists, if it has not been
deleted by virStorageVolSnapshotDelete. */
int virStorageVolSnapshotFree(virStorageVolSnapshotPtr snapshot);
/* Return the <volsnapshot> XML details about this snapshot object.
Flags is 0 for now. */
char *virStorageVolSnapshotGetXMLDesc(virStorageVolSnapshotPtr snapshot,
unsigned int flags);
/* Return the names of all snapshots associated with this volume, into
a names array whose size nameslen is typically obtained from
virStorageVolSnapshotNum. Flags is 0 for now. */
int virStorageVolSnapshotListNames(virStorageVolPtr vol, char **names,
int nameslen, unsigned int flags);
[For qcow2, this involves 'qemu-img snapshot -l'. Additionally, if
virStorageVolHasCurrentSnapshot learns to filter on in-file vs.
multi-file snapshots, then the same flags would apply here.]
/* Get the opaque object tied to a snapshot name. Flags is 0 for now. */
virStorageVolSnapshotPtr
virStorageVolSnapshotLookupByName(virStorageVolPtr vol, const char
*name, unsigned int flags);
/* Determine how many snapshots are tied to a volume, or -1 on error.
Flags is 0 for now. */
int virStorageVolSnapshotNum(virStorageVolPtr vol, unsigned int flags);
[Same flags as for virStorageVolSnapshotListNames.]
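Putting the enumeration calls together, a caller might list snapshots
roughly like this (sketch only; the assumption, by analogy with
virDomainSnapshotListNames, is that the strings placed in names[] are
allocated by libvirt and freed by the caller; <stdio.h>/<stdlib.h>
assumed):
/* Sketch: enumerate snapshots of a volume with the proposed APIs. */
int n = virStorageVolSnapshotNum(vol, 0);
if (n > 0) {
    char **names = calloc(n, sizeof(*names));
    int got = virStorageVolSnapshotListNames(vol, names, n, 0);
    int i;

    for (i = 0; i < got; i++) {
        printf("snapshot: %s\n", names[i]);
        free(names[i]);   /* caller frees each returned name */
    }
    free(names);
}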
/* Save a domain into the file 'to' with additional actions. If flags
is 0, then xml is ignored, and this is like virDomainSave. If flags
includes VIR_DOMAIN_SAVE_DISKS, then all of the associated disk images
are also snapshotted, as if by virStorageVolSnapshotCreateXML; the xml
argument is optional, but if present, it should be a <domainsnapshot>
element with <disk> sub-elements giving per-disk directions for any
disk whose volume snapshot needs non-default XML. If flags
includes VIR_DOMAIN_SAVE_RESUME, then the guest is resumed after the
offline snapshot is complete (note that VIR_DOMAIN_SAVE_RESUME without
VIR_DOMAIN_SAVE_DISKS makes little sense, as a saved state file is
rendered useless if the disk images are modified before it is resumed).
If flags includes VIR_DOMAIN_SAVE_QUIESCE, this requests that a guest
agent quiesce disk state before the saved state file is created. */
int virDomainSaveFlags(virDomainPtr domain, const char *to, const char
*xml, unsigned int flags);
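For illustration, a call that saves memory state, snapshots all disks
with default settings, and resumes the guest afterwards might look like
this (a sketch only: virDomainSaveFlags and these flag names are just
the proposal above, and 'dom' is an existing virDomainPtr):
/* Sketch only: proposed virDomainSaveFlags usage. */
if (virDomainSaveFlags(dom, "/var/lib/libvirt/save/guest.save", NULL,
                       VIR_DOMAIN_SAVE_DISKS | VIR_DOMAIN_SAVE_RESUME) < 0) {
    /* inspect virGetLastError() and report the failure */
}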
Also, the existing virDomainSnapshotCreateXML can be made more powerful
by adding new flags and enhancing the existing XML for <domainsnapshot>.
When flags is 0, the current behavior of saving memory state alongside
all disks (for running domains, via savevm) or just snapshotting all
disks with default settings (for offline domains, via qemu-img) is kept.
If flags includes VIR_DOMAIN_SNAPSHOT_LIVE, then the guest must be
running, and the new monitor commands for live snapshots are used. If
flags includes VIR_DOMAIN_SNAPSHOT_DISKS_ONLY, then only the disks are
snapshotted (on a running guest, this generally means they will only be
crash-consistent, and will need an fsck before that disk state can be
remounted), but it will shave off time by not saving memory. If flags
includes VIR_DOMAIN_SNAPSHOT_QUIESCE, then this will additionally
request that a guest agent quiesce disk state before the live snapshot
is taken (increasing the likelihood of a stable disk rather than a
merely crash-consistent one; but it requires cooperation from the guest,
so it is no more reliable than memory balloon changes).
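As a rough sketch of the intended call, a disk-only live snapshot with
guest quiesce would then be requested like this (flag names are only
what this proposal suggests, and the minimal XML shown is an
assumption; 'dom' is an existing virDomainPtr):
/* Sketch only: proposed flags for virDomainSnapshotCreateXML. */
virDomainSnapshotPtr snap =
    virDomainSnapshotCreateXML(dom, "<domainsnapshot/>",
                               VIR_DOMAIN_SNAPSHOT_LIVE |
                               VIR_DOMAIN_SNAPSHOT_DISKS_ONLY |
                               VIR_DOMAIN_SNAPSHOT_QUIESCE);
if (snap)
    virDomainSnapshotFree(snap);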
As for the XML changes, it makes sense to snapshot just a subset of
disks when you only care about crash-consistent state or if you can rely
on a guest agent to quiesce the subset of disk(s) you care about, so the
existing <domainsnapshot> element needs a new optional subelement to
control which disks are snapshotted; additionally, this subelement will
be useful for disk image formats that require additional complexity
(such as a secondary file name, rather than the inline snapshot feature
of qcow2). I'm envisioning something like the following:
<domainsnapshot>
  <name>whatever</name>
  <disk name='/path/to/image1' snapshot='no'/>
  <disk name='/path/to/image2'>
    <volsnapshot>...</volsnapshot>
  </disk>
</domainsnapshot>
where there can be up to as many <disk> elements as there are <disk>
devices in the domain XML; any domain disk not listed is given default
treatment. The name attribute of <disk> is mandatory, in order to match
this disk element to one of the domain's disks. The snapshot='yes|no'
attribute is optional, defaulting to yes; setting it to 'no' skips that
particular disk. The <volsnapshot> subelement is optional, but if
present, it would be the same XML as is provided to
virStorageVolSnapshotCreateXML. [And since my first phase of
implementation will be focused on inline qcow2 snapshots, I don't yet
know what that XML will need to contain for any other type of snapshots,
such as mapping out how the snapshot backing file will be named in
relation to the possibly new live file.]
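Tying the XML and the flags together, a caller that wants to skip one
disk might do something like the following sketch (paths, the snapshot
name, and the flag are illustrative; the exact schema is still open as
noted above):
/* Sketch only: per-disk control combined with the proposed flags. */
const char *xml =
    "<domainsnapshot>"
    "  <name>whatever</name>"
    "  <disk name='/path/to/image1' snapshot='no'/>"
    "  <disk name='/path/to/image2'/>"
    "</domainsnapshot>";
virDomainSnapshotPtr snap =
    virDomainSnapshotCreateXML(dom, xml, VIR_DOMAIN_SNAPSHOT_DISKS_ONLY);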
Any feedback on this approach? Any other APIs that would be useful to
add? I'd like to get all the new APIs in place for 0.9.3 with minimal
qcow2 functionality, then use the time before 0.9.4 to further enhance
the APIs to cover more snapshot cases but without having to add any new
APIs.
--
Eric Blake eblake(a)redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org
[libvirt] [PATCH] virsh: add custom readline generator
by Lai Jiangshan
A custom readline generator will help with some use cases.
Also add a custom readline generator for the "help" command.
Signed-off-by: Lai Jiangshan <laijs(a)cn.fujitsu.com>
---
diff --git a/tools/virsh.c b/tools/virsh.c
index fcd254d..51e43c1 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -13575,7 +13575,7 @@ vshCloseLogFile(vshControl *ctl)
* (i.e. STATE == 0), then we start at the top of the list.
*/
static char *
-vshReadlineCommandGenerator(const char *text, int state)
+vshReadlineCmdAndGrpGenerator(const char *text, int state, int grpname)
{
static int grp_list_index, cmd_list_index, len;
const char *name;
@@ -13604,8 +13604,13 @@ vshReadlineCommandGenerator(const char *text, int state)
return vshStrdup(NULL, name);
}
} else {
+ name = grp[grp_list_index].keyword;
cmd_list_index = 0;
grp_list_index++;
+
+ if (grpname && STREQLEN(name, text, len))
+ return vshStrdup(NULL, name);
+
}
}
@@ -13614,10 +13619,45 @@ vshReadlineCommandGenerator(const char *text, int state)
}
static char *
+vshReadlineCommandGenerator(const char *text, int state)
+{
+ return vshReadlineCmdAndGrpGenerator(text, state, 0);
+}
+
+static char *
+vshReadlineHelpOptionGenerator(const char *text, int state)
+{
+ return vshReadlineCmdAndGrpGenerator(text, state, 1);
+}
+
+struct vshCustomReadLine {
+ const char *name;
+ char *(*CustomReadLineOptionGenerator)(const char *text, int state);
+};
+
+struct vshCustomReadLine customReadLine[] = {
+ { "help", vshReadlineHelpOptionGenerator },
+ { NULL, NULL }
+};
+
+static struct vshCustomReadLine *vshCustomReadLineSearch(const char *name)
+{
+ struct vshCustomReadLine *ret = customReadLine;
+
+ for (ret = customReadLine; ret->name; ret++) {
+ if (STREQ(ret->name, name))
+ return ret;
+ }
+
+ return NULL;
+}
+
+static char *
vshReadlineOptionsGenerator(const char *text, int state)
{
static int list_index, len;
static const vshCmdDef *cmd = NULL;
+ static const struct vshCustomReadLine *rl = NULL;
const char *name;
if (!state) {
@@ -13632,6 +13672,7 @@ vshReadlineOptionsGenerator(const char *text, int state)
memcpy(cmdname, rl_line_buffer, p - rl_line_buffer);
cmd = vshCmddefSearch(cmdname);
+ rl = vshCustomReadLineSearch(cmdname);
list_index = 0;
len = strlen(text);
VIR_FREE(cmdname);
@@ -13640,6 +13681,9 @@ vshReadlineOptionsGenerator(const char *text, int state)
if (!cmd)
return NULL;
+ if (rl)
+ return rl->CustomReadLineOptionGenerator(text, state);
+
if (!cmd->opts)
return NULL;
Re: [libvirt] mingw: test-poll pipe part fails
by Eric Blake
[adding libvirt]
On 06/04/2011 12:24 AM, Paolo Bonzini wrote:
> On Sat, Jun 4, 2011 at 00:37, Matthias Bolte
> <matthias.bolte(a)googlemail.com> wrote:
>> After testing a while and reading MSDN docs the problem seems to be
>> that MsgWaitForMultipleObjects doesn't work on pipes. It doesn't
>> actually wait but just returns immediately. Digging MSDN and googling
>> about this suggest that there is no simple solution to this.
>
> Yes, Windows pipes are that broken. :(
>
> Using socketpair is a possibly good idea, but I would do it on
> libvirtd only. I don't know exactly how libvirtd uses this pipe, but
> perhaps it can be changed to an eventfd-like abstraction that can be
> used with both Windows and Unix. This is similar to Eric's
> suggestion, but without the pipe at all. It would also be a
> libvirtd-specific suggestion.
I'm wondering if the problem here is that libvirt is trying to use the
pipe-to-self mechanism as a fundamental event loop idiom. That is, the
reason libvirt is calling poll is to minimize CPU usage until
something interesting happens, where "interesting" includes needing to
wake up a helper thread to do an action inside locks in response to the
receipt of a signal.
Maybe you are on to something, and replacing all uses of pipe() with
virPipeToSelf() (which uses pipe() for efficiency on Linux, but
socketpair() on mingw), would allow libvirt to continue to use the
pipe-to-self idiom while also using fds that can actually be poll'd on
mingw.
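Roughly, such a helper might look like the sketch below. This is only a
sketch: the name virPipeToSelf() comes from this thread and does not
exist in libvirt, and since mingw has no native socketpair(), the Win32
branch would need an emulation (e.g. a connected loopback TCP pair) and
is left as a stub here.
#include <unistd.h>

/* Hypothetical pipe-to-self helper as discussed above. */
static int
virPipeToSelf(int fds[2])
{
#ifndef WIN32
    /* cheap, and poll() accepts pipe fds everywhere but Windows */
    return pipe(fds);
#else
    /* On Windows only sockets can usefully be polled, so the two fds
     * would come from a connected socket pair (emulated via a loopback
     * TCP connection); stubbed out in this sketch. */
    return -1;
#endif
}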
--
Eric Blake eblake(a)redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org
[libvirt] [PATCH] storage: fix volDelete return when volume still being allocated
by Matthew Booth
volDelete currently returns VIR_ERR_INTERNAL_ERROR when attempting to delete a
volume which is still being allocated. It should return
VIR_ERR_OPERATION_INVALID.
* src/storage/storage_driver.c: Fix return of volDelete.
---
src/storage/storage_driver.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 2da2feb..d9c2137 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -1914,7 +1914,7 @@ storageVolumeDelete(virStorageVolPtr obj,
}
if (vol->building) {
- virStorageReportError(VIR_ERR_INTERNAL_ERROR,
+ virStorageReportError(VIR_ERR_OPERATION_INVALID,
_("volume '%s' is still being allocated."),
vol->name);
goto cleanup;
--
1.7.4.4
[libvirt] [PATCH RFC V2 00/10] support cpu bandwidth in libvirt
by Wen Congyang
TODO:
1. We create a sub-directory for each vcpu in the cpu subsystem, so
we should recalculate cpu.shares for each vcpu.
Changelog:
v2: almost rewrote the patchset to support controlling each vcpu's
bandwidth.
Limit quota to [-1, 2^64/1000] at the schema level. We will
check it at the cgroup level.
Wen Congyang (10):
Introduce the function virCgroupForVcpu
cgroup: Implement cpu.cfs_period_us and cpu.cfs_quota_us tuning API
Update XML Schema for new entries
qemu: Implement period and quota tunable XML configuration and
parsing.
support to pass VIR_DOMAIN_AFFECT_CURRENT to virDomainGetVcpusFlags()
vcpubandwidth: introduce two new libvirt APIs
vcpubandwidth: implement the code to support new API for the qemu
driver
vcpubandwidth: implement the remote protocol to address the new API
vcpubandwidth: Implement virsh support
doc: Add documentation for new cputune elements period and quota
daemon/remote.c | 132 +++++++
docs/formatdomain.html.in | 19 +
docs/schemas/domain.rng | 29 ++-
include/libvirt/libvirt.h.in | 41 +++-
python/generator.py | 2 +
src/conf/domain_conf.c | 272 ++++++++++++++-
src/conf/domain_conf.h | 17 +
src/driver.h | 14 +
src/libvirt.c | 129 +++++++-
src/libvirt_private.syms | 9 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_cgroup.c | 131 +++++++
src/qemu/qemu_cgroup.h | 2 +
src/qemu/qemu_driver.c | 429 ++++++++++++++++++++++-
src/qemu/qemu_process.c | 4 +
src/remote/remote_driver.c | 64 ++++
src/remote/remote_protocol.x | 32 ++-
src/rpc/gendispatch.pl | 30 ++
src/util/cgroup.c | 153 ++++++++-
src/util/cgroup.h | 11 +
tests/qemuxml2argvdata/qemuxml2argv-cputune.xml | 2 +
tools/virsh.c | 142 ++++++++-
tools/virsh.pod | 16 +
23 files changed, 1658 insertions(+), 28 deletions(-)
[libvirt] [Patch 0/3]virsh: Patches for virsh logging
by Supriya Kannery
Virsh logging has some basic issues:
1. In code, magic numbers are used for logging rather than loglevel
variables.
2. The magic number "5" is used for logging, which doesn't map to any
loglevel variable. The valid loglevel range is 0-4.
3. Usage of loglevel variables doesn't align with that of libvirt
logging. In libvirt, the "DEBUG" loglevel is the superset and logs
messages at all other levels, whereas in virsh the "ERROR" loglevel
behaves this way; this needs correction.
4. The virsh man page and code are inconsistent with respect to loglevels.
The following patchset addresses the above-mentioned issues.
1/3 - Avoid using magic numbers for logging
2/3 - Align log level usage to that of libvirt
3/3 - Update virsh manpage with related changes
tools/virsh.pod | 30 ++++++++++++
tools/virsh.c | 124 +++++++++++++++++++++++++++---------------------
2 files changed, 102 insertions(+), 52 deletions(-)
[libvirt] [Question] qemu cpu pinning
by KAMEZAWA Hiroyuki
Hi,
When I run a VM (qemu-0.13) on my host with the latest libvirt,
I use the following settings.
==
<domain type='kvm' id='1'>
  <name>RHEL6</name>
  <uuid>f7ad6bc3-e82a-1254-efb0-9e1a87d83d88</uuid>
  <memory>2048000</memory>
  <currentMemory>2048000</currentMemory>
  <vcpu cpuset='4-7'>2</vcpu>
==
I expected all work for this domain to be tied to CPUs 4-7.
After a few minutes, I checked the VM's behavior and it shows:
==
[root@bluextal src]# cat /cgroup/cpuacct/libvirt/qemu/RHEL6/cpuacct.usage_percpu
0 511342 3027636 94237 657712515 257104928 513463748303 252386161
==
Hmm, CPUs 1, 2 and 3 are being used for some purpose.
All threads for this qemu process appear to be the following.
==
[root@bluextal src]# cat /cgroup/cpuacct/libvirt/qemu/RHEL6/tasks
25707
25727
25728
25729
==
And I found
==
[root@bluextal src]# grep Cpus /proc/25707/status
Cpus_allowed: f0
Cpus_allowed_list: 4-7
[root@bluextal src]# grep Cpus /proc/25727/status
Cpus_allowed: ff
Cpus_allowed_list: 0-7
[root@bluextal src]# grep Cpus /proc/25728/status
Cpus_allowed: f0
Cpus_allowed_list: 4-7
[root@bluextal src]# grep Cpus /proc/25729/status
Cpus_allowed: f0
Cpus_allowed_list: 4-7
==
Thread 25727 has no restriction.
Is this expected behavior, or do I need additional settings in the XML
definition?
Thanks,
-Kame
[libvirt] Questions on libvirt storage internals
by Shehjar Tikoo
Hi All
I am working on integrating GlusterFS with OpenStack so that VM volumes can
be placed on shared GlusterFS volumes. I would highly appreciate it if
you could help me find the answers to some questions:
1. What's the difference between a storage driver and a storage backend driver?
2. Why does the virDomainAttachDevice code path call the corresponding
domainAttach function in the hypervisor driver and not the volume or pool
creation method if a disk is being attached? Does it assume that the volume
has already been created before this call?
3. Which part of the libvirtd source handles receiving messages from the
libvirt client?
Thanks
-Shehjar
[libvirt] [RFC] exporting KVM host power saving capabilities through libvirt
by Vaidyanathan Srinivasan
Hi,
Linux host systems running KVM support various power management
capabilities. Most of the features like DVFS and sleep states can be
independently exploited by the host system itself based on system
utilisation subject to policies set by the administrator.
However, system-wide low power states like S3 and S4 would require
external communication and interaction with the systems management
stack in order to be used. The first steps in this direction would be
to allow systems management stack to discover host power saving
capabilities like S3 and S4 along with various other host CPU
capabilities.
Libvirt seems to be the main glue layer between the platform and the
systems-management stack. Adding host power-saving capabilities as
part of libvirt's host discovery mechanism seems to be one possible
approach that requires no new APIs or agents.
libvirt has virConnectGetCapabilities(), which exports an XML document
describing the capabilities of the host platform and guest features.
The KVM hypervisor's capability to support S3 can be exported as a host
feature in the XML as follows:
<host>
  <uuid>94a3492f-2635-2491-8c87-8de976fad119</uuid>
  <cpu>
    <arch>x86_64</arch>
    <features>                  <<<=== New host feature fields
      <S3/>
      <S4/>
    </features>
    <model>Westmere</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='2' threads='2'/>
    <feature name='rdtscp'/>
    <feature name='xtpr'/>
    <feature name='tm2'/>
    <feature name='est'/>
    <feature name='vmx'/>       <<<= These are host CPU features
System management software that works through libvirt already queries
this capabilities XML, and hence no new API is needed.
A simple discovery mechanism can be added to libvirt as follows:
Index: libvirt-0.9.0/src/qemu/qemu_capabilities.c
===================================================================
--- libvirt-0.9.0.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.0/src/qemu/qemu_capabilities.c
@@ -738,6 +738,14 @@ virCapsPtr qemuCapsInit(virCapsPtr old_c
virCapabilitiesAddHostMigrateTransport(caps,
"tcp");
+ /* Add host energy management host capabilities */
+
+ /* if "pm-is-supported --suspend" == 0 */
+ virCapabilitiesAddHostFeature(caps, "S3");
+
+ /* if "pm-is-supported --hibernate" == 0 */
+ virCapabilitiesAddHostFeature(caps, "S4");
+
/* First the pure HVM guests */
for (i = 0 ; i < ARRAY_CARDINALITY(arch_info_hvm) ; i++)
if (qemuCapsInitGuest(caps, old_caps,
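To make the pm-is-supported placeholders in the patch above concrete,
the checks might be implemented roughly as below using libvirt's
virCommand helpers. This is a sketch only, untested; the helper name
hostPMFeature is made up here, and the includes may already be present
in qemu_capabilities.c.
#include <stdbool.h>
#include "command.h"

/* Run "pm-is-supported <arg>" and report whether it exited with 0,
 * which pm-utils uses to mean the requested state is usable. */
static bool
hostPMFeature(const char *arg)
{
    virCommandPtr cmd = virCommandNewArgList("pm-is-supported", arg, NULL);
    int status = -1;
    bool ret = false;

    if (virCommandRun(cmd, &status) == 0 && status == 0)
        ret = true;

    virCommandFree(cmd);
    return ret;
}

/* ... and then in qemuCapsInit(): */
if (hostPMFeature("--suspend"))
    virCapabilitiesAddHostFeature(caps, "S3");
if (hostPMFeature("--hibernate"))
    virCapabilitiesAddHostFeature(caps, "S4");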
Please let me know your comments; I will code a working prototype
shortly and post it for review/discussion.
Thanks,
Vaidy
[libvirt] [PATCH 0/7] Add support for setting QoS
by Michal Privoznik
This patch series adds support for setting traffic shaping and policing
on both a domain's interface and a network's virtual bridge. Basically,
this is done via 'tc' from the iproute2 package. HTB is used for
shaping; the u32 match selector is needed for policing. Both should be
available in the RHEL-6 kernel.
Michal Privoznik (7):
bandwidth: Define schema and create documentation
bandwidth: Declare internal structures
bandwidth: Add format parsing functions
bandwidth: Create format functions
bandwidth: Implement functions to enable and disable QoS
bandwidth: Add test cases for network
bandwidth: Add domain schema test suite
configure.ac | 4 +
docs/formatdomain.html.in | 32 ++
docs/formatnetwork.html.in | 30 ++
docs/schemas/domain.rng | 50 +++
docs/schemas/network.rng | 51 +++
src/conf/domain_conf.c | 6 +
src/conf/domain_conf.h | 1 +
src/conf/network_conf.c | 8 +
src/conf/network_conf.h | 1 +
src/libvirt_private.syms | 6 +
src/network/bridge_driver.c | 8 +
src/qemu/qemu_command.c | 5 +
src/util/network.c | 508 +++++++++++++++++++++++++
src/util/network.h | 28 ++
tests/domainschemadata/domain-bandwidth.xml | 72 ++++
tests/networkxml2xmlin/bandwidth-network.xml | 16 +
tests/networkxml2xmlout/bandwidth-network.xml | 16 +
tests/networkxml2xmltest.c | 1 +
18 files changed, 843 insertions(+), 0 deletions(-)
create mode 100644 tests/domainschemadata/domain-bandwidth.xml
create mode 100644 tests/networkxml2xmlin/bandwidth-network.xml
create mode 100644 tests/networkxml2xmlout/bandwidth-network.xml
--
1.7.5.rc3