[libvirt] [PATCH 0/2] Minor fixes for virTypedParams(De)Serialize
by Marc Hartmayer
Some minor fixes and two questions:
* Is the first method, which is described in the documentation for
virTypedParamsDeserialize, in sync with the actual code? ("Older
APIs do not rely on deserializer allocating memory for @params,
...")
* Do we also have to set *nparams = 0 in case of an error when the user
has allocated the memory? (virTypedParamsDeserialize)
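For reference, the two calling conventions the documentation distinguishes
look roughly like this (a sketch only; the signature is paraphrased from
src/util/virtypedparam.h and may not match exactly):

    /* Newer convention: the deserializer allocates @params itself. */
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    if (virTypedParamsDeserialize(remote_params, remote_nparams,
                                  limit, &params, &nparams) < 0)
        return -1;

    /* Older convention: the caller pre-allocates @params, and *nparams
     * carries the capacity on input and the used count on output.  The
     * second question above asks whether *nparams must also be reset
     * to 0 when this variant fails. */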
Marc Hartmayer (2):
virTypedParamsSerialize: minor fixes
virTypedParamsDeserialize: minor fixes
src/util/virtypedparam.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
--
2.13.4
[libvirt] [PATCH 0/9] Resolve libvirtd hang on termination with connected long running client
by John Ferlan
RFC:
https://www.redhat.com/archives/libvir-list/2018-January/msg00318.html
Adjustments since RFC...
Patches 1&2: No change, were already R-B'd
Patch 3: Removed code as noted in code review, update commit message
Patch 4: From old series removed, see below for more details
Patch 9: no change
NB: Patches 5-8 and 10 from Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
are removed, as they seemed not to be necessary.
Replaced the former patch 4 with a series of patches to (slowly) provide
support for disabling new connections, removing waiting jobs, causing
the waiting workers to quit, and allowing any running jobs to complete.
As it turns out, waiting for running jobs to complete cannot be done
from the virNetServerClose callbacks, because at that point the event
loop processing done during virNetServerRun no longer gives any
currently long-running worker job thread a means to complete.
So when virNetDaemonQuit is called as a result of the libvirtd signal
handlers for SIG{QUIT|INT|TERM}, instead of causing virNetServerRun
to quit immediately, we now set a quitRequested flag and use that flag
to check for long-running worker threads before letting the event loop
quit. As a result, libvirtd is able to run through the
virNetDaemonClose processing.
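In pseudo-form, the new flow looks something like this (a sketch of the
idea, not the literal patch; virNetServerWorkerCount is introduced by
this series, the rest of the names are approximations):

    void virNetDaemonQuit(virNetDaemonPtr dmn)
    {
        virObjectLock(dmn);
        dmn->quitRequested = true;   /* was: dmn->quit = true */
        virObjectUnlock(dmn);
    }

    /* inside the virNetDaemonRun() event loop */
    while (!dmn->quit) {
        virEventRunDefaultImpl();
        if (dmn->quitRequested &&
            virNetServerWorkerCount(srv) == 0)  /* all jobs finished */
            dmn->quit = true;   /* only now leave the loop, so that
                                 * virNetDaemonClose can run cleanly */
    }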
John Ferlan (9):
libvirtd: Alter refcnt processing for domain server objects
libvirtd: Alter refcnt processing for server program objects
netserver: Remove ServiceToggle during ServerDispose
util: Introduce virThreadPoolDrain
rpc: Introduce virNetServerQuitRequested
rpc: Introduce virNetServerWorkerCount
rpc: Alter virNetDaemonQuit processing
docs: Add news article for libvirtd issue
APPLY ONLY FOR TESTING PURPOSES
daemon/libvirtd.c | 43 +++++++++++++++++++++++---------
docs/news.xml | 12 +++++++++
src/libvirt_private.syms | 1 +
src/libvirt_remote.syms | 2 ++
src/qemu/qemu_driver.c | 5 ++++
src/rpc/virnetdaemon.c | 45 +++++++++++++++++++++++++++++++++-
src/rpc/virnetserver.c | 52 ++++++++++++++++++++++++++++++++++++---
src/rpc/virnetserver.h | 4 +++
src/util/virthreadpool.c | 64 ++++++++++++++++++++++++++++++++++++++++--------
src/util/virthreadpool.h | 2 ++
10 files changed, 204 insertions(+), 26 deletions(-)
--
2.13.6
[libvirt] [PATCH] util: return generic error in virCopyLastError if error is not set
by Nikolay Shirokovskiy
virCopyLastError is intended to be used after the last error is set.
However, due to virLastErrorObject failures (very unlikely though,
as the thread-local error is allocated on first use) we can end up
with all-zero fields in the copy. In particular, the code field can
be set to VIR_ERR_OK.
In some places (the qemu monitor, qemu agent and qemu migration code,
for example) we use the copied result as a flag, and this leads to bugs.
Let's set the copy to a generic error ("cause unknown") in this case.
The approach is based on John Ferlan's comments in [1] and [2] on an
initial attempt that fixed the issue in a much less generic way.
[1] https://www.redhat.com/archives/libvir-list/2018-April/msg02700.html
[2] https://www.redhat.com/archives/libvir-list/2018-May/msg00124.html
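The problematic caller pattern looks like this; with an all-zero copy
the error branch is silently skipped:

    virError err;

    virCopyLastError(&err);
    if (err.code != VIR_ERR_OK) {
        /* error handling that should run is never reached when the
         * copy came back zeroed -- the bug this patch addresses */
    }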
---
src/util/virerror.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/util/virerror.c b/src/util/virerror.c
index c000b00..9f158af 100644
--- a/src/util/virerror.c
+++ b/src/util/virerror.c
@@ -343,7 +343,7 @@ virCopyLastError(virErrorPtr to)
if (err)
virCopyError(err, to);
else
- virResetError(to);
+ virErrorGenericFailure(to);
return to->code;
}
--
1.8.3.1
Re: [libvirt] [Qemu-devel] [PATCH v7 1/3] qmp: adding 'wakeup-suspend-support' in query-target
by Markus Armbruster
Cc'ing a few more people.
Daniel Henrique Barboza <danielhb@linux.ibm.com> writes:
> When issuing the qmp/hmp 'system_wakeup' command, what happens in a
> nutshell is:
>
> - qmp_system_wakeup_request sets the runstate to RUNNING, sets a wakeup_reason
> and notifies the event
> - in the main_loop, all vcpus are paused, a system reset is issued, all
> subscribers of wakeup_notifiers receive a notification, vcpus are then
> resumed and the wake up QAPI event is fired
>
> Note that this procedure alone doesn't ensure that the guest will awake
> from SUSPENDED state - the subscribers of the wake up event must take
> action to resume the guest, otherwise the guest will simply reboot.
>
> At this moment there are only two subscribers of the wake up event: one
> in hw/acpi/core.c and another one in hw/i386/xen/xen-hvm.c. This means
> that system_wakeup does not work as intended with other architectures.
>
> However, only the presence of 'system_wakeup' is required for QGA to
> support 'guest-suspend-ram' and 'guest-suspend-hybrid' at this moment.
> This means that the user/management will expect to suspend the guest using
> one of those suspend commands and then resume execution using system_wakeup,
> regardless of the support offered in system_wakeup in the first place.
>
> This patch adds a new flag called 'wakeup-suspend-support' in TargetInfo
> that allows the caller to query if the guest supports wake up from
> suspend via system_wakeup. It goes over the subscribers of the wake up
> event and, if it's empty, it assumes that the guest does not support
> wake up from suspend (and thus, pm-suspend itself).
>
> This is the expected output of query-target when running an x86 guest:
>
> {"execute" : "query-target"}
> {"return": {"arch": "x86_64", "wakeup-suspend-support": true}}
>
> This is the output when running a pseries guest:
>
> {"execute" : "query-target"}
> {"return": {"arch": "ppc64", "wakeup-suspend-support": false}}
>
> Given that the TargetInfo structure is read-only, adding a new flag to
> it is backwards compatible. There is no need to deprecate the old
> TargetInfo format.
>
> With this extra tool, management can avoid situations where a guest
> that does not have proper suspend/wake capabilities ends up in
> inconsistent state (e.g.
> https://github.com/open-power-host-os/qemu/issues/31).
>
> Reported-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
> Signed-off-by: Daniel Henrique Barboza <danielhb@linux.ibm.com>
Is query-target the right place to carry this flag? v7 is rather
late for this kind of question; my sincere apologies.
The flag is true after qemu_register_wakeup_notifier(). Callers so far:
* piix4_pm_realize() via acpi_pm1_cnt_init()
This is the realize method of device PIIX4_PM. It's an optional
onboard device (suppressed by -no-acpi) of machine types
pc-i440fx-VERSION, pc-VERSION, malta.
* pc_q35_init() via ich9_lpc_pm_init(), ich9_pm_init(),
acpi_pm1_cnt_init()
This is the initialization method of machine types pc-q35-VERSION.
Note that -no-acpi is not honored.
* vt82c686b_pm_realize() via acpi_pm1_cnt_init()
This is the realize method of device VT82C686B_PM. It's an onboard
device of machine type fulong2e. Again, -no-acpi is not honored.
* xen_hvm_init()
This one gets called with -accel xen. I suspect the actual callback
xen_wakeup_notifier() doesn't actually make wakeup work, unlike the
acpi_notify_wakeup() callback registered by the other callers.
Issue#1: this calls into question your assumption that the existence
of a wake-up notifier implies wake-up works. It still holds if
--accel xen is only accepted together with other configuration that
triggers registration of acpi_notify_wakeup(). Is it? Stefano,
Anthony?
Issue#2: the flag isn't a property of the target. Due to -no-acpi, it's
not even a property of the machine type. If it was, query-machines
would be the natural owner of the flag.
Perhaps query-machines is still the proper owner. The value of
wakeup-suspend-support would have to depend on -no-acpi for the machine
types that honor it. Not ideal; I'd prefer MachineInfo to be static.
Tolerable? I guess that's also a libvirt question.
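For reference, the check described above presumably boils down to
testing whether the wakeup notifier list in vl.c is empty; sketched
here (names approximate, not necessarily the exact patch):

    /* vl.c sketch: wakeup_notifiers is the NotifierList that
     * qemu_register_wakeup_notifier() appends to. */
    bool qemu_wakeup_suspend_enabled(void)
    {
        return !QLIST_EMPTY(&wakeup_notifiers.notifiers);
    }

qmp_query_target() would then copy that value into the new
wakeup-suspend-support member of TargetInfo.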
[libvirt] Call to virDomainIsActive hangs forever
by Mathieu Tarral
Hi!
I'm writing to this mailing list to request a bit of help with a case
I have where a Python application makes a call to virDomainIsActive,
and the call never returns.
I have tried to investigate, but as there are no debug symbols for
libvirt on Debian Stretch, I can only get the following GDB backtrace:
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 () at
../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f49026f5b76 in virCondWait () from /usr/lib/libvirt.so.0
#2 0x00007f4902808bab in ?? () from /usr/lib/libvirt.so.0
#3 0x00007f490280a433 in virNetClientSendWithReply () from
/usr/lib/libvirt.so.0
#4 0x00007f490280abe2 in virNetClientProgramCall () from /usr/lib/libvirt.so.0
#5 0x00007f49027e0ea4 in ?? () from /usr/lib/libvirt.so.0
#6 0x00007f49027ea1bb in ?? () from /usr/lib/libvirt.so.0
#7 0x00007f49027b0ef3 in virDomainIsActive () from /usr/lib/libvirt.so.0
#8 0x00007f4902b7fbd0 in libvirt_virDomainIsActive () from
/usr/lib/python3/dist-packages/libvirtmod.cpython-35m-x86_64-linux-gnu.so
#9 0x0000558eeec696df in PyCFunction_Call () at ../Objects/methodobject.c:109
The libvirt driver used is QEMU, and I have specific monitoring in
place using virtual machine introspection:
https://github.com/KVM-VMI/kvm-vmi
Now this specific monitoring somehow triggers this bug, and at this
point, I don't know if it's a corner case in the libvirt QEMU driver
or not. That's why I would like your insight on this.
libvirt version: 3.0.0-4
-> Could you tell me where I should look in the code?
-> Do you have more information about this virCondWait? Which
condition is it waiting for?
-> How can I get the symbols without having to recompile libvirt and
install it system-wide, erasing the binaries installed by the package?
Best regards,
--
Mathieu Tarral
[libvirt] [RFC v3] external (pull) backup API
by Eric Blake
Here's my updated counterproposal for a backup API.
In comparison to v2 posted by Nikolay:
https://www.redhat.com/archives/libvir-list/2018-April/msg00115.html
- changed terminology a bit: Nikolay's "BlockSnapshot" is now called a
"Checkpoint", and "BlockExportStart/Stop" is now "BackupBegin/End"
- flesh out more API descriptions
- better documentation of proposed XML, for both checkpoints and backup
Barring any major issues turned up during review, I've already started
to code this into libvirt with a goal of getting an implementation ready
for review this month.
Each domain will gain the ability to track a tree of Checkpoint
objects (we've previously mentioned the term "system checkpoint" in
the <domainsnapshot> XML as the combination of disk and RAM state; so
I'll use the term "disk checkpoint" in prose as needed, to make it
obvious that the checkpoints described here do not include RAM state).
I will use the virDomainSnapshot API as a guide, meaning that we will
track a tree of checkpoints where each checkpoint can have 0 or 1
parent checkpoints, in part because I plan to reuse a lot of the
snapshot code as a starting point for implementing checkpoint
tracking.
Qemu does NOT track a relationship between internal snapshots, so
libvirt has to manage the backing tree all by itself; by the same
argument, if qemu does not add a parent relationship to dirty bitmaps,
libvirt can probably manage everything itself by copying how it
manages parent relationships between internal snapshots. However, I
think it will be far easier for libvirt to exploit qemu dirty bitmaps
if qemu DOES add bitmap tracking; particularly if qemu adds ways to
easily compose a temporary bitmap that is the union of one bitmap plus
a fixed number of its parents.
Design-wise, libvirt will manage things so that there is only one
enabled dirty-bitmap per qcow2 image at a time, when no backup
operation is in effect. There is a notion of a current (or most
recent) checkpoint; when a new checkpoint is created, that becomes the
current one and the former checkpoint becomes the parent of the new
one. If there is no current checkpoint, then there is no active dirty
bitmap managed by libvirt.
Representing things on a timeline, when a guest is first created,
there is no dirty bitmap; later, the checkpoint "check1" is created,
which in turn creates "bitmap1" in the qcow2 image for all changes
past that point; when a second checkpoint "check2" is created, a qemu
transaction is used to create and enable the new "bitmap2" bitmap at
the same time as disabling "bitmap1" bitmap. (Actually, it's probably
easier to name the bitmap in the qcow2 file with the same name as the
Checkpoint object being tracked in libvirt, but for discussion
purposes, it's less confusing if I use separate names for now.)
creation ....... check1 ....... check2 ....... active
     no bitmap        bitmap1         bitmap2
When a user wants to create a backup, they select which point in time
the backup starts from; the default value NULL represents a full
backup (all content since disk creation to the point in time of the
backup call, no bitmap is needed, use sync=full for push model or
sync=none for the pull model); any other value represents the name of
a checkpoint to use as an incremental backup (all content from the
checkpoint to the point in time of the backup call; libvirt forms a
temporary bitmap as needed, then uses sync=incremental for push model
or sync=none plus exporting the bitmap for the pull model). For
example, requesting an incremental backup from "check2" can just reuse
"bitmap2", but requesting an incremental backup from "check1" requires
the computation of the bitmap containing the union of "bitmap1" and
"bitmap2".
Libvirt will always create a new bitmap when starting a backup
operation, whether or not the user requests that a checkpoint be
created. Most users that want incremental backup sequences will
create a new checkpoint every time they do a backup; the new bitmap
that libvirt creates is then associated with that new checkpoint, and
even after the backup operation completes, the new bitmap remains in
the qcow2 file. But it is also possible to request a backup without a
new checkpoint (it merely means that it is not possible to create a
subsequent incremental backup from the backup just started); in that
case, libvirt will have to take care of merging the new bitmap back
into the previous one at the end of the backup operation.
I think that it should be possible to run multiple backup operations
in parallel in the long run. But in the interest of getting a proof
of concept implementation out quickly, it's easier to state that for
the initial implementation, libvirt supports at most one backup
operation at a time (to do another backup, you have to wait for the
current one to complete, or else abort and abandon the current
one). As there is only one backup job running at a time, the existing
virDomainGetJobInfo()/virDomainGetJobStats() will be able to report
statistics about the job (insofar as such statistics are available).
But in preparation for the future, when libvirt does add parallel job
support, starting a backup job will return a job id; and presumably
we'd add a new virDomainGetJobStatsByID() for grabbing statistics of
an arbitrary (rather than the most-recently-started) job.
Since live migration also acts as a job visible through
virDomainGetJobStats(), I'm going to treat an active backup job and
live migration as mutually exclusive. This is particularly true when
we have a pull model backup ongoing: if qemu on the source is acting
as an NBD server, you can't migrate away from that qemu and tell the
NBD client to reconnect to the NBD server on the migration
destination. So, to perform a migration, you have to cancel any
pending backup operations. Conversely, if a migration job is
underway, it will not be possible to start a new backup job until
migration completes. However, we DO need to modify migration to
ensure that any persistent bitmaps are migrated.
I also think that in the long run, it should be possible to start a
backup operation, and while it is still ongoing, create a new external
snapshot, and still be able to coordinate the transfer of bitmaps from
the old image to the new overlay. But for the first implementation,
it's probably easiest to state that an ongoing backup prevents
creation of a new snapshot. However, a current checkpoint (which
means we DO have an active bitmap, even if there is no active backup)
DOES need to be transferred to the new overlay, and conversely, a block
commit job needs to merge all bitmaps from the old overlay to the
backing file that is now becoming the active layer again. I don't
know if qemu has primitives for this in place yet; and if it does not,
the only conservative thing we can do in the initial implementation is
to state that the use of checkpoints is exclusive from the use of
snapshots (using one prevents the use of the other). Hopefully we
don't have to stay in that state for long.
For now, a user wanting guest I/O to be at a safe point can manually
use virDomainFSFreeze()/virDomainBackupBegin()/virDomainFSThaw(); we
may decide down the road to use the flags argument of
virDomainBackupBegin() to provide automatic guest quiescing through one
API (I'm not doing it right away, because we have to worry about
undoing effects if we fail to thaw after starting the backup).
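A manually quiesced backup would therefore look roughly like this (a
sketch against the proposed API; diskXml/checkpointXml are as described
below, and recovery from a failed thaw is deliberately elided):

    int jobid;

    if (virDomainFSFreeze(dom, NULL, 0, 0) < 0)
        return -1;

    /* the point-in-time state is captured here, while guest I/O is
     * quiesced; a negative return means the job failed to start */
    jobid = virDomainBackupBegin(dom, diskXml, checkpointXml, 0);

    /* thaw right away; the backup job keeps running against the
     * point-in-time view and is concluded later by virDomainBackupEnd() */
    virDomainFSThaw(dom, NULL, 0, 0);

    return jobid;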
So, to summarize, creating a backup will involve the following new APIs:
/**
* virDomainBackupBegin:
* @domain: a domain object
* @diskXml: description of storage to utilize and expose during
* the backup, or NULL
* @checkpointXml: description of a checkpoint to create, or NULL
* @flags: not used yet, pass 0
*
* Start a point-in-time backup job for the specified disks of a
* running domain.
*
* A backup job is mutually exclusive with domain migration
* (particularly when the job sets up an NBD export, since it is not
* possible to tell any NBD clients about a server migrating between
* hosts). For now, backup jobs are also mutually exclusive with any
* other block job on the same device, although this restriction may
* be lifted in a future release. Progress of the backup job can be
* tracked via virDomainGetJobStats(). The job remains active until a
* subsequent call to virDomainBackupEnd(), even if it no longer has
* anything to copy.
*
* There are two fundamental backup approaches. The first, called a
* push model, instructs the hypervisor to copy the state of the guest
* disk to the designated storage destination (which may be on the
* local file system or a network device); in this mode, the
* hypervisor writes the content of the guest disk to the destination,
* then emits VIR_DOMAIN_EVENT_ID_BLOCK_JOB_2 when the backup is
* either complete or failed (the backup image is invalid if the job
* is ended prior to the event being emitted). The second, called a
* pull model, instructs the hypervisor to expose the state of the
* guest disk over an NBD export; a third-party client can then
* connect to this export, and read whichever portions of the disk it
* desires. In this mode, there is no event; libvirt has to be
* informed when the third-party NBD client is done and the backup
* resources can be released.
*
* The @diskXml parameter is optional but usually provided, and
* contains details about the backup, including which backup mode to
* use, whether the backup is incremental from a previous checkpoint,
* which disks participate in the backup, the destination for a push
* model backup, and the temporary storage and NBD server details for
* a pull model backup. If omitted, the backup attempts to default to
* a push mode full backup of all disks, where libvirt generates a
* filename for each disk by appending a suffix of a timestamp in
* seconds since the Epoch. virDomainBackupGetXMLDesc() can be called
* to see the actual values selected. For more information, see
* formatcheckpoint.html#BackupAttributes.
*
* The @checkpointXml parameter is optional; if non-NULL, then libvirt
* behaves as if virDomainCheckpointCreateXML() were called with
* @checkpointXml, atomically covering the same guest state that will
* be part of the backup. The creation of a new checkpoint allows for
* future incremental backups.
*
* Returns a non-negative job id on success, or negative on failure.
* This operation returns quickly, such that a user can choose to
* start a backup job between virDomainFSFreeze() and
* virDomainFSThaw() in order to create the backup while guest I/O is
* quiesced.
*/
int virDomainBackupBegin(virDomainPtr domain, const char *diskXml,
const char *checkpointXml, unsigned int flags);
Note that this layout says that all disks participating in the backup
job share the same incremental checkpoint as their starting point
(no way to have one backup job where disk A copies data since check1
while disk B copies data since check2). If we need the latter, then
we could get rid of the 'incremental' parameter, and instead have each
<disk> element within checkpointXml call out an optional <checkpoint>
name as its starting point. Also, qemu supports exposing multiple
disks through a single NBD server (you then connect multiple clients
to the one server to grab state from each disk). So the NBD details
are listed in parallel to the <disks>. Note that since a backup is
NOT a guest-visible action, the backup job does not alter the normal
<domain> XML.
/**
* virDomainBackupGetXMLDesc:
* @domain: a domain object
* @id: the id of an active backup job previously started with
* virDomainBackupBegin()
* @flags: not used yet, pass 0
*
* In some cases, a user can start a backup job without supplying all
* details, and rely on libvirt to fill in the rest (for example,
* selecting the port used for an NBD export). This API can then be
* used to learn what default values were chosen.
*
* Returns a NUL-terminated UTF-8 encoded XML instance, or NULL in
* case of error. The caller must free() the returned value.
*/
char *
virDomainBackupGetXMLDesc(virDomainPtr domain, int id,
unsigned int flags);
/**
* virDomainBackupEnd:
* @domain: a domain object
* @id: the id of an active backup job previously started with
* virDomainBackupBegin()
* @flags: bitwise-OR of supported virDomainBackupEndFlags
*
* Conclude a point-in-time backup job @id on the given domain.
*
* If the backup job uses the push model, but the event marking that
* all data has been copied has not yet been emitted, then the command
* fails unless @flags includes VIR_DOMAIN_BACKUP_END_ABORT. If the
* event has been issued, or if the backup uses the pull model, the
* flag has no effect.
*
* Returns 0 on success and -1 on failure.
*/
int virDomainBackupEnd(virDomainPtr domain, int id, unsigned int flags);
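A push-model caller, after seeing the completion event (or after
deciding to give up), would thus end the job along these lines (sketch):

    if (virDomainBackupEnd(dom, jobid, 0) < 0) {
        /* push model, and the data-copied event has not been emitted
         * yet: a plain end fails, so abort and abandon this backup */
        virDomainBackupEnd(dom, jobid, VIR_DOMAIN_BACKUP_END_ABORT);
    }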
/**
* virDomainCheckpointCreateXML:
* @domain: a domain object
* @xmlDesc: description of the checkpoint to create
* @flags: bitwise-OR of supported virDomainCheckpointCreateFlags
*
* Create a new checkpoint using @xmlDesc on a running @domain.
* Typically, it is more common to create a new checkpoint as part of
* kicking off a backup job with virDomainBackupBegin(); however, it
* is also possible to start a checkpoint without a backup.
*
* See formatcheckpoint.html#CheckpointAttributes document for more
* details on @xmlDesc.
*
* If @flags includes VIR_DOMAIN_CHECKPOINT_CREATE_REDEFINE, then this
* is a request to reinstate checkpoint metadata that was previously
* discarded, rather than creating a new checkpoint. When redefining
* checkpoint metadata, the current checkpoint will not be altered
* unless the VIR_DOMAIN_CHECKPOINT_CREATE_CURRENT flag is also
* present. It is an error to request the
* VIR_DOMAIN_CHECKPOINT_CREATE_CURRENT flag without
* VIR_DOMAIN_CHECKPOINT_CREATE_REDEFINE.
*
* If @flags includes VIR_DOMAIN_CHECKPOINT_CREATE_NO_METADATA, then
* the domain's disk images are modified according to @xmlDesc, but
* then the just-created checkpoint has its metadata deleted. This
* flag is incompatible with VIR_DOMAIN_CHECKPOINT_CREATE_REDEFINE.
*
* Returns an (opaque) new virDomainCheckpointPtr on success, or NULL
* on failure.
*/
virDomainCheckpointPtr
virDomainCheckpointCreateXML(virDomainPtr domain, const char *xmlDesc,
unsigned int flags);
/**
* virDomainCheckpointDelete:
* @checkpoint: the checkpoint to remove
* @flags: bitwise-OR of supported virDomainCheckpointDeleteFlags
*
* Removes a checkpoint from the domain.
*
* When removing a checkpoint, the record of which portions of the
* disk were dirtied after the checkpoint will be merged into the
* record tracked by the parent checkpoint, if any. Likewise, if the
* checkpoint being deleted was the current checkpoint, the parent
* checkpoint becomes the new current checkpoint.
*
* If @flags includes VIR_DOMAIN_CHECKPOINT_DELETE_METADATA_ONLY, then
* any checkpoint metadata tracked by libvirt is removed while keeping
* the checkpoint contents intact; if a hypervisor does not require
* any libvirt metadata to track checkpoints, then this flag is
* silently ignored.
*
* Returns 0 on success, -1 on error.
*/
int
virDomainCheckpointDelete(virDomainCheckpointPtr checkpoint,
unsigned int flags);
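Putting these together, a standalone checkpoint lifecycle would look
like this (a sketch using the proposed calls; the inline XML matches
the examples further below):

    virDomainCheckpointPtr chk;

    chk = virDomainCheckpointCreateXML(dom,
                                       "<domaincheckpoint>"
                                       "<name>pre-update</name>"
                                       "</domaincheckpoint>", 0);
    if (!chk)
        return -1;

    /* ... later, take incremental backups whose <incremental>
     * element names "pre-update" ... */

    /* merge its bitmap into the parent's and drop the checkpoint */
    if (virDomainCheckpointDelete(chk, 0) < 0)
        goto error;
    virDomainCheckpointFree(chk);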
// Many additional functions copying heavily from virDomainSnapshot*:
virDomainCheckpointList(virDomainPtr domain,
virDomainCheckpointPtr **checkpoints,
unsigned int flags);
virDomainCheckpointGetXMLDesc(virDomainCheckpointPtr checkpoint,
unsigned int flags);
virDomainCheckpointPtr
virDomainCheckpointLookupByName(virDomainPtr domain,
const char *name,
unsigned int flags);
const char *
virDomainCheckpointGetName(virDomainCheckpointPtr checkpoint);
virDomainPtr
virDomainCheckpointGetDomain(virDomainCheckpointPtr checkpoint);
virConnectPtr
virDomainCheckpointGetConnect(virDomainCheckpointPtr checkpoint);
int
virDomainHasCurrentCheckpoint(virDomainPtr domain, unsigned int flags);
virDomainCheckpointPtr
virDomainCheckpointCurrent(virDomainPtr domain, unsigned int flags);
virDomainCheckpointPtr
virDomainCheckpointGetParent(virDomainCheckpointPtr checkpoint,
unsigned int flags);
int
virDomainCheckpointIsCurrent(virDomainCheckpointPtr checkpoint,
unsigned int flags);
int
virDomainCheckpointRef(virDomainCheckpointPtr checkpoint);
int
virDomainCheckpointFree(virDomainCheckpointPtr checkpoint);
int
virDomainCheckpointListChildren(virDomainCheckpointPtr checkpoint,
virDomainCheckpointPtr **children,
unsigned int flags);
Notably, none of the older racy list functions, like
virDomainSnapshotNum, virDomainSnapshotNumChildren, or
virDomainSnapshotListChildrenNames; also, for now, there is no revert
support like virDomainSnapshotRevert.
Eventually, if we add a way to roll back to the state recorded in an
earlier bitmap, we'll want to tell libvirt that it needs to create a
new bitmap as a child of an existing (non-current) checkpoint. That
is, if we have:
check1 .... check2 .... active
     bitmap1      bitmap2
and created a backup at the same time as check2, then when we later
roll back to the state of that backup, we would want to end writes to
bitmap2 and declare that check2 is no longer current, and create a new
current check3 with associated bitmap3 and parent check1 to track all
writes since the point of the revert. Until then, I don't think it's
possible to have more than one child without manually using the
REDEFINE flag to create such scenarios; but the API should not lock us
out of supporting multiple children in the future.
Here's my proposal for user-facing XML documentation, based on
formatsnapshot.html.in:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>Checkpoint and Backup XML format</h1>
<ul id="toc"></ul>
<h2><a id="CheckpointAttributes">Checkpoint XML</a></h2>
<p>
Libvirt is able to facilitate incremental backups by tracking
disk checkpoints, or points in time against which it is easy to
compute which portion of the disk has changed. Given a full
backup (a backup created from the creation of the disk to a
given point in time, coupled with the creation of a disk
checkpoint at that time), and an incremental backup (a backup
created from just the dirty portion of the disk between the
first checkpoint and the second backup operation), it is
possible to do an offline reconstruction of the state of the
disk at the time of the second backup, without having to copy as
much data as a second full backup would require. Most disk
checkpoints are created in concert with a backup,
via <code>virDomainBackupBegin()</code>; however, libvirt also
exposes enough support to create disk checkpoints independently
from a backup operation,
via <code>virDomainCheckpointCreateXML()</code>.
</p>
<p>
Attributes of libvirt checkpoints are stored as child elements of
the <code>domaincheckpoint</code> element. At checkpoint creation
time, normally only the <code>name</code>, <code>description</code>,
and <code>disks</code> elements are settable; the rest of the
fields are ignored on creation, and will be filled in by
libvirt for informational purposes, as reported
by <code>virDomainCheckpointGetXMLDesc()</code>. However, when
redefining a checkpoint,
with the <code>VIR_DOMAIN_CHECKPOINT_CREATE_REDEFINE</code> flag
of <code>virDomainCheckpointCreateXML()</code>, all of the XML
described here is relevant.
</p>
<p>
Checkpoints are maintained in a hierarchy. A domain can have a
current checkpoint, which is the most recent checkpoint compared to
the current state of the domain (although a domain might have
checkpoints without a current checkpoint, if checkpoints have been
deleted in the meantime). Creating or reverting to a checkpoint
sets that checkpoint as current, and the prior current checkpoint is
the parent of the new checkpoint. Branches in the hierarchy can
be formed by reverting to a checkpoint with a child, then creating
another checkpoint.
</p>
<p>
The top-level <code>domaincheckpoint</code> element may contain
the following elements:
</p>
<dl>
<dt><code>name</code></dt>
<dd>The name for this checkpoint. If the name is specified when
initially creating the checkpoint, then the checkpoint will have
that particular name. If the name is omitted when initially
creating the checkpoint, then libvirt will make up a name for
the checkpoint, based on the time when it was created.
</dd>
<dt><code>description</code></dt>
<dd>A human-readable description of the checkpoint. If the
description is omitted when initially creating the checkpoint,
then this field will be empty.
</dd>
<dt><code>disks</code></dt>
<dd>On input, this is an optional listing of specific
instructions for disk checkpoints; it is needed when making a
checkpoint on only a subset of the disks associated with a
domain (in particular, since qemu checkpoints require qcow2
disks, this element may be needed on input for excluding guest
disks that are not in qcow2 format); if omitted on input, then
all disks participate in the checkpoint. On output, this is
fully populated to show the state of each disk in the
checkpoint. This element has a list of <code>disk</code>
sub-elements, describing anywhere from one to all of the disks
associated with the domain.
<dl>
<dt><code>disk</code></dt>
<dd>This sub-element describes the checkpoint properties of
a specific disk. The attribute <code>name</code> is
mandatory, and must match either the <code><target
dev='name'/></code> or an unambiguous <code><source
file='name'/></code> of one of
the <a href="formatdomain.html#elementsDisks">disk
devices</a> specified for the domain at the time of the
checkpoint. The attribute <code>checkpoint</code> is
optional on input; possible values are <code>no</code>
when the disk does not participate in this checkpoint;
or <code>bitmap</code> if the disk will track all changes
since the creation of this checkpoint via a bitmap, in
which case another attribute <code>bitmap</code> will be
the name of the tracking bitmap (defaulting to the
checkpoint name).
</dd>
</dl>
</dd>
<dt><code>creationTime</code></dt>
<dd>The time this checkpoint was created. The time is specified
in seconds since the Epoch, UTC (i.e. Unix time). Readonly.
</dd>
<dt><code>parent</code></dt>
<dd>The parent of this checkpoint. If present, this element
contains exactly one child element, name. This specifies the
name of the parent checkpoint of this one, and is used to
represent trees of checkpoints. Readonly.
</dd>
<dt><code>domain</code></dt>
<dd>The inactive <a href="formatdomain.html">domain
configuration</a> at the time the checkpoint was created.
Readonly.
</dd>
</dl>
<h2><a id="BackupAttributes">Backup XML</a></h2>
<p>
Creating a backup, whether full or incremental, is done
via <code>virDomainBackupBegin()</code>, which takes an XML
description of the actions to perform. There are two general
modes for backups: a push mode (where the hypervisor writes out
the data to the destination file, which may be local or remote),
and a pull mode (where the hypervisor creates an NBD server that
a third-party client can then read as needed, and which requires
the use of temporary storage, typically local, until the backup
is complete).
</p>
<p>
The instructions for beginning a backup job are provided as
attributes and elements of the
top-level <code>domainbackup</code> element. This element
includes an optional attribute <code>mode</code> which can be
either "push" or "pull" (default push). Where elements are
optional on creation, <code>virDomainBackupGetXMLDesc()</code>
can be used to see the actual values selected (for example,
learning which port the NBD server is using in the pull model,
or what file names libvirt generated when none were supplied).
The following child elements are supported:
</p>
<dl>
<dt><code>incremental</code></dt>
<dd>Optional. If this element is present, it must name an
existing checkpoint of the domain, which will be used to make
this backup an incremental one (in the push model, only
changes since the checkpoint are written to the destination;
in the pull model, the NBD server uses the
NBD_OPT_SET_META_CONTEXT extension to advertise to the client
which portions of the export contain changes since the
checkpoint). If omitted, a full backup is performed.
</dd>
<dt><code>server</code></dt>
<dd>Present only for a pull mode backup. Contains the same
attributes as the <code>protocol</code> element of a disk
attached via NBD in the domain (such as transport, socket,
name, port, or tls), necessary to set up an NBD server that
exposes the content of each disk at the time the backup
started.
</dd>
<dt><code>disks</code></dt>
<dd>This is an optional listing of instructions for disks
participating in the backup (if omitted, all disks
participate, and libvirt attempts to generate filenames by
appending the current timestamp as a suffix). When provided on
input, disks omitted from the list do not participate in the
backup. On output, the list is present but contains only the
disks participating in the backup job. This element has a
list of <code>disk</code> sub-elements, describing anywhere
from one to all of the disks associated with the domain.
<dl>
<dt><code>disk</code></dt>
<dd>This sub-element describes the backup properties of
a specific disk. The attribute <code>name</code> is
mandatory, and must match either the <code><target
dev='name'/></code> or an unambiguous <code><source
file='name'/></code> of one of
the <a href="formatdomain.html#elementsDisks">disk
devices</a> specified for the domain at the time of the
backup. The optional attribute <code>type</code> can
be <code>file</code>, <code>block</code>,
or <code>network</code>, similar to a disk declaration
for a domain, and controls what additional sub-elements are
needed to describe the destination (such
as <code>protocol</code> for a network destination). In
push mode backups, the primary sub-element
is <code>target</code>; in pull mode, the primary sub-element
is <code>scratch</code>; but either way,
the primary sub-element describes the file name to be used
during the backup operation, similar to
the <code>source</code> sub-element of a domain disk. An
optional sub-element <code>driver</code> can also be used to
specify a destination format different from qcow2.
</dd>
</dl>
</dd>
</dl>
<h2><a id="example">Examples</a></h2>
<p>Using this XML to create a checkpoint of just vda on a qemu
domain with two disks and a prior checkpoint:</p>
<pre>
<domaincheckpoint>
<description>Completion of updates after OS
install</description>
<disks>
<disk name='vda' checkpoint='bitmap'/>
<disk name='vdb' checkpoint='no'/>
</disks>
</domaincheckpoint></pre>
<p>will result in XML similar to this from
<code>virDomainCheckpointGetXMLDesc()</code>:</p>
<pre>
<domaincheckpoint>
<name>1525889631</name>
<description>Completion of updates after OS
install</description>
<creationTime>1525889631</creationTime>
<parent>
<name>1525111885</name>
</parent>
<disks>
<disk name='vda' checkpoint='bitmap' bitmap='1525889631'/>
<disk name='vdb' checkpoint='no'/>
</disks>
<domain>
<name>fedora</name>
<uuid>93a5c045-6457-2c09-e56c-927cdf34e178</uuid>
<memory>1048576</memory>
...
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/path/to/file1'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk' snapshot='external'>
<driver name='qemu' type='raw'/>
<source file='/path/to/file2'/>
<target dev='vdb' bus='virtio'/>
</disk>
...
</devices>
</domain>
</domaincheckpoint></pre>
<p>With that checkpoint created, the qcow2 image is now tracking
all changes that occur in the image since the checkpoint via
the persistent bitmap named <code>1525889631</code>. Now, we
can make a subsequent call
to <code>virDomainBackupBegin()</code> to perform an incremental
backup of just this data, using the following XML to start a
pull model NBD export of the vda disk:
</p>
<pre>
<domainbackup mode="pull">
<incremental>1525889631</incremental>
<server transport="unix" socket="/path/to/server"/>
<disks>
<disk name='vda' type='file'>
<scratch file='/path/to/file1.scratch'/>
</disk>
</disks>
</domainbackup>
</pre>
</body>
</html>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
[libvirt] Likely build race, "/usr/bin/ld: cannot find -lvirt"
by Ian Jackson
tl;dr:
I think there is a bug in libvirt's build system which, with
low probability, causes a build failure containing this message:
/usr/bin/ld: cannot find -lvirt
Complete build logs of two attempts:
http://logs.test-lab.xenproject.org/osstest/logs/123046/build-i386-libvir...
http://logs.test-lab.xenproject.org/osstest/logs/123096/build-i386-libvir...
Snippet from 123046 containing the error is enclosed below.
Longer explanation:
I have two new machines for the Xen Project CI, which I am trying to
commission. As part of commissioning I run a complete test run (a
"flight" in osstest terminology) on just those new hosts. The i386
libvirt build failed:
http://logs.test-lab.xenproject.org/osstest/logs/123046/build-i386-libvir...
Everything else that would be expected to work was fine. The test
programme was identical to flight 122815, except that that ran on
other hosts in the test farm (and, there, it passed). The error is
the kind of error one sees with missing dependencies in parallel
builds, etc.
I wanted to have some 32-bit libvirt tests actually run, so I reran a
new flight containing the relevant parts. That failed too in a very
similar way:
http://logs.test-lab.xenproject.org/osstest/logs/123096/build-i386-libvir...
The two machines are Dell R230s (and therefore hardly unusual). The
main novelty of these machines is that the firmware is UEFI booting in
UEFI mode. I doubt that has anything to do with it. The host,
including compiler, is Debian jessie i386.
As you can see from the log, we were trying to build libvirt
764a7483f189e6de841163647c14296e693dbb2e
What may be less obvious is that we were trying to build it against
xen.git#0306a1311d02ea52b4a9a9bc339f8bab9354c5e3.
http://logs.test-lab.xenproject.org/osstest/logs/123064/build-i386-libvir...
http://logs.test-lab.xenproject.org/osstest/logs/123046/build-i386/info.html
Does this seem like a likely explanation? Have other people
experienced occasional problems with make -j? If someone wants to
suggest a patch that might fix it, I can test it.
In the meantime I have set off a number of new attempts, to try to
guess the failure probability, and also one attempt on other hosts to
check that nothing unexpected was broken.
Ian.
/usr/bin/ld: cannot find -lvirt
/usr/bin/ld: cannot find -lvirt
/bin/mkdir -p '/home/osstest/build.123046.build-i386-libvirt/dist/usr/local/lib/libvirt/storage-backend'
/bin/bash ../libtool --mode=install /usr/bin/install -c libvirt_storage_backend_fs.la libvirt_storage_backend_logical.la libvirt_storage_backend_scsi.la libvirt_storage_backend_mpath.la '/home/osstest/build.123046.build-i386-libvirt/dist/usr/local/lib/libvirt/storage-backend'
libtool: install: warning: relinking `libvirt_storage_backend_fs.la'
libtool: install: (cd /home/osstest/build.123046.build-i386-libvirt/libvirt/src; /bin/bash /home/osstest/build.123046.build-i386-libvirt/libvirt/libtool --silent --tag CC --mode=relink gcc -std=gnu99 -I./conf -I/usr/include/libxml2 -fno-common -W -Waddress -Waggressive-loop-optimizations -Wall -Wattributes -Wbad-function-cast -Wbuiltin-macro-redefined -Wcast-align -Wchar-subscripts -Wclobbered -Wcomment -Wcomments -Wcoverage-mismatch -Wcpp -Wdate-time -Wdeprecated-declarations -Wdiv-by-zero -Wdouble-promotion -Wempty-body -Wendif-labels -Wextra -Wformat-contains-nul -Wformat-extra-args -Wformat-security -Wformat-y2k -Wformat-zero-length -Wfree-nonheap-object -Wignored-qualifiers -Wimplicit -Wimplicit-function-declaration -Wimplicit-int -Winit-self -Winline -Wint-to-pointer-cast -Winvalid-memory-model -Winvalid-pch -Wjump-misses-init -Wlogical-op -Wmain -Wmaybe-uninitialized -Wmemset-transposed-args -Wmissing-braces -Wmissing-declarations -Wmissing-field-initializers -Wmissing-include-dirs -Wmissing-parameter-type -Wmissing-prototypes -Wmultichar -Wnarrowing -Wnested-externs -Wnonnull -Wold-style-declaration -Wold-style-definition -Wopenmp-simd -Woverflow -Woverride-init -Wpacked-bitfield-compat -Wparentheses -Wpointer-arith -Wpointer-sign -Wpointer-to-int-cast -Wpragmas -Wpsabi -Wreturn-local-addr -Wreturn-type -Wsequence-point -Wshadow -Wsizeof-pointer-memaccess -Wstrict-aliasing -Wstrict-prototypes -Wsuggest-attribute=const -Wsuggest-attribute=format -Wsuggest-attribute=noreturn -Wsuggest-attribute=pure -Wswitch -Wsync-nand -Wtrampolines -Wtrigraphs -Wtype-limits -Wuninitialized -Wunknown-pragmas -Wunused -Wunused-but-set-parameter -Wunused-but-set-variable -Wunused-function -Wunused-label -Wunused-local-typedefs -Wunused-parameter -Wunused-result -Wunused-value -Wunused-variable -Wvarargs -Wvariadic-macros -Wvector-operation-performance -Wvolatile-register-var -Wwrite-strings -Wnormalized=nfc -Wno-sign-compare -Wjump-misses-init -Wswitch-enum -Wno-format-nonliteral -fstack-protector-strong -fexceptions -fasynchronous-unwind-tables -fipa-pure-const -Wno-suggest-attribute=pure -Wno-suggest-attribute=const -Werror -Wframe-larger-than=4096 -g -I/home/osstest/build.123046.build-i386-libvirt/xendist/usr/local/include/ -DLIBXL_API_VERSION=0x040400 -module -avoid-version -Wl,-z -Wl,nodelete -export-dynamic -Wl,-z -Wl,relro -Wl,-z -Wl,now -Wl,--no-copy-dt-needed-entries -g -L/home/osstest/build.123046.build-i386-libvirt/xendist/usr/local/lib/ -Wl,-rpath-link=/home/osstest/build.123046.build-i386-libvirt/xendist/usr/local/lib/ -o libvirt_storage_backend_fs.la -rpath /usr/local/lib/libvirt/storage-backend storage/libvirt_storage_backend_fs_la-storage_backend_fs.lo libvirt.la ../gnulib/lib/libgnu.la -ldl -inst-prefix-dir /home/osstest/build.123046.build-i386-libvirt/dist)
collect2: error: ld returned 1 exit status
Makefile:6410: recipe for target 'install-lockdriverLTLIBRARIES' failed
libtool: install: error: relink `lockd.la' with the above command before installing it
make[3]: *** [install-lockdriverLTLIBRARIES] Error 1
make[3]: *** Waiting for unfinished jobs....
/usr/bin/ld: cannot find -lvirt
[libvirt] [PATCH 00/12] Temporarily use other boot configuration
by Marc Hartmayer
This patch series implements a new API that allows us to temporarily
use another boot configuration than defined in the persistent domain
definition.
The s390 architecture knows only one boot device and therefore the
boot order settings doesn't work the way it would work on x86, for
example. If the first boot device fails to boot there is no trying to
boot from the next boot device. In addition, the architecture/BIOS has
no support for interactively changing the boot device during the
boot/IPL process.
Currently the API is implemented for the remote/QEMU/test driver and
for virsh.
It can be used as follows
$ virsh start {{DOMAIN}} --with-kernel {{KERNEL_FILE}} \
--with-initrd {{INITRD_FILE}} --with-cmdline {{CMDLINE}} \
--with-bootdevice {{DEVICE_IDENTIFIER}}
E.g.
$ virsh start dom_01 --with-bootdevice='vdb'
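Programmatically, those virsh options presumably map onto typed
parameters of the new virDomainCreateWithParams API; a sketch, where
the parameter name string is an assumption (the real constants are
defined in this series' libvirt-domain.h changes):

    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;

    /* "bootdevice" is a placeholder for the parameter name the
     * series actually defines */
    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                "bootdevice", "vdb") < 0)
        return -1;

    if (virDomainCreateWithParams(dom, params, nparams, 0) < 0) {
        virTypedParamsFree(params, nparams);
        return -1;
    }
    virTypedParamsFree(params, nparams);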
Marc Hartmayer (12):
virsh: Force boot emulation is only required for
virDomainCreateWithFlags
Introduce new domain create API virDomainCreateWithParams
remote: Add support for virDomainCreateWithParams
utils: Add virStringUpdate
conf: Add functions to change the boot configuration of a domain
qemu: Add the functionality to override the boot configuration
qemu: Add support for virDomainCreateWithParams
test: Implement virConnectSupportsFeature
test: Add support for virDomainCreateWithParams
tests: Add tests for virDomainCreateWithParams
virsh: Add with-{bootdevice,kernel,initrd,cmdline} options for start
docs: Add news entry for new API virDomainCreateWithParams
docs/news.xml | 11 ++
include/libvirt/libvirt-domain.h | 37 +++++
src/conf/domain_conf.c | 226 +++++++++++++++++++++++++++
src/conf/domain_conf.h | 11 ++
src/driver-hypervisor.h | 6 +
src/libvirt-domain.c | 64 ++++++++
src/libvirt_private.syms | 2 +
src/libvirt_public.syms | 4 +
src/qemu/qemu_driver.c | 83 ++++++++--
src/qemu/qemu_migration.c | 3 +-
src/qemu/qemu_process.c | 16 +-
src/qemu/qemu_process.h | 2 +
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 22 ++-
src/remote_protocol-structs | 12 ++
src/rpc/gendispatch.pl | 18 ++-
src/test/test_driver.c | 108 +++++++++++--
src/util/virstring.c | 27 ++++
src/util/virstring.h | 2 +
tests/objecteventtest.c | 321 +++++++++++++++++++++++++++++++++++++++
tools/virsh-domain.c | 136 ++++++++++++++---
tools/virsh.pod | 14 ++
22 files changed, 1064 insertions(+), 62 deletions(-)
--
2.13.4
[libvirt] [PATCH v2 00/21] nwfilter: refactor the driver to make it independent of virt drivers
by Daniel P. Berrangé
v1: https://www.redhat.com/archives/libvir-list/2018-April/msg02616.html
Today the nwfilter driver is entangled with the virt drivers in both
directions. At various times when rebuilding filters nwfilter will call
out to the virt driver to iterate over running guests' NICs. This has
caused very complicated lock ordering rules to be required. If we are to
split the virt drivers out into separate daemons we need to get rid of
this coupling since we don't want the separate daemons calling each
other, as that risks deadlock if all of the RPC workers are busy.
The obvious way to solve this is to have the nwfilter driver remember
all the filters it has active, avoiding the need to iterate over running
guests.
Easy parts of the v1 posting have already been merged. This v2 is much
more complete, though still not entirely ready for merge.
- The virNWFilterBindingPtr was renamed virNWFilterBindingDefPtr
- New virNWFilterBindingObjPtr & virNWFilterBindingObjListPtr
structs added to track the objects in the driver
- New virNWFilterBindingPtr public API type was added
- New public APIs for listing filter bindings, querying XML, and
creating/deleting them
- Convert the virt drivers to use the public API for creating
and deleting bindings
- Persist active bindings out to disk so they're preserved
across restarts
- Added RNG schema and XML-2-XML test
- New virsh commands for listing/querying XML/creating/deleting
bindings
Still todo
- Document the new XML format
- Run the nwfilter stress tests to see what I've undoubtedly broken
- Think about recording the NIC index in the virNWFilterBindingObjPtr
and persisting across restarts, so we can track if the NIC we had
previously used was deleted & recreated - in which case we can drop
the stale binding.
- Probably something else...
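To illustrate, the new public API would be driven along these lines (a
sketch; the binding XML is inferred from the schema and test file names
above and may not match the final format exactly):

    const char *xml =
        "<filterbinding>"
        "  <owner>"
        "    <name>guest1</name>"
        "    <uuid>12345678-1234-1234-1234-123456789abc</uuid>"
        "  </owner>"
        "  <portdev name='vnet0'/>"
        "  <mac address='52:54:00:12:34:56'/>"
        "  <filterref filter='clean-traffic'/>"
        "</filterbinding>";
    virNWFilterBindingPtr binding;

    if (!(binding = virNWFilterBindingCreateXML(conn, xml, 0)))
        return -1;

    /* ... traffic on vnet0 is now filtered ... */

    virNWFilterBindingDelete(binding);
    virNWFilterBindingFree(binding);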
Daniel P. Berrangé (21):
util: fix misleading comment for virObjectLock
conf: change virNWFilterBindingPtr to virNWFilterBindingDefPtr
conf: add missing virxml.h include for nwfilter_params.h
conf: move virNWFilterBindingDefPtr into its own files
conf: add support for parsing/formatting virNWFilterBindingDefPtr
schemas: add schema for nwfilter binding XML document
nwfilter: export port binding concept in the public API
access: add nwfilter binding object permissions
remote: add support for nwfilter binding objects
virsh: add nwfilter binding commands
nwfilter: convert the gentech driver code to use
virNWFilterBindingDefPtr
nwfilter: convert IP address learning code to virNWFilterBindingDefPtr
nwfilter: convert DHCP address snooping code to
virNWFilterBindingDefPtr
conf: report an error if nic needs filtering but no driver is present
conf: introduce a virNWFilterBindingObjPtr struct
conf: introduce a virNWFilterBindingObjListPtr struct
nwfilter: keep track of active filter bindings
nwfilter: remove virt driver callback layer for rebuilding filters
nwfilter: wire up new APIs for listing and querying filter bindings
nwfilter: wire up new APIs for creating and deleting nwfilter bindings
nwfilter: convert virt drivers to use public API for nwfilter bindings
docs/schemas/domaincommon.rng | 27 +-
docs/schemas/nwfilter.rng | 29 +-
docs/schemas/nwfilter_params.rng | 32 ++
docs/schemas/nwfilterbinding.rng | 49 ++
include/libvirt/libvirt-nwfilter.h | 39 ++
include/libvirt/virterror.h | 2 +
src/access/viraccessdriver.h | 5 +
src/access/viraccessdrivernop.c | 10 +
src/access/viraccessdriverpolkit.c | 21 +
src/access/viraccessdriverstack.c | 24 +
src/access/viraccessmanager.c | 15 +
src/access/viraccessmanager.h | 5 +
src/access/viraccessperm.c | 7 +-
src/access/viraccessperm.h | 39 ++
src/conf/Makefile.inc.am | 6 +
src/conf/domain_nwfilter.c | 125 ++++-
src/conf/domain_nwfilter.h | 13 -
src/conf/nwfilter_conf.c | 223 ++------
src/conf/nwfilter_conf.h | 68 +--
src/conf/nwfilter_params.h | 1 +
src/conf/virnwfilterbindingdef.c | 279 ++++++++++
src/conf/virnwfilterbindingdef.h | 65 +++
src/conf/virnwfilterbindingobj.c | 260 ++++++++++
src/conf/virnwfilterbindingobj.h | 60 +++
src/conf/virnwfilterbindingobjlist.c | 475 ++++++++++++++++++
src/conf/virnwfilterbindingobjlist.h | 66 +++
src/conf/virnwfilterobj.c | 4 +-
src/conf/virnwfilterobj.h | 4 +
src/datatypes.c | 67 +++
src/datatypes.h | 31 ++
src/driver-nwfilter.h | 30 ++
src/libvirt-nwfilter.c | 305 +++++++++++
src/libvirt_private.syms | 42 +-
src/libvirt_public.syms | 13 +
src/lxc/lxc_driver.c | 28 --
src/nwfilter/nwfilter_dhcpsnoop.c | 151 +++---
src/nwfilter/nwfilter_dhcpsnoop.h | 7 +-
src/nwfilter/nwfilter_driver.c | 211 ++++++--
src/nwfilter/nwfilter_gentech_driver.c | 307 +++++------
src/nwfilter/nwfilter_gentech_driver.h | 22 +-
src/nwfilter/nwfilter_learnipaddr.c | 98 ++--
src/nwfilter/nwfilter_learnipaddr.h | 7 +-
src/qemu/qemu_driver.c | 25 -
src/remote/remote_daemon_dispatch.c | 15 +
src/remote/remote_driver.c | 20 +
src/remote/remote_protocol.x | 90 +++-
src/remote_protocol-structs | 43 ++
src/rpc/gendispatch.pl | 15 +-
src/uml/uml_driver.c | 29 --
src/util/virerror.c | 12 +
src/util/virobject.c | 2 +-
tests/Makefile.am | 7 +
.../filter-vars.xml | 11 +
.../virnwfilterbindingxml2xmldata/simple.xml | 9 +
tests/virnwfilterbindingxml2xmltest.c | 113 +++++
tests/virschematest.c | 1 +
tools/virsh-completer.c | 45 ++
tools/virsh-completer.h | 4 +
tools/virsh-nwfilter.c | 318 ++++++++++++
tools/virsh-nwfilter.h | 8 +
60 files changed, 3247 insertions(+), 792 deletions(-)
create mode 100644 docs/schemas/nwfilter_params.rng
create mode 100644 docs/schemas/nwfilterbinding.rng
create mode 100644 src/conf/virnwfilterbindingdef.c
create mode 100644 src/conf/virnwfilterbindingdef.h
create mode 100644 src/conf/virnwfilterbindingobj.c
create mode 100644 src/conf/virnwfilterbindingobj.h
create mode 100644 src/conf/virnwfilterbindingobjlist.c
create mode 100644 src/conf/virnwfilterbindingobjlist.h
create mode 100644 tests/virnwfilterbindingxml2xmldata/filter-vars.xml
create mode 100644 tests/virnwfilterbindingxml2xmldata/simple.xml
create mode 100644 tests/virnwfilterbindingxml2xmltest.c
--
2.17.0
[libvirt] KVM Forum 2018: Call For Participation
by Paolo Bonzini
================================================================
KVM Forum 2018: Call For Participation
October 24-26, 2018 - Edinburgh International Conference Centre - Edinburgh, UK
(All submissions must be received before midnight June 14, 2018)
=================================================================
KVM Forum is an annual event that presents a rare opportunity
for developers and users to meet, discuss the state of Linux
virtualization technology, and plan for the challenges ahead.
We invite you to lead part of the discussion by submitting a speaking
proposal for KVM Forum 2018.
At this highly technical conference, developers driving innovation
in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
meet users who depend on KVM as part of their offerings, or to
power their data centers and clouds.
KVM Forum will include sessions on the state of the KVM
virtualization stack, planning for the future, and many
opportunities for attendees to collaborate. After more than ten
years of development in the Linux kernel, KVM continues to be a
critical part of the FOSS cloud infrastructure.
This year, KVM Forum is joining Open Source Summit in Edinburgh, UK. Selected
talks from KVM Forum will be presented on Wednesday October 24 to the full
audience of the Open Source Summit. Also, attendees of KVM Forum will have
access to all of the talks from Open Source Summit on Wednesday.
https://events.linuxfoundation.org/events/kvm-forum-2018/program/cfp/
Suggested topics:
* Scaling, latency optimizations, performance tuning, real-time guests
* Hardening and security
* New features
* Testing
KVM and the Linux kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features
QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* Graphics, desktop virtualization and virtual GPU
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.
Management and infrastructure
* Managing KVM: Libvirt, OpenStack, oVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
* Provisioning
===============
SUBMITTING YOUR PROPOSAL
===============
Abstracts due: June 14, 2018
Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes. Also include the proposal
type -- one of:
- technical talk
- end-user talk
Submit your proposal here: http://events.linuxfoundation.org/cfp
Please only use the categories "presentation" and "panel discussion".
You will receive a notification whether or not your presentation proposal
was accepted by August 10, 2018.
Speakers will receive a complimentary pass for the event. In the case
that your submission has multiple presenters, only the primary speaker
for a proposal will receive a complimentary event pass. For panel
discussions, all panelists will receive a complimentary event pass.
TECHNICAL TALKS
A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, focus on
work that needs to be done, difficulties that haven't yet been solved,
and on decisions that other developers should be aware of. Summarizing
recent developments is okay but it should not be more than a small
portion of the overall talk.
END-USER TALKS
One of the big challenges as developers is to know what, where and how
people actually use our software. We will reserve a few slots for end
users talking about their deployment challenges and achievements.
If you are using KVM in production you are encouraged to submit a speaking
proposal. Simply mark it as an end-user talk. As an end user, this is a
unique opportunity to get your input to developers.
HANDS-ON / BOF SESSIONS
We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups.
These sessions will be announced during the event. If you are interested
in organizing such a session, please add it to the list at
http://www.linux-kvm.org/page/KVM_Forum_2018_BOF
Let people who you think might be interested know about your BOF, and encourage
them to add their names to the wiki page as well. Please try to
add your ideas to the list before KVM Forum starts.
PANEL DISCUSSIONS
If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract. We will request full
biographies if a panel is accepted.
===============
HOTEL / TRAVEL
===============
This year's event will take place at the Edinburgh International Conference Centre.
For information about discounted hotel room rate for conference attendees
at the nearby Sheraton Grand Hotel & Spa, Edinburgh, please visit
https://events.linuxfoundation.org/events/kvm-forum-2018/attend/venue-tra...
===============
IMPORTANT DATES
===============
Submission deadline: June 14, 2018
Notification: August 10, 2018
Schedule announced: August 16, 2018
Event dates: October 24-26, 2018
Thank you for your interest in KVM. We're looking forward to your
submissions and seeing you at the KVM Forum 2018 in October!
-your KVM Forum 2018 Program Committee
Please contact us with any questions or comments at
kvm-forum-2018-pc@redhat.com