Re: [libvirt-users] some problem with snapshot by libvirt
by xingxing gao
sure you can add the list back in,
2012/5/29 Eric Blake <eblake(a)redhat.com>:
> On 05/29/2012 09:03 AM, xingxing gao wrote:
>> to create snapshot ,I did step below:
>
> I'd really rather answer this in public; do I have your permission to
> add the list back in, or better yet, can you repost to the list?
>
> Also, the list tends to frown on top-posting.
>
>>> Did you mean for this to go to the list, so that others may chime in or
>>> benefit from the answers?
>>>
>
>
> --
> Eric Blake eblake(a)redhat.com +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
12 years, 5 months
[libvirt-users] create net if, but not connect
by Sergio Kvjato
Hello,
is there a way to create a network interface without connecting it to any bridge?
<interface type='ethernet'/> does not work, because it needs to run in root mode, not user mode.
Thanks.
12 years, 5 months
[libvirt-users] What features of kernel are required to support virDomainGetCPUStats?
by Zhihua Che
Hi,
Weeks ago, I developed my app under ubuntu-11.10 (kernel 3.0) with
libvirt-0.9.10. In my code, I used virDomainGetCPUStats to query the CPU
usage of domains, and it worked well.
However, today I ported my code to ubuntu-10.04.4 (kernel 2.6.32) with
libvirt-0.9.10, and virDomainGetCPUStats no longer works; it
complains "this function is not supported by the connection
driver: virDomainGetCPUStats".
I guess the kernel difference causes this problem, because the
libvirt version is the same on both operating systems. I wonder what kernel
features are required to support virDomainGetCPUStats? I thought
cgroups were a prerequisite. The following are the .config files of the two
kernels.
ubuntu-11.10
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP_ENABLED=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
ubuntu-10.04.4
CONFIG_CGROUP_NS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_MM_OWNER=y
As you can see, CPUSET and CPUACCT are enabled in both kernels. So
what other kernel features are required to support
virDomainGetCPUStats?
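For reference, this is roughly the kind of call that works on the newer setup
and fails on the older one; a minimal sketch using the Python bindings
(assuming a libvirt build new enough to expose getCPUStats there), with an
illustrative domain name:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("testvm")  # illustrative domain name

    # Ask for the aggregate CPU statistics of the whole domain.
    # Where the driver lacks support this raises libvirtError:
    # "this function is not supported by the connection driver".
    stats = dom.getCPUStats(True)
    print(stats[0])  # e.g. {'cpu_time': ..., 'system_time': ..., 'user_time': ...}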
harvey
12 years, 6 months
[libvirt-users] New Bugzilla components for upstream bug reporting
by Daniel P. Berrange
Historically we have told people to report all upstream bugs against
Product: Virtualization Tools
Component: libvirt
We now provide much more than just the core libvirt package though. Thus
we have created a large number of new components for reporting upstream
bugs against, one for each package we distribute:
Component: libvirt-appdev-guide - Libvirt application development guide
Component: libvirt-glib - Libvirt GLib, GConfig & GObject libraries
Component: libvirt-cim - Libvirt CIM provider
Component: libvirt-csharp - Libvirt C# language bindings
Component: libvirt-java - Libvirt Java language bindings
Component: libvirt-ocaml - Libvirt OCaml language bindings
Component: libvirt-perl - Libvirt Perl language bindings
Component: libvirt-php - Libvirt PHP language bindings
Component: libvirt-publican - Libvirt Publican documentation templates
Component: libvirt-sandbox - Libvirt application sandbox toolkit
Component: libvirt-snmp - Libvirt SNMP agent
Component: libvirt-tck - Libvirt TCK (Technology Compatibility Kit)
Component: libvirt-test-API - Libvirt test API
Component: libvirt-virshcmdref - Libvirt virsh command reference
Component: ruby-libvirt - Libvirt Ruby language bindings
The bug reporting form is here:
https://bugzilla.redhat.com/enter_bug.cgi?product=Virtualization%20Tools
NB, if you are using a binary package provided by your OS vendor, then you
should use their bug tracker for reporting bugs first. Feel free to also
report it to the libvirt upstream bug tracker at the same time though.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
12 years, 6 months
[libvirt-users] Cannot connect to the existing LXC containers (console)
by Sebastien Douche
Hi,
after 2 years of "pure" LXC usage, I'm now trying to administer our
LXC containers with libvirt. There is no problem with an LXC container I
create from scratch (lxc-create), but with our existing containers I see
strange behavior on the console:
vsonde43 login:
Debian GNU/Linux 5.0 vsonde43 console
vsonde43 login:
Debian GNU/Linux 5.0 vsonde43 tty1
vsonde43 login:
Debian GNU/Linux 5.0 vsonde43 console
vsonde43 login:
Debian GNU/Linux 5.0 vsonde43 tty1
vsonde43 login:
Debian GNU/Linux 5.0 vsonde43 console
vsonde43 login: root
Password: z
Login incorrect
vsonde43 login:
Login incorrect
vsonde43 login: root
Password: Password:
Login incorrect
vsonde43 login: root
INIT: Id "c2" respawning too fast: disabled for 5 minutes
INIT: Id "c3" respawning too fast: disabled for 5 minutes
INIT: Id "c4" respawning too fast: disabled for 5 minutes
INIT: Id "c2" respawning too fast: disabled for 5 minutes
INIT: Id "c3" respawning too fast: disabled for 5 minutes
INIT: Id "c4" respawning too fast: disabled for 5 minutes
Characters sometimes appear, and sometimes every key behaves like the Enter key.
Any ideas? I tried many XML configurations without success.
Thanks.
PS: libvirt 0.9.8 / lxc 0.7.5
--
Sebastien Douche <sdouche(a)gmail.com>
Twitter: @sdouche / G+: +sdouche
12 years, 6 months
[libvirt-users] Spawn new domains from a snapshot.
by NoxDaFox
Hello everybody,
I would like to be able to spawn several domains from a given snapshot.
Here's a possible scenario:
- I start from disk image A.qcow2.
- I make some changes to A and take different snapshots: 1 - 2 - 3
- While A is still running, I would like to run domains B and C from snapshot 2
I don't want to revert a domain; I want to create new ones. The
original should still be running, but it may be stopped if necessary.
The idea I had was to create a qcow2 image through qemu-img (let's
call it Z.qcow2); since I'm using copy-on-write for performance, I'd then
need to commit the changes contained in Z back into A, again using qemu-img.
Is there a better way? I would love to do it through libvirt,
maybe by specifying the backing store path of the whole disk in the config
file and then giving the file containing the deltas as the source file,
roughly as sketched below.
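For what it's worth, a minimal sketch of the overlay idea itself, using the
Python bindings plus qemu-img; the paths, the domain name B and the minimal
XML are illustrative, and this does not address picking a particular snapshot
as the starting point:

    import subprocess
    import libvirt

    BASE = "/var/lib/libvirt/images/A.qcow2"          # illustrative paths
    OVERLAY = "/var/lib/libvirt/images/B-delta.qcow2"

    # Create a copy-on-write overlay whose backing file is the base image
    # (newer qemu-img versions also want "-F qcow2" to state the backing format).
    # Note: basing an overlay on an image that is still being written to by a
    # running guest is unsafe.
    subprocess.check_call(["qemu-img", "create", "-f", "qcow2",
                           "-b", BASE, OVERLAY])

    # Define a new domain whose disk source is the overlay; qemu follows the
    # backing chain recorded in the qcow2 metadata back to A.qcow2.
    domain_xml = """
    <domain type='kvm'>
      <name>B</name>
      <memory>1048576</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='%s'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """ % OVERLAY

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(domain_xml)
    dom.create()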
This could be a good approach for a cloud-based system, as an
incredible amount of time would be saved through it.
Imagine moving only the qcow2 file containing the deltas over the network,
and giving the base image to each node so that it can use it to
start its own domain.
It would also be possible to store the heavy base image on a single
node, saving storage space.
Lots of CPU cycles would be saved when new image versions must be deployed
(a typical case: a Windows update to propagate across the node network).
NoxDaFox
12 years, 6 months
[libvirt-users] How to specify the libvirtd in connect call
by Zhihua Che
Hi,
Here is my situation: I installed libvirt in a non-default directory
and start libvirtd with sudo. While I can connect virsh to libvirtd
using "virsh --connect qemu:///system", I fail to connect my app using
virConnectOpen("qemu:///system"), which complains "unable to locate
libvirtd daemon in $PATH". I then tried adding the path of libvirtd
to the PATH environment variable; unfortunately, that doesn't work either.
I read through the API reference and found no place to specify this
path explicitly.
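If it helps, a minimal sketch of the two forms of the connect call, shown here
with the Python bindings; the socket path is purely illustrative, and whether
pointing at the socket directly avoids the $PATH lookup depends on the libvirt
version and on why the daemon cannot be located:

    import libvirt

    # Plain URI: if the client cannot reach the daemon it may try to
    # auto-spawn libvirtd, which appears to be where the $PATH lookup
    # in the error message comes from.
    conn = libvirt.open("qemu:///system")

    # Explicit socket: tell the remote driver which UNIX socket the
    # already-running libvirtd is listening on (path is illustrative).
    conn = libvirt.open(
        "qemu+unix:///system?socket=/opt/libvirt/var/run/libvirt/libvirt-sock")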
Thanks for any tip.
Harvey
12 years, 6 months
[libvirt-users] monitor socket did not show up.: No such file or directory
by vipul borikar
I am using the Nimbus cloud middleware, which in turn uses the Python libvirt
bindings to start VMs.
The hypervisor is KVM.
# virsh -c qemu:///system version
Compiled against library: libvir 0.8.7
Using library: libvir 0.8.7
Using API: QEMU 0.8.7
Running hypervisor: QEMU 0.12.1
Sometimes I get this error in the Nimbus logs:
UnexpectedError: Problem creating the VM: monitor socket did not show up.:
No such file or directory
I googled it, but was not able to find anything about this particular error.
I checked /var/log/libvirt/qemu/wrksp-427.log:
2012-05-23 10:09:13.494: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin HOME=/root USER=root
LOGNAME=root QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.1.0
-enable-kvm -m 2024 -smp 2,sockets=2,cores=1,threads=1 -name wrksp-427
-uuid e8c5c171-392c-02e4-b43c-b0b4279e55c5 -nodefconfig -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/wrksp-427.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -boot c
-drive file=/VM/Nimbus/secureimages/wrksp-427/param-vm.cdacb.in/cloud/5f4f0dcd//RHEL5.5_64bit_MPI,if=none,id=drive-ide0-0-0,format=raw
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=24,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=a2:aa:bb:2e:d6:58,bus=pci.0,addr=0x3
-usb -vnc 127.0.0.1:22 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
10:09:13.512: 14116: info : libvirt version: 0.8.7, package: 18.el6 (Red
Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2011-04-18-10:28:30,
x86-008.build.bos.redhat.com)
10:09:13.512: 14116: debug : virCgroupNew:566 : New group
/libvirt/qemu/wrksp-427
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
0:cpu at /cgroup/cpu in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
1:cpuacct at /cgroup/cpuacct in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
2:cpuset at /cgroup/cpuset in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
3:memory at /cgroup/memory in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
4:devices at /cgroup/devices in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
5:freezer at /cgroup/freezer in
10:09:13.513: 14116: debug : virCgroupDetect:245 : Detected mount/mapping
6:blkio at /cgroup/blkio in
10:09:13.513: 14116: debug : virCgroupMakeGroup:497 : Make group
/libvirt/qemu/wrksp-427
10:09:13.513: 14116: debug : virCgroupMakeGroup:509 : Make controller
/cgroup/cpu/libvirt/qemu/wrksp-427/
10:09:13.513: 14116: debug : virCgroupMakeGroup:509 : Make controller
/cgroup/cpuacct/libvirt/qemu/wrksp-427/
10:09:13.514: 14116: debug : virCgroupMakeGroup:509 : Make controller
/cgroup/cpuset/libvirt/qemu/wrksp-427/
10:09:13.514: 14116: debug : virCgroupMakeGroup:509 : Make controller
/cgroup/memory/libvirt/qemu/wrksp-427/
10:09:13.514: 14116: debug : virCgroupSetValueStr:290 : Set value
'/cgroup/cpu/libvirt/qemu/wrksp-427/tasks' to '14116'
10:09:13.714: 14116: debug : virCgroupSetValueStr:290 : Set value
'/cgroup/cpuacct/libvirt/qemu/wrksp-427/tasks' to '14116'
10:09:14.467: 14116: debug : virCgroupSetValueStr:290 : Set value
'/cgroup/cpuset/libvirt/qemu/wrksp-427/tasks' to '14116'
10:09:15.467: 14116: debug : virCgroupSetValueStr:290 : Set value
'/cgroup/memory/libvirt/qemu/wrksp-427/tasks' to '14116'
2012-05-23 10:09:16.537: shutting down
I see nothing suspicious in the log file.
Any ideas?
--
Thanks
Vipul Borikar
"Our task must be to free ourselves...by widening our circle of compassion
to embrace all living creatures and the whole of nature and its beauty."
12 years, 6 months
Re: [libvirt-users] Snapshot system: really confusing.
by NoxDaFox
Hello,
thank you for your fast reply!
To aid comprehension, I'll explain a bit of what I am trying to
build, so the picture will be clearer.
Basically it's a platform for debugging; what I need is access to the
memory dump image and to the filesystem ones. I don't need any reverting
support.
The common lifecycle of a domain is:
a) Given a backing store disk (qcow2), I create a new disk image: my_image.qcow2
b) I start this image and play around with it as a persistent domain.
c) I take several pictures (snapshots, dumps) of the VE state: I need
at least readable pictures of the filesystem and the RAM (see the sketch after this list).
d) I shut down the guest.
e) I extract valuable information from the pictures. This is the
critical phase, and where all my doubts about the libvirt platform come from.
f) I store that information.
g) I wipe out everything else.
h) Ready for a new test, return to point a).
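As a rough illustration of steps b)-d), assuming the simplest variant where
the checkpoint lives inside the qcow2 image and the memory is additionally
dumped to a file; the domain name and paths are assumptions, not a tested
recipe:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("my_image")          # b) persistent, running domain

    # c) full system checkpoint (disk + RAM state kept inside the qcow2 image)
    dom.snapshotCreateXML(
        "<domainsnapshot><name>test-1</name></domainsnapshot>", 0)

    # c) additionally, a raw memory dump for offline analysis
    dom.coreDump("/var/tmp/my_image-test-1.core", 0)

    # d) shut down the guest once the pictures are taken
    dom.shutdown()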
libvirt is a great platform! And the documentation is not bad at all. The
only nebulous part is the snapshot area (I guessed it was under heavy
development anyway).
Now I'll answer your points.
> > Greetings,
>
> Hello,
>
> >
> > I am developing a platform for test and debug use.
> >
> > The typical scenario I want to realize is the following:
> >
> > I run a .qcow2 image via QEMU-KVM, the running OS is not relevant.
> > I want to take several snapshots of both Disk and RAM status; same file,
> > separate files doesn't matter.
> > I just need a way to have consistent information about them (better with
> > XML description) and the data as well.
> > I want to store these snapshots on my storage devices to use them whenever
> > and wherever I want.
>
> Hopefully libvirt can supply what you are interested in; and be aware
> that it is a work in progress (that is, unreleased qemu 1.1 and libvirt
> 0.9.12 will be adding yet more features to the picture, in the form of
> live block copy).
I was aware of the situation; I saw an RFC of yours that is really interesting,
as it introduces snapshots of single storage volumes:
virStorageVolSnapshotPtr.
Really interesting from my point of view.
>
>
> >
> > Is it possible to store a single snapshot providing both the memory and
> > disks state in a file (maybe a .qcow2 file)?
>
> Single file: no. Single snapshot: yes. The virDomainSnapshotCreateXML
> API (exposed by virsh snapshot-create[-as]) is able to create a single
> XML representation that libvirt uses to track the multiple files that
> make up a snapshot including both memory and disk state.
This would be enough, as long as I'm able to read that information.
>
> > Is there any way to get a unique interface which handles my snapshots?
> >
> > I was used to use the virDomainSnapshotCreateXML() defining the destination
> > file in the XML with <disk> fields.
>
> Right now, you have a choice:
>
> 1. Don't use the DISK_ONLY flag. That means that you can't use <disk>
> in the snapshot XML. The snapshot then requires qcow2 files, the VM
> state is saved (it happens to be in the first qcow2 disk), and the disk
> state is saved via qcow2 internal snapshots.
That's good to know. I only figured out yesterday, reading your mails on the
dev mailing list, where snapshots were stored; I had always looked
for a new file, unsuccessfully.
>
> 2. Use the DISK_ONLY flag. Then you can set up external snapshot file
> names (basically, another layer of qcow2 files, where your original file
> name is now the snapshot backing the new layer), and you get no VM state
> directly. Here is where the <disk> element of the snapshot comes into
> play. If you _also_ want to save VM state, you can use 'virsh save'
> (virDomainSaveFlags) to save just VM state; but then you have to
> coordinate things so that the disk snapshot and the VM state correspond
> to the same point in guest time by pausing the guest before taking the
> disk snapshot.
>
> > After updating libvirt it was not working anymore, I thought was a bug but
> > then I realized it was intentional.
> > The function complains about the fact that the <disk> parameter is not
> > accepted anymore.
>
> What versions of libvirt are you playing with? <disk> was unrecognized
> (and ignored) prior to 0.9.5; after that point, it can only be used with
> the DISK_ONLY flag, but then you have to take the VM state separately.
>
I guessed that the <disk> tag was just being ignored. I inherited the code
from a previous project and spent the first weeks struggling with
what was not working.
I was using libvirt 0.8.3-5 from Debian squeeze; I migrated to wheezy
to be able to access the libguestfs features, and now I'm running libvirt
0.9.11.
> Someday, I'd like to submit more patches to allow <disk> to be mixed
> with live VM state, but I'm not there yet.
>
> > So I started guessing how to solve reading the API documentation and I fall
> > in a completely nebulous world.
> >
> > For what I got:
> > - virDomainSnapshotCreateXML():
> > According to flags can take system checkpoints (really useful) and disks
> > snapshots.
> > System checkpoints: What I need but I didn't find any way to retrieve the
> > storage file; I'm only able to get the snapshot pointer, quite useless as
> > from its pointer I can only print the XML description.
>
> The storage is in internal snapshots of your qcow2 file. Try 'qemu-img
> info /path/to/file' to see those internal snapshots. You can also use
>
> qemu-img convert -s snapname /path/to/image /path/to/output
Great! This is the information I needed: how to access those damned
snapshots in a readable way.
I still believe, anyway, that an interface provided by libvirt would be
really valuable.
I am trying to stay on libvirt as much as I can, to work at a higher
level of abstraction; this is really important for my architecture.
Using qemu directly is quite annoying, but if it's the only
solution at the moment, I'll move to that.
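For the record, the kind of wrapper this boils down to on my side; a hedged
sketch, with the snapshot name and paths purely illustrative, and the guest
shut off as noted above:

    import subprocess

    # Extract the internal snapshot "test-1" from the qcow2 image into a
    # standalone image that can be inspected offline (e.g. with libguestfs).
    subprocess.check_call([
        "qemu-img", "convert", "-f", "qcow2", "-O", "qcow2",
        "-s", "test-1",
        "/var/lib/libvirt/images/my_image.qcow2",
        "/var/tmp/my_image-test-1-extracted.qcow2",
    ])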
>
> as a way to extract that snapshot into a file that you can use
> independently (but only while your guest is not running); alas, qemu
> doesn't yet provide a way to extract this information from a running
> domain, nor have I yet had time to map this functionality into libvirt
> API. But I have requested qemu enhancements to allow it (perhaps by
> qemu 1.2), as well as have a vision of where I plan to take the libvirt
> API in the next year or so to make this more useful.
At the moment I don't need "on the fly" extraction; everything is done after
turning off the guest. What I need is a way to get its state whenever
I want; reading it is another matter.
But indeed this is something that may be greatly valuable in the future.
>
> > Disk snapshot: here through the XML description I can specify a target file
> > where to write the information. But I still need the memory status.
> > - virDomainSaveFlags():
> > No matter how I set the flags, it halts the guest; I need it still running
> > afterward and I think that reverting the domain and restarting for my
> > scenarios is unaffordable.
>
> Yeah, that's another thing on my list, to add a flag to
> virDomainSaveFlags that will keep the domain running after the save is
> complete, and/or enhance virDomainSnapshotCreateXML to incorporate a
> domain save to an external file alongside disk snapshots at the same
> point in guest time. I'd also like to play with background migration
> (right now, virDomainSaveFlags does a foreground migrate-to-file, which
> pauses the guest up front; but a minimal downtime solution would use the
> live migration path which results in a larger output file but much less
> downtime).
>
> > - virDomainCoreDump():
> > Does what I need for the memory, but doesn't give any XML description and
> > doesn't supply any useful interface to handle it, I just get back an int
> > that represent the exit status of the routine.
>
> virDomainCoreDump() and virDomainSaveFlags() both do a migration to
> file, it's just that the core dump version isn't designed for reverting
> like domain save. And there have been proposals on list for making core
> dump management better (such as allowing you to get at a core dump file
> from a remote machine; right now, the core dump can only be stored on
> the same machine as the guest being dumped).
>
I think this is unfortunate; why not merge these functions? From an API
point of view (I'm aware of the architectural difficulties in the
background) this is useless and confusing.
Better one virDomainSaveFlags() with several configurations than two
different functions.
This is a complex system-management API, so anyone who uses it will
be conscious of that; I don't think that many flags and XML files will
be confusing as long as they're clearly documented.
But having multiple interfaces (i.e. functions/methods) that
basically do the same thing in different ways with slightly different
results is really confusing.
Better to have a single method configurable with XML files and flags than
two that aren't.
(This is my point of view as a developer.)
> > (other functions really similar)
> >
> > The question is: why all this confusion?
>
> Different qemu capabilities ('savevm', 'migrate',
> 'blockdev-snapshot-sync'), different needs at the time each feature was
> first added, etc. Ultimately, virDomainSnapshotCreateXML is the most
> powerful interface, and I have plans to make it be a superset of
> virDomainSaveFlags(), but I'm not there yet.
It is indeed a powerful interface!
But again: why do those functions do such similar things?
Wouldn't it be better (again from an API point of view) to have one of:
- Two different interfaces that handle disk and state separately; if I
want to revert/migrate, I supply the results of both interfaces.
- One interface that gives revert/migrate capability, and one interface
that, via flags or XML, gives the separate components.
- Other combinations: these could still work as long as they're clear.
Here we have:
virDomainSnapshotCreateXML():
Takes complete snapshots and disk snapshots, but no memory-only snapshots.
It does so while keeping the domain alive (paused). It stores the information
internally with one set of flags, externally with another.
virDomainSaveFlags():
No complete snapshots, no disk snapshots; it takes memory snapshots.
It does so by stopping the domain, and it stores the information externally.
From my point of view this could be done in a clearer way.
Separation of duties is a good example: many small functions that each do a
single thing, plus a global one that wraps everything into a complete
package.
>
> > I absolutely understand the problematic that realizing a multiplatform
> > snapshots management raises; but I think that for an API purpose what is
> > implemented here is completely confusing the developer.
>
> Is there anything I can do to help improve the documentation? I know
> that's an area where a picture can speak a thousand words; and as more
> features get added into the snapshot picture, it probably becomes more
> important to accurately display the various APIs and the advantages for
> using each.
What's basically missing is the data flow and representation; you don't
need to describe it from scratch (the data format is strongly bound to qemu),
but give an idea and provide references.
What makes documentation useful is how easily the reader can get at the
information; it must be easy!
If the snapshots rely on qemu's, just link to its documentation as well,
so that if I don't find enough clues I can dig as deep as needed.
If help is needed I can contribute; I just need to know where to look
when I need something. The platforms I'm working on are really
time-consuming, but from my point of view this technology is really important,
as it will reduce the time needed to maintain them.
Thanks again for answering so fast!
NoxDaFox
PS: I forgot to CC the libvirt-users mailing list, sorry for the spam.
12 years, 6 months