[libvirt-users] virsh list not working with xen 4
by Rogério Vinhal Nunes
Hi, I'm having some trouble to get libvirt to show the correct power state
of my virtual machines. I'm using Ubuntu 10.04 + Xen 4.1.1 + libvirt 0.8.8.
virsh list --all only shows turned off machines registered in xend. If I
turn them on, they just "disappear", and when I start machines directly from
XML, they don't appear at all.
Libvirt is correctly connecting to xen as I can use the other commands fine,
just the list option doesn't seem to work at all. What can I do to change
that?
# virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: Xen 3.0.1
Running hypervisor: Xen 4.1.0
12 years, 2 months
[libvirt-users] qemu-kvm fails on RHEL6
by sumit sengupta
Hi,
When I'm trying to run the qemu-kvm command on RHEL6 (Linux kernel 2.6.32) I get the following errors, which I think are related to the tap devices in my setup. Any idea why that is?
bash$ LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000027 -uuid a93aeed9-15f7-4ded-b6b3-34c8d2c101a8 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000027.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /home/sumitsen/openstack/nova/instances/instance-00000027/kernel -initrd /home/sumitsen/openstack/nova/instances/instance-00000027/ramdisk -append root=/dev/vda console=ttyS0 -drive file=/home/sumitsen/openstack/nova/instances/instance-00000027/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:15:84:3e,bus=pci.0,addr=0x3 -chardev
file,id=charserial0,path=/home/sumitsen/openstack/nova/instances/instance-00000027/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/1
qemu-kvm: -netdev tap,fd=26,id=hostnet0: TUNGETIFF ioctl() failed: Bad file descriptor
TUNSETOFFLOAD ioctl() failed: Bad file descriptor
qemu-kvm: -append root=/dev/vda: could not open disk image console=ttyS0: No such file or directory
[sumitsen@sorrygate-dr ~]$ rpm -qa qemu-kvm
qemu-kvm-0.12.1.2-2.209.el6.x86_64
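Two observations on the errors above, both assumptions on my part. The TUNGETIFF "Bad file descriptor" error is expected when replaying a libvirt-generated command line by hand: fd=26 refers to a tap device file descriptor that libvirt had open when it spawned qemu, which doesn't exist in a fresh shell. The "could not open disk image console=ttyS0" error looks like the -append value being word-split by the shell: without quotes, console=ttyS0 reaches qemu-kvm as a separate positional argument, which it then tries to open as a disk image. The splitting itself is easy to demonstrate with set --:

```shell
# Unquoted: the kernel command line splits into two words,
# so "console=ttyS0" becomes a stray positional argument.
set -- -append root=/dev/vda console=ttyS0
echo "$#"   # 3 arguments

# Quoted: qemu-kvm receives the whole kernel command line
# as a single -append value.
set -- -append "root=/dev/vda console=ttyS0"
echo "$#"   # 2 arguments
```

If that is the cause, quoting the -append argument should make that particular error go away.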
Let me know if you need any other info.
Thanks,
Sumit
12 years, 4 months
[libvirt-users] converting save/dump output into physical memory image
by Andrew Tappert
A lot of people in the security community, myself included, are
interested in memory forensics these days. Virtualization is a natural
fit with memory forensics because it allows one to get access to a
guest's memory without having to introduce any extra software into the
guest or otherwise interfere with it. Incident responders are
particularly interested in getting memory dumps from systems they're
investigating.
Virsh has "save" and "dump" commands for storing the state of a guest to
a file on disk, but memory of KVM guests doesn't get saved in the
"standard" input format for memory forensics tools, which is a raw
physical memory image. (This is what you'd get via the classical "dd
/dev/mem" approach or the contemporary equivalent using the crash
driver; and VMware Server and Workstation produce .vmem files, which are
such raw physical memory images, when a guest is paused or snapshotted.)
In order to analyze the memory of Libvirt/KVM guests with my Linux
memory forensics software, Second Look, I've created a tool for
converting Libvirt-QEMU-save files (output of virsh save command) or
QEMU-savevm files (output of virsh dump command) to raw physical memory
images.
I've got a basic working capability, though I'm still tracking down some
problems with a guest allocated 8GB of RAM--not all the memory seems to
be present in the save or dump file. And I haven't tested very extensively
yet; version support is limited to what I myself am currently running, etc.
I'd like to know if this is a capability that others are interested in.
Is this something that would be of interest to the Libvirt project if I
were to contribute the code, or to the KVM project, or do you think it
best exists as a separate project?
I've also got a proof-of-concept tool for converting hibernate images to
raw physical memory images. Perhaps a collection of tools for
converting various memory dump formats would be a good project. Anyone
else interested in this kind of stuff? As an author of commercial
memory forensics software I've got a vested interest in availability of
good memory acquisition capabilities. But there are a number of people
working on FOSS Linux memory analysis tools, too...
Andrew
12 years, 5 months
Re: [libvirt-users] Snapshot system: really confusing.
by NoxDaFox
Hello,
thank you for your fast reply!
To aid comprehension, I'll explain a bit of what I'm trying to
achieve, so the picture will be clearer.
Basically, it's a platform for debugging; what I need is access to the
memory dump images and to the filesystem ones; I don't need any
reverting support.
The common lifecycle of a domain is:
a) Given a backing store disk (qcow2) I create a new disk image: my_image.qcow2
b) I start this image and play around based on a persistent domain.
c) I take several pictures (snapshots, dumps) of the VE state: I need
at least readable pictures of the FileSystem and the RAM.
d) I shutdown the guest.
e) I extract valuable information from the pictures. This is the
critical phase where all my doubts about the libvirt platform come from.
f) I store that information.
g) I wipe out everything else.
h) Ready for a new test, return to point a).
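The lifecycle above can be sketched as a small shell script: qemu-img create -b builds the per-test overlay on top of the backing store, and wiping the overlay resets everything for the next run. The paths and domain name are hypothetical, and the echo prefix makes this a dry run rather than a working harness:

```shell
BASE=/var/lib/libvirt/images/base.qcow2        # hypothetical backing store
OVERLAY=/var/lib/libvirt/images/my_image.qcow2 # per-test disk image

run_test() {
    # a) new qcow2 image backed by the (read-only) base
    echo qemu-img create -f qcow2 -b "$BASE" "$OVERLAY"
    # b) start the persistent domain that points at $OVERLAY
    echo virsh start my_domain
    # c)-f) take the snapshots/dumps and harvest them (elided)
    # g)-h) wipe the overlay; the base is untouched, ready for the next test
    echo rm -f "$OVERLAY"
}
run_test
```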
libvirt is a great platform! And the documentation is not bad at all. The
only nebulous part is the snapshot part (I guessed it was under heavy
development anyway).
Now I'll answer your points.
> > Greetings,
>
> Hello,
>
> >
> > I am developing a platform for test and debug use.
> >
> > The typical scenario I want to realize is the following:
> >
> > I run a .qcow2 image via QEMU-KVM, the running OS is not relevant.
> > I want to take several snapshots of both Disk and RAM status; same file
> > or separate files, it doesn't matter.
> > I just need a way to have consistent information about them (better with
> > XML description) and the data as well.
> > I want to store these snapshots on my storage devices to use them whenever
> > and wherever I want.
>
> Hopefully libvirt can supply what you are interested in; and be aware
> that it is a work in progress (that is, unreleased qemu 1.1 and libvirt
> 0.9.12 will be adding yet more features to the picture, in the form of
> live block copy).
I was aware of the situation; I saw an RFC of yours that is really
interesting, as it introduces snapshots of single storage volumes:
virStorageVolSnapshotPtr.
Really interesting from my point of view.
>
>
> >
> > Is it possible to store a single snapshot providing both the memory and
> > disks state in a file (maybe a .qcow2 file)?
>
> Single file: no. Single snapshot: yes. The virDomainSnapshotCreateXML
> API (exposed by virsh snapshot-create[-as]) is able to create a single
> XML representation that libvirt uses to track the multiple files that
> make up a snapshot including both memory and disk state.
This would be enough, as long as I'm able to read that information.
>
> > Is there any way to get a unique interface which handles my snapshots?
> >
> > I used to use virDomainSnapshotCreateXML(), defining the destination
> > file in the XML with <disk> fields.
>
> Right now, you have a choice:
>
> 1. Don't use the DISK_ONLY flag. That means that you can't use <disk>
> in the snapshot XML. The snapshot then requires qcow2 files, the VM
> state is saved (it happens to be in the first qcow2 disk), and the disk
> state is saved via qcow2 internal snapshots.
That's good to know; I only learned yesterday, reading your mails on the
dev mailing list, where snapshots are stored. I had always looked for a
new file, unsuccessfully.
>
> 2. Use the DISK_ONLY flag. Then you can set up external snapshot file
> names (basically, another layer of qcow2 files, where your original file
> name is now the snapshot backing the new layer), and you get no VM state
> directly. Here is where the <disk> element of the snapshot comes into
> play. If you _also_ want to save VM state, you can use 'virsh save'
> (virDomainSaveFlags) to save just VM state; but then you have to
> coordinate things so that the disk snapshot and the VM state correspond
> to the same point in guest time by pausing the guest before taking the
> disk snapshot.
>
> > After updating libvirt it stopped working; I thought it was a bug, but
> > then I realized it was intentional.
> > The function complains that the <disk> parameter is not
> > accepted anymore.
>
> What versions of libvirt are you playing with? <disk> was unrecognized
> (and ignored) prior to 0.9.5; after that point, it can only be used with
> the DISK_ONLY flag, but then you have to take the VM state separately.
>
I guessed that the <disk> tag was just being ignored; I inherited the code
from a previous project and spent the first weeks struggling with
what wasn't working.
I was using libvirt 0.8.3-5 from Debian squeeze; I migrated to wheezy
to get access to the libguestfs features, and now I'm running libvirt
0.9.11.
> Someday, I'd like to submit more patches to allow <disk> to be mixed
> with live VM state, but I'm not there yet.
>
> > So I started reading the API documentation to figure out a solution, and
> > I fell into a completely nebulous world.
> >
> > From what I got:
> > - virDomainSnapshotCreateXML():
> > Depending on flags, it can take system checkpoints (really useful) and
> > disk snapshots.
> > System checkpoints: what I need, but I didn't find any way to retrieve the
> > storage file; I'm only able to get the snapshot pointer, which is quite
> > useless, as from it I can only print the XML description.
>
> The storage is in internal snapshots of your qcow2 file. Try 'qemu-img
> info /path/to/file' to see those internal snapshots. You can also use
>
> qemu-img convert -s snapname /path/to/image /path/to/output
Great! This is the information I needed: how to access those damned
snapshots in a readable way.
I still believe, though, that an interface provided by libvirt would be
really valuable.
I am trying to stay on libvirt as much as I can, to work at a higher
level of abstraction; this is really important for my architecture.
Using qemu directly is quite annoying, but if at the moment it's the
only solution, I'll go with that.
>
> as a way to extract that snapshot into a file that you can use
> independently (but only while your guest is not running); alas, qemu
> doesn't yet provide a way to extract this information from a running
> domain, nor have I yet had time to map this functionality into libvirt
> API. But I have requested qemu enhancements to allow it (perhaps by
> qemu 1.2), as well as have a vision of where I plan to take the libvirt
> API in the next year or so to make this more useful.
ATM I don't need "on the fly" extraction; everything is done after
turning off the guest. What I need is a way to capture its state whenever
I want; reading it is another matter.
But this is indeed something that may be greatly valuable in the future.
>
> > Disk snapshot: here, through the XML description, I can specify a target
> > file to write the information to. But I still need the memory status.
> > - virDomainSaveFlags():
> > No matter how I set the flags, it halts the guest; I need it to still be
> > running afterward, and I think that reverting and restarting the domain
> > is unaffordable for my scenarios.
>
> Yeah, that's another thing on my list, to add a flag to
> virDomainSaveFlags that will keep the domain running after the save is
> complete, and/or enhance virDomainSnapshotCreateXML to incorporate a
> domain save to an external file alongside disk snapshots at the same
> point in guest time. I'd also like to play with background migration
> (right now, virDomainSaveFlags does a foreground migrate-to-file, which
> pauses the guest up front; but a minimal downtime solution would use the
> live migration path which results in a larger output file but much less
> downtime).
>
> > - virDomainCoreDump():
> > Does what I need for the memory, but doesn't give any XML description and
> > doesn't supply any useful interface to handle it; I just get back an int
> > that represents the exit status of the routine.
>
> virDomainCoreDump() and virDomainSaveFlags() both do a migration to
> file, it's just that the core dump version isn't designed for reverting
> like domain save. And there have been proposals on list for making core
> dump management better (such as allowing you to get at a core dump file
> from a remote machine; right now, the core dump can only be stored on
> the same machine as the guest being dumped).
>
I think this is bad; why not merge these functions? From an API
point of view (I'm aware of the architectural difficulties in the
background) the current split is useless and confusing.
Better one virDomainSaveFlags() with several configurations than two
different functions.
This is a complex system-management API, so anyone who uses it will
be conscious of that; I don't think many flags and XML files will
be confusing as long as they're clearly documented.
But having multiple interfaces (i.e. functions/methods) that
basically do the same thing in different ways, with slightly different
results, is really confusing.
Better a single method configurable with XML files and flags than
two that aren't.
(This is my point of view as a developer.)
> > (other functions are really similar)
> >
> > The question is: why all this confusion?
>
> Different qemu capabilities ('savevm', 'migrate',
> 'blockdev-snapshot-sync'), different needs at the time each feature was
> first added, etc. Ultimately, virDomainSnapshotCreateXML is the most
> powerful interface, and I have plans to make it be a superset of
> virDomainSaveFlags(), but I'm not there yet.
It is indeed a powerful interface!
But again: why do those functions do similar things?
Isn't it better (still from an API point of view) to have:
- Two different interfaces that handle disk and state separately; if I
want to revert/migrate, I supply the results of both interfaces.
- One interface that provides revert/migrate capability, and one
interface that, via flags or XML, provides the separate components.
- Other combinations: these may also work, as long as they're clear.
Here we have:
virDomainSnapshotCreateXML():
Takes complete snapshots and disk snapshots, but no memory-only snapshots.
It does so while keeping the domain alive (paused). It stores the
information internally if flags=A, externally if flags=B.
virDomainSaveFlags():
No complete snapshots, no disk snapshots; takes memory snapshots.
It does so while stopping the domain. It stores the information externally.
From my point of view this could be done more clearly.
Separation of duties: many small functions that each do a single thing,
plus a global one that wraps everything into a complete package, would
be a good model.
>
> > I absolutely understand the problems that implementing multi-platform
> > snapshot management raises; but I think that, for an API, what is
> > implemented here completely confuses the developer.
>
> Is there anything I can do to help improve the documentation? I know
> that's an area where a picture can speak a thousand words; and as more
> features get added into the snapshot picture, it probably becomes more
> important to accurately display the various APIs and the advantages for
> using each.
What's basically missing is the data flow and representation; you don't
need to describe it from scratch (the data format is strongly bound to
qemu), but give an idea and provide references.
What makes documentation useful is how easily the reader can get at
the information.
If the snapshots rely on qemu's, just link qemu's documentation as well,
so that if I don't find enough clues I can go as deep as needed.
If help is needed I can contribute; I just need to know where to
look when I need something. The platforms I'm working on are really
time-consuming, but from my point of view this technology is really
important, as it will reduce the time needed to maintain them.
Thanks again for answering so fast!
NoxDaFox
PS: I forgot to CC the libvirt-users mailing list, sorry for the spam.
12 years, 6 months
[libvirt-users] Regarding persistence of VM's after live migration (virDomainMigrateToURI() problem)
by Coding Geek
Hello
I am working with 3 host machines each running xen with shared NFS storage.
I am working on automatic load balancing, measuring utilization with
xentop, for when one host is over-utilized and another is under-utilized.
I am facing a problem after migrating a VM. I am setting the flags
(1 | 8 | 16) in order to do a live migration, persist the VM on the
destination, and undefine the domain on the source. After the migration,
if I shut off the migrated VM on the destination host it does not
persist. I am using the migrateToURI() API for migration.
Please help me make the VM persist on the destination.
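For reference, the numeric flags above correspond to constants in libvirt's virDomainMigrateFlags enum (VIR_MIGRATE_LIVE = 1, VIR_MIGRATE_PERSIST_DEST = 8, VIR_MIGRATE_UNDEFINE_SOURCE = 16), so the combined value is 25; a quick sanity-check sketch:

```shell
# Flag values from libvirt's virDomainMigrateFlags enum
VIR_MIGRATE_LIVE=1
VIR_MIGRATE_PERSIST_DEST=8
VIR_MIGRATE_UNDEFINE_SOURCE=16

FLAGS=$(( VIR_MIGRATE_LIVE | VIR_MIGRATE_PERSIST_DEST | VIR_MIGRATE_UNDEFINE_SOURCE ))
echo "$FLAGS"   # 25
```

With PERSIST_DEST set, the migrated domain should end up defined on the destination; if it still vanishes after shutdown, it may be worth checking with 'virsh dumpxml --inactive' on the destination whether a persistent definition actually exists there.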
Thank you,
Tasvinder Singh
--
--
"To Iterate is Human, To Recurse is Divine"
12 years, 6 months
[libvirt-users] Console to RHEL6.1 container
by Sukadev Bhattiprolu
We are trying to create a RHEL6.1 container and are having trouble getting
a proper console to the container.
I have used lxc containers in the past, and lxc has a script (lxc-fedora)
to set up a basic Fedora container. Is there something similar, or docs,
for virtmgr?
We started out making a copy of the root fs and using that copy as the
root for the container. I made a couple of tweaks to the container rootfs:
1. ensured that /dev/ptmx is a symlink to pts/ptmx
2. Added 'newinstance' option to the devpts mount in fstab
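For reference, a devpts line with the 'newinstance' option, as described in tweak 2, might look like the following in the container's /etc/fstab; the gid/mode/ptmxmode options shown are the usual defaults and are my assumption, not taken from the attached XML:

```
devpts  /dev/pts  devpts  gid=5,mode=620,ptmxmode=0666,newinstance  0 0
```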
The container xml file is attached.
If the devpts entry in the container's fstab has the 'newinstance' option,
I get a brief "domain test01 started" message on stdout.
When I run 'virsh console test01', I only see a few messages
---
Setting hostname node03: [ OK ]
Setting up Logical Volume Management: No volume groups found
[ OK ]
Checking filesystems
[ OK ]
mount: can't find / in /etc/fstab or /etc/mtab
Mounting local filesystems: [ OK ]
Enabling local filesystem quotas: [ OK ]
Enabling /etc/fstab swaps: [ OK ]
---
and then a hang without a shell. At this point sshd and the other
services don't appear to be running.
If I remove the 'newinstance' entry in fstab and run 'virsh start test01',
the container starts up OK. I get all the "console" messages on whatever
my current /dev/pts/0 is!
At this point sshd and many other services are running and I can ssh in.
Even in this case, though, 'virsh console test01' shows the same _few_
messages as above and no login shell.
Back to the newinstance case, by putting some debug messages in the
container's /etc/rc.sysinit, I see that devpts is already mounted
with newinstance.
With lxc, in setup_pts(), we unmount devpts in the container and then
mount with newinstance. I wonder what that sequence is with virtmgr.
I double checked that /dev/ptmx in container is a symlink to pts/ptmx
and that CONFIG_DEVPTS_MULTIPLE_INSTANCES=y.
Appreciate any tips on how I can get access to the console of the RHEL6.1
container.
Suka
12 years, 6 months
[libvirt-users] Is there a way for telling libvirt to auto define an xml under /etc/libvirt/qemu?
by Raoul Scarazzini
Hi everybody,
as in the subject: is there a way to tell libvirt to auto-define
an XML under /etc/libvirt/qemu? I mean, sometimes /etc/libvirt/qemu
ends up shared between nodes, say over an NFS share. Since I can
create a new XML on one node and I want to see the virtual machine on
all the others, is there a way to automate this, or am I obliged to
launch virsh define <xml> on EACH node?
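In the absence of a built-in auto-define mechanism, one workaround is a small loop that runs the define against every node over a remote URI. The host names and XML path below are hypothetical, and the echo prefix makes it a dry run:

```shell
XML=/etc/libvirt/qemu/myvm.xml        # hypothetical path on the NFS share
for host in node01 node02 node03; do  # hypothetical node names
    echo virsh -c "qemu+ssh://$host/system" define "$XML"
done
```

Dropping the echo would issue the real virsh define calls, assuming SSH access to each node.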
Thank you so much,
--
Raoul Scarazzini
Solution Architect
MMUL: Niente è impossibile da realizzare, se lo pensi bene!
+39 3281776712
rasca(a)mmul.it
http://www.mmul.it
12 years, 7 months
[libvirt-users] Installing the python libvirt bindings in a virtualenv
by Daniel Gonzalez
Hi,
I have a virtualenv which I am using for production (I need python 2.7
and certain libraries which cannot be easily installed in the host
environment).
Now I am also trying to install libvirt, but I have not succeeded yet.
The problem that I have now looks quite difficult to solve. This is
the script I am using to install libvirt (bash script):
install_libvirt_() {
    local myvirtualenv="$1"
    local libvirt_tag=v0.9.9
    cd $TMP_DIR
    git clone git://libvirt.org/libvirt.git
    cd libvirt
    git checkout $libvirt_tag
    mkdir -p $VIRTUALENVS_DIR/$myvirtualenv/usr
    sudo apt-get install autopoint
    # configure: error: You must install the GnuTLS library in order to compile and run libvirt
    sudo apt-get install -y gnutls-bin gnutls-dev
    # configure: error: You must install device-mapper-devel/libdevmapper >= 1.0.0 to compile libvirt
    sudo apt-get install -y libdevmapper-dev libdevmapper
    # configure: error: You must install python-devel to build Python bindings
    sudo apt-get install -y python-all-dev
    ./autogen.sh --prefix=$VIRTUALENVS_DIR/$myvirtualenv/usr \
        --enable-compile-warnings=error
    make
    make install
}
But this is failing with the error message:
checking for python script directory... ${prefix}/lib/python2.7/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python2.7/site-packages
configure: error: You must install python-devel to build Python bindings
python-devel does not exist. I have used python-all-dev, but it has
not solved the problem.
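One thing that may be worth trying (an assumption on my part, not a verified fix): autoconf-based Python detection generally honors the PYTHON environment variable, so pointing it at the virtualenv's interpreter might let configure find the matching headers and site-packages. Sketched as a dry run with hypothetical paths matching the script above:

```shell
# Hypothetical layout; building CMD and echoing it makes this a dry run.
VIRTUALENVS_DIR=$HOME/virtualenvs
myvirtualenv=production
VENV="$VIRTUALENVS_DIR/$myvirtualenv"

CMD="PYTHON=$VENV/bin/python ./autogen.sh --prefix=$VENV/usr --enable-compile-warnings=error"
echo "$CMD"
```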
Has somebody succeeded in installing libvirt-python inside a virtualenv?
Thanks,
Daniel Gonzalez
12 years, 7 months
[libvirt-users] Snapshot system: really confusing.
by NoxDaFox
Greetings,
I am developing a platform for test and debug use.
The typical scenario I want to realize is the following:
I run a .qcow2 image via QEMU-KVM, the running OS is not relevant.
I want to take several snapshots of both Disk and RAM status; same file
or separate files, it doesn't matter.
I just need a way to have consistent information about them (better with
XML description) and the data as well.
I want to store these snapshots on my storage devices to use them whenever
and wherever I want.
Is it possible to store a single snapshot providing both the memory and
disks state in a file (maybe a .qcow2 file)?
Is there any way to get a unique interface which handles my snapshots?
I used to use virDomainSnapshotCreateXML(), defining the destination
file in the XML with <disk> fields.
After updating libvirt it stopped working; I thought it was a bug, but
then I realized it was intentional.
The function complains that the <disk> parameter is not
accepted anymore.
So I started reading the API documentation to figure out a solution, and
I fell into a completely nebulous world.
From what I got:
- virDomainSnapshotCreateXML():
Depending on flags, it can take system checkpoints (really useful) and
disk snapshots.
System checkpoints: what I need, but I didn't find any way to retrieve the
storage file; I'm only able to get the snapshot pointer, which is quite
useless, as from it I can only print the XML description.
Disk snapshot: here, through the XML description, I can specify a target
file to write the information to. But I still need the memory status.
- virDomainSaveFlags():
No matter how I set the flags, it halts the guest; I need it to still be
running afterward, and I think that reverting and restarting the domain
is unaffordable for my scenarios.
- virDomainCoreDump():
Does what I need for the memory, but doesn't give any XML description and
doesn't supply any useful interface to handle it; I just get back an int
that represents the exit status of the routine.
(other functions are really similar)
The question is: why all this confusion?
I absolutely understand the problems that implementing multi-platform
snapshot management raises; but I think that, for an API, what is
implemented here completely confuses the developer.
Regards,
NoxDaFox.
12 years, 7 months