[libvirt-users] [RFC] per-device metadata
by Francesco Romani
Hi,
Currently libvirt supports metadata in the domain XML. This is very
convenient for data related to the VM as a whole, but it is a little
awkward for devices. Say I want to attach extra data (for example, a
specific port of a virtual switch) to a device (say, a NIC). Today I can
store that data in the metadata section, but I need some kind of mapping
to correlate that piece of information with the specific device.
I can use the device alias, but that is not available when the device is
first created. Things also get more complex with hotplug/hotunplug,
because I need to update both the device and the metadata; if either
step fails, the entire operation must be considered failed.
It would be nice to be able to attach metadata to the device, and this
is what I'm asking for/proposing in this mail.
Would it be possible in a future libvirt release?
If this is not possible, what's the best way to do the aforementioned
mapping? If it's the alias (or the device address), how can I be sure
that I'm addressing (no pun intended) the right device when I don't have
those yet (e.g. a newly hotplugged device, or the first time the VM is
created)?
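For reference, this is roughly what the alias-based workaround looks
like today with the Python bindings; the namespace URI, element names
and domain name below are placeholders made up for the example:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")  # placeholder domain name

# Application-private data lives in the domain <metadata> element under
# a custom namespace; the device is identified only by its alias.
NS_URI = "http://example.org/vswitch/1.0"  # made-up namespace URI
xml = "<devices><nic alias='net0' switch-port='42'/></devices>"

# On a running, persistent domain, update both the live and config copies.
dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml,
                "vswitch", NS_URI,
                libvirt.VIR_DOMAIN_AFFECT_LIVE |
                libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# Reading it back later:
print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NS_URI, 0))

This is exactly the part I'd like to avoid: keeping that alias-keyed
blob in sync with the actual device list by hand.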
Thanks,
--
Francesco Romani
Red Hat Engineering Virtualization R & D
IRC: fromani
[libvirt-users] Get PID of a domain's QEMU instance from its domain ID
by Thibaut SAUTEREAU
Hello,
I cannot find a way to retrieve the PIDs of QEMU instances from libvirt
domain IDs (I'm using the libvirt C API). I realize it sounds like a bad
idea (and I know the PIDs are deliberately not exposed, as I gathered
from the source code and from your IRC channel), but I need them to use
the perf_event_open syscall in order to gather statistics on my QEMU/KVM
guests. I also know libvirt now supports some perf events, but only a
few, and I need more. I could submit patches to add them, and I will
definitely consider that, but in the meantime...
What would be the best way to get those PIDs? I tried the domain XML,
but the PID is not exposed there either. I also took a look at the QEMU
Machine Protocol (QMP). Now I'm going to walk /proc and match on guest
names, but that is not very elegant.
Any ideas?
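To make it concrete, the /proc walk I have in mind is roughly the
following (Python here just to sketch the idea; matching on the QEMU
-name argument is a heuristic, not a supported interface):

import os

def qemu_pid_for_guest(name):
    # Scan /proc for a QEMU process whose -name argument matches the
    # libvirt domain name. Needs enough privilege to read the cmdline.
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % entry, "rb") as f:
                argv = f.read().split(b"\0")
        except (IOError, OSError):
            continue  # process exited or permission denied
        if not argv or b"qemu" not in argv[0]:
            continue
        for i, arg in enumerate(argv):
            if arg == b"-name" and i + 1 < len(argv):
                value = argv[i + 1].decode(errors="replace")
                # newer QEMU uses "-name guest=NAME,debug-threads=on"
                value = value.split(",")[0]
                if value.startswith("guest="):
                    value = value[len("guest="):]
                if value == name:
                    return int(entry)
    return None

print(qemu_pid_for_guest("my-guest"))  # placeholder guest name

Another option might be the pidfile libvirt appears to write under
/var/run/libvirt/qemu/<name>.pid, but I don't know how stable that
location is across versions and distributions.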
Thanks,
Thibaut S.
[libvirt-users] VIR_ERR_OPERATION_INVALID from virDomainDestroyFlags call
by Milan Zamazal
Hi, we experienced a strange, non-reproducible error after a successful
migration to another host. When we called virDomainDestroyFlags with the
VIR_DOMAIN_DESTROY_GRACEFUL flag on the source host after the migration,
we got a VIR_ERR_OPERATION_INVALID (code 55) error, and the same for
repeated virDomainDestroyFlags calls. Normally, we would expect either
success or a VIR_ERR_NO_DOMAIN error. `virsh list' didn't show the VM.
Can anybody please explain to us when this can happen and what the error
means in this context? When we have good reasons to believe that the VM
is down (e.g. after a migration call successfully finishes) and we
receive such an error from virDomainDestroyFlags, is it safe to assume
the VM is basically gone and can we perform standard cleanup actions
(like removing related files from the host file system)?
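To make the question concrete, the pattern looks roughly like this
(Python bindings; the domain name is a placeholder and in our code the
domain handle is obtained earlier); the open question is whether the
VIR_ERR_OPERATION_INVALID branch may safely proceed with cleanup:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("migrated-vm")  # placeholder name

try:
    dom.destroyFlags(libvirt.VIR_DOMAIN_DESTROY_GRACEFUL)
except libvirt.libvirtError as e:
    code = e.get_error_code()
    if code == libvirt.VIR_ERR_NO_DOMAIN:
        pass  # expected: the domain is already gone
    elif code == libvirt.VIR_ERR_OPERATION_INVALID:
        pass  # the case in question: domain not running -- clean up?
    else:
        raise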
Thank you,
Milan
[libvirt-users] migrating from XenServer / XCP to libvirt/KVM
by Daniel Pocock
I've been using XenServer / XCP (with the "xe" toolset) for a number of
years to host servers used for development and free software projects,
and I'm now looking at migrating all of those environments to
libvirt / KVM.
I had a look at the wiki[1] already and didn't see XenServer mentioned
there.
Could anybody help comment on or explain a few things:
- comparison between XenServer and libvirt: I notice many similar
concepts, for example, XenServer has storage repositories and libvirt
has storage pools. However, in many examples such as this[2] it appears
that libvirt requires more manual effort. For example, when you create
a storage repository in XenServer, the tool takes a physical block
device as input and creates the necessary volume group, logical volume
and a filesystem. In that libvirt example, it appears each of those
steps must be done manually (see the storage-pool sketch after this
list). The networking examples are similar: the
XenServer tools do all the Open vSwitch stuff behind the scenes but in
the libvirt examples it appears necessary to create the bridge manually
before telling libvirt about it. Is this all correct or are the
examples I've seen out of date? Do more recent (or future) releases of
libvirt aim to automate/hide more of these things?
- what is best practice for virtual disk images? Does libvirt always
use files (like XenServer) or can/should block devices be used
directly? If using files, is any special care needed to avoid block
alignment problems?
- are there any guides or tools recommended for migrating small
XenServer environments (less than 50 domains on a single physical node)
to libvirt + KVM?
- is there a list of small things that need to change in a VM before
running it under libvirt / KVM? I'm guessing this might include
bootloader config, updating the kernel, changing block device names in
/etc/fstab and changing network device names in scripts - are there any
others?
- can anybody comment on more tricky issues that may arise in such a
migration, for example, will a Windows VM be likely to run without
modification when migrated from XenServer to libvirt/KVM or will it need
extra drivers added or anything else before the migration?
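To illustrate the storage-pool point above (see the first item): as far
as I can tell, libvirt can at least create the volume group itself when
you define a logical pool and ask it to build it, roughly like this with
the Python bindings (the device path is made up), but I'd like to
confirm that this is the intended workflow and how far it goes compared
to what the XenServer tools do:

import libvirt

POOL_XML = """
<pool type='logical'>
  <name>vg_guests</name>
  <source>
    <device path='/dev/sdb'/>  <!-- made-up block device -->
  </source>
  <target>
    <path>/dev/vg_guests</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.build(0)        # expected to run pvcreate/vgcreate on the device
pool.create(0)       # activate the pool
pool.setAutostart(1)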
Regards,
Daniel
1. http://wiki.libvirt.org/page/Main_Page
2. https://keepingitclassless.net/2014/01/libvirt-intro-basic-configuration/
[libvirt-users] MIPS emulation broken - No PCI buses available
by Ian Pilcher
I am trying to create a QEMU MIPS guest, so that I can test some code
for big-endian safety.
Every attempt to create a MIPS guest is giving me an error:
Unable to complete install: 'XML error: No PCI buses available'
It seems like this is a known issue.
https://www.redhat.com/archives/libvir-list/2016-May/msg00197.html
However, I am still getting this on a fully updated Fedora 25 system
with libvirt-2.2.0-2.fc25.x86_64.
--
========================================================================
Ian Pilcher arequipeno(a)gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
[libvirt-users] trouble after upgrading from 3.0.0 to 3.1.0
by Michael Ströder
Hi!
After the last OS update (openSUSE Tumbleweed), which brought libvirt
from 3.0.0 to 3.1.0, starting VMs (qemu-kvm) no longer works:
error: internal error: child reported: Kernel does not provide mount
namespace: Permission denied
The kernel had previously been updated to 4.10.1 and worked just fine
with the libvirt 3.0.0 packages.
Any clue how to work around that?
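In case it matters: I noticed that /etc/libvirt/qemu.conf has a
"namespaces" option, and if I read it correctly something like the
following (plus a libvirtd restart) should disable the per-VM mount
namespace, though I don't know whether that is a sane workaround:

# /etc/libvirt/qemu.conf
namespaces = [ ]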
Ciao, Michael.
Re: [libvirt-users] [libvirt] libvirt and dmesg
by Cedric Bosdonnat
Hello Pierre-Jacques,
First, note that you posted your message on the developers' mailing list.
For user questions like this, please use instead: https://www.redhat.com/mailman/listinfo/libvirt-users
According to https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
lxc.kmsg is only used to symlink /dev/kmsg to /dev/console, so setting
it to 0 only results in the absence of that symlink. libvirt doesn't set
up this link at all; maybe it is coming from the LXC domain's root file
system.
On Wed, 2017-03-15 at 08:49 +0100, Michel Pierre-Jacques wrote:
> Hi all, I'm trying to find a way to obtain the same effect as the
> lxc.kmsg=0 setting in an LXC config file.
>
> How can I do that with the XML of an LXC domain? I have made lots of
> attempts without success.
>
> Is there another way to deny access to dmesg for a libvirt LXC
> domain?
From what I see, dmesg doesn't necessarily require /dev/kmsg (at least
on openSUSE), so blocking it may be trickier. Maybe you should tell us
more about how you set up your container.
--
Cedric
[libvirt-users] question about libvirt and suspending guests during live migration
by Chris Friesen
Hi,
I hope someone can help me out.
I'm running into an issue with libvirt 1.2.12 reporting "operation failed:
domain is no longer running" for a migration when qemu thinks it was fine.
The steps are:
1) create guest with stress test running in it to dirty memory at a high rate
(fast enough that it would not normally complete live-migration)
2) trigger live migration with dom.migrateToURI2()
3) while migration is in progress, call dom.suspend() on the migrating domain.
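In code, steps 2 and 3 boil down to roughly the following (Python; the
URIs, domain name and migration flags are placeholders rather than
exactly what we use):

import threading
import time
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("stress-guest")  # placeholder name

flags = (libvirt.VIR_MIGRATE_LIVE |
         libvirt.VIR_MIGRATE_PEER2PEER |
         libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)

def migrate():
    # step 2: start the live migration (peer-to-peer to the target URI)
    dom.migrateToURI2("qemu+tcp://dest-host/system", None, None,
                      flags, None, 0)

t = threading.Thread(target=migrate)
t.start()

# step 3: pause the domain while the migration is still in progress
time.sleep(5)  # crude stand-in for "while migration is in progress"
dom.suspend()

t.join()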
What I see at this point is the following:
a) At time 50.465 the monitoring code sees a VIR_DOMAIN_EVENT_SUSPENDED event,
as expected.
b) An instrumented qemu logs the following:
51.143: done transferring state
51.143: done migration
51.144: qmp_query_migrate reporting state completed
c) At time 51.468 the monitoring code sees a VIR_DOMAIN_EVENT_RESUMED event,
with detail of VIR_DOMAIN_EVENT_RESUMED_UNPAUSED
d) At time 51.469 the monitoring code sees a VIR_DOMAIN_EVENT_RESUMED event,
with detail of VIR_DOMAIN_EVENT_RESUMED_MIGRATED
e) At time 51.471 the dom.migrateToURI2() call raises an exception (this is
Python). The corresponding libvirt log file shows:
"error : virNetClientProgramDispatchError:177 : operation failed: domain is no
longer running"
For what it's worth, the problem seems to be fixed in libvirt 1.2.17. In that
version and later I don't see the VIR_DOMAIN_EVENT_RESUMED event; the
migration just completes.
I'm looking at the libvirt history, but I figured I'd ask here too...
Thanks,
Chris