[libvirt-users] mlocate/updatedb and btrfs subvolume mounts
by G. Richard Bellamy
I've just noticed that I'm having issues with finding files using
"locate" when those files are on btrfs subvolume mounts.
The issue is that updatedb cannot discern the difference between a
btrfs bind mount and a btrfs subvolume [1]. This generally means that if
you're using btrfs subvolume mounts and updatedb at the same time, and
you want those subvolumes indexed, you'll need to set
PRUNE_BIND_MOUNTS to 0 or "no", and then deal with all the cruft that
causes.
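For anyone looking for the knob, it lives in /etc/updatedb.conf (for
mlocate; path may differ per distro). Only the relevant line is shown
below; the rest of the file stays whatever your distro ships:

```shell
# /etc/updatedb.conf (mlocate) -- only the relevant line shown.
# The default on most distros is "yes", which skips bind mounts and,
# because of the bug referenced below, btrfs subvolume mounts as well.
PRUNE_BIND_MOUNTS = "no"
```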
From the bug above, you can see that the Red Hat developer Michal Sekletar is
out of ideas. I'm not sure if he's reached out here or not... and if
not, he might welcome some help from the folks on this list.
Regards,
Richard
[1] https://bugzilla.redhat.com/show_bug.cgi?id=906591#c30
9 years, 7 months
[libvirt-users] P2P live migration with non-shared storage: fails to connect to remote libvirt URI qemu+ssh
by Kashyap Chamarthy
Migration without --p2p works just fine, i.e. the below works:
$ virsh migrate --verbose --copy-storage-all \
--live cvm1 qemu+ssh://kashyapc@devstack3/system
Migration: [100 %]
Result:
- On the source host, the guest is shut off
- On the destination host, the guest is live-migrated successfully
Migration with "--p2p" fails, a simple test below:
First, I should note that I didn't modify any settings in
/etc/libvirt/libvirtd.conf on either the source or destination host, except for
libvirt logging filters.
(0) On source and destination hosts, SSH keys are setup so that passwordless
auth works:
$ ssh-keygen -t rsa
$ eval `ssh-agent`
$ ssh-add .ssh/id_rsa
$ ssh-copy-id root@devstack3
(1) Check that the connection to the remote host works without a prompt for
user credentials (the below works as both a regular user and root):
$ virsh -c qemu+ssh://kashyapc@devstack3/system
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
(2) Perform peer to peer live migration (as root):
$ virsh migrate --verbose --p2p --copy-storage-all \
--live cvm1 qemu+ssh://kashyapc@devstack3/system
error: operation failed: Failed to connect to remote libvirt URI qemu+ssh://kashyapc@devstack3/system: Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
From the libvirtd debug log:
[. . .]
2015-04-03 06:04:16.221+0000: 31009: debug : virCommandRunAsync:2408 : About to run LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin ssh -l kashyapc devstack3 sh -c ''\''if '\
''nc'\'' -q 2>&1 | grep "requires an argument" >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'\''nc'\'' $ARG -U /var/run/libvirt/libvirt-sock'\'''
2015-04-03 06:04:16.223+0000: 31009: debug : virCommandRunAsync:2411 : Command result 0, with PID 11204
2015-04-03 06:04:16.300+0000: 31009: error : virNetSocketReadWire:1564 : Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
2015-04-03 06:04:16.300+0000: 31009: debug : do_open:1194 : driver 6 remote returned ERROR
2015-04-03 06:04:16.300+0000: 31009: debug : qemuDomainObjExitRemote:1695 : Exited remote (vm=0x7f727c005f80 name=cvm1)
2015-04-03 06:04:16.300+0000: 31009: error : doPeer2PeerMigrate:4711 : operation failed: Failed to connect to remote libvirt URI qemu+ssh://kashyapc@devstack3/system: Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
2015-04-03 06:04:16.300+0000: 31009: debug : qemuMigrationRestoreDomainState:1429 : driver=0x7f728c160980, vm=0x7f727c005f80, pre-mig-state=1, state=1
2015-04-03 06:04:16.300+0000: 31009: debug : qemuDomainObjEndAsyncJob:1497 : Stopping async job: migration out (vm=0x7f727c005f80 name=cvm1)
2015-04-03 06:04:16.301+0000: 31007: debug : virProcessAbort:167 : aborting child process 11204
2015-04-03 06:04:16.301+0000: 31007: debug : virProcessAbort:175 : trying SIGTERM to child process 11204
[. . .]
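One thing the log does make clear: in the --p2p case it is libvirtd
itself that spawns the `ssh ... nc -U /var/run/libvirt/libvirt-sock`
tunnel, not my login shell. So a probe I can think of is to repeat that
connection non-interactively as root (the user libvirtd runs as); the
command below is my own sketch, with BatchMode=yes forcing an immediate
failure instead of a password prompt:

```shell
# Reproduce the connection libvirtd attempts, but with BatchMode=yes so
# ssh fails fast instead of prompting. Run this as root on the source,
# since the p2p migration is driven by the root-owned libvirtd process.
ssh -o BatchMode=yes -l kashyapc devstack3 true \
    && echo "non-interactive auth OK" \
    || echo "non-interactive auth FAILED"
```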
What else am I missing?
--
/kashyap
[libvirt-users] ESX VM from scratch
by Paul Apostolescu
I want to create a virtual machine from scratch in ESX, but I can't figure
out how to create the disks (the vmdk files). Any hints on how that can be
done, or even if it's possible at all?
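For context, the closest thing I found is libvirt's storage volume API:
if the ESX datastore shows up as a storage pool on the esx:// connection,
something like the following is what I was hoping would work (the host
and datastore names here are invented placeholders):

```shell
# List the storage pools (datastores) visible on the ESX connection;
# "example-esx-host" is a placeholder hostname.
virsh -c 'esx://example-esx-host/?no_verify=1' pool-list --all

# Create a 10 GiB volume (vmdk) inside a datastore pool;
# "datastore1" and "myvm.vmdk" are hypothetical names.
virsh -c 'esx://example-esx-host/?no_verify=1' \
    vol-create-as datastore1 myvm.vmdk 10G
```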
Thanks
[libvirt-users] couple of ceph/rbd questions
by Brian Kroth
Hi, I've recently been working on setting up a set of libvirt compute
nodes that will be using a ceph rbd pool for storing vm disk image
files. I've got a couple of issues I've run into.
First, per the standard ceph documentation examples [1], the way to add a
disk is to create a block in the VM definition XML that looks something
like this:
<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
<host name='{monitor-host-1}' port='6789'/>
<host name='{monitor-host-2}' port='6789'/>
<host name='{monitor-host-3}' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<auth username='libvirt'>
<secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
</auth>
</disk>
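For completeness, the secret uuid above refers to a libvirt secret
object that has to be defined on every host first. Per the ceph docs
[1], the flow is roughly the following (the uuid matches my example
above; the ceph user name is whatever you created for libvirt):

```shell
# Define the libvirt secret object for the cephx user.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>9ec59067-fdbc-a6c0-03ff-df165c0587b8</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define secret.xml

# Attach the actual cephx key to the secret object.
virsh secret-set-value 9ec59067-fdbc-a6c0-03ff-df165c0587b8 \
    "$(ceph auth get-key client.libvirt)"
```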
The trouble with this approach is that those ceph cluster details
(secret uuid and monitor host list) need to be stored separately in
every single VM disk definition. That makes for a lot of
maintenance when those details need to change (e.g. replacing a monitor
host (common), or changing the auth details (less common)).
I'd prefer to be able to define a libvirt storage pool that contains
those details, and then reference the disks within each VM as volumes,
so that I only need to change the ceph monitor/auth details once per
libvirt compute host, rather than for every single VM disk definition.
I've rebuilt my libvirt packages using --with-rbd-support so that I can
successfully define a libvirt storage pool as follows:
<pool type='rbd'>
<name>libvirt-rbd-pool</name>
<source>
<name>libvirt-pool</name>
<host name='{monitor-host-1}' port='6789'/>
<host name='{monitor-host-2}' port='6789'/>
<host name='{monitor-host-3}' port='6789'/>
<auth username='libvirt' type='ceph'>
<secret uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
</auth>
</source>
</pool>
However, when I go to start a VM with a volume created in that pool as
follows, I get an error:
<disk type='volume' device='disk'>
<source pool='libvirt-rbd-pool' volume='{rbd-volume-name}'/>
<driver name='qemu' type='raw' cache='writethrough'/>
<target dev='vda' bus='virtio'/>
</disk>
"using 'rbd' pools for backing 'volume' disks isn't yet supported"
When I dug through the code, it appears that there's an explicit check
for RBD type storage pools (VIR_STORAGE_POOL_RBD) that disables that
(libvirt-1.2.13/src/storage/storage_driver.c:3159).
Is there a particular reason for that? Has it just not been implemented
yet, or am I specifying the disk definition in the wrong way?
Second, using the former disk definition method, I'm able to run VMs
under qemu, *and* migrate them. Very slick. Nice work all.
However, I found that since by default virt-manager leaves the VM
defined on both the source and destination, I'm actually able to start
the VM in both places. I didn't see an option to disable that, so I
just wrote a simple wrapper script to do the right thing via virsh using
--undefinesource, but I can't guarantee that some other admin might not
skip that and just use the GUI. It appears that libvirt (or is it
qemu?) doesn't set rbd locks on the disk image files by default.
After running across [2], I had originally thought about writing some
hooks to set and release locks on the VMs using the rbd cli, but after
reading the docs on the migration process [3], I think that's probably
not possible since the VM is started in both places temporarily.
I think my other option is to set up some shared fs (maybe cephfs) and
point virtlockd at it so that all of the libvirt compute hosts register
locks on VMs properly. However, I thought I'd ask if anyone knows of
some other magic parameter or setting I can use to have
libvirt/qemu just use rbd locks natively. Or is that not implemented
either?
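For the record, the virtlockd route I'm considering would look roughly
like the fragment below; the lockspace path is my own guess, the point
being only that it must live on a filesystem shared by all hosts:

```shell
# /etc/libvirt/qemu.conf -- enable the lockd lock manager plugin
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf -- put the lockspace on a shared fs
# (e.g. a cephfs mount) so all compute hosts contend on the same locks.
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
```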
Thanks for your help.
Cheers,
Brian
[1] <http://ceph.com/docs/master/rbd/libvirt/#configuring-the-vm>
[2] <https://www.redhat.com/archives/libvirt-users/2014-January/msg00058.html>
[3] <https://libvirt.org/hooks.html#qemu_migration>
[libvirt-users] Lispvirt: porting Libvirt API for Common Lisp
by Julio Faracco
Hi everyone!
I'm developing Libvirt bindings for Common Lisp. The project is called
"Lispvirt".
I created this project because I was working on a Lisp project to manage
virtual machines, which required implementing some code in C and setting
up Lisp to access those C functions. That project was becoming a mess,
and the better scenario is using only Lisp. That's why I started
developing these bindings. Now I'm using only Lisp for it.
For a while, I'm hosting this project on GitHub:
https://github.com/jcfaracco/lispvirt
But I'm planning to move it to common-lisp.net.
There are still many things to do (callbacks, structures, some project
decisions and planning, documentation, etc.). Any contributions or
suggestions would be helpful.
The most important things to do now are test, test and test.
Just sharing in case someone is interested in helping us.
Thanks!
--
Julio Cesar Faracco