Thanks for the response! I'm glad this is possible...

OpenNebula requires the permissions on the files it uses to be opened up a fair bit, so I made sure the backing store had those permissions. Also, I'm actually letting OpenNebula run the qemu-img line, and it would probably have to run any libvirt lines as well. (The workflow is: OpenNebula kicks off a new machine process, which sets up the base files, including the image (originally by copying; I'm trying to do that via a snapshot instead) and the XML file, then creates and boots the machine using libvirt.)
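For concreteness, the snapshot step that replaces the copy amounts to something like the sketch below (scratch paths for illustration, not my real OpenNebula paths; I spell out backing_fmt explicitly so nothing ever has to probe the backing file's format):

```shell
# Sketch of the snapshot-instead-of-copy step (illustrative paths only).
# Create a base image, then a qcow2 snapshot backed by it; recording
# backing_fmt means qemu/libvirt never have to autoprobe the format.
workdir=$(mktemp -d)
qemu-img create -f qcow2 "$workdir/base.qcow2" 1G
qemu-img create -f qcow2 \
    -o backing_file="$workdir/base.qcow2",backing_fmt=qcow2 \
    "$workdir/snap.qcow2"
qemu-img info "$workdir/snap.qcow2"   # inspect the result
rm -rf "$workdir"
```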
The backing store is located in ~/public. The AppArmor profile includes the lines:

  @{HOME}/ r,
  @{HOME}/** r,

and

  /**.img r,
  /**.qcow{,2} r,
  /**.vmdk r,
  /**.[iI][sS][oO] r,
  /**/disk{,.*} r,

under a comment that says:

  # For backingstore, virt-aa-helper needs to peek inside the disk image, so
  # allow access to non-hidden files in @{HOME} as well as storage pools, and
  # removable media and filesystems, and certain file extensions. A
  # virt-aa-helper failure when checking a disk for backingstore is non-fatal
  # (but obviously the backingstore won't be added).
And from my reading of the AppArmor docs, this should allow access.

I can't find any AppArmor errors in /var/log/kern.log or /var/log/messages. I don't have a /var/log/apparmor or a /var/log/audit. This is running inside a chroot (it works fine if I copy the images rather than snapshotting), and I can't find any AppArmor errors inside or outside the chroot.
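For the record, these are the generic places I've been checking for denials (exact log locations vary by release, and aa-status normally needs root):

```shell
# Generic places AppArmor denials show up (assumes an Ubuntu-ish layout;
# the exact log file varies by release, and some commands need root).
dmesg | grep -iE 'apparmor|DENIED' || echo "no apparmor lines in dmesg"
grep -ri apparmor /var/log/kern.log /var/log/messages 2>/dev/null \
    || echo "no apparmor lines in logs"
# aa-status lists loaded profiles and whether each is in enforce or
# complain mode:
aa-status 2>/dev/null || echo "aa-status unavailable (needs root or apparmor tools)"
```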
I also found this:
http://serverfault.com/questions/145834/how-to-convert-a-raw-disk-image-to-a-copy-on-write-image-based-on-another-image
but I think my version of libvirt has the AppArmor updates (see above).
The other piece I found is from /var/log/libvirt/libvirtd.log:

21:55:33.603: 6988: error : qemuMonitorOpenUnix:291 : monitor socket did not show up.: Connection refused
21:55:33.603: 6988: error : qemuProcessWaitForMonitor:1069 : internal error process exited while connecting to monitor: file=/var/lib/one/vm/56/images/disk.0,if=none,id=drive-virtio-disk0,boot=on,format=qcow2,cache=none
But I assume the monitor socket isn't showing up because qemu isn't starting, because it can't read the file...

Any other ideas for how to see whether it's attempting to access the backing file and failing? I apparently don't have strace, either... :-(
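One crude check I can do without strace is to attempt the read myself (path is the backing store from above; on the real host you'd want to run it as whatever user the qemu process runs as, e.g. via sudo -u). Caveat: this only catches plain permission problems; AppArmor confines the qemu process itself, so a denial there won't reproduce from a shell.

```shell
# Crude substitute for strace: try reading the start of the backing file
# directly. Path is the backing store from this thread; run as the qemu
# process's user for a meaningful result.
backing=/var/lib/one/public/lin_client_current.qcow2
if head -c 512 "$backing" >/dev/null 2>&1; then
    echo "backing file readable"
else
    echo "cannot read backing file (missing, or a plain permissions problem)"
fi
```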
On 06/06/2012 05:23 PM, Eric Blake wrote:
> On 06/06/2012 10:55 AM, Sean Abbott wrote:
>> So, I was attempting to use qemu snapshots with backing stores. The
>> QEMU docs (http://wiki.qemu.org/Documentation/CreateSnapshot) make it
>> sound like you simply point your qemu at the snapshot after its
>> creation, and you're golden.
>>
>> When attempting this with libvirt, though, it fails.
>
> Libvirt definitely supports this, as I use it for my guests, so let's
> figure out where you went wrong. By the way, libvirt can create qcow2
> files itself, rather than forcing you to hand-create them with qemu-img,
> although support for this could probably be improved with more APIs and
> documentation. Patches welcome.
>
>>
>> I created a snapshot using the above tutorial. The resulting file is
>> disk.0, and a qemu-img info on it returns:
>>
>> image: disk.0
>> file format: qcow2
>> virtual size: 29G (31457280000 bytes)
>> disk size: 140K
>> cluster_size: 65536
>> backing file: /var/lib/one/public/lin_client_current.qcow2 (actual path:
>> /var/lib/one/public/lin_client_current.qcow2)
>>
>> So that all looks groovy, right?
>
> Unfortunately, 'qemu-img info' output doesn't say whether you properly
> populated the backing_fmt property, but I will assume that is not your
> issue (do note, however, that failure to use the backing_fmt property is
> a security hole - it means libvirt and/or qemu will autoprobe the format
> from the backing file itself, but if the backing file is supposed to be
> raw, the guest can manipulate the backing file into looking non-raw, and
> cause your host to hand over control of files to the guest that should
> not normally be accessible to the guest).
>
>>
>> Then, I created (via opennebula) an xml deployment file like so:
>> http://paste.ubuntu.com/1027145/
>
> which included:
>
>   <disk type='file' device='disk'>
>     <source file='/var/lib/one/vm/56/images/disk.0'/>
>     <target dev='hda' bus='virtio'/>
>     <driver name='qemu' type='qcow2' cache='none'/>
>
> and that looked correct to me.
>
>>
>> When I attempt to do a virsh create, I get the following errors:
>>
>> virsh # create deployment.0
>> error: Failed to create domain from deployment.0
>> error: internal error process exited while connecting to monitor:
>> file=/var/lib/one/vm/56/images/disk.0,if=none,id=drive-virtio-disk0,boot=on,format=qcow2,cache=none
>> qemu-kvm: boot=on|off is deprecated and will be ignored. Future versions
>> will reject this parameter. Please update your scripts.
>
> This warning is not the real problem, but a patch to libvirt to avoid it
> might be nice, if it hasn't already been patched in newer libvirt.
>
>> qemu-system-x86_64: -drive
>> file=/var/lib/one/vm/56/images/disk.0,if=none,id=drive-virtio-disk0,boot=on,format=qcow2,cache=none,boot=on:
>> could not open disk image /var/lib/one/vm/56/images/disk.0: Invalid argument
>
> You mentioned Ubuntu - do you have AppArmor running? This could be a
> case of the AppArmor settings on your machine preventing qemu from
> opening the backing file. I don't have Ubuntu experience myself to tell
> you how to resolve it (I tend to work with SELinux on Fedora as my
> security mechanism), but suspect that it might be a failure along the
> lines of an over-strict security policy.
>
>>
>> So...something isn't working. Is it possible to do this, or should I
>> give up on this path?
>
> Libvirt definitely supports what you want to do, but I don't know what
> to suggest to help you get further.
>