On Tue, Apr 22, 2014 at 06:22:18PM +0000, Tomoki Sekiyama wrote:
>Hi Daniel,
>thanks for your comment.
>
>On 4/22/14 11:39, "Daniel P. Berrange" <berrange@redhat.com> wrote:
>>On Thu, Apr 03, 2014 at 11:39:29AM -0400, Tomoki Sekiyama wrote:
>>> +
>>> +/**
>>> + * virDomainFSFreeze:
>>> + * @dom: a domain object
>>> + * @disks: list of disk names to be frozen
>>> + * @ndisks: the number of disks specified in @disks
>>
>>I realize this current patch series doesn't use the @disks parameter
>>at all, but what exactly are we expecting to be passed here ? Is
>>this a list of filesystem paths from the guest OS pov, or is it a
>>list of disk targets from libvirt's POV ? I'm guessing the former
>>since this is expected to be passed to the guest agent.
>My intention for 'disks' is the latter, a list of disk targets from
>libvirt's POV.
>Currently it is just a placeholder in the API. It would be passed to the
>agent after being converted into a list of device addresses (e.g. a pair
>of drive address and controller PCI address) so that the agent can
>look up the filesystems on the specified disks.
Hmm, I wonder how practical such a conversion will be.
eg libvirt has a block device "sda", but the guest OS may have added
a partition table (sda1, sda2, sda3, etc) and then put some of those
partitions into LVM (volgroup00) and then created logical volumes
(vol1, vol2, etc). The guest agent can freeze individual filesystems
on each logical volume, so if the API is just taking libvirt block
device names, we can't express any way to freeze the filesystems the
guest has.
So I think we have no choice but to actually have the API take a
list of guest "volumes" (eg mount points in Linux, or drive letters
in Windows).
Ideally the guest agent would also provide a way to list all
currently known guest "volumes" so we could expose that in the
API too later.
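
To make that concrete, here is a rough sketch of what a volume-based
call could look like from an application. The signature here is
hypothetical (it just swaps the @disks/@ndisks pair above for guest
volume names), so treat it as an illustration rather than a final API:

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      if (!conn)
          return 1;

      virDomainPtr dom = virDomainLookupByName(conn, "guest1");
      if (!dom) {
          virConnectClose(conn);
          return 1;
      }

      /* Guest volumes, not libvirt disk targets: mount points on
       * Linux, drive letters on Windows. */
      const char *volumes[] = { "/", "/var" };

      /* Hypothetical signature, same shape as the @disks/@ndisks
       * version in the patch but taking guest volume names. */
      if (virDomainFSFreeze(dom, volumes, 2, 0) < 0)
          fprintf(stderr, "freeze failed\n");

      /* ... take a consistent snapshot/backup here ... */

      if (virDomainFSThaw(dom, volumes, 2, 0) < 0)
          fprintf(stderr, "thaw failed\n");

      virDomainFree(dom);
      virConnectClose(conn);
      return 0;
  }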
>>> + * @flags: extra flags, not used yet, so callers should always pass 0
>>> + *
>>> + * Freeze filesystems on the specified disks within the guest (hence
>>> + * guest agent may be required depending on hypervisor used). If
>>> + * @ndisks is 0, every mounted filesystem on the guest is frozen.
>>> + * Freeze can be nested. When it is called on the same disk multiple
>>> + * times, the disk will not be thawed until virDomainFSThaw is called
>>> + * the same times.
>>
>>I'm not entirely convinced we should allow nesting. I think it would
>>be better to report an error if the user tries to freeze a disk that
>>is already frozen, or tries to thaw a disk that is not frozen. Nesting
>>makes it harder for applications to know just what state the guest is
>>in.
>>
>>eg, consider that they need to run a command in the guest agent that
>>requires the guest FS to be thawed. If we allow nesting, then after
>>the app runs virDomainFSThaw, the app can't tell whether or not
>>these paths are actually thawed.
>>
>>IMHO we should forbid nesting.
>Nesting was originally suggested by Eric for scalability reasons.
>But if it is not actually wanted by apps, I think we can remove nesting
>from the API design. (I personally don't know of apps that run parallel
>FSFreeze operations on multiple disks.)
>From the implementation point of view, nesting may be difficult to
>support in the qemu guest agent, because some guest operating systems,
>such as Windows, cannot start a new FSFreeze while some filesystems
>are frozen.
Yep, I hadn't even considered that impl complexity. I agree we should
just drop the notion of nesting and say that we'll return
VIR_ERR_OPERATION_INVALID if someone attempts to freeze an already
frozen FS.
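
From the app side the non-nesting semantics would then look something
like the fragment below (continuing the sketch above; this assumes the
VIR_ERR_OPERATION_INVALID behaviour we are converging on, and needs
<libvirt/virterror.h>):

  /* A second freeze on an already-frozen guest fails instead of
   * bumping a nesting count, so the caller always knows the state.
   * NULL/0 for the disk list means "freeze everything", per the
   * doc comment above. */
  if (virDomainFSFreeze(dom, NULL, 0, 0) < 0) {
      virErrorPtr err = virGetLastError();
      if (err && err->code == VIR_ERR_OPERATION_INVALID) {
          /* already frozen: thaw first, then retry */
      }
  }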
>Regards,
>Tomoki Sekiyama
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|