On 4/25/14 5:44 , "Daniel P. Berrange" <berrange(a)redhat.com> wrote:
On Thu, Apr 24, 2014 at 09:29:04PM +0000, Tomoki Sekiyama wrote:
> On 4/24/14 4:58 , "Daniel P. Berrange" <berrange(a)redhat.com> wrote:
>
> >On Thu, Apr 24, 2014 at 12:16:00AM +0000, Tomoki Sekiyama wrote:
> >> Hi Daniel,
> >>
> >>
> >> On 4/23/14 5:55 , "Daniel P. Berrange" <berrange(a)redhat.com> wrote:
> >> >On Tue, Apr 22, 2014 at 06:22:18PM +0000, Tomoki Sekiyama wrote:
> >> >> Hi Daniel,
> >> >> thanks for your comment.
> >> >>
> >> >> On 4/22/14 11:39 , "Daniel P. Berrange" <berrange(a)redhat.com> wrote:
> >> >> >On Thu, Apr 03, 2014 at 11:39:29AM -0400, Tomoki Sekiyama wrote:
> >> >> >> +
> >> >> >> +/**
> >> >> >> + * virDomainFSFreeze:
> >> >> >> + * @dom: a domain object
> >> >> >> + * @disks: list of disk names to be frozen
> >> >> >> + * @ndisks: the number of disks specified in @disks
> >> >> >
> >> >> >I realize this current patch series doesn't use the @disks parameter
> >> >> >at all, but what exactly are we expecting to be passed here ? Is
> >> >> >this a list of filesystem paths from the guest OS pov, or is it a
> >> >> >list of disks targets from libvirt's POV ? I'm guessing the former
> >> >> >since this is expected to be passed to the guest agent.
> >> >>
> >> >> My intention for 'disks' is the latter, a list of disk targets from
> >> >> libvirt's POV.
> >> >> Currently it is just a placeholder in the API. It would be passed to
> >> >> the agent after it is converted into a list of device addresses (e.g.
> >> >> a pair of drive address and controller's PCI address) so that the
> >> >> agent can look up filesystems on the specified disks.
> >> >
> >> >Hmm, I wonder how practical such a conversion will be.
> >> >
> >> >eg libvirt has a block device "sda", but the guest OS may have added
> >> >a partition table (sda1, sda2, sda3, etc) and then put some of those
> >> >partitions into LVM (volgroup00) and then created logical volumes
> >> >(vol1, vol2, etc). The guest agent can freeze individual filesystems
> >> >on each logical volume, so if the API is just taking libvirt block
> >> >device names, we can't express any way to freeze the filesystems the
> >> >guest has.
> >>
> >> Specifying libvirt disk alias names comes from applications'
> >> requirements. For example, the OpenStack Cinder driver only knows
> >> libvirt device names.
> >> It is also nice if virDomainSnapshotCreateXML can specify 'disks' to be
> >> frozen when only a subset of the disks is specified in the snapshot XML.
> >>
> >> I'm now prototyping qemu-guest-agent code to resolve filesystems from
> >> disk addresses, and it is working with virtio/SATA/IDE/SCSI drives on
> >> x86 Linux guests. It can also handle LVM logical volumes that lie on
> >> multiple partitions on multiple disks.
> >>
> >>
> >> https://github.com/tsekiyama/qemu/commit/6d26115e769a7fe6aba7be52d2180453aca5fee5
> >>
> >>
> >> This gathers disk device information from sysfs in the guest.
> >> On Windows, I hope Virtual Disk Service can provide this kind of
> >> information too.
> >
> >All of this assumes that you're only interested in freezing filesystems
> >that are backed by virtual devices exposed by the hypervisor. A guest
> >OS could be directly accessing iSCSI/RBD/Gluster volumes. IMHO, it is
> >within scope of this API to be able to freeze/thaw such volumes, but
> >if you make the API take libvirt disk names, this is impossible, since
> >these volumes are invisible to libvirt.
> >
> >What you propose is also fairly coarse because you are forcing all
> >filesystems on a specified block device to be suspended at the
> >same time. That is indeed probably the common case need, but it is
> >desirable not to lock ourselves into that as the only option.
> >
> >Finally, if the guest is doing filesystem passthrough (eg virtio-9p)
> >then we ought to allow for freeze/thaw of such filesystems, which
> >again don't have any block device associated with libvirt XML.
> >
> >> >So I think we have no choice but to actually have the API take a
> >> >list of guest "volumes" (eg mount points in Linux, or drive letters
> >> >in Windows).
> >> >
> >> >Ideally the guest agent would also provide a way to list all
> >> >currently known guest "volumes" so we could expose that in the
> >> >API too later.
> >>
> >> Possibly. If the volumes information from the API contains the
> >> dependent hardware addresses, libvirt clients might be able to map
> >> the volumes and libvirt disks from domain XML.
> >> In this case, specifying a subset of volumes in virDomainSnapshotCreateXML
> >> would be difficult. Maybe libvirt should provide the mapping function.
> >>
> >> Which way do you prefer?
> >
> >IMHO the more I look at this, the more I think we need to provide a way
> >for apps to enumerate mount points in the guest, so that we can cover
> >freeze/thaw of individual filesystems, not merely block device.
>
> OK, then, APIs will look like
>
> int virDomainFSFreeze(virDomainPtr dom, const char **mountPoints,
>                       unsigned int nMountPoints, unsigned int flags);
>
> int virDomainFSThaw(virDomainPtr dom, unsigned int flags);
>
> As we drop nesting, FSThaw will not take a mountPoints argument.

Actually, even without nesting, I think it is still potentially valuable
to allow us to thaw individual filesystems. I'd include mountPoints in
the args, even if we don't intend to implement it in QEMU for the
foreseeable future, just to future-proof it.

I see. I'll make them:

int virDomainFSFreeze(virDomainPtr dom, const char **mountPoints,
                      unsigned int nMountPoints, unsigned int flags);

int virDomainFSThaw(virDomainPtr dom, const char **mountPoints,
                    unsigned int nMountPoints, unsigned int flags);

however, the QEMU driver might ignore mountPoints in virDomainFSThaw.
Regards,
Daniel
Regards,
Tomoki Sekiyama