On 11/01/2012 06:33 AM, Michal Privoznik wrote:
Currently, when we are doing a (managed) save, we insert the
iohelper between qemu and the OS. A pipe is created, the
writing end is passed to qemu and the reading end to the
iohelper, which reads the data and writes it into the given
file. However, since write() completes asynchronously with
respect to the storage, the data may still sit in OS caches;
in some (corner) cases all migration data may have been read
and written (though not physically), so both qemu and the
iohelper report success. On some non-local filesystems, where
ENOSPC is only polled every X time units, we can thus end up
in a situation where every operation succeeded but the data
hasn't reached the disk, and in fact never will. Therefore we
ought to sync caches to make sure the data has reached the
block device on the remote host.
---
+ /* If we are on a shared FS, ensure all data is written, as some
+ * FSs may do writeback caching or polling for ENOSPC or other
+ * magic that a local FS does not. */
+ if (virStorageFileIsSharedFS(fdoutname) && (fdatasync(fdout) < 0)) {
+ virReportSystemError(errno, _("unable to fdatasync %s"), fdoutname);
+ goto cleanup;
+ }
I don't feel comfortable with that; we should do the fdatasync
everywhere, not just on shared filesystems. It's better not to
second-guess which file systems have which behaviors.
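Something along these lines (a minimal sketch of the unconditional
variant, not the actual patch; the flush_output helper and the
EINVAL handling are my own illustration, and errno values other
than EINVAL may also warrant ignoring depending on the fd type):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int
flush_output(int fdout, const char *fdoutname)
{
    /* fdatasync() flushes the file's data (though not necessarily
     * all metadata) to the underlying storage.  Doing it on every
     * file system catches writeback caching anywhere, local or
     * shared, without guessing at per-FS behavior. */
    if (fdatasync(fdout) < 0) {
        /* POSIX says fsync/fdatasync fail with EINVAL on file
         * descriptors that don't support synchronization (e.g. a
         * pipe); that isn't a data-integrity failure, so ignore it. */
        if (errno != EINVAL) {
            fprintf(stderr, "unable to fdatasync %s: %s\n",
                    fdoutname, strerror(errno));
            return -1;
        }
    }
    return 0;
}

That keeps the error path for real sync failures while the EINVAL
check avoids spurious errors when the output is not a regular file.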
--
Eric Blake eblake(a)redhat.com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org