On Sun, Mar 22, 2009 at 07:28:36PM +0900, Matt McCowan wrote:
> Running into an issue where, if I/O is hampered by load for example,
> reading a largish state file (created by 'virsh save') is not allowed
> to complete.
>
> qemudStartVMDaemon in src/qemu_driver.c has a loop that waits 10
> seconds for the VM to be brought up. An strace against libvirt when
> doing a 'virsh restore' against a largish state file shows the VM
> being sent a kill while it is still happily reading from the file.
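The fixed-window wait described above follows a common poll-until-deadline pattern. A minimal sketch of that pattern (illustrative only; `wait_for_vm` and the readiness helpers are hypothetical names, not libvirt code):

```c
#include <time.h>
#include <unistd.h>

/* Hypothetical sketch of the fixed-window wait described above: poll a
 * readiness check for up to `timeout_secs`, then give up, at which
 * point the caller kills the VM.  A VM still busy reading a large
 * state file never becomes "ready" within the window, so it gets
 * killed mid-read. */
int wait_for_vm(int (*vm_is_ready)(void), int timeout_secs)
{
    time_t deadline = time(NULL) + timeout_secs;
    while (time(NULL) <= deadline) {
        if (vm_is_ready())
            return 0;               /* VM came up within the window */
        usleep(200 * 1000);         /* poll again in 200ms */
    }
    return -1;                      /* timed out; caller kills the VM */
}

/* Demo stand-ins for the readiness check. */
int always_ready(void) { return 1; }
int never_ready(void)  { return 0; }
```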
This is a little odd to me - we had previously fixed the KVM migration
code so that during startup with -incoming, it would correctly
respond to monitor commands, explicitly to avoid libvirt timing
out in this way. I'm wondering what has broken since then - whether
libvirt's usage changed, or the KVM implementation changed.
> inotify is used in the xen and uml drivers, so I thought it would be
> a suitable mechanism to delay the timeout loop if the state file was
> still being read.
>
> Is this the right way to solve this problem?
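The inotify idea quoted above - keep extending the deadline while the state file is still being read - could be sketched roughly as follows. This is a hypothetical Linux-only illustration, not libvirt code; all names are made up:

```c
#include <sys/inotify.h>
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: watch the state file for IN_ACCESS (read)
 * events.  A startup loop could keep extending its timeout as long
 * as reads are still being observed, instead of killing the VM
 * after a fixed 10 seconds. */
static int saw_read_within(int ifd, int window_secs)
{
    fd_set rfds;
    struct timeval tv = { window_secs, 0 };
    FD_ZERO(&rfds);
    FD_SET(ifd, &rfds);
    if (select(ifd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0;                       /* no read seen in the window */
    char buf[4096];
    return read(ifd, buf, sizeof(buf)) > 0;
}

/* Self-contained demo: create a file, watch it, read from it, and
 * confirm the read was observed.  Returns 1 on success. */
int demo(void)
{
    const char *path = "/tmp/inotify-demo-state"; /* stand-in for the saved state file */
    FILE *f = fopen(path, "w");
    if (!f)
        return 0;
    fputs("state data", f);
    fclose(f);

    int ifd = inotify_init1(IN_NONBLOCK);
    inotify_add_watch(ifd, path, IN_ACCESS);

    char c;
    int rfd = open(path, O_RDONLY);     /* simulate QEMU reading the file */
    if (read(rfd, &c, 1) != 1)
        return 0;
    close(rfd);

    int ok = saw_read_within(ifd, 2);
    close(ifd);
    unlink(path);
    return ok;
}
```

One design caveat with this approach: IN_ACCESS events fire for reads by any process, so the watcher cannot tell QEMU's reads apart from anyone else's.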
This is an interesting idea, but I'd like to figure out why we
have a regression in this area first.
Regards,
Daniel
--
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-        http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|