There are 8 servers with 8 VMs on each server, and all the qcow images sit on an NFS share on the same external server.
We are starting all 64 VMs at the same time.
Each image is 2.5 GB, so 2.5 GB x 64 VMs = 160 GB = 1280 Gbit.
Reading all of that data over a 1 GbE interface would take 1280 s, i.e. about 21.3 minutes.
Not all of the image is read at boot, so in practice it takes only about 5 minutes.
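
For reference, the same back-of-envelope arithmetic as a small Python sketch (illustrative only; the figures are the ones above, not measurements, and NFS/TCP overhead is ignored, so the real time would be somewhat longer):

    # Worst case: every VM reads its full 2.5 GB image over a single 1 GbE link.
    vms = 64
    image_gb = 2.5                    # GB per qcow image
    link_gbit = 1.0                   # 1 GbE line rate in Gbit/s

    total_gb = vms * image_gb         # 160 GB
    total_gbit = total_gb * 8         # 1280 Gbit
    seconds = total_gbit / link_gbit  # 1280 s
    print(f"{seconds:.0f} s = {seconds / 60:.1f} min")   # 1280 s = 21.3 min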

Anyway, a single timeout value won't solve every problem, so why not 31? Or 60?


On Thu, Jan 23, 2014 at 4:44 PM, Daniel P. Berrange <berrange@redhat.com> wrote:
On Thu, Jan 23, 2014 at 04:40:40PM +0200, Pavel Fux wrote:
> I agree, there is no harm in adding a configuration option; different
> setup configurations require different timeout values.
> My setup was 8 servers booted with PXE boot and running on an nfs rootfs,
> with 8 vms on each.
> When I tried to start all of them together, the bottleneck was the network,
> and it takes about 5 minutes till they all start.

That doesn't make any sense. The waiting code here is about the QEMU
process' initial startup sequence - ie the gap between exec'ing the
QEMU binary, and it listening on the monitor socket.  PXE / nfsroot
doesn't get involved there at all. Even if it were involved, if you're
seeing 5 minute delays with only 8 vms on the host, something is
seriously screwed with your host. This isn't a compelling reason to
add this config option to libvirt.

Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|