On 03/11/11, lvroyce wrote:
Hi all,
I came across the issue below while testing:
1. Create a volume and attach it to domain A.
2. Unplug the VG from the host to emulate a volume failure.
3. Start domain A (this fails).
In step 3, domain A cannot be started, because the disk listed in
the XML cannot be found when creating the domain.
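For reference, a minimal sketch of step 3 using the libvirt Python
bindings; the connection URI and the domain name 'A' are assumptions
taken from the steps above:

    import libvirt

    conn = libvirt.open('qemu:///system')  # hypervisor URI assumed
    dom = conn.lookupByName('A')           # domain from step 1
    try:
        dom.create()                       # start the defined domain
    except libvirt.libvirtError as err:
        # With the VG unplugged, the disk's source path no longer
        # exists, so the start fails before the guest even boots.
        print('start failed: %s' % err)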
I'm not sure whether this is reasonable. Ordinarily, a system can
still start even if one of its data disks is corrupt. Moreover, if
in a data center we carelessly attach a corrupt volume to all the
guests, every guest will fail to boot.
I suggest automatically detaching a disk that can't be found and
just issuing a warning. Please let me know whether you consider
this a bug or a feature. Thanks.
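Until something like this exists in libvirt itself, the behaviour I
have in mind could be approximated from a management script. A rough
sketch (the URI and the domain name 'A' are again assumptions):

    import os
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('A')
    root = ET.fromstring(dom.XMLDesc(0))
    devices = root.find('devices')
    for disk in list(devices.findall('disk')):
        src = disk.find('source')
        if src is None:
            continue
        path = src.get('file') or src.get('dev')
        if path and not os.path.exists(path):
            # Warn and drop the unreachable disk instead of letting
            # the subsequent start fail outright.
            print('warning: dropping missing disk %s' % path)
            devices.remove(disk)
    conn.defineXML(ET.tostring(root).decode())  # redefine without it
    conn.lookupByName('A').create()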
For most of my use cases, I'd rather the guest not start at all.
The reason is that any attached disk is mounted somewhere. Not
having the disk means that the space dedicated to that mount point
is missing. Think of a guest (a system with little disk space)
maintaining a local mirror of a distribution (data taking a lot of
space). Starting the synchronisation script from cron without the
data disk attached means that the synchronisation will restart from
scratch and that the system will fail by running out of disk space.
Since attached disks are used in many different ways, I think it
would be a real gain to have a per-disk option telling whether the
disk is critical for the guest to start.
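To illustrate, the sketch from the previous message could grow a
per-disk decision; the set of critical source paths below is
entirely hypothetical:

    # Extending the earlier sketch: decide per disk instead of
    # always dropping. CRITICAL is a hypothetical set of source
    # paths the administrator marks as required for boot.
    CRITICAL = {'/dev/vg_data/mirror'}

    def handle_missing(path):
        if path in CRITICAL:
            raise SystemExit('critical disk %s missing, refusing '
                             'to start the guest' % path)
        print('warning: dropping non-critical disk %s' % path)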
--
Nicolas Sebrecht