Daniel Veillard wrote:
- if /proc/xen doesn't exist (on Linux; /dev/xen on Solaris), we should
not default to Xen, since we can be pretty sure that trying to connect
will fail
- if /proc/vz is present, the kernel has been compiled with OpenVZ
support, and OpenVZ is likely the default virtualization in use
- if the kvm module is loaded, we should probably use qemu:///system
when running as root, or qemu:///session otherwise
While I definitely like the direction this thread is going in, I'd just
warn that this sort of probing can be just as harmful as the current
behavior.
While Xen and KVM are mutually exclusive, the same is not true of
OpenVZ/Linux Containers. It may be unlikely that both are actively in
use today, but I find it very likely that future distributions will
ship with both features present.
So you may probe that this is a Linux Containers capable host when you
in fact intend to use KVM for virtualization (and vice versa).
Regards,
Anthony Liguori
I guess an easy heuristic would allow picking the right hypervisor by
default on Solaris too.
At some point we may have support for multiple hypervisors
simultaneously on a Linux system thanks to pv_ops, but right now it
doesn't make much sense to force a default Xen connection when we know
it won't work.
For a virsh-specific solution there is the VIRSH_DEFAULT_CONNECT_URI
environment variable, but it's not really user friendly and not very
generic.
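For reference, that virsh workaround looks like this (plain environment-variable usage; the URI values are just examples):

```shell
# Default every virsh invocation in this shell to the qemu system instance
export VIRSH_DEFAULT_CONNECT_URI=qemu:///system
virsh list --all          # no -c/--connect needed now

# Or override the default for a single invocation
VIRSH_DEFAULT_CONNECT_URI=qemu:///session virsh list
```

It only affects virsh itself, which is why it isn't a generic fix for other libvirt clients.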
What do people think? I would be tempted to provide a patch changing
do_open()'s behaviour on Linux when name is NULL or "", to check which
hypervisor might be present and running.
Daniel