[PATCH 0/2] Introduce mshv Hypervisor type.

These patches introduce the 'mshv' hypervisor type and check for the
hypervisor device on the host while starting ch guests.

Praveen K Paladugu (2):
  conf: Introduce mshv hypervisor type
  ch: Check for hypervisor while starting guests

 src/ch/ch_conf.c        |  2 ++
 src/ch/ch_driver.c      |  7 +++++++
 src/ch/ch_process.c     | 34 ++++++++++++++++++++++++++++++++++
 src/conf/domain_conf.c  |  1 +
 src/conf/domain_conf.h  |  1 +
 src/qemu/qemu_command.c |  1 +
 6 files changed, 46 insertions(+)

--
2.43.0

This hypervisor type is available on a host running Microsoft Hypervisor
with Linux as the Dom0. The Dom0 should load the "mshv" drivers to expose
the hypervisor device to userspace. Cloud-Hypervisor supports running
guests on Linux hosts with mshv as the hypervisor.

Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
---
 src/conf/domain_conf.c  | 1 +
 src/conf/domain_conf.h  | 1 +
 src/qemu/qemu_command.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index fb5a5cc351..d0b33e97e6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -122,6 +122,7 @@ VIR_ENUM_IMPL(virDomainVirt,
               "test",
               "vmware",
               "hyperv",
+              "mshv",
               "vbox",
               "phyp",
               "parallels",
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index bd283d42df..128b058161 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -136,6 +136,7 @@ typedef enum {
     VIR_DOMAIN_VIRT_TEST,
     VIR_DOMAIN_VIRT_VMWARE,
     VIR_DOMAIN_VIRT_HYPERV,
+    VIR_DOMAIN_VIRT_MSHV,
     VIR_DOMAIN_VIRT_VBOX,
     VIR_DOMAIN_VIRT_PHYP,
     VIR_DOMAIN_VIRT_PARALLELS,
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 31d42495f4..af38ade0c0 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7184,6 +7184,7 @@ qemuBuildAccelCommandLine(virCommand *cmd,
     case VIR_DOMAIN_VIRT_TEST:
     case VIR_DOMAIN_VIRT_VMWARE:
     case VIR_DOMAIN_VIRT_HYPERV:
+    case VIR_DOMAIN_VIRT_MSHV:
     case VIR_DOMAIN_VIRT_VBOX:
     case VIR_DOMAIN_VIRT_PHYP:
     case VIR_DOMAIN_VIRT_PARALLELS:
--
2.43.0

On Tue, Jan 30, 2024 at 02:44:40PM -0600, Praveen K Paladugu wrote:
This hypervisor type is available on a host running Microsoft Hypervisor and Linux as the Dom0. The Dom0 should load "mshv" drivers to expose the hypervisor device to userspace.
Cloud-Hypervisor supports running guests on Linux Hosts with mshv as the hypervisor.
This is my first time hearing about the Microsoft Hypervisor with Linux Dom0, and the docs I find via Google aren't entirely conclusive. Am I right in thinking that "Microsoft Hypervisor" in this context is simply Hyper-V, aka the same hypervisor you traditionally have under a Windows Dom0?

If so, then I would think we probably don't need a new virDomainVirt type enum entry. We could simply use the pre-existing VIR_DOMAIN_VIRT_HYPERV to represent this configuration in the cloud-hypervisor driver.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Wed, Jan 31, 2024 at 09:55:29AM +0000, Daniel P. Berrangé wrote:
On Tue, Jan 30, 2024 at 02:44:40PM -0600, Praveen K Paladugu wrote:
This hypervisor type is available on a host running Microsoft Hypervisor and Linux as the Dom0. The Dom0 should load "mshv" drivers to expose the hypervisor device to userspace.
Cloud-Hypervisor supports running guests on Linux Hosts with mshv as the hypervisor.
This is my first time hearing about the Microsoft Hypervisor with Linux Dom0, and the docs I find via Google aren't entirely conclusive.
Unfortunately, Microsoft does not have any official documentation on this configuration, because Linux Dom0 on Microsoft Hypervisor is not yet ready for broader adoption. We still need to sort out licensing and other questions related to this configuration before it can be made public.

That said, we have been working with cloud-hypervisor for a while now to enable mshv as a supported hypervisor in addition to kvm.
Am I right in thinking that "Microsoft Hypervisor" in this context is simply Hyper-V, aka, the same hypervisor you traditionally have under a Windows Dom0 ?
If so then I could think that we probably don't need to have a new virDomainVirt type enum entry. We could simply use the pre-existing VIR_DOMAIN_VIRT_HYPERV to represent this configuration in the cloud-hypervisor configuration.
I considered reusing the VIR_DOMAIN_VIRT_HYPERV entry. From what I understand, this hypervisor option implies Libvirt talks to HyperV using WMI. Although the binary bits of the hypervisors may be the same in both configurations, the interfaces used to interact with the hypervisors are completely different.

With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor, which in turn talks to mshv via kernel ioctls as necessary.

As how libvirt starts guests in these 2 configurations is totally different, I thought it would be better to add a hypervisor type to track this configuration.

Regards,
Praveen

On Wed, Jan 31, 2024 at 12:49:50PM -0800, Praveen Paladugu wrote:
Am I right in thinking that "Microsoft Hypervisor" in this context is simply Hyper-V, aka, the same hypervisor you traditionally have under a Windows Dom0 ?
If so then I could think that we probably don't need to have a new virDomainVirt type enum entry. We could simply use the pre-existing VIR_DOMAIN_VIRT_HYPERV to represent this configuration in the cloud-hypervisor configuration.
I considered reusing VIR_DOMAIN_VIRT_HYPERV entry. From what I understand, this hypervisor option implies Libvirt talks to HyperV using WMI. Although the binary bits of the hypervisors may be the same in both configurations, the interfaces to interact with the hypervisors are completely different.
In the context of libvirt XML config, the virDomainVirt enum very specifically refers to the underlying hypervisor guest ABI. This is distinct from any protocol used for the management of the platform by libvirt.

This is why both the CloudHypervisor and QEMU drivers in libvirt support VIR_DOMAIN_VIRT_KVM for guests, despite being completely different mgmt APIs. Similarly, in the past we have had multiple drivers all use VIR_DOMAIN_VIRT_XEN for the guest, while using completely different mgmt APIs.

So if "mshv" in the context of CloudHypervisor is running HyperV under Dom0 and the guest primarily needs to support the HyperV ABI, then I would say VIR_DOMAIN_VIRT_HYPERV could be the appropriate choice.
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.

With regards,
Daniel
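To make this split concrete, an illustrative sketch (the URIs follow libvirt's usual driver scheme naming; the XML fragment is not taken from the patches): the connection URI picks the management driver, while the type attribute names the guest ABI, so the same domain XML can be valid under more than one driver.

```xml
<!-- Guest ABI: named by the type attribute. -->
<domain type='kvm'>
  <name>demo</name>
  ...
</domain>

<!-- Management driver: chosen by the virConnect URI, e.g.
         virsh -c qemu:///system create demo.xml
         virsh -c ch:///system create demo.xml
     Both drivers can accept type='kvm', despite having
     completely different management APIs. -->
```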

On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
On Wed, Jan 31, 2024 at 12:49:50PM -0800, Praveen Paladugu wrote:
Am I right in thinking that "Microsoft Hypervisor" in this context is simply Hyper-V, aka, the same hypervisor you traditionally have under a Windows Dom0 ?
If so then I could think that we probably don't need to have a new virDomainVirt type enum entry. We could simply use the pre-existing VIR_DOMAIN_VIRT_HYPERV to represent this configuration in the cloud-hypervisor configuration.
I considered reusing VIR_DOMAIN_VIRT_HYPERV entry. From what I understand, this hypervisor option implies Libvirt talks to HyperV using WMI. Although the binary bits of the hypervisors may be the same in both configurations, the interfaces to interact with the hypervisors are completely different.
In the context of libvirt XML config, the virDomainVirt enum is very specifically referring to the underlying hypervisor guest ABI. This is distinct from any protocol used for the management of the platform by libvirt.
This is why both the CloudHypervisor and QEMU drivers in libvirt will support VIR_DOMAIN_VIRT_KVM for guests, despite being completely different mgmt APIs.
Similarly in the past we have had multiple drivers all use VIR_DOMAIN_VIRT_XEN for the guest, while using completely different mgmt APIs.
So if "mshv" in the context of CloudHypervisor is running HyperV under Dom0 and the guest primarily needs to support the HyperV ABI, then I would say VIR_DOMAIN_VIRT_HYPERV could be the appropriate choice.
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in the HyperV (VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying operating systems, Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.

Above, you called out the CloudHypervisor and Qemu drivers in libvirt supporting VIR_DOMAIN_VIRT_KVM. This makes sense to me. In these 2 configurations, both the VMMs (CloudHypervisor, Qemu) use the same/similar set of interfaces provided by the kernel and hypervisor (KVM) to manage guests. Unfortunately, this isn't the case with the VIR_DOMAIN_VIRT_HYPERV and VIR_DOMAIN_VIRT_MSHV types.

With VIR_DOMAIN_VIRT_HYPERV and VIR_DOMAIN_VIRT_MSHV as different hypervisor types, checks on the hosts will be simpler, as each of these types would imply a host OS.

Regards,
Praveen

On Mon, Feb 05, 2024 at 08:12:15AM -0800, Praveen Paladugu wrote:
On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in Hyperv(VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying Operating systems: Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.
By 'hypervisor guest ABI' I'm referring to the general virtualization ABI that the hypervisor exposes to the guest OS, i.e. the functionality that a Linux guest enables with the CONFIG_HYPERV or CONFIG_KVM Kconfig build options.

IIUC, there is no new CONFIG_MSHV in Linux guests, and they would be expected to be built with CONFIG_HYPERV enabled, or am I wrong in that respect?

With regards,
Daniel

On Mon, Feb 05, 2024 at 04:57:28PM +0000, Daniel P. Berrangé wrote:
On Mon, Feb 05, 2024 at 08:12:15AM -0800, Praveen Paladugu wrote:
On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in Hyperv(VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying Operating systems: Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.
By 'hypervisor guest ABI' I'm referring to the general virtualization ABI that the hypervisor exposes to guest OS.
ie the functionality that a Linux guest enables with CONFIG_HYPERV, or with CONFIG_KVM Kconfig build options.
IIUC, there is no new CONFIG_MSHV in Linux guests, and they would be expected to be built with CONFIG_HYPERV enabled, or am I wrong in that respect ?
You are correct, Daniel. The guest needs CONFIG_HYPERV and a few other HYPERV_* configs enabled. The host, on the other hand, needs CONFIG_MSHV_ROOT enabled.

Regards,
Praveen

On Mon, Feb 05, 2024 at 11:15:21AM -0800, Praveen Paladugu wrote:
On Mon, Feb 05, 2024 at 04:57:28PM +0000, Daniel P. Berrangé wrote:
On Mon, Feb 05, 2024 at 08:12:15AM -0800, Praveen Paladugu wrote:
On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in Hyperv(VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying Operating systems: Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.
By 'hypervisor guest ABI' I'm referring to the general virtualization ABI that the hypervisor exposes to guest OS.
ie the functionality that a Linux guest enables with CONFIG_HYPERV, or with CONFIG_KVM Kconfig build options.
IIUC, there is no new CONFIG_MSHV in Linux guests, and they would be expected to be built with CONFIG_HYPERV enabled, or am I wrong in that respect ?
You are correct Daniel. The guest needs CONFIG_HYPERV and a few other HYPERV_* configs enabled. The host on the other hand needs CONFIG_MSHV_ROOT enabled.
Regards, Praveen
I understand your recommendation now, Daniel. By assigning a hypervisor type based on the 'hypervisor guest ABI', users will be able to move 'Domain XML' with corresponding guest images across hosts that expose the same hypervisor guest ABI, irrespective of what the underlying OS is. Such a setting would potentially also allow live migration of guests across platforms supporting the same hypervisor guest ABI.

In this particular case though, CONFIG_HYPERV is only part of the story. In order for guests to run on top of Hyperv, they usually need CONFIG_HYPERV_{STORAGE,NET} and other drivers. The guests running on Mshv need virtio drivers. This is because the underlying VMMs in these cases, Hyperv and Cloud-Hypervisor, expose different sets of paravirtualized devices to guests. Although the core hypervisor guest ABI is the same in both cases, guests will not be able to move across 'Hyperv' and 'mshv' configs seamlessly.

It is less likely for these two VMMs to converge on a common set of paravirtualized and emulated devices to allow seamless migration of guests between Hyperv and Mshv configs. Do you still see value in converging both these configs under the hypervisor type VIR_DOMAIN_VIRT_HYPERV?

Regards,
Praveen

On Mon, Feb 12, 2024 at 08:38:42AM -0800, Praveen Paladugu wrote:
On Mon, Feb 05, 2024 at 11:15:21AM -0800, Praveen Paladugu wrote:
On Mon, Feb 05, 2024 at 04:57:28PM +0000, Daniel P. Berrangé wrote:
On Mon, Feb 05, 2024 at 08:12:15AM -0800, Praveen Paladugu wrote:
On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in Hyperv(VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying Operating systems: Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.
By 'hypervisor guest ABI' I'm referring to the general virtualization ABI that the hypervisor exposes to guest OS.
ie the functionality that a Linux guest enables with CONFIG_HYPERV, or with CONFIG_KVM Kconfig build options.
IIUC, there is no new CONFIG_MSHV in Linux guests, and they would be expected to be built with CONFIG_HYPERV enabled, or am I wrong in that respect ?
You are correct Daniel. The guest needs CONFIG_HYPERV and a few other HYPERV_* configs enabled. The host on the other hand needs CONFIG_MSHV_ROOT enabled.
I understand your recommendation now Daniel. By assigning a hypervisor type based on 'hypervisor guest ABI', users will be able to move 'Domain XML' with corresponding guest images across hosts that expose the same hypervisor guest ABI, irrespective of what the underlying OS is. Such a setting would potentially also allow Live Migration of guests across platforms supporting the same hypervisor guest ABI.
In this particular case though, CONFIG_HYPERV is only part of the story. In order for guests to run on top of Hyperv, they usually need CONFIG_HYPERV_{STORAGE,NET} and other drivers.
The guests running in Mshv need virtio drivers. This is because the underlying VMMs in these cases: Hyperv, Cloud-Hypervisor expose different sets of paravirtualized devices to guests. Although the core hypervisor guest ABI is the same in both cases, guests will not be able to move across 'Hyperv' and 'mshv' configs seamlessly.
Yes, that is correct, but we already have XML attributes tracking the device types for storage, network, etc. Probably 50% of the information in the guest XML is expressing guest ABI in some way or other. The domain virt type is just one part of the story.

So it is OK for 2 XML configs to use VIRT_HYPERV, while having different settings for storage/network/etc. We're not trying to make the XML be portable across different hypervisors, just trying to use the same terminology for the same feature across hypervisors.
It is less likely for these two VMMs to converge on a common set of paravirtualized and emulated devices to allow seamless migration of guests between Hyperv and Mshv configs. Do you still see value in converging both these configs under the hypervisor type, VIR_DOMAIN_VIRT_HYPERV?
Yes, I still believe VIRT_HYPERV is the right choice here.

With regards,
Daniel
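As an illustration of the device-attribute point above (the fragments and device choices are illustrative, not taken from the patches): two configs can share type='hyperv' while expressing different paravirtualized device sets elsewhere in the XML.

```xml
<!-- Hyper-V under a Windows Dom0: Hyper-V paravirtual devices. -->
<domain type='hyperv'>
  <devices>
    <disk type='file' device='disk'>
      <target dev='sda' bus='scsi'/>
    </disk>
  </devices>
</domain>

<!-- Hyper-V under a Linux Dom0, managed via cloud-hypervisor:
     virtio devices instead. -->
<domain type='hyperv'>
  <devices>
    <disk type='file' device='disk'>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```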

On Fri, Feb 16, 2024 at 03:20:52PM +0000, Daniel P. Berrangé wrote:
On Mon, Feb 12, 2024 at 08:38:42AM -0800, Praveen Paladugu wrote:
On Mon, Feb 05, 2024 at 11:15:21AM -0800, Praveen Paladugu wrote:
On Mon, Feb 05, 2024 at 04:57:28PM +0000, Daniel P. Berrangé wrote:
On Mon, Feb 05, 2024 at 08:12:15AM -0800, Praveen Paladugu wrote:
On Wed, Jan 31, 2024 at 08:57:04PM +0000, Daniel P. Berrangé wrote:
With the introduced "mshv" hypervisor option, Libvirt doesn't interact with "/dev/mshv" at all. Libvirt just invokes cloud-hypervisor which in turn talks to mshv via kernel ioctls as necessary.
That's OK. The distinction of control/mgmt API is represented by the different libvirt virConnect URI schemes. virDomainVirt is exclusively about what primary hypervisor guest ABI is exposed.
Thanks for the explanation, Daniel. By "underlying hypervisor guest ABI" I am guessing you are referring to the interfaces used for starting and managing guests. If so, the ABIs available in Hyperv(VIR_DOMAIN_VIRT_HYPERV) and "mshv" configurations are completely different too. This is because the underlying Operating systems: Windows and Linux respectively, provide different interfaces for programs to start and manage guests. So, I'd say the hypervisor guest ABIs are different in these 2 configurations.
By 'hypervisor guest ABI' I'm referring to the general virtualization ABI that the hypervisor exposes to guest OS.
ie the functionality that a Linux guest enables with CONFIG_HYPERV, or with CONFIG_KVM Kconfig build options.
IIUC, there is no new CONFIG_MSHV in Linux guests, and they would be expected to be built with CONFIG_HYPERV enabled, or am I wrong in that respect ?
You are correct Daniel. The guest needs CONFIG_HYPERV and a few other HYPERV_* configs enabled. The host on the other hand needs CONFIG_MSHV_ROOT enabled.
I understand your recommendation now Daniel. By assigning a hypervisor type based on 'hypervisor guest ABI', users will be able to move 'Domain XML' with corresponding guest images across hosts that expose the same hypervisor guest ABI, irrespective of what the underlying OS is. Such a setting would potentially also allow Live Migration of guests across platforms supporting the same hypervisor guest ABI.
In this particular case though, CONFIG_HYPERV is only part of the story. In order for guests to run on top of Hyperv, they usually need CONFIG_HYPERV_{STORAGE,NET} and other drivers.
The guests running in Mshv need virtio drivers. This is because the underlying VMMs in these cases: Hyperv, Cloud-Hypervisor expose different sets of paravirtualized devices to guests. Although the core hypervisor guest ABI is the same in both cases, guests will not be able to move across 'Hyperv' and 'mshv' configs seamlessly.
Yes, that is correct, but we already have XML attributes tracking the device types for storage, network, etc. Probably 50% of the information in the guest XML is expressing guest ABI in some way or other. The domain virt type is just one part of the story.
So it is OK for 2 XML configs to use VIRT_HYPERV, while having different settings for storage/network/etc. We're not trying to make the XML be portable across different hypervisors, just trying to use the same terminology for the same feature across hypervisors.
It is less likely for these two VMMs to converge on a common set of paravirtualized and emulated devices to allow seamless migration of guests between Hyperv and Mshv configs. Do you still see value in converging both these configs under the hypervisor type, VIR_DOMAIN_VIRT_HYPERV?
Yes, I still believe VIRT_HYPERV is the right choice here.
Thanks for the discussion, Daniel. This reasoning sounds good to me. I will refactor this patchset to use VIRT_HYPERV as the hypervisor type.

Praveen

While initializing the ch driver, confirm that either the /dev/kvm or /dev/mshv
device is present. Before starting domains, validate that the requested
hypervisor device exists on the host.

Users can specify the hypervisor in a ch guest's domain definition like below:

<domain type='kvm'> _or_ <domain type='mshv'>

Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
---
 src/ch/ch_conf.c    |  2 ++
 src/ch/ch_driver.c  |  7 +++++++
 src/ch/ch_process.c | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 43 insertions(+)

diff --git a/src/ch/ch_conf.c b/src/ch/ch_conf.c
index f421af5121..7cb113bca5 100644
--- a/src/ch/ch_conf.c
+++ b/src/ch/ch_conf.c
@@ -69,6 +69,8 @@ virCaps *virCHDriverCapsInit(void)
     virCapabilitiesAddGuestDomain(guest, VIR_DOMAIN_VIRT_KVM,
                                   NULL, NULL, 0, NULL);
+    virCapabilitiesAddGuestDomain(guest, VIR_DOMAIN_VIRT_MSHV,
+                                  NULL, NULL, 0, NULL);

     return g_steal_pointer(&caps);
 }
diff --git a/src/ch/ch_driver.c b/src/ch/ch_driver.c
index 96de5044ac..d6294c76ee 100644
--- a/src/ch/ch_driver.c
+++ b/src/ch/ch_driver.c
@@ -32,6 +32,7 @@
 #include "viraccessapicheck.h"
 #include "virchrdev.h"
 #include "virerror.h"
+#include "virfile.h"
 #include "virlog.h"
 #include "virobject.h"
 #include "virtypedparam.h"
@@ -876,6 +877,12 @@ static int chStateInitialize(bool privileged,
         return -1;
     }

+    if (!(virFileExists("/dev/kvm") || virFileExists("/dev/mshv"))) {
+        virReportError(VIR_ERR_DEVICE_MISSING, "%s",
+                       _("/dev/kvm and /dev/mshv. ch driver failed to initialize."));
+        return VIR_DRV_STATE_INIT_ERROR;
+    }
+
     ch_driver = g_new0(virCHDriver, 1);

     if (virMutexInit(&ch_driver->lock) < 0) {
diff --git a/src/ch/ch_process.c b/src/ch/ch_process.c
index f3bb4a7280..d9f943c50b 100644
--- a/src/ch/ch_process.c
+++ b/src/ch/ch_process.c
@@ -28,6 +28,7 @@
 #include "ch_process.h"
 #include "domain_cgroup.h"
 #include "virerror.h"
+#include "virfile.h"
 #include "virjson.h"
 #include "virlog.h"
@@ -448,6 +449,35 @@ virCHProcessSetupVcpus(virDomainObj *vm)
     return 0;
 }

+/**
+ * virCHProcessStartValidate:
+ * @vm: domain object
+ *
+ * Checks done before starting a VM.
+ *
+ * Returns 0 on success or -1 in case of error
+ */
+static int virCHProcessStartValidate(virDomainObj *vm)
+{
+    if (vm->def->virtType == VIR_DOMAIN_VIRT_KVM) {
+        VIR_DEBUG("Checking for KVM availability");
+        if (!virFileExists("/dev/kvm")) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules."));
+            return -1;
+        }
+    } else if (vm->def->virtType == VIR_DOMAIN_VIRT_MSHV) {
+        VIR_DEBUG("Checking for MSHV availability");
+        if (!virFileExists("/dev/mshv")) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                           _("Domain requires MSHV, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the mshv modules."));
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
 /**
  * virCHProcessStart:
  * @driver: pointer to driver structure
@@ -475,6 +505,10 @@ virCHProcessStart(virCHDriver *driver,
         return -1;
     }

+    if (virCHProcessStartValidate(vm) < 0) {
+        return -1;
+    }
+
     if (!priv->monitor) {
         /* And we can get the first monitor connection now too */
         if (!(priv->monitor = virCHProcessConnectMonitor(driver, vm))) {
--
2.43.0
participants (3):
- Daniel P. Berrangé
- Praveen K Paladugu
- Praveen Paladugu