[libvirt] [PATCHv2 00/16] Introduce hvf domain type for Hypervisor.framework

Hypervisor.framework provides a lightweight interface to run a virtual
CPU on macOS without the need to install third-party kernel extensions
(KEXTs). It's supported since macOS 10.10 on machines with an Intel
VT-x feature set that includes Extended Page Tables (EPT) and
Unrestricted Mode. QEMU supports Hypervisor.framework since 2.12. The
patch series adds an "hvf" domain type that uses Hypervisor.framework.

v1: https://www.redhat.com/archives/libvir-list/2018-October/msg01090.html

Changes since v1:
- [x] Fixed unconditional addition of KVM CPU models into the capabilities
      cache. That fixed a "make check" issue in qemucapabilitiestest on Linux.
- [x] Fixed missing brace in virQEMUCapsFormatCPUModels in PATCH 6
- [x] Squashed patch 12 into the first patch (second one in the patch series)
- [x] Added hvf domain definition to docs/formatdomain.html.in in the first
      patch (second in the patch series)
- [x] Removed redundant argument in virQEMUCapsProbeHVF (patch 3)
- [x] Added separate virQEMUCapsProbeHVF for non-Apple platforms (patch 3)
- [x] Added macOS support page
- [x] Marked HVF support for all working domain elements

I wasn't able to resolve the issues below, but I think they should go
into separate patches/patch series:
- [ ] To make qemucapabilitiestest work regardless of OS, accelerator
      probing should be done via a QMP command. So, there's a need to add a
      new generic command to QEMU, "query-accelerator accel=NAME".
- [ ] VIRT_TEST_PRELOAD doesn't work on macOS. There are a few reasons:
      * DYLD_INSERT_LIBRARIES should be used instead of LD_PRELOAD
      * The -module flag shouldn't be added to LDFLAGS in tests/Makefile.am.
        The flag instructs libtool to create bundles (MH_BUNDLE) instead of
        dynamic libraries (MH_DYLIB), and unlike dylibs they cannot be
        preloaded.
      * Either symbol interposing or flat namespaces should be used to
        perform overrides of the calls to the mocks. I've tried both but
        neither worked for me; I need to make a minimal example (a rough
        sketch of the interposing approach follows this mail).
      I haven't completed the investigation as it looks like a separate
      work item.
- [ ] Can't retrieve qemucapsprobe replies for macOS because
      qemucapsprobemock is not getting injected, because of the issue with
      VIRT_TEST_PRELOAD
- [ ] Can't add a case to tests/qemuxml2argvtest.c to illustrate the hvf
      example because qemucapsprobe doesn't work yet.

Roman Bolshakov (16):
  qemu: Add KVM CPUs into cache only if KVM is present
  conf: Add hvf domain type
  qemu: Define hvf capability
  qemu: Query hvf capability on macOS
  qemu: Expose hvf domain type if hvf is supported
  qemu: Rename kvmCPU to accelCPU
  qemu: Introduce virQEMUCapsTypeIsAccelerated
  qemu: Introduce virQEMUCapsHaveAccel
  qemu: Introduce virQEMUCapsToVirtType
  qemu: Introduce virQEMUCapsAccelStr
  qemu: Make error message accel-agnostic
  qemu: Correct CPU capabilities probing for hvf
  news: Mention hvf domain type
  docs: Add hvf on QEMU driver page
  docs: Note hvf support for domain elements
  docs: Add support page for libvirt on macOS

 docs/docs.html.in             |   3 +
 docs/drvqemu.html.in          |  49 +++++++-
 docs/formatdomain.html.in     | 141 ++++++++++++---------
 docs/index.html.in            |   4 +-
 docs/macos.html.in            | 229 ++++++++++++++++++++++++++++++++++
 docs/news.xml                 |  12 ++
 docs/schemas/domaincommon.rng |   1 +
 src/conf/domain_conf.c        |   4 +-
 src/conf/domain_conf.h        |   1 +
 src/qemu/qemu_capabilities.c  | 201 +++++++++++++++++++++--------
 src/qemu/qemu_capabilities.h  |   1 +
 src/qemu/qemu_command.c       |   4 +
 12 files changed, 534 insertions(+), 116 deletions(-)
 create mode 100644 docs/macos.html.in

-- 
2.19.1
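As an aside on the symbol-interposing item above (not part of the series,
untested): on macOS, dyld rewrites calls through (replacement, replacee)
pairs placed in the __DATA,__interpose section of a library injected via
DYLD_INSERT_LIBRARIES. The library name libmockstat.dylib and the mocked
stat() call below are hypothetical stand-ins, not libvirt's actual mocks;
on x86_64 the real symbol may resolve to a $INODE64 variant, so this only
sketches the mechanism:

    /* mock_stat.c -- hypothetical example, not libvirt code.
     * Build: clang -dynamiclib -o libmockstat.dylib mock_stat.c
     * Use:   DYLD_INSERT_LIBRARIES=./libmockstat.dylib ./some_test
     */
    #include <stdio.h>
    #include <sys/stat.h>

    static int
    mock_stat(const char *path, struct stat *sb)
    {
        fprintf(stderr, "mock stat(%s)\n", path);
        /* calls made from the interposing dylib itself are not rewritten,
         * so this reaches the real stat() */
        return stat(path, sb);
    }

    /* dyld reads (replacement, replacee) pairs from __DATA,__interpose in
     * injected libraries and redirects calls in the other loaded images */
    __attribute__((used)) static struct {
        const void *replacement;
        const void *replacee;
    } interposers[] __attribute__((section("__DATA,__interpose"))) = {
        { (const void *)mock_stat, (const void *)stat },
    };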

From: Roman Bolshakov <roolebo@gmail.com>

virQEMUCapsFormatCache/virQEMUCapsLoadCache add/read KVM CPUs to/from
the capabilities cache regardless of QEMU_CAPS_KVM. That can cause
undesired side effects when KVM CPUs are present in the cache on a
platform that doesn't support them, e.g. macOS or Linux without KVM
support.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Roman Bolshakov <roolebo@gmail.com>
---
 src/qemu/qemu_capabilities.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index fde27010e4..4ba8369e3a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch,
     }
     VIR_FREE(str);
 
-    if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
+    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
+         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
         goto cleanup;
 
-    if (virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
+    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
+         virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
         virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
         goto cleanup;
 
@@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch,
     if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0)
         goto cleanup;
 
-    virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
     virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
 
     ret = 0;
@@ -3766,10 +3769,12 @@ virQEMUCapsFormatCache(virQEMUCapsPtr qemuCaps)
     virBufferAsprintf(&buf, "<arch>%s</arch>\n",
                       virArchToString(qemuCaps->arch));
 
-    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM);
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM);
     virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);
 
-    virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM);
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM);
     virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);
 
     for (i = 0; i < qemuCaps->nmachineTypes; i++) {
@@ -4566,7 +4571,8 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch,
     qemuCaps->libvirtCtime = virGetSelfLastChanged();
     qemuCaps->libvirtVersion = LIBVIR_VERSION_NUMBER;
 
-    virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
     virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
 
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
-- 
2.19.1

On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
From: Roman Bolshakov <roolebo@gmail.com>
virQEMUCapsFormatCache/virQEMUCapsLoadCache add/read KVM CPUs to/from the capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired side effects when KVM CPUs are present in the cache on a platform that doesn't support them, e.g. macOS or Linux without KVM support.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Roman Bolshakov <roolebo@gmail.com>
This doesn't look like a patch written by Daniel so why did you include the Signed-off-by line? Or did I miss anything?
--- src/qemu/qemu_capabilities.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index fde27010e4..4ba8369e3a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch,
     }
     VIR_FREE(str);
 
-    if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
+    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
+         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
         goto cleanup;
I don't think we should introduce these guards in all the places. All the loading and formatting functions should return success if the appropriate info is not available, so you should just make sure the relevant info is NULL in qemuCaps.
-    if (virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
+    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
+         virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
         virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
         goto cleanup;
@@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch,
     if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0)
         goto cleanup;
 
-    virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
Please follow our coding style, i.e., indent by 4 spaces.
virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
ret = 0;
... Jirka

On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
From: Roman Bolshakov <roolebo@gmail.com>
virQEMUCapsFormatCache/virQEMUCapsLoadCache add/read KVM CPUs to/from the capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired side effects when KVM CPUs are present in the cache on a platform that doesn't support them, e.g. macOS or Linux without KVM support.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Roman Bolshakov <roolebo@gmail.com>
This doesn't look like a patch written by Daniel so why did you include the Signed-off-by line? Or did I miss anything?
Daniel kindly helped to root-cause an issue I had with qemucapabilitiestest in v1:
https://www.redhat.com/archives/libvir-list/2018-November/msg00740.html
and provided a diff that resolves the issue:
https://www.redhat.com/archives/libvir-list/2018-November/msg00767.html

Should I remove his Signed-off-by tag?
--- src/qemu/qemu_capabilities.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index fde27010e4..4ba8369e3a 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch, } VIR_FREE(str);
- if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || + if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup;
I don't think we should introduce these guards in all the places. All the loading and formatting functions should return success if the appropriate info is not available, so you should just make sure the relevant info is NULL in qemuCaps.
Do you mean the capabilities checks should be moved inside the functions? Either way, they're needed to avoid loading KVM CPUs into the QEMU capabilities cache on hosts without KVM support.
@@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch, if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0) goto cleanup;
- virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
Please follow our coding style, i.e., indent by 4 spaces.
virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
ret = 0;
...
Will do, thank you for catching this!

Best regards,
Roman

On Wed, Nov 21, 2018 at 20:50:50 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
From: Roman Bolshakov <roolebo@gmail.com>
virQEMUCapsFormatCache/virQEMUCapsLoadCache add/read KVM CPUs to/from the capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired side effects when KVM CPUs are present in the cache on a platform that doesn't support them, e.g. macOS or Linux without KVM support.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Roman Bolshakov <roolebo@gmail.com>
This doesn't look like a patch written by Daniel so why did you include the Signed-off-by line? Or did I miss anything?
Daniel kindly helped to root cause an issue I had with qemucapabilitiestest in v1: https://www.redhat.com/archives/libvir-list/2018-November/msg00740.html
and provided a diff that resolves the issue: https://www.redhat.com/archives/libvir-list/2018-November/msg00767.html
I see, I missed the diff.
Should I remove his Signed-off-by tag?
Dunno, I guess it's up to Daniel. But if the final patch is going to look very different anyway, I don't see a reason to keep the tag.
--- src/qemu/qemu_capabilities.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index fde27010e4..4ba8369e3a 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch, } VIR_FREE(str);
- if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || + if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup;
I don't think we should introduce these guards in all the places. All the loading and formatting functions should return success if the appropriate info is not available, so you should just make sure the relevant info is NULL in qemuCaps.
Do you mean the capabilities checks should be moved inside the functions?
virQEMUCapsLoadHostCPUModelInfo does (not literally, but effectively)

    hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt);
    if (!hostCPUNode)
        return 0;

virQEMUCapsLoadCPUModels does

    n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, &nodes);
    if (n == 0)
        return 0;

virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it.

virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;

So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.

Jirka

On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 20:50:50 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index fde27010e4..4ba8369e3a 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch, } VIR_FREE(str);
- if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || + if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup;
I don't think we should introduce these guards in all the places. All the loading and formatting functions should return success if the appropriate info is not available, so you should just make sure the relevant info is NULL in qemuCaps.
Do you mean the capabilities checks should be moved inside the functions?
virQEMUCapsLoadHostCPUModelInfo does (not literally, but effectively)
    hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt);
    if (!hostCPUNode)
        return 0;

virQEMUCapsLoadCPUModels does

    n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, &nodes);
    if (n == 0)
        return 0;
I agree, virQEMUCapsLoadHostCPUModelInfo and virQEMUCapsLoadCPUModels don't need the check.
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6. So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;

Without the check we'd return CPUs for a KVM domain on a platform that doesn't support it.

Thank you,
Roman

On Fri, Nov 23, 2018 at 17:16:12 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 20:50:50 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index fde27010e4..4ba8369e3a 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch, } VIR_FREE(str);
- if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || + if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup;
I don't think we should introduce these guards in all the places. All the loading and formatting functions should return success if the appropriate info is not available, so you should just make sure the relevant info is NULL in qemuCaps.
Do you mean the capabilities checks should be moved inside the functions?
virQEMUCapsLoadHostCPUModelInfo does (not literally, but effectively)
    hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt);
    if (!hostCPUNode)
        return 0;

virQEMUCapsLoadCPUModels does

    n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, &nodes);
    if (n == 0)
        return 0;
I agree, virQEMUCapsLoadHostCPUModelInfo and virQEMUCapsLoadCPUModels don't need the check.
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6.
And how does different name change the behavior?
So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;
Without the check we'd return CPUs for KVM domain on the platform that doesn't support it.
It won't return anything because the code will make sure accelCPUModels and accelCPU will be NULL when no accel method is supported.

Jirka

On Fri, Nov 23, 2018 at 04:30:13PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 17:16:12 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6.
And how does different name change the behavior?
So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;
Without the check we'd return CPUs for KVM domain on the platform that doesn't support it.
It won't return anything because the code will make sure accelCPUModels and accelCPU will be NULL when no accel method is supported.
But accelCPU is not NULL on macOS with QEMU_CAPS_HVF and on Linux with QEMU_CAPS_KVM. That's where the problem arises. We're going to get additional kvm CPUs on macOS and hvf CPUs on Linux, and that will break qemucapabilitiestest. In fact, they will be the same accelCPU data of the supported accelerator, but with the hostCPU type attribute of the other accelerator.

If you wish, I can try to rework the patchset. Instead of generalizing kvmCPU, I'd just add hvfCPU to qemuCaps. It might have a good side effect: libvirt would be able to support multiple accelerators on the same platform.

-- 
Roman

On Fri, Nov 23, 2018 at 18:55:00 +0300, Roman Bolshakov wrote:
On Fri, Nov 23, 2018 at 04:30:13PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 17:16:12 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6.
And how does different name change the behavior?
So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;
Without the check we'd return CPUs for KVM domain on the platform that doesn't support it.
It won't return anything because the code will make sure accelCPUModels and accelCPU will be NULL when no accel method is supported.
But accelCPU is not NULL on macOS with QEMU_CAPS_HVF and on Linux with QEMU_CAPS_KVM. That's where the problem arises.
Right, and that's what I think should be changed, rather than adding checks to the formatting and loading code to ignore something which shouldn't be present in the first place.
We're going to get additional kvm CPUs on mac and hvf CPUs on Linux and that will break qemucapabilitiestest.
I think I'm missing something here. There's only one CPU definition describing the host CPU. There are hosts which have several different CPUs, but libvirt is not really prepared to see that and I believe this is not what you're addressing with this series, is it? Or are you talking about some other CPUs?
In fact they will be the same accelCPUs of the supported accelerator but with hostCPU's type attribute of the other accelerator.
How would this happen? We have a single accelerator enabled on a host and we generate a host CPU model for it (and just for it, there's no reason to generate a CPU model for something that is not supported on the host).
If you wish I can try to rework the patchset. Instead of generalizing kvmCPU, I'd just add hvfCPU to qemuCaps. It might have a good side effect that libvirt will be able to support multiple accelerators on the same platform.
I think we can leave this for the future :-) Jirka

On Fri, Nov 23, 2018 at 06:16:46PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 18:55:00 +0300, Roman Bolshakov wrote:
On Fri, Nov 23, 2018 at 04:30:13PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 17:16:12 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6.
And how does different name change the behavior?
So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;
Without the check we'd return CPUs for KVM domain on the platform that doesn't support it.
It won't return anything because the code will make sure accelCPUModels and accelCPU will be NULL when no accel method is supported.
But accelCPU is not NULL on macOS with QEMU_CAPS_HVF and on Linux with QEMU_CAPS_KVM. That's where the problem arises.
Right, and that's what I think should be changed, rather than adding checks to the formatting and loading code to ignore something which shouldn't be present in the first place.
We're going to get additional kvm CPUs on mac and hvf CPUs on Linux and that will break qemucapabilitiestest.
I think I'm missing something here. There's only one CPU definition describing the host CPU. There are hosts which have several different CPUs, but libvirt is not really prepared to see that and I believe this is not what you're addressing with this series, is it? Or are you talking about some other CPUs?
In fact they will be the same accelCPUs of the supported accelerator but with hostCPU's type attribute of the other accelerator.
How would this happen? We have a single accelerator enabled on a host and we generate a host CPU model for it (and just for it, there's no reason to generate a CPU model for something that is not supported on the host).
accelCPU will be present on a host where an accelerator is available. You said we can't have host CPU definitions present twice. I agree with that. But if we call virQEMUCapsFormatCPUModels twice, for VIR_DOMAIN_VIRT_KVM and VIR_DOMAIN_VIRT_HVF, without the checks, host CPU definitions will be present twice, once for each accelerator, because accelCPU is not NULL.

So we need to call it only once, for the supported accelerator. The checks help with that. An alternative approach that does only one call is:

    virDomainVirtType acceleratedDomain = VIR_DOMAIN_VIRT_KVM;
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF))
        acceleratedDomain = VIR_DOMAIN_VIRT_HVF;

    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);

    virQEMUCapsFormatCPUModels(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);

Would that work for you?

-- 
Thank you,
Roman

On Fri, Nov 23, 2018 at 09:46:36PM +0300, Roman Bolshakov wrote:
On Fri, Nov 23, 2018 at 06:16:46PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 18:55:00 +0300, Roman Bolshakov wrote:
On Fri, Nov 23, 2018 at 04:30:13PM +0100, Jiri Denemark wrote:
On Fri, Nov 23, 2018 at 17:16:12 +0300, Roman Bolshakov wrote:
On Wed, Nov 21, 2018 at 07:43:43PM +0100, Jiri Denemark wrote:
virQEMUCapsInitHostCPUModel always fills in something and your check should probably remain in place for it
virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = &qemuCaps->kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;
So to me it looks like all functions are ready to see NULL pointers and just do nothing if that's the case. Thus the only thing this patch should need to do is to make sure virQEMUCapsInitHostCPUModel does not set something non-NULL there.
Unfortunately, that won't work for the patch series. kvmCPUModels is renamed to accelCPUModels and kvmCPU is renamed to accelCPU in PATCH 6.
And how does different name change the behavior?
So, virQEMUCapsFormatHostCPUModelInfo looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpuData = qemuCaps->accelCPU;
    else
        cpuData = qemuCaps->tcgCPU;

and virQEMUCapsFormatCPUModels looks like:

    if (virQEMUCapsTypeIsAccelerated(type))
        cpus = qemuCaps->accelCPUModels;
    else
        cpus = qemuCaps->tcgCPUModels;
Without the check we'd return CPUs for KVM domain on the platform that doesn't support it.
It won't return anything because the code will make sure accelCPUModels and accelCPU will be NULL when no accel method is supported.
But accelCPU is not NULL on macOS with QEMU_CAPS_HVF and on Linux with QEMU_CAPS_KVM. That's where the problem arises.
Right, and that's what I think should be changed, rather than adding checks to the formatting and loading code to ignore something which shouldn't be present in the first place.
We're going to get additional kvm CPUs on mac and hvf CPUs on Linux and that will break qemucapabilitiestest.
I think I'm missing something here. There's only one CPU definition describing the host CPU. There are hosts which have several different CPUs, but libvirt is not really prepared to see that and I believe this is not what you're addressing with this series, is it? Or are you talking about some other CPUs?
In fact they will be the same accelCPUs of the supported accelerator but with hostCPU's type attribute of the other accelerator.
How would this happen? We have a single accelerator enabled on a host and we generate a host CPU model for it (and just for it, there's no reason to generate a CPU model for something that is not supported on the host).
accelCPU will be present on a host where an accelerator is available. You said we can't have host CPU definitions present twice. I agree with that. But if we call virQEMUCapsFormatCPUModels twice for VIR_DOMAIN_VIRT_KVM and VIR_DOMAIN_VIRT_HVF without the checks, host CPU definitions will be present twice, once for each accelerator, because accelCPU is not NULL.
So we need to call it only once for the supported accelerator. The checks help in that. Alternative approach to do only one call is:
    virDomainVirtType acceleratedDomain = VIR_DOMAIN_VIRT_KVM;
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF))
        acceleratedDomain = VIR_DOMAIN_VIRT_HVF;

    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);

    virQEMUCapsFormatCPUModels(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);
Would that work for you?
-- 
Thank you,
Roman
Hi Jiri,

This is a kind reminder.

Thank you,
Roman

First, I'd like to apologize for such a late reply. ...
We're going to get additional kvm CPUs on mac and hvf CPUs on Linux and that will break qemucapabilitiestest.
I think I'm missing something here. There's only one CPU definition describing the host CPU. There are hosts which have several different CPUs, but libvirt is not really prepared to see that and I believe this is not what you're addressing with this series, is it? Or are you talking about some other CPUs?
In fact they will be the same accelCPUs of the supported accelerator but with hostCPU's type attribute of the other accelerator.
How would this happen? We have a single accelerator enabled on a host and we generate a host CPU model for it (and just for it, there's no reason to generate a CPU model for something that is not supported on the host).
accelCPU will be present on a host where an accelerator is available. You said we can't have host CPU definitions present twice. I agree with that. But if we call virQEMUCapsFormatCPUModels twice for VIR_DOMAIN_VIRT_KVM and VIR_DOMAIN_VIRT_HVF without the checks, host CPU definitions will be present twice, once for each accelerator, because accelCPU is not NULL.
I see. I was thinking about several options. In general I think we should keep the loader and formatter code as simple and stupid as possible; we should not wire any fancy logic into them. Thus we have two options.

First option:
So we need to call it only once for the supported accelerator. The checks help in that. Alternative approach to do only one call is:
    virDomainVirtType acceleratedDomain = VIR_DOMAIN_VIRT_KVM;
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF))
        acceleratedDomain = VIR_DOMAIN_VIRT_HVF;

    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);

    virQEMUCapsFormatCPUModels(qemuCaps, &buf, acceleratedDomain);
    virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU);
Or even using a dedicated enum instead of virDomainVirtType for accessing the CPU data. Either way, the callers would need to contain the extra code to request the data they want.

Second option: store CPU data separately for each virtType (qemu, kvm, hvf), which would effectively add support for multiple accelerators on a single host. While we don't currently need to support multiple accelerators, I think this solution would be the cleanest one. It would be pretty clear what data are stored in the cache and what the callers want, without having to copy-paste special handling for individual accelerators. The formatting/loading code would just be called for all three virtTypes unconditionally.

Thanks for your patience.

Jirka
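To make the second option concrete, here is a small self-contained toy
sketch (not actual libvirt code; all type and function names are invented)
of keeping the CPU model data in one slot per accelerator and calling the
formatter unconditionally for every slot, with NULL slots producing no
output:

    #include <stdio.h>

    /* toy stand-ins for the real libvirt types */
    typedef enum { ACCEL_TCG, ACCEL_KVM, ACCEL_HVF, ACCEL_LAST } accelType;
    static const char *accelStr[ACCEL_LAST] = { "tcg", "kvm", "hvf" };

    typedef struct {
        const char *models;          /* stand-in for virDomainCapsCPUModels */
    } cpuModels;

    typedef struct {
        cpuModels *cpus[ACCEL_LAST]; /* one slot per accelerator */
    } caps;

    /* the formatter stays dumb: a NULL slot simply produces no output */
    static void
    formatCPUModels(const caps *c, accelType t)
    {
        if (!c->cpus[t])
            return;
        printf("<cpu type='%s'>%s</cpu>\n", accelStr[t], c->cpus[t]->models);
    }

    int main(void)
    {
        cpuModels hvf = { "host,max" };
        caps c = { .cpus = { [ACCEL_HVF] = &hvf } };  /* only hvf was probed */

        /* called unconditionally for every accelerator; kvm/tcg are NULL */
        for (int t = 0; t < ACCEL_LAST; t++)
            formatCPUModels(&c, (accelType)t);
        return 0;
    }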

QEMU supports Hypervisor.framework since 2.12 as the hvf accel.
Hypervisor.framework provides a lightweight interface to run a virtual
CPU on macOS without the need to install third-party kernel extensions
(KEXTs). It's supported since macOS 10.10 on machines with an Intel
VT-x feature set that includes Extended Page Tables (EPT) and
Unrestricted Mode.

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 docs/formatdomain.html.in     | 8 ++++----
 docs/schemas/domaincommon.rng | 1 +
 src/conf/domain_conf.c        | 4 +++-
 src/conf/domain_conf.h        | 1 +
 src/qemu/qemu_command.c       | 4 ++++
 5 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 2af4960981..25dd4bbbd6 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -22,10 +22,10 @@
         <a id="attributeDomainType"><code>type</code></a>
         specifies the hypervisor used for running
         the domain. The allowed values are driver specific, but
-        include "xen", "kvm", "qemu", "lxc" and "kqemu". The
-        second attribute is <code>id</code> which is a unique
-        integer identifier for the running guest machine. Inactive
-        machines have no id value.
+        include "xen", "kvm", "hvf" (<span class="since">since 4.10.0 and QEMU
+        2.12</span>), "qemu", "lxc" and "kqemu". The second attribute is
+        <code>id</code> which is a unique integer identifier for the running
+        guest machine. Inactive machines have no id value.
       </p>
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 5ee727eefa..596e347eda 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -213,6 +213,7 @@
         <value>phyp</value>
         <value>vz</value>
         <value>bhyve</value>
+        <value>hvf</value>
       </choice>
     </attribute>
   </define>
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 13874837c2..369d4bd634 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -119,7 +119,8 @@ VIR_ENUM_IMPL(virDomainVirt, VIR_DOMAIN_VIRT_LAST,
               "phyp",
               "parallels",
               "bhyve",
-              "vz")
+              "vz",
+              "hvf")
 
 VIR_ENUM_IMPL(virDomainOS, VIR_DOMAIN_OSTYPE_LAST,
               "hvm",
@@ -15024,6 +15025,7 @@ virDomainVideoDefaultType(const virDomainDef *def)
     case VIR_DOMAIN_VIRT_HYPERV:
     case VIR_DOMAIN_VIRT_PHYP:
     case VIR_DOMAIN_VIRT_NONE:
+    case VIR_DOMAIN_VIRT_HVF:
     case VIR_DOMAIN_VIRT_LAST:
     default:
         return VIR_DOMAIN_VIDEO_TYPE_DEFAULT;
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 467785cd83..65f00692b7 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -245,6 +245,7 @@ typedef enum {
     VIR_DOMAIN_VIRT_PARALLELS,
     VIR_DOMAIN_VIRT_BHYVE,
     VIR_DOMAIN_VIRT_VZ,
+    VIR_DOMAIN_VIRT_HVF,
 
     VIR_DOMAIN_VIRT_LAST
 } virDomainVirtType;
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 23a6661c10..0fb796e15c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7251,6 +7251,10 @@ qemuBuildMachineCommandLine(virCommandPtr cmd,
             virBufferAddLit(&buf, ",accel=kvm");
             break;
 
+        case VIR_DOMAIN_VIRT_HVF:
+            virBufferAddLit(&buf, ",accel=hvf");
+            break;
+
         case VIR_DOMAIN_VIRT_KQEMU:
         case VIR_DOMAIN_VIRT_XEN:
        case VIR_DOMAIN_VIRT_LXC:
-- 
2.19.1

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 src/qemu/qemu_capabilities.c | 1 +
 src/qemu/qemu_capabilities.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 4ba8369e3a..0bbda80782 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -515,6 +515,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST,
               /* 320 */
               "memory-backend-memfd.hugetlb",
               "iothread.poll-max-ns",
+              "hvf",
     );
 
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index c2caaf6fe1..7d08e8d243 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -499,6 +499,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */
     /* 320 */
     QEMU_CAPS_OBJECT_MEMORY_MEMFD_HUGETLB, /* -object memory-backend-memfd.hugetlb */
     QEMU_CAPS_IOTHREAD_POLLING, /* -object iothread.poll-max-ns */
+    QEMU_CAPS_HVF, /* Whether Hypervisor.framework is available */
 
     QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
-- 
2.19.1

There's no QMP command for querying if hvf is supported, so we use the
sysctl interface, which tells whether Hypervisor.framework is available
on the host.

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 src/qemu/qemu_capabilities.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 0bbda80782..5ebe3f1afe 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -54,6 +54,10 @@
 #include <sys/wait.h>
 #include <stdarg.h>
 #include <sys/utsname.h>
+#ifdef __APPLE__
+# include <sys/types.h>
+# include <sys/sysctl.h>
+#endif
 
 #define VIR_FROM_THIS VIR_FROM_QEMU
 
@@ -2599,6 +2603,33 @@ virQEMUCapsProbeQMPKVMState(virQEMUCapsPtr qemuCaps,
     return 0;
 }
 
+#ifdef __APPLE__
+static int
+virQEMUCapsProbeHVF(virQEMUCapsPtr qemuCaps)
+{
+    int hv_support;
+    size_t len = sizeof(hv_support);
+    if (sysctlbyname("kern.hv_support", &hv_support, &len, NULL, 0))
+        hv_support = 0;
+
+    if (qemuCaps->version >= 2012000 &&
+        ARCH_IS_X86(qemuCaps->arch) &&
+        hv_support) {
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_HVF);
+    }
+
+    return 0;
+}
+#else
+static int
+virQEMUCapsProbeHVF(virQEMUCapsPtr qemuCaps)
+{
+    (void) qemuCaps;
+
+    return 0;
+}
+#endif
+
 struct virQEMUCapsCommandLineProps {
     const char *option;
     const char *param;
@@ -4150,6 +4181,9 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
     if (virQEMUCapsProbeQMPKVMState(qemuCaps, mon) < 0)
         goto cleanup;
 
+    if (virQEMUCapsProbeHVF(qemuCaps) < 0)
+        goto cleanup;
+
     if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0)
         goto cleanup;
     if (virQEMUCapsProbeQMPDevices(qemuCaps, mon) < 0)
-- 
2.19.1
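As a side note (not part of the patch), the same sysctl can be checked
outside libvirt; a small standalone program, assumed to be built and run
on macOS, that mirrors the probe above and is handy for verifying a host:

    /* hv_check.c -- standalone macOS check mirroring the probe above.
     * Build: clang -o hv_check hv_check.c
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int hv_support = 0;
        size_t len = sizeof(hv_support);

        /* if the sysctl is missing (older macOS), treat HVF as unavailable */
        if (sysctlbyname("kern.hv_support", &hv_support, &len, NULL, 0) != 0)
            hv_support = 0;

        printf("Hypervisor.framework is %savailable\n", hv_support ? "" : "not ");
        return 0;
    }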

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
---
 src/qemu/qemu_capabilities.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 5ebe3f1afe..645ce2c89e 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -859,6 +859,17 @@ virQEMUCapsInitGuestFromBinary(virCapsPtr caps,
         }
     }
 
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) {
+        if (virCapabilitiesAddGuestDomain(guest,
+                                          VIR_DOMAIN_VIRT_HVF,
+                                          NULL,
+                                          NULL,
+                                          0,
+                                          NULL) == NULL) {
+            goto cleanup;
+        }
+    }
+
     if ((ARCH_IS_X86(guestarch) || guestarch == VIR_ARCH_AARCH64) &&
         virCapabilitiesAddGuestFeature(guest, "acpi", true, true) == NULL) {
         goto cleanup;
-- 
2.19.1

QEMU supports a number of accelerators. It'd be good to have more generic name for kvmCPUModels and kvmCPU. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- src/qemu/qemu_capabilities.c | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 645ce2c89e..ad15d2853e 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -578,7 +578,7 @@ struct _virQEMUCaps { virArch arch; - virDomainCapsCPUModelsPtr kvmCPUModels; + virDomainCapsCPUModelsPtr accelCPUModels; virDomainCapsCPUModelsPtr tcgCPUModels; size_t nmachineTypes; @@ -589,7 +589,7 @@ struct _virQEMUCaps { virSEVCapability *sevCapabilities; - virQEMUCapsHostCPUData kvmCPU; + virQEMUCapsHostCPUData accelCPU; virQEMUCapsHostCPUData tcgCPU; }; @@ -1564,9 +1564,9 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) ret->arch = qemuCaps->arch; - if (qemuCaps->kvmCPUModels) { - ret->kvmCPUModels = virDomainCapsCPUModelsCopy(qemuCaps->kvmCPUModels); - if (!ret->kvmCPUModels) + if (qemuCaps->accelCPUModels) { + ret->accelCPUModels = virDomainCapsCPUModelsCopy(qemuCaps->accelCPUModels); + if (!ret->accelCPUModels) goto error; } @@ -1576,7 +1576,7 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) goto error; } - if (virQEMUCapsHostCPUDataCopy(&ret->kvmCPU, &qemuCaps->kvmCPU) < 0 || + if (virQEMUCapsHostCPUDataCopy(&ret->accelCPU, &qemuCaps->accelCPU) < 0 || virQEMUCapsHostCPUDataCopy(&ret->tcgCPU, &qemuCaps->tcgCPU) < 0) goto error; @@ -1623,7 +1623,7 @@ void virQEMUCapsDispose(void *obj) } VIR_FREE(qemuCaps->machineTypes); - virObjectUnref(qemuCaps->kvmCPUModels); + virObjectUnref(qemuCaps->accelCPUModels); virObjectUnref(qemuCaps->tcgCPUModels); virBitmapFree(qemuCaps->flags); @@ -1636,7 +1636,7 @@ void virQEMUCapsDispose(void *obj) virSEVCapabilitiesFree(qemuCaps->sevCapabilities); - virQEMUCapsHostCPUDataClear(&qemuCaps->kvmCPU); + virQEMUCapsHostCPUDataClear(&qemuCaps->accelCPU); virQEMUCapsHostCPUDataClear(&qemuCaps->tcgCPU); } @@ -1794,8 +1794,8 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, size_t i; virDomainCapsCPUModelsPtr cpus = NULL; - if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->kvmCPUModels) - cpus = qemuCaps->kvmCPUModels; + if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->accelCPUModels) + cpus = qemuCaps->accelCPUModels; else if (type == VIR_DOMAIN_VIRT_QEMU && qemuCaps->tcgCPUModels) cpus = qemuCaps->tcgCPUModels; @@ -1804,7 +1804,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, return -1; if (type == VIR_DOMAIN_VIRT_KVM) - qemuCaps->kvmCPUModels = cpus; + qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; } @@ -1823,7 +1823,7 @@ virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) - return qemuCaps->kvmCPUModels; + return qemuCaps->accelCPUModels; else return qemuCaps->tcgCPUModels; } @@ -1834,7 +1834,7 @@ virQEMUCapsGetHostCPUData(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) - return &qemuCaps->kvmCPU; + return &qemuCaps->accelCPU; else return &qemuCaps->tcgCPU; } @@ -1898,7 +1898,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, case VIR_CPU_MODE_CUSTOM: if (type == VIR_DOMAIN_VIRT_KVM) - cpus = qemuCaps->kvmCPUModels; + cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; return cpus && cpus->nmodels > 0; @@ -2385,7 +2385,7 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr qemuCaps, if (tcg || 
!virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) qemuCaps->tcgCPUModels = models; else - qemuCaps->kvmCPUModels = models; + qemuCaps->accelCPUModels = models; return 0; } @@ -3232,7 +3232,7 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, goto cleanup; if (type == VIR_DOMAIN_VIRT_KVM) - qemuCaps->kvmCPUModels = cpus; + qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -3710,7 +3710,7 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps, if (type == VIR_DOMAIN_VIRT_KVM) { typeStr = "kvm"; - cpus = qemuCaps->kvmCPUModels; + cpus = qemuCaps->accelCPUModels; } else { typeStr = "tcg"; cpus = qemuCaps->tcgCPUModels; @@ -5107,7 +5107,7 @@ virQEMUCapsFillDomainCPUCaps(virCapsPtr caps, virDomainCapsCPUModelsPtr cpus; if (domCaps->virttype == VIR_DOMAIN_VIRT_KVM) - cpus = qemuCaps->kvmCPUModels; + cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; -- 2.19.1

It replaces hardcoded checks that select accelCPU/accelCPUModels (formerly known as kvmCPU/kvmCPUModels) for KVM. It'll be cleaner to use the function when multiple accelerators are supported in qemu driver. Explicit KVM domain checks should be done only when a feature is available only for KVM. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- src/qemu/qemu_capabilities.c | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index ad15d2853e..e302fbb48f 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -637,6 +637,11 @@ static const char *virQEMUCapsArchToString(virArch arch) return virArchToString(arch); } +static bool +virQEMUCapsTypeIsAccelerated(virDomainVirtType type) +{ + return type == VIR_DOMAIN_VIRT_KVM; +} /* Checks whether a domain with @guest arch can run natively on @host. */ @@ -1794,7 +1799,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, size_t i; virDomainCapsCPUModelsPtr cpus = NULL; - if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->accelCPUModels) + if (virQEMUCapsTypeIsAccelerated(type) && qemuCaps->accelCPUModels) cpus = qemuCaps->accelCPUModels; else if (type == VIR_DOMAIN_VIRT_QEMU && qemuCaps->tcgCPUModels) cpus = qemuCaps->tcgCPUModels; @@ -1803,7 +1808,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, if (!(cpus = virDomainCapsCPUModelsNew(count))) return -1; - if (type == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(type)) qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -1822,7 +1827,7 @@ virDomainCapsCPUModelsPtr virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { - if (type == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(type)) return qemuCaps->accelCPUModels; else return qemuCaps->tcgCPUModels; @@ -1833,7 +1838,7 @@ static virQEMUCapsHostCPUDataPtr virQEMUCapsGetHostCPUData(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { - if (type == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(type)) return &qemuCaps->accelCPU; else return &qemuCaps->tcgCPU; @@ -1889,7 +1894,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, switch (mode) { case VIR_CPU_MODE_HOST_PASSTHROUGH: - return type == VIR_DOMAIN_VIRT_KVM && + return virQEMUCapsTypeIsAccelerated(type) && virQEMUCapsGuestIsNative(caps->host.arch, qemuCaps->arch); case VIR_CPU_MODE_HOST_MODEL: @@ -1897,7 +1902,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, VIR_QEMU_CAPS_HOST_CPU_REPORTED); case VIR_CPU_MODE_CUSTOM: - if (type == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(type)) cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; @@ -3004,7 +3009,7 @@ virQEMUCapsInitHostCPUModel(virQEMUCapsPtr qemuCaps, virArchToString(qemuCaps->arch), virDomainVirtTypeToString(type)); goto error; - } else if (type == VIR_DOMAIN_VIRT_KVM && + } else if (virQEMUCapsTypeIsAccelerated(type) && virCPUGetHostIsSupported(qemuCaps->arch)) { if (!(fullCPU = virCPUGetHost(qemuCaps->arch, VIR_CPU_TYPE_GUEST, NULL, NULL))) @@ -3231,7 +3236,7 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, if (!(cpus = virDomainCapsCPUModelsNew(n))) goto cleanup; - if (type == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(type)) qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -3708,7 +3713,7 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps, const char *typeStr; size_t i; - if (type == VIR_DOMAIN_VIRT_KVM) { + if 
(virQEMUCapsTypeIsAccelerated(type)) { typeStr = "kvm"; cpus = qemuCaps->accelCPUModels; } else { @@ -4966,7 +4971,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache, if (virttype == VIR_DOMAIN_VIRT_NONE) virttype = capsType; - if (virttype == VIR_DOMAIN_VIRT_KVM && capsType == VIR_DOMAIN_VIRT_QEMU) { + if (virQEMUCapsTypeIsAccelerated(virttype) && + !virQEMUCapsTypeIsAccelerated(capsType)) { virReportError(VIR_ERR_INVALID_ARG, _("KVM is not supported by '%s' on this host"), binary); @@ -5106,7 +5112,7 @@ virQEMUCapsFillDomainCPUCaps(virCapsPtr caps, if (virCPUGetModels(domCaps->arch, &models) >= 0) { virDomainCapsCPUModelsPtr cpus; - if (domCaps->virttype == VIR_DOMAIN_VIRT_KVM) + if (virQEMUCapsTypeIsAccelerated(domCaps->virttype)) cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; -- 2.19.1

On Wednesday, 21 November 2018 15:01:50 CET Roman Bolshakov wrote:
> +static bool
> +virQEMUCapsTypeIsAccelerated(virDomainVirtType type)
> +{
> +    return type == VIR_DOMAIN_VIRT_KVM;
> +}
> [...]
> @@ -4966,7 +4971,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache,
>      if (virttype == VIR_DOMAIN_VIRT_NONE)
>          virttype = capsType;
> 
> -    if (virttype == VIR_DOMAIN_VIRT_KVM && capsType == VIR_DOMAIN_VIRT_QEMU) {
> +    if (virQEMUCapsTypeIsAccelerated(virttype) &&
> +        !virQEMUCapsTypeIsAccelerated(capsType)) {
>          virReportError(VIR_ERR_INVALID_ARG,
>                         _("KVM is not supported by '%s' on this host"),
>                         binary);

From what I see, this check is now different:
- "capsType == VIR_DOMAIN_VIRT_QEMU" will be true only when capsType is
  VIR_DOMAIN_VIRT_QEMU
- !virQEMUCapsTypeIsAccelerated(capsType) will be true when capsType is
  not VIR_DOMAIN_VIRT_KVM

-- 
Pino Toscano

On Fri, Nov 23, 2018 at 03:27:50PM +0100, Pino Toscano wrote:
On Wednesday, 21 November 2018 15:01:50 CET Roman Bolshakov wrote:
+static bool
+virQEMUCapsTypeIsAccelerated(virDomainVirtType type)
+{
+    return type == VIR_DOMAIN_VIRT_KVM;
+}
[...]
@@ -4966,7 +4971,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache,
     if (virttype == VIR_DOMAIN_VIRT_NONE)
         virttype = capsType;
 
-    if (virttype == VIR_DOMAIN_VIRT_KVM && capsType == VIR_DOMAIN_VIRT_QEMU) {
+    if (virQEMUCapsTypeIsAccelerated(virttype) &&
+        !virQEMUCapsTypeIsAccelerated(capsType)) {
         virReportError(VIR_ERR_INVALID_ARG,
                        _("KVM is not supported by '%s' on this host"),
                        binary);

From what I see, this check is now different:
- "capsType == VIR_DOMAIN_VIRT_QEMU" will be true only when capsType is
  VIR_DOMAIN_VIRT_QEMU
- !virQEMUCapsTypeIsAccelerated(capsType) will be true when capsType is
  not VIR_DOMAIN_VIRT_KVM
Hi Pino,

Yep, sure, I can leave the 'capsType == VIR_DOMAIN_VIRT_QEMU' check as is.

Thank you,
Roman

The function should be used to check if the QEMU capabilities include
hardware acceleration, i.e. the accel is not TCG.

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 src/qemu/qemu_capabilities.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index e302fbb48f..f80ee62019 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -643,6 +643,12 @@ virQEMUCapsTypeIsAccelerated(virDomainVirtType type)
     return type == VIR_DOMAIN_VIRT_KVM;
 }
 
+static bool
+virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps)
+{
+    return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM);
+}
+
 /* Checks whether a domain with @guest arch can run natively on @host.
  */
 bool
@@ -2387,7 +2393,7 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr qemuCaps,
     if (!(models = virQEMUCapsFetchCPUDefinitions(mon)))
         return -1;
 
-    if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+    if (tcg || !virQEMUCapsHaveAccel(qemuCaps))
         qemuCaps->tcgCPUModels = models;
     else
         qemuCaps->accelCPUModels = models;
@@ -2413,7 +2419,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
     if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION))
         return 0;
 
-    if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
+    if (tcg || !virQEMUCapsHaveAccel(qemuCaps)) {
         virtType = VIR_DOMAIN_VIRT_QEMU;
         model = "max";
     } else {
@@ -4528,7 +4534,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
     if (virQEMUCapsInitQMPMonitor(qemuCaps, cmd->mon) < 0)
         goto cleanup;
 
-    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
+    if (virQEMUCapsHaveAccel(qemuCaps)) {
         virQEMUCapsInitQMPCommandAbort(cmd);
         if ((rc = virQEMUCapsInitQMPCommandRun(cmd, true)) != 0) {
             if (rc == 1)
-- 
2.19.1

The function is needed to support multiple accelerators without
cluttering the codebase with conditionals. At first glance that might
cause an issue related to the order in which capabilities are checked
on a system with many accelerators, but in the current code base it
should be just fine because virQEMUCapsGetHostCPUData is not interested
in the exact type of accelerator.

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 src/qemu/qemu_capabilities.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index f80ee62019..1c6b79594d 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -649,6 +649,15 @@ virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps)
     return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM);
 }
 
+static virDomainVirtType
+virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps)
+{
+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
+        return VIR_DOMAIN_VIRT_KVM;
+    else
+        return VIR_DOMAIN_VIRT_QEMU;
+}
+
 /* Checks whether a domain with @guest arch can run natively on @host.
  */
 bool
@@ -2423,7 +2432,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
         virtType = VIR_DOMAIN_VIRT_QEMU;
         model = "max";
     } else {
-        virtType = VIR_DOMAIN_VIRT_KVM;
+        virtType = virQEMUCapsToVirtType(qemuCaps);
         model = "host";
     }
 
@@ -4969,10 +4978,7 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache,
         machine = virQEMUCapsGetPreferredMachine(qemuCaps);
     }
 
-    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
-        capsType = VIR_DOMAIN_VIRT_KVM;
-    else
-        capsType = VIR_DOMAIN_VIRT_QEMU;
+    capsType = virQEMUCapsToVirtType(qemuCaps);
 
     if (virttype == VIR_DOMAIN_VIRT_NONE)
         virttype = capsType;
-- 
2.19.1

This makes it possible to add more accelerators while touching less
code and reduces code duplication.

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
---
 src/qemu/qemu_capabilities.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 1c6b79594d..1cee9a833b 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -658,6 +658,16 @@ virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps)
         return VIR_DOMAIN_VIRT_QEMU;
 }
 
+static const char *
+virQEMUCapsAccelStr(virDomainVirtType type)
+{
+    if (type == VIR_DOMAIN_VIRT_KVM) {
+        return "kvm";
+    } else {
+        return "tcg";
+    }
+}
+
 /* Checks whether a domain with @guest arch can run natively on @host.
  */
 bool
@@ -3670,7 +3680,7 @@ virQEMUCapsFormatHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
 {
     virQEMUCapsHostCPUDataPtr cpuData = virQEMUCapsGetHostCPUData(qemuCaps, type);
     qemuMonitorCPUModelInfoPtr model = cpuData->info;
-    const char *typeStr = type == VIR_DOMAIN_VIRT_KVM ? "kvm" : "tcg";
+    const char *typeStr = virQEMUCapsAccelStr(type);
     size_t i;
 
     if (!model)
@@ -3725,16 +3735,13 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps,
                            virDomainVirtType type)
 {
     virDomainCapsCPUModelsPtr cpus;
-    const char *typeStr;
+    const char *typeStr = virQEMUCapsAccelStr(type);
     size_t i;
 
-    if (virQEMUCapsTypeIsAccelerated(type)) {
-        typeStr = "kvm";
+    if (virQEMUCapsTypeIsAccelerated(type))
         cpus = qemuCaps->accelCPUModels;
-    } else {
-        typeStr = "tcg";
+    else
         cpus = qemuCaps->tcgCPUModels;
-    }
 
     if (!cpus)
         return;
-- 
2.19.1

With more acceleration types supported, "KVM" should only appear in error messages that actually relate to KVM. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- src/qemu/qemu_capabilities.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 1cee9a833b..8a1fb2b5d9 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -4993,8 +4993,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache, if (virQEMUCapsTypeIsAccelerated(virttype) && !virQEMUCapsTypeIsAccelerated(capsType)) { virReportError(VIR_ERR_INVALID_ARG, - _("KVM is not supported by '%s' on this host"), - binary); + _("the accel '%s' is not supported by '%s' on this host"), + virQEMUCapsAccelStr(virttype), binary); goto cleanup; } -- 2.19.1

With this change virsh domcapabilites shows: <mode name='host-passthrough' supported='yes'/> Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> --- src/qemu/qemu_capabilities.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 8a1fb2b5d9..4297a11b27 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -640,13 +640,15 @@ static const char *virQEMUCapsArchToString(virArch arch) static bool virQEMUCapsTypeIsAccelerated(virDomainVirtType type) { - return type == VIR_DOMAIN_VIRT_KVM; + return type == VIR_DOMAIN_VIRT_KVM || + type == VIR_DOMAIN_VIRT_HVF; } static bool virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps) { - return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM); + return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) || + virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF); } static virDomainVirtType @@ -654,6 +656,8 @@ virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps) { if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) return VIR_DOMAIN_VIRT_KVM; + else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + return VIR_DOMAIN_VIRT_HVF; else return VIR_DOMAIN_VIRT_QEMU; } @@ -663,6 +667,8 @@ virQEMUCapsAccelStr(virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) { return "kvm"; + } else if (type == VIR_DOMAIN_VIRT_HVF) { + return "hvf"; } else { return "tcg"; } @@ -3109,6 +3115,8 @@ virQEMUCapsLoadHostCPUModelInfo(virQEMUCapsPtr qemuCaps, if (virtType == VIR_DOMAIN_VIRT_KVM) hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt); + else if (virtType == VIR_DOMAIN_VIRT_HVF) + hostCPUNode = virXPathNode("./hostCPU[@type='hvf']", ctxt); else hostCPUNode = virXPathNode("./hostCPU[@type='tcg']", ctxt); @@ -3244,6 +3252,8 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, if (type == VIR_DOMAIN_VIRT_KVM) n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, &nodes); + else if (type == VIR_DOMAIN_VIRT_HVF) + n = virXPathNodeSet("./cpu[@type='hvf']", ctxt, &nodes); else n = virXPathNodeSet("./cpu[@type='tcg']", ctxt, &nodes); @@ -3542,11 +3552,15 @@ virQEMUCapsLoadCache(virArch hostArch, if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || + (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_HVF) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || + (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF) && + virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_HVF) < 0) || virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; @@ -3661,6 +3675,8 @@ virQEMUCapsLoadCache(virArch hostArch, if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_HVF); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); ret = 0; @@ -3841,10 +3857,14 @@ virQEMUCapsFormatCache(virQEMUCapsPtr qemuCaps) if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_HVF); 
virQEMUCapsFormatHostCPUModelInfo(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_KVM); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_HVF); virQEMUCapsFormatCPUModels(qemuCaps, &buf, VIR_DOMAIN_VIRT_QEMU); for (i = 0; i < qemuCaps->nmachineTypes; i++) { @@ -4455,7 +4475,7 @@ virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr cmd, if (forceTCG) machine = "none,accel=tcg"; else - machine = "none,accel=kvm:tcg"; + machine = "none,accel=kvm:hvf:tcg"; VIR_DEBUG("Try to probe capabilities of '%s' via QMP, machine %s", cmd->binary, machine); @@ -4646,6 +4666,8 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch, if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_HVF); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) { -- 2.19.1
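
The new probe string asks QEMU for the first accelerator that initializes, in the order kvm, hvf, tcg. A hedged way to exercise the same fallback by hand, and to check the domcapabilities output mentioned above, assuming a macOS host with QEMU installed under /usr/local (paths are examples only):

  $ qemu-system-x86_64 -display none -machine none,accel=kvm:hvf:tcg -monitor stdio
  $ virsh domcapabilities --emulatorbin /usr/local/bin/qemu-system-x86_64 --virttype hvf

On a host where hvf is usable, the second command should report <mode name='host-passthrough' supported='yes'/> as described in the commit message above; type quit at the (qemu) prompt to leave the first one.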

Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- docs/news.xml | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/docs/news.xml b/docs/news.xml index 4406aeb775..90e378187d 100644 --- a/docs/news.xml +++ b/docs/news.xml @@ -68,6 +68,18 @@ be viewed via the domain statistics. </description> </change> + <change> + <summary> + qemu: Add hvf domain type for Hypervisor.framework + </summary> + <description> + QEMU introduced experimental support of Hypervisor.framework + since 2.12. + + It's supported on machines with Intel VT-x feature set that includes + Extended Page Tables (EPT) and Unrestricted Mode since macOS 10.10. + </description> + </change> </section> <section title="Improvements"> </section> -- 2.19.1

It's worth to make the domain type a little bit more visible than a row in news. An example of hvf domain is available on QEMU driver page. While at it, mention Hypervisor.framework on index page. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- docs/drvqemu.html.in | 49 +++++++++++++++++++++++++++++++++++++++++--- docs/index.html.in | 1 + 2 files changed, 47 insertions(+), 3 deletions(-) diff --git a/docs/drvqemu.html.in b/docs/drvqemu.html.in index 0d14027646..7c511ce3b6 100644 --- a/docs/drvqemu.html.in +++ b/docs/drvqemu.html.in @@ -2,13 +2,16 @@ <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <body> - <h1>KVM/QEMU hypervisor driver</h1> + <h1>QEMU/KVM/HVF hypervisor driver</h1> <ul id="toc"></ul> <p> - The libvirt KVM/QEMU driver can manage any QEMU emulator from - version 1.5.0 or later. + The libvirt QEMU driver can manage any QEMU emulator from + version 1.5.0 or later. It supports multiple QEMU accelerators: software + emulation also known as TCG, hardware-assisted virtualization on Linux + with KVM and hardware-assisted virtualization on macOS with + Hypervisor.framework (<span class="since">since 4.10.0</span>). </p> <h2><a id="project">Project Links</a></h2> @@ -21,6 +24,9 @@ <li> The <a href="https://wiki.qemu.org/Index.html">QEMU</a> emulator </li> + <li> + <a href="https://developer.apple.com/documentation/hypervisor">Hypervisor.framework</a> reference + </li> </ul> <h2><a id="prereq">Deployment pre-requisites</a></h2> @@ -41,6 +47,13 @@ node. If both are found, then KVM fullyvirtualized, hardware accelerated guests will be available. </li> + <li> + <strong>Hypervisor.framework (HVF)</strong>: The driver will probe + <code>sysctl</code> for the presence of + <code>Hypervisor.framework</code>. If it is found and QEMU is newer + than 2.12, then it will be possible to create hardware accelerated + guests. + </li> </ul> <h2><a id="uris">Connections to QEMU driver</a></h2> @@ -640,5 +653,35 @@ $ virsh domxml-to-native qemu-argv demo.xml </devices> </domain></pre> + <h3>HVF hardware accelerated guest on x86_64</h3> + + <pre><domain type='hvf'> + <name>hvf-demo</name> + <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid> + <memory>131072</memory> + <vcpu>1</vcpu> + <os> + <type arch="x86_64">hvm</type> + </os> + <features> + <acpi/> + </features> + <clock sync="localtime"/> + <devices> + <emulator>/usr/local/bin/qemu-system-x86_64</emulator> + <controller type='scsi' index='0' model='virtio-scsi'/> + <disk type='volume' device='disk'> + <driver name='qemu' type='qcow2'/> + <source pool='default' volume='myos'/> + <target bus='scsi' dev='sda'/> + </disk> + <interface type='user'> + <mac address='24:42:53:21:52:45'/> + <model type='virtio'/> + </interface> + <graphics type='vnc' port='-1'/> + </devices> +</domain></pre> + </body> </html> diff --git a/docs/index.html.in b/docs/index.html.in index 1f9f448399..b02802fdd9 100644 --- a/docs/index.html.in +++ b/docs/index.html.in @@ -32,6 +32,7 @@ <li>is accessible from C, Python, Perl, Java and more</li> <li>is licensed under open source licenses</li> <li>supports <a href="drvqemu.html">KVM</a>, + <a href="drvqemu.html">Hypervisor.framework</a>, <a href="drvqemu.html">QEMU</a>, <a href="drvxen.html">Xen</a>, <a href="drvvirtuozzo.html">Virtuozzo</a>, <a href="drvesx.html">VMWare ESX</a>, -- 2.19.1
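
A short usage sketch for the example above (not part of the patch): assuming the XML is saved as hvf-demo.xml and the referenced storage volume exists, the domain is managed like any other session domain:

  $ virsh -c qemu:///session define hvf-demo.xml
  $ virsh -c qemu:///session start hvf-demo
  $ virsh -c qemu:///session vncdisplay hvf-demo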

Many domain elements have "QEMU and KVM only" or "QEMU/KVM since x.y.z" remarks. Most of the elements work for HVF domain, so it makes sense to add respective notices for HVF domain. All the elements have been manually tested. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- docs/formatdomain.html.in | 133 ++++++++++++++++++++++---------------- 1 file changed, 77 insertions(+), 56 deletions(-) diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 25dd4bbbd6..b1a64c7c74 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -158,10 +158,10 @@ which is specified by absolute path, used to assist the domain creation process. It is used by Xen fully virtualized domains as well as setting the QEMU BIOS file - path for QEMU/KVM domains. <span class="since">Xen since 0.1.0, - QEMU/KVM since 0.9.12</span> Then, <span class="since">since - 1.2.8</span> it's possible for the element to have two - optional attributes: <code>readonly</code> (accepted values are + path for QEMU/KVM/HVF domains. <span class="since">Xen since 0.1.0, + QEMU/KVM since 0.9.12, HVF since 4.10.0</span> Then, <span + class="since">since 1.2.8</span> it's possible for the element to have + two optional attributes: <code>readonly</code> (accepted values are <code>yes</code> and <code>no</code>) to reflect the fact that the image should be writable or read-only. The second attribute <code>type</code> accepts values <code>rom</code> and @@ -680,7 +680,7 @@ IOThreads are dedicated event loop threads for supported disk devices to perform block I/O requests in order to improve scalability especially on an SMP host/guest with many LUNs. - <span class="since">Since 1.2.8 (QEMU only)</span> + <span class="since">QEMU/KVM since 1.2.8, HVF since 4.10.0</span> </p> <pre> @@ -1603,12 +1603,13 @@ Both <code>host-model</code> and <code>host-passthrough</code> modes make sense when a domain can run directly on the host CPUs (for - example, domains with type <code>kvm</code>). The actual host CPU is - irrelevant for domains with emulated virtual CPUs (such as domains with - type <code>qemu</code>). However, for backward compatibility - <code>host-model</code> may be implemented even for domains running on - emulated CPUs in which case the best CPU the hypervisor is able to - emulate may be used rather then trying to mimic the host CPU model. + example, domains with type <code>kvm</code> or <code>hvf</code>). The + actual host CPU is irrelevant for domains with emulated virtual CPUs + (such as domains with type <code>qemu</code>). However, for backward + compatibility <code>host-model</code> may be implemented even for + domains running on emulated CPUs in which case the best CPU the + hypervisor is able to emulate may be used rather then trying to mimic + the host CPU model. </dd> <dt><code>model</code></dt> @@ -1902,12 +1903,12 @@ </dl> <p> - QEMU/KVM supports the <code>on_poweroff</code> and <code>on_reboot</code> - events handling the <code>destroy</code> and <code>restart</code> actions. - The <code>preserve</code> action for an <code>on_reboot</code> event - is treated as a <code>destroy</code> and the <code>rename-restart</code> - action for an <code>on_poweroff</code> event is treated as a - <code>restart</code> event. + QEMU/KVM/HVF domains support the <code>on_poweroff</code> and + <code>on_reboot</code> events handling the <code>destroy</code> and + <code>restart</code> actions. 
The <code>preserve</code> action for an + <code>on_reboot</code> event is treated as a <code>destroy</code> and the + <code>rename-restart</code> action for an <code>on_poweroff</code> event is + treated as a <code>restart</code> event. </p> <p> @@ -2043,7 +2044,7 @@ to address more than 4 GB of memory.</dd> <dt><code>acpi</code></dt> <dd>ACPI is useful for power management, for example, with - KVM guests it is required for graceful shutdown to work. + KVM or HVF guests it is required for graceful shutdown to work. </dd> <dt><code>apic</code></dt> <dd>APIC allows the use of programmable IRQ @@ -2286,7 +2287,8 @@ </dd> <dt><code>vmcoreinfo</code></dt> <dd>Enable QEMU vmcoreinfo device to let the guest kernel save debug - details. <span class="since">Since 4.4.0</span> (QEMU only) + details. <span class="since">QEMU/KVM since 4.4.0, HVF since + 4.10.0</span> </dd> <dt><code>htm</code></dt> <dd>Configure HTM (Hardware Transational Memory) availability for @@ -3559,7 +3561,7 @@ Copy-on-read avoids accessing the same backing file sectors repeatedly and is useful when the backing file is over a slow network. By default copy-on-read is off. - <span class='since'>Since 0.9.10 (QEMU and KVM only)</span> + <span class='since'>QEMU/KVM since 0.9.10, HVF since 4.10.0</span> </li> <li> The optional <code>discard</code> attribute controls whether @@ -3567,7 +3569,7 @@ ignored or passed to the filesystem. The value can be either "unmap" (allow the discard request to be passed) or "ignore" (ignore the discard request). - <span class='since'>Since 1.0.6 (QEMU and KVM only)</span> + <span class='since'>QEMU/KVM since 1.0.6, HVF since 4.10.0</span> </li> <li> The optional <code>detect_zeroes</code> attribute controls whether @@ -3723,7 +3725,7 @@ <dt><code>blockio</code></dt> <dd>If present, the <code>blockio</code> element allows to override any of the block device properties listed below. - <span class="since">Since 0.10.2 (QEMU and KVM)</span> + <span class="since">QEMU/KVM since 0.10.2, HVF since 4.10.0</span> <dl> <dt><code>logical_block_size</code></dt> <dd>The logical block size the disk will report to the guest @@ -4157,14 +4159,14 @@ The optional <code>queues</code> attribute specifies the number of queues for the controller. For best performance, it's recommended to specify a value matching the number of vCPUs. - <span class="since">Since 1.0.5 (QEMU and KVM only)</span> + <span class="since">QEMU/KVM since 1.0.5, HVF since 4.10.0</span> </dd> <dt><code>cmd_per_lun</code></dt> <dd> The optional <code>cmd_per_lun</code> attribute specifies the maximum number of commands that can be queued on devices controlled by the host. - <span class="since">Since 1.2.7 (QEMU and KVM only)</span> + <span class="since">QEMU/KVM since 1.2.7, HVF since 4.10.0</span> </dd> <dt><code>max_sectors</code></dt> <dd> @@ -4172,7 +4174,7 @@ amount of data in bytes that will be transferred to or from the device in a single command. The transfer length is measured in sectors, where a sector is 512 bytes. - <span class="since">Since 1.2.7 (QEMU and KVM only)</span> + <span class="since">QEMU/KVM since 1.2.7, HVF since 4.10.0</span> </dd> <dt><code>ioeventfd</code></dt> <dd> @@ -4268,7 +4270,8 @@ <code>unit</code> attribute) the 64-bit PCI hole should be. Some guests (like Windows XP or Windows Server 2003) might crash when QEMU and Seabios are recent enough to support 64-bit PCI holes, unless this is disabled - (set to 0). <span class="since">Since 1.1.2 (QEMU only)</span> + (set to 0). 
<span class="since">QEMU/KVM since 1.1.2, HVF since + 4.10.0</span> </p> <p> PCI controllers also have an optional @@ -4280,8 +4283,8 @@ model <b>attribute</b>. In almost all cases, you should not manually add a <code><model></code> subelement to a controller, nor should you modify one that is automatically - generated by libvirt. <span class="since">Since 1.2.19 (QEMU - only).</span> + generated by libvirt. <span class="since">QEMU/KVM since 1.2.19, HVF + since 4.10.0</span> </p> <p> PCI controllers also have an optional @@ -4293,7 +4296,8 @@ should not manually add a <code><target></code> subelement to a controller, nor should you modify the values in the those that are automatically generated by - libvirt. <span class="since">Since 1.2.19 (QEMU only).</span> + libvirt. <span class="since">QEMU/KVM since 1.2.19, HVF since 4.10.0 + </span> </p> <dl> <dt><code>chassisNr</code></dt> @@ -5681,7 +5685,7 @@ <p> The values for <code>type</code> aren't defined specifically by libvirt, but by what the underlying hypervisor supports (if - any). For QEMU and KVM you can get a list of supported models + any). For QEMU, KVM and HVF you can get a list of supported models with these commands: </p> @@ -5691,7 +5695,7 @@ qemu-kvm -net nic,model=? /dev/null </pre> <p> - Typical values for QEMU and KVM include: + Typical values for QEMU, KVM and HVF include: ne2k_isa i82551 i82557b i82559er ne2k_pci pcnet rtl8139 e1000 virtio </p> @@ -5730,7 +5734,7 @@ qemu-kvm -net nic,model=? /dev/null will be rejected. If this attribute is not present, then the domain defaults to 'vhost' if present, but silently falls back to 'qemu' without error. - <span class="since">Since 0.8.8 (QEMU and KVM only)</span> + <span class="since">QEMU/KVM since 0.8.8, HVF since 4.10.0</span> </dd> <dd> For interfaces of type='hostdev' (PCI passthrough devices) @@ -5755,7 +5759,8 @@ qemu-kvm -net nic,model=? /dev/null The <code>txmode</code> attribute specifies how to handle transmission of packets when the transmit buffer is full. The value can be either 'iothread' or 'timer'. - <span class="since">Since 0.8.8 (QEMU and KVM only)</span><br/><br/> + <span class="since">QEMU/KVM since 0.8.8, HVF since 4.10.0</span> + <br/><br/> If set to 'iothread', packet tx is all done in an iothread in the bottom half of the driver (this option translates into @@ -5801,7 +5806,8 @@ qemu-kvm -net nic,model=? /dev/null usually if the feature is supported, default is on. In case there is a situation where this behavior is suboptimal, this attribute provides a way to force the feature off. - <span class="since">Since 0.9.5 (QEMU and KVM only)</span><br/><br/> + <span class="since">QEMU/KVM since 0.9.5, HVF since 4.10.0</span> + <br/><br/> <b>In general you should leave this option alone, unless you are very certain you know what you are doing.</b> @@ -5828,7 +5834,8 @@ qemu-kvm -net nic,model=? /dev/null some restrictions on actual value. For instance, latest QEMU (as of 2016-09-01) requires value to be a power of two from [256, 1024] range. - <span class="since">Since 2.3.0 (QEMU and KVM only)</span><br/><br/> + <span class="since">QEMU/KVM since 2.3.0, HVF since 4.10.0</span> + <br/><br/> <b>In general you should leave this option alone, unless you are very certain you know what you are doing.</b> @@ -5844,7 +5851,8 @@ qemu-kvm -net nic,model=? /dev/null range. In addition to that, this may work only for a subset of interface types, e.g. aforementioned QEMU enables this option only for <code>vhostuser</code> type. 
- <span class="since">Since 3.7.0 (QEMU and KVM only)</span><br/><br/> + <span class="since">QEMU/KVM since 3.7.0, HVF since 4.10.0</span> + <br/><br/> <b>In general you should leave this option alone, unless you are very certain you know what you are doing.</b> @@ -6748,7 +6756,10 @@ qemu-kvm -net nic,model=? /dev/null of the first forward dev will be used. </p> </dd> - <dt><code>socket</code> <span class="since">since 2.0.0 (QEMU only)</span></dt> + <dt> + <code>socket</code> <span class="since">QEMU/KVM since 2.0.0, HVF + since 4.10.0</span> + </dt> <dd> <p> This listen type tells a graphics server to listen on unix socket. @@ -6764,7 +6775,10 @@ qemu-kvm -net nic,model=? /dev/null attribute all <code>listen</code> elements are ignored. </p> </dd> - <dt><code>none</code> <span class="since">since 2.0.0 (QEMU only)</span></dt> + <dt> + <code>none</code> <span class="since">QEMU/KVM since 2.0.0, HVF + since 4.10.0</span> + </dt> <dd> <p> This listen type doesn't have any other attribute. Libvirt supports @@ -6838,19 +6852,21 @@ qemu-kvm -net nic,model=? /dev/null <p> You can provide the amount of video memory in kibibytes (blocks of 1024 bytes) using <code>vram</code>. This is supported only for guest - type of "libxl", "vz", "qemu", "vbox", "vmx" and "xen". If no - value is provided the default is used. If the size is not a power of - two it will be rounded to closest one. + type of "libxl", "vz", "qemu", "kvm", "hvf", "vbox", "vmx" and "xen". + If no value is provided the default is used. If the size is not a + power of two it will be rounded to closest one. </p> <p> The number of screen can be set using <code>heads</code>. This is - supported only for guests type of "vz", "kvm", "vbox" and "vmx". + supported only for guests type of "vz", "kvm", "hvf", "vbox" and + "vmx". </p> <p> - For guest type of "kvm" or "qemu" and model type "qxl" there are - optional attributes. Attribute <code>ram</code> (<span class="since"> - since 1.0.2</span>) specifies the size of the primary bar, while the - attribute <code>vram</code> specifies the secondary bar size. + For guest type of "kvm", "hvf" or "qemu" and model type "qxl" there + are optional attributes. Attribute <code>ram</code> (<span + class="since"> since 1.0.2</span>) specifies the size of the primary + bar, while the attribute <code>vram</code> specifies the secondary bar + size. If <code>ram</code> or <code>vram</code> are not supplied a default value is used. The <code>ram</code> should also be rounded to power of two as <code>vram</code>. There is also optional attribute @@ -7735,7 +7751,7 @@ qemu-kvm -net nic,model=? /dev/null <p> A virtual hardware watchdog device can be added to the guest via the <code>watchdog</code> element. - <span class="since">Since 0.7.3, QEMU and KVM only</span> + <span class="since">QEMU/KVM since 0.7.3, HVF since 4.10.0</span> </p> <p> @@ -7773,12 +7789,17 @@ qemu-kvm -net nic,model=? /dev/null underlying hypervisor. </p> <p> - QEMU and KVM support: + QEMU, KVM and HVF support: </p> <ul> <li>'i6300esb' - the recommended device, emulating a PCI Intel 6300ESB </li> <li>'ib700' - emulating an ISA iBase IB700 </li> + </ul> + <p> + QEMU and KVM for s390/s390x support: + </p> + <ul> <li>'diag288' - emulating an S390 DIAG288 device <span class="since">Since 1.2.17</span></li> </ul> @@ -7791,7 +7812,7 @@ qemu-kvm -net nic,model=? /dev/null specific to the underlying hypervisor. 
</p> <p> - QEMU and KVM support: + QEMU, KVM and HVF support: </p> <ul> <li>'reset' - default, forcefully reset the guest</li> @@ -7823,14 +7844,14 @@ qemu-kvm -net nic,model=? /dev/null <h4><a id="elementsMemBalloon">Memory balloon device</a></h4> <p> - A virtual memory balloon device is added to all Xen and KVM/QEMU + A virtual memory balloon device is added to all Xen and QEMU/KVM/HVF guests. It will be seen as <code>memballoon</code> element. It will be automatically added when appropriate, so there is no need to explicitly add this element in the guest XML unless a specific PCI slot needs to be assigned. - <span class="since">Since 0.8.3, Xen, QEMU and KVM only</span> - Additionally, <span class="since">since 0.8.4</span>, if the - memballoon device needs to be explicitly disabled, + <span class="since">Xen, QEMU and KVM since 0.8.3, HVF since + 4.10.0</span> Additionally, <span class="since">since 0.8.4</span>, if + the memballoon device needs to be explicitly disabled, <code>model='none'</code> may be used. </p> @@ -7867,7 +7888,7 @@ qemu-kvm -net nic,model=? /dev/null the virtualization platform </p> <ul> - <li>'virtio' - default with QEMU/KVM</li> + <li>'virtio' - default with QEMU/KVM/HVF</li> <li>'xen' - default with Xen</li> </ul> </dd> @@ -8148,7 +8169,7 @@ qemu-kvm -net nic,model=? /dev/null <p> panic device enables libvirt to receive panic notification from a QEMU guest. - <span class="since">Since 1.2.1, QEMU and KVM only</span> + <span class="since">QEMU/KVM since 1.2.1, HVF since 4.10.0</span> </p> <p> This feature is always enabled for: @@ -8816,7 +8837,7 @@ qemu-kvm -net nic,model=? /dev/null <ul> <li><a href="drvxen.html#xmlconfig">Xen examples</a></li> - <li><a href="drvqemu.html#xmlconfig">QEMU/KVM examples</a></li> + <li><a href="drvqemu.html#xmlconfig">QEMU/KVM/HVF examples</a></li> </ul> </body> </html> -- 2.19.1

While at it, rename OS-X on index page to macOS. Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com> --- docs/docs.html.in | 3 + docs/index.html.in | 3 +- docs/macos.html.in | 229 +++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 234 insertions(+), 1 deletion(-) create mode 100644 docs/macos.html.in diff --git a/docs/docs.html.in b/docs/docs.html.in index 40e0e3b82e..84a51a55fb 100644 --- a/docs/docs.html.in +++ b/docs/docs.html.in @@ -12,6 +12,9 @@ <dt><a href="windows.html">Windows</a></dt> <dd>Downloads for Windows</dd> + <dt><a href="macos.html">macOS</a></dt> + <dd>Working with libvirt on macOS</dd> + <dt><a href="migration.html">Migration</a></dt> <dd>Migrating guests between machines</dd> diff --git a/docs/index.html.in b/docs/index.html.in index b02802fdd9..34b491ec69 100644 --- a/docs/index.html.in +++ b/docs/index.html.in @@ -39,7 +39,8 @@ <a href="drvlxc.html">LXC</a>, <a href="drvbhyve.html">BHyve</a> and <a href="drivers.html">more</a></li> - <li>targets Linux, FreeBSD, <a href="windows.html">Windows</a> and OS-X</li> + <li>targets Linux, FreeBSD, <a href="windows.html">Windows</a> and + <a href="macos.html">macOS</a></li> <li>is used by many <a href="apps.html">applications</a></li> </ul> <p>Recent / forthcoming <a href="news.html">release changes</a></p> diff --git a/docs/macos.html.in b/docs/macos.html.in new file mode 100644 index 0000000000..54c93ea2fb --- /dev/null +++ b/docs/macos.html.in @@ -0,0 +1,229 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE html> +<html xmlns="http://www.w3.org/1999/xhtml"> + <body> + <h1 >macOS support</h1> + + <ul id="toc"></ul> + + <p> + Libvirt works both as client and server (for <a href="drvqemu.html"> + "qemu" domain</a>) on macOS High Sierra (10.13) and macOS Mojave (10.14) + since 4.7.0. Other macOS variants likely work but we neither tested nor + received reports for them. + </p> + + <p> + <a href="drvqemu.html">"hvf" domain type</a> adds support of <a + href="https://developer.apple.com/documentation/hypervisor"> + Hypervisor.framework</a> since 4.10.0. To use "hvf" domain, QEMU must + be at least 2.12 and macOS must be no less than Yosemite (10.10). "hvf" + domain type is similar to "kvm" but it has less features. + </p> + + <p> + Hypervisor.framework is available on your machine if the sysctl command + returns 1: + + <pre>sysctl -n kern.hv_support</pre> + </p> + + <h2><a id="installation">Installation</a></h2> + + <p> + libvirt client (virsh), server (libvirtd) and development headers can be + installed from <a href="https://brew.sh">homebrew</a>: + + <pre>brew install libvirt</pre> + + <a href="http://virt-manager.org">virt-manager and virt-viewer</a> can be + installed from source via <a + href="https://github.com/jeffreywildman/homebrew-virt-manager"> + Jeffrey Wildman's tap</a>: + + <pre>brew tap jeffreywildman/homebrew-virt-manager +brew install virt-manager virt-viewer</pre> + </p> + + <h2><a id='local-libvirtd'>Running libvirtd locally</a></h2> + + <p> + The server can be started manually: + <pre>libvirtd</pre> + or on system boot: + <pre>brew services start libvirt</pre> + </p> + <p> + Once started, you can use virsh to work with libvirtd: + <pre>virsh define domain.xml +virsh start domain +virsh shutdown domain</pre> + + For more details on virsh, please see <a href="virshcmdref.html">virsh + command reference</a> or built-in help: + <pre>virsh help</pre> + </p> + + <p> + Domain XML examples can be found on <a href="drvqemu.html#xmlconfig">QEMU + driver page</a>. 
Full reference is available on <a + href="formatdomain.html">domain XML format page</a>. + </p> + + <p> + You can use virt-manager to connect to libvirtd (connection URI must be + specified on the first connection, then it'll be possible to omit it): + <pre>virt-manager -c qemu:///session</pre> + or, if you only need an access to the virtual display of a VM you can use + virt-viewer: + <pre>virt-viewer -c qemu:///session</pre> + </p> + + <h2><a id="external-hypervisors">Working with external hypervisors</a></h2> + <p> + Details on the example domain XML files, capabilities and connection + string syntax used for connecting to external hypervisors can be found + online on <a href="drivers.html">hypervisor specific driver + pages</a>. + </p> + + <h2><a id="tlscerts">TLS Certificates</a></h2> + + <p> + TLS certificates must be placed in the correct locations, before you will + be able to connect to QEMU servers over TLS. + </p> + + <p> + Information on generating TLS certificates can be found here: + </p> + + <a href="http://wiki.libvirt.org/page/TLSSetup">http://wiki.libvirt.org/page/TLSSetup</a> + + <p> + The Certificate Authority (CA) certificate file must be placed in: + </p> + + <ul> + <li>~/.cache/libvirt/pki/CA/cacert.pem</li> + </ul> + + <p> + The Client certificate file must be placed in: + </p> + + <ul> + <li>~/.cache/libvirt/pki/libvirt/clientcert.pem</li> + </ul> + + <p> + The Client key file must be placed in: + </p> + + <ul> + <li>~/.cache/libvirt/pki/libvirt/private/clientkey.pem</li> + </ul> + + <h2><a id="known-issues">Known issues</a></h2> + <p> + This is a list of issues that can be easily fixed and provide + substantial improvement of user experience: + </p> + <ul> + <li> + virt-install doesn't work unless disks are created upfront. The reason + is because VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA sets + preallocate=falloc which is not supported by qemu-img on macOS. + </li> + <li> + "hvf" is not default domain type when virt-install connects to the + local libvirtd on macOS + </li> + <li> + QXL VGA device and SPICE display cannot be used unless QEMU is compiled + with SPICE server. The changes to build and run SPICE server on macOS + haven't been sent to upstream yet. + </li> + <li> + "make check" reports many failing tests on macOS. Some of the tests + need to be adopted to run both on Linux and macOS. + </li> + <li> + "make syntax-check" needs be fixed too, it depends on GNU version of + grep but uses system (BSD) grep. + </li> + <li> + QEMU from homebrew is compiled without USB redirection support. + </li> + <li> + CPU usage is not gathered for VMs and therefore cannot be dispalyed in + virt-manager. + </li> + <li> + libvirtd logs are noisy because some features are missing. + </li> + </ul> + + <h2><a id="missing-features">Missing features</a></h2> + <p> + "hvf" is a new domain type and can't be compared to "kvm" feature-wise. + "kvm" domain relies on QEMU backend devices implemented in Linux kernel + such as para-virtualized vhost devices and PCI-passthrough with vfio. + + Nonetheless, some of the features available in "kvm" domain can be + implemented in userspace for "hvf" domain. + </p> + <ul> + <li> + Instruction emulation in "hvf" accelerator is not mature. The bugs are + tracked on <a + href="https://bugs.launchpad.net/qemu/+bugs?field.tag=hvf">QEMU bug + tracker</a>. 
+ </li> + <li> + Power Management notifications are not implemented, therefore guests + cannot respond to <a + href="https://developer.apple.com/library/archive/qa/qa1340/_index.html"> + sleep events on the host</a>. + </li> + <li> + CPU pinning doesn't work but macOS provides <a + href="https://developer.apple.com/library/archive/releasenotes/Performance/RN-AffinityAPI/"> + Thread Affinity API</a> that can be used to implement it. + </li> + <li> + Network management is not available but macOS has an API that is used + by ifconfig to create bridge and tap devices. So, it should be possible + to implement network management and bridged networking. + </li> + <li> + Filesystem pass-through is not available. + </li> + <li> + PCI/SCSI/USB pass-through is not available. + </li> + </ul> + + + <h2><a id="feedback">Feedback</a></h2> + + <p> + Feedback and suggestions on changes and what else to include + <a href="contact.html">are desired</a>. + </p> + + <h2><a id="compiling">Compiling yourself</a></h2> + + <p> + Use these options when following the instructions on the + <a href="compiling.html">Compiling</a> page. + </p> + +<pre> +./configure \ + --without-wireshark-dissector \ + --without-dbus +</pre> + + </body> +</html> -- 2.19.1
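
One workflow the page does not spell out is driving the macOS libvirtd from another machine. A hedged sketch, assuming SSH access to the Mac is enabled and the session daemon is already running for that user (user, hostname and domain name are placeholders):

  $ virsh -c qemu+ssh://user@mac.example.com/session list --all
  $ virt-viewer -c qemu+ssh://user@mac.example.com/session hvf-demo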