[libvirt PATCH 0/5] add loongarch support for libvirt

From: lixianglai <lixianglai@loongson.cn>

Hello, everyone:

This patch series adds libvirt support for loongarch. Although the BIOS path and name have not been officially integrated into qemu, and we think there are still many shortcomings, we are pushing a version of the series to the community based on earlier feedback, hoping to hear everyone's opinions. In any case, we have a version of libvirt that supports loongarch. You can also get libvirt's patches from the link below:

https://gitlab.com/lixianglai/libvirt branch: loongarch

Since the loongarch patches have not yet been submitted to the virt-manager community, we are providing a temporarily patched virt-manager with loongarch support for the time being; the upstream work of adding loongarch to virt-manager will follow later or be synchronized with the libvirt work. You can get the virt-manager code with the loongarch patch from the link below:

https://github.com/loongson/virt-manager branch: loongarch

loongarch's virtual machine BIOS is not yet available in qemu, so you can get it from the following link:

https://github.com/loongson/Firmware/tree/main/LoongArchVirtMachine

(Note: you should clone the repository using git instead of downloading the files via wget, or you'll get XML instead of the firmware binaries.)

We named the BIOS edk2-loongarch64-code.fd; edk2-loongarch64-vars.fd is used to store pflash images of non-volatile variables. After installing qemu-system-loongarch64, you need to manually copy these two files to the /usr/share/qemu directory.
Since there is no Fedora release that supports the loongarch architecture yet, you can find an ISO that supports loongarch at the link below for testing purposes:

https://github.com/fedora-remix-loongarch/releases-info

If you have completed the above steps, you can now install a loongarch virtual machine, either through the virt-manager graphical interface or through virt-install. Here is an example of installing one using virt-install:

virt-install \
  --virt-type=qemu \
  --name loongarch-test \
  --memory 4096 \
  --vcpus=4 \
  --arch=loongarch64 \
  --boot cdrom \
  --disk device=cdrom,bus=scsi,path=/root/livecd-fedora-mate-4.loongarch64.iso \
  --disk path=/var/lib/libvirt/images/debian12-loongarch64.qcow2,size=10,format=qcow2,bus=scsi \
  --network network=default \
  --osinfo archlinux \
  --feature acpi=true \
  --video=virtio \
  --graphics=vnc,listen=0.0.0.0

lixianglai (5):
  Add loongarch cpu support
  Add loongarch cpu model and vendor info
  Config some capabilities for loongarch virt machine
  Implement the method of getting host info for loongarch
  Add bios path for loongarch

 po/POTFILES                        |   1 +
 src/conf/schemas/basictypes.rng    |   1 +
 src/cpu/cpu.c                      |   2 +
 src/cpu/cpu.h                      |   2 +
 src/cpu/cpu_loongarch.c            | 742 +++++++++++++++++++++++++++++
 src/cpu/cpu_loongarch.h            |  25 +
 src/cpu/cpu_loongarch_data.h       |  37 ++
 src/cpu/meson.build                |   1 +
 src/cpu_map/index.xml              |   5 +
 src/cpu_map/loongarch_la464.xml    |   6 +
 src/cpu_map/loongarch_vendors.xml  |   3 +
 src/cpu_map/meson.build            |   2 +
 src/qemu/qemu.conf.in              |   3 +-
 src/qemu/qemu_capabilities.c       |   6 +
 src/qemu/qemu_conf.c               |   3 +-
 src/qemu/qemu_domain.c             |  32 ++
 src/qemu/qemu_domain.h             |   1 +
 src/qemu/qemu_domain_address.c     |  55 +++
 src/qemu/qemu_validate.c           |   2 +-
 src/qemu/test_libvirtd_qemu.aug.in |   1 +
 src/util/virarch.c                 |   4 +
 src/util/virarch.h                 |   4 +
 src/util/virhostcpu.c              |   4 +-
 src/util/virsysinfo.c              |   5 +-
 24 files changed, 940 insertions(+), 7 deletions(-)
 create mode 100644 src/cpu/cpu_loongarch.c
 create mode 100644 src/cpu/cpu_loongarch.h
 create mode 100644 src/cpu/cpu_loongarch_data.h
 create mode 100644 src/cpu_map/loongarch_la464.xml
 create mode 100644 src/cpu_map/loongarch_vendors.xml

-- 
2.27.0

From: lixianglai <lixianglai@loongson.cn>

Add loongarch cpu support, define new cpu type 'loongarch64' and implement its driver functions.

Signed-off-by: lixianglai <lixianglai@loongson.cn>
---
 po/POTFILES                     |   1 +
 src/conf/schemas/basictypes.rng |   1 +
 src/cpu/cpu.c                   |   2 +
 src/cpu/cpu.h                   |   2 +
 src/cpu/cpu_loongarch.c         | 742 ++++++++++++++++++++++++++++++++
 src/cpu/cpu_loongarch.h         |  25 ++
 src/cpu/cpu_loongarch_data.h    |  37 ++
 src/cpu/meson.build             |   1 +
 src/qemu/qemu_capabilities.c    |   1 +
 src/qemu/qemu_domain.c          |   4 +
 src/util/virarch.c              |   2 +
 src/util/virarch.h              |   4 +
 12 files changed, 822 insertions(+)
 create mode 100644 src/cpu/cpu_loongarch.c
 create mode 100644 src/cpu/cpu_loongarch.h
 create mode 100644 src/cpu/cpu_loongarch_data.h

diff --git a/po/POTFILES b/po/POTFILES
index 3a51aea5cb..c0e66d563e 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -70,6 +70,7 @@ src/cpu/cpu.c
 src/cpu/cpu_arm.c
 src/cpu/cpu_map.c
 src/cpu/cpu_ppc64.c
+src/cpu/cpu_loongarch.c
 src/cpu/cpu_riscv64.c
 src/cpu/cpu_s390.c
 src/cpu/cpu_x86.c
diff --git a/src/conf/schemas/basictypes.rng b/src/conf/schemas/basictypes.rng
index 26eb538077..04f032b3ab 100644
--- a/src/conf/schemas/basictypes.rng
+++ b/src/conf/schemas/basictypes.rng
@@ -470,6 +470,7 @@
         <value>x86_64</value>
         <value>xtensa</value>
         <value>xtensaeb</value>
+        <value>loongarch64</value>
       </choice>
     </define>
diff --git a/src/cpu/cpu.c b/src/cpu/cpu.c
index bc43aa4e93..1e7c879ca5 100644
--- a/src/cpu/cpu.c
+++ b/src/cpu/cpu.c
@@ -27,6 +27,7 @@
 #include "cpu_ppc64.h"
 #include "cpu_s390.h"
 #include "cpu_arm.h"
+#include "cpu_loongarch.h"
 #include "cpu_riscv64.h"
 #include "capabilities.h"

@@ -41,6 +42,7 @@ static struct cpuArchDriver *drivers[] = {
     &cpuDriverS390,
     &cpuDriverArm,
     &cpuDriverRiscv64,
+    &cpuDriverLoongArch,
 };
diff --git a/src/cpu/cpu.h b/src/cpu/cpu.h
index a4cdb37f03..9ec0a109b8 100644
--- a/src/cpu/cpu.h
+++ b/src/cpu/cpu.h
@@ -27,6 +27,7 @@
 #include "cpu_x86_data.h"
 #include "cpu_ppc64_data.h"
 #include "cpu_arm_data.h"
+#include "cpu_loongarch_data.h"

 typedef struct _virCPUData virCPUData;
@@ -36,6 +37,7 @@ struct _virCPUData {
         virCPUx86Data x86;
         virCPUppc64Data ppc64;
         virCPUarmData arm;
+        virCPULoongArchData loongarch;
         /* generic driver needs no data */
     } data;
 };
diff --git a/src/cpu/cpu_loongarch.c b/src/cpu/cpu_loongarch.c
new file mode 100644
index 0000000000..0f96535606
--- /dev/null
+++ b/src/cpu/cpu_loongarch.c
@@ -0,0 +1,742 @@
+/*
+ * cpu_loongarch.c: CPU driver for 64-bit LOONGARCH CPUs
+ *
+ * Copyright (C) 2023 Loongson Technology.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#include <config.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include "virlog.h"
+#include "viralloc.h"
+#include "cpu.h"
+#include "virstring.h"
+#include "cpu_map.h"
+#include "virbuffer.h"
+
+#define VIR_FROM_THIS VIR_FROM_CPU
+
+VIR_LOG_INIT("cpu.cpu_loongarch");
+
+static const virArch archs[] = { VIR_ARCH_LOONGARCH64 };
+
+typedef struct _virCPULoongArchVendor virCPULoongArchVendor;
+struct _virCPULoongArchVendor {
+    char *name;
+};
+
+typedef struct _virCPULoongArchModel virCPULoongArchModel;
+struct _virCPULoongArchModel {
+    char *name;
+    const virCPULoongArchVendor *vendor;
+    virCPULoongArchData data;
+};
+
+typedef struct _virCPULoongArchMap virCPULoongArchMap;
+struct _virCPULoongArchMap {
+    size_t nvendors;
+    virCPULoongArchVendor **vendors;
+    size_t nmodels;
+    virCPULoongArchModel **models;
+};
+
+static void
+virCPULoongArchDataClear(virCPULoongArchData *data)
+{
+    if (!data)
+        return;
+
+    VIR_FREE(data->prid);
+}
+
+static int
+virCPULoongArchDataCopy(virCPULoongArchData *dst,
+                        const virCPULoongArchData *src)
+{
+    size_t i;
+
+    dst->prid = g_new0(virCPULoongArchPrid, src->len);
+    if (!dst->prid)
+        return -1;
+
+    dst->len = src->len;
+
+    for (i = 0; i < src->len; i++) {
+        dst->prid[i].value = src->prid[i].value;
+        dst->prid[i].mask = src->prid[i].mask;
+    }
+
+    return 0;
+}
+
+static void
+virCPULoongArchVendorFree(virCPULoongArchVendor *vendor)
+{
+    if (!vendor)
+        return;
+
+    VIR_FREE(vendor->name);
+    VIR_FREE(vendor);
+}
+
+static virCPULoongArchVendor *
+virCPULoongArchVendorFind(const virCPULoongArchMap *map,
+                          const char *name)
+{
+    size_t i;
+
+    for (i = 0; i < map->nvendors; i++) {
+        if (STREQ(map->vendors[i]->name, name))
+            return map->vendors[i];
+    }
+
+    return NULL;
+}
+
+static void
+virCPULoongArchModelFree(virCPULoongArchModel *model)
+{
+    if (!model)
+        return;
+
+    virCPULoongArchDataClear(&model->data);
+    VIR_FREE(model->name);
+    VIR_FREE(model);
+}
+
+static virCPULoongArchModel *
+virCPULoongArchModelCopy(const virCPULoongArchModel *model)
+{
+    virCPULoongArchModel *copy;
+
+    copy = g_new0(virCPULoongArchModel, 1);
+    if (!copy)
+        goto cleanup;
+
+    copy->name = g_strdup(model->name);
+
+    if (virCPULoongArchDataCopy(&copy->data, &model->data) < 0)
+        goto cleanup;
+
+    copy->vendor = model->vendor;
+
+    return copy;
+
+ cleanup:
+    virCPULoongArchModelFree(copy);
+    return NULL;
+}
+
+static virCPULoongArchModel *
+virCPULoongArchModelFind(const virCPULoongArchMap *map,
+                         const char *name)
+{
+    size_t i;
+
+    for (i = 0; i < map->nmodels; i++) {
+        if (STREQ(map->models[i]->name, name))
+            return map->models[i];
+    }
+
+    return NULL;
+}
+
+static virCPULoongArchModel *
+virCPULoongArchModelFindPrid(const virCPULoongArchMap *map,
+                             uint32_t prid)
+{
+    size_t i;
+    size_t j;
+
+    for (i = 0; i < map->nmodels; i++) {
+        virCPULoongArchModel *model = map->models[i];
+        for (j = 0; j < model->data.len; j++) {
+            if ((prid & model->data.prid[j].mask) == model->data.prid[j].value)
+                return model;
+        }
+    }
+
+    return NULL;
+}
+
+static virCPULoongArchModel *
+virCPULoongArchModelFromCPU(const virCPUDef *cpu,
+                            const virCPULoongArchMap *map)
+{
+    virCPULoongArchModel *model;
+
+    if (!cpu->model) {
+        virReportError(VIR_ERR_INVALID_ARG, "%s",
+                       _("no CPU model specified"));
+        return NULL;
+    }
+
+    if (!(model = virCPULoongArchModelFind(map, cpu->model))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Unknown CPU model %1$s"), cpu->model);
+        return NULL;
+    }
+
+    return virCPULoongArchModelCopy(model);
+}
+
+static void
+virCPULoongArchMapFree(virCPULoongArchMap *map)
+{
+    size_t i;
+
+    if (!map)
+        return;
+
+    for (i = 0; i < map->nmodels; i++)
+        virCPULoongArchModelFree(map->models[i]);
+    VIR_FREE(map->models);
+
+    for (i = 0; i < map->nvendors; i++)
+        virCPULoongArchVendorFree(map->vendors[i]);
+    VIR_FREE(map->vendors);
+
+    VIR_FREE(map);
+}
+
+static int
+virCPULoongArchVendorParse(xmlXPathContextPtr ctxt ATTRIBUTE_UNUSED,
+                           const char *name,
+                           void *data)
+{
+    virCPULoongArchMap *map = data;
+    virCPULoongArchVendor *vendor;
+    int ret = -1;
+
+    vendor = g_new0(virCPULoongArchVendor, 1);
+    if (!vendor)
+        return ret;
+    vendor->name = g_strdup(name);
+
+    if (virCPULoongArchVendorFind(map, vendor->name)) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("CPU vendor %1$s already defined"), vendor->name);
+        goto cleanup;
+    }
+
+    VIR_APPEND_ELEMENT(map->vendors, map->nvendors, vendor);
+
+    ret = 0;
+
+ cleanup:
+    virCPULoongArchVendorFree(vendor);
+    return ret;
+}
+
+static int
+virCPULoongArchModelParse(xmlXPathContextPtr ctxt,
+                          const char *name,
+                          void *data)
+{
+    virCPULoongArchMap *map = data;
+    virCPULoongArchModel *model;
+    xmlNodePtr *nodes = NULL;
+    char *vendor = NULL;
+    uint32_t prid;
+    size_t i;
+    int n;
+    int ret = -1;
+
+    model = g_new0(virCPULoongArchModel, 1);
+    if (!model)
+        goto cleanup;
+
+    model->name = g_strdup(name);
+
+    if (virCPULoongArchModelFind(map, model->name)) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("CPU model %1$s already defined"), model->name);
+        goto cleanup;
+    }
+
+    if (virXPathBoolean("boolean(./vendor)", ctxt)) {
+        vendor = virXPathString("string(./vendor/@name)", ctxt);
+        if (!vendor) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Invalid vendor element in CPU model %1$s"),
+                           model->name);
+            goto cleanup;
+        }
+
+        if (!(model->vendor = virCPULoongArchVendorFind(map, vendor))) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Unknown vendor %1$s referenced by CPU model %2$s"),
+                           vendor, model->name);
+            goto cleanup;
+        }
+    }
+
+    if ((n = virXPathNodeSet("./prid", ctxt, &nodes)) <= 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Missing Prid information for CPU model %1$s"),
+                       model->name);
+        goto cleanup;
+    }
+
+    model->data.prid = g_new0(virCPULoongArchPrid, n);
+    if (!model->data.prid)
+        goto cleanup;
+
+    model->data.len = n;
+
+    for (i = 0; i < n; i++) {
+
+        if (virXMLPropUInt(nodes[i], "value", 16, VIR_XML_PROP_REQUIRED,
+                           &prid) < 0)
+        {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Missing or invalid Prid value in CPU model %1$s"),
+                           model->name);
+            goto cleanup;
+        }
+        model->data.prid[i].value = prid;
+
+        if (virXMLPropUInt(nodes[i], "mask", 16, VIR_XML_PROP_REQUIRED,
+                           &prid) < 0)
+        {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Missing or invalid PVR mask in CPU model %1$s"),
+                           model->name);
+            goto cleanup;
+        }
+        model->data.prid[i].mask = prid;
+    }
+
+    VIR_APPEND_ELEMENT(map->models, map->nmodels, model);
+
+    ret = 0;
+
+ cleanup:
+    virCPULoongArchModelFree(model);
+    VIR_FREE(vendor);
+    VIR_FREE(nodes);
+    return ret;
+}
+
+static virCPULoongArchMap *
+virCPULoongArchLoadMap(void)
+{
+    virCPULoongArchMap *map;
+
+    map = g_new0(virCPULoongArchMap, 1);
+    if (!map)
+        goto cleanup;
+
+    if (cpuMapLoad("loongarch64", virCPULoongArchVendorParse, NULL,
+                   virCPULoongArchModelParse, map) < 0)
+        goto cleanup;
+
+    return map;
+
+ cleanup:
+    virCPULoongArchMapFree(map);
+    return NULL;
+}
+
+static virCPUData *
+virCPULoongArchMakeCPUData(virArch arch,
+                           virCPULoongArchData *data)
+{
+    virCPUData *cpuData;
+
+    cpuData = g_new0(virCPUData, 1);
+    if (!cpuData)
+        return NULL;
+
+    cpuData->arch = arch;
+
+    if (virCPULoongArchDataCopy(&cpuData->data.loongarch, data) < 0)
+        VIR_FREE(cpuData);
+
+    return cpuData;
+}
+
+static virCPUCompareResult
+virCPULoongArchCompute(virCPUDef *host,
+                       const virCPUDef *other,
+                       virCPUData **guestData,
+                       char **message)
+{
+    virCPULoongArchMap *map = NULL;
+    virCPULoongArchModel *host_model = NULL;
+    virCPULoongArchModel *guest_model = NULL;
+    virCPUDef *cpu = NULL;
+    virCPUCompareResult ret = VIR_CPU_COMPARE_ERROR;
+    virArch arch;
+    size_t i;
+
+    /* Ensure existing configurations are handled correctly */
+    if (!(cpu = virCPUDefCopy(other)))
+        goto cleanup;
+
+    if (cpu->arch != VIR_ARCH_NONE) {
+        bool found = false;
+
+        for (i = 0; i < G_N_ELEMENTS(archs); i++) {
+            if (archs[i] == cpu->arch) {
+                found = true;
+                break;
+            }
+        }
+
+        if (!found) {
+            VIR_DEBUG("CPU arch %s does not match host arch",
+                      virArchToString(cpu->arch));
+            if (message) {
+                *message = g_strdup_printf(_("CPU arch %1$s does not match host arch"),
+                                           virArchToString(cpu->arch));
+            }
+            ret = VIR_CPU_COMPARE_INCOMPATIBLE;
+            goto cleanup;
+        }
+        arch = cpu->arch;
+    } else {
+        arch = host->arch;
+    }
+
+    if (cpu->vendor &&
+        (!host->vendor || STRNEQ(cpu->vendor, host->vendor))) {
+        VIR_DEBUG("host CPU vendor does not match required CPU vendor %s",
+                  cpu->vendor);
+        if (message) {
+            *message = g_strdup_printf(_("host CPU vendor does not match required CPU vendor %1$s"),
+                                       cpu->vendor);
+        }
+        ret = VIR_CPU_COMPARE_INCOMPATIBLE;
+        goto cleanup;
+    }
+
+    if (!(map = virCPULoongArchLoadMap()))
+        goto cleanup;
+
+    /* Host CPU information */
+    if (!(host_model = virCPULoongArchModelFromCPU(host, map)))
+        goto cleanup;
+
+    if (cpu->type == VIR_CPU_TYPE_GUEST) {
+        /* Guest CPU information */
+        switch (cpu->mode) {
+        case VIR_CPU_MODE_HOST_MODEL:
+        case VIR_CPU_MODE_HOST_PASSTHROUGH:
+            /* host-model and host-passthrough:
+             * the guest CPU is the same as the host */
+            guest_model = virCPULoongArchModelCopy(host_model);
+            break;
+
+        case VIR_CPU_MODE_CUSTOM:
+            /* custom:
+             * look up guest CPU information */
+            guest_model = virCPULoongArchModelFromCPU(cpu, map);
+            break;
+        }
+    } else {
+        /* Other host CPU information */
+        guest_model = virCPULoongArchModelFromCPU(cpu, map);
+    }
+
+    if (!guest_model)
+        goto cleanup;
+
+    if (STRNEQ(guest_model->name, host_model->name)) {
+        VIR_DEBUG("host CPU model does not match required CPU model %s",
+                  guest_model->name);
+        if (message) {
+            *message = g_strdup_printf(_("host CPU model does not match required CPU model %1$s"),
+                                       guest_model->name);
+        }
+        ret = VIR_CPU_COMPARE_INCOMPATIBLE;
+        goto cleanup;
+    }
+
+    if (guestData)
+        if (!(*guestData = virCPULoongArchMakeCPUData(arch, &guest_model->data)))
+            goto cleanup;
+
+    ret = VIR_CPU_COMPARE_IDENTICAL;
+
+ cleanup:
+    virCPUDefFree(cpu);
+    virCPULoongArchMapFree(map);
+    virCPULoongArchModelFree(host_model);
+    virCPULoongArchModelFree(guest_model);
+    return ret;
+}
+
+static virCPUCompareResult
+virCPULoongArchCompare(virCPUDef *host,
+                       virCPUDef *cpu,
+                       bool failIncompatible)
+{
+    virCPUCompareResult ret;
+    char *message = NULL;
+
+    if (!host || !host->model) {
+        if (failIncompatible) {
+            virReportError(VIR_ERR_CPU_INCOMPATIBLE, "%s",
+                           _("unknown host CPU"));
+        } else {
+            VIR_WARN("unknown host CPU");
+            ret = VIR_CPU_COMPARE_INCOMPATIBLE;
+        }
+        return -1;
+    }
+
+    ret = virCPULoongArchCompute(host, cpu, NULL, &message);
+
+    if (failIncompatible && ret == VIR_CPU_COMPARE_INCOMPATIBLE) {
+        ret = VIR_CPU_COMPARE_ERROR;
+        if (message) {
+            virReportError(VIR_ERR_CPU_INCOMPATIBLE, "%s", message);
+        } else {
+            virReportError(VIR_ERR_CPU_INCOMPATIBLE, NULL);
+        }
+    }
+    VIR_FREE(message);
+
+    return ret;
+}
+
+static int
+virCPULoongArchDriverDecode(virCPUDef *cpu,
+                            const virCPUData *data,
+                            virDomainCapsCPUModels *models)
+{
+    int ret = -1;
+    virCPULoongArchMap *map;
+    const virCPULoongArchModel *model;
+
+    if (!data || !(map = virCPULoongArchLoadMap()))
+        return -1;
+
+    if (!(model = virCPULoongArchModelFindPrid(map, data->data.loongarch.prid[0].value))) {
+        virReportError(VIR_ERR_OPERATION_FAILED,
+                       _("Cannot find CPU model with Prid 0x%1$08x"),
+                       data->data.loongarch.prid[0].value);
+        goto cleanup;
+    }
+
+    if (!virCPUModelIsAllowed(model->name, models)) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                       _("CPU model %1$s is not supported by hypervisor"),
+                       model->name);
+        goto cleanup;
+    }
+
+    cpu->model = g_strdup(model->name);
+    if (model->vendor) {
+        cpu->vendor = g_strdup(model->vendor->name);
+    }
+    ret = 0;
+
+ cleanup:
+    virCPULoongArchMapFree(map);
+
+    return ret;
+}
+
+static void
+virCPULoongArchDataFree(virCPUData *data)
+{
+    if (!data)
+        return;
+
+    virCPULoongArchDataClear(&data->data.loongarch);
+    VIR_FREE(data);
+}
+
+static int
+virCPULoongArchGetHostPRID(void)
+{
+    return 0x14c010;
+}
+
+static int
+virCPULoongArchGetHost(virCPUDef *cpu,
+                       virDomainCapsCPUModels *models)
+{
+    virCPUData *cpuData = NULL;
+    virCPULoongArchData *data;
+    int ret = -1;
+
+    if (!(cpuData = virCPUDataNew(archs[0])))
+        goto cleanup;
+
+    data = &cpuData->data.loongarch;
+    data->prid = g_new0(virCPULoongArchPrid, 1);
+    if (!data->prid)
+        goto cleanup;
+
+
+    data->len = 1;
+
+    data->prid[0].value = virCPULoongArchGetHostPRID();
+    data->prid[0].mask = 0xffff00ul;
+
+    ret = virCPULoongArchDriverDecode(cpu, cpuData, models);
+
+ cleanup:
+    virCPULoongArchDataFree(cpuData);
+    return ret;
+}
+
+
+static int
+virCPULoongArchUpdate(virCPUDef *guest,
+                      const virCPUDef *host ATTRIBUTE_UNUSED,
+                      bool relative G_GNUC_UNUSED)
+{
+    /*
+     * - host-passthrough doesn't even get here
+     * - host-model is used for host CPU running in a compatibility mode and
+     *   it needs to remain unchanged
+     * - custom doesn't support any optional features, there's nothing to
+     *   update
+     */
+
+    if (guest->mode == VIR_CPU_MODE_CUSTOM)
+        guest->match = VIR_CPU_MATCH_EXACT;
+
+    return 0;
+}
+
+static virCPUDef *
+virCPULoongArchDriverBaseline(virCPUDef **cpus,
+                          unsigned int ncpus,
+                          virDomainCapsCPUModels *models ATTRIBUTE_UNUSED,
+                          const char **features ATTRIBUTE_UNUSED,
+                          bool migratable ATTRIBUTE_UNUSED)
+{
+    virCPULoongArchMap *map;
+    const virCPULoongArchModel *model;
+    const virCPULoongArchVendor *vendor = NULL;
+    virCPUDef *cpu = NULL;
+    size_t i;
+
+    if (!(map = virCPULoongArchLoadMap()))
+        goto error;
+
+    if (!(model = virCPULoongArchModelFind(map, cpus[0]->model))) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Unknown CPU model %1$s"), cpus[0]->model);
+        goto error;
+    }
+
+    for (i = 0; i < ncpus; i++) {
+        const virCPULoongArchVendor *vnd;
+
+        if (STRNEQ(cpus[i]->model, model->name)) {
+            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                           _("CPUs are incompatible"));
+            goto error;
+        }
+
+        if (!cpus[i]->vendor)
+            continue;
+
+        if (!(vnd = virCPULoongArchVendorFind(map, cpus[i]->vendor))) {
+            virReportError(VIR_ERR_OPERATION_FAILED,
+                           _("Unknown CPU vendor %1$s"), cpus[i]->vendor);
+            goto error;
+        }
+
+        if (model->vendor) {
+            if (model->vendor != vnd) {
+                virReportError(VIR_ERR_OPERATION_FAILED,
+                               _("CPU vendor %1$s of model %2$s differs from vendor %3$s"),
+                               model->vendor->name, model->name,
+                               vnd->name);
+                goto error;
+            }
+        } else if (vendor) {
+            if (vendor != vnd) {
+                virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+                               _("CPU vendors do not match"));
+                goto error;
+            }
+        } else {
+            vendor = vnd;
+        }
+    }
+
+    cpu = virCPUDefNew();
+    cpu->model = g_strdup(model->name);
+    if (vendor) {
+        cpu->vendor = g_strdup(vendor->name);
+    }
+    cpu->type = VIR_CPU_TYPE_GUEST;
+    cpu->match = VIR_CPU_MATCH_EXACT;
+    cpu->fallback = VIR_CPU_FALLBACK_FORBID;
+
+ cleanup:
+    virCPULoongArchMapFree(map);
+    return cpu;
+
+ error:
+    virCPUDefFree(cpu);
+    cpu = NULL;
+    goto cleanup;
+}
+
+static int
+virCPULoongArchDriverGetModels(char ***models)
+{
+    virCPULoongArchMap *map;
+    size_t i;
+    int ret = -1;
+
+    if (!(map = virCPULoongArchLoadMap())) {
+        return -1;
+    }
+
+    if (models) {
+        *models = g_new0(char *, map->nmodels + 1);
+        if (!(*models))
+            return -1;
+
+        for (i = 0; i < map->nmodels; i++) {
+            (*models)[i] = g_strdup(map->models[i]->name);
+        }
+    }
+
+    ret = map->nmodels;
+
+    return ret;
+}
+
+struct cpuArchDriver cpuDriverLoongArch = {
+    .name = "LoongArch",
+    .arch = archs,
+    .narch = G_N_ELEMENTS(archs),
+    .compare = virCPULoongArchCompare,
+    .decode = virCPULoongArchDriverDecode,
+    .encode = NULL,
+    .dataFree = virCPULoongArchDataFree,
+    .getHost = virCPULoongArchGetHost,
+    .baseline = virCPULoongArchDriverBaseline,
+    .update = virCPULoongArchUpdate,
+    .getModels = virCPULoongArchDriverGetModels,
+};
diff --git a/src/cpu/cpu_loongarch.h b/src/cpu/cpu_loongarch.h
new file mode 100644
index 0000000000..bebc16a242
--- /dev/null
+++ b/src/cpu/cpu_loongarch.h
@@ -0,0 +1,25 @@
+/*
+ * cpu_loongarch.h: CPU driver for 64-bit LOONGARCH CPUs
+ *
+ * Copyright (C) 2023 Loongson Technology.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library. If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+# include "cpu.h"
+
+extern struct cpuArchDriver cpuDriverLoongArch;
diff --git a/src/cpu/cpu_loongarch_data.h b/src/cpu/cpu_loongarch_data.h
new file mode 100644
index 0000000000..43ae044838
--- /dev/null
+++ b/src/cpu/cpu_loongarch_data.h
@@ -0,0 +1,37 @@
+/*
+ * cpu_loongarch_data.h: 64-bit LOONGARCH CPU specific data
+ *
+ * Copyright (C) 2023 Loongson Technology.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see
+ * <http://www.gnu.org/licenses/>.
+ */
+
+#pragma once
+
+# include <stdint.h>
+
+typedef struct _virCPULoongArchPrid virCPULoongArchPrid;
+struct _virCPULoongArchPrid {
+    uint32_t value;
+    uint32_t mask;
+};
+
+# define VIR_CPU_LOONGARCH_DATA_INIT { 0 }
+
+typedef struct _virCPULoongArchData virCPULoongArchData;
+struct _virCPULoongArchData {
+    size_t len;
+    virCPULoongArchPrid *prid;
+};
diff --git a/src/cpu/meson.build b/src/cpu/meson.build
index 55396903b9..254d6b4545 100644
--- a/src/cpu/meson.build
+++ b/src/cpu/meson.build
@@ -6,6 +6,7 @@ cpu_sources = [
   'cpu_riscv64.c',
   'cpu_s390.c',
   'cpu_x86.c',
+  'cpu_loongarch.c'
 ]

 cpu_lib = static_library(
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 83119e871a..118d3429c3 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2697,6 +2697,7 @@ static const char *preferredMachines[] =
     "sim", /* VIR_ARCH_XTENSA */
     "sim", /* VIR_ARCH_XTENSAEB */
+    "virt", /* VIR_ARCH_LOONGARCH64 */
 };

 G_STATIC_ASSERT(G_N_ELEMENTS(preferredMachines) == VIR_ARCH_LAST);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 953808fcfe..00e38950b6 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -4222,6 +4222,10 @@ qemuDomainDefAddDefaultDevices(virQEMUDriver *driver,
         addPCIRoot = true;
         break;

+    case VIR_ARCH_LOONGARCH64:
+        addPCIeRoot = true;
+        break;
+
     case VIR_ARCH_ARMV7B:
     case VIR_ARCH_CRIS:
     case VIR_ARCH_ITANIUM:
diff --git a/src/util/virarch.c b/src/util/virarch.c
index 01e520de73..289bd80d90 100644
--- a/src/util/virarch.c
+++ b/src/util/virarch.c
@@ -83,6 +83,8 @@ static const struct virArchData {
     { "xtensa", 32, VIR_ARCH_LITTLE_ENDIAN },
     { "xtensaeb", 32, VIR_ARCH_BIG_ENDIAN },
+
+    { "loongarch64", 64, VIR_ARCH_LITTLE_ENDIAN },
 };

 G_STATIC_ASSERT(G_N_ELEMENTS(virArchData) == VIR_ARCH_LAST);
diff --git a/src/util/virarch.h b/src/util/virarch.h
index 747f77c48e..638e519fe6 100644
--- a/src/util/virarch.h
+++ b/src/util/virarch.h
@@ -69,6 +69,8 @@ typedef enum {
     VIR_ARCH_XTENSA, /* XTensa 32 LE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
     VIR_ARCH_XTENSAEB, /* XTensa 32 BE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
+    VIR_ARCH_LOONGARCH64, /* LoongArch 64 LE */
+
     VIR_ARCH_LAST,
 } virArch;

@@ -106,6 +108,8 @@ typedef enum {
 #define ARCH_IS_SH4(arch) ((arch) == VIR_ARCH_SH4 ||\
                            (arch) == VIR_ARCH_SH4EB)

+#define ARCH_IS_LOONGARCH(arch) ((arch) == VIR_ARCH_LOONGARCH64)
+
 typedef enum {
     VIR_ARCH_LITTLE_ENDIAN,
     VIR_ARCH_BIG_ENDIAN,
-- 
2.27.0

On Thu, Dec 14, 2023 at 02:08:45PM +0800, xianglai li wrote:
From: lixianglai <lixianglai@loongson.cn>
Add loongarch cpu support, define new cpu type 'loongarch64' and implement its driver functions.
Signed-off-by: lixianglai <lixianglai@loongson.cn>
We usually prefer the full name to be used both for git authorship and DCO purposes. I believe this would be "Xianglai Li" in your case.
---
 po/POTFILES                     |   1 +
 src/conf/schemas/basictypes.rng |   1 +
 src/cpu/cpu.c                   |   2 +
 src/cpu/cpu.h                   |   2 +
 src/cpu/cpu_loongarch.c         | 742 ++++++++++++++++++++++++++++++++
 src/cpu/cpu_loongarch.h         |  25 ++
 src/cpu/cpu_loongarch_data.h    |  37 ++
 src/cpu/meson.build             |   1 +
 src/qemu/qemu_capabilities.c    |   1 +
 src/qemu/qemu_domain.c          |   4 +
 src/util/virarch.c              |   2 +
 src/util/virarch.h              |   4 +
 12 files changed, 822 insertions(+)
 create mode 100644 src/cpu/cpu_loongarch.c
 create mode 100644 src/cpu/cpu_loongarch.h
 create mode 100644 src/cpu/cpu_loongarch_data.h
This patch breaks the test suite:

$ make -C .../libvirt/build/build-aux sc_preprocessor_indentation
make: Entering directory '.../libvirt/build/build-aux'
cppi: .../libvirt/src/cpu/cpu_loongarch.h: line 23: not properly indented
cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 23: not properly indented
cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 31: not properly indented
incorrect preprocessor indentation
make: *** [.../libvirt/build-aux/syntax-check.mk:500: sc_preprocessor_indentation] Error 1
make: Leaving directory '.../libvirt/build/build-aux'

Should be an easy enough fix. Please make sure that 'meson test' runs successfully after every single patch in the series, and that you have optional test tools such as cppi installed.
+++ b/src/conf/schemas/basictypes.rng
@@ -470,6 +470,7 @@
         <value>x86_64</value>
         <value>xtensa</value>
         <value>xtensaeb</value>
+        <value>loongarch64</value>
This list is sorted alphabetically; please ensure that it remains that way after your changes. Not all lists in libvirt are sorted alphabetically, but generally speaking if you see one that is you should keep it that way.
+++ b/src/cpu/cpu_loongarch.c
@@ -0,0 +1,742 @@
+static const virArch archs[] = { VIR_ARCH_LOONGARCH64 };
+
+typedef struct _virCPULoongArchVendor virCPULoongArchVendor;
+struct _virCPULoongArchVendor {
+    char *name;
+};
+
+typedef struct _virCPULoongArchModel virCPULoongArchModel;
+struct _virCPULoongArchModel {
+    char *name;
+    const virCPULoongArchVendor *vendor;
+    virCPULoongArchData data;
+};
+
+typedef struct _virCPULoongArchMap virCPULoongArchMap;
+struct _virCPULoongArchMap {
+    size_t nvendors;
+    virCPULoongArchVendor **vendors;
+    size_t nmodels;
+    virCPULoongArchModel **models;
+};
This CPU driver appears to be directly modeled after the ppc64 driver. I wonder if all the complexity is necessary at this point in time? Wouldn't it perhaps be better to start with a very bare-bone CPU driver, modeled after the riscv64 one, and then grow from there as the demand for more advanced features becomes apparent?
+static int
+virCPULoongArchGetHostPRID(void)
+{
+    return 0x14c010;
+}
Hardcoding the host CPU's PRID...
+static int
+virCPULoongArchGetHost(virCPUDef *cpu,
+                       virDomainCapsCPUModels *models)
+{
+    virCPUData *cpuData = NULL;
+    virCPULoongArchData *data;
+    int ret = -1;
+
+    if (!(cpuData = virCPUDataNew(archs[0])))
+        goto cleanup;
+
+    data = &cpuData->data.loongarch;
+    data->prid = g_new0(virCPULoongArchPrid, 1);
+    if (!data->prid)
+        goto cleanup;
+
+
+    data->len = 1;
+
+    data->prid[0].value = virCPULoongArchGetHostPRID();
+    data->prid[0].mask = 0xffff00ul;
... and corresponding mask is definitely not acceptable. You'll need to implement a function that fetches the value dynamically by using whatever mechanism is appropriate, and of course ensure that such code is only ever run on a loongarch64 host. But again, do we really need that complexity right now? The riscv64 driver doesn't have any of that and is usable for many purposes.
+static virCPUDef *
+virCPULoongArchDriverBaseline(virCPUDef **cpus,
+                          unsigned int ncpus,
+                          virDomainCapsCPUModels *models ATTRIBUTE_UNUSED,
+                          const char **features ATTRIBUTE_UNUSED,
+                          bool migratable ATTRIBUTE_UNUSED)
The function arguments are not aligned properly here. There are several other instances of this. Please make sure that things are aligned correctly throughout.
diff --git a/src/cpu/meson.build b/src/cpu/meson.build
index 55396903b9..254d6b4545 100644
--- a/src/cpu/meson.build
+++ b/src/cpu/meson.build
@@ -6,6 +6,7 @@ cpu_sources = [
   'cpu_riscv64.c',
   'cpu_s390.c',
   'cpu_x86.c',
+  'cpu_loongarch.c'
 ]
This is another list that needs to remain sorted...
+++ b/src/util/virarch.h
@@ -69,6 +69,8 @@ typedef enum {
     VIR_ARCH_XTENSA, /* XTensa 32 LE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
     VIR_ARCH_XTENSAEB, /* XTensa 32 BE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
+    VIR_ARCH_LOONGARCH64, /* LoongArch 64 LE */
+
     VIR_ARCH_LAST,
 } virArch;
... as is this one and those that are derived from it, including preferredMachines.

-- 
Andrea Bolognani / Red Hat / Virtualization

Hi Andrea:
On Thu, Dec 14, 2023 at 02:08:45PM +0800, xianglai li wrote:
From: lixianglai <lixianglai@loongson.cn>
Add loongarch cpu support: define the new cpu type 'loongarch64' and implement its driver functions.
Signed-off-by: lixianglai <lixianglai@loongson.cn>

We usually prefer the full name to be used both for git authorship and DCO purposes. I believe this would be "Xianglai Li" in your case.
Ok, I'll correct it in the next version :)
---
 po/POTFILES                     |   1 +
 src/conf/schemas/basictypes.rng |   1 +
 src/cpu/cpu.c                   |   2 +
 src/cpu/cpu.h                   |   2 +
 src/cpu/cpu_loongarch.c         | 742 ++++++++++++++++++++++++++++++++
 src/cpu/cpu_loongarch.h         |  25 ++
 src/cpu/cpu_loongarch_data.h    |  37 ++
 src/cpu/meson.build             |   1 +
 src/qemu/qemu_capabilities.c    |   1 +
 src/qemu/qemu_domain.c          |   4 +
 src/util/virarch.c              |   2 +
 src/util/virarch.h              |   4 +
 12 files changed, 822 insertions(+)
 create mode 100644 src/cpu/cpu_loongarch.c
 create mode 100644 src/cpu/cpu_loongarch.h
 create mode 100644 src/cpu/cpu_loongarch_data.h

This patch breaks the test suite:
$ make -C .../libvirt/build/build-aux sc_preprocessor_indentation
make: Entering directory '.../libvirt/build/build-aux'
cppi: .../libvirt/src/cpu/cpu_loongarch.h: line 23: not properly indented
cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 23: not properly indented
cppi: .../libvirt/src/cpu/cpu_loongarch_data.h: line 31: not properly indented
incorrect preprocessor indentation
make: *** [.../libvirt/build-aux/syntax-check.mk:500: sc_preprocessor_indentation] Error 1
make: Leaving directory '.../libvirt/build/build-aux'
Should be an easy enough fix.
Please make sure that 'meson test' runs successfully after every single patch in the series, and that you have optional test tools such as cppi installed.
Ok, I think the problem was not detected during 'meson test' because I did not have cppi installed. I will install cppi and run 'meson test' again for each patch.
+++ b/src/conf/schemas/basictypes.rng
@@ -470,6 +470,7 @@
         <value>x86_64</value>
         <value>xtensa</value>
         <value>xtensaeb</value>
+        <value>loongarch64</value>

This list is sorted alphabetically; please ensure that it remains that way after your changes. Not all lists in libvirt are sorted alphabetically, but generally speaking if you see one that is you should keep it that way.
Ok, I'll correct it in the next version.
+++ b/src/cpu/cpu_loongarch.c
@@ -0,0 +1,742 @@
+static const virArch archs[] = { VIR_ARCH_LOONGARCH64 };
+
+typedef struct _virCPULoongArchVendor virCPULoongArchVendor;
+struct _virCPULoongArchVendor {
+    char *name;
+};
+
+typedef struct _virCPULoongArchModel virCPULoongArchModel;
+struct _virCPULoongArchModel {
+    char *name;
+    const virCPULoongArchVendor *vendor;
+    virCPULoongArchData data;
+};
+
+typedef struct _virCPULoongArchMap virCPULoongArchMap;
+struct _virCPULoongArchMap {
+    size_t nvendors;
+    virCPULoongArchVendor **vendors;
+    size_t nmodels;
+    virCPULoongArchModel **models;
+};

This CPU driver appears to be directly modeled after the ppc64 driver. I wonder if all the complexity is necessary at this point in time? Wouldn't it perhaps be better to start with a very bare-bone CPU driver, modeled after the riscv64 one, and then grow from there as the demand for more advanced features becomes apparent?
Well, I think I will try to refer to riscv64 for cpu driver implementation in the next version.
+static int
+virCPULoongArchGetHostPRID(void)
+{
+    return 0x14c010;
+}

Hardcoding the host CPU's PRID...
+static int
+virCPULoongArchGetHost(virCPUDef *cpu,
+                       virDomainCapsCPUModels *models)
+{
+    virCPUData *cpuData = NULL;
+    virCPULoongArchData *data;
+    int ret = -1;
+
+    if (!(cpuData = virCPUDataNew(archs[0])))
+        goto cleanup;
+
+    data = &cpuData->data.loongarch;
+    data->prid = g_new0(virCPULoongArchPrid, 1);
+    if (!data->prid)
+        goto cleanup;
+
+
+    data->len = 1;
+
+    data->prid[0].value = virCPULoongArchGetHostPRID();
+    data->prid[0].mask = 0xffff00ul;

... and corresponding mask is definitely not acceptable. You'll need to implement a function that fetches the value dynamically by using whatever mechanism is appropriate, and of course ensure that such code is only ever run on a loongarch64 host.
But again, do we really need that complexity right now? The riscv64 driver doesn't have any of that and is usable for many purposes.
Okay, the hard coding here is indeed inappropriate. I'm not sure whether we can do without the complexity entirely, but I will try to simplify this.
+static virCPUDef *
+virCPULoongArchDriverBaseline(virCPUDef **cpus,
+                             unsigned int ncpus,
+                             virDomainCapsCPUModels *models ATTRIBUTE_UNUSED,
+                             const char **features ATTRIBUTE_UNUSED,
+                             bool migratable ATTRIBUTE_UNUSED)

The function arguments are not aligned properly here. There are several other instances of this. Please make sure that things are aligned correctly throughout.
Ok, I will do a code check according to the suggestion.
diff --git a/src/cpu/meson.build b/src/cpu/meson.build
index 55396903b9..254d6b4545 100644
--- a/src/cpu/meson.build
+++ b/src/cpu/meson.build
@@ -6,6 +6,7 @@ cpu_sources = [
   'cpu_riscv64.c',
   'cpu_s390.c',
   'cpu_x86.c',
+  'cpu_loongarch.c'
 ]

This is another list that needs to remain sorted...
Ok, I'll fix it in the next version.
+++ b/src/util/virarch.h
@@ -69,6 +69,8 @@ typedef enum {
     VIR_ARCH_XTENSA,        /* XTensa 32 LE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
     VIR_ARCH_XTENSAEB,      /* XTensa 32 BE https://en.wikipedia.org/wiki/Xtensa#Processor_Cores */
+    VIR_ARCH_LOONGARCH64,   /* LoongArch 64 LE */
+
     VIR_ARCH_LAST,
 } virArch;

... as is this one and those that are derived from it, including preferredMachines.
Ok, I'll fix it in the next version. Thanks, Xianglai.

From: lixianglai <lixianglai@loongson.cn>

Define loongarch cpu model type and vendor id in cpu_map loongarch xml.

Signed-off-by: lixianglai <lixianglai@loongson.cn>
---
 src/cpu_map/index.xml             | 5 +++++
 src/cpu_map/loongarch_la464.xml   | 6 ++++++
 src/cpu_map/loongarch_vendors.xml | 3 +++
 src/cpu_map/meson.build           | 2 ++
 4 files changed, 16 insertions(+)
 create mode 100644 src/cpu_map/loongarch_la464.xml
 create mode 100644 src/cpu_map/loongarch_vendors.xml

diff --git a/src/cpu_map/index.xml b/src/cpu_map/index.xml
index d2c5af5797..9cfcd1a9c0 100644
--- a/src/cpu_map/index.xml
+++ b/src/cpu_map/index.xml
@@ -119,4 +119,9 @@
     <include filename='arm_FT-2000plus.xml'/>
     <include filename='arm_Tengyun-S2500.xml'/>
   </arch>
+
+  <arch name='loongarch64'>
+    <include filename="loongarch_vendors.xml"/>
+    <include filename="loongarch_la464.xml"/>
+  </arch>
 </cpus>
diff --git a/src/cpu_map/loongarch_la464.xml b/src/cpu_map/loongarch_la464.xml
new file mode 100644
index 0000000000..1029e39681
--- /dev/null
+++ b/src/cpu_map/loongarch_la464.xml
@@ -0,0 +1,6 @@
+<cpus>
+  <model name='la464'>
+    <vendor name='Loongson'/>
+    <prid value='0x14c010' mask='0xfffff0'/>
+  </model>
+</cpus>
diff --git a/src/cpu_map/loongarch_vendors.xml b/src/cpu_map/loongarch_vendors.xml
new file mode 100644
index 0000000000..c744654617
--- /dev/null
+++ b/src/cpu_map/loongarch_vendors.xml
@@ -0,0 +1,3 @@
+<cpus>
+  <vendor name='Loongson'/>
+</cpus>
diff --git a/src/cpu_map/meson.build b/src/cpu_map/meson.build
index ae5293e85f..4ea63188df 100644
--- a/src/cpu_map/meson.build
+++ b/src/cpu_map/meson.build
@@ -84,6 +84,8 @@ cpumap_data = [
   'x86_vendors.xml',
   'x86_Westmere-IBRS.xml',
   'x86_Westmere.xml',
+  'loongarch_vendors.xml',
+  'loongarch_la464.xml',
 ]

 install_data(cpumap_data, install_dir: pkgdatadir / 'cpu_map')
--
2.27.0

From: lixianglai <lixianglai@loongson.cn>

Config some capabilities for loongarch virt machine such as PCI multi bus.

Signed-off-by: lixianglai <lixianglai@loongson.cn>
---
 src/qemu/qemu_capabilities.c   |  5 ++++
 src/qemu/qemu_domain.c         | 28 +++++++++++++++++
 src/qemu/qemu_domain.h         |  1 +
 src/qemu/qemu_domain_address.c | 55 ++++++++++++++++++++++++++++++++++
 src/qemu/qemu_validate.c       |  2 +-
 5 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 118d3429c3..eb84c9da7d 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2080,6 +2080,11 @@ bool virQEMUCapsHasPCIMultiBus(const virDomainDef *def)
         return true;
     }

+    /* loongarch64 support PCI-multibus on all machine types
+     * since forever */
+    if (ARCH_IS_LOONGARCH(def->os.arch))
+        return true;
+
     return false;
 }

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 00e38950b6..a8f04155a3 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5635,6 +5635,11 @@ qemuDomainControllerDefPostParse(virDomainControllerDef *cont,
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
             else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
+        } else if (ARCH_IS_LOONGARCH(def->os.arch)) {
+            if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_QEMU_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
+            else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
         }
     }

     /* forbid usb model 'qusb1' and 'qusb2' in this kind of hyperviosr */
@@ -8924,6 +8929,22 @@ qemuDomainMachineIsPSeries(const char *machine,
 }


+static bool
+qemuDomainMachineIsLoongson(const char *machine,
+                            const virArch arch)
+{
+    if (!ARCH_IS_LOONGARCH(arch))
+        return false;
+
+    if (STREQ(machine, "virt") ||
+        STRPREFIX(machine, "virt-")) {
+        return true;
+    }
+
+    return false;
+}
+
+
 static bool
 qemuDomainMachineIsMipsMalta(const char *machine,
                              const virArch arch)
@@ -9017,6 +9038,13 @@ qemuDomainIsMipsMalta(const virDomainDef *def)
 }


+bool
+qemuDomainIsLoongson(const virDomainDef *def)
+{
+    return qemuDomainMachineIsLoongson(def->os.machine, def->os.arch);
+}
+
+
 bool
 qemuDomainHasPCIRoot(const virDomainDef *def)
 {
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 1e56e50672..1bdbb9c549 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -827,6 +827,7 @@ bool qemuDomainIsS390CCW(const virDomainDef *def);
 bool qemuDomainIsARMVirt(const virDomainDef *def);
 bool qemuDomainIsRISCVVirt(const virDomainDef *def);
 bool qemuDomainIsPSeries(const virDomainDef *def);
+bool qemuDomainIsLoongson(const virDomainDef *def);
 bool qemuDomainIsMipsMalta(const virDomainDef *def);
 bool qemuDomainHasPCIRoot(const virDomainDef *def);
 bool qemuDomainHasPCIeRoot(const virDomainDef *def);
diff --git a/src/qemu/qemu_domain_address.c b/src/qemu/qemu_domain_address.c
index 099778b2a8..2a37853d56 100644
--- a/src/qemu/qemu_domain_address.c
+++ b/src/qemu/qemu_domain_address.c
@@ -2079,6 +2079,56 @@ qemuDomainValidateDevicePCISlotsQ35(virDomainDef *def,
 }


+static int
+qemuDomainValidateDevicePCISlotsLoongson(virDomainDef *def,
+                                         virDomainPCIAddressSet *addrs)
+{
+    virPCIDeviceAddress tmp_addr;
+    g_autofree char *addrStr = NULL;
+    virDomainPCIConnectFlags flags = VIR_PCI_CONNECT_TYPE_PCI_DEVICE;
+
+    if (addrs->nbuses) {
+        memset(&tmp_addr, 0, sizeof(tmp_addr));
+        tmp_addr.slot = 1;
+        /* pci-ohci at 00:01.0 */
+        if (virDomainPCIAddressReserveAddr(addrs, &tmp_addr, flags, 0) < 0)
+            return -1;
+    }
+
+    if (def->nvideos > 0 &&
+        def->videos[0]->type != VIR_DOMAIN_VIDEO_TYPE_NONE &&
+        def->videos[0]->type != VIR_DOMAIN_VIDEO_TYPE_RAMFB) {
+        /* reserve slot 2 for vga device */
+        virDomainVideoDef *primaryVideo = def->videos[0];
+
+        if (virDeviceInfoPCIAddressIsWanted(&primaryVideo->info)) {
+            memset(&tmp_addr, 0, sizeof(tmp_addr));
+            tmp_addr.slot = 2;
+
+            if (!(addrStr = virPCIDeviceAddressAsString(&tmp_addr)))
+                return -1;
+            if (!virDomainPCIAddressValidate(addrs, &tmp_addr,
+                                             addrStr, flags, true))
+                return -1;
+
+            if (virDomainPCIAddressSlotInUse(addrs, &tmp_addr)) {
+                if (qemuDomainPCIAddressReserveNextAddr(addrs,
+                                                        &primaryVideo->info) < 0) {
+                    return -1;
+                }
+            } else {
+                if (virDomainPCIAddressReserveAddr(addrs, &tmp_addr, flags, 0) < 0)
+                    return -1;
+                primaryVideo->info.addr.pci = tmp_addr;
+                primaryVideo->info.type = VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI;
+            }
+        }
+    }
+
+    return 0;
+}
+
+
 static int
 qemuDomainValidateDevicePCISlotsChipsets(virDomainDef *def,
                                          virDomainPCIAddressSet *addrs)
@@ -2093,6 +2143,11 @@ qemuDomainValidateDevicePCISlotsChipsets(virDomainDef *def,
         return -1;
     }

+    if (qemuDomainIsLoongson(def) &&
+        qemuDomainValidateDevicePCISlotsLoongson(def, addrs) < 0) {
+        return -1;
+    }
+
     return 0;
 }

diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
index e475ad035e..498e76b1e7 100644
--- a/src/qemu/qemu_validate.c
+++ b/src/qemu/qemu_validate.c
@@ -100,7 +100,7 @@ qemuValidateDomainDefFeatures(const virDomainDef *def,
         switch ((virDomainFeature) i) {
         case VIR_DOMAIN_FEATURE_IOAPIC:
             if (def->features[i] != VIR_DOMAIN_IOAPIC_NONE) {
-                if (!ARCH_IS_X86(def->os.arch)) {
+                if (!ARCH_IS_X86(def->os.arch) && !ARCH_IS_LOONGARCH(def->os.arch)) {
                     virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                                    _("The '%1$s' feature is not supported for architecture '%2$s' or machine type '%3$s'"),
                                    featureName,
--
2.27.0

On Thu, Dec 14, 2023 at 02:08:47PM +0800, xianglai li wrote:
+++ b/src/qemu/qemu_domain.c
@@ -5635,6 +5635,11 @@ qemuDomainControllerDefPostParse(virDomainControllerDef *cont,
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
             else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
+        } else if (ARCH_IS_LOONGARCH(def->os.arch)) {
+            if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_QEMU_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
+            else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
+        }
I don't think you need to take into account the nec-xhci model for loongarch. aarch64 needs it because qemu-xhci didn't exist when that architecture was introduced, but that's not the case here so we can keep things simpler. I'm surprised that this code doesn't have handling for riscv64. Not your problem, but likely an oversight that should be addressed.
+static bool
+qemuDomainMachineIsLoongson(const char *machine,
+                            const virArch arch)
The appropriate name for this function would be qemuDomainMachineIsLoongArchVirt, to match the existing Arm and RISC-V equivalents.
+bool
+qemuDomainIsLoongson(const virDomainDef *def)
+{
Same here.
+++ b/src/qemu/qemu_domain_address.c
@@ -2093,6 +2143,11 @@ qemuDomainValidateDevicePCISlotsChipsets(virDomainDef *def,
         return -1;
     }

+    if (qemuDomainIsLoongson(def) &&
+        qemuDomainValidateDevicePCISlotsLoongson(def, addrs) < 0) {
+        return -1;
+    }
The existing qemuDomainValidateDevicePCISlots* functions are intended to ensure that certain devices, that historically have been assigned to specific PCI slots by QEMU, always show up at those addresses. We haven't needed anything like that for non-x86 architectures so far, and I believe that loongarch doesn't need it either.
+++ b/src/qemu/qemu_validate.c
@@ -100,7 +100,7 @@ qemuValidateDomainDefFeatures(const virDomainDef *def,
         switch ((virDomainFeature) i) {
         case VIR_DOMAIN_FEATURE_IOAPIC:
             if (def->features[i] != VIR_DOMAIN_IOAPIC_NONE) {
-                if (!ARCH_IS_X86(def->os.arch)) {
+                if (!ARCH_IS_X86(def->os.arch) && !ARCH_IS_LOONGARCH(def->os.arch)) {
                     virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                                    _("The '%1$s' feature is not supported for architecture '%2$s' or machine type '%3$s'"),
                                    featureName,
So does loongarch actually have ioapic support? Just making sure. I'm surprised because apparently no other non-x86 architecture supports it...

-- 
Andrea Bolognani / Red Hat / Virtualization

Hi Andrea :
On Thu, Dec 14, 2023 at 02:08:47PM +0800, xianglai li wrote:
+++ b/src/qemu/qemu_domain.c
@@ -5635,6 +5635,11 @@ qemuDomainControllerDefPostParse(virDomainControllerDef *cont,
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
             else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
                 cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
+        } else if (ARCH_IS_LOONGARCH(def->os.arch)) {
+            if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_QEMU_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_QEMU_XHCI;
+            else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_NEC_USB_XHCI))
+                cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;
+        }

I don't think you need to take into account the nec-xhci model for loongarch. aarch64 needs it because qemu-xhci didn't exist when that architecture was introduced, but that's not the case here so we can keep things simpler.
I'm surprised that this code doesn't have handling for riscv64. Not your problem, but likely an oversight that should be addressed.
Ok, I'll remove nec-xhci in the next version.
+static bool
+qemuDomainMachineIsLoongson(const char *machine,
+                            const virArch arch)

The appropriate name for this function would be qemuDomainMachineIsLoongArchVirt, to match the existing Arm and RISC-V equivalents.
Ok, I'll correct that in the next version.
+bool
+qemuDomainIsLoongson(const virDomainDef *def)
+{

Same here.
Ok, I'll correct that in the next version.
+++ b/src/qemu/qemu_domain_address.c
@@ -2093,6 +2143,11 @@ qemuDomainValidateDevicePCISlotsChipsets(virDomainDef *def,
         return -1;
     }

+    if (qemuDomainIsLoongson(def) &&
+        qemuDomainValidateDevicePCISlotsLoongson(def, addrs) < 0) {
+        return -1;
+    }

The existing qemuDomainValidateDevicePCISlots* functions are intended to ensure that certain devices, that historically have been assigned to specific PCI slots by QEMU, always show up at those addresses.
We haven't needed anything like that for non-x86 architectures so far, and I believe that loongarch doesn't need it either.
Ok, I'll correct that in the next version.
+++ b/src/qemu/qemu_validate.c
@@ -100,7 +100,7 @@ qemuValidateDomainDefFeatures(const virDomainDef *def,
         switch ((virDomainFeature) i) {
         case VIR_DOMAIN_FEATURE_IOAPIC:
             if (def->features[i] != VIR_DOMAIN_IOAPIC_NONE) {
-                if (!ARCH_IS_X86(def->os.arch)) {
+                if (!ARCH_IS_X86(def->os.arch) && !ARCH_IS_LOONGARCH(def->os.arch)) {
                     virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                                    _("The '%1$s' feature is not supported for architecture '%2$s' or machine type '%3$s'"),
                                    featureName,

So does loongarch actually have ioapic support? Just making sure. I'm surprised because apparently no other non-x86 architecture supports it...
Yes, loongarch does have an IOAPIC, but this feature has no effect on loongarch at this stage, so I will drop it for now to simplify the committed code.

In addition, I have a question. If I understand correctly, the IOAPIC here is the device interrupt controller located in the bridge chip: it is called IOAPIC on x86, PCH_PIC on loongarch, and GIC on arm. The kernel_irqchip attribute of qemu's machine parameter, which corresponds to VIR_DOMAIN_FEATURE_IOAPIC, determines whether the device interrupt controller is emulated in qemu or in kvm. So arm has the same need, but why hasn't arm added this?

On Tue, Dec 19, 2023 at 11:52:03AM +0800, lixianglai wrote:
So does loongarch actually have ioapic support? Just making sure. I'm surprised because apparently no other non-x86 architecture supports it...
Yes, loongarch does have IOAPIC, but this feature has no effect on loongarch at this stage, I will cut it first to simplify the committed code.
In addition, I have a question, if I understand correctly, the IOAPIC here should be the device interrupt controller, which is located in the bridge chip,
it is called IOAPIC under x86, PCH_PIC under loongarch, and GIC under arm.
The kernel_irqchip attribute of the machine parameter in qemu corresponding to the function VIR_DOMAIN_FEATURE_IOAPIC determines
whether the device interrupt controller is simulated in qemu or kvm. So arm also has such a need, but why doesn't arm add?
Okay, so x86's IOAPIC is controlled by the <ioapic> element, while Arm's GIC uses the <gic> element. By that logic, loongarch should probably introduce a <pch-pic> element.

It's a bit silly that we need a separate element per architecture, especially considering that most of the time we just want to control the kernel_irqchip flag. Case in point, as you noticed Arm doesn't expose the ability to configure that at the moment.

On the other hand, additional arch-specific features might show up in the future, at which point the separate element would start making sense. See GIC for an example.

Overall, if you don't have a pressing need to expose the ability to control the kernel_irqchip flag I would just avoid doing anything about it now and leave the decision for another day and, possibly, person :)

-- 
Andrea Bolognani / Red Hat / Virtualization
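For reference, the x86-only element being discussed is configured like this in domain XML; per libvirt's formatdomain documentation the driver attribute accepts 'kvm', 'qemu' and 'split', which map onto QEMU's kernel-irqchip=on/off/split:

```xml
<features>
  <ioapic driver='split'/>
</features>
```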

Hi Andrea:
Ok, I see. Thank you very much! Thanks, Xianglai.

From: lixianglai <lixianglai@loongson.cn>

Implement method for loongarch to get host info, such as cpu frequency, system info, etc.

Signed-off-by: lixianglai <lixianglai@loongson.cn>
---
 src/util/virarch.c    | 2 ++
 src/util/virhostcpu.c | 4 ++--
 src/util/virsysinfo.c | 5 +++--
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/util/virarch.c b/src/util/virarch.c
index 289bd80d90..8107279fb8 100644
--- a/src/util/virarch.c
+++ b/src/util/virarch.c
@@ -224,6 +224,8 @@ virArch virArchFromHost(void)
         arch = VIR_ARCH_X86_64;
     } else if (STREQ(ut.machine, "arm64")) {
         arch = VIR_ARCH_AARCH64;
+    } else if (STREQ(ut.machine, "loongarch64")) {
+        arch = VIR_ARCH_LOONGARCH64;
     } else {
         /* Otherwise assume the canonical name */
         if ((arch = virArchFromString(ut.machine)) == VIR_ARCH_NONE) {
diff --git a/src/util/virhostcpu.c b/src/util/virhostcpu.c
index 4027547e1e..15e97151d6 100644
--- a/src/util/virhostcpu.c
+++ b/src/util/virhostcpu.c
@@ -544,7 +544,7 @@ virHostCPUParseFrequency(FILE *cpuinfo,
     char line[1024];

     /* No sensible way to retrieve CPU frequency */
-    if (ARCH_IS_ARM(arch))
+    if (ARCH_IS_ARM(arch) || ARCH_IS_LOONGARCH(arch))
         return 0;

     if (ARCH_IS_X86(arch))
@@ -579,7 +579,7 @@ virHostCPUParsePhysAddrSize(FILE *cpuinfo, unsigned int *addrsz)
         char *str;
         char *endptr;

-        if (!(str = STRSKIP(line, "address sizes")))
+        if (!(str = STRCASESKIP(line, "address sizes")))
             continue;

         /* Skip the colon. */
diff --git a/src/util/virsysinfo.c b/src/util/virsysinfo.c
index 36a861c53f..3a09497725 100644
--- a/src/util/virsysinfo.c
+++ b/src/util/virsysinfo.c
@@ -1241,14 +1241,15 @@ virSysinfoRead(void)
 {
 #if defined(__powerpc__)
     return virSysinfoReadPPC();
-#elif defined(__arm__) || defined(__aarch64__)
+#elif defined(__arm__) || defined(__aarch64__) || defined(__loongarch__)
     return virSysinfoReadARM();
 #elif defined(__s390__) || defined(__s390x__)
     return virSysinfoReadS390();
 #elif !defined(WIN32) && \
     (defined(__x86_64__) || \
     defined(__i386__) || \
-    defined(__amd64__))
+    defined(__amd64__) || \
+    defined(__loongarch__))
     return virSysinfoReadDMI();
 #else /* WIN32 || not supported arch */
     /*
--
2.27.0

On Thu, Dec 14, 2023 at 02:08:48PM +0800, xianglai li wrote:
+++ b/src/util/virhostcpu.c
@@ -579,7 +579,7 @@ virHostCPUParsePhysAddrSize(FILE *cpuinfo, unsigned int *addrsz)
         char *str;
         char *endptr;

-        if (!(str = STRSKIP(line, "address sizes")))
+        if (!(str = STRCASESKIP(line, "address sizes")))
             continue;
So is the case different on loongarch than it is on other architectures? Weird.
+++ b/src/util/virsysinfo.c
@@ -1241,14 +1241,15 @@ virSysinfoRead(void)
 {
 #if defined(__powerpc__)
     return virSysinfoReadPPC();
-#elif defined(__arm__) || defined(__aarch64__)
+#elif defined(__arm__) || defined(__aarch64__) || defined(__loongarch__)
     return virSysinfoReadARM();
This is definitely not right: we shouldn't be calling the Arm-specific function on loongarch.
 #elif defined(__s390__) || defined(__s390x__)
     return virSysinfoReadS390();
 #elif !defined(WIN32) && \
     (defined(__x86_64__) || \
     defined(__i386__) || \
-    defined(__amd64__))
+    defined(__amd64__) || \
+    defined(__loongarch__))
     return virSysinfoReadDMI();
Does loongarch actually have DMI support?

-- 
Andrea Bolognani / Red Hat / Virtualization

Hi Andrea:
On Thu, Dec 14, 2023 at 02:08:48PM +0800, xianglai li wrote:
+++ b/src/util/virhostcpu.c
@@ -579,7 +579,7 @@ virHostCPUParsePhysAddrSize(FILE *cpuinfo, unsigned int *addrsz)
         char *str;
        char *endptr;

-        if (!(str = STRSKIP(line, "address sizes")))
+        if (!(str = STRCASESKIP(line, "address sizes")))
             continue;

So is the case different on loongarch than it is on other architectures? Weird.
Yes, loongarch and x86 do have some similarities and differences in the cpu address-size string: on loongarch it is "Address Sizes", while on x86 it is "address sizes"; arm and the other architectures should not have this identifier at all. At present only the x86 and sh architectures can enter this code path, while the other architectures return directly, so the caller also needs to allow the loongarch architecture. I will correct it in the next version.
+++ b/src/util/virsysinfo.c
@@ -1241,14 +1241,15 @@ virSysinfoRead(void)
 {
 #if defined(__powerpc__)
     return virSysinfoReadPPC();
-#elif defined(__arm__) || defined(__aarch64__)
+#elif defined(__arm__) || defined(__aarch64__) || defined(__loongarch__)
     return virSysinfoReadARM();

This is definitely not right: we shouldn't be calling the Arm-specific function on loongarch.
Ok, I'll correct that in the next version.
 #elif defined(__s390__) || defined(__s390x__)
     return virSysinfoReadS390();
 #elif !defined(WIN32) && \
     (defined(__x86_64__) || \
     defined(__i386__) || \
-    defined(__amd64__))
+    defined(__amd64__) || \
+    defined(__loongarch__))
     return virSysinfoReadDMI();

Does loongarch actually have DMI support?
Yes, loongarch does support dmi. Thanks, Xianglai.

On Tue, Dec 19, 2023 at 05:23:36PM +0800, lixianglai wrote:
On Thu, Dec 14, 2023 at 02:08:48PM +0800, xianglai li wrote:
+++ b/src/util/virhostcpu.c
@@ -579,7 +579,7 @@ virHostCPUParsePhysAddrSize(FILE *cpuinfo, unsigned int *addrsz)
         char *str;
         char *endptr;

-        if (!(str = STRSKIP(line, "address sizes")))
+        if (!(str = STRCASESKIP(line, "address sizes")))
             continue;
So is the case different on loongarch than it is on other architectures? Weird.
Yes, loongarch and x86 do have some similarities and differences in the cpu address-size string: on loongarch it is "Address Sizes", while on x86 it is "address sizes".
Unfortunate choice on the kernel's part, but not much we can do about that I guess. The way you handled it is perfectly fine.
arm and the other architectures should not have this identifier at all. At present only the x86 and sh architectures can enter this code path, while the other architectures return directly, so the caller also needs to allow the loongarch architecture. I will correct it in the next version.
Good catch! I hadn't even noticed that but it definitely needs to be addressed.
 #elif !defined(WIN32) && \
     (defined(__x86_64__) || \
     defined(__i386__) || \
-    defined(__amd64__))
+    defined(__amd64__) || \
+    defined(__loongarch__))
     return virSysinfoReadDMI();
Does loongarch actually have DMI support?
Yes, loongarch does support dmi.
Excellent, just making sure :)

-- 
Andrea Bolognani / Red Hat / Virtualization

From: lixianglai <lixianglai@loongson.cn>

Add a default BIOS file name for loongarch.

Signed-off-by: lixianglai <lixianglai@loongson.cn>
---
 src/qemu/qemu.conf.in              | 3 ++-
 src/qemu/qemu_conf.c               | 3 ++-
 src/qemu/test_libvirtd_qemu.aug.in | 1 +
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index 6897e0f760..54c18e31b9 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -842,7 +842,8 @@
 #    "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
 #    "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
 #    "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
-#    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+#    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd",
+#    "/usr/share/qemu/edk2-loongarch64-code.fd:/usr/share/qemu/edk2-loongarch64-vars.fd"
 #]

 # The backend to use for handling stdout/stderr output from
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 513b5ebb1e..5fe711ee1d 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -93,7 +93,8 @@ VIR_ONCE_GLOBAL_INIT(virQEMUConfig);
     "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd:" \
-    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd:" \
+    "/usr/share/qemu/edk2-loongarch64-code.fd:/usr/share/qemu/edk2-loongarch64-vars.fd"
 #endif

diff --git a/src/qemu/test_libvirtd_qemu.aug.in b/src/qemu/test_libvirtd_qemu.aug.in
index c730df40b0..92f886a968 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -99,6 +99,7 @@ module Test_libvirtd_qemu =
 { "2" = "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd" }
 { "3" = "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" }
 { "4" = "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" }
+{ "5" = "/usr/share/qemu/edk2-loongarch64-code.fd:/usr/share/qemu/edk2-loongarch64-vars.fd" }
 }
 { "stdio_handler" = "logd" }
 { "gluster_debug_level" = "9" }
--
2.27.0

On Thu, Dec 14, 2023 at 02:08:49PM +0800, xianglai li wrote:
+++ b/src/qemu/qemu_conf.c
@@ -93,7 +93,8 @@ VIR_ONCE_GLOBAL_INIT(virQEMUConfig);
     "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd:" \
-    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd:" \
+    "/usr/share/qemu/edk2-loongarch64-code.fd:/usr/share/qemu/edk2-loongarch64-vars.fd"
 #endif
We definitely don't want this :)

The hard-coded CODE:VARS pairs are considered a legacy mechanism at this point, and we're no longer adding to them. If you try to pass a custom value at build time, a warning will be raised.

The way firmware is configured these days is through firmware descriptor files. See src/qemu/qemu_firmware* and tests/qemufirmware* for additional information, but the short version is that you want your edk2 package to include something like this:

# /usr/share/qemu/firmware/50-edk2-loongarch64.json
{
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "mode": "split",
        "executable": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_CODE.fd",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_VARS.fd",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "loongarch64",
            "machines": [ "virt", "virt-*" ]
        }
    ]
}

Once you have that, libvirt will automatically pick up the correct firmware when the VM is configured with <os firmware='efi'>. Same as any other architecture, no custom entries needed. -- Andrea Bolognani / Red Hat / Virtualization
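For illustration, automatic firmware selection as described above is requested through the domain XML rather than through any custom path. A minimal fragment might look like this (the machine type shown is an assumption based on the targets listed in the descriptor, not something confirmed elsewhere in the thread):

```xml
<os firmware='efi'>
  <type arch='loongarch64' machine='virt'>hvm</type>
</os>
```

With this in place, libvirt matches the domain's architecture and machine type against the installed firmware descriptors and picks the appropriate CODE/VARS pair on its own.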

Hi Andrea Bolognani:
On Thu, Dec 14, 2023 at 02:08:49PM +0800, xianglai li wrote:
+++ b/src/qemu/qemu_conf.c
@@ -93,7 +93,8 @@ VIR_ONCE_GLOBAL_INIT(virQEMUConfig);
     "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd:" \
     "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd:" \
-    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
+    "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd:" \
+    "/usr/share/qemu/edk2-loongarch64-code.fd:/usr/share/qemu/edk2-loongarch64-vars.fd"
 #endif

We definitely don't want this :)
The hard-coded CODE:VARS pairs are considered a legacy mechanism at this point, and we're no longer adding to them. If you try to pass a custom value at build time, a warning will be raised.
The way firmware is configured these days is through firmware descriptor files. See src/qemu/qemu_firmware* and tests/qemufirmware* for additional information, but the short version is that you want your edk2 package to include something like this:
# /usr/share/qemu/firmware/50-edk2-loongarch64.json
{
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "mode": "split",
        "executable": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_CODE.fd",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_VARS.fd",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "loongarch64",
            "machines": [ "virt", "virt-*" ]
        }
    ]
}
Once you have that, libvirt will automatically pick up the correct firmware when the VM is configured with
<os firmware='efi'>
Same as any other architecture, no custom entries needed.
Ok, I will remove the custom bios path and then try to add the JSON descriptor to the qemu and edk2 installation packages. Thanks, Xianglai.

On Tue, Dec 19, 2023 at 07:44:02PM +0800, lixianglai wrote:
On Thu, Dec 14, 2023 at 02:08:49PM +0800, xianglai li wrote: The way firmware is configured these days is through firmware descriptor files. See src/qemu/qemu_firmware* and tests/qemufirmware* for additional information, but the short version is that you want your edk2 package to include something like this:
# /usr/share/qemu/firmware/50-edk2-loongarch64.json
{
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "mode": "split",
        "executable": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_CODE.fd",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_VARS.fd",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "loongarch64",
            "machines": [ "virt", "virt-*" ]
        }
    ]
}
Ok, I will remove the custom bios path and then try to add json in the qemu and edk2 installation packages.
Note that the JSON descriptor files in tests/qemufirmwaredata/ are taken directly from the Fedora edk2 package, and in the long run we want that to be the case for loongarch too, but you don't necessarily need to wait for the firmware to be packaged in Fedora before creating libvirt test cases. You can just have a file containing reasonable-looking values, such as the ones I've shown above, to get things going, and then we can replace it with the actual one for Fedora at a later time. The QEMU package itself doesn't ship any JSON descriptor files. -- Andrea Bolognani / Red Hat / Virtualization
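As a rough illustration of what such a "reasonable-looking" descriptor needs to contain, here is a small Python sketch — not an official libvirt or QEMU tool, and the field checks are derived only from the example descriptor shown earlier in the thread — that sanity-checks a descriptor before it is dropped into tests/qemufirmwaredata/:

```python
import json


def check_descriptor(text: str) -> list[str]:
    """Return a list of problems found in a qemu firmware descriptor.

    Checks only the fields that the example descriptor in this thread
    relies on: a uefi interface, a split flash mapping with both pflash
    images, and a loongarch64 target.
    """
    desc = json.loads(text)
    problems = []
    if "uefi" not in desc.get("interface-types", []):
        problems.append("missing 'uefi' interface type")
    mapping = desc.get("mapping", {})
    if mapping.get("device") != "flash":
        problems.append("expected flash device mapping")
    for part in ("executable", "nvram-template"):
        if "filename" not in mapping.get(part, {}):
            problems.append(f"missing {part} filename")
    if not any(t.get("architecture") == "loongarch64"
               for t in desc.get("targets", [])):
        problems.append("no loongarch64 target")
    return problems


# The hypothetical 50-edk2-loongarch64.json from the thread.
descriptor = """
{
  "interface-types": ["uefi"],
  "mapping": {
    "device": "flash",
    "mode": "split",
    "executable": {"filename": "/usr/share/edk2/loongarch64/QEMU_CODE.fd",
                   "format": "raw"},
    "nvram-template": {"filename": "/usr/share/edk2/loongarch64/QEMU_VARS.fd",
                       "format": "raw"}
  },
  "targets": [{"architecture": "loongarch64",
               "machines": ["virt", "virt-*"]}]
}
"""

print(check_descriptor(descriptor))  # an empty list means it looks sane
```

Something along these lines can serve as a stopgap until the real Fedora edk2 descriptor is available to copy into the test data.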

Hi Andrea:
On Thu, Dec 14, 2023 at 02:08:49PM +0800, xianglai li wrote: The way firmware is configured these days is through firmware descriptor files. See src/qemu/qemu_firmware* and tests/qemufirmware* for additional information, but the short version is that you want your edk2 package to include something like this:
# /usr/share/qemu/firmware/50-edk2-loongarch64.json
{
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "mode": "split",
        "executable": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_CODE.fd",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/loongarch64/QEMU_VARS.fd",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "loongarch64",
            "machines": [ "virt", "virt-*" ]
        }
    ]
}

On Tue, Dec 19, 2023 at 07:44:02PM +0800, lixianglai wrote:
Ok, I will remove the custom bios path and then try to add the JSON descriptor to the qemu and edk2 installation packages.

Note that the JSON descriptor files in tests/qemufirmwaredata/ are taken directly from the Fedora edk2 package, and in the long run we want that to be the case for loongarch too, but you don't necessarily need to wait for the firmware to be packaged in Fedora before creating libvirt test cases. You can just have a file containing reasonable-looking values, such as the ones I've shown above, to get things going, and then we can replace it with the actual one for Fedora at a later time.

The QEMU package itself doesn't ship any JSON descriptor files.
Ok, I see. Thank you very much! Thanks, Xianglai.

On Thu, Dec 14, 2023 at 02:08:44PM +0800, xianglai li wrote:
Hello, Everyone: This patch series adds libvirt support for loongarch. Although the bios path and name have not been officially integrated into qemu and we think there are still many shortcomings, we are pushing a version of the patch to the community, following the community's suggestions, hoping to hear everyone's opinions.
Sharing your work earlier rather than later is definitely a good approach when it comes to open source development, so I appreciate you doing this :)
loongarch's virtual machine bios is not yet available in qemu, so you can get it from the following link https://github.com/loongson/Firmware/tree/main/LoongArchVirtMachine
Great to see that edk2 support has already been mainlined! An excellent next step would be to get an edk2-loongarch64 package into the various distros... Please consider working with the maintainers for edk2 in Fedora to make that happen, as it would significantly lower the barrier for interested people to get involved.
(Note: You should clone the repository using git instead of downloading the files via wget, or you'll get XML instead of the actual firmware files.) We named the bios edk2-loongarch64-code.fd; edk2-loongarch64-vars.fd is used to store pflash images of non-volatile variables. After installing qemu-system-loongarch64, you need to manually copy these two files to the /usr/share/qemu directory.
As I have implicitly pointed out in the comment to one of the patches, these paths are not correct. The /usr/share/qemu/ directory is owned by the QEMU package, and other components should not drop their files in there. The exception is the /usr/share/qemu/firmware/ directory, which is specifically designed for interoperation. The edk2 files should be installed to /usr/share/edk2/loongarch64/, following the convention established by existing architectures. Once the directory name already contains architecture information, you can use shorter and less unique names for the files themselves.
Well, if you have completed the above steps, I think you can now install a loongarch virtual machine, either through the virt-manager graphical interface or through virt-install. Here is an example of installing it using virt-install:
virt-install \
  --virt-type=qemu \
  --name loongarch-test \
  --memory 4096 \
  --vcpus=4 \
  --arch=loongarch64 \
  --boot cdrom \
  --disk device=cdrom,bus=scsi,path=/root/livecd-fedora-mate-4.loongarch64.iso \
  --disk path=/var/lib/libvirt/images/debian12-loongarch64.qcow2,size=10,format=qcow2,bus=scsi \
  --network network=default \
  --osinfo archlinux \
  --feature acpi=true \
This looks a bit out of place: virt-install should automatically enable the ACPI feature if it's advertised as available by libvirt. Please take a look at virQEMUCapsInitGuestFromBinary() and consider updating it so that ACPI support for loongarch is advertised.
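To make the suggestion concrete, here is a toy model in plain Python — not actual libvirt code; the real change would live in virQEMUCapsInitGuestFromBinary() in src/qemu/qemu_capabilities.c, and the exact set of architectures shown is an assumption — of the decision being discussed:

```python
# Toy model of guest-capability advertising: if libvirt reports the
# ACPI feature for an architecture, tools like virt-install can enable
# it automatically and the explicit "--feature acpi=true" flag becomes
# unnecessary. The architecture set below is illustrative only.
ACPI_ARCHES = {"i686", "x86_64", "aarch64", "loongarch64"}


def guest_advertises_acpi(arch: str) -> bool:
    """Return True if guest capabilities for `arch` advertise ACPI."""
    return arch in ACPI_ARCHES


print(guest_advertises_acpi("loongarch64"))  # True once loongarch64 is added
```

The point is simply that once the capability is advertised, the burden of enabling ACPI moves from the user's command line into libvirt itself.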
lixianglai (5):
  Add loongarch cpu support
  Add loongarch cpu model and vendor info
  Config some capabilities for loongarch virt machine
  Implement the method of getting host info for loongarch
  Add bios path for loongarch
The information provided in the cover letter, including pointers to the various not-yet-upstreamed changes and instructions on how to test everything, is very much appreciated! Unfortunately I didn't have enough time to take things for a spin, so I've limited myself to a relatively quick review. In addition to the comments that I've provided for the code that is there, I need to point out what is *not* there: specifically, any kind of test :) Before this can be considered for inclusion, we need to have some test coverage. It doesn't have to be incredibly exhaustive, but at least the basics need to be addressed. If you look for files that contain "riscv64" in their names in the tests/ directory you should get a decent idea of what kind of coverage we will need. That's all I have for now. I'll talk to you again in 2024 :) -- Andrea Bolognani / Red Hat / Virtualization

Hi Andrea:
On Thu, Dec 14, 2023 at 02:08:44PM +0800, xianglai li wrote:
Hello, Everyone: This patch series adds libvirt support for loongarch. Although the bios path and name have not been officially integrated into qemu and we think there are still many shortcomings, we are pushing a version of the patch to the community, following the community's suggestions, hoping to hear everyone's opinions.

Sharing your work earlier rather than later is definitely a good approach when it comes to open source development, so I appreciate you doing this :)
Thank you very much for your affirmation and encouragement!
loongarch's virtual machine bios is not yet available in qemu, so you can get it from the following link https://github.com/loongson/Firmware/tree/main/LoongArchVirtMachine Great to see that edk2 support has already been mainlined! An excellent next step would be to get an edk2-loongarch64 package into the various distros... Please consider working with the maintainers for edk2 in Fedora to make that happen, as it would significantly lower the barrier for interested people to get involved.
Yes, we will do that. Currently the loongarch code is being moved from the edk2-platform directory to the edk2 directory; I think after this work is completed, we will have the edk2 installation package.
(Note: You should clone the repository using git instead of downloading the files via wget, or you'll get XML instead of the actual firmware files.) We named the bios edk2-loongarch64-code.fd; edk2-loongarch64-vars.fd is used to store pflash images of non-volatile variables. After installing qemu-system-loongarch64, you need to manually copy these two files to the /usr/share/qemu directory.

As I have implicitly pointed out in the comment to one of the patches, these paths are not correct.
The /usr/share/qemu/ directory is owned by the QEMU package, and other components should not drop their files in there. The exception is the /usr/share/qemu/firmware/ directory, which is specifically designed for interoperation.
The edk2 files should be installed to /usr/share/edk2/loongarch64/, following the convention established by existing architectures. Once the directory name already contains architecture information, you can use shorter and less unique names for the files themselves.
I think edk2-loongarch64-code.fd can be the loongarch bios that comes with the qemu package, and then its installation path of /usr/share/qemu makes sense. The separately generated loongarch edk2 installation package can, following your suggestion, use the installation path /usr/share/edk2/loongarch64, with the file named QEMU_EFI.fd.
Well, if you have completed the above steps I think you can now install loongarch virtual machine, you can install it through the virt-manager graphical interface, or install it through vrit-install, here is an example of installing it using virt-install:
virt-install \ --virt-type=qemu \ --name loongarch-test \ --memory 4096 \ --vcpus=4 \ --arch=loongarch64 \ --boot cdrom \ --disk device=cdrom,bus=scsi,path=/root/livecd-fedora-mate-4.loongarch64.iso \ --disk path=/var/lib/libvirt/images/debian12-loongarch64.qcow2,size=10,format=qcow2,bus=scsi \ --network network=default \ --osinfo archlinux \ --feature acpi=true \ This looks a bit out of place: virt-install should automatically enable the ACPI feature if it's advertised as available by libvirt.
Please take a look at virQEMUCapsInitGuestFromBinary() and consider updating it so that ACPI support for loongarch is advertised.
Ok, I'll fix that in the next version.

lixianglai (5):
  Add loongarch cpu support
  Add loongarch cpu model and vendor info
  Config some capabilities for loongarch virt machine
  Implement the method of getting host info for loongarch
  Add bios path for loongarch

The information provided in the cover letter, including pointers to the various not-yet-upstreamed changes and instructions on how to test everything, is very much appreciated!

Ok, I will provide more detailed instructions on changes and testing in the next version.
Unfortunately I didn't have enough time to take things for a spin, so I've limited myself to a relatively quick review.
In addition to the comments that I've provided for the code that is there, I need to point out what is *not* there: specifically, any kind of test :)
Before this can be considered for inclusion, we need to have some test coverage. It doesn't have to be incredibly exhaustive, but at least the basics need to be addressed. If you look for files that contain "riscv64" in their names in the tests/ directory you should get a decent idea of what kind of coverage we will need.
Ok, I will refer to the "riscv64" file in the tests directory to add loongarch64 related test cases.
That's all I have for now. I'll talk to you again in 2024 :)
Ok, thank you very much for taking time out of your busy schedule to review these patches. Wish you a merry Christmas in advance. Thanks! Xianglai.

On Mon, Dec 18, 2023 at 11:40:03AM +0800, lixianglai wrote:
On Thu, Dec 14, 2023 at 02:08:44PM +0800, xianglai li wrote:
Great to see that edk2 support has already been mainlined! An excellent next step would be to get an edk2-loongarch64 package into the various distros... Please consider working with the maintainers for edk2 in Fedora to make that happen, as it would significantly lower the barrier for interested people to get involved.
Yes, we will do that, currently the loongarch code is being moved from the edk2-platform directory to the edk2 directory,
I think after this work is completed, we will have the edk2 installation package.
I'm not very familiar with how the edk2 repository is maintained, but that sounds like a good plan. Presumably an edk2 release will have to be tagged as well.
The /usr/share/qemu/ directory is owned by the QEMU package, and other components should not drop their files in there. The exception is the /usr/share/qemu/firmware/ directory, which is specifically designed for interoperation.
The edk2 files should be installed to /usr/share/edk2/loongarch64/, following the convention established by existing architectures. Once the directory name already contains architecture information, you can use shorter and less unique names for the files themselves.
I think edk2-loongarch64-code.fd can be the loongarch bios that comes with the qemu package,
and then its installation path is /usr/share/qemu which makes sense.
Yes, but distro packages usually strip those bits and rely on firmware packages being installed separately instead. It's just a minor point. As long as support is still being merged into the various upstream projects, testing things out is always going to be messy. It will naturally become smoother over time :)
The information provided in the cover letter, including pointers to the various not-yet-upstreamed changes and instructions on how to test everything, is very much appreciated!
Ok, I will provide more detailed instructions on changes and testing in the next version.
Personally I think that the information related to testing that you've provided in the cover letter is quite extensive, so don't feel that you necessarily need to expand upon it. -- Andrea Bolognani / Red Hat / Virtualization

Hi Andrea :
On Thu, Dec 14, 2023 at 02:08:44PM +0800, xianglai li wrote:
Great to see that edk2 support has already been mainlined! An excellent next step would be to get an edk2-loongarch64 package into the various distros... Please consider working with the maintainers for edk2 in Fedora to make that happen, as it would significantly lower the barrier for interested people to get involved.

On Mon, Dec 18, 2023 at 11:40:03AM +0800, lixianglai wrote:
Yes, we will do that, currently the loongarch code is being moved from the edk2-platform directory to the edk2 directory, I think after this work is completed, we will have the edk2 installation package.

I'm not very familiar with how the edk2 repository is maintained, but that sounds like a good plan. Presumably an edk2 release will have to be tagged as well.
The /usr/share/qemu/ directory is owned by the QEMU package, and other components should not drop their files in there. The exception is the /usr/share/qemu/firmware/ directory, which is specifically designed for interoperation.
The edk2 files should be installed to /usr/share/edk2/loongarch64/, following the convention established by existing architectures. Once the directory name already contains architecture information, you can use shorter and less unique names for the files themselves.

I think edk2-loongarch64-code.fd can be the loongarch bios that comes with the qemu package, and then its installation path is /usr/share/qemu which makes sense.

Yes, but distro packages usually strip those bits and rely on firmware packages being installed separately instead.
It's just a minor point. As long as support is still being merged into the various upstream projects, testing things out is always going to be messy. It will naturally become smoother over time :)
Ok, I see. Thank you very much!
The information provided in the cover letter, including pointers to the various not-yet-upstreamed changes and instructions on how to test everything, is very much appreciated! Ok, I will provide more detailed instructions on changes and testing in the next version. Personally I think that the information related to testing that you've provided in the cover letter is quite extensive, so don't feel that you necessarily need to expand upon it.
Ok, I see. Thank you very much! Thanks, Xianglai.
participants (3):
- Andrea Bolognani
- lixianglai
- xianglai li