[libvirt] [PATCH v2 0/3] store binary name in capabilities
by Daniel P. Berrangé
It is helpful for debugging to have the binary name in the capabilities XML.
Daniel P. Berrangé (3):
qemu: add explicit flag to skip qemu caps invalidation
qemu: add qemu caps constructor which takes binary name
qemu: store the emulator name in the capabilities XML
src/qemu/qemu_capabilities.c | 52 +++++++++++++++----
src/qemu/qemu_capabilities.h | 4 ++
.../caps_1.5.3.x86_64.xml | 1 +
.../caps_1.6.0.x86_64.xml | 1 +
.../caps_1.7.0.x86_64.xml | 1 +
.../caps_2.1.1.x86_64.xml | 1 +
.../caps_2.10.0.aarch64.xml | 1 +
.../caps_2.10.0.ppc64.xml | 1 +
.../caps_2.10.0.s390x.xml | 1 +
.../caps_2.10.0.x86_64.xml | 1 +
.../caps_2.11.0.s390x.xml | 1 +
.../caps_2.11.0.x86_64.xml | 1 +
.../caps_2.12.0.aarch64.xml | 1 +
.../caps_2.12.0.ppc64.xml | 1 +
.../caps_2.12.0.s390x.xml | 1 +
.../caps_2.12.0.x86_64.xml | 1 +
.../caps_2.4.0.x86_64.xml | 1 +
.../caps_2.5.0.x86_64.xml | 1 +
.../caps_2.6.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_2.6.0.ppc64.xml | 1 +
.../caps_2.6.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.7.0.s390x.xml | 1 +
.../caps_2.7.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.8.0.s390x.xml | 1 +
.../caps_2.8.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_2.9.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_2.9.0.s390x.xml | 1 +
.../caps_2.9.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_3.0.0.ppc64.xml | 1 +
.../caps_3.0.0.riscv32.xml | 1 +
.../caps_3.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_3.0.0.s390x.xml | 1 +
.../caps_3.0.0.x86_64.xml | 1 +
.../qemucapabilitiesdata/caps_3.1.0.ppc64.xml | 1 +
.../caps_3.1.0.x86_64.xml | 1 +
.../caps_4.0.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.ppc64.xml | 1 +
.../caps_4.0.0.riscv32.xml | 1 +
.../caps_4.0.0.riscv64.xml | 1 +
.../qemucapabilitiesdata/caps_4.0.0.s390x.xml | 1 +
.../caps_4.0.0.x86_64.xml | 1 +
.../caps_4.1.0.x86_64.xml | 1 +
.../caps_4.2.0.aarch64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 +
.../qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 +
.../caps_4.2.0.x86_64.xml | 1 +
tests/qemucapabilitiestest.c | 7 ++-
tests/testutilsqemu.c | 5 +-
48 files changed, 101 insertions(+), 11 deletions(-)
--
2.23.0
[libvirt] [PATCH] news: document phyp removal
by Cole Robinson
Signed-off-by: Cole Robinson <crobinso(a)redhat.com>
---
docs/news.xml | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 055353b9a5..1af57f8af0 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -99,6 +99,16 @@
Python 2 binary. Python 3 must be used instead.
</description>
</change>
+ <change>
+ <summary>
+ 'phyp' Power Hypervisor driver removed
+ </summary>
+ <description>
+ The 'phyp' Power Hypervisor driver has not seen active development
+ since 2011 and does not seem to have any real world usage. It
+ has now been removed.
+ </description>
+ </change>
</section>
</release>
<release version="v5.10.0" date="2019-12-02">
--
2.23.0
[libvirt] i want to update the disk driver from ide to virtio, but it fails, why?
by thomas
hi, ALL:
my question description is as follows:
virsh dumpxml root-vsys_v2 > v2.xml
the v2.xml has the following information:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/root-vsys_v2.qcow2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>   <!-- i want to change 'ide' to 'virtio', but it fails -->
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
so i created a new xml named v2-new.xml, its content is:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/root-vsys_v2.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
then run:
virsh update-device root-vsys_v2 /root/v2-new.xml --live
the result is:
[root@NSG ~]# virsh update-device root-vsys_v2 /root/v2-new.xml --live
error: Failed to update device from /root/v2-new.xml
error: internal error: No device with bus 'virtio' and target 'vda'
i read the libvirt source code; the error is reported in this function:
static int
qemuDomainChangeDiskLive(virDomainObjPtr vm,
                         virDomainDeviceDefPtr dev,
                         virQEMUDriverPtr driver,
                         bool force)
{
    virDomainDiskDefPtr disk = dev->data.disk;
    virDomainDiskDefPtr orig_disk = NULL;
    virDomainDeviceDef oldDev = { .type = dev->type };
    int ret = -1;

    if (!(orig_disk = virDomainDiskFindByBusAndDst(vm->def,
                                                   disk->bus, disk->dst))) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("No device with bus '%s' and target '%s'"),
                       virDomainDiskBusTypeToString(disk->bus),
                       disk->dst);
        /* the failure log is reported here: vm->def has a disk with dev=hda,
         * bus=ide, while disk->bus=virtio and disk->dst=vda */
        goto cleanup;
    }
    /* disk->bus and disk->dst do not exist in vm->def, so
     * virDomainDiskFindByBusAndDst() fails */

    oldDev.data.disk = orig_disk;
    if (virDomainDefCompatibleDevice(vm->def, dev, &oldDev,
                                     VIR_DOMAIN_DEVICE_ACTION_UPDATE,
                                     true) < 0)
        goto cleanup;

    if (!qemuDomainDiskChangeSupported(disk, orig_disk))
        goto cleanup;

    if (!virStorageSourceIsSameLocation(disk->src, orig_disk->src)) {
        /* Disk source can be changed only for removable devices */
        if (disk->device != VIR_DOMAIN_DISK_DEVICE_CDROM &&
            disk->device != VIR_DOMAIN_DISK_DEVICE_FLOPPY) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("disk source can be changed only in removable "
                             "drives"));
            goto cleanup;
        }

        if (qemuDomainAttachDeviceDiskLive(driver, vm, dev, force) < 0)
            goto cleanup;
    }

    orig_disk->startupPolicy = dev->data.disk->startupPolicy;
    orig_disk->snapshot = dev->data.disk->snapshot;

    ret = 0;
 cleanup:
    return ret;
}
what reason leads to the failure? is my v2-new.xml written incorrectly? how should the xml for virsh update-device be written?
i referred to the link
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/...
--------------------------
thanks !
thomas.kuang
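Since qemuDomainChangeDiskLive() looks the existing disk up by the bus and target of the current definition, a live update can never turn 'hda'/'ide' into 'vda'/'virtio'. A hedged sketch of one possible offline approach, using the C API that virsh wraps (error handling is minimal, the XML strings mirror the snippets above, and the guest needs virtio drivers plus a shutdown/start cycle for the change to take effect):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "root-vsys_v2") : NULL;

    /* Remove the IDE disk from the persistent config only ... */
    const char *old_disk =
        "<disk type='file' device='disk'>"
        "  <driver name='qemu' type='qcow2'/>"
        "  <source file='/home/root-vsys_v2.qcow2'/>"
        "  <target dev='hda' bus='ide'/>"
        "</disk>";
    /* ... and add it back on the virtio bus. */
    const char *new_disk =
        "<disk type='file' device='disk'>"
        "  <driver name='qemu' type='qcow2'/>"
        "  <source file='/home/root-vsys_v2.qcow2'/>"
        "  <target dev='vda' bus='virtio'/>"
        "</disk>";

    if (!dom ||
        virDomainDetachDeviceFlags(dom, old_disk, VIR_DOMAIN_AFFECT_CONFIG) < 0 ||
        virDomainAttachDeviceFlags(dom, new_disk, VIR_DOMAIN_AFFECT_CONFIG) < 0)
        fprintf(stderr, "reconfiguring the disk failed\n");

    /* The guest uses the virtio bus after its next shutdown and start. */
    if (dom)
        virDomainFree(dom);
    if (conn)
        virConnectClose(conn);
    return 0;
}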
[libvirt] [PATCH 0/7] introduce support for an embedded driver mode
by Daniel P. Berrangé
This is a followup to:
https://www.redhat.com/archives/libvir-list/2019-May/msg00467.html
This series implements support for an embedded driver mode for libvirt,
with initial support in the QEMU and secrets drivers.
In this mode of operation, the driver stores all its config and state
under a private directory tree. See the individual patches for the
illustrated directory hierarchy used.
The intent of this embedded mode is to suit cases where the application
is using virtualization as a building block for some functionality, as
opposed to running traditional "full OS" builds.
The long-time poster child example would be libguestfs, while a more
recent example could be Kata containers.
The general principle in enabling this embedded mode is that the
functionality available should be identical to that seen when the
driver is running inside libvirtd. This is achieved by loading the
exact same driver .so module as libvirtd would load, and simply
configuring it with a different directory layout.
The result of this is that when running in embedded mode, the driver
can still talk to other secondary drivers running inside libvirtd
if desired. This is useful, for example, to connect a VM to the
default virtual network.
The secondary drivers can be made to operate in embedded mode as
well; however, this will require some careful consideration for each
driver to ensure they don't clash with each other. Thus in this series
only the secret driver is enabled for embedded mode. This is required
to enable use of VMs with encrypted disks, or authenticated network
block storage.
In this series we introduce a new command line tool 'virt-qemu-run'
which is a really simple tool for launching a VM in embedded mode.
I'm not entirely sure whether we should provide this as an official
supported tool in this way, or merely put it into the 'examples'
directory as demo-ware.
With testing of the virt-qemu-run tool we can immediately see what the
next important thing to tackle is: performance. We have not really cared
too much about the startup performance of libvirtd as this is a one time
cost when the mgmt application connects. We did none the less cache
capabilities because probing caps for 30 QEMU binaries takes a long time.
Even with this caching it takes an unacceptably long time to start a VM
in embedded mode. About 100 ms to open the embedded QEMU driver,
assuming pre-cached capabilities - ~2 seconds if not cached and all 30
QEMU targets are present. Then about 300 ms to actually start the QEMU
guest.
IOW, about 400 ms to get QEMU running. NB this is measuring time from
launching the virt-qemu-run program, to the point at which the API call
'virDomainCreate' returns control. This has both libvirt & QEMU overhead
in it & I don't have clear figures to distinguish them, but I can see a 40 ms
delay between issuing the 'qmp_capabilities' call and getting a reply,
which is QEMU startup overhead.
This is an i440fx-based QEMU with a general purpose virtio-pci config
(disk, net, etc) typical for running a full OS. I've not tried any
kind of optimized QEMU config with microvm.
I've already started on measuring & optimizing & identified several key
areas that can be addressed, but it is all ultimately about not doing
work before we need the answers from that work (which often means we
will never do the work at all).
For example, we shouldn't probe all 30 QEMUs upfront. If the app is
only going to create an x86_64 KVM guest we should only care about that
1 QEMU. This is painful because parsing any guest XML requires a
virCapsPtr which in turn causes probing of every QEMU binary. I've got
in-progress patches to eliminate virCapsPtr almost entirely and work
directly with the virQEMUCapsPtr instead.
It is possible we'll want to use a different file format for storing
the cached QEMU capabilities, and the CPU feature/model info. Parsing
this XML is a non-negligible time sink. A binary format is likely way
quicker, especially if it's designed to be just mmap'able for direct
read. To be investigated...
We shouldn't probe for whether host PM suspend is possible unless
someone wants that info, or tries to issue that API call.
After starting QEMU we spend 150-200 ms issuing a massive number of
qom-get calls to check whether QEMU enabled each individual CPU feature
flag. We only need this info if someone asks for the live XML or we
intend to live migrate etc. So we shouldn't issue these qom-get calls in
the "hot path" of QEMU startup. It can be done later in a non-time
critical point. Also the QEMU API for this is horribly inefficient to
require so many qom-get calls.
There's more but I won't talk about it now. Suffice to say that I think
we can get libvirt overhead down to less than 100 ms fairly easily and
probably even down to less than 50 ms without much effort.
The exact figure will depend on what libvirt features you want enabled,
and how much work we want/need to put into optimization. We'll want to
fix the really gross mistakes & slow downs, but we'll want guidance from
likely users as to their VM startup targets to decide how much work
needs investing.
This optimization will ultimately help non-embedded QEMU mode too,
making it faster to respond & start.
Changed in v2:
- Use a simplified directory layout for embedded mode. Previously we
just put a dir prefix onto the normal paths. This has the downside
that the embedded drivers paths are needlessly different for
privileged vs unprivileged user. It also results in very long paths
which can be a problem for the UNIX socket name length limits.
- Also ported the secret driver to support embedded mode
- Check to validate that the event loop is registered.
- Add virt-qemu-run tool for embedded usage.
- Added docs for the qemu & secret driver explaining embedded mode
Daniel P. Berrangé (7):
access: report an error if no access manager is present
libvirt: pass a directory path into drivers for embedded usage
event: add API for requiring an event loop impl to be registered
libvirt: support an "embed" URI path selector for opening drivers
qemu: add support for running QEMU driver in embedded mode
secrets: add support for running secret driver in embedded mode
qemu: introduce a new "virt-qemu-run" program
build-aux/syntax-check.mk | 2 +-
docs/drivers.html.in | 1 +
docs/drvqemu.html.in | 84 +++++++
docs/drvsecret.html.in | 82 +++++++
libvirt.spec.in | 2 +
po/POTFILES.in | 1 +
src/Makefile.am | 9 +
src/access/viraccessmanager.c | 5 +
src/driver-state.h | 2 +
src/driver.h | 2 +
src/interface/interface_backend_netcf.c | 7 +
src/interface/interface_backend_udev.c | 7 +
src/libvirt.c | 93 ++++++-
src/libvirt_internal.h | 4 +-
src/libxl/libxl_driver.c | 7 +
src/lxc/lxc_driver.c | 8 +
src/network/bridge_driver.c | 7 +
src/node_device/node_device_hal.c | 7 +
src/node_device/node_device_udev.c | 7 +
src/nwfilter/nwfilter_driver.c | 7 +
src/qemu/Makefile.inc.am | 26 ++
src/qemu/qemu_conf.c | 38 ++-
src/qemu/qemu_conf.h | 6 +-
src/qemu/qemu_driver.c | 21 +-
src/qemu/qemu_process.c | 15 +-
src/qemu/qemu_shim.c | 313 ++++++++++++++++++++++++
src/qemu/qemu_shim.pod | 94 +++++++
src/remote/remote_daemon.c | 1 +
src/remote/remote_driver.c | 1 +
src/secret/secret_driver.c | 41 +++-
src/storage/storage_driver.c | 7 +
src/util/virevent.c | 25 ++
src/util/virevent.h | 2 +
src/vz/vz_driver.c | 7 +
tests/domaincapstest.c | 2 +-
tests/testutilsqemu.c | 2 +-
36 files changed, 920 insertions(+), 25 deletions(-)
create mode 100644 docs/drvsecret.html.in
create mode 100644 src/qemu/qemu_shim.c
create mode 100644 src/qemu/qemu_shim.pod
--
2.23.0
[libvirt] [PATCH 00/42] Cleanups after adopting g_get_*_dir()
by Fabiano Fidêncio
After adopting g_get_*_dir() internally when implementing
virGetUser*Dir(), the libvirt functions changed their behaviour, as NULL
is never returned by the GLib functions.
Knowing that, let's clean up the callers' code of those functions,
removing the now unnecessary checks.
While doing the cleanup mentioned above, I noticed that some other
cleanups, such as using g_autofree, could be done when touching the very
same functions. Those are done as part of this series as well.
The phyp driver was not touched as there is a plan to drop its code
entirely: https://www.redhat.com/archives/libvir-list/2019-December/msg01162.html
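Schematically, the pattern applied throughout the series looks like this (a hedged sketch with placeholder function names; the real call sites are the ones touched by the patches below):

/* Before: callers had to check for NULL and free the string manually. */
static int
do_something_old(void)
{
    char *rundir = NULL;
    int ret = -1;

    if (!(rundir = virGetUserRuntimeDirectory()))
        goto cleanup;

    /* ... use rundir ... */
    ret = 0;
 cleanup:
    VIR_FREE(rundir);
    return ret;
}

/* After: virGetUserRuntimeDirectory() can no longer return NULL, so the
 * check goes away and g_autofree releases the string automatically. */
static int
do_something_new(void)
{
    g_autofree char *rundir = virGetUserRuntimeDirectory();

    /* ... use rundir ... */
    return 0;
}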
Fabiano Fidêncio (42):
tools: Use g_autofree on cmdCd()
vbox: Don't leak virGetUserDirectory()'s output
rpc: Use g_autofree on virNetClientNewLibSSH2()
rpc: Use g_autofree on virNetClientNewLibssh()
rpc: Don't check the output of virGetUserDirectory()
qemu: Don't check the output of virGetUserDirectory()
util: Don't check the output of virGetUserConfigDirectory()
storage: Don't check the output of virGetUserConfigDirectory()
secret: Don't check the output of virGetUserConfigDirectory()
tools: Don't check the output of virGetUserCacheDirectory()
vbox: Use g_autofree on vboxDomainScreenshot()
vbox: Don't check the output of virGetUserCacheDirectory()
util: Use g_autofree on virLogSetDefaultOutputToFile()
util: Don't check the output of virGetUserCacheDirectory()
qemu: Don't check the output of virGetUserCacheDirectory()
rpc: Don't check the output of virGetUserConfigDirectory()
remote: Don't check the output of virGetUserConfigDirectory()
qemu: Don't check the output of virGetUserConfigDirectory()
network: Don't check the output of virGetUserConfigDirectory()
logging: Use g_autofree on virLogDaemonConfigFilePath()
logging: Don't check the output of virGetUserConfigDirectory()
locking: Use g_autofree on virLockDaemonConfigFilePath()
locking: Don't check the output of virGetUserConfigDirectory()
util: Don't check the output of virGetUserRuntimeDirectory()
storage: Don't check the output of virGetUserRuntimeDirectory()
secret: Don't check the output of virGetUserRuntimeDirectory()
rpc: Don't check the output of virGetUserRuntimeDirectory()
remote: Don't check the output of virGetUserRuntimeDirectory()
qemu: Don't check the output of virGetUserRuntimeDirectory()
node_device: Don't check the output of virGetUserRuntimeDirectory()
network: Don't check the output of virGetUserRuntimeDirectory()
logging: Use g_autofree on virLogManagerDaemonPath()
logging: Use g_autofree on virLogDaemonUnixSocketPaths()
logging: Use g_autofree on virLogDaemonExecRestartStatePath()
logging: Don't check the output of virGetUserRuntimeDirectory()
locking: Use g_autofree on virLockManagerLockDaemonPath()
locking: Use g_autofree on virLockDaemonUnixSocketPaths()
locking: Use g_autofree on virLockDaemonExecRestartStatePath()
locking: Don't check the output of virGetUserRuntimeDirectory()
interface: Don't check the output of virGetUserRuntimeDirectory()
admin: Use g_autofree on getSocketPath()
admin: Don't check the output of virGetUserRuntimeDirectory()
src/admin/libvirt-admin.c | 6 +--
src/interface/interface_backend_netcf.c | 3 +-
src/interface/interface_backend_udev.c | 3 +-
src/locking/lock_daemon.c | 31 +++----------
src/locking/lock_daemon_config.c | 9 +---
src/locking/lock_driver_lockd.c | 7 +--
src/logging/log_daemon.c | 31 +++----------
src/logging/log_daemon_config.c | 9 +---
src/logging/log_manager.c | 7 +--
src/network/bridge_driver.c | 2 -
src/node_device/node_device_hal.c | 3 +-
src/node_device/node_device_udev.c | 3 +-
src/qemu/qemu_conf.c | 7 +--
src/qemu/qemu_interop_config.c | 3 --
src/remote/remote_daemon.c | 8 +---
src/remote/remote_daemon_config.c | 6 +--
src/remote/remote_driver.c | 3 +-
src/rpc/virnetclient.c | 59 +++++++++----------------
src/rpc/virnetsocket.c | 3 +-
src/rpc/virnettlscontext.c | 12 -----
src/secret/secret_driver.c | 6 +--
src/storage/storage_driver.c | 2 -
src/util/virauth.c | 3 +-
src/util/virconf.c | 2 -
src/util/virhostdev.c | 3 +-
src/util/virlog.c | 13 ++----
src/util/virpidfile.c | 3 +-
src/vbox/vbox_common.c | 12 ++---
src/vbox/vbox_storage.c | 7 ++-
tools/vsh.c | 13 ++----
30 files changed, 73 insertions(+), 206 deletions(-)
--
2.24.1
[libvirt] [PATCH] cpu: add CLZERO CPUID support for AMD platforms
by Ani Sinha
Qemu commit e900135dcfb67 ("i386: Add CPUID bit for CLZERO and XSAVEERPTR")
adds support for the CLZERO CPUID bit.
This commit extends support for this CPUID bit to libvirt.
Signed-off-by: Ani Sinha <ani.sinha(a)nutanix.com>
---
src/cpu_map/x86_EPYC-IBPB.xml | 1 +
src/cpu_map/x86_EPYC.xml | 1 +
src/cpu_map/x86_features.xml | 3 +++
3 files changed, 5 insertions(+)
diff --git a/src/cpu_map/x86_EPYC-IBPB.xml b/src/cpu_map/x86_EPYC-IBPB.xml
index 283697e..a70fbd8 100644
--- a/src/cpu_map/x86_EPYC-IBPB.xml
+++ b/src/cpu_map/x86_EPYC-IBPB.xml
@@ -14,6 +14,7 @@
<feature name='bmi2'/>
<feature name='clflush'/>
<feature name='clflushopt'/>
+ <feature name='clzero'/>
<feature name='cmov'/>
<feature name='cr8legacy'/>
<feature name='cx16'/>
diff --git a/src/cpu_map/x86_EPYC.xml b/src/cpu_map/x86_EPYC.xml
index f060139..6c11d82 100644
--- a/src/cpu_map/x86_EPYC.xml
+++ b/src/cpu_map/x86_EPYC.xml
@@ -14,6 +14,7 @@
<feature name='bmi2'/>
<feature name='clflush'/>
<feature name='clflushopt'/>
+ <feature name='clzero'/>
<feature name='cmov'/>
<feature name='cr8legacy'/>
<feature name='cx16'/>
diff --git a/src/cpu_map/x86_features.xml b/src/cpu_map/x86_features.xml
index 2bed1e0..dd62755 100644
--- a/src/cpu_map/x86_features.xml
+++ b/src/cpu_map/x86_features.xml
@@ -473,6 +473,9 @@
<feature name='ibpb'>
<cpuid eax_in='0x80000008' ebx='0x00001000'/>
</feature>
+ <feature name='clzero'>
+ <cpuid eax_in='0x80000008' ebx='0x00000001'/>
+ </feature>
<feature name='amd-ssbd'>
<cpuid eax_in='0x80000008' ebx='0x01000000'/>
</feature>
--
1.9.4
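For reference, the new cpuid entry encodes CLZERO as bit 0 of EBX in leaf 0x80000008; a small standalone host check (outside libvirt, shown only to illustrate the encoding) could look like this:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* Leaf 0x80000008, EBX bit 0 matches the
     * <cpuid eax_in='0x80000008' ebx='0x00000001'/> entry above. */
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "leaf 0x80000008 not available\n");
        return 1;
    }

    printf("clzero: %s\n", (ebx & 0x1) ? "supported" : "not supported");
    return 0;
}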
[libvirt] [PATCH] AUTHORS: Add Fabiano Fidêncio
by Ján Tomko
$ git log --committer=fidencio --pretty=oneline | wc -l
12
Signed-off-by: Ján Tomko <jtomko(a)redhat.com>
---
Déjà vu from c379576dbc80a66820e256f9ce27595270d95ac2 except the
mailmap entry is already set up.
AUTHORS.in | 1 +
1 file changed, 1 insertion(+)
diff --git a/AUTHORS.in b/AUTHORS.in
index fbac54c48d..4ed8ceb858 100644
--- a/AUTHORS.in
+++ b/AUTHORS.in
@@ -19,6 +19,7 @@ Daniel Veillard <veillard(a)redhat.com>
Doug Goldstein <cardoe(a)gentoo.org>
Eric Blake <eblake(a)redhat.com>
Erik Skultety <eskultet(a)redhat.com>
+Fabiano Fidêncio <fidencio(a)redhat.com>
Gao Feng <gaofeng(a)cn.fujitsu.com>
Guido Günther <agx(a)sigxcpu.org>
Ján Tomko <jtomko(a)redhat.com>
--
2.21.0
[libvirt] CfP VHPC20: HPC Containers-Kubernetes
by VHPC 20
====================================================================
CALL FOR PAPERS
15th Workshop on Virtualization in High-Performance Cloud Computing
(VHPC 20) held in conjunction with the International Supercomputing
Conference - High Performance, June 21-25, 2020, Frankfurt, Germany.
(Springer LNCS Proceedings)
====================================================================
Date: June 25, 2020
Workshop URL: vhpc[dot]org
Abstract registration Deadline: Jan 31st, 2020
Paper Submission Deadline: Apr 05th, 2020
Springer LNCS
Call for Papers
Containers and virtualization technologies constitute key enabling
factors for flexible resource management in modern data centers, and
particularly in cloud environments. Cloud providers need to manage
complex infrastructures in a seamless fashion to support the highly
dynamic and heterogeneous workloads and hosted applications customers
deploy. Similarly, HPC environments have been increasingly adopting
techniques that enable flexible management of vast computing and
networking resources, close to marginal provisioning cost, which is
unprecedented in the history of scientific and commercial computing.
Most recently, Function as a Service (FaaS) and serverless computing,
utilizing lightweight VMs and containers, widen the spectrum of
applications that can be deployed in a cloud environment, especially
in an HPC context. Here, HPC-provided services can become
accessible to distributed workloads outside of large cluster
environments.
Various virtualization-containerization technologies contribute to the
overall picture in different ways: machine virtualization, with its
capability to enable consolidation of multiple underutilized servers
with heterogeneous software and operating systems (OSes), and its
capability to live-migrate a fully operating virtual machine (VM)
with a very short downtime, enables novel and dynamic ways to manage
physical servers; OS-level virtualization (i.e., containerization),
with its capability to isolate multiple user-space environments and
to allow for their coexistence within the same OS kernel, promises to
provide many of the advantages of machine virtualization with high
levels of responsiveness and performance; lastly, unikernels provide
for many virtualization benefits with a minimized OS/library surface.
I/O Virtualization in turn allows physical network interfaces to take
traffic from multiple VMs or containers; network virtualization, with
its capability to create logical network overlays that are independent
of the underlying physical topology is furthermore enabling
virtualization of HPC infrastructures.
Publication
Accepted papers will be published in a Springer LNCS proceedings
volume.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions
related to virtualization across the entire software stack with a
special focus on the intersection of HPC, containers-virtualization
and the cloud.
Major Topics:
- HPC workload orchestration (Kubernetes)
- Kubernetes HPC batch
- HPC Container Environments Landscape
- HW Heterogeneity
- Container ecosystem (Docker alternatives)
- Networking
- Lightweight Virtualization
- Unikernels / LibOS
- State-of-the-art processor virtualization (RISC-V, EPI)
- Containerizing HPC Stacks/Apps/Codes:
Climate model containers
with each major topic encompassing design/architecture, management,
performance management, modeling and configuration/tooling.
Specifically, we invite papers that deal with the following topics:
- HPC orchestration (Kubernetes)
- Virtualizing Kubernetes for HPC
- Deployment paradigms
- Multitenancy
- Serverless
- Declarative data center integration
- Network provisioning
- Storage
- OCI i.a. images
- Isolation/security
- HW Accelerators, including GPUs, FPGAs, AI, and others
- State-of-practice/art, including transition to cloud
- Frameworks, system software
- Programming models, runtime systems, and APIs to facilitate cloud
adoption
- Edge use-cases
- Application adaptation, success stories
- Kubernetes Batch
- Scheduling, job management
- Execution paradigm - workflow
- Data management
- Deployment paradigm
- Multi-cluster/scalability
- Performance improvement
- Workflow / execution paradigm
- Podman: end-to-end Docker alternative container environment & use-cases
- Creating, Running containers as non-root (rootless)
- Running rootless containers with MPI
- Container live migration
- Running containers in restricted environments without setuid
- Networking
- Software defined networks and network virtualization
- New virtualization NICs / Nitro-like ASICs for the data center?
- Kubernetes SDN policy (Calico i.a.)
- Kubernetes network provisioning (Flannel i.a.)
- Lightweight Virtualization
- Micro VMMs (Rust-VMM, Firecracker, solo5)
- Xen
- Nitro hypervisor (KVM)
- RVirt
- Cloud Hypervisor
- Unikernels / LibOS
- HPC Storage in Virtualization
- HPC container storage
- Cloud-native storage
- Hypervisors in storage virtualization
- Processor Virtualization
- RISC-V hypervisor extensions
- RISC-V Hypervisor ports
- EPI
- Composable HPC microservices
- Containerizing Scientific Codes
- Building
- Deploying
- Securing
- Storage
- Monitoring
- Use case for containerizing HPC codes:
Climate model containers for portability, reproducibility,
traceability, immutability, provenance, data & software preservation
The Workshop on Virtualization in High-Performance Cloud Computing
(VHPC) aims to bring together researchers and industrial practitioners
facing the challenges posed by virtualization in order to foster
discussion, collaboration, mutual exchange of knowledge and
experience, enabling research to ultimately provide novel solutions
for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper
presentations, each followed by 10 min discussion sections, plus
lightning talks that are limited to 5 minutes. Presentations may be
accompanied by interactive demonstrations.
Important Dates
Jan 31st, 2020 - Abstract
Apr 5th, 2020 - Paper submission deadline (Springer LNCS)
Apr 26th, 2020 - Acceptance notification
June 25th, 2020 - Workshop Day
July 10th, 2020 - Camera-ready version due
Chair
Michael Alexander (chair), BOKU, Vienna, Austria
Anastassios Nanos (co-chair), Sunlight.io, UK
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Paolo Bonzini, Redhat, Italy
Jakob Blomer, CERN, Europe
Eduardo César, Universidad Autonoma de Barcelona, Spain
Taylor Childers, Argonne National Laboratory, USA
Stephen Crago, USC ISI, USA
Tommaso Cucinotta, St. Anna School of Advanced Studies, Italy
François Diakhaté, CEA DAM Ile de France, France
Kyle Hale, Northwestern University, USA
Brian Kocoloski, Washington University, USA
John Lange, University of Pittsburgh, USA
Giuseppe Lettieri, University of Pisa, Italy
Klaus Ma, Huawei, China
Alberto Madonna, Swiss National Supercomputing Center, Switzerland
Nikos Parlavantzas, IRISA, France
Anup Patel, Western Digital, USA
Kevin Pedretti, Sandia National Laboratories, USA
Amer Qouneh, Western New England University, USA
Carlos Reaño, Queen’s University Belfast, UK
Adrian Reber, Redhat, Germany
Riccardo Rocha, CERN, Europe
Borja Sotomayor, University of Chicago, USA
Jonathan Sparks, Cray, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
John Walters, USC ISI, USA
Yasuhiro Watashiba, Osaka University, Japan
Chao-Tung Yang, Tunghai University, Taiwan
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, keywords, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work. Accepted papers will be published in a
Springer LNCS volume.
The format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Abstract, Paper Submission Link:
edas[dot]info/newPaper.php?c=26973
Lightning Talks
Lightning Talks are non-paper track, synoptical in nature and are
strictly limited to 5 minutes. They can be used to gain early
feedback on ongoing research, for demonstrations, to present research
results, early research ideas, perspectives and positions of interest
to the community. Submit abstract via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with
the International Supercomputing Conference - High Performance (ISC)
2020, June 21-25, Frankfurt, Germany.
[libvirt] [libvirt-tck PATCH v2] Add cases for nvram
by dzheng@redhat.com
From: Dan Zheng <dzheng(a)redhat.com>
This adds tests for the flags below:
- Sys::Virt::Domain::UNDEFINE_KEEP_NVRAM
- Sys::Virt::Domain::UNDEFINE_NVRAM
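For context, the Perl constants above map onto the C-level undefine flags; a minimal sketch of the equivalent calls through the C API (the helper name and error handling are illustrative only):

#include <libvirt/libvirt.h>

/* Undefine a guest while choosing what happens to its NVRAM file,
 * mirroring the two flags the test exercises. */
static int
undefine_with_nvram_policy(virConnectPtr conn, const char *name, int keep_nvram)
{
    virDomainPtr dom = virDomainLookupByName(conn, name);
    unsigned int flags = keep_nvram ? VIR_DOMAIN_UNDEFINE_KEEP_NVRAM
                                    : VIR_DOMAIN_UNDEFINE_NVRAM;
    int ret;

    if (!dom)
        return -1;

    ret = virDomainUndefineFlags(dom, flags);
    virDomainFree(dom);
    return ret;
}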
v1: https://www.redhat.com/archives/libvir-list/2019-December/msg00932.html
Signed-off-by: Dan Zheng <dzheng(a)redhat.com>
---
scripts/domain/401-ovmf-nvram.t | 144 ++++++++++++++++++++++++++++++++
1 file changed, 144 insertions(+)
create mode 100644 scripts/domain/401-ovmf-nvram.t
diff --git a/scripts/domain/401-ovmf-nvram.t b/scripts/domain/401-ovmf-nvram.t
new file mode 100644
index 0000000..4af2117
--- /dev/null
+++ b/scripts/domain/401-ovmf-nvram.t
@@ -0,0 +1,144 @@
+# -*- perl -*-
+#
+# Copyright (C) 2009 Red Hat, Inc.
+# Copyright (C) 2018 Dan Zheng (dzheng(a)redhat.com)
+#
+# This program is free software; You can redistribute it and/or modify
+# it under the GNU General Public License as published by the Free
+# Software Foundation; either version 2, or (at your option) any
+# later version
+#
+# The file "LICENSE" distributed along with this file provides full
+# details of the terms and conditions
+#
+
+=pod
+
+=head1 NAME
+
+domain/401-ovmf-nvram.t - Test OVMF related functions and flags
+
+=head1 DESCRIPTION
+
+The test cases validates OVMF related APIs and flags
+
+Sys::Virt::Domain::UNDEFINE_KEEP_NVRAM
+Sys::Virt::Domain::UNDEFINE_NVRAM
+
+=cut
+
+use strict;
+use warnings;
+
+use Test::More tests => 6;
+
+use Sys::Virt::TCK;
+use File::stat;
+use File::Copy;
+
+
+my $tck = Sys::Virt::TCK->new();
+my $conn = eval { $tck->setup(); };
+BAIL_OUT "failed to setup test harness: $@" if $@;
+END { $tck->cleanup if $tck; }
+
+
+sub setup_nvram {
+
+ my $loader_path = shift;
+ my $nvram_template = shift;
+ my $nvram_path = shift;
+
+ # Check below two files should exist
+ # - /usr/share/OVMF/OVMF_CODE.secboot.fd
+ # - /usr/share/OVMF/OVMF_VARS.fd
+ if (!stat($loader_path) or !stat($nvram_template)) {
+ return undef;
+ }
+
+ # Ensure the sample nvram file exists
+ copy($nvram_template, $nvram_path) or die "Copy failed: $!";
+
+ # Use 'q35' as machine type and add below lines to guest xml
+ # <loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
+ # <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/test_VARS.fd</nvram>
+
+ my $xml = $tck->generic_domain(name => "tck")->as_xml;
+ my $xp = XML::XPath->new($xml);
+
+ if ($xp->getNodeText("/domain/os/type/\@machine") ne 'q35') {
+ diag "Changing guest machine type to q35";
+ $xp->setNodeText("/domain/os/type/\@machine", "q35");
+ }
+
+ my $loader_node = XML::XPath::Node::Element->new('loader');
+ my $loader_text = XML::XPath::Node::Text->new($loader_path);
+ my $attr_ro = XML::XPath::Node::Attribute->new('readonly');
+ my $attr_secure = XML::XPath::Node::Attribute->new('secure');
+ my $attr_type = XML::XPath::Node::Attribute->new('type');
+
+ $attr_ro -> setNodeValue('yes');
+ $attr_secure -> setNodeValue('yes');
+ $attr_type -> setNodeValue('pflash');
+
+ $loader_node->appendChild($loader_text);
+ $loader_node->appendAttribute($attr_ro);
+ $loader_node->appendAttribute($attr_secure);
+ $loader_node->appendAttribute($attr_type);
+
+ my $nvram_node = XML::XPath::Node::Element->new('nvram');
+ my $nvram_text = XML::XPath::Node::Text->new($nvram_path);
+ my $attr_template = XML::XPath::Node::Attribute->new('template');
+
+ $attr_template -> setNodeValue($nvram_template);
+
+ $nvram_node->appendChild($nvram_text);
+ $nvram_node->appendAttribute($attr_template);
+
+ my $smm_node = XML::XPath::Node::Element->new('smm');
+ my $attr_state = XML::XPath::Node::Attribute->new('state');
+ $attr_state -> setNodeValue("on");
+ $smm_node -> appendAttribute($attr_state);
+
+ my ($root) = $xp->findnodes('/domain/os');
+ $root->appendChild($loader_node);
+ $root->appendChild($nvram_node);
+ ($root) = $xp->findnodes('/domain/features');
+ $root->appendChild($smm_node);
+
+ $xml = $xp->findnodes_as_string('/');
+ diag $xml;
+ return $xml;
+}
+
+diag "Defining an inactive domain config with nvram";
+my $loader_file_path = '/usr/share/OVMF/OVMF_CODE.secboot.fd';
+my $nvram_file_template = '/usr/share/OVMF/OVMF_VARS.fd';
+my $nvram_file_path = '/var/lib/libvirt/qemu/nvram/test_VARS.fd';
+
+my $xml = setup_nvram($loader_file_path, $nvram_file_template, $nvram_file_path);
+
+SKIP: {
+ diag "Require files ($loader_file_path, $nvram_file_template) for testing";
+ skip "Please install OVMF and ensure necessary files exist", 5 if !defined($xml);
+ my $dom;
+
+ diag "Test Sys::Virt::Domain::UNDEFINE_KEEP_NVRAM";
+ ok_domain(sub { $dom = $conn->define_domain($xml) }, "defined domain with nvram configure");
+ diag "Checking nvram file already exists";
+ my $st = stat($nvram_file_path);
+ ok($st, "File '$nvram_file_path' exists as expected");
+ $dom->undefine(Sys::Virt::Domain::UNDEFINE_KEEP_NVRAM);
+ diag "Checking nvram file still exists";
+ $st = stat($nvram_file_path);
+ ok($st, "File '$nvram_file_path' still exists as expected");
+
+ diag "Test Sys::Virt::Domain::UNDEFINE_NVRAM";
+ ok_domain(sub { $dom = $conn->define_domain($xml) }, "defined domain with nvram configure");
+ $dom->undefine(Sys::Virt::Domain::UNDEFINE_NVRAM);
+ diag "Checking nvram file removed";
+ $st = stat($nvram_file_path);
+ ok(!$st, "File '$nvram_file_path' is removed");
+}
+ok_error(sub { $conn->get_domain_by_name("tck") }, "NO_DOMAIN error raised from missing domain",
+ Sys::Virt::Error::ERR_NO_DOMAIN);
--
2.18.1