[libvirt-users] VHPC at ISC extension - Papers due May 2
by VHPC 17
====================================================================
CALL FOR PAPERS
12th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '17)
held in conjunction with the International Supercomputing Conference - High Performance,
June 18-22, 2017, Frankfurt, Germany.
(Springer LNCS Proceedings)
====================================================================
Date: June 22, 2017
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 2, 2017 (extended), Springer LNCS, rolling abstract submission
Abstract/Paper Submission Link: https://edas.info/newPaper.php?c=23179
Keynotes:
Satoshi Matsuoka, Professor of Computer Science, Tokyo Institute of Technology
John Goodacre, Professor in Computer Architectures, University of Manchester; Director of Technology and Systems, ARM Ltd. Research Group; Chief Scientific Officer, Kaleao Ltd.
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible resource management in modern data centers, and particularly in cloud environments. Cloud providers need to manage complex infrastructures in a seamless fashion to support the highly dynamic and heterogeneous workloads and hosted applications customers deploy. Similarly, HPC environments have been increasingly adopting techniques that enable flexible management of vast computing and networking resources, close to marginal provisioning cost, which is unprecedented in the history of scientific and commercial computing.
Various virtualization technologies contribute to the overall picture in different ways: machine virtualization, with its capability to enable consolidation of multiple underutilized servers with heterogeneous software and operating systems (OSes), and its capability to live-migrate a fully operating virtual machine (VM) with very short downtime, enables novel and dynamic ways to manage physical servers; OS-level virtualization (i.e., containerization), with its capability to isolate multiple user-space environments and to allow for their coexistence within the same OS kernel, promises to provide many of the advantages of machine virtualization with high levels of responsiveness and performance; I/O virtualization allows physical NICs/HBAs to take traffic from multiple VMs or containers; network virtualization, with its capability to create logical network overlays that are independent of the underlying physical topology and IP addressing, provides the fundamental ground on top of which evolved network services can be realized with an unprecedented level of dynamicity and flexibility; and the increasingly adopted paradigm of Software-Defined Networking (SDN) promises to extend this flexibility to the control and data planes of network paths.
Publication
Accepted papers will be published in a Springer LNCS proceedings volume.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions related to virtualization across the entire software stack, with a special focus on the intersection of HPC and the cloud.
Major Topics
- Virtualization in supercomputing environments, HPC clusters, HPC in the cloud and grids
- OS-level virtualization and containers (Docker, rkt, Singularity, Shifter, i.a.)
- Lightweight/specialized operating systems, unikernels
- Optimizations of virtual machine monitor platforms and hypervisors
- Hypervisor support for heterogeneous resources (GPUs, co-processors, FPGAs, etc.)
- Virtualization support for emerging memory technologies
- Virtualization in enterprise HPC and microvisors
- Software-defined networks and network virtualization
- Management and deployment of virtualized environments and orchestration (Kubernetes i.a.)
- Workflow-pipeline container-based composability
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Virtualization in data-intensive computing and Big Data processing - HPC convergence
- Adaptation of HPC technologies in the cloud (high performance networks, RDMA, etc.)
- ARM-based hypervisors, ARM virtualization extensions
- I/O virtualization and cloud-based storage systems
- GPU, FPGA and many-core accelerator virtualization
- Job scheduling/control/policy and container placement in virtualized environments
- Cloud reliability, fault-tolerance and high-availability
- QoS and SLA in virtualized environments
- IaaS platforms, cloud frameworks and APIs
- Large-scale virtualization in domains such as finance and government
- Energy-efficient and power-aware virtualization
- Container security
- Configuration management tools for containers (including CFEngine, Puppet, i.a.)
- Emerging topics including multi-kernel approaches and NUMA in hypervisors
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to bring together researchers and industrial practitioners facing the challenges posed by virtualization in order to foster discussion, collaboration, and mutual exchange of knowledge and experience, enabling research to ultimately provide novel solutions for the virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper presentations, each followed by a 10 min discussion session, plus lightning talks that are limited to 5 minutes. Presentations may be accompanied by interactive demonstrations.
Important Dates
Rolling Abstract Submission
May 2, 2017 - Paper submission deadline (extended)
May 30, 2017 - Acceptance notification
June 22, 2017 - Workshop Day
July 18, 2017 - Camera-ready version due
Chair
Michael Alexander (chair), scaledinfra technologies, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Jakob Blomer, CERN, Europe
Ron Brightwell, Sandia National Laboratories, USA
Eduardo César, Universidad Autonoma de Barcelona, Spain
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Maria Girone, CERN, Europe
Kyle Hale, Northwestern University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Che-Rung Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Thomas Ryd, CFEngine, Norway
Na Zhang, VMware, USA
Borja Sotomayor, University of Chicago, USA
Craig Stewart, Indiana University, USA
Ananta Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Paper Submission - Publication
Papers submitted to the workshop will be reviewed by at least two members of the program committee and external reviewers. Submissions should include an abstract, keywords, and the e-mail address of the corresponding author, and must not exceed 10 pages, including tables and figures, at a main font size no smaller than 11 point. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the conference to present the work. Accepted papers will be published in a Springer LNCS volume.
The format must follow the Springer LNCS style. Initial submissions are in PDF; authors of accepted papers will be requested to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=23179
Lightning Talks
Lightning talks are a non-paper track, synoptical in nature, and strictly limited to 5 minutes. They can be used to gain early feedback on ongoing research, for demonstrations, or to present research results, early research ideas, and perspectives and positions of interest to the community. Submit an abstract via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the International Supercomputing Conference - High Performance (ISC) 2017, June 18-22, Frankfurt, Germany.
[libvirt-users] Windows Guest Server 2016, bad performance of whole virtualisation system
by Marko Weber | 8000
hello,
I set up a KVM guest with virt-manager and installed Windows Server 2016.
Performance settings for the disk: hypervisor default for cache & IO mode.
I added a 2nd disk to the guest, also with hypervisor default settings for cache & IO mode.
Under disk settings in Windows Server 2016 I selected "format whole disk NTFS"; I did not choose the "quick format" option.
While Windows is formatting this 1 TB drive, performance on the virtualisation host goes down: logging in via SSH takes ages, and top takes 2 minutes to start up.
The physical disks are connected to an LSI RAID controller, and the filesystem on the virtualisation server where the qcow2 files live is XFS.
Do I have to set other "performance options" than "hypervisor default" to keep the system performant while formatting the disk?
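For context, the cache and IO mode live on the <driver> element of each disk in the domain XML. A hedged sketch of the non-default combination often suggested for local, RAID-backed qcow2 storage (the file path and target device below are placeholders, not taken from the original guest):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' bypasses the host page cache; io='native' uses Linux AIO -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/data.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With "hypervisor default" (no cache attribute), QEMU typically falls back to writeback caching, so a guest formatting a 1 TB disk can flood the host page cache and starve other host processes. Whether cache='none' resolves this particular slowdown is an assumption worth testing, not a guarantee.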
thanks for any hints.
marko
--
zbfmail - Mittendrin statt nur Datei!
[libvirt-users] Cannot save domain
by Tony Arnold
I have a domain running a Windows 10 system on an Ubuntu 16.10 host
system. The domain has 4096MiB of memory. When I try to save the
domain, or managedsave the domain, I get an error when it's about 96%
complete. The terminal session looks like this:
virsh # save Windows10 windows10-20170402.save --bypass-cache --verbose
Save: [ 95 %]error: Failed to save domain Windows10 to windows10-20170402.save
error: operation failed: domain is not running
It seems the domain shuts down before the save has completed.
libvirt is at version 2.1.0
Should I submit this as a bug, or is there something I can change to
fix this?
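Before filing a bug, the per-domain QEMU log usually records why the guest went away mid-save; a sketch of where to look (the paths assume a default system libvirtd instance):

```shell
# Check the domain's QEMU log around the time of the failed save
sudo tail -n 50 /var/log/libvirt/qemu/Windows10.log

# Confirm the current state and the reason libvirt recorded for it
virsh domstate --reason Windows10
```

If the log shows QEMU itself exiting during the save, that detail makes for a much more actionable bug report.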
Regards,
Tony.
--
Tony Arnold MBCS, CITP | Senior IT Security Analyst | Directorate of IT
Services | G64, Kilburn Building | The University of Manchester |
Manchester M13 9PL | T: +44 161 275 6093 | M: +44 773 330 0039
[libvirt-users] creating a lxc image to be used with libvirt-lxc
by Spike
Dear all,
I'm taking my first baby steps with libvirt-lxc, trying to convert over from an LXD installation, and one of the hurdles is putting together an image.
All the examples I found about libvirt-lxc refer to running /bin/sh in a container, almost as if it were docker, as opposed to running a "full system" like I've been doing with lxd. Also virt-install, often referred to in libvirt docs, seems to be specific to/only for kvm.
Can anybody point me to any documentation to achieve the same as you'd do
with lxd? would it even just work to use those images (
https://cloud-images.ubuntu.com/) with libvirt? Last but not least, is
there any way to "publish" a modified image so that I could make changes to
any of the above and then reuse the modified one as a base for other
containers?
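For what it's worth, virt-install can also target the LXC driver via a connection URI; a hedged sketch of defining a full-system container from a locally built root filesystem (the debootstrap step and all paths and names are illustrative assumptions):

```shell
# Build a minimal root filesystem (Debian example; any rootfs should do)
sudo debootstrap stable /var/lib/libvirt/lxc/mycontainer

# Define and start a full-system container under the libvirt LXC driver
sudo virt-install --connect lxc:/// \
    --name mycontainer --memory 512 \
    --filesystem /var/lib/libvirt/lxc/mycontainer,/ \
    --init /sbin/init
```

Reusing a modified image as a base would then be a matter of copying or snapshotting that rootfs directory before defining further containers; libvirt-lxc has no publish step analogous to LXD's.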
thank you,
spike
[libvirt-users] libvirt-lxc: good for production?
by Spike
Dear all,
I'm a happy lxc (lxd) user with a need to add a bunch of KVM images to the
mix. More importantly I need to have some simple frontend to give users the
ability to quickly run some VMs for testing.
Researching the topic brought me to virt-manager and from there libvirt.
I've had however a hard time to answer a few questions that I hope this
list can help me with:
1) is the libvirt-lxc driver actively developed? there's been a lot of
upgrades to lxc and there seems to be relatively little activity on the lxc
driver
2) is libvirt-lxc to be used in production to begin with? every single guide I found about libvirt pretty much points to KVM usage, with simple /bin/sh examples for lxc. Furthermore, stuff like virt-install seems to be exclusively catered to full os/KVM image creation, with no obvious way to create a container image.
thanks for any input; even just docs I missed that explain the above would be most helpful.
Spike
[libvirt-users] (Live) Migration safe vs. unsafe
by Michael Hierweck
Hi all,
virsh checks whether a (live) migration is safe or unsafe. When a migration is considered to be unsafe, it is rejected unless the --unsafe option is provided.
As part of those checks, virsh considers the cache settings for the underlying storage resources. In this context only cache="none" is considered to be safe.
I wonder why cache="directsync" might be harmful, as it bypasses the host page cache completely.
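For reference, the setting under discussion is the cache attribute on the disk <driver> element, and a migration libvirt flags as unsafe can still be forced; a sketch (domain name and destination URI are placeholders):

```shell
# Cache mode lives in the domain XML, e.g.:
#   <driver name='qemu' type='qcow2' cache='directsync'/>
# Forcing a migration that libvirt considers unsafe:
virsh migrate --live --unsafe mydomain qemu+ssh://desthost/system
```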
Regards,
Michael
[libvirt-users] migrated RHEL7/CentOS7 VMs fail to boot
by Daniel Pocock
I migrated a CentOS7 VM (the fedrtc.org site) from an XCP environment to
a libvirt/KVM environment.
This involved using qemu-img to convert the image from VHD to qcow2 and
then using virt-install --import to define the VM in libvirt.
Three problems occurred during boot:
a) on the first boot, the BIOS screen and grub screen don't appear at all; the screen is blank for a couple of seconds and then the message about loading a kernel appears. After a hard reset, on the second and subsequent attempts I see the grub screen and have the opportunity to interact with it. This actually happened with many of my VMs, not just the CentOS7 VM.
b) with console=hvc0 in the grub config, the kernel refused to boot, no
error appeared on the screen. I was able to remove that easily enough
by pressing "e" in grub. Rather than halting, should the kernel fall
back to VGA perhaps when the console= argument is not valid? Or could
KVM emulate the Xen console device to make migrations easier?
c) after that, the boot proceeds up to about the point where I see
systemd messages about starting basic system. Then it sits there for a
few minutes and then these messages appear:
warning: dracut-initqueue timeout - starting timeout scripts
and after that I see "A start job is running for dev-mapp...oot.device (X min Ys / no limit)".
Rebooting and choosing the "rescue" image from the bottom of the grub
menu got me past that, I was able to log in and then I ran:
dracut --kver 3.10.0-514.10.2.el7.x86_64 --force
and on the next attempt it was able to boot successfully. The block
device for the root FS is an LVM volume (so the name should have been
constant) and the block device for the /boot filesystem listed in
/etc/fstab was mounted by UUID (the block device name itself changed
from xvda1 (XCP) to vda1 (KVM)). All my Debian VMs were able to cope
with this device name changing. The CentOS7 system was originally
installed using default settings for just about everything.
Why does dracut need to be re-run in this situation? Should a bug
report be filed about this?
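One plausible reading (an assumption, not established in this message) is that the initramfs built under Xen lacked the virtio modules a KVM guest needs to find its disks, which would explain why regenerating it with dracut fixed the boot. Forcing those drivers in explicitly would look roughly like:

```shell
# Rebuild the initramfs for the running kernel, explicitly including
# the virtio drivers a KVM guest needs to see its disks
dracut --force --add-drivers "virtio_blk virtio_pci virtio_scsi" \
    /boot/initramfs-$(uname -r).img $(uname -r)
```

Debian's initramfs-tools includes virtio modules by default, which may be why the Debian VMs coped with the move without intervention.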