[libvirt-users] Fwd: Virtualization in High-Performance Cloud Computing (VHPC '16)
by VHPC 16
====================================================================
CALL FOR PAPERS
11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16)
held in conjunction with the International Supercomputing Conference - High Performance,
June 19-23, 2016, Frankfurt, Germany.
====================================================================
Date: June 23, 2016
Workshop URL: http://vhpc.org
Lightning talk abstract registration deadline: February 29, 2016
Paper/publishing track abstract registration deadline: March 21, 2016
Paper Submission Deadline: April 25, 2016
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible resource management in modern data centers, and particularly in cloud environments. Cloud providers need to manage complex infrastructures in a seamless fashion to support the highly dynamic and heterogeneous workloads and hosted applications customers deploy. Similarly, HPC environments have been increasingly adopting techniques that enable flexible management of vast computing and networking resources, close to marginal provisioning cost, which is unprecedented in the history of scientific and commercial computing.
Various virtualization technologies contribute to the overall picture in different ways: machine virtualization, with its capability to enable consolidation of multiple underutilized servers with heterogeneous software and operating systems (OSes), and its capability to live-migrate a fully operating virtual machine (VM) with a very short downtime, enables novel and dynamic ways to manage physical servers; OS-level virtualization (i.e., containerization), with its capability to isolate multiple user-space environments and to allow for their coexistence within the same OS kernel, promises to provide many of the advantages of machine virtualization with high levels of responsiveness and performance; I/O virtualization allows physical NICs/HBAs to take traffic from multiple VMs or containers; network virtualization, with its capability to create logical network overlays that are independent of the underlying physical topology and IP addressing, provides the fundamental ground on top of which evolved network services can be realized with an unprecedented level of dynamicity and flexibility; the increasingly adopted paradigm of Software-Defined Networking (SDN) promises to extend this flexibility to the control and data planes of network paths.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions related to virtualization across the entire software stack, with a special focus on the intersection of HPC and the cloud. Topics include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids
- OS-level virtualization including container runtimes (Docker, rkt et al.)
- Lightweight compute node operating systems/VMMs
- Optimizations of virtual machine monitor platforms, hypervisors
- QoS and SLA in hypervisors and network virtualization
- Cloud-based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Virtual per-job / on-demand clusters and cloud bursting
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Programming models for virtualized environments
- Virtualization in data-intensive computing and Big Data processing
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in the cloud
- Topology management and optimization for distributed virtualized applications
- Adaptation of emerging HPC technologies (high-performance networks, RDMA, etc.)
- I/O and storage virtualization, virtualization-aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to bring together researchers and industrial practitioners facing the challenges posed by virtualization, in order to foster discussion, collaboration, and mutual exchange of knowledge and experience, enabling research to ultimately provide novel solutions for the virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20-minute paper presentations, each followed by a 10-minute discussion, plus lightning talks that are limited to 5 minutes. Presentations may be accompanied by interactive demonstrations.
Important Dates
February 29, 2016 - Lightning talk abstract registration
March 21, 2016 - Paper/publishing track abstract registration
April 25, 2016 - Full paper submission
May 30, 2016 - Acceptance notification
June 23, 2016 - Workshop Day
July 25, 2016 - Camera-ready version due
Chair
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Research, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Robert Gardner, University of Chicago, USA
William Gardner, University of Guelph, Canada
Wolfgang Gentzsch, UberCloud, USA
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Kornilios Kourtis, IBM Research, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Paul Mundt, Adaptant, Germany
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Seetharami Seelam, IBM Research, USA
Josh Simons, VMware, USA
Borja Sotomayor, University of Chicago, USA
Dieter Suess, TU Wien, Austria
Craig Stewart, Indiana University, USA
Ananta Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Amit Vasudevan, Carnegie Mellon University, USA
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Gianluigi Zanetti, CRS4, Italy
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two members of the program committee and external reviewers. Submissions should include an abstract, keywords, and the e-mail address of the corresponding author, and must not exceed 10 pages, including tables and figures, at a main font size no smaller than 11 point. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the conference to present the work.
The format must follow the Springer LNCS style. Initial submissions are in PDF; authors of accepted papers will be requested to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=21801
Lightning Talks
Lightning Talks are a non-paper track, synoptical in nature, and strictly limited to 5 minutes. They can be used to gain early feedback on ongoing research, for demonstrations, or to present research results, early research ideas, perspectives and positions of interest to the community. Submit abstracts via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the International Supercomputing Conference - High Performance (ISC) 2016, June 19-23, Frankfurt, Germany.
[libvirt-users] inquiry .. help!
by Mezo Hindawi
Hello. I am new to libvirt. I have installed the Xen hypervisor on my Ubuntu 14.04 LTS machine.
1- I am trying to use php-libvirt-control from a PHP script (Apache server).
2- I have downloaded and compiled libvirt, and I can see it in my PHP configuration (phpinfo).
3- I have downloaded and compiled libvirt-php.
4- I have downloaded and compiled php-libvirt-control.
5- When I simply copy the php-libvirt-control directory to /var/www, it doesn't work. I used localhost/index.php, which should take me to the index page just like any other website. Is there something I am missing?
[libvirt-users] virDomainMemoryStats available tags
by Jean-Pierre Ribeauville
Hi,
Using libvirt-1.2.8-16.el7_1.3.x86_64, it looks like there are 3 virDomainMemoryStats tags available:
VIR_DOMAIN_MEMORY_STAT_SWAP_IN
VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON
VIR_DOMAIN_MEMORY_STAT_RSS
Is there a plan to add the other ones?
Meanwhile, do you know which metrics oVirt uses to display the memory column value in the manager GUI?
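For reference, a minimal sketch of querying these statistics with the libvirt Python bindings ('guest' below is a placeholder domain name); only the tags the hypervisor/balloon driver actually reports show up in the result:

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('guest')  # placeholder domain name

# memoryStats() returns a dict keyed by tag name (e.g. 'swap_in',
# 'actual', 'rss'); values are in kB.
for tag, value in dom.memoryStats().items():
    print('%-20s %d kB' % (tag, value))

conn.close()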
Thx.
J.P. Ribeauville
P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5 Bureau 4
jpribeauville@axway.com
http://www.axway.com
Re: [libvirt-users] [netcf-devel] Re: Error when creating bridge with virt-manager
by Laine Stump
On 02/04/2016 02:47 PM, Niccolò Belli wrote:
> Thanks, selecting "Specify shared device name" worked flawlessly!
> One more question: often I work with virtual machines with public ip
> addresses and routed networking (not bridges!), how can I achieve it
> with Arch and virt-manager?
If you want to create a completely separate subnet contained within the host, and have all traffic to/from the outside routed via the host's IP stack, define a libvirt network, set up the IP subnet you want for the network, then in the final screen select "Forwarding to physical network", Destination: "Any physical device", Mode: "Routed". Once you've started this network, it will appear in the list of possible connections for a guest's network interfaces. Note that you will need to teach the rest of the network to forward packets for this new subnet to the host's physical ethernet IP (but if you're specifically asking for a routed network, then you likely already know this (either that or I've misunderstood your question) :-)
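For reference, the same network can be defined outside virt-manager by feeding libvirt the XML directly; a minimal sketch with the Python bindings, where the network name and the 192.168.100.0/24 subnet are just placeholders:

import libvirt

# Routed network: guest traffic is routed through the host's IP stack.
# Omitting dev= on <forward> corresponds to "Any physical device".
routed_net_xml = """
<network>
  <name>routednet</name>
  <forward mode='route'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkDefineXML(routed_net_xml)  # persistent definition
net.setAutostart(True)
net.create()                                 # start the network now
conn.close()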
You can see more info about the possible types of networks here:
http://www.libvirt.org/formatnetwork.html
I've set Reply-To: in this message to libvirt-users@redhat.com. It
really is a more appropriate place for these questions, as the netcf
mailing list deals specifically with the netcf library, which isn't
involved in any of the things we're discussing here.
[libvirt-users] libvirt.so is not safe to use from setuid programs
by Jean-Pierre Ribeauville
Hi,
When trying to connect to the hypervisor from a binary that has the setuid bit set, I get the following error:
Unable to perform virConnectOpenReadOnly function error(internal error: libvirt.so is not safe to use from setuid programs)
My test software's configuration is the following:
-rwsr-xr-x. 1 root root 3374956 Feb 4 13:45 test
As this test software needs the setuid bit to be able to access O.S. metrics counters, how may I use it to retrieve KVM metrics counters?
Thx for help.
J.P. Ribeauville
P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5 Bureau 4
jpribeauville@axway.com
http://www.axway.com
[libvirt-users] Both virsh console logging and pty usage
by Carlos Godoy
Hi,
I am trying to set up my VMs to redirect their consoles to a file, but it is important to keep virsh console access so I can troubleshoot any problem I find while they are running. I have tried several settings through libvirt (serial and console tags) but I cannot get it right.
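For reference, a minimal sketch of the kind of device definition involved: a pty-backed serial console whose output is also written to a file through the <log> sub-element. Note that <log> requires a sufficiently recent libvirt/QEMU (an assumption about the versions in use), and the log path is a placeholder.

# Fragment for the <devices> section of the domain XML (e.g. via "virsh edit").
serial_xml = """
<serial type='pty'>
  <log file='/var/log/libvirt/qemu/guest-console.log' append='on'/>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>
"""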
Any suggestion? Any help?
Many thanks in advance.
Carlos G.
[libvirt-users] virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
by Jelle de Jong
Hello everybody,
This is a cross post to libvirt-users, libguestfs and ceph-users.
I came back from FOSDEM 2016 (this was my 7th year or so), saw the awesome development around virtualization going on, and want to thank everybody for their contributions.
I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years.
I have personally been supporting a Linux-HA (Pacemaker) DRBD/iSCSI/KVM platform for years now, and last year I started supporting Ceph RBD clusters with KVM hosts.
But I keep hitting some “pains”, and I have been wondering why they have not been solved and how I can best request help around this.
Tools that don't seem to fully work with Ceph RBD yet:
- virsh snapshot-create --quiesce --disk-only $quest
- virt-filesystems ${array_volume[@]}
- guestmount --ro ${array_volume[@]} -m $mount /mnt/$name-$disk
- virt-install also doesn't have Ceph RBD support, and I currently use virsh edit to add the RBD storage disks.
My request is to get these tools working with Ceph RBD as well, and not only to integrate with oVirt- or OpenStack-like systems. I use several types of backup strategies depending on the size of the data storage in use. I am not sure how oVirt and the like create secure, encrypted, incremental off-site backups of large data sets, but I use(d) combinations of rdiff-backup, duplicity and full guest dumps with dd, rbd export and xz.
Are these issues known and important enough for a roadmap or bug reports?
Should I start working for Red Hat? I don't have the resources myself, but I can help with some money towards a crowdfund or bounty.
Maybe my software is outdated?
virsh 1.2.9
ceph version 0.80.10
libguestfs-tools 1:1.28.1-1
3.16.7-ckt20-1+deb8u2
My current workarounds for full storage exports:
# freeze guest filesystems, snapshot the RBD volume, then thaw
virsh domfsfreeze $quest
sleep 2
virsh domblklist $quest
rbd snap create --snap snapshot $blkdevice
virsh domfsthaw $quest
# stream the snapshot, compress it and store it off-site
rbd export $blkdevice@snapshot - | xz -1 | ssh -p 222 $user@$server "dd of=/$location/$blkdevice$snapshot-$daystamp.dd.disk.gz"
rbd snap rm $blkdevice@snapshot
Kind regards,
Jelle de Jong
irc: #tuxcrafter
[libvirt-users] Advice on virtio, or any virtualization solution for hdparm
by Peter Teoh
At the present moment, my guest is running inside qemu and the host is KVM on Intel, running Ubuntu 14.04 with a 4.3.0 stable kernel. When I run "hdparm -i /dev/sdb" from within the guest, I get:
HDIO_GET_IDENTITY failed: Invalid argument
as the error, but on the host I get the full hard disk/SSD info.
How can I resolve this so that the output is the same for both host and guest?
My strace of hdparm from within the guest (just "-e ioctl" is traced):
ioctl(3, HDIO_GET_MULTCOUNT, 0x618ef0) = -1 EINVAL (Invalid argument)
ioctl(3, SG_IO, {'S', SG_DXFER_FROM_DEV, cmd[16]=[85, 08, 0e, 00, 00, 00,
01, 00, 00, 00, 00, 00, 00, 40, ec, 00], mx_sb_len=32, iovec_count=0,
dxfer_len=512, timeout=15000, flags=0,
data[512]=["@\0\377?7\310\20\0\0\0\0\0?\0\0\0\0\0\0\0HPAD0409105B"...],
status=00, masked_status=00, sb[0]=[], host_status=0, driver_status=0,
resid=0, duration=184, info=0}) = 0
ioctl(3, HDIO_GET_IDENTITY, 0x7fffda088500) = -1 EINVAL (Invalid argument)
HDIO_GET_IDENTITY failed: Invalid argument
+++ exited with 22 +++
And at the host level:
ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS,
0x7ffd4b096d90) = -1 ENOTTY (Inappropriate ioctl for device)
ioctl(3, HDIO_GET_MULTCOUNT, 0x618ef0) = -1 ENOTTY (Inappropriate ioctl
for device)
ioctl(3, SG_IO, {'S', SG_DXFER_FROM_DEV, cmd[16]=[85, 08, 0e, 00, 00, 00,
01, 00, 00, 00, 00, 00, 00, 40, ec, 00], mx_sb_len=32, iovec_count=0,
dxfer_len=512, timeout=15000, flags=0,
data[512]=["@\0\377?7\310\20\0\0\0\0\0?\0\0\0\0\0\0\0HPAD0409105B"...],
status=00, masked_status=00, sb[0]=[], host_status=0, driver_status=0,
resid=0, duration=184, info=0}) = 0
ioctl(3, HDIO_GET_IDENTITY, 0x7ffd4b0976e0) = 0
+++ exited with 0 +++
And my qemu command line:
sudo ./x86_64-softmmu/qemu-system-x86_64 -m 1024 -boot c -enable-kvm -net nic -net user \
-device virtio-scsi-pci \
-drive if=none,file=/dev/sdb,id=sdb,cache=none,format=raw \
-device scsi-block,drive=sdb \
-hda /home/user/ubuntu1404_x86_64/ubuntu1404_x86_64.img
where qemu-system-x86_64 is freshly compiled from the latest qemu-devel git tree.
I would like the SSD (internal SATA) at /dev/sdb to be passed directly into qemu.
Please kindly recommend the best solution: distro (CentOS?), kernel version, qemu command line, and setup procedure for libvirtd and/or virtio-scsi.
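For reference, a minimal sketch of how the same virtio-scsi whole-device passthrough could be expressed in libvirt domain XML, in case that helps with the libvirtd side of the question (the device paths, target name and controller index are placeholders):

# Fragment for the <devices> section of a libvirt domain definition,
# mirroring the qemu command line above: a virtio-scsi controller plus
# the host SSD attached as a SCSI LUN (device='lun' allows generic SCSI
# command passthrough, i.e. the SG_IO path that hdparm falls back to).
scsi_passthrough_xml = """
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb'/>
  <target dev='sda' bus='scsi'/>
</disk>
"""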
[libvirt-users] generate interface MAC addresses in a particular order
by Andrei Perietanu
Hi all,
I am using libvirt to manage VMs on my system; after creating a VM (by default no NICs are present in the configuration) you can add any number of interfaces to it (as long as they exist on the host).
To do that, I edit the configuration XML:
# (uses: import libvirt; import xml.etree.ElementTree as ET)
vmXml = self.domain.XMLDesc()               # current domain definition
root = ET.fromstring(vmXml)
devices = root.find('./devices')
# append a new bridge-type interface backed by the host bridge bIntf
intf = ET.SubElement(devices, 'interface')
intf.set('type', 'bridge')
src = ET.SubElement(intf, 'source')
src.set('bridge', bIntf)
model = ET.SubElement(intf, 'model')
model.set('type', 'e1000')
xml = ET.tostring(root)                     # note: bytes under Python 3; decode() before defineXML
self.conn.defineXML(xml)
Now the problem I have is that the MAC addresses are auto-generated, and because of this there is no way to predict which interface number the newly added interface will map to on the VM. Ideally, the first added interface is mapped to eth0/0, the second one to eth0/1, etc. Since the mappings depend on the MAC addresses, I figured that is the part I need to have control over.
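To make this concrete, a minimal sketch of what taking control of the MAC would look like: adding an explicit <mac> element alongside the other sub-elements instead of letting libvirt generate the address (the 52:54:00 address below is just a placeholder from the KVM-assigned range):

# Same ElementTree-based edit as above, with the MAC set explicitly so it
# is predictable rather than auto-generated by libvirt.
mac = ET.SubElement(intf, 'mac')
mac.set('address', '52:54:00:00:00:01')  # placeholder; must be unique per interface

Whether a fixed, predictable MAC is actually enough to get the eth0/N ordering I want inside the guest is exactly what I am unsure about.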
Any ideas?
Thanks,
Andrei