[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it
always fails silently: virsh reports success, but the interface is still present in the
domain XML. I'm not sure whether there is an existing bug for this. I have confirmed with
someone that disks show similar behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
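A possible workaround (just a sketch, untested) would be to retry the detach until the
interface is really gone from the live XML, instead of relying on a fixed sleep:
#!/bin/sh
# sketch: retry the detach until the interface disappears from the live XML
# (domain name and MAC are simply the ones from the example above)
MAC=52:54:00:98:c4:a0
for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network "$MAC"
    virsh dumpxml rhel7.2 | grep -q "$MAC" || break
    sleep 1
done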
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest.
I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be
used to turn off host offloading options. By default, the supported offloads are enabled
by QEMU. *Since 1.2.9 (QEMU only)* The mrg_rxbuf attribute can be used to control mergeable
rx buffers on the host side. Possible values are on (default) and off. *Since 1.2.13
(QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used
to turn off guest offloading options. By default, the supported offloads are enabled by
QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC in my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I
disable UFO without touching the host side, or does it always have to be disabled
on both host and guest as above?
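In other words, would a guest-only configuration along these lines be enough (just a
sketch, not verified)?
<interface type='network'>
  <source network='default'/>
  <target dev='vnet1'/>
  <model type='virtio'/>
  <driver name='vhost'>
    <!-- sketch: guest-side offload only, no <host> element -->
    <guest ufo='off'/>
  </driver>
</interface>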
Thanks,
Brs,
Natsu
[libvirt-users] Locking without virtlockd (nor sanlock)?
by Gionatan Danti
Hi list,
I would like to ask a clarification about how locking works. My test
system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64
I was under the impression that, by default, libvirt does not use any locks.
From here [1]: "The out of the box configuration, however, currently
uses the nop lock manager plugin". As "lock_manager" is commented out in my
qemu.conf file, I was expecting that no locks would be used to protect my
virtual disk from a double-started guest or from being assigned to another VM by mistake.
However, "cat /proc/locks" shows the following (17532905 being the vdisk inode):
[root@localhost tmp]# cat /proc/locks | grep 17532905
42: OFDLCK ADVISORY READ -1 fd:00:17532905 201 201
43: OFDLCK ADVISORY READ -1 fd:00:17532905 100 101
Indeed, trying to attach the disk to another machine and boot it gives
me an error (stating that the disk is already in use).
After enabling the "lockd" plugin and starting the same machine, "cat
/proc/locks" looks different:
[root@localhost tmp]# cat /proc/locks | grep 17532905
31: POSIX ADVISORY WRITE 19266 fd:00:17532905 0 0
32: OFDLCK ADVISORY READ -1 fd:00:17532905 201 201
33: OFDLCK ADVISORY READ -1 fd:00:17532905 100 101
As you can see, an *additional* write lock was granted. Again, assigning
the disk to another VM and booting it ends with the same error.
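For reference, enabling the plugin amounted to this (CentOS 7 default paths; the restart
commands are from memory):
# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# then restart the daemons so the setting takes effect
systemctl restart virtlockd libvirtd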
So, may I ask:
- why does libvirtd request READ locks even with the "lock_manager" option commented out?
- does that mean I can avoid modifying anything and rely on libvirtd to correctly lock
image files?
- if so, for which use cases should I use virtlockd?
Thanks.
[1] https://libvirt.org/locking-lockd.html
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] aarch64 vm doesn't boots
by daggs
Greetings,
I'm trying to bring up an Alpine rpi aarch64 image under KVM, but I end up with a stuck system. Here is the XML:
<domain type='qemu'>
  <name>alpine_rpi4_dev_machine</name>
  <uuid>b1b155fc-cb92-4f22-8904-c934dd24415b</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='aarch64' machine='virt'>hvm</type>
  </os>
  <features>
    <gic version='2'/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>cortex-a53</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-aarch64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/dagg/alpine-rpi4.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='2'/>
      <address type='virtio-mmio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/dagg/alpine-virt-3.11.2-aarch64.iso'/>
      <target dev='sdb' bus='scsi'/>
      <readonly/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='virtio-mmio'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='virtio-mmio'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:e0:7a:7b'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='virtio-mmio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <console type='pty'>
      <target type='virtio' port='1'/>
    </console>
  </devices>
</domain>
It was generated using this command:
virt-install --cpu cortex-a53 --name alpine_rpi4_dev_machine --cdrom ./alpine-virt-3.11.2-aarch64.iso --disk path=alpine-rpi4.qcow2,size=8 --vcpus 4 --memory 2048 --os-type linux --arch aarch64
I've tried adding a VNC server and a VGA device, but the screen stays black; qxl doesn't work either.
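For reference, the graphics/video snippet was along these lines (a reconstruction, not
necessarily the exact XML that was used):
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
<video>
  <!-- reconstruction; the exact model/attributes may have differed -->
  <model type='vga'/>
</video>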
I'm using Ubuntu 16.04 with libvirt 1.3.1; if this is a version issue, I can upgrade to the latest version.
What am I missing?
Thanks,
Dagg.
[libvirt-users] (no subject)
by Mauricio Tavares
When I use kvm+libvirt as my hypervisor at home, I usually
pass logical volumes through as the guests' drives (I could probably do better,
but the disk here is just a garden-variety SSD, not NVMe):
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/dev/vmhost_vg0/desktop'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
That works fine for local drives, but what if I want to use network
storage (I can provide NFS and iSCSI)? Should I:
1. Create one iSCSI target for each guest? The reasoning here is that some
OSes can boot off iSCSI.
2. Create a local LV with the minimum disk space required to boot, and
then network-mount the rest, be it over NFS or iSCSI?
3. Create one large iSCSI target, leave it unformatted, and configure it as
an LVM physical volume, handing out logical volumes as before, so the
guests won't know the difference?
I am leaning towards door #3 (rough sketch below), but I am open to suggestions.
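To make door #3 more concrete, the rough idea would be (device and VG names are
hypothetical; assume the iSCSI LUN shows up as /dev/sdc on the host):
# use the whole unformatted iSCSI LUN as an LVM physical volume
pvcreate /dev/sdc
vgcreate vmhost_san_vg0 /dev/sdc
# hand out per-guest logical volumes exactly as with the local VG
lvcreate -L 20G -n desktop vmhost_san_vg0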
[libvirt-users] Add support for vhost-user-scsi-pci/vhost-user-blk-pci
by Li Feng
Hi Guys,
I want to add vhost-user-scsi-pci/vhost-user-blk-pci support to libvirt.
The usage in QEMU looks like this:
Vhost-SCSI
-chardev socket,id=char0,path=/var/tmp/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
Vhost-BLK
-chardev socket,id=char1,path=/var/tmp/vhost.1
-device vhost-user-blk-pci,id=blk0,chardev=char1
Which XML representation should I add to libvirt?
Type1:
<hostdev mode='subsystem' type='vhost-user'>
  <source protocol='vhost-user-scsi' path='/tmp/vhost-scsi.sock'></source>
  <alias name="vhost-user-scsi-disk1"/>
</hostdev>
Type2:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source protocol='vhost-user' path='/tmp/vhost-scsi.sock'></source>
  <target dev='sdb' bus='vhost-user-scsi'/>
  <boot order='3'/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source protocol='vhost-user' path='/tmp/vhost-blk.sock'></source>
  <target dev='vda' bus='vhost-user-blk'/>
  <boot order='1'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
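For comparison, vhost-user network interfaces are already modeled in libvirt roughly
like this (from memory; MAC and socket path are placeholders, so please double-check
against formatdomain.html):
<interface type='vhostuser'>
  <mac address='52:54:00:3b:83:1a'/>
  <source type='unix' path='/tmp/vhost-net.sock' mode='client'/>
  <model type='virtio'/>
</interface>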
Could anyone give some suggestions?
Thanks,
Feng Li
--
The SmartX email address is only for business purpose. Any sent message
that is not related to the business is not authorized or permitted by
SmartX.
(In Chinese above: this mailbox is the work mailbox of Beijing Zhiling Haina Technology Co., Ltd. (SmartX); any email sent from it that is unrelated to work carries no express or implied authorization from the company.)
[libvirt-users] CfP VHPC20: HPC Containers-Kubernetes
by VHPC 20
====================================================================
CALL FOR PAPERS
15th Workshop on Virtualization in High-Performance Cloud Computing
(VHPC 20) held in conjunction with the International Supercomputing
Conference - High Performance, June 21-25, 2020, Frankfurt, Germany.
(Springer LNCS Proceedings)
====================================================================
Date: June 25, 2020
Workshop URL: vhpc[dot]org
Abstract registration Deadline: Jan 31st, 2020
Paper Submission Deadline: Apr 05th, 2020
Springer LNCS
Call for Papers
Containers and virtualization technologies constitute key enabling
factors for flexible resource management in modern data centers, and
particularly in cloud environments. Cloud providers need to manage
complex infrastructures in a seamless fashion to support the highly
dynamic and heterogeneous workloads and hosted applications customers
deploy. Similarly, HPC environments have been increasingly adopting
techniques that enable flexible management of vast computing and
networking resources, close to marginal provisioning cost, which is
unprecedented in the history of scientific and commercial computing.
Most recently, Function as a Service (FaaS) and serverless computing,
utilizing lightweight VMs and containers, widen the spectrum of
applications that can be deployed in a cloud environment, especially
in an HPC context. Here, HPC-provided services can become
accessible to distributed workloads outside of large cluster
environments.
Various virtualization-containerization technologies contribute to the
overall picture in different ways: machine virtualization, with its
capability to enable consolidation of multiple underutilized servers
with heterogeneous software and operating systems (OSes), and its
capability to live-migrate a fully operating virtual machine (VM)
with a very short downtime, enables novel and dynamic ways to manage
physical servers; OS-level virtualization (i.e., containerization),
with its capability to isolate multiple user-space environments and
to allow for their coexistence within the same OS kernel, promises to
provide many of the advantages of machine virtualization with high
levels of responsiveness and performance; lastly, unikernels provide
for many virtualization benefits with a minimized OS/library surface.
I/O Virtualization in turn allows physical network interfaces to take
traffic from multiple VMs or containers; network virtualization, with
its capability to create logical network overlays that are independent
of the underlying physical topology is furthermore enabling
virtualization of HPC infrastructures.
Publication
Accepted papers will be published in a Springer LNCS proceedings
volume.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions
related to virtualization across the entire software stack with a
special focus on the intersection of HPC, containers-virtualization
and the cloud.
Major Topics:
- HPC workload orchestration (Kubernetes)
- Kubernetes HPC batch
- HPC Container Environments Landscape
- HW Heterogeneity
- Container ecosystem (Docker alternatives)
- Networking
- Lightweight Virtualization
- Unikernels / LibOS
- State-of-the-art processor virtualization (RISC-V, EPI)
- Containerizing HPC Stacks/Apps/Codes:
Climate model containers
Each major topic encompasses design/architecture, management,
performance management, modeling and configuration/tooling.
Specifically, we invite papers that deal with the following topics:
- HPC orchestration (Kubernetes)
- Virtualizing Kubernetes for HPC
- Deployment paradigms
- Multitenancy
- Serverless
- Declarative data center integration
- Network provisioning
- Storage
- OCI i.a. images
- Isolation/security
- HW Accelerators, including GPUs, FPGAs, AI, and others
- State-of-practice/art, including transition to cloud
- Frameworks, system software
- Programming models, runtime systems, and APIs to facilitate cloud
adoption
- Edge use-cases
- Application adaptation, success stories
- Kubernetes Batch
- Scheduling, job management
- Execution paradigm - workflow
- Data management
- Deployment paradigm
- Multi-cluster/scalability
- Performance improvement
- Workflow / execution paradigm
- Podman: end-to-end Docker alternative container environment & use-cases
- Creating, Running containers as non-root (rootless)
- Running rootless containers with MPI
- Container live migration
- Running containers in restricted environments without setuid
- Networking
- Software defined networks and network virtualization
- New virtualization NICs / Nitro-like ASICs for the data center
- Kubernetes SDN policy (Calico i.a.)
- Kubernetes network provisioning (Flannel i.a.)
- Lightweight Virtualization
- Micro VMMs (Rust-VMM, Firecracker, solo5)
- Xen
- Nitro hypervisor (KVM)
- RVirt
- Cloud Hypervisor
- Unikernels / LibOS
- HPC Storage in Virtualization
- HPC container storage
- Cloud-native storage
- Hypervisors in storage virtualization
- Processor Virtualization
- RISC-V hypervisor extensions
- RISC-V Hypervisor ports
- EPI
- Composable HPC microservices
- Containerizing Scientific Codes
- Building
- Deploying
- Securing
- Storage
- Monitoring
- Use case for containerizing HPC codes:
Climate model containers for portability, reproducibility,
traceability, immutability, provenance, data & software preservation
The Workshop on Virtualization in High-Performance Cloud Computing
(VHPC) aims to bring together researchers and industrial practitioners
facing the challenges posed by virtualization in order to foster
discussion, collaboration, mutual exchange of knowledge and
experience, enabling research to ultimately provide novel solutions
for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper
presentations, each followed by 10 min discussion sections, plus
lightning talks that are limited to 5 minutes. Presentations may be
accompanied by interactive demonstrations.
Important Dates
Jan 31st, 2020 - Abstract
Apr 5th, 2020 - Paper submission deadline (Springer LNCS)
Apr 26th, 2020 - Acceptance notification
June 25th, 2020 - Workshop Day
July 10th, 2020 - Camera-ready version due
Chair
Michael Alexander (chair), BOKU, Vienna, Austria
Anastassios Nanos (co-chair), Sunlight.io, UK
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Paolo Bonzini, Redhat, Italy
Jakob Blomer, CERN, Europe
Eduardo César, Universidad Autonoma de Barcelona, Spain
Taylor Childers, Argonne National Laboratory, USA
Stephen Crago, USC ISI, USA
Tommaso Cucinotta, St. Anna School of Advanced Studies, Italy
François Diakhaté CEA DAM Ile de France, France
Kyle Hale, Northwestern University, USA
Brian Kocoloski, Washington University, USA
John Lange, University of Pittsburgh, USA
Giuseppe Lettieri, University of Pisa, Italy
Klaus Ma, Huawei, China
Alberto Madonna, Swiss National Supercomputing Center, Switzerland
Nikos Parlavantzas, IRISA, France
Anup Patel, Western Digital, USA
Kevin Pedretti, Sandia National Laboratories, USA
Amer Qouneh, Western New England University, USA
Carlos Reaño, Queen’s University Belfast, UK
Adrian Reber, Redhat, Germany
Riccardo Rocha, CERN, Europe
Borja Sotomayor, University of Chicago, USA
Jonathan Sparks, Cray, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
John Walters, USC ISI, USA
Yasuhiro Watashiba, Osaka University, Japan
Chao-Tung Yang, Tunghai University, Taiwan
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, keywords, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work. Accepted papers will be published in a
Springer LNCS volume.
The format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Abstract, Paper Submission Link:
edas[dot]info/newPaper.php?c=26973
Lightning Talks
Lightning Talks are a non-paper track, synoptic in nature, and
strictly limited to 5 minutes. They can be used to gain early
feedback on ongoing research, for demonstrations, or to present research
results, early research ideas, perspectives and positions of interest
to the community. Submit abstracts via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with
the International Supercomputing Conference - High Performance (ISC)
2020, June 21-25, Frankfurt, Germany.