[libvirt] route seems to have no effect on the network behavior
by pichon
Hello,
I'm trying to set a default gateway on an isolated LAN (because one of my VMs needs access to another LAN).
I'm trying to use the <route> element, but it doesn't affect the default route on my VMs.
Here is the network xml
<network>
<name>prd-private-lan</name>
<uuid>2222222222222222222222</uuid>
<bridge name='virbr3' stp='off' delay='0'/>
<mac address='52:54:00:08:1e:d8'/>
<domain name='pre' />
<dns>
<forwarder addr='8.8.4.4'/>
<forwarder addr='8.8.8.8'/>
</dns>
<ip address='10.10.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.10.0.128' end='10.10.0.254'/>
<host mac='xxxxxx' name='finternet.pret' ip='10.10.0.7'/>
</dhcp>
</ip>
<route address='10.10.0.0' netmask='255.255.255.0' gateway='10.10.0.7'/>
</network>
And here is what I get on a guest VM connected to prd-private-lan:
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens4
Can someone help me or tell me what to do?
What surprises me a bit is that the file /var/lib/libvirt/dnsmasq/prd-private-lan.conf doesn't contain any clause defining the new gateway, something like 'dhcp-option=3,10.10.0.7':
strict-order
no-resolv
server=8.8.4.4
server=8.8.8.8
domain=prd.pipiche.net
expand-hosts
pid-file=/var/run/libvirt/network/prd-private-lan.pid
except-interface=lo
bind-dynamic
interface=virbr3
dhcp-option=3
no-resolv
dhcp-range=10.10.0.128,10.10.0.254
dhcp-no-override
dhcp-lease-max=127
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/prd-private-lan.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/prd-private-lan.addnhosts
Of course I can add an explicit route on the guest - which I did - but for maintenance purposes I would like to avoid that and keep all configuration in the network XML.
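What I would ideally want is something along these lines directly in the network XML. I've seen mention of a dnsmasq passthrough namespace in newer libvirt releases, so this is only a sketch of what I have in mind (untested; availability depends on the libvirt version):

```xml
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>prd-private-lan</name>
  <!-- ... bridge/ip/dhcp elements as above ... -->
  <dnsmasq:options>
    <!-- push DHCP option 3 (router) = 10.10.0.7 to the guests -->
    <dnsmasq:option value='dhcp-option=3,10.10.0.7'/>
  </dnsmasq:options>
</network>
```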
Thanks in advance
Patrick
8 years, 10 months
[libvirt] [PATCH 00/13] vbox: more error checking
by Ján Tomko
Check for return values of libvirt's internal APIs in vboxDumpVideo
and vboxDumpDisplay.
Patch 10/13 should fix a complaint by Coverity (untested).
Lots of whitespace changes, -b is your friend.
Ján Tomko (13):
Check return value of vboxDumpVideo
vboxDumpDisplay: reduce indentation level
vboxDumpDisplay: more indentation reducing
vboxDumpDisplay: add addDesktop bool
vboxDumpDisplay: remove extra virReportOOMError
vboxDumpDisplay: split out def->graphics allocation
vboxDumpDisplay: allocate the graphics structure upfront
vboxDumpDisplay: fill out the graphics structure earlier
vboxDumpDisplay: clean up VIR_STRDUP usage
vboxDumpDisplay: check return of virDomainGraphicsListenSetAddress
vboxDumpDisplay: use VIR_APPEND_ELEMENT
vboxDumpDisplay: reuse the keyUtf16 variable
vboxDumpDisplay: remove suspicious strlen
src/vbox/vbox_common.c | 249 ++++++++++++++++++++++---------------------------
1 file changed, 113 insertions(+), 136 deletions(-)
--
2.4.10
[libvirt] [PATCH v4 0/4] vcpu info storage refactors - part 2
by Peter Krempa
Yet another version, since I've changed a few places beyond those requested in the review.
Peter Krempa (4):
qemu: vcpu: Aggregate code to set vCPU tuning
qemu: vcpu: Reuse qemuProcessSetupVcpu in vcpu hotplug
qemu: iothread: Aggregate code to set IOThread tuning
qemu: iothread: Reuse qemuProcessSetupIOThread in iothread hotplug
src/qemu/qemu_cgroup.c | 188 ------------------------
src/qemu/qemu_cgroup.h | 2 -
src/qemu/qemu_driver.c | 136 +-----------------
src/qemu/qemu_process.c | 376 +++++++++++++++++++++++++++++++++---------------
src/qemu/qemu_process.h | 6 +
5 files changed, 266 insertions(+), 442 deletions(-)
--
2.6.2
[libvirt] [PATCH] vircgroup: Update virCgroupGetPercpuStats stub
by Michal Privoznik
In commit 7938b533 we changed the function signature, but forgot to update the stub that's used on systems without cgroups, causing a build failure.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Pushed under trivial and build breaker rules.
src/util/vircgroup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f625cbc..6ce208e 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -4945,7 +4945,7 @@ virCgroupGetPercpuStats(virCgroupPtr group ATTRIBUTE_UNUSED,
unsigned int nparams ATTRIBUTE_UNUSED,
int start_cpu ATTRIBUTE_UNUSED,
unsigned int ncpus ATTRIBUTE_UNUSED,
- unsigned int nvcpupids ATTRIBUTE_UNUSED)
+ virBitmapPtr guestvcpus ATTRIBUTE_UNUSED)
{
virReportSystemError(ENOSYS, "%s",
_("Control groups not supported on this platform"));
--
2.4.10
[libvirt] [PATCH 0/7] storage:dir: ploop volumes support
by Olga Krishtal
This series of patches aims to add support for ploop volumes in directory storage pools.
Ploop is a disk loopback block device, not unlike loop but with many features such as dynamic resize, snapshots, backups, etc. Container images will be stored in ploop volumes, which can be manipulated via the virsh API.
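For illustration, creating a ploop volume in a dir pool via virsh might look like the following sketch (the exact XML is subject to review; pool name 'default' is assumed):

```xml
<volume>
  <name>container-image</name>
  <capacity unit='G'>4</capacity>
  <target>
    <format type='ploop'/>
  </target>
</volume>
```

which would then be created with something like `virsh vol-create default ploop-vol.xml`.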
[libvirt] How to check if 'Per-node memory binding' is supported in this libvirt version?
by Valeriy Solovyov
Hi All,
How to check if 'Per-node memory binding' is supported in this libvirt version?
PS:
Here is the virsh version output:
Compiled against library: libvirt 1.2.9
Using library: libvirt 1.2.9
Using API: QEMU 1.2.9
Running hypervisor: QEMU 2.0.0
on host
# uname -a
Linux nfv 3.13.0-40-generic #69-Ubuntu SMP Thu Nov 13 17:53:56 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.1 LTS
Release: 14.04
Codename: trusty
#virsh start SomeOSOS-30-5-0-0_rls_743
error: Failed to start domain AlteonOS-30-5-0-0_rls_743
error: unsupported configuration: Per-node memory binding is not
supported with this QEMU
#virsh dumpxml SomeOSOS-30-5-0-0_rls_743
<domain type='kvm'>
<name>SomeOSOS-30-5-0-0_rls_743</name>
<uuid>c0eac6f0-e519-459d-978d-bd5b2da0be45</uuid>
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<vcpu placement='static'>8</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='2'/>
<vcpupin vcpu='2' cpuset='4'/>
<vcpupin vcpu='3' cpuset='6'/>
<vcpupin vcpu='4' cpuset='1'/>
<vcpupin vcpu='5' cpuset='3'/>
<vcpupin vcpu='6' cpuset='5'/>
<vcpupin vcpu='7' cpuset='7'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0-1'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
<memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
<os>
<type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>Westmere</model>
<feature policy='require' name='pdpe1gb'/>
<numa>
<cell id='0' cpus='0,1,2,3' memory='16777216'/>
<cell id='1' cpus='4,5,6,7' memory='16777216'/>
</numa>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source
file='/var/lib/libvirt/images/SomeOSOS-30-5-0-0_rls_743.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</disk>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<interface type='bridge'>
<mac address='52:54:00:4b:e8:9b'/>
<source bridge='mgmt'/>
<model type='e1000'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</source>
<rom bar='off'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
</source>
<rom bar='off'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</hostdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</memballoon>
</devices>
</domain>
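PS: to rule out a config-side misunderstanding, here is a small sketch (Python; the helper name is my own) that just checks whether a domain XML requests per-node binding at all — that is, whether it contains <memnode> elements under <numatune>, which is what the error refers to. My guess, hedged, is that the running QEMU (2.0.0 here) simply predates the memory-backend support this feature needs in newer QEMU.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: per-node memory binding is requested by the
# <numatune>/<memnode> elements; this checks whether a domain XML
# asks for it at all (helper name is hypothetical).
def uses_per_node_binding(domain_xml: str) -> bool:
    root = ET.fromstring(domain_xml)
    return root.find('./numatune/memnode') is not None

sample = """
<domain type='kvm'>
  <numatune>
    <memory mode='strict' nodeset='0-1'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>
</domain>
"""
print(uses_per_node_binding(sample))  # -> True
```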
Regards.
[libvirt] [PATCH 00/11] Fix disk dst/wwn/serial checks in hotplug case
by Peter Krempa
Peter Krempa (11):
qemu: hotplug: Use typecasted switch
qemu: hotplug: Remove unnecessary variable
qemu: hotplug: Break up if/else statement into switch
qemu: hotplug: Use more common 'cleanup' label in
qemuDomainAttachDeviceDiskLive
qemu: hotplug: Extract common code to qemuDomainAttachDeviceDiskLive
conf: Extract code that checks disk serial/wwn conflict
qemu: hotplug: Check duplicate disk serial/wwn on hotplug too
qemu: process: Reorder operations on early VM startup
qemu: process: Extract pre-start checks into a function
tests: Integrate startup checks to qemuxml2argvtest
conf: Move and optimize disk target duplicity checking
src/conf/domain_conf.c | 82 +++++++++++++++++++--------------------------
src/conf/domain_conf.h | 4 ++-
src/libvirt_private.syms | 2 +-
src/qemu/qemu_command.c | 3 --
src/qemu/qemu_hotplug.c | 85 +++++++++++++++++++++--------------------------
src/qemu/qemu_migration.c | 2 +-
src/qemu/qemu_process.c | 57 +++++++++++++++++++++----------
src/qemu/qemu_process.h | 9 ++++-
tests/qemuxml2argvtest.c | 13 ++++++--
9 files changed, 136 insertions(+), 121 deletions(-)
--
2.6.2
[libvirt] Fwd: Virtualization in High-Performance Cloud Computing (VHPC '16)
by VHPC 16
====================================================================
CALL FOR PAPERS
11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16)
held in conjunction with the International Supercomputing Conference - High Performance,
June 19-23, 2016, Frankfurt, Germany.
====================================================================
Date: June 23, 2016
Workshop URL: http://vhpc.org
Lightning talk abstract registration deadline: February 29, 2016
Paper/publishing track abstract registration deadline: March 21, 2016
Paper Submission Deadline: April 25, 2016
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible resource management in modern data centers, and particularly in cloud environments. Cloud providers need to manage complex infrastructures in a seamless fashion to support the highly dynamic and heterogeneous workloads and hosted applications customers deploy. Similarly, HPC environments have been increasingly adopting techniques that enable flexible management of vast computing and networking resources, close to marginal provisioning cost, which is unprecedented in the history of scientific and commercial computing.
Various virtualization technologies contribute to the overall picture in different ways: machine virtualization, with its capability to enable consolidation of multiple underutilized servers with heterogeneous software and operating systems (OSes), and its capability to live-migrate a fully operating virtual machine (VM) with a very short downtime, enables novel and dynamic ways to manage physical servers; OS-level virtualization (i.e., containerization), with its capability to isolate multiple user-space environments and to allow for their coexistence within the same OS kernel, promises to provide many of the advantages of machine virtualization with high levels of responsiveness and performance; I/O virtualization allows physical NICs/HBAs to take traffic from multiple VMs or containers; network virtualization, with its capability to create logical network overlays that are independent of the underlying physical topology and IP addressing, provides the fundamental ground on top of which evolved network services can be realized with an unprecedented level of dynamicity and flexibility; the increasingly adopted paradigm of Software-Defined Networking (SDN) promises to extend this flexibility to the control and data planes of network paths.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions related to virtualization across the entire software stack, with a special focus on the intersection of HPC and the cloud. Topics include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids
- OS-level virtualization including container runtimes (Docker, rkt et al.)
- Lightweight compute node operating systems/VMMs
- Optimizations of virtual machine monitor platforms, hypervisors
- QoS and SLA in hypervisors and network virtualization
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Virtual per job / on-demand clusters and cloud bursting
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Programming models for virtualized environments
- Virtualization in data intensive computing and Big Data processing
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in the cloud
- Topology management and optimization for distributed virtualized applications
- Adaptation of emerging HPC technologies (high performance networks, RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to bring together researchers and industrial practitioners facing the challenges posed by virtualization, in order to foster discussion, collaboration, and mutual exchange of knowledge and experience, enabling research to ultimately provide novel solutions for the virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper presentations, each followed by 10 min discussion sections, plus lightning talks that are limited to 5 minutes. Presentations may be accompanied by interactive demonstrations.
Important Dates
February 29, 2016 - Lightning talk abstract registration
March 21, 2016 - Paper/publishing track abstract registration
April 25, 2016 - Full paper submission
May 30, 2016 - Acceptance notification
June 23, 2016 - Workshop Day
July 25, 2016 - Camera-ready version due
Chair
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Research, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Robert Gardner, University of Chicago, USA
William Gardner, University of Guelph, Canada
Wolfgang Gentzsch, UberCloud, USA
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Kornilios Kourtis, IBM Research, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Paul Mundt, Adaptant, Germany
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Seetharami Seelam, IBM Research, USA
Josh Simons, VMware, USA
Borja Sotomayor, University of Chicago, USA
Dieter Suess, TU Wien, Austria
Craig Stewart, Indiana University, USA
Ananta Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Amit Vasudevan, Carnegie Mellon University, USA
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Gianluigi Zanetti, CRS4, Italy
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two members of the program committee and external reviewers. Submissions should include an abstract, keywords, and the e-mail address of the corresponding author, and must not exceed 10 pages, including tables and figures, at a main font size no smaller than 11 point. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the conference to present the work.
The format must follow the Springer LNCS style. Initial submissions are in PDF; authors of accepted papers will be requested to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=21801
Lightning Talks
Lightning talks are a non-paper track, synoptical in nature, and strictly limited to 5 minutes. They can be used to gain early feedback on ongoing research, for demonstrations, and to present research results, early research ideas, perspectives, and positions of interest to the community. Submit abstracts via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the International Supercomputing Conference - High Performance (ISC) 2016, June 19-23, Frankfurt, Germany.