[libvirt-users] pty pairing host - guest
by Neel Basu
Hi,
I am using the virt-manager GUI. There I can add a `Console Device` through
Add Hardware; a virtio pty can be added WITHOUT restarting the VM, and I get
a pty allocated on the host machine, say `/dev/pts/M`.
My expectation is that this `host:/dev/pts/M` is connected to a
`guest:/dev/ttyN`, so if I open minicom or screen and write to
`host:/dev/pts/M`, it will be propagated to `guest:/dev/ttyN`. Is my
expectation correct?
If it is, how can I find out which guest tty is allocated against this
host pty?
This is what I've tried:
dumpxml returns:
<console type='pty' tty='/dev/pts/1'>
  <source path='/dev/pts/1'/>
  <target type='serial' port='0'/>
  <alias name='serial0'/>
</console>
I expected host:/dev/pts/1 to be paired with guest:/dev/ttyS0, so I opened
screen/cat/echo on both sides, but could not exchange messages.
------
Similar to the console device, I can add a serial device, but that requires
a reboot of the guest VM. I tried that anyway, though I would prefer to do
it without a guest restart.
<serial type='pty'>
  <source path='/dev/pts/1'/>
  <target port='0'/>
  <alias name='serial0'/>
</serial>
Here too I expected host:/dev/pts/1 to be paired with /dev/ttyS0, but
couldn't communicate.
My sole requirement is a tty pair between host and guest that can be
used to communicate with each other. I am using qemu-kvm.
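For what it's worth, `virsh ttyconsole <domain>` prints the host-side pty directly, and the same information can be pulled out of the dumpxml output programmatically. A minimal sketch (the helper name and the wrapper XML below are mine, not libvirt's):

```python
import xml.etree.ElementTree as ET

def console_pty(domain_xml):
    """Return the host-side pty path of the first pty-backed console or
    serial device in a domain XML document, or None if there is none."""
    root = ET.fromstring(domain_xml)
    for dev in root.iter():
        if dev.tag in ("console", "serial") and dev.get("type") == "pty":
            # libvirt reports the allocated pty either as a tty attribute
            # on the device element or in the <source path='...'/> child
            if dev.get("tty"):
                return dev.get("tty")
            src = dev.find("source")
            if src is not None:
                return src.get("path")
    return None

# The snippet from the post, wrapped in a minimal <domain> so it parses:
xml = """<domain type='kvm'><devices>
<console type='pty' tty='/dev/pts/1'>
  <source path='/dev/pts/1'/>
  <target type='serial' port='0'/>
</console>
</devices></domain>"""
print(console_pty(xml))  # /dev/pts/1
```

As far as I know, with `<target type='serial' port='0'/>` the guest side is indeed /dev/ttyS0 (a virtio console target shows up as /dev/hvc0 instead); for data to flow, something has to have the guest side open, e.g. a getty on ttyS0.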
Thank You.
9 years, 9 months
[libvirt-users] how to get disk snapshot size
by Yitao Jiang
Hi guys,
I want to get the size of a disk snapshot, but I found no libvirt command
for it, only qemu-img.
I created two disk snapshots, but nothing reports their size:
[root@cskvm01 qcow2]# qemu-img info /mnt/e6758700-af68-3c06-ade3-53f5f9b93507/e2cf6551-0d2c-4382-a86c-8ba633954ff2
image: /mnt/e6758700-af68-3c06-ade3-53f5f9b93507/e2cf6551-0d2c-4382-a86c-8ba633954ff2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.0G
cluster_size: 65536
Snapshot list:
ID  TAG                                   VM SIZE  DATE                 VM CLOCK
1   4fc42d73-8257-42bd-8807-81700fd3c689  0        2015-03-15 21:05:55  01:05:53.583
2   2fd6aeab-cb26-446d-b1c6-e8d70d33f651  0        2015-03-15 21:50:35  00:00:00.000
After writing data to the disk, I created another snapshot:
[root@cskvm01 qcow2]# qemu-img info /mnt/e6758700-af68-3c06-ade3-53f5f9b93507/e2cf6551-0d2c-4382-a86c-8ba633954ff2
image: /mnt/e6758700-af68-3c06-ade3-53f5f9b93507/e2cf6551-0d2c-4382-a86c-8ba633954ff2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 3.9G
cluster_size: 65536
Snapshot list:
ID  TAG                                   VM SIZE  DATE                 VM CLOCK
1   4fc42d73-8257-42bd-8807-81700fd3c689  0        2015-03-15 21:05:55  01:05:53.583
2   2fd6aeab-cb26-446d-b1c6-e8d70d33f651  0        2015-03-15 21:50:35  00:00:00.000
3   63815565-3a06-4366-a1b3-bfeb9c4a07b4  0        2015-03-15 22:22:43  00:00:00.000
The VM SIZE column still shows 0.
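As far as I know, that is expected: the VM SIZE column records the saved VM state (guest memory) of an internal snapshot, so disk-only snapshots always show 0 there; the on-disk growth only shows up indirectly in the `disk size` line. If you just need the column values, they can be scraped from the text output. A sketch (the helper name and sample tags are mine, and it assumes the usual one-snapshot-per-line layout):

```python
import re

def snapshot_sizes(info_text):
    """Parse the 'Snapshot list' section of `qemu-img info` text output
    and return a {tag: vm_size_string} mapping."""
    sizes = {}
    in_list = False
    for line in info_text.splitlines():
        if line.startswith("Snapshot list:"):
            in_list = True
            continue
        if in_list:
            # Columns: ID  TAG  VM SIZE  DATE  VM CLOCK
            m = re.match(r"\s*(\d+)\s+(\S+)\s+(\S+)\s+"
                         r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})", line)
            if m:
                sizes[m.group(2)] = m.group(3)
    return sizes

sample = """file format: qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2015-03-15 21:05:55   01:05:53.583
2         snap2                  1.5M 2015-03-15 21:50:35   00:00:00.000
"""
print(snapshot_sizes(sample))  # {'snap1': '0', 'snap2': '1.5M'}
```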
Here is my environment
qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
libvirtd (libvirt) 0.10.2
CentOS release 6.5 (Final) 2.6.32-431.el6.x86_64
---
Thanks,
Yitao(依涛 姜)
jiangyt.github.io
9 years, 9 months
[libvirt-users] Processor usage of qemu process.
by Dominique Ramaekers
I have been using libvirt for a while now with some Linux guests installed, and everything has been working great.
I've got a nice new (used) HP virtualization host with 12 x dual core and 48 GB of memory. My Windows servers are getting old, so I decided it was time to take the next step and also virtualise my Windows systems.
Now I've got two Windows guests on my new host:
- A Windows 8.1 machine which runs an Autodesk Job Processor
- A Windows Server 2012 R2 machine which runs Pervasive SQL, Autodesk Vault Server (MSSQL, IIS), an Autodesk license server, ...
The first is constantly using 30% host CPU and the second one 50% host CPU, while both guests report <5% guest CPU...
(Host system load is between 0.1 and 0.3...)
Is it normal for Windows guests to take up 30% to 50% of host CPU resources? The Windows 8.1 machine is actually a simple PC with almost no activity...
Version:
QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.10), Copyright (c) 2003-2008 Fabrice Bellard
9 years, 9 months
[libvirt-users] How to run qemu with root permission from libvirt.
by Edward Young
Hi all,
My platform is qemu 1.5.1 and libvirt 1.2.0. How can I launch qemu with
root permission? Inside qemu I would like to access the hardware
directly.
I checked the running /usr/bin/kvm process; it showed that the
process is owned by root, but it still fails to act with root
permission.
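For the qemu:///system driver, the user that QEMU runs as is controlled by /etc/libvirt/qemu.conf. A minimal sketch (these are real qemu.conf settings, but whether they are sufficient depends on what hardware access is needed):

```ini
# /etc/libvirt/qemu.conf
user = "root"
group = "root"

# By default libvirt drops capabilities from the QEMU process even when
# it runs as root; keep them if direct hardware access is required.
clear_emulator_capabilities = 0
```

libvirtd must be restarted afterwards for the change to take effect. Note also that the device-cgroup whitelist (`cgroup_device_acl` in the same file) may need extending for specific device nodes.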
Thanks!
Ed
9 years, 9 months
[libvirt-users] lxc-enter-namespace support in Python API
by Florian Haas
Hello everyone,
referring back to
https://www.redhat.com/archives/libvirt-users/2013-August/msg00107.html,
in which a user discovered that there is no equivalent to "virsh -c
lxc:/// lxc-enter-namespace" in the Python API. Has that ever changed?
In that thread Daniel suggested that the user file a bug, but that
apparently never happened.
Why am I interested in this? Ansible currently supports a somewhat
limited libvirt_lxc connection driver which uses lxc-enter-namespace
to access LXC containers without SSH
(https://github.com/ansible/ansible/blob/devel/lib/ansible/runner/connecti...).
That connection driver, written in Python like the rest of Ansible,
currently has to jump through subprocess hoops with virsh to enter
the namespace, which is slow and doesn't support pipelining. Being
able to use the native Python API would help greatly. In addition, if
it were to use the Python API instead, the connection plugin could
also be extended to use lxc+ssh:// connection types so Ansible users
could potentially manage container hosts _and_ all their containers
from a single playbook, just so long as they have SSH access to the
host.
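For context, the subprocess hoop looks roughly like this, a sketch only: the helper name and the container name `mycontainer` are illustrative, and the exact virsh argument spelling may vary between libvirt versions.

```python
def lxc_enter_namespace_argv(domain, command, uri="lxc:///"):
    """Build the argv that shelling out to virsh requires, mirroring the
    `virsh -c lxc:/// lxc-enter-namespace` invocation from the thread.
    `command` is the argv to run inside the container's namespaces."""
    return ["virsh", "-c", uri, "lxc-enter-namespace", domain, "--"] + list(command)

argv = lxc_enter_namespace_argv("mycontainer", ["/bin/echo", "hi"])
print(" ".join(argv))
# To actually run it: subprocess.check_call(argv) -- one full virsh
# process per command, which is exactly the overhead a native Python
# binding would avoid.
```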
Does anyone know whether that extension of the API is being / has been
addressed?
Cheers,
Florian
9 years, 9 months
[libvirt-users] vm live storage migration failure.
by Edward Young
Hi all,
I am migrating a live VM with two virtual disk images from one node to
another. Both nodes are on the same LAN and there is no shared
storage.
I run the following command and get the output below:
yyang@node1:~$ sudo virsh migrate --live --persistent --copy-storage-all
--unsafe --verbose vm1 qemu+ssh://192.168.1.3/system
root@192.168.1.3's password:
root@192.168.1.3's password:
root@192.168.1.3's password:
error: Cannot recv data: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).: Connection reset by peer
I'm sure the root password I entered is correct. And when I run the following
command, it works; I can use qemu+ssh to connect to a remote libvirtd
daemon:
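In case it helps: a migration with --copy-storage-all over qemu+ssh opens several SSH connections as root in quick succession (hence the repeated password prompts), and password authentication can fail partway through. Key-based authentication for root on the destination usually avoids this. A sketch of the setup (the IP is taken from the post; run on the source node):

```shell
# The keys must belong to root, since `sudo virsh` connects as root.
sudo test -f /root/.ssh/id_rsa || sudo ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
sudo ssh-copy-id root@192.168.1.3   # authorize root's key on the destination
sudo ssh root@192.168.1.3 true      # should now succeed with no password prompt
```

After that, retrying the `virsh migrate` command should proceed without prompting.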
yyang@node2:~$ sudo virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # connect qemu+ssh://192.168.1.2/system
root@192.168.1.2's password:
virsh # list
Id Name State
----------------------------------------------------
6 vm1 running
7 vm2 running
virsh #
Any suggestions about this issue?
Thanks a lot!
Ed
9 years, 9 months
[libvirt-users] CfP 10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15)
by VHPC 15
=================================================================
CALL FOR PAPERS
10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC
'15)
held in conjunction with Euro-Par 2015, August 24-28, Vienna, Austria
(Springer LNCS)
=================================================================
Date: August 25, 2015
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 22, 2015
CALL FOR PAPERS
Virtualization technologies constitute a key enabling factor for flexible
resource management in modern data centers, cloud environments, and
increasingly in HPC as well. Providers need to dynamically manage complex
infrastructures in a seamless fashion for varying workloads and hosted
applications, independently of the customers deploying software or users
submitting highly dynamic and heterogeneous workloads. Thanks to
virtualization, we have the ability to manage vast computing and networking
resources dynamically and close to the marginal cost of providing the
services, which is unprecedented in the history of scientific and
commercial computing.
Various virtualization technologies contribute to the overall picture in
different ways: machine virtualization, with its capability to enable
consolidation of multiple under-utilized servers with heterogeneous
software and operating systems (OSes), and its capability to live-migrate
a fully operating virtual machine (VM) with a very short downtime, enables
novel and dynamic ways to manage physical servers; OS-level virtualization,
with its capability to isolate multiple user-space environments and to
allow for their co-existence within the same OS kernel, promises to provide
many of the advantages of machine virtualization with high levels of
responsiveness and performance; I/O virtualization allows physical network
adapters to take traffic from multiple VMs; network virtualization, with
its capability to create logical network overlays that are independent of
the underlying physical topology and IP addressing, provides the
fundamental ground on top of which evolved network services can be realized
with an unprecedented level of dynamicity and flexibility. These
technologies have to be inter-mixed and integrated in an intelligent way,
to support workloads that are increasingly demanding in terms of absolute
performance, responsiveness and interactivity, and have to respect
well-specified Service-Level Agreements (SLAs), as needed for
industrial-grade provided services.
Indeed, among emerging and increasingly interesting application domains for
virtualization, we can find big-data application workloads in cloud
infrastructures and interactive and real-time multimedia services in the
cloud, including real-time big-data streaming platforms such as those used
in real-time analytics, which nowadays support a plethora of application
domains. Distributed cloud infrastructures promise to offer unprecedented
responsiveness levels for hosted applications, but that is only possible if
the underlying virtualization technologies can overcome most of the latency
impairments typical of current virtualized infrastructures (e.g., far worse
tail-latency).
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing the
challenges posed by virtualization in order to foster discussion,
collaboration, and mutual exchange of knowledge and experience, enabling
research to ultimately provide novel solutions for the virtualized
computing systems of tomorrow.
The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion section, and
lightning talks limited to 5 minutes. Presentations may be accompanied by
interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC
and grids
- Optimizations of virtual machine monitor platforms, hypervisors and
OS-level virtualization
- Hypervisor and network virtualization QoS and SLAs
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Performance measurement, modelling and monitoring of virtualized/cloud
workloads
- Programming models for virtualized environments
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs
and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in
the cloud
- Topology management and optimization for distributed virtualized
applications
- Cluster provisioning in the cloud and cloud bursting
- Adaptation of emerging HPC technologies (high-performance networks, RDMA,
etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
Important Dates
April 29, 2015 - Abstract registration
May 22, 2015 - Full paper submission
June 19, 2015 - Acceptance notification
October 2, 2015 - Camera-ready version due
August 25, 2015 - Workshop Date
TPC
CHAIR
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational
Science, Japan
PROGRAM COMMITTEE
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Zurich Research Laboratory, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Patrick Dreher, MIT, USA
William Gardner, University of Guelph, Canada
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Iftekhar Hussain, Infinera, USA
Krishna Kant, Temple University, USA
Eiji Kawai, National Institute of Information and Communications
Technology, Japan
Romeo Kinzler, IBM, Switzerland
Kornilios Kourtis, ETH, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
Massimo Lamanna, CERN
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
Helge Meinhard, CERN
Jean-Marc Menaud, Ecole des Mines de Nantes France
Christine Morin, INRIA, France
Amer Qouneh, University of Florida, USA
Seetharami Seelam, IBM Watson Research Center, USA
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Chao-Tung Yang, Tunghai University, Taiwan
PAPER SUBMISSION-PUBLICATION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
http://www.springer.de/comp/lncs/authors.html
Submission Link:
https://easychair.org/conferences/?conf=europar2015ws
GENERAL INFORMATION
The workshop is one day in length and will be held in conjunction with
Euro-Par 2015, 24-28 August, Vienna, Austria
9 years, 9 months
[libvirt-users] Issues with XML validation after upgrade to 1.2.12
by Brian Rak
After we upgraded to 1.2.12, we've been having issues with libvirt: it
complains that our formerly valid guest definitions are now invalid:
error: Failed to start domain XXXX
error: internal error: Cannot instantiate filter due to unresolvable
variables or unavailable list elements: DHCPSERVER
We looked into this, and found that it's the XML validation that's failing:
# xmllint --noout --relaxng "/share/libvirt/schemas/domain.rng" XXXX.xml --recover
Relax-NG validity error : Extra element devices in interleave
test.xml:1: element domain: Relax-NG validity error : Element domain failed to validate content
test.xml fails to validate
And here is a minimal domain XML that reproduces it (it won't boot, but it
shows the issue):
<domain type='kvm' id='65'>
  <name>XXXX</name>
  <uuid>b602b5f2-b9d7-43bd-a949-acc7eeeb9f8f</uuid>
  <memory unit='KiB'>1048576</memory>
  <devices>
    <interface type='bridge'>
      <filterref filter='myfilter'>
        <parameter name='CTRL_IP_LEARNING' value='none'/>
        <parameter name='DHCPSERVER' value='104.156.226.10'/>
        <parameter name='IP' value='104.207.129.11'/>
        <parameter name='IP6_ADDR' value='2001:19f0:300:2102::'/>
        <parameter name='IP6_MASK' value='64'/>
      </filterref>
    </interface>
  </devices>
</domain>
The cause seems to be having multiple <parameter> elements in a
<filterref> block. We applied the following patch to fix it:
diff -ur src_clean/docs/schemas/domaincommon.rng src/docs/schemas/domaincommon.rng
--- src_clean/docs/schemas/domaincommon.rng	2015-01-23 06:46:24.000000000 -0500
+++ src/docs/schemas/domaincommon.rng	2015-03-10 11:30:42.057441342 -0400
@@ -4468,6 +4468,7 @@
         <data type="NCName"/>
       </attribute>
       <optional>
+        <zeroOrMore>
         <element name="parameter">
           <attribute name="name">
             <ref name="filter-param-name"/>
@@ -4476,6 +4477,7 @@
             <ref name="filter-param-value"/>
           </attribute>
         </element>
+        </zeroOrMore>
       </optional>
9 years, 9 months
[libvirt-users] Unable to start sandbox: Kernel module dir /lib/modules/3.18.5-x86_64-linode52/kernel does not exist
by Adam Smith
Dear all,
I have been trying to set up libvirt-sandbox, without success.
I want to use virt-sandbox in order to run untrusted programs in a secure
environment. I had no knowledge of virtualization until a couple of
days ago, so I am probably doing something wrong.
The scenario is the following:
A Linode instance. OSes that I have tried: Ubuntu 14.04, Ubuntu 14, Fedora 21.
Both compiling from source and installing the pre-compiled packages. But I
always reach the same error:
"""
$ virt-sandbox -c qemu:///session /bin/date
Unable to start sandbox: Kernel module dir
/lib/modules/3.18.5-x86_64-linode52/kernel does not exist
"""
I have been told by the guys of Linode that:
"The kernels we use are completely compiled and do not utilize modules. In
addition, the kernels are loaded from the host rather than the /boot
directory"
Any hints to solve this issue? Is the only solution to compile my own
kernel?
Also, if I decide to use a service like Linode, AWS, Digital Ocean...,
then the server that I would be using would already be a virtual server.
Is it a problem to run virt-sandbox within a server which is itself
already virtual?
Thanks a lot!
9 years, 9 months