[libvirt-users] How to achieve higher priority of a process in a lxc on CentOS machine
by WANG Cheng D
Dear all,
I use libvirt LXC to host a real-time application with a priority of -50. The host OS is Fedora 21 and the container OS is also Fedora 21. Everything works fine; the Linux "top" command shows the priority as -50.
Then I used OpenStack to manage the LXC containers. The compute node OS is CentOS 7.2 and the container is Fedora. After the real-time application is started in the container, the top command on the host shows the application with a priority of 20, which is not what I expect. If the container is Ubuntu, the priority of the real-time application is also 20.
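For reference, a negative value in top's PR column corresponds to a real-time scheduling class, so one way to see which policy the process actually received is chrt (the PID 12345 below is a placeholder for the application's PID):
chrt -p 12345                 # prints the current policy and rt priority
chrt -f -p 50 12345           # one way to re-apply SCHED_FIFO at priority 50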
Can anybody help me on this?
Cheng Wang
7 years, 10 months
[libvirt-users] Trouble moving OVMF guest to new host
by Bashe, Joe
Hello,
I recently had to reinstall my operating system on my computer. I made a
backup of the entire partition beforehand onto an external drive. Now I am
trying to import a VM from that backup onto the newly installed system.
What I've done so far:
- copied over the qcow2 disk image
- copied over the XML config file
- copied over the OVMF files under /usr/share/edk2.git/ovmf-x64/
I am getting this error whenever I try to boot the VM:
Error starting domain: operation failed: unable to find any master var
store for loader: /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd
The file /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd does exist.
Where would the "master var store" be located?
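For anyone else hitting this: libvirt finds the vars template that matches a loader through the nvram list in /etc/libvirt/qemu.conf, so a sketch of the usual fix (assuming the edk2.git nightly packages, which ship a matching OVMF_VARS-pure-efi.fd alongside the code image) is:
nvram = [
    "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd"
]
followed by a libvirtd restart. The guest's own writable copy of the var store normally lives under /var/lib/libvirt/qemu/nvram/, so copying that directory from the backup as well preserves the existing UEFI boot entries.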
Thank you,
Joe
--
Joseph Bashe
Technical Director
Bashe Development
+1 (323) 999-1731
7 years, 10 months
[libvirt-users] accessing a USB storage device through an lxc container.
by ravi mh
Hi all,
I am not able to access a USB storage device in an LXC container.
I have tried to pass the USB device through by its product and vendor IDs,
but the device never shows up as mountable in the container's file system.
However, it is visible as a character device in the container at the
expected location. There is no ACL issue, as the capability restrictions
have been dropped.
Has anyone successfully mounted a storage device in a libvirt container?
Having gone through the libvirt documentation, I couldn't find further
information on enabling USB as a storage device. Any pointers would be
useful.
Host OS:
IR800-GOS-1:~# lsusb
Bus 001 Device 003: ID 8644:800b
Bus 001 Device 001: ID 1d6b:0002
Bus 002 Device 001: ID 1d6b:0001
IR800-GOS-1:~#
Lxc app container:
root@ir800-lxc:/mnt/usb# ls -la /dev/bus/usb/001/003
crwx------ 1 root root 189, 2 Feb 3 01:44 /dev/bus/usb/001/003
----------libvirt xml snippet ------------
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x8644'/>
<product id='0x800b'/>
</source>
</hostdev>
-----------------------------
When the mode is changed from 'subsystem' to 'capabilities', virsh reports
an error validating the XML against the schema.
---------------while changing the hostdev mode='capabilities'----------
virsh # edit n01_1
error: XML document failed to validate against schema: Unable to validate
doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
Failed. Try again? [y,n,i,f,?]:
error: XML document failed to validate against schema: Unable to validate
doc against /usr/share/libvirt/schemas/domain.rng
--------------------------------------------
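For what it's worth, the container-only form intended for exposing a host block device inside an LXC guest is <hostdev mode='capabilities' type='storage'> (the /dev/sdb path below is an assumption; the USB stick must first appear as a block device on the host):
<hostdev mode='capabilities' type='storage'>
  <source>
    <block>/dev/sdb</block>
  </source>
</hostdev>
This mode is only valid for LXC domains, and some virsh builds reject it during schema validation regardless; answering 'i' (ignore) at the "Try again? [y,n,i,f,?]" prompt above saves the XML despite the failed validation.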
Regards,
*Ravi*
7 years, 10 months
[libvirt-users] Real-time threads don't work in libvirt containers under CentOS 7.3
by Peter Steele
We've been using libvirt-based containers under CentOS 7 and everything
has been working fine. One application we run in our containers is ctdb,
which uses SCHED_FIFO (real time) threads. This has been working without
problems until our recent upgrade to CentOS 7.3. For some reason, ctdb
is no longer able to create real-time threads, and I've tried a simple
program myself that confirms this. The same program works fine on the
hypervisor, so I know the kernel supports real-time scheduling. Does anyone
know what may have changed in CentOS 7.3 that breaks real-time threads in
libvirt containers?
This is the simple test program I created to verify that real-time thread
creation fails. The same program works in a libvirt container under
CentOS 7.2.
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>  /* sleep() */

pthread_t test_thread;

void *test(void *arg)
{
    printf("Starting thread\n");
    sleep(1);
    printf("Thread complete\n");
    return NULL;
}

int main(int argc, char *argv[])
{
    int rc;
    struct sched_param tsparam;
    pthread_attr_t tattr;
    printf("Starting main\n");
    memset(&tsparam, 0, sizeof(tsparam));
    pthread_attr_init(&tattr);
    /* Take the policy from the attributes instead of inheriting it */
    pthread_attr_setinheritsched(&tattr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&tattr, SCHED_FIFO);
    tsparam.sched_priority = sched_get_priority_max(SCHED_FIFO) - 7;
    pthread_attr_setschedparam(&tattr, &tsparam);
    if ((rc = pthread_create(&test_thread, &tattr, test, NULL)) != 0) {
        /* pthread_create returns the error number; EPERM means no RT permission */
        printf("Unable to start rt thread: %s\n", strerror(rc));
    } else {
        /* Join so main doesn't exit before the thread has run */
        pthread_join(test_thread, NULL);
    }
    return 0;
}
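A guess at the cause, offered without verification: when the kernel is built with CONFIG_RT_GROUP_SCHED, every CPU cgroup starts with a real-time budget of zero, and creating a SCHED_FIFO thread inside such a cgroup fails with EPERM until some cpu.rt_runtime_us is granted down the hierarchy. The machine.slice path below is an assumption; check the actual cgroup of the container's init process first:
# on the host: find the container's cgroup and check its RT budget
grep cpu /proc/<container-init-pid>/cgroup
cat /sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us
# grant a budget (950 ms of each 1 s period) at each level of the hierarchy
echo 950000 > /sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us
If the budget was zero before and the test program succeeds afterwards, the regression is in how the 7.3 userspace sets up the container's cgroups rather than in the kernel itself.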
7 years, 10 months
[libvirt-users] Communicating between libvirt containers on separate hosts
by Peter Steele
I have a group of CentOS 7.2 servers running as VMs under VMware's ESXi
6.0. These all reside on the same subnet and we have no problem
communicating between the different virtual servers. In addition, each
of these servers run a number of libvirt-based LXC containers, also
based on CentOS 7.2. The hosts can communicate with their containers
without issues and the containers on a given server can communicate with
each other. However, containers hosted on two different servers cannot
communicate with each other or with other servers.
If we duplicate this setup using KVM based VMs instead of ESXi VMs
everything works fine--there is no problem communicating between LXC
containers regardless of which VM hosts them.
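One plausible culprit under ESXi, stated as an assumption rather than a confirmed diagnosis: frames from the containers carry MAC addresses that differ from the VM's own vNIC MAC, and a vSwitch drops such frames unless its security policy permits them. A sketch of the usual change on a standard vSwitch (the vSwitch0 name is a placeholder, and option spellings can vary by ESXi version):
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true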
I assume this is not a libvirt issue, but if anyone has encountered this
problem running libvirt containers under a VMware environment and has
found a solution, I'd appreciate hearing from you.
Thanks very much,
Peter
7 years, 10 months
[libvirt-users] How to pin a libvirt lxc to a specific physical CPU core in openstack?
by WANG Cheng D
Dear all,
In my application, real-time performance is very important, so I use 4 containers, with only one application running in each container and one physical CPU core dedicated to each LXC. I must know which container is hosted on which CPU core; that is, I need to pin a specific LXC to a specific CPU core.
I can achieve this on a native Linux system with the following XML snippet (which pins a libvirt LXC to core #3 of the machine). It works fine.
<vcpu placement='static' cpuset='3'>1</vcpu>
Now I want to do the same on an OpenStack platform. I carefully read the instructions on CPU pinning in Nova and found that an OpenStack flavor only supports pinning an LXC instance onto a set of CPU cores; I cannot dedicate a specific pCPU to an LXC. After the LXC is started, "dumpxml" shows only the following line, so I don't even know which CPU core the LXC is pinned to.
<vcpu placement='static' >1</vcpu>
I tried to edit the libvirt.xml for the LXC, which is located at nova/instances/#INSTANCE ID#/libvirt.xml, and manually inserted cpuset='3', but this did not take effect: after the LXC is restarted, libvirt.xml is regenerated and the modification is gone.
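For reference, pinning can also be inspected and changed on a running domain through virsh, bypassing the Nova-generated XML (the domain name instance-00000001 is a placeholder, and this assumes the LXC driver accepts vcpupin):
virsh -c lxc:/// vcpupin instance-00000001        # show the current pinning
virsh -c lxc:/// vcpupin instance-00000001 0 3    # pin vCPU 0 to pCPU 3
As with editing libvirt.xml by hand, anything done behind Nova's back may be reverted the next time Nova rebuilds the instance.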
I googled the question but found no solution. I hope the libvirt people can help me with this.
Thank you in advance.
Cheng
7 years, 10 months
[libvirt-users] Libvirt and Virt-manager compatibility
by abhishek jain
Hi,
I am using libvirt (1.2.2) on RHEL 6 and want to use a compatible virt-manager with it.
When I tried 1.4.0 and 1.3.0, there was an installation issue on RHEL 6 with gtk3. Is there any virt-manager version that works with libvirt 1.2.2 on RHEL 6?
Also, is there a compatibility matrix for libvirt and virt-manager versions?
Thanks & Regards,
Abhishek
From: "libvirt-users-request(a)redhat.com" <libvirt-users-request(a)redhat.com>
To: libvirt-users(a)redhat.com
Sent: Wednesday, 1 February 2017 3:12 AM
Subject: libvirt-users Digest, Vol 85, Issue 24
Date: Tue, 31 Jan 2017 22:42:50 +0100
From: VHPC 17 <vhpc.dist(a)gmail.com>
To: libvirt-users(a)redhat.com
Subject: [libvirt-users] CfP 12th Virtualization in High-Performance
Cloud Computing Workshop (VHPC '17)
====================================================================
CALL FOR PAPERS
12th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '17)
held in conjunction with the International Supercomputing Conference - High
Performance,
June 18-22, 2017, Frankfurt, Germany.
(Springer LNCS Proceedings)
====================================================================
Date: June 22, 2017
Workshop URL: http://vhpc.org
Abstract Submission Deadline: February 28, 2017
Paper Submission Deadline: April 25, 2017 (Springer LNCS)
Abstract/Paper Submission Link: https://edas.info/newPaper.php?c=23179
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible
resource management
in modern data centers, and particularly in cloud environments. Cloud
providers need to
manage complex infrastructures in a seamless fashion to support the highly
dynamic and
heterogeneous workloads and hosted applications customers deploy.
Similarly, HPC
environments have been increasingly adopting techniques that enable
flexible management of
vast computing and networking resources, close to marginal provisioning
cost, which is
unprecedented in the history of scientific and commercial computing.
Various virtualization technologies contribute to the overall picture in
different ways: machine
virtualization, with its capability to enable consolidation of multiple
under-utilized servers with
heterogeneous software and operating systems (OSes), and its capability to
live-migrate a
fully operating virtual machine (VM) with a very short downtime, enables
novel and dynamic
ways to manage physical servers; OS-level virtualization (i.e.,
containerization), with its
capability to isolate multiple user-space environments and to allow for
their co-existence within
the same OS kernel, promises to provide many of the advantages of machine
virtualization
with high levels of responsiveness and performance; I/O Virtualization
allows physical
NICs/HBAs to take traffic from multiple VMs or containers; network
virtualization, with its
capability to create logical network overlays that are independent of the
underlying physical
topology and IP addressing, provides the fundamental ground on top of which
evolved
network services can be realized with an unprecedented level of dynamicity
and flexibility; the
increasingly adopted paradigm of Software-Defined Networking (SDN)
promises to extend
this flexibility to the control and data planes of network paths.
Publication
Accepted papers will be published in a Springer LNCS proceedings volume.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions
related to
virtualization across the entire software stack with a special focus on the
intersection of HPC
and the cloud.
Major Topics
- Virtualization in supercomputing environments, HPC clusters, HPC in the
cloud and grids
- OS-level virtualization and containers (Docker, rkt, Singularity,
Shifter, i.a.)
- Lightweight/specialized operating systems, unikernels
- Optimizations of virtual machine monitor platforms and hypervisors
- Hypervisor support for heterogeneous resources (GPUs, co-processors,
FPGAs, etc.)
- Virtualization support for emerging memory technologies
- Virtualization in enterprise HPC and microvisors
- Software defined networks and network virtualization
- Management, deployment of virtualized environments and orchestration
(Kubernetes i.a.)
- Workflow-pipeline container-based composability
- Performance measurement, modelling and monitoring of virtualized/cloud
workloads
- Virtualization in data intensive computing and Big Data processing - HPC
convergence
- Adaptation of HPC technologies in the cloud (high performance networks,
RDMA, etc.)
- ARM-based hypervisors, ARM virtualization extensions
- I/O virtualization and cloud based storage systems
- GPU, FPGA and many-core accelerator virtualization
- Job scheduling/control/policy and container placement in virtualized
environments
- Cloud reliability, fault-tolerance and high-availability
- QoS and SLA in virtualized environments
- IaaS platforms, cloud frameworks and APIs
- Large-scale virtualization in domains such as finance and government
- Energy-efficient and power-aware virtualization
- Container security
- Configuration management tools for containers (including CFEngine,
Puppet, i.a.)
- Emerging topics including multi-kernel approaches and NUMA in hypervisors
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to
bring together researchers and industrial practitioners facing the
challenges
posed by virtualization in order to foster discussion, collaboration,
mutual exchange
of knowledge and experience, enabling research to ultimately provide novel
solutions for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper
presentations, each
followed by 10 min discussion sections, plus lightning talks that are
limited to 5 minutes.
Presentations may be accompanied by interactive demonstrations.
Important Dates
February 28, 2017 - Abstract Submission Deadline
April 25, 2017 - Paper submission deadline
May 30, 2017 - Acceptance notification
June 22, 2017 - Workshop Day
June 25, 2017 - Camera-ready version due
Chair
Michael Alexander (chair), scaledinfra technologies, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational
Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Jakob Blomer, CERN, Europe
Ron Brightwell, Sandia National Laboratories, USA
Eduardo César, Universidad Autonoma de Barcelona, Spain
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Maria Girone, CERN, Europe
Kyle Hale, Northwestern University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Che-Rung Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Thomas Ryd, CFEngine, Norway
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Craig Stewart, Indiana University, USA
Anata Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work. Accepted papers will be published in a
Springer LNCS volume.
The format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=23179
Lightning Talks
Lightning Talks are non-paper track, synoptical in nature and are strictly
limited to 5 minutes.
They can be used to gain early feedback on ongoing research, for
demonstrations, to present
research results, early research ideas, perspectives and positions of
interest to the community.
Submit abstract via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the
International
Supercomputing Conference - High Performance (ISC) 2017, June 18-22,
Frankfurt,
Germany.
7 years, 10 months
[libvirt-users] virt-p2v - Windows 10 guest hangs at boot after successful P2V
by JT Edwards
Hi all,
I successfully virt-p2v'ed a Windows 10 laptop to my CentOS 7.3 instance
running KVM. However, on boot, the guest hangs. Is there a registry fix
that is needed after the P2V is done? Here is what is in the guest's
logfile:
2017-01-08 03:20:37.508+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.2 (CentOS BuildSystem <http://bugs.centos.org>,
2016-12-06-19:53:38, c1bm.rdu2.centos.org), qemu version: 1.5.3
(qemu-kvm-1.5.3-126.el7), hostname: torden40.me.org
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name win10 -S -machine
pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu
Conroe,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -realtime
mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid
f72e7ad5-98d4-44ab-aa85-347fe232b4e5 -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-win10/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=discard
-no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global
PIIX4_PM.disable_s4=1 -boot strict=on -device
ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device
ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/home/tstrike39/Virtuals/tordenmobile.img,format=qcow2,if=none,id=drive-ide0-0-0
-device
ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-netdev tap,fd=26,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:d6:c8:67,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
spicevmc,id=charchannel0,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
-device usb-tablet,id=input0,bus=usb.0,port=1 -spice
port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on
-vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=67108864 -global qxl-vga.vgamem_mb=16 -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev
spicevmc,id=charredir0,name=usbredir -device
usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev
spicevmc,id=charredir1,name=usbredir -device
usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
char device redirected to /dev/pts/2 (label charserial0)
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.140000 ms, bitrate
14222222222 bps (13563.368055 Mbps)
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
Below is the XML of my migrated instance:
<?xml version='1.0' encoding='utf-8'?>
<domain type='kvm'>
<!-- generated by virt-v2v 1.32.7rhel=7,release=3.el7.centos,libvirt -->
<name>localhost</name>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64'>hvm</type>
</os>
<features>
<acpi/>
<apic/>
</features>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/home/tstrike39/Virtuals/localhost-sda'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<mac address='f0:de:f1:08:9c:c4'/>
</interface>
<video>
<model type='cirrus' vram='9216' heads='1'/>
</video>
<graphics type='vnc' autoport='yes' port='-1'/>
<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
<console type='pty'/>
</devices>
</domain>
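One guess, noting that the log and the XML above appear to come from different runs (the log attaches the disk as ide-hd, the XML as virtio): a Windows 10 guest cannot boot from a virtio disk unless the virtio-win storage drivers were installed before or during conversion. A quick way to test that theory is to put the disk on IDE, which needs no extra drivers:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/tstrike39/Virtuals/localhost-sda'/>
  <target dev='hda' bus='ide'/>
</disk>
If the guest boots that way, installing the virtio-win drivers inside Windows and switching back to bus='virtio' restores the faster I/O path.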
Any help would be appreciated!
7 years, 10 months