[libvirt-users] Disable weak ciphers in vnc_tls
by Matthias Fenner
Dear libvirt team,
we are currently in a PCI-DSS certification process and our security
scanner found weak ciphers in the vnc_tls service on our CentOS 6 box.
When I scan with sslscan I can see that SSLv3 and RC4 are accepted:
inf0rmix@tardis:~$ sslscan myhost:16514 | grep Accepted
Accepted SSLv3 256 bits DHE-RSA-AES256-SHA
Accepted SSLv3 256 bits AES256-SHA
Accepted SSLv3 128 bits DHE-RSA-AES128-SHA
Accepted SSLv3 128 bits AES128-SHA
Accepted SSLv3 128 bits RC4-SHA
Accepted SSLv3 128 bits RC4-MD5
Accepted SSLv3 112 bits EDH-RSA-DES-CBC3-SHA
Accepted SSLv3 112 bits DES-CBC3-SHA
Accepted TLSv1 256 bits DHE-RSA-AES256-SHA
Accepted TLSv1 256 bits DHE-RSA-CAMELLIA256-SHA
Accepted TLSv1 256 bits AES256-SHA
Accepted TLSv1 256 bits CAMELLIA256-SHA
Accepted TLSv1 128 bits DHE-RSA-AES128-SHA
Accepted TLSv1 128 bits DHE-RSA-CAMELLIA128-SHA
Accepted TLSv1 128 bits AES128-SHA
Accepted TLSv1 128 bits CAMELLIA128-SHA
Accepted TLSv1 128 bits RC4-SHA
Accepted TLSv1 128 bits RC4-MD5
Accepted TLSv1 112 bits EDH-RSA-DES-CBC3-SHA
Accepted TLSv1 112 bits DES-CBC3-SHA
How do we turn these off and allow only TLS >= 1.1?
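A minimal sketch of one way to do this, assuming a libvirt build new
enough to support the tls_priority option in /etc/libvirt/libvirtd.conf
(the libvirt 0.10.x packages on CentOS 6 predate this option, so that
support is an assumption to verify first):
# In /etc/libvirt/libvirtd.conf, restrict the GnuTLS priority string,
# dropping SSLv3, TLS 1.0, RC4 (ARCFOUR) and 3DES:
#   tls_priority = "NORMAL:-VERS-SSL3.0:-VERS-TLS1.0:-ARCFOUR-128:-3DES-CBC"
service libvirtd restart
sslscan myhost:16514 | grep Accepted   # re-scan to confirm the weak ciphers are gone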
Kind regards,
Matthias Fenner
[libvirt-users] QemuDomainObjEndJob called when libvirtd is started and libvirt insists qemu is using the wrong disk source.
by Matthew Schumacher
List,
I was under the impression that I could restart libvirtd without it
destroying my VMs, but I am not finding that to be true. When I killall
libvirtd my VMs keep running, but when I start libvirtd again it calls
qemuDomainObjEndJob:1542 : Stopping job: modify (async=none
vm=0x7fb8cc0d8510 name=test) and my domain gets whacked.
Any way to disable this behavior?
Also, while I'm at it: due to issues with snapshotting, I ended up with
two domains where libvirt insists on an incorrect disk source, which
breaks many things. For example:
root@wasvirt1:/etc/libvirt# virsh domblklist wasdev
Target Source
------------------------------------------------
vda /glustervol1/vm/wasdev/wasdev.reboot
hdc /dev/sr0
Yet:
root@wasvirt1:/etc/libvirt# lsof | grep wasdev
2235 /usr/bin/qemu-system-x86_64 /var/log/libvirt/qemu/wasdev.log
2235 /usr/bin/qemu-system-x86_64 /var/log/libvirt/qemu/wasdev.log
2235 /usr/bin/qemu-system-x86_64 /glustervol1/vm/wasdev/wasdev.qcow2
The reason is that blockcommit --active --pivot fails on the first
attempt; after a blockjob --abort it succeeds on the second try, but the
recorded disk source is then wrong.
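A rough sketch of how one might check for a leftover job and straighten
out the recorded source. Note that redefining only fixes the persistent
config, not the live state of the running domain, so treat this as an
assumption-laden workaround rather than a proper fix:
virsh blockjob wasdev vda --info        # check for a stale block job
virsh dumpxml wasdev > /tmp/wasdev.xml  # capture the current definition
# edit the vda <source file='...'/> to point at wasdev.qcow2, then:
virsh define /tmp/wasdev.xml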
Help would be greatly appreciated.
Thanks,
schu
[libvirt-users] Virtual Smartcard GPG
by roky@openmailbox.org
Hi. Is it possible to use GPG on the host instead of NSS with virtual
smartcards? Please document how, or add support for it.
Can a virtual smartcard make the host less secure? If there are bugs in
the GPG/NSS backend on the host, can they be abused by untrusted code in
the VM?
[libvirt-users] How does the libvirt deal with the vnet mac address
by wh.h@foxmail.com
Greetings,
if I establish a network for the VM (hypervisor is KVM) using a bridge in virt-manager, a vnet0 device is created. There is a relationship between the MAC addresses of the vnet0 device on the hypervisor and the ethX device in the VM, for example:
the MAC address of vnet0 is FE:54:00:84:E3:62
the MAC address of ethX in the VM is 52:54:00:84:E3:62
The two MAC addresses are identical except for the first octet.
But if I create a tap device manually,
tunctl -t tap0 -u root
brctl addif br0 tap0
and add tap0 to the VM, the MAC addresses of the tap0 device on the hypervisor and the ethX device in the VM are totally different. So libvirt must be doing something with the MAC address; could you please kindly tell me what?
I have found a function in libvirt-0.10.2.8/src/util/virnetdevtap.c:
int virNetDevTapCreateInBridgePort(const char *brname,
                                   char **ifname,
                                   const virMacAddrPtr macaddr,
                                   const unsigned char *vmuuid,
                                   int *tapfd,
                                   virNetDevVPortProfilePtr virtPortProfile,
                                   virNetDevVlanPtr virtVlan,
                                   unsigned int flags)
{
    ...
    virMacAddr tapmac;

    if (virNetDevTapCreate(ifname, tapfd, flags) < 0)
        return -1;

    /* We need to set the interface MAC before adding it
     * to the bridge, because the bridge assumes the lowest
     * MAC of all enslaved interfaces & we don't want it
     * seeing the kernel allocate random MAC for the TAP
     * device before we set our static MAC.
     */
    virMacAddrSet(&tapmac, macaddr);
    if (!(flags & VIR_NETDEV_TAP_CREATE_USE_MAC_FOR_BRIDGE)) {
        if (macaddr->addr[0] == 0xFE) {
            /* For normal use, the tap device's MAC address cannot
             * match the MAC address used by the guest. This results
             * in "received packet on vnetX with own address as source
             * address" error logs from the kernel.
             */
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("Unable to use MAC address starting with "
                             "reserved value 0xFE - '%02X:%02X:%02X:%02X:%02X:%02X' - "),
                           macaddr->addr[0], macaddr->addr[1],
                           macaddr->addr[2], macaddr->addr[3],
                           macaddr->addr[4], macaddr->addr[5]);
            goto error;
        }
        tapmac.addr[0] = 0xFE; /* Discourage bridge from using TAP dev MAC; the first octet is set to 0xFE */
    }

    if (virNetDevSetMAC(*ifname, &tapmac) < 0)
        goto error;
    ...
}
How does the hypervisor establish its ARP table if the MACs of the vnet0 device on the hypervisor and the ethX device in the VM are different?
If I want to create the tap device manually, how should I deal with the MAC address? I have set the MAC addresses of the tap0 device on the hypervisor and the ethX device in the VM in the same way libvirt does, but the VM's network does not work.
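For comparison, a sketch of a manual tap setup that follows the same
order and convention as the libvirt code above (the guest MAC is the
example one from this thread; the key points are that the tap MAC is set
before the device is enslaved to the bridge, and that its first octet is
forced to 0xFE):
tunctl -t tap0 -u root
ip link set dev tap0 address fe:54:00:84:e3:62   # guest MAC with first octet 0xFE
brctl addif br0 tap0                             # enslave only after the MAC is set
ip link set dev tap0 up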
weihua
wh.h(a)foxmail.com
[libvirt-users] Remove Virtual bridge and DNSMASQ
by mimicafe@gmail.com
I am running KVM virtualization with libvirtd (libvirt) 0.10.2 in bridged
network mode; however, I still have the default virtual network
bridge/interfaces and dnsmasq on the host. What I am trying to understand
is whether dnsmasq and the virtual network interfaces (virbr0, vnet0 and
vnet1) still play any role. If not, can I remove them?
On most virtual hosts I see they are left around even when a network bridge
has been manually set up and the default virtual network bridge/interfaces
and dnsmasq no longer serve any purpose.
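If nothing depends on the default NAT network, one way to remove it (and
the dnsmasq instance it spawns) is sketched below. Note that vnet0/vnet1
are the tap devices of running guests, not part of the default network,
so they only disappear when those guests stop or are re-attached
elsewhere:
virsh net-list --all                  # confirm which networks exist
virsh net-destroy default             # stop virbr0 and its dnsmasq
virsh net-autostart default --disable # don't bring it back on reboot
virsh net-undefine default            # remove the definition entirely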
Thanks
Mimi
[bigfoot@localhost ~]$ ifconfig
br0 Link encap:Ethernet HWaddr E2:59:52:12:34:4C
inet addr:135.17.1XX.XX Bcast:135.17.1XX.XX Mask:255.255.255.128
inet6 addr: fe80::ea39:35ff:fe12:948e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:564577 errors:0 dropped:0 overruns:0 frame:0
TX packets:303315 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:566465184 (540.2 MiB) TX bytes:83532692 (79.6 MiB)
eth0 Link encap:Ethernet HWaddr E2:59:52:12:34:4C
inet6 addr: fe80::ea39:35ff:fe12:948e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:806142 errors:0 dropped:0 overruns:0 frame:0
TX packets:450977 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:711888772 (678.9 MiB) TX bytes:175377460 (167.2 MiB)
Interrupt:28 Memory:f7100000-f7120000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:276793 errors:0 dropped:0 overruns:0 frame:0
TX packets:276793 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:551259266 (525.7 MiB) TX bytes:551259266 (525.7 MiB)
virbr0 Link encap:Ethernet HWaddr 52:54:00:2A:1B:AF
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
vnet0 Link encap:Ethernet HWaddr FE:54:00:A0:8D:50
inet6 addr: fe80::fc54:ff:fea0:8d50/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:124812 errors:0 dropped:0 overruns:0 frame:0
TX packets:343055 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:55075362 (52.5 MiB) TX bytes:138488556 (132.0 MiB)
vnet1 Link encap:Ethernet HWaddr FE:54:00:71:A8:78
inet6 addr: fe80::fc54:ff:fe71:a878/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17834 errors:0 dropped:0 overruns:0 frame:0
TX packets:318816 errors:0 dropped:0 overruns:1 carrier:0
collisions:0 txqueuelen:500
RX bytes:2279102 (2.1 MiB) TX bytes:20534286 (19.5 MiB)
[bigfoot@localhost ~]$
[libvirt-users] CfP Virtualization in High-Performance Cloud Computing Workshop (VHPC '15)
by VHPC 15
=================================================================
CALL FOR PAPERS
10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC
'15)
held in conjunction with Euro-Par 2015, August 24-28, Vienna, Austria
(Springer LNCS)
=================================================================
Date: August 25, 2015
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 22, 2015
CALL FOR PAPERS
Virtualization technologies constitute a key enabling factor for flexible
resource management in modern data centers, cloud environments, and
increasingly in HPC as well. Providers need to dynamically manage complex
infrastructures in a seamless fashion for varying workloads and hosted
applications, independently of the customers deploying software or users
submitting highly dynamic and heterogeneous workloads. Thanks to
virtualization, we have the ability to manage vast computing and networking
resources dynamically and close to the marginal cost of providing the
services, which is unprecedented in the history of scientific and
commercial computing.
OS-level virtualization (Docker et al.), with its capability to isolate
multiple user-space environments, allows for their co-existence within the
same OS kernel. It promises to provide many of the advantages of machine
virtualization with high levels of responsiveness and performance; coupled
with lightweight OSs, it forms a potent architecture that promises to
become a mainstream environment for HPC workloads.
Machine virtualization, with its capability to enable consolidation of
multiple under-utilized servers with heterogeneous software and operating
systems (OSes), and its capability to live-migrate a fully operating
virtual machine (VM) with a very short downtime, enables novel and dynamic
ways to manage physical servers. I/O virtualization allows physical network
adapters to take traffic from multiple VMs; network virtualization, with
its capability to create logical network overlays that are independent of
the underlying physical topology and IP addressing, provides the
fundamental ground on top of which evolved network services can be realized
with an unprecedented level of dynamicity and flexibility. These
technologies have to be inter-mixed and integrated in an intelligent way to
support workloads that are increasingly demanding in terms of absolute
performance, responsiveness and interactivity, and that have to respect
well-specified Service-Level Agreements (SLAs), as needed for
industrial-grade provided services.
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing the
challenges posed by virtualization in order to foster discussion,
collaboration, and mutual exchange of knowledge and experience, enabling
research to ultimately provide novel solutions for the virtualized
computing systems of tomorrow.
The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion section, and
lightning talks limited to 5 minutes. Presentations may be accompanied by
interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids
- OS-level virtualization including container runtimes (Docker, rkt et al.)
- Lightweight compute node operating systems/VMMs
- Optimizations of virtual machine monitor platforms and hypervisors
- Hypervisor and network virtualization QoS and SLAs
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Programming models for virtualized environments
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in the cloud
- Topology management and optimization for distributed virtualized applications
- Cluster provisioning in the cloud and cloud bursting
- Adaptation of emerging HPC technologies (high performance networks, RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
Important Dates
April 29, 2015 - Abstract registration
May 22, 2015 - Full paper submission
June 19, 2015 - Acceptance notification
October 2, 2015 - Camera-ready version due
August 25, 2015 - Workshop Date
TPC
CHAIR
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
PROGRAM COMMITTEE
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Zurich Research Laboratory, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Patrick Dreher, MIT, USA
William Gardner, University of Guelph, Canada
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Iftekhar Hussain, Infinera, USA
Krishna Kant, Temple University, USA
Eiji Kawai, National Institute of Information and Communications Technology, Japan
Romeo Kinzler, IBM, Switzerland
Kornilios Kourtis, ETH, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
Massimo Lamanna, CERN
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
William Magato, University of Cincinnati, USA
Helge Meinhard, CERN
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Christine Morin, INRIA, France
Amer Qouneh, University of Florida, USA
Seetharami Seelam, IBM Watson Research Center, USA
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Chao-Tung Yang, Tunghai University, Taiwan
PAPER SUBMISSION-PUBLICATION
Papers submitted to the workshop will be reviewed by at least two members
of the program committee and external reviewers. Submissions should include
an abstract, keywords, and the e-mail address of the corresponding author,
and must not exceed 10 pages, including tables and figures, at a main font
size no smaller than 11 point. Submission of a paper should be regarded as
a commitment that, should the paper be accepted, at least one of the
authors will register and attend the conference to present the work.
Accepted papers will be published in the Springer LNCS series; the format
must follow the Springer LNCS style. Initial submissions are in PDF;
authors of accepted papers will be requested to provide source files.
Format Guidelines:
http://www.springer.de/comp/lncs/authors.html
Submission Link:
https://easychair.org/conferences/?conf=europar2015ws
GENERAL INFORMATION
The workshop is one day in length and will be held in conjunction with
Euro-Par 2015, 24-28 August, Vienna, Austria.
[libvirt-users] Mounting directory as readonly within LXC
by Harish Vishwanath
Hello
Is there a way to mount a directory read-only when using LXC with libvirt?
Something like:
<filesystem type="mount">
<source dir="/sw/py27/python2.7_x86_64"/>
<target dir="/opt/py27"/>
<readonly/>
</filesystem>
The documentation says <readonly/> only works with KVM/QEMU.
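One possible workaround, assuming the goal is simply that the guest
cannot write: make the source read-only on the host with a read-only
bind mount and export that mount point instead (the /export/py27-ro path
is just an example):
mkdir -p /export/py27-ro
mount --bind /sw/py27/python2.7_x86_64 /export/py27-ro
mount -o remount,ro,bind /export/py27-ro   # make the bind mount read-only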
Regards,
Harish
[libvirt-users] QEMU interface type=ethernet
by Brian Rak
With libvirt under modern kernels, you can't use <interface
type='ethernet'> unless QEMU is running as root.
Running QEMU as root is not ideal, but I was able to track the issue
down to this Linux change:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id...
Which means that if you're seeing errors like this:
2015-03-02T18:00:51.243477Z qemu-kvm: -netdev
tap,script=/tmp/vnet380622.sh,id=hostnet1: could not open /dev/net/tun:
Operation not permitted
2015-03-02T18:00:51.243518Z qemu-kvm: -netdev
tap,script=/tmp/vnet380622.sh,id=hostnet1: Device 'tap' could not be
initialized
They can be resolved like this:
1) Edit /etc/libvirt/qemu.conf, and add "/dev/net/tun" to the
cgroup_device_acl option
2) Run: setcap cap_net_admin+eip /bin/qemu-system-x86_64
This will give QEMU CAP_NET_ADMIN when it runs. Make sure you review
`man capabilities` to see what this capability actually grants QEMU.
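Concretely, the two steps might look like this (paths assumed from the
qemu.conf defaults; verify the capability took effect before relying on
it):
# 1) in /etc/libvirt/qemu.conf, extend the device ACL:
#      cgroup_device_acl = [ ..., "/dev/net/tun" ]
# 2) grant the capability and verify it:
setcap cap_net_admin+eip /bin/qemu-system-x86_64
getcap /bin/qemu-system-x86_64   # should report cap_net_admin+eip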
The downside here is that in the event a guest somehow breaks out of
QEMU, CAP_NET_ADMIN gives it a bunch of scary permissions that could
result in you having a seriously bad day (it's enough to MITM all of the
machine's traffic, which could easily lead to a compromise).
It looks to me like libvirt already has the ability to create tap
devices and pass them to QEMU (src/util/virnetdevtap.c,
virNetDevTapCreateInBridgePort); however, you need to actually be using a
bridged network for that path. There is no way to have libvirt just create
a tap device and leave the rest to user-defined scripts.
I don't think I have the necessary knowledge to add that feature in a
generic way, but it seems like it would be pretty handy. I'll probably
just work around it by removing the virNetDevBridgeAddPort call from our
version of libvirt.