[libvirt-users] Fail to convert LXC container configuration into a domain XML fragment
by Du Jun
Hi, all,
I used lxc-tools to create a Linux container. I am trying to transform the LXC
container configuration into a domain XML fragment using the following
command:
*$ virsh -c lxc:/// domxml-from-native lxc /var/lib/lxc/my_conatiner/config*
However, I get the error message:
error: this function is not supported by the connection driver:
virConnectDomainXMLFromNative
It seems that *virConnectDomainXMLFromNative* is not supported by the libvirt
LXC driver. However, I found the command in the official documentation of
libvirt. Besides, my libvirt version is 1.2.2 (the latest version).
I wonder whether I can convert an LXC container configuration into a domain
XML fragment using *domxml-from-native* at all? If I can't, how can I translate
it?
PS:
I care about the rules in my LXC container configuration file, such as:
*lxc.aa_profile = unconfined*
*lxc.cgroup.devices.deny = a*
*# Allow any mknod (but not using the node)*
*lxc.cgroup.devices.allow = c *:* m*
*lxc.cgroup.devices.allow = b *:* m*
*lxc.cgroup.devices.allow = b 7:* rwm*
I just don't know how to express the rules above in libvirt's XML format.
Please help! Thanks!
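For what it's worth, a common workaround when domxml-from-native is unavailable is to write the LXC domain XML by hand. Below is a minimal sketch (names and paths are placeholders, not taken from the poster's system); as far as I know, libvirt's LXC driver manages its own cgroup device whitelist, so the lxc.cgroup.devices.* lines have no direct XML equivalent:

```xml
<!-- minimal hand-written LXC domain sketch; name and paths are placeholders -->
<domain type='lxc'>
  <name>my_container</name>
  <memory unit='KiB'>524288</memory>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/my_container/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>
```

It can be loaded with `virsh -c lxc:/// define` and adjusted from there.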
--
Best Regards,
Du Jun
10 years, 9 months
[libvirt-users] Simple Networkfilter not working as expected.
by Matthias Babisch
Hello People.
I have produced a very simple network filter that does not work as I would
expect. Perhaps one of you knows what I did wrong?
I made this little filter:
<filter name='my-test-no-ip-spoofing' priority='-700'>
  <rule action='drop' direction='out' priority='-999'>
    <all match='no' srcipaddr='$IP'/>
  </rule>
</filter>
I attached it directly to a VM (and defined an IP address on the
network interface there). It then produced iptables rules that look like
this:
Chain FI-vnetnn (1 references)
 pkts bytes target prot opt in  out source     destination
    0     0 DROP   all  --  *   *   ! IP       0.0.0.0/0
(This is the rule governing the input via the virtual device into the
bridge; it is as expected.)
Chain HI-vnetnn (1 references)
 pkts bytes target prot opt in  out source     destination
    0     0 DROP   all  --  *   *   ! IP       0.0.0.0/0
(This is the rule governing the input to the host; I would expect this too.)
Chain FO-vnetnn (1 references)
 pkts bytes target prot opt in  out source     destination
    0     0 DROP   all  --  *   *   0.0.0.0/0  ! IP
This is the rule governing the output via the virtual device from the
bridge (i.e. packets coming from the network).
I specifically asked to filter outgoing traffic. This last one I don't
understand. Perhaps somebody has a hint?
On the other hand, this filter works as expected, with no rule appearing in "FO-vnetnn":
<filter name='my-no-mac-spoofing' priority='-800'>
  <rule action='drop' direction='out'>
    <all match='no' srcmacaddr='$MAC'/>
  </rule>
</filter>
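For comparison, the no-ip-spoofing filter that ships with libvirt matches with an <ip> element rather than <all>, which ties the rule to the ipv4 chain. A variant of the first filter along those lines (the name is mine, and this is an untested sketch, not a confirmed fix):

```xml
<filter name='my-test-no-ip-spoofing' chain='ipv4' priority='-700'>
  <rule action='drop' direction='out' priority='-999'>
    <ip match='no' srcipaddr='$IP'/>
  </rule>
</filter>
```

One possible reading of the unexpected FO-vnetnn rule (a guess): for rules without an explicit state match, libvirt also instantiates a mirrored rule for the reverse path so reply traffic is filtered consistently.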
I used libvirt with qemu on Ubuntu 13.10 (version 1.1.1-0ubuntu8.5).
I am grateful for any helpful comments.
Sincerely
Matthias Babisch
IT/Organisation
*b+m Informatik AG*
Rotenhofer Weg 20
24109 Melsdorf
T +49 4340/404-1444
F +49 4340/404-111
M +49 160/8866426
matthias.babisch(a)bmiag.de
Current information at www.bmiag.de
b+m Informatik AG is a company of the Allgeier Group
(www.allgeier-holding.de)
Chairman of the Supervisory Board: Dr. Marcus Goedsche
Executive Board: Dipl.-Ing. Frank Mielke
District Court Kiel, HRB 5526
10 years, 9 months
[libvirt-users] cgroup for VM - does it work properly?
by Martin Pavlásek
Hi
I tried to restrict the CPU usage of a running VM via cpu.shares (i.e. set to
10 from the original 1024) on a loaded system, and it doesn't seem to work as I
expected... all running processes show the same CPU usage (in htop) :-/
Does anyone have the same experience?
Fedora 19, libvirt-1.0.5.9-1.fc19.x86_64
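Worth noting: cpu.shares is a relative weight, not a cap; it only changes scheduling when sibling cgroups actually compete for the same CPUs, so on a host with idle cores htop will still show similar usage. To set the weight through libvirt rather than directly in the cgroup filesystem, the domain XML takes a <cputune> block (a sketch, mirroring the value from the experiment above):

```xml
<cputune>
  <shares>10</shares>
</cputune>
```

The live equivalent is `virsh schedinfo <domain> --set cpu_shares=10`.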
Thanks a lot
Martin
--
Martin Pavlásek <mpavlase(a)redhat.com>
OpenStack QA Associate/Red Hat Czech/BRQ
irc: mpavlase
10 years, 9 months
[libvirt-users] fedora 19 + libvirt-1.0.5.9 routing problems
by Patrick Chemla
Hi,
I am an experienced libvirt user, on Fedora versions from F15 to F17.
I have developed scripts to route traffic from outside, on multiple
interfaces/multiple IPs, to multiple VMs, and back, so that each VM gets the
required external IP address.
I have servers with more than a hundred external IPs and up to 4 VMs,
each of them routing traffic over different external IPs.
I have servers running Fedora F17 which work fine with this setup.
Now libvirt-1.0.5.9 arrives with Fedora 19 and installs many default
iptables rules that prevent me from using my scripts.
So I put the right rules in /etc/libvirt/hooks/qemu to get traffic to my
VMs, but I can't get return traffic out with the right external IP.
The -j SNAT --to-source or -j MASQUERADE rules don't work: they are ignored,
and I don't see any packets going through these rules in
iptables -t nat -L POSTROUTING.
I used tcpdump to trace packets on the physical server, on the virbr0
interface and on the eth0 interface. I see the packets on the outgoing route.
But the outgoing packets are presented to the external interface with
the internal address 10.0.0.x instead of the address specified in the
-j SNAT rule.
Am I the only one seeing this?
Could somebody help?
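A possible explanation (an assumption, not verified on the poster's system): when the default network starts, libvirt inserts its own MASQUERADE rules near the top of the nat POSTROUTING chain, so a custom SNAT rule appended later never sees the packets. A minimal sketch of a /etc/libvirt/hooks/qemu hook that inserts the SNAT rule first; the VM name "myvm" and both addresses are placeholders, and run_iptables echoes its arguments instead of calling iptables so the logic can be exercised without root:

```shell
#!/bin/sh
# Sketch of /etc/libvirt/hooks/qemu (placeholders throughout).
run_iptables() { echo "iptables $*"; }   # swap for: iptables "$@"

handle_hook() {
    vm="$1"; op="$2"
    [ "$vm" = "myvm" ] || return 0
    case "$op" in
        started)
            # -I POSTROUTING 1 places the SNAT rule ahead of libvirt's own
            # MASQUERADE entries so it matches first
            run_iptables -t nat -I POSTROUTING 1 -s 10.0.0.2 \
                -j SNAT --to-source 203.0.113.10
            ;;
        stopped)
            run_iptables -t nat -D POSTROUTING -s 10.0.0.2 \
                -j SNAT --to-source 203.0.113.10
            ;;
    esac
}

handle_hook "$@"
```

libvirt invokes the hook as `qemu <vm name> <operation> ...`, which is why the script simply forwards its arguments to handle_hook.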
Thanks
Patrick
10 years, 9 months
[libvirt-users] Flags in java api bindings
by Pasquale Dir
I am currently using the Connect(String, ConnectAuth, int) constructor, as I
decided to use a TCP connection and I need the auth part.
It works, but I still need the read/write flag which, in
Connect(String, boolean), is a boolean. I need to enable write access.
In the javadoc no flags are defined... so which flag allows writing?
And where can I find a list?
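To the best of my knowledge of the bindings, the int flags parameter maps straight onto the C API's virConnectOpenAuth flags: 0 opens the connection read/write, and VIR_CONNECT_RO (value 1) opens it read-only. A small sketch spelling that out (the constant values come from libvirt's C headers; the Java binding does not export named constants for them in the version discussed):

```java
public class ConnectFlags {
    // Values from libvirt's C virConnectFlags enum (assumption: the Java
    // binding's int flags parameter passes these straight through).
    static final int VIR_CONNECT_RO = 1;         // open read-only
    static final int VIR_CONNECT_NO_ALIASES = 2; // don't resolve CPU model aliases

    public static void main(String[] args) {
        // 0 = read/write (the default); to get the old Connect(uri, true)
        // read-only behaviour, pass VIR_CONNECT_RO instead.
        System.out.println("read/write flags = " + 0);
        System.out.println("read-only flags  = " + VIR_CONNECT_RO);
    }
}
```

So a writable authenticated connection would be something like `new Connect("qemu+tcp://host/system", auth, 0)`, with the URI as a placeholder.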
10 years, 9 months
[libvirt-users] Live migration (kvm) not working if any I/O operation is in progress
by Pasquale Dir
Hello,
I'd like to know if this is a hypervisor-related problem or a libvirt one.
I did this experiment: on a VM I started watching a video on YouTube.
While the video was playing I started a migration.
The migration did not complete until the video had finished.
I did another experiment: I installed a web server on the VM.
I then started an httperf stress test.
As before, the migration did not complete until the stress test was stopped.
Is this normal?
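This is expected pre-copy behaviour rather than a bug in either layer: live migration keeps re-copying pages the guest dirties, and a busy guest can dirty memory faster than the link drains it, so the final switch-over never converges. Two knobs worth trying (a sketch; the domain and host names are placeholders):

```
# suspend the guest and finish the copy if migration hasn't converged in time
virsh migrate --live --timeout 120 mydomain qemu+ssh://desthost/system

# while a migration is running, allow a longer final pause (milliseconds)
virsh migrate-setmaxdowntime mydomain 500
```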
10 years, 9 months
[libvirt-users] libvirt-sock No such file or directory
by Ram Srivatsa Kannan
Hi,
When I run the following command
*~/kvm/virt-manager $ virsh -c qemu:///system list*
I get the following errors. Please help me out with this issue.
error: failed to connect to the hypervisor
error: Failed to connect socket to
'/usr/local/var/run/libvirt/libvirt-sock': No such file or directory
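The /usr/local prefix in the socket path suggests a libvirt built from source, and the error usually just means no libvirtd is listening there. A few things worth checking (an assumption about the setup, not a certain diagnosis):

```
# is any libvirtd running, and which build?
ps aux | grep '[l]ibvirtd'

# does the run directory the client expects actually exist?
ls -l /usr/local/var/run/libvirt/

# if the from-source daemon is the intended one, start it
sudo /usr/local/sbin/libvirtd -d
```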
Thank you
Ram
PhD student
Dept of EECS UMich
10 years, 9 months
[libvirt-users] [libvirt] LXC, user namespaces and systemd
by Dariusz Michaluk
Hi!
My colleagues from Samsung and I are trying to run systemd in a Linux
container. I saw that others are experimenting with this topic,
so I would like to present the results of my work and tests; perhaps they
will be helpful to others.
As a starting point I used the guide written by Daniel:
https://www.berrange.com/posts/2013/08/12/running-a-full-fedora-os-inside...
After many attempts, I managed to run systemd. Let's move on to the specifics.
1. Host configuration, Fedora 20
- kernel 3.14 with NAMESPACES, UTS_NS, IPC_NS, USER_NS, PID_NS, NET_NS
enabled in kernel config
I used kernel-3.14.0-0.rc2.git0.1.fc21.i686.rpm downloaded from
https://dl.fedoraproject.org/pub/fedora/linux/development/rawhide
- libvirtd (libvirt) 1.2.2
I used a libvirt build from git sources; it is important that the source
contains commit 6fb42d7cdc57da453691d043d6b9bf23e2bae15e,
the patch from Richard Weinberger "Ensure systemd cgroup ownership is
delegated to container with userns".
2. Container configuration
- setup Fedora environment
# yum -y --releasever=20 --nogpg \
    --installroot=/var/lib/libvirt/filesystems/mycontainer \
    --disablerepo='*' --enablerepo=fedora \
    install systemd passwd yum fedora-release vim-minimal openssh-server procps-ng
# echo "pts/0" >> /var/lib/libvirt/filesystems/mycontainer/etc/securetty
# chroot /var/lib/libvirt/filesystems/mycontainer /bin/passwd root
- In the final solution I want to map root inside the container to a
normal user on the host. So let's create a user (on the host):
# useradd foo -u 666
# id foo
uid=666(foo) gid=1001(foo) groups=1001(foo)
# chown -R foo:foo /var/lib/libvirt/filesystems/mycontainer
- enabling user namespaces (user mapping setup); see my full libvirt
config file:
# cat /etc/libvirt/lxc/container.xml
<domain type='lxc'>
  <name>mycontainer</name>
  <uuid>d750af59-6082-437c-b860-922e76b46410</uuid>
  <memory unit='KiB'>819200</memory>
  <currentMemory unit='KiB'>819200</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='i686'>exe</type>
    <init>/sbin/init</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <idmap>
    <uid start='0' target='666' count='1000'/>
    <gid start='0' target='1001' count='1000'/>
  </idmap>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/var/lib/libvirt/filesystems/mycontainer'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <mac address='00:16:3e:34:a2:dd'/>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
</domain>
3. Start container
# virsh --connect lxc:/// define /etc/libvirt/lxc/container.xml
# virsh --connect lxc:/// start mycontainer --console
If all login attempts are rejected, boot the host machine with audit=0:
# vi /etc/default/grub
GRUB_CMDLINE_LINUX=" [...] audit=0 [...]"
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
4. Problems and solutions
a)
"Cannot add dependency job for unit display-manager.service, ignoring:
Unit display-manager.service failed to load: No such file or directory."
Delete or just comment out the line "Wants=display-manager.service":
# cat /usr/lib/systemd/system/default.target
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
After=multi-user.target
Conflicts=rescue.target
#Wants=display-manager.service
AllowIsolate=yes
[Install]
Alias=default.target
b)
[FAILED] Failed to mount Huge Pages File System.
See 'systemctl status dev-hugepages.mount' for details.
[FAILED] Failed to mount Configuration File System.
See 'systemctl status sys-kernel-config.mount' for details.
[FAILED] Failed to mount Debug File System.
See 'systemctl status sys-kernel-debug.mount' for details.
[FAILED] Failed to mount FUSE Control File System.
See 'systemctl status sys-fs-fuse-connections.mount' for details.
Based on the explanation Daniel gave: "When a syscall requires
CAP_SYS_ADMIN, for example, the kernel will either use
capable(CAP_SYS_ADMIN), which only succeeds in the host, or
ns_capable(CAP_SYS_ADMIN), which is allowed to succeed in the container.
Different filesystems have differing restrictions, but at this time the
vast majority of filesystems require that capable(CAP_SYS_ADMIN)
succeed and thus you can only mount them in the host.",
and the discussion about "allow some kernel filesystems to be mounted in a
user namespace" from:
http://comments.gmane.org/gmane.linux.kernel/1525998
I decided to disable mounting these filesystems:
# systemctl mask dev-hugepages.mount
ln -s '/dev/null' '/etc/systemd/system/dev-hugepages.mount'
# systemctl mask sys-kernel-config.mount
ln -s '/dev/null' '/etc/systemd/system/sys-kernel-config.mount'
# systemctl mask sys-kernel-debug.mount
ln -s '/dev/null' '/etc/systemd/system/sys-kernel-debug.mount'
# systemctl mask sys-fs-fuse-connections.mount
ln -s '/dev/null' '/etc/systemd/system/sys-fs-fuse-connections.mount'
c)
[FAILED] Failed to start D-Bus System Message Bus.
See 'systemctl status dbus.service' for details.
Feb 26 09:26:12 localhost.localdomain systemd[1]: Starting D-Bus System
Message Bus...
Feb 26 09:26:12 localhost.localdomain systemd[20]: Failed at step
OOM_ADJUST spawning /bin/dbus-daemon: Permission denied
# echo -900 > /proc/20/oom_score_adj
/proc/20/oom_score_adj: Permission denied
# ls -l /proc/20/oom_score_adj
-rw-r--r--. 1 65534 65534 0 Feb 26 10:28 /proc/20/oom_score_adj
According to the kernel documentation, in a user namespace the local root
user (in the guest) cannot set the OOM score to an arbitrary value; lowering
it requires, besides CAP_SYS_RESOURCE, full root privileges on the host.
To disable the OOM adjustment, delete or just comment out the line
"OOMScoreAdjust=-900":
# cat /usr/lib/systemd/system/dbus.service
[Unit]
Description=D-Bus System Message Bus
Requires=dbus.socket
After=syslog.target
[Service]
ExecStart=/bin/dbus-daemon --system --address=systemd: --nofork
--nopidfile --systemd-activation
ExecReload=/bin/dbus-send --print-reply --system --type=method_call
--dest=org.freedesktop.DBus / org.freedesktop.DBus.ReloadConfig
#OOMScoreAdjust=-900
5. Final systemd start
# virsh --connect lxc:/// start mycontainer --console
systemd 208 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA
+SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
Detected virtualization 'lxc-libvirt'.
Welcome to Fedora 20 (Heisenbug)!
Failed to install release agent, ignoring: No such file or directory
[ OK ] Reached target Remote File Systems.
[ OK ] Created slice Root Slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Created slice System Slice.
[ OK ] Created slice system-getty.slice.
[ OK ] Reached target Slices.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Reached target Paths.
[ OK ] Reached target Encrypted Volumes.
[ OK ] Listening on Journal Socket.
Mounting POSIX Message Queue File System...
Starting Journal Service...
[ OK ] Started Journal Service.
Starting Create static device nodes in /dev...
[ OK ] Reached target Swap.
Mounting Temporary Directory...
Starting Load/Save Random Seed...
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Started Create static device nodes in /dev.
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Started Load/Save Random Seed.
[ OK ] Mounted Temporary Directory.
[ OK ] Reached target Local File Systems.
Starting Trigger Flushing of Journal to Persistent Storage...
Starting Recreate Volatile Files and Directories...
[ OK ] Started Trigger Flushing of Journal to Persistent Storage.
[ OK ] Started Recreate Volatile Files and Directories.
Starting Update UTMP about System Reboot/Shutdown...
[ OK ] Started Update UTMP about System Reboot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Timers.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting OpenSSH server daemon...
Starting Permit User Sessions...
Starting D-Bus System Message Bus...
[ OK ] Started D-Bus System Message Bus.
Starting Login Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started Permit User Sessions.
Starting Console Getty...
[ OK ] Started Console Getty.
[ OK ] Reached target Login Prompts.
Starting Cleanup of Temporary Directories...
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started Login Service.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Fedora release 20 (Heisenbug)
Kernel 3.14.0-0.rc2.git0.1.fc21.i686 on an i686 (console)
localhost login: root
Password:
Last login: Wed Feb 26 09:26:21 on pts/0
-bash-4.2#
- verification of which namespaces are used
inside the container:
# ls -l /proc/self/ns/
ipc -> ipc:[4026532341]
mnt -> mnt:[4026532338]
net -> net:[4026532344]
pid -> pid:[4026532342]
user -> user:[4026532337]
uts -> uts:[4026532339]
outside the container:
$ ls -l /proc/self/ns/
ipc -> ipc:[4026531839]
mnt -> mnt:[4026531840]
net -> net:[4026531956]
pid -> pid:[4026531836]
user -> user:[4026531837]
uts -> uts:[4026531838]
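A quick extra check (my own suggestion, not part of the original setup): the uid mapping configured by the <idmap> element can also be confirmed from inside the container; with start='0' target='666' count='1000' the map should read:

```
# inside the container
$ cat /proc/self/uid_map
         0        666       1000
```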
I know that no one likes to read long emails, but most of this is config and
logs. I will be grateful for comments and suggestions.
Regards,
--
Dariusz Michaluk
Samsung R&D Institute Poland
Samsung Electronics
d.michaluk(a)samsung.com
10 years, 9 months
[libvirt-users] 'virsh capabilities' on Debian Wheezy-amd64 reports different cpu to Wheezy-i386 (on same hardware)
by Struan Bartlett
Hi
On a range of Dell servers with Intel 64-bit processors, 'virsh
capabilities' reports the CPU differently on Debian Wheezy-amd64 and
Wheezy-i386. The results given by the Wheezy-i386 version seem very
wrong (since n270 is an Atom processor). Apart from the architecture, the
package versions of libvirt-bin are identical (1.2.1-1~bpo70+1) and the
/usr/share/libvirt/cpu_map.xml files are identical. Is this a known
issue? Details for one server are:
# cat /proc/cpuinfo | head -n 26
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz
stepping : 7
microcode : 0x70d
cpu MHz : 1800.054
cache size : 20480 KB
physical id : 0
siblings : 16
core id : 0
cpu cores : 8
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good
nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64
monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1
sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat
xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 3600.10
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
...
</proc/cpuinfo for processors 1..31 snipped here for brevity>
# Running Wheezy-amd64 libvirt-bin1.2.1-1~bpo70+1
# virsh capabilities
<cpu>
<arch>x86_64</arch>
<model>SandyBridge</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='8' threads='2'/>
<feature name='pdpe1gb'/>
<feature name='osxsave'/>
<feature name='dca'/>
<feature name='pcid'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
</cpu>
# Running Wheezy-i386 libvirt-bin1.2.1-1~bpo70+1
# virsh capabilities
<cpu>
<arch>x86_64</arch>
<model>n270</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='8' threads='2'/>
<feature name='lahf_lm'/>
<feature name='lm'/>
<feature name='rdtscp'/>
<feature name='pdpe1gb'/>
<feature name='avx'/>
<feature name='osxsave'/>
<feature name='xsave'/>
<feature name='aes'/>
<feature name='tsc-deadline'/>
<feature name='popcnt'/>
<feature name='x2apic'/>
<feature name='sse4.2'/>
<feature name='sse4.1'/>
<feature name='dca'/>
<feature name='pcid'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='cx16'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='dtes64'/>
<feature name='pclmuldq'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='pse36'/>
</cpu>
Kind regards
Struan Bartlett
--
Struan Bartlett
NewsNow Publishing Limited
Tel: +44 (0)845 838 8890
Fax: +44 (0)845 838 8898
The UK's #1 News Portal:
> www.NewsNow.co.uk (est. 1998)
Also tailored for Mobile:
> mobile.NewsNow.co.uk
Now with FREE Personalisation:
> www.NewsNow.co.uk/register/
Bespoke B2B Internet News Monitoring:
> www.newsnow.co.uk/services/newsmonitoring/
Bespoke B2B Headlines for Websites:
> www.newsnow.co.uk/services/websites/
NewsNow Publishing Limited, trading also as NewsNow.co.uk, is a company
registered in England and Wales under company no. 3435857 with
registered office The Euston Office, 1 Euston Square, 40 Melton Street,
London NW1 2FD
10 years, 9 months