[libvirt-users] Can we disable writes to the /sys/fs/cgroup tree inside a container?
by mxs kolo
Hi all
Each LXC container on the node has the cgroup tree mounted (a tmpfs at /sys/fs/cgroup with the per-controller cgroup mounts under it):
[root-inside-lxc@tst1 ~]# mount | grep cgroup
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup
(rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup
(rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup
(rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup
(rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup
(rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup
(rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/systemd type cgroup
(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/hugetlb type cgroup
(rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
This is the default, at least in my case.
The problem is that this is the full cgroup tree - from the hardware node and
from all the other containers on the node.
[root-inside-lxc@tst1 ~]# for i in `ls
/sys/fs/cgroup/devices/machine.slice/machine-lxc*/devices.list`; do
echo $i; cat $i; done
/sys/fs/cgroup/devices/machine.slice/machine-lxc\x2d10297\x2dtst2.scope/devices.list
c 1:3 rwm
c 1:5 rwm
c 1:7 rwm
c 1:8 rwm
c 1:9 rwm
c 5:0 rwm
c 5:2 rwm
c 10:229 rwm
b 253:6 rw
c 136:* rwm
/sys/fs/cgroup/devices/machine.slice/machine-lxc\x2d9951\x2dtst1.scope/devices.list
c 1:3 rwm
c 1:5 rwm
c 1:7 rwm
c 1:8 rwm
c 1:9 rwm
c 5:0 rwm
c 5:2 rwm
c 10:229 rwm
b 253:7 rw
c 136:* rwm
The hardware node's own file, viewed from inside the tst1 container:
[root-inside-lxc@tst1 ~]# cat /sys/fs/cgroup/devices/devices.list
a *:* rwm
What is the best way to prevent viewing and editing of all cgroup
structures except those belonging to the current LXC container (SELinux,
AppArmor)?
Why does libvirt mount /sys/fs/cgroup/* inside the container read-write?
We use kernel 3.10.0-693.2.2.el7.x86_64 and XFS, and therefore our
containers are privileged. Yes, we know that in such containers root
can use SysRq, at least to reboot the hardware node. But the problem with
cgroups can be more hidden and cryptic.
P.S.
As a short test shows, the root user in a container can disable /dev/zero (c 1:5) on the node
[root-lxc@tst1 ~]# echo "c 1:5 rwm" > /sys/fs/cgroup/devices/devices.deny
or all devices in another container
[root-lxc@tst1 ~]# echo "a *:* rwm" >
/sys/fs/cgroup/devices/machine.slice/machine-lxc\x2d10297\x2dtst2.scope/devices.deny
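A partial, easily bypassed mitigation - a sketch only, not something libvirt does for you - is to bind-remount the controller mount points read-only inside the container, so that at least accidental writes to foreign cgroups fail (root in a privileged container can simply remount them read-write again):

/* Sketch: make the per-controller cgroup mounts read-only for this mount
 * point only (MS_BIND|MS_REMOUNT changes the mount, not the shared cgroup
 * superblock).  Root inside a privileged container can undo this with
 * another remount, so it guards against accidents, not attacks. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    const char *mounts[] = {
        "/sys/fs/cgroup/devices",
        "/sys/fs/cgroup/memory",
        "/sys/fs/cgroup/cpu,cpuacct",
        /* ... remaining controllers from the mount output above ... */
    };

    for (size_t i = 0; i < sizeof(mounts) / sizeof(mounts[0]); i++) {
        /* keep the existing nosuid,nodev,noexec flags and add ro */
        if (mount("none", mounts[i], NULL,
                  MS_REMOUNT | MS_BIND | MS_RDONLY |
                  MS_NOSUID | MS_NODEV | MS_NOEXEC, NULL) < 0)
            perror(mounts[i]);
    }
    return 0;
}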
b.r.
Maxim Kozin
[libvirt-users] File option for smbios.
by Julio Faracco
Hi guys,
Does anybody know if there is an option to set a binary file for SMBIOS?
Currently, I'm using this approach:
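<!-- note: the qemu: elements require xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'
     to be declared on the <domain> element -->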
<qemu:commandline>
<qemu:arg value='-smbios'/>
<qemu:arg value='file=/var/run/libvirt/qemu/smbios_type_1.bin'/>
</qemu:commandline>
Does libvirt have a method for this, like <smbios mode='host'/>?
--
Julio Cesar Faracco
[libvirt-users] virConnectIsAlive
by llilulu
Hi
In my program, when libvirtd restarts, the old libvirtd connection (virConnectPtr) has to be reconnected. Before reusing the old virConnectPtr I call virConnectIsAlive(), but when I restart libvirtd, virConnectIsAlive() returns 1, and when I continue other operations with the old virConnectPtr the program receives SIGPIPE.
I also tried using libvirt events: before any API use I call virEventRegisterDefaultImpl() and run virEventRunDefaultImpl(), then use virConnectIsAlive(). In that case, when I restart libvirtd, virConnectIsAlive() returns 0. I want to know how virConnectIsAlive() is meant to be used, and, if it relies on keepalive messages, what the default interval and count are.
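A minimal sketch of the pattern that makes liveness detection work (the URI and the interval/count values are illustrative choices, not library defaults): the connection is only marked dead once the client notices, so the default event loop must be running and keepalive has to be requested explicitly with virConnectSetKeepAlive(); a close callback avoids polling virConnectIsAlive() at all.

/* Sketch: event loop + keepalive so a libvirtd restart is detected.
 * Build with something like: gcc alive.c -lvirt -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <libvirt/libvirt.h>

static void *event_loop(void *arg)
{
    (void)arg;
    for (;;) {
        if (virEventRunDefaultImpl() < 0)   /* dispatches keepalive and close events */
            break;
    }
    return NULL;
}

static void conn_closed(virConnectPtr conn, int reason, void *opaque)
{
    (void)conn; (void)opaque;
    fprintf(stderr, "connection closed (reason %d) - reconnect here\n", reason);
}

int main(void)
{
    pthread_t tid;

    virEventRegisterDefaultImpl();          /* register before opening the connection */

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    pthread_create(&tid, NULL, event_loop, NULL);

    /* ping every 5 s, give up after 3 missed replies (illustrative values) */
    if (virConnectSetKeepAlive(conn, 5, 3) < 0)
        fprintf(stderr, "keepalive not supported on this connection\n");

    virConnectRegisterCloseCallback(conn, conn_closed, NULL, NULL);

    /* ... later, before reusing the handle ... */
    if (virConnectIsAlive(conn) != 1) {
        fprintf(stderr, "connection is gone, reopen it\n");
        /* virConnectClose(conn); conn = virConnectOpen("qemu:///system"); */
    }

    pthread_join(tid, NULL);                /* sketch only: loops forever */
    return 0;
}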
Thanks
[libvirt-users] error: internal error: missing backend for pool type 11 (zfs)
by Nick Gilmour
Hi all,
I'm trying to set up virt-manager with ZFS as storage on my Arch box. I
have created a pool named virt-pool and tried to use it as storage, first
with virt-manager and then in the terminal with virsh, but I'm always
getting the following errors:
virsh # pool-define-as --name zfsvirtpool --source-name virt-pool --type zfs
error: Failed to define pool zfsvirtpool
error: internal error: missing backend for pool type 11 (zfs)
But ZFS seems to be working fine:
# zpool status
pool: virt-pool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
virt-pool ONLINE 0 0 0
x ONLINE 0 0 0
errors: No known data errors
libvirt and VMM also seem fine. These are the installed versions:
Virtual Machine Manager v.1.4.3
virsh v.3.7.0
libvirtd (libvirt) 3.7.0
I have found that there was a bug last year which is supposed to be fixed:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=827245
So why am I getting this error? Is something wrong with my setup?
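For what it's worth, the "missing backend for pool type" error generally means the installed libvirt was built without the ZFS storage backend, in which case no pool XML will help. A minimal sketch of the same pool definition through the C API, assuming a build that does include the backend (pool and zpool names taken from above):

/* Sketch: define the ZFS-backed pool via the C API.  Fails with the same
 * "missing backend" error when libvirt lacks the ZFS storage driver. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *xml =
        "<pool type='zfs'>"
        "  <name>zfsvirtpool</name>"
        "  <source>"
        "    <name>virt-pool</name>"        /* the existing zpool */
        "  </source>"
        "</pool>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolDefineXML(conn, xml, 0);
    if (!pool) {
        fprintf(stderr, "pool definition failed - is the ZFS backend compiled in?\n");
        virConnectClose(conn);
        return 1;
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}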
Regards,
Nick
[libvirt-users] virDomainSaveImageDefineXML
by llilulu
Hi:
When I make a snapshot with virDomainSave(), before I resume it I use virDomainSaveImageDefineXML() to change the domain info, such as a disk path. But I can't change the <devices> section, for example to detach a disk or an interface. Can someone tell me what I am allowed to change with virDomainSaveImageDefineXML() before resuming the snapshot?
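For context, a minimal sketch of the edit cycle (the image path is hypothetical): the XML stored in the save image describes the hardware the saved guest memory expects, so it can only be adjusted in ways that keep the guest ABI identical - host-side details such as disk source paths - while adding or removing devices is generally rejected; device changes are normally done after the restore with virDomainAttachDevice()/virDomainDetachDevice().

/* Sketch: fetch the XML embedded in a save image, tweak host-side details,
 * and write it back before restoring.  The image path is made up. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *image = "/var/lib/libvirt/save/tst1.save";   /* hypothetical */

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* XML as stored in the save image */
    char *xml = virDomainSaveImageGetXMLDesc(conn, image, 0);
    if (!xml) {
        virConnectClose(conn);
        return 1;
    }

    /* Edit the XML here, e.g. rewrite a <disk><source file='...'/> path.
     * The placeholder below just writes the unchanged XML back. */
    const char *newxml = xml;

    if (virDomainSaveImageDefineXML(conn, image, newxml, 0) < 0)
        fprintf(stderr, "redefine failed (ABI-changing edits are refused)\n");

    free(xml);
    virConnectClose(conn);
    return 0;
}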