[libvirt-users] lxc-enter-namespace error: security model cannot be entered.
by hzguanqiang
Hi Guys,
I started an LXC container with libvirt on an Ubuntu host and successfully used lxc-enter-namespace to enter the namespaces and security context of the container. But when I do the same thing on a Debian host, it reports an error, with details as follows:
root@debian:/etc# virsh list
 Id    Name                 State
----------------------------------------------------
 4424  instance-00000007    running
 25913 instance-00000008    running
root@debian:/etc# virsh dumpxml 4424
<domain type='lxc' id='4424'>
  <name>instance-00000007</name>
  <uuid>f1ce5360-bb5e-4cfc-b5ef-d05f8db52618</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>3</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
    <cmdline>console=tty0 console=ttyS0</cmdline>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib/libvirt/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/opt/stack/data/nova/instances/f1ce5360-bb5e-4cfc-b5ef-d05f8db52618/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='fa:16:3e:3a:c6:11'/>
      <source bridge='br100'/>
      <target dev='veth0'/>
      <filterref filter='nova-instance-instance-00000007-fa163e3ac611'/>
    </interface>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='lxc' port='0'/>
      <alias name='console0'/>
    </console>
  </devices>
  <seclabel type='none'/>
</domain>
root@debian:/etc# virsh lxc-enter-namespace 4424 /bin/sh
libvirt: error : argument unsupported: Security model cannot be entered
Is there anything that needs to be configured on a Debian host in order to use the 'lxc-enter-namespace' interface?
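As a point of comparison, I believe the same effect can be achieved by entering the container's namespaces directly with nsenter, roughly as in the sketch below (assuming a util-linux recent enough to ship nsenter, and that pgrep picks out the right libvirt_lxc monitor process), but I'd prefer to go through the libvirt interface:
MONITOR=$(pgrep -f 'libvirt_lxc.*instance-00000007')   # the container's libvirt_lxc monitor process
INIT=$(pgrep -P "$MONITOR" | head -n 1)                # its child is the container's init
nsenter -t "$INIT" -m -u -i -n -p /bin/sh              # enter its mount/UTS/IPC/net/PID namespaces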
--------------
Best regards!
GuanQiang
2013-07-30
[libvirt-users] ANNOUNCE: Oz 0.11.0 release
by Chris Lalancette
All,
I'm pleased to announce release 0.11.0 of Oz. Oz is a program
for doing automated installation of guest operating systems with
limited input from the user. Release 0.11.0 is a bugfix and feature
release for Oz. Some of the highlights between Oz 0.10.0 and 0.11.0
are:
* Add support for installing Ubuntu 13.04
* Add the ability to get user-specific ICICLE information
* Add the ability to generate ICICLE safely, by using a disk snapshot
* Add the ability to include extra files and directories on the installation ISO
* Add the ability to install to alternate file types, like qcow2, etc.
* Add support for installing Ubuntu 5.04/5.10
* Add support for installing Fedora 19
* Add support for installing Debian 7
* Add support for Windows 2012 and 8
* Add support for getting files over http for the commands/files
section of the TDL
* Add support for setting a custom MAC address to guests during installation
* Add support for user specified disk and NIC model
* Add support for OpenSUSE 12.3
* Add support for URL based installs for Ubuntu
A tarball and zipfile of this release are available on the Github
releases page: https://github.com/clalancette/oz/releases . Packages
for Fedora-18 and Fedora-19 have been built in Koji and will
eventually make their way to stable. Instructions on how to get and
use Oz are available at http://github.com/clalancette/oz/wiki .
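For anyone trying Oz for the first time, basic usage looks roughly like the
sketch below; the template contents and URL are only illustrative, see the
wiki above for the authoritative TDL reference:
$ cat fedora19.tdl
<template>
  <name>fedora19_x86_64</name>
  <os>
    <name>Fedora</name>
    <version>19</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://download.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/x86_64/os/</url>
    </install>
  </os>
  <description>Minimal Fedora 19 template</description>
</template>
$ oz-install fedora19.tdl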
If you have questions or comments about Oz, please feel free to
contact me at clalancette at gmail.com, or open up an issue on the
github page: http://github.com/clalancette/oz/issues .
This was one of the most active Oz releases ever, because of the
feedback and patches from the community. Thanks to everyone who
contributed to this release through bug reports, patches, and
suggestions for improvement.
Chris Lalancette
[libvirt-users] Libvirt and Glusterfs pool
by Pierre-Gilles Mialon
Hi,
I use the QEMU-GlusterFS native integration (no FUSE mount) with libvirt.
Currently I create a volume by issuing:
# qemu-img create gluster://localhost/gv1/test.img 5G
Then, using libvirt, I declare the following lines in my domain.xml:
<disk type='network' device='disk'>
  <driver name='qemu' cache='none'/>
  <source protocol='gluster' name='gv1/test.img'>
    <host name='127.0.0.1' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
It works really well, but the better way would be to use only libvirt.
Is it planned to support a native GlusterFS pool type?
It would be great to be able to define a pool like this:
<pool type='glusterfs'>
  <name>myname</name>
  <source protocol='gluster' volume='gv0'>
    <host name='127.0.0.1'/>
  </source>
  <target>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
And then create new images using vol-create.
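With such a pool type in place, I would expect day-to-day use to look
something like this sketch ('myname' and the file names are just the
placeholders from the XML above; the 'glusterfs' pool type itself does not
exist yet, of course):
# virsh pool-define gluster-pool.xml
# virsh pool-start myname
# virsh vol-create-as myname test2.img 5G
# virsh vol-list myname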
Regards,
--
Pierre-Gilles Mialon
[libvirt-users] pinVcpu not working
by Peeyush Gupta
Hi all,
I am working with libvirt and trying to set CPU affinity. I can always use
virsh vcpupin <domain_name> <vcpu> <pcpu>
to pin vCPUs to pCPUs, but I want to do it using the Python API. There is a function pinVcpu() which is supposed to do that, but it is not working. For example, I called
dom.pinVcpu(0,1)
but my vCPU affinity still covers all the pCPUs, even though the function returns 0 (success).
Any idea what I am doing wrong?
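For reference, my current understanding (possibly wrong, which is partly why I'm asking) is that the Python binding's pinVcpu() expects a cpumap, i.e. a tuple of booleans with one entry per host pCPU, rather than a pCPU index. On a host with 4 pCPUs, pinning vCPU 0 to pCPU 1 would then look roughly like this sketch ('mydomain' is a placeholder):
import libvirt

conn = libvirt.open('qemu:///system')        # or whatever URI your hypervisor uses
dom = conn.lookupByName('mydomain')          # placeholder domain name

# One boolean per host pCPU; True means the vCPU may run on that pCPU.
# Here vCPU 0 is restricted to pCPU 1 on a 4-pCPU host.
dom.pinVcpu(0, (False, True, False, False))

print(dom.vcpus())                           # the second element shows the per-vCPU affinity maps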
Thanks.
~Peeyush Gupta
[libvirt-users] How to trigger a script in a guest after resume? (aka guest system clock incorrect after host reboot with suspended guest)
by Nils Toedtmann
Hi
Is there a way I can trigger a script within a libvirt guest immediately
after resume? For example, I don't see any log message that would tell a guest
that it had been paused and was just woken up.
The problem that I am trying to solve is that when I reboot the host
(pausing all guests), the guests' system clocks are a few minutes late
after resume. The guests' RTCs (current_clocksource = kvm-clock) are
correct though, so a "hwclock --hctosys" fixes the offset. I only need to
find out how to call hwclock right after resume!
Right now I call hwclock every few minutes, but I'd like to shorten the
window of wrong time. On the other hand, I'd like to avoid installing ntpd
in all the guests, given their perfectly fine RTCs.
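For completeness, the stopgap I mentioned is just a cron entry in each guest
along these lines (sketch; the interval and the hwclock path may differ per
distro):
# /etc/cron.d/hwclock-resync
*/5 * * * *  root  /sbin/hwclock --hctosys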
On the host I am using CentOS 6.4 with all patches and the packaged
libvirt-0.10.2 and qemu-kvm-0.12.1.2. All guests are Ubuntu 12.04.
/nils.
[libvirt-users] API to set cpuset.cpu_exclusive flag
by Peeyush Gupta
Hi all,
I have been trying to set the cpuset.cpu_exclusive flag. I can do it using "echo", but I want to know whether there is any other way (an API) to set it. Is it possible to set this flag through an API?
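For context, what I do today is essentially one of the following, and I'm looking for a programmatic equivalent (sketch; the 'machine' group path is only an example and depends on how the cpuset controller is mounted on the host):
echo 1 > /sys/fs/cgroup/cpuset/machine/cpuset.cpu_exclusive
cgset -r cpuset.cpu_exclusive=1 machine    # same write via libcgroup's cgroup-tools, if installed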
Thanks.
~Peeyush Gupta
[libvirt-users] How to monitor a lxc container started by libvirt_lxc from inside ?
by hzguanqiang
Hi Guys,
When I created an LXC container with libvirt, I logged into it and noticed that the info under the /proc directory did not match the container's resources. Does /proc in an LXC container just show the same thing as on the LXC host? If I want to monitor real-time resource usage from inside the container, what should I do?
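So far the closest thing I have found is to read the container's cgroup counters instead of /proc, roughly as in the sketch below (whether and where the controllers are visible inside the container depends on how libvirt mounts /sys/fs/cgroup), but I would like to know whether that is the intended way:
cat /sys/fs/cgroup/memory/memory.usage_in_bytes    # memory currently used by this container
cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # memory limit applied to this container
cat /sys/fs/cgroup/cpuacct/cpuacct.usage           # cumulative CPU time, in nanoseconds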
--------------
Best regards!
GuanQiang
2013-07-23
[libvirt-users] Resize errors with virt-resize/vgchange
by Alex
Hi,
I have an fc18 system and am trying to resize an LVM partition with an
ext4 filesystem, but I am receiving the following message from virt-resize:
# virt-resize -d --expand /dev/sda1 --LV-expand /dev/mapper/prop-home
prop-1.img prop-expand.img
command line: virt-resize -d --expand /dev/sda1 --LV-expand
/dev/mapper/prop-home prop-1.img prop-expand.img
Examining prop-1.img ...
libguestfs: trace: add_drive "prop-1.img" "readonly:true"
libguestfs: trace: add_drive = 0
libguestfs: trace: add_drive "prop-expand.img" "readonly:false"
libguestfs: trace: add_drive = 0
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: trace: disk_format "/var/lib/libvirt/images/prop-expand.img"
libguestfs: trace: disk_format = "raw"
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
[...] 100% --:--
libguestfs: trace: launch = 0
libguestfs: trace: lvm_set_filter "/dev/sda"
libguestfs: trace: lvm_set_filter = -1 (error)
Fatal error: exception Guestfs.Error("lvm_set_filter: vgchange:
Couldn't find device with uuid zouQ8X-qxqJ-mp6p-pzg3-mi2i-K9YM-A763Kc.
Refusing activation of partial LV home. Use --partial to override.
Refusing activation of partial LV swap. Use --partial to override.
R
libguestfs: trace: close
libguestfs: trace: internal_autosync
libguestfs: trace: internal_autosync = 0
I don't understand this error message. I also see that vgchange
doesn't even have a 'partial' option, so I'm not sure how to
troubleshoot it. Here is the filesystem layout for this system:
# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               7.9G     0  7.9G   0% /dev
tmpfs                  7.9G     0  7.9G   0% /dev/shm
tmpfs                  7.9G  3.9M  7.9G   1% /run
tmpfs                  7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/mapper/prop-root   15G  7.4G  6.3G  55% /
tmpfs                  7.9G     0  7.9G   0% /tmp
/dev/mapper/prop-boot  477M   95M  358M  21% /boot
/dev/mapper/prop-home  222G  212G  9.9G  96% /home
The image contains the 222G /home partition only; the other partitions are
in another image.
Any ideas on how to troubleshoot this would be greatly appreciated.
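In case it helps, the next thing I was planning to try is inspecting the
image's LVM layout to see whether the 'prop' VG references a PV that lives
outside this image (sketch, using the libguestfs inspection tools):
# virt-filesystems -a prop-1.img --all --long -h
# guestfish --ro -a prop-1.img run : pvs : vgs : lvs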
Thanks,
Alex