[libvirt-users] Enabling capabilities in a container
by Peter Steele
I'm using libvirt_lxc to create and manage various containers. I need to
enable certain capabilities in a container to support ctdb, and as a
quick solution I decided to just enable them all. I *thought* this would
do the trick, adding the following XML to my container config:
<features>
  <capabilities policy='allow'>
  </capabilities>
</features>
After adding this to my container, I restarted it and tried to start the
ctdb service again:
# systemctl start ctdb.service
Job for ctdb.service failed. See 'systemctl status ctdb.service' and
'journalctl -xn' for details.
# systemctl status ctdb.service
ctdb.service - CTDB
Loaded: loaded (/usr/lib/systemd/system/ctdb.service; disabled)
Active: failed (Result: exit-code) since Tue 2015-08-04 14:10:39
PDT; 8s ago
Process: 4612 ExecStart=/usr/sbin/ctdbd_wrapper /run/ctdb/ctdbd.pid
start (code=exited, status=1/FAILURE)
Aug 04 14:10:37 pws-01 systemd[1]: Starting CTDB...
Aug 04 14:10:37 pws-01 ctdbd[4629]: CTDB starting on node
Aug 04 14:10:37 pws-01 ctdbd[4631]: Starting CTDBD (Version 2.5.4) as
PID: 4631
Aug 04 14:10:37 pws-01 ctdbd[4631]: Created PID file /run/ctdb/ctdbd.pid
Aug 04 14:10:37 pws-01 ctdbd[4631]: Unable to set scheduler to
SCHED_FIFO (Operation not permitted)
Aug 04 14:10:37 pws-01 ctdbd[4631]: CTDB daemon shutting down
Aug 04 14:10:39 pws-01 ctdbd_wrapper[4612]: CTDB exited during
initialisation - check logs.
Aug 04 14:10:39 pws-01 systemd[1]: ctdb.service: control process exited,
code=exited status=1
Aug 04 14:10:39 pws-01 systemd[1]: Failed to start CTDB.
Aug 04 14:10:39 pws-01 systemd[1]: Unit ctdb.service entered failed state.
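As a quick check that this is a capability problem rather than something
specific to ctdb, the same restriction can be demonstrated inside the
container with chrt from util-linux, which also needs CAP_SYS_NICE to
request a realtime scheduling policy and should fail with a similar
"Operation not permitted" error when the capability is missing:
# chrt -f 1 true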
This implies that the container still doesn't have access to the
capabilities it needs. I believe this error is in fact caused by the
container lacking the sys_nice capability, which is required to set the
SCHED_FIFO scheduler. So I tried to allow that specific capability using:
<features>
  <capabilities policy='default'>
    <sys_nice state='on'/>
  </capabilities>
</features>
This did not work either. So, what *is* the correct way to add
capabilities to a container?
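For what it's worth, one way to see which capabilities the container
actually ends up with is to decode its bounding set from inside the
container (capsh is part of libcap; /proc/1/status here is the container's
init as seen from inside):
# capsh --decode=$(awk '/CapBnd/ {print $2}' /proc/1/status)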
[libvirt-users] Cannot boot libvirt guests with OVMF. Raw qemu-kvm works as expected
by Ryan Barry
Using:
edk2.git-0-20150803.b1141.ga0973dc.x86_64
edk2.git-ovmf-x64-0-20150802.b1139.gb234418.noarch
On Fedora 22.
Provisioning an i440FX system in virt-manager and attempting to boot
results in successful EFI initialization, but the VM exits ungracefully
after the bootloader (with both F22 and CentOS 7 installer images). There's
no really useful information in any of the logs.
Using qemu-kvm directly (qemu-kvm -bios
/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd -m 1G -cdrom
~rbarry/Downloads/Fedora-Server-netinst-x86_64-22.iso) boots and loads
successfully.
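For comparison, the closer raw-qemu equivalent of the libvirt pflash setup
(split OVMF_CODE plus a writable VARS file) would be something like the
following sketch, not the exact command line libvirt generates:
qemu-kvm -machine pc-i440fx-2.3 -m 2G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=/var/lib/libvirt/qemu/nvram/fedora22_VARS.fd \
  -cdrom ~rbarry/Downloads/Fedora-Server-netinst-x86_64-22.iso
If that boots while the libvirt-managed guest doesn't, the problem is
presumably elsewhere in the domain config rather than in OVMF itself.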
What's the difference here? Where can I go for troubleshooting?
libvirt XML is below:
<domain type='kvm'>
  <name>fedora22</name>
  <uuid>7f363d28-881f-4240-97eb-9b8d49cfb282</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/fedora22_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell-noTSX</model>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/fedora22.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/rbarry/Downloads/Fedora-Server-netinst-x86_64-22.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:35:b6:00'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/fedora22.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <image compression='off'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
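Two places that might give more detail, assuming the default locations: the
per-domain QEMU log, which also records the exact command line libvirt
generated, and the guest's serial console, since the domain has a pty
console defined:
# cat /var/log/libvirt/qemu/fedora22.log
# virsh console fedora22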
[libvirt-users] UDP unicast network backend (QEMU)
by Stanley Karunditu
In 2012 QEMU added UDP unicast network backend support
https://github.com/qemu/qemu/commit/0e0e7facc775e9bb020314f48751b3d09f316...
I checked the latest libvirt in the git repo and didn't see this as an
option, so I tried the mcast tunnel mode instead. I keep getting duplicate
BPDU and LLDP packets on the point-to-point connection between the VMs (the
VMs have bridges configured on them). When I used the TCP tunnel interface,
the connection failed to establish if the client end came up before the
server end. So neither method is particularly reliable.
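For context, the mcast tunnel config I mean is the standard libvirt form,
roughly like this (the address and port here are only the documented
example values):
<interface type='mcast'>
  <source address='230.0.0.1' port='5558'/>
</interface>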
If I manually change the netdev settings to use unicast UDP tunneling, the
connection is far more stable, much like the GNS3 connections.
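The udp option from the commit above looks roughly like this on the QEMU
command line (addresses, ports and IDs here are only placeholders):
-netdev socket,id=net0,udp=192.168.0.2:5555,localaddr=192.168.0.1:5556 \
-device virtio-net-pci,netdev=net0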
Has anyone already written libvirt support for QEMU's UDP unicast network
backend, with a patch awaiting review?
I searched the mailing list and couldn't find a bug covering this. I just
want to be sure no one has worked on this before I create a patch to add
the support.