[libvirt-users] How can I run command in containers on the host?
by John Y.
How can I run command in containers on the host? Just like the lxc command
lxc-attach.
I run :
virsh -c lxc:/// lxc-enter-namespace fedora2 --noseclabel /bin/ls
but get error:
libvirt: error : Expected at least one file descriptor
error: internal error: Child process (14930) unexpected exit status 125
Here is my libvirt.xml
<domain type='lxc'>
<name>fedora2</name>
<memory>190000</memory>
<vcpu>2</vcpu>
<cputune>
<shares>1000</shares>
</cputune>
<os>
<features>
<privnet/>
<capabilities policy='allow'>
</capabilities>
</features>
<type>exe</type>
<init>/sbin/init</init>
</os>
<devices>
<filesystem type="mount">
<source dir="/home/lxc-fedora"></source>
<target dir="/"></target>
</filesystem>
<console type='pty'/>
<interface type='bridge'>
<mac address='52:54:00:e8:96:88'/>
<source bridge='virbr0'/>
<target dev='vnet0'/>
<guest dev='eth0'/>
</interface>
</devices>
</domain>
version of libvirt:
Compiled against library: libvirt 1.2.9
Using library: libvirt 1.2.9
Using API: QEMU 1.2.9
Running hypervisor: QEMU 2.1.2
Is there any mistake?
Thanks,
John
8 years, 3 months
[libvirt-users] Blockpull behavior when interrupted
by Andrew Martin
Hello,
I use snapshot-create-as followed by blockpull when creating external snapshots
of VMs. This works well, however I am curious about the behavior of blockpull
after an unexpected shutdown (or SIGKILL). If a blockpull is in progress and an
unexpected power loss occurs, will the VM continue to reference the backing file
for the parts of it that have not yet been copied? Or, will will the disk image
no longer be usable?
Thanks,
Andrew
8 years, 3 months
[libvirt-users] libvirtd terminates running instances after monitor's socket inode id is changed
by Gavin Nicholson
We accidentally moved /var/lib/libvirt/qemu to another directory which is
on a different disk partition, then we moved it back; however, that caused
changing the inode ids of the files inside /var/lib/libvirt/qemu including
the socket files ( <instance>.monitor ) of the running VMs. The problem we
are facing is that when libvirtd is restarted it terminates the running VMs
with "error : qemuMonitorOpenUnix:315 : monitor socket did not show up.:
Connection refused”
We are trying to find a workaround so that when libvirtd is restarted it
doesn’t stop the running VMs.
We are running libvirt 0.9.13
Any help is appreciated.
Thanks,
Gavin
8 years, 3 months
Re: [libvirt-users] [Qemu-devel] VFIO PCIe Extended Capabilities
by Spenser Gilliland
Hi Marcel,
>> Indeed, if a device is attached to a PCI bus it makes no sense to advertise the extended configuration space.
>> Can you please share the QEMU command line? Maybe is possible to make the device's bus PCIe in QEMU?
Changing the following should make the tweak I proposed irrelevant.
- device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,addr=0x4
+ device vfio-pci,host=03:00.0,id=hostdev0,bus=pcie.0,addr=0x4
Here's the full command for reference.
2016-07-19 18:42:22.110+0000: starting up libvirt version: 1.2.17, package: 13.el7 (CentOS BuildSystem <http://bugs.centos.org>, 2015-11-20-16:24:10, worker1.bsys.centos.org), qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7_2.10.9)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name instance-00000019 -S -machine pc-q35-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 8192 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 -uuid 8b6865eb-2118-430e-a7e0-2989696576b1 -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=13.1.0-1.el7,serial=a1068a93-24d1-4da2-8903-f9b8307fb0d8,uuid=8b6865eb-2118-430e-a7e0-2989696576b1,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000019/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -drive file=/var/lib/nova/instances/8b6865eb-2118-430e-a7e0-2989696576b1/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x2,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/instances/8b6865eb-2118-430e-a7e0-2989696576b1/disk.swap,if=none,id=drive-virtio-disk1,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x3,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:9c:cf:60,bus=pci.2,addr=0x1 -chardev file,id=charserial0,path=/var/lib/nova/instances/8b6865eb-2118-430e-a7e0-2989696576b1/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 0.0.0.0:0 -k en-us -device cirrus-vga,id=video0,bus=pcie.0,addr=0x1 -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x5 -msg timestamp=on
char device redirected to /dev/pts/3 (label charserial1)
It seems very reasonable to change the qemu command line. However, I'm using OpenStack Nova to launch the instance, which then uses libvirt, which finally uses qemu. So, the issues is very buried.
> I think that any instance of a q35 machine where the assigned device is
> placed on the conventional PCI bridge will create this scenario. It's
> the default for attaching devices to a libvirt managed q35 VM AFAIK.
> Yes, I discussed with Laine from libvirt the possibility to assign
> devices to a PCIe port instead.
Yes, I see this as the issue as well. The VM is being created by libvirt with the following xml. The problem is that the device is auto-assigned to the pci-bridge by default.
<domain type="kvm">
<uuid>8b6865eb-2118-430e-a7e0-2989696576b1</uuid>
<name>instance-00000019</name>
<memory>8388608</memory>
<vcpu>8</vcpu>
... <snip> ...
<devices>
... <snip> ...
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address bus="0x03" domain="0x0000" function="0x0" slot="0x00"/>
</source>
</hostdev>
... <snip> ...
</devices>
</domain>
If I do a dumpxml I get the following
<domain type="kvm">
<uuid>8b6865eb-2118-430e-a7e0-2989696576b1</uuid>
<name>instance-00000019</name>
<memory>8388608</memory>
<vcpu>8</vcpu>
... <snip> ...
<devices>
... <snip> ...
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='pci' index='1' model='dmi-to-pci-bridge'>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
</controller>
<interface type='bridge'>
... <snip> ...
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
</hostdev>
</devices>
</domain>
If I manually change the domain as follows
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev0'/>
- <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</hostdev>
I can reimport and the device and it attaches successfully to the pcie-root hub. The problem is I need to either manually specify this in nova or update the behavior of libvirt to auto assign pcie devices to pcie busses. This would involve reading pci configuration space to check if the device is a pcie devices. Then attaching to the pcie-root, if possible.
>>>
>>> In fact, I've tried to fix this multiple times:
>>>
>>> https://lists.nongnu.org/archive/html/qemu-devel/2015-10/msg05384.html
>>> https://lists.nongnu.org/archive/html/qemu-devel/2015-11/msg02422.html
>>> https://lists.nongnu.org/archive/html/qemu-devel/2016-01/msg03259.html
>>>
>>> Yet the patch remains unapplied :(
>>
>> I thought is it in already. Maybe Michael can add it as part of the hard freeze.
>> And if the patch will be applied, the tweak above wouldn't help, right Alex?
>
> The tweak Spenser suggested above should be unnecessary with my proposed
> patch applied.
> I think I finally understand this. The bus is not pcie -> we return
> from vfio_add_ext_cap without "breaking" the extended capabilities
> chain and the bare metal SR-IOV capability will be visible to guest.
> With your patch the PCI bridge will "interfere" and mask the extended
> configuration space completely.
> Only now searching for that patch did I notice Michael's
> comment hidden at the bottom of his reply, which I assume is why it
> never got applied:
>
> https://patchwork.kernel.org/patch/8057411/
>
> I just saw it too! It seems Michael wants to cache this info
> in device instead of re-calculating it every time.
>> Anyway, the current behavior is clearly a bug, so QEMU hard freeze
>> should be irrelevant. If anyone wants to take over the patch, feel
>> free. Thanks,
> I suppose I can handle it, but sadly not for 2.7.
> If Spencer has some time now he can help by testing it and reviewing it quickly :)
I'd be happy to help; but that patch really just more permanently breaks my current workaround ;-) .
Thanks,
Spenser
This email and any attachments are intended for the sole use of the named recipient(s) and contain(s) confidential information that may be proprietary, privileged or copyrighted under applicable law. If you are not the intended recipient, do not read, copy, or forward this email message or any attachments. Delete this email message and any attachments immediately.
8 years, 3 months
[libvirt-users] Fwd: Please help see issue: conn.storageVolLookupByPath('/var/lib/xen/images/xen-pv-rhel5.9-x64-nolvm.img')
by Junqin Zhou
Hi all,
Please see the attachment for vdsm.log details, thanks.
Best Regards,
juzhou.
----- Forwarded Message -----
> From: "Junqin Zhou" <juzhou(a)redhat.com>
> To: libvirt-users(a)redhat.com
> Cc: "Shahar Havivi" <shavivi(a)redhat.com>, "Tingting Zheng" <tzheng(a)redhat.com>, "Min Zhan" <mzhan(a)redhat.com>, "Ming
> Xie" <mxie(a)redhat.com>
> Sent: Wednesday, July 13, 2016 6:24:15 PM
> Subject: Please help see issue: conn.storageVolLookupByPath('/var/lib/xen/images/xen-pv-rhel5.9-x64-nolvm.img')
>
> Hi guys,
> While testing v2v integration to rhevm4.0, I met following error when import
> virtual machine from xen server.
>
> I.rhevm4.0 prompt error message
> Failed to communicate with the external provider, see log for additional
> details
>
> Steps:
> 1. In order to import VMs password-less SSH access has to be enabled between
> VDSM host and the Xen host. The following steps needed to be done at the
> VDSM host:
> Refer to
> https://github.com/oVirt/ovirt-site/blob/master/source/develop/release-ma...
>
> 2. Then login rhevm 4.0 and try to import virtual machine from Xen.
> Virtual Machines-->Import-->Fill items on 'Import Virtual Machine(s)' window
> with:
> Data Center:xx
> Source: Xen(via RHEL)
> URI: xen+ssh://root@xxxxxx
> Proxy Host: xxxxxx
>
> Result:
> Step2 failed to load xen server's guests with error:
> Failed to communicate with the external provider, see log for additional
> details
>
> Part of vdsm.log
> ...
> jsonrpc.Executor/3::WARNING::2016-07-13
> 08:18:40,588::v2v::959::root::(_add_disks) Disk {'alias':
> '/var/lib/xen/images/xen-pv-rhel5.9-x64-nolvm.img', 'type': 'disk', 'dev':
> 'xvda'} has unsupported format: <built-in function format>
> jsonrpc.Executor/3::ERROR::2016-07-13
> 08:18:41,504::v2v::923::root::(_add_disk_info) Error getting disk size
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 920, in
> _add_disk_info
> vol = conn.storageVolLookupByPath(disk['alias'])
> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4596, in
> storageVolLookupByPath
> if ret is None:raise libvirtError('virStorageVolLookupByPath() failed',
> conn=self)
> libvirtError: Storage volume not found: no storage vol with matching path
>
> Please help me check how to resolve this issue, thanks.
>
>
> Best Regards,
> juzhou.
>
>
>
8 years, 3 months
[libvirt-users] virt-login-shell: Security model none cannot be entered
by james@lottspot.com
Hello!
I am currently experimenting a bit with some of the LXC support under
libvirt, and in trying to utilize the tool virt-login-shell, I encounter
the following error:
[james@lxchost ~]$ virt-login-shell
libvirt: error : argument unsupported: Security model none cannot be
entered
Though it should be apparent from the lack of error, the domain is most
definitely running.
[root@lxchost ~]# virsh -c lxc:/// list
Id Name State
----------------------------------------------------
4909 james running
Config information:
[root@lxchost ~]# grep -vE '^(#|$)' /etc/libvirt/virt-login-shell.conf
allowed_users = [ "*" ]
OS information:
[root@lxchost ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
Am I simply missing something here? Any tips or pointers are greatly
appreciated!
8 years, 3 months
[libvirt-users] nested vms and nested macvtaps
by jsl6uy js16uy
Hello all, hope all is well
this maybe outside of libvirt-users....
Can you nest macvtap devices to ultimately receive a real routable ip on
the nested vm?
I have a nested vm up and running. Both vm and nested vm are centos 7 on
arch linux host. The first vm uses a macvtap in bridge mode receives dhcp
from an external dhcp server.
I start the second vm and dhclient hangs and never receives an offer.
I stood up a static interface on the nested vm. If I initiate a ping from
within the nested vm, I can see via tcpdump that the echo request is seen
on the non-nested vm and the hosting server. The reply comes back and
stops/is answered on the hosting OS interface.
rp_filter is off. No firewall rules in play blocking dhcp or icmp
Any/all help appreciated
thanks very much
8 years, 3 months
[libvirt-users] lxc containers won't start in a f24 custom install - odd cgroup fs layout observed
by Thierry Parmentelat
Hi folks
I use libvirt to programmatically spawn lxc containers
I am facing an issue when migrating from fedora23 to fedora24
I use the stock kernel and libvirt version on both deployments, i.e.:
f23: libvirt-1.2.18.3-2.fc23.x86_64 - kernel 4.5.7-202.fc23.x86_64
f24: libvirt-1.3.3.1-4.fc24.x86_64 - kernel 4.6.3-300.fc24.x86_64
First off, I need to outline that the host installation is done through some ad hoc procedure,
as all this happens in the context of the centrally-managed PlanetLab global infrastructure
I could not reproduce this problem on a host that is installed with the standard fedora install
However I could use any clue that would point at a possible reason for this IMHO odd behaviour from libvirt,
so that I can narrow down the potential flaws in my host installation procedure
At first sight, it looks like the f24 deployment uses a different cgroup fs layout - see below
and I suspect this could be the problem, but I am not aware of anything in my install. procedure that
could cause that change, and as per this document
https://libvirt.org/cgroups.html#systemdScope
this change in the naming scheme looks odd - unless the doc is out of date of course
So again, any clue is most welcome; many thanks in advance -- Thierry
------
All is fine with f23/libvirt-1.2.18.3-2.fc23.x86_64
However with f24/libvirt-1.3.3.1-4.fc24.x86_64 my lxc containers won't start
and I am seeing this message in the libvirtd logfile
2016-07-09 11:52:33.416+0000: 3479: debug : virCgroupValidateMachineGroup:317 : Name 'lxc-3555-inrisl1.libvirt-lxc' for controller 'cpu' does not match 'inri_sl1', 'lxc-3555-inrisl1', 'inri_sl1.libvirt-lxc', '\
machine-lxc\x2dinri_sl1.scope' or 'machine-lxc\x2d3555\x2dinrisl1.scope'
2016-07-09 11:52:33.416+0000: 3479: debug : virCgroupNewDetectMachine:1501 : Failed to validate machine name for 'inri_sl1' driver 'lxc'
2016-07-09 11:52:33.416+0000: 3479: error : virLXCProcessStart:1501 : internal error: No valid cgroup for machine inri_sl1
At first it looked like the issue could be linked to the domain name having an underscore (example below uses inri_sl1 as a domain name)
But trying with a regular name 'plain' without an underscore exhibits a similar issue
-------------------------
Here's the set of files under /sys/fs/cgroup/memory that contains 'inri' in their name
(other susbsytems have similar naming schemes in both cases)
========= f23 / libvirt-1.2.18.3-2.fc23.x86_64 (works fine)
[root@vnode06 log]# cat /etc/fedora-release
Fedora release 23 (Twenty Three)
[root@vnode06 log]# uname -r
4.5.7-202.fc23.x86_64
[root@vnode06 log]# rpm -q libvirt
libvirt-1.2.18.3-2.fc23.x86_64
[root@vnode06 log]# find /sys/fs/cgroup/memory/ -name '*inri*'
/sys/fs/cgroup/memory/machine.slice/machine-lxc\x2dinri_sl1.scope
========= f24 / libvirt-1.3.3.1-4.fc24.x86_64 (container won't start)
[root@vnode05 libvirt]# cat /etc/fedora-release
Fedora release 24 (Twenty Four)
[root@vnode05 libvirt]# uname -r
4.6.3-300.fc24.x86_64
[root@vnode05 libvirt]# rpm -q libvirt
libvirt-1.3.3.1-4.fc24.x86_64
[root@vnode05 libvirt]# find /sys/fs/cgroup/memory/ -name '*inri*'
/sys/fs/cgroup/memory/machine/lxc-3237-inrisl1.libvirt-lxc
/sys/fs/cgroup/memory/machine/lxc-3555-inrisl1.libvirt-lxc
/sys/fs/cgroup/memory/machine/lxc-2989-inrisl1.libvirt-lxc
8 years, 3 months
[libvirt-users] vfio driver using libvirt
by abhishek jain
Hi Team
I need to run vfio driver using qemu and I'm using below xml file for this..
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>instance</name>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>8</vcpu>
<os>
<type arch='aarch64' machine='virt'>hvm</type>
<kernel>/home/root/Image</kernel>
<initrd>/home/root/fsl-image-minimal-ls2085ardb.ext2.gz</initrd>
<cmdline>'root=/dev/ram0 rw console=ttyAMA0,115200 rootwait earlyprintk
ramdisk_size=1000000'</cmdline>
</os>
<cpu mode='custom' match='exact'>
<model fallback='allow'>host</model>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-aarch64</emulator>
<memballoon model='none'/>
</devices>
<qemu:commandline>
<qemu:arg value='-mem-path'/>
<qemu:arg value='/hugepages'/>
<qemu:arg value='-device'/>
<qemu:arg value='vfio-fsl-mc,host=dprc.2'/>
</qemu:commandline>
</domain>
I'm able to launch VM succesfully using the above xml file and it produces
the below process running in the background..
*ps -ef | grep qemu*
root 6767 1 34 22:11 ? 00:00:02
/usr/bin/qemu-system-aarch64 -name instance -S -machine
virt,accel=kvm,usb=off -cpu host -m 4096 -realtime mlock=off -smp
8,sockets=8,cores=1,threads=1 -uuid 3b527012-c15d-46b3-862d-9a7fd3b04e01
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -kernel /home/root/Image -initrd
/home/root/fsl-image-minimal-ls2085ardb.ext2.gz -append 'root=/dev/ram0 rw
console=ttyAMA0,115200 rootwait earlyprintk ramdisk_size=1000000' -usb
-mem-path /hugepages *-device vfio-fsl-mc,host=dprc.2* -msg timestamp=on
Although the driver appears in the above process,actually that driver is
not loaded as there is no output of cat /proc/interrupts | grep dpio on the
host.
However when I'm using the same command as that of the above process i.e ..
/usr/bin/qemu-system-aarch64 -name instance -S -machine
virt,accel=kvm,usb=off -cpu host -m 4096 -realtime mlock=off -smp
8,sockets=8,cores=1,threads=1 -uuid 3b527012-c15d-46b3-862d-9a7fd3b04e01
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -kernel /home/root/Image -initrd
/home/root/fsl-image-minimal-ls2085ardb.ext2.gz -append 'root=/dev/ram0 rw
console=ttyAMA0,115200 rootwait earlyprintk ramdisk_size=1000000' -usb
-mem-path /hugepages *-device vfio-fsl-mc,host=dprc.2* -msg timestamp=on
the driver is getting loaded and I'm able to see the output of cat
/proc/interrupts | grep dpio on the host.
Please help me regarding this.
Thanks
Abhishek Jain
8 years, 3 months