On Fri, Jan 06, 2017 at 02:57:21AM +0000, Qiao, Liyong wrote:
Hi Michal
☹ Nothing changes with your patch (without the workaround, and with /dev/mqueue mounted).
The workaround is turning off namespaces?
root@s2600wt:~/linux# cat /proc/mounts | grep mqueue
none /dev/mqueue mqueue rw,relatime 0 0
root@s2600wt:~/linux# vim /etc/libvirt/qemu.conf
root@s2600wt:~/linux# grep namespace /etc/libvirt/qemu.conf
# To enhance security, QEMU driver is capable of creating private namespaces
# for each domain started. Well, so far only "mount" namespace is supported. If
# devices entries throughout the domain lifetime. This namespace is turned on
#namespaces = [ "mount" ]
#namespaces = []
root@s2600wt:~/linux# virsh start kvm02
error: Failed to start domain kvm02
error: An error occurred, but the cause is unknown
Attaching kvm02.log, but it doesn't seem to contain any helpful debug information.
2017-01-06 02:47:37.544+0000: 74279: debug : virCommandHandshakeChild:435 : Notifying
parent for handshake start on 26
2017-01-06 02:47:37.544+0000: 74279: debug : virCommandHandshakeChild:443 : Waiting on
parent for handshake complete on 27
libvirt: error : libvirtd quit during handshake: Input/output error
So, if I'm reading it correctly, this means that there was no problem in
qemuProcessHook, but probably there was a problem in the main thread
running qemuProcessLaunch. Would you mind looking at the libvirtd.log
as well, please? Since I can't reproduce it and neither there is error
set, it is pretty hard to find where the problem is. The log _might_ be
helpful in this regard.
Thanks,
Martin
Best Regards
Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
--
On 05/01/2017, 10:46 PM, "Martin Kletzander" <mkletzan(a)redhat.com> wrote:
On Thu, Jan 05, 2017 at 02:41:02PM +0100, Michal Privoznik wrote:
>With my namespace patches, we are spawning qemu in its own
>namespace so that we can manage /dev entries ourselves. However,
>some filesystems mounted under /dev need to be preserved in
>order to be shared with the parent namespace (e.g. /dev/pts).
>Currently, the list of mount points to preserve is hardcoded
>which isn't right - on some systems there might be fewer or more
>items under the real /dev than on our list. The solution is to parse
>/proc/mounts and fetch the list from there.
>/proc/mounts and fetch the list from there.
>
Works for me, ACK. I still wonder whether we should mark the mounts as
slaves. I was also wondering what happens if there are multiple
sub-subtrees, and from the code it looks like that will 'just work'. Nice.
Martin