[libvirt-users] how to list and kill existing console sessions to VMs?

Hi everyone,

If a VM is configured to have a console attached to it, as described in http://libvirt.org/formatdomain.html#elementCharConsole, libvirt offers access to the VM's serial console using the virDomainOpenConsole API [1]. However, I didn't find a way to:

1. list the existing connections to the console
2. kill an existing connection - without reconnecting using VIR_DOMAIN_CONSOLE_FORCE [2]

Am I missing something? How can I do that?

Rationale for my request: oVirt [3] offers a management interface for VMs, and we have recently integrated user-friendly VM serial console access [4] into the system; in a future release we want to enhance the administration capabilities, allowing admins to check existing connections and to terminate them (for example, because one got stuck).

+++
[1] http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainOpenConsole
[2] http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_DOMAIN_CONSOLE_FORCE
[3] http://www.ovirt.org/
[4] https://www.ovirt.org/develop/release-management/features/engine/serial-cons... et al.

--
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
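A side note for readers of the archive: the thread never answers the enumeration question, but the FORCE flag is reachable from the shell. A minimal sketch, assuming a domain named one-38 (a name borrowed from the second thread in this archive):

    # Attach to the serial console of the domain; --force forcibly
    # disconnects any already-connected session before opening ours.
    virsh --connect qemu:///system console one-38 --force

This kicks a stale session by taking it over; libvirt offers no call to merely list or terminate console connections, which is exactly the gap the poster describes.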

Hey All,

I've an issue where libvirtd tries to access an NFS mount but errors out with: can't canonicalize path '/var/lib/one//datastores/0... The unprivileged user is able to read/write fine to the share. root_squash is used, and for security reasons no_root_squash cannot be used. On the controller and node SELinux is disabled.

[oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/38/deployment.0
create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
error: Failed to create domain from /var/lib/one//datastores/0/38/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/38/disk.1': Permission denied

I added some debug flags to get more info and added -x to the deploy script. The closest I get to more details is this:

2016-04-06 04:15:35.945+0000: 14072: debug : virStorageFileBackendFileInit:1441 : initializing FS storage file 0x7f6aa4009000 (file:/var/lib/one//datastores/0/38/disk.1)[9869:9869]
2016-04-06 04:15:35.954+0000: 14072: error : virStorageFileBackendFileGetUniqueIdentifier:1523 : can't canonicalize path '/var/lib/one//datastores/0/38/disk.1':

https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html

The comment there is: "The current implementation works for local storage only and returns the canonical path of the volume." But it seems the logic is applied to NFS mounts as well. Perhaps it shouldn't be? Any way to get around this problem? This is CentOS 7.

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
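A side note for readers: canonicalizing a path (realpath(3)-style resolution, which the virStorageFileBackendFileGetUniqueIdentifier error above comes from) needs search permission on every directory component for the uid performing the lookup, so the failure can be reproduced without libvirt. A minimal sketch, assuming the users from this thread:

    # As the unprivileged user - expected to work:
    su -s /bin/sh oneadmin -c 'realpath /var/lib/one/datastores/0/38/disk.1'

    # As root on a root_squash mount - expected to fail exactly like
    # libvirtd does, because squashed root lacks search permission:
    realpath /var/lib/one/datastores/0/38/disk.1

If the second command also prints 'Permission denied', the problem is the permissions seen by the squashed uid rather than libvirt's NFS handling, which is where this thread eventually lands.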

Adding in libvir-list.

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Mobile: 416 618 8456
Home: 905 857 9652
Living on earth is expensive, but it includes a free trip around the sun.

Hey All,

Wondering if anyone had any suggestions on this topic?

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

On Mon, Apr 11, 2016 at 08:02:04PM -0400, TomK wrote:
Hey All,
Wondering if anyone had any suggestions on this topic?
The only thing I can come up with is:

  '/var/lib/one//datastores/0/38/disk.1': Permission denied

... that is, whoever is doing the lookup doesn't have access to that file. Could you elaborate on that? I think it's either:

a) you are running the domain as root, or
b) we don't use the domain's uid/gid to canonicalize the path.

But if read access is enough for canonicalizing that path, I think the problem is purely with permissions.
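A side note: both hypotheses can be checked from the shell. A sketch assuming the stock qemu driver config path; the user and group keys in qemu.conf set the uid:gid the domain runs under, while libvirtd itself normally stays root:

    # Which uid:gid is the qemu driver configured to use?
    grep -E '^[[:space:]]*(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf

    # Which uid is libvirtd itself running as?
    ps -o uid,user,args -C libvirtd

The distinction matters on a root_squash export: even with user/group set to an unprivileged account, any path lookup the daemon performs as root arrives at the server as the squashed (anonymous) uid.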

Hey Martin,

Thanks very much. Appreciate you jumping in on this thread.

You see, that's just it. I've configured the libvirt .conf files to run as oneadmin.oneadmin (non-privileged) for that NFS share, and I can access all the files on that share as oneadmin without error, including the one you listed. But libvirtd, by default, always starts as root. So it's doing something as root, despite being configured to access the share as oneadmin. As oneadmin I can access that file no problem. Here's how I read the file off the node on which the NFS share is mounted:

[oneadmin@mdskvm-p01 ~]$ ls -altri /var/lib/one//datastores/0/38/disk.1
34642274 -rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20 /var/lib/one//datastores/0/38/disk.1
[oneadmin@mdskvm-p01 ~]$ file /var/lib/one//datastores/0/38/disk.1
/var/lib/one//datastores/0/38/disk.1: # ISO 9660 CD-ROM filesystem data 'CONTEXT'
[oneadmin@mdskvm-p01 ~]$ strings /var/lib/one//datastores/0/38/disk.1|head
CD001
LINUX
CONTEXT
GENISOIMAGE ISO 9660/HFS FILESYSTEM CREATOR (C) 1993 E.YOUNGDALE (C) 1997-2006 J.PEARSON/J.SCHILLING (C) 2006-2007 CDRKIT TEAM
2016040500205600
2016040500205600
0000000000000000
2016040500205600
CD001
2016040500205600 2016040500205600
[oneadmin@mdskvm-p01 ~]$

My NFS mount looks as follows (I have to use root_squash for security reasons; I'm sure it would work using no_root_squash, but that option is not an option here):

[root@mdskvm-p01 ~]# grep nfs /etc/fstab
# 192.168.0.70:/var/lib/one/ /var/lib/one/ nfs context=system_u:object_r:nfs_t:s0,soft,intr,rsize=8192,wsize=8192,noauto
192.168.0.70:/var/lib/one/ /var/lib/one/ nfs soft,intr,rsize=8192,wsize=8192,noauto
[root@mdskvm-p01 ~]#

[root@opennebula01 ~]# cat /etc/exports
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
[root@opennebula01 ~]#

So I dug deeper and see that there is a possibility libvirtd is trying to access that NFS mount as root at some level, because as root I also get a permission denied on the NFS share above. Rightly so, since I have root_squash, which I need to keep. But libvirtd should be able to access the file as oneadmin, as I did above. It's not, and this is what I read on it:

https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html

The comment there is: "The current implementation works for local storage only and returns the canonical path of the volume." But it seems the logic is applied to NFS mounts as well. Perhaps it shouldn't be? Any way to get around this problem? This is CentOS 7.

My post with OpenNebula, from which this conversation originates, is here: https://forum.opennebula.org/t/libvirtd-running-as-root-tries-to-access-onea...

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
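A side note: exports(5) also lets the squashed root be mapped to a chosen uid/gid instead of nobody. A sketch, under the assumption that mapping anonymous requests to oneadmin (uid/gid 9869 in this thread) is acceptable for the poster's security policy:

    # /etc/exports on the NFS server - root remains squashed, but is
    # mapped to oneadmin rather than nobody:
    /var/lib/one/ *(rw,sync,no_subtree_check,root_squash,anonuid=9869,anongid=9869)

    # Apply the change without restarting the NFS server:
    exportfs -ra

root still cannot act as root on the share, but root-run daemons such as libvirtd can then resolve paths owned by oneadmin.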

On 04/12/2016 10:58 AM, TomK wrote:
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.
Can you provide some more details with respect to which libvirt version you have installed? I know I've made changes in this space in more recent versions (not the most recent). I'm no root_squash expert, but I was the last to change things in the space, so that makes me partially fluent ;-) in NFS/root_squash speak.

Using root_squash is very "finicky" (to say the least)... It wasn't really clear from what you posted how you are attempting to reference things. Does the "/var/lib/one//datastores/0/38/deployment.0" XML file use a direct path to the NFS volume, or does it use a pool? If a pool, then what type of pool? It is beneficial to provide as many details as possible about the configuration, because (so to speak) those that are helping you won't know your environment (I've never used OpenNebula), nor do I have a 'oneadmin' uid:gid.

What got my attention was the error message "initializing FS storage file" with the "file:" prefix to the name and 9869:9869 as the uid:gid trying to access the file (I assume that's oneadmin:oneadmin on your system).

This says to me that you're trying to use a "file system" pool (e.g. <pool type="fs">) perhaps, rather than the "NFS" pool (e.g. <pool type="netfs">). Using an NFS pool certainly has the advantage of "knowing how" to deal with the NFS environment. Since libvirt may consider this to "just" be an FS file, it won't necessarily know to try to access the file properly (OK, dependent upon the libvirt version too perhaps - the details have been paged out of my memory while I do other work).

One other thing that popped out at me: my /etc/exports has:

/home/bzs/rootsquash/nfs *(rw,sync,root_squash)

which only differs from yours by the 'no_subtree_check'. Your environment, though, seems to have much more "depth" than mine. That is, you have "//datastores/0/38/disk.1" appended on as the (I assume) disk to use. The question then becomes - does every directory in the path to that file use "oneadmin:oneadmin", and of course does it have to, with[out] that extra flag?

Again, I'm no expert, just trying to provide ideas and help...

John
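For readers unfamiliar with the pool types John contrasts: a netfs pool declares the storage as NFS, so libvirt knows to mount and treat it as such. A hypothetical sketch matching this thread's layout (server and path taken from the poster's fstab; the pool name is invented):

    <pool type='netfs'>
      <name>one-datastores</name>
      <source>
        <host name='192.168.0.70'/>
        <dir path='/var/lib/one'/>
        <format type='nfs'/>
      </source>
      <target>
        <path>/var/lib/one</path>
      </target>
    </pool>

Such a pool would be loaded with 'virsh pool-define pool.xml' and started with 'virsh pool-start one-datastores'. As the deployment.0 file quoted later shows, OpenNebula instead writes direct file paths into the domain XML, so no pool is in play here.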

Hey John,

Hehe, I got the right guy then. Very nice! And very good ideas, but I may need more time to reread and try them out later tonight. I'm fully in agreement about providing more details. Can't be accurate in a diagnosis if there isn't much data to go on.

This pool option is new to me. Please tell me more about it. I can't find it in the file below, but maybe it's elsewhere? ( <pool type="fs"> perhaps, rather than the "NFS" pool, e.g. <pool type="netfs"> )

Alright, here are the details:

[root@mdskvm-p01 ~]# rpm -aq|grep -i libvir
libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
libvirt-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
[root@mdskvm-p01 ~]# cat /etc/release
cat: /etc/release: No such file or directory
[root@mdskvm-p01 ~]# cat /etc/*release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
cpe:/o:scientificlinux:scientificlinux:7.2:ga
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# su - oneadmin
Last login: Sat Apr 9 10:39:25 EDT 2016 on pts/0
Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on ssh:notty
There were 9584 failed login attempts since the last successful login.
[oneadmin@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
[oneadmin@mdskvm-p01 ~]$ pwd
/var/lib/one
[oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
134320262 drwxr-xr-x. 45 root root 4096 Apr 12 07:58 ..
[oneadmin@mdskvm-p01 ~]$

[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>one-38</name>
  <vcpu>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <memory>524288</memory>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/0/38/disk.0'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/0/38/disk.1'/>
      <target dev='hdb'/>
      <readonly/>
      <driver name='qemu' type='raw'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='02:00:c0:a8:00:64'/>
    </interface>
    <graphics type='vnc' listen='0.0.0.0' port='5938'/>
  </devices>
  <features>
    <acpi/>
  </features>
</domain>
[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
[oneadmin@mdskvm-p01 ~]$

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

[ It would be way easier to reply if you didn't top-post ]

On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
On 4/12/2016 11:45 AM, John Ferlan wrote:

Can you provide some more details with respect to which libvirt version you have installed? I know I've made changes in this space in more recent versions (not the most recent). I'm no root_squash expert, but I was the last to change things in the space, so that makes me partially fluent ;-) in NFS/root_squash speak.
I'm always lost in how we handle *all* the corner cases that aren't even exercised anywhere, yet still care about the conditions we have in the code, especially when it's constantly changing. So thanks for jumping in. I only replied because nobody else did, and I had only the tiniest clue as to what could happen.
What got my attention was the error message "initializing FS storage file" with the "file:" prefix to the name and 9869:9869 as the uid:gid trying to access the file (I assume that's oneadmin:oneadmin on your system).
I totally missed this. So the only thing that popped into my mind now was checking the whole path:

ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}

You can also run it as root and as oneadmin; however, after reading through all the info again, I don't think that'll help.
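A side note: the brace expansion above expands to every prefix of the path, so each component's owner and mode can be read off one ls invocation. util-linux ships an equivalent:

    # -l prints owner, group and mode for every component of the path:
    namei -l /var/lib/one/datastores/0/38/disk.1

Either way, what canonicalization needs is the x (search) bit on each directory for the uid doing the lookup, not the r bit.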

On 4/12/2016 3:40 PM, Martin Kletzander wrote:
I totally missed this. So the only thing that popped into my mind now was checking the whole path:

ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}

You can also run it as root and as oneadmin; however, after reading through all the info again, I don't think that'll help.
I top-post by default in Thunderbird, and we have the same setup at work with M$ LookOut. Old habits are to blame, I guess. I'll try to reply like this instead. But yeah, it's terrible to top-post on mailing lists. Here's the output, and thanks again:

[oneadmin@mdskvm-p01 ~]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
drwxr-xr-x. 21 root root 4096 Apr 11 07:10 /var
drwxr-xr-x. 45 root root 4096 Apr 12 07:58 /var/lib
drwxr-x--- 12 oneadmin oneadmin 4096 Apr 12 15:50 /var/lib/one
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 6 oneadmin oneadmin 42 Apr 5 00:20 /var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 5 00:20 /var/lib/one/datastores/0/38
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20 /var/lib/one/datastores/0/38/disk.1
[oneadmin@mdskvm-p01 ~]$

That's the default setting, but I think I see what you're getting at - that permissions get inherited?

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

On Tue, Apr 12, 2016 at 03:55:45PM -0400, TomK wrote:
On 4/12/2016 3:40 PM, Martin Kletzander wrote:
[ I would be way easier to reply if you didn't top-post ]
On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
On 4/12/2016 11:45 AM, John Ferlan wrote:
What got my attention was the error message "initializing FS storage file" with the "file:" prefix to the name and 9869:9869 as the uid:gid trying to access the file (I assume that's oneadmin:oneadmin on your system).
I totally missed this. So the only thing that popped on my mind now was checking the whole path:
ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
You can also run it as root and oneadmin, however after reading through all the info again, I don't think that'll help.
I top post by default in thunderbird and we have same setup at work with M$ LookOut. Old habits are to blame I guess. I'll try to reply like this instead. But yeah it's terrible for mailing lists to top post. Here's the output and thanks again:
[oneadmin@mdskvm-p01 ~]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}} drwxr-xr-x. 21 root root 4096 Apr 11 07:10 /var drwxr-xr-x. 45 root root 4096 Apr 12 07:58 /var/lib drwxr-x--- 12 oneadmin oneadmin 4096 Apr 12 15:50 /var/lib/one
Look ^^ - maybe for a quick workaround you could try doing:

chmod o+rx /var/lib/one

Let me know if that does the trick (at least for now).
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 6 oneadmin oneadmin 42 Apr 5 00:20 /var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 5 00:20 /var/lib/one/datastores/0/38
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20 /var/lib/one/datastores/0/38/disk.1
[oneadmin@mdskvm-p01 ~]$
That's the default setting but I think I see what you're getting at that permissions get inherited?
No, I just think you need eXecute on all parent directories. That shouldn't hinder your security and could help.
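A side note illustrating the read-versus-execute distinction on directories, using a throwaway path rather than the thread's share: r lets you list names, x lets you traverse, and traversal is all that path resolution needs.

    # Run as root; 'nobody' stands in for any squashed/unprivileged uid.
    mkdir -p /tmp/demo/sub && touch /tmp/demo/sub/f
    chmod 750 /tmp/demo     # no bits for 'other' at all
    su -s /bin/sh nobody -c 'realpath /tmp/demo/sub/f'  # fails: Permission denied
    chmod o+x /tmp/demo     # search only; listing is still denied
    su -s /bin/sh nobody -c 'realpath /tmp/demo/sub/f'  # now succeeds

The same logic applies to squashed root on the NFS mount, which is why o+x on each parent directory is enough.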

On Tue, Apr 12, 2016 at 10:29:29PM +0200, Martin Kletzander wrote:
Look ^^, maybe for a quick workaround you could try doing:
chmod o+rx /var/lib/one
Actually, o+x ought to be enough.

On 4/12/2016 4:36 PM, Martin Kletzander wrote:
Look ^^, maybe for a quick workaround you could try doing:
chmod o+rx /var/lib/one
Actually, o+x ought to be enough.
The execute permissions did the trick to allow creation, so that's good. There's still the write, and I'm thinking you intend this as a workaround, since oneadmin should be able to write in there with other being ---. The auto deployment of cloud virtuals would still fail when writes are attempted.

[oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/38/deployment.0
create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
Domain one-38 created from /var/lib/one//datastores/0/38/deployment.0
[oneadmin@mdskvm-p01 ~]$

Now, should this work without any permissions on other for the unprivileged user oneadmin? Thinking yes, per John Ferlan's reply?

[oneadmin@mdskvm-p01 0]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/24/deployment.0
create: file(optdata): /var/lib/one//datastores/0/24/deployment.0
error: Failed to create domain from /var/lib/one//datastores/0/24/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/24/disk.1': Permission denied
[oneadmin@mdskvm-p01 0]$

Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
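A closing side note: the still-failing /0/24 tree can be inspected the same way /0/38 was. A sketch reusing the earlier diagnostics; dynamic_ownership is a real qemu.conf setting, though whether it matters for OpenNebula's flow is an assumption:

    # Owner and mode of every component of the still-failing path:
    namei -l /var/lib/one/datastores/0/24/disk.1

    # With dynamic_ownership=1 libvirt chowns image files to the
    # configured user/group before handing them to qemu:
    grep -E '^[[:space:]]*dynamic_ownership' /etc/libvirt/qemu.conf

    # qemu.conf changes require a daemon restart:
    systemctl restart libvirtd

One plausible cause, given the thread's diagnosis, is that something along the /0/24 path lacks the search bit for the squashed uid, which namei would show.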

On 04/12/2016 03:55 PM, TomK wrote:
On 4/12/2016 3:40 PM, Martin Kletzander wrote:
[ I would be way easier to reply if you didn't top-post ]
On Tue, Apr 12, 2016 at 12:07:50PM -0400, TomK wrote:
Hey John,
Hehe, I got the right guy then. Very nice! Very good ideas, but I may need more time to reread and try them out later tonight. I'm fully in agreement about providing more details; can't be accurate in a diagnosis if there isn't much data to go on. This pool option is new to me. Please tell me more about it. I can't find it in the file below, but maybe it's elsewhere?
( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )
Alright, here are the details:
[root@mdskvm-p01 ~]# rpm -aq|grep -i libvir
libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
libvirt-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
[root@mdskvm-p01 ~]# cat /etc/release
cat: /etc/release: No such file or directory
[root@mdskvm-p01 ~]# cat /etc/*release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
cpe:/o:scientificlinux:scientificlinux:7.2:ga
[root@mdskvm-p01 ~]#
[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# su - oneadmin
Last login: Sat Apr 9 10:39:25 EDT 2016 on pts/0
Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on ssh:notty
There were 9584 failed login attempts since the last successful login.
[oneadmin@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
[oneadmin@mdskvm-p01 ~]$ pwd
/var/lib/one
[oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
134320262 drwxr-xr-x. 45 root root 4096 Apr 12 07:58 ..
[oneadmin@mdskvm-p01 ~]$
It'd take more time than I have at the moment to root out what changed and when for NFS root-squash, but suffice it to say there were some corner cases, some involving how qemu-img files are generated. I don't have the details in short-term memory.
[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>one-38</name>
  <vcpu>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <memory>524288</memory>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/0/38/disk.0'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2' cache='none'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/one//datastores/0/38/disk.1'/>
      <target dev='hdb'/>
      <readonly/>
      <driver name='qemu' type='raw'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='02:00:c0:a8:00:64'/>
    </interface>
    <graphics type='vnc' listen='0.0.0.0' port='5938'/>
  </devices>
  <features>
    <acpi/>
  </features>
</domain>

[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
[oneadmin@mdskvm-p01 ~]$
Having/using a root squash via an NFS pool is "easy" (famous last words).

Create some pool XML (taking the example I have):

% cat nfs.xml
<pool type='netfs'>
  <name>rootsquash</name>
  <source>
    <host name='localhost'/>
    <dir path='/home/bzs/rootsquash/nfs'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/tmp/netfs-rootsquash-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
    </permissions>
  </target>
</pool>

In this case 107:107 is qemu:qemu, and I used 'localhost' as the hostname, but that can be an fqdn or ip-addr pointing to the NFS server. You've already seen my /etc/exports.

virsh pool-define nfs.xml
virsh pool-build rootsquash
virsh pool-start rootsquash
virsh vol-list rootsquash

Now instead of:

<disk type='file' device='disk'>
  <source file='/var/lib/one//datastores/0/38/disk.0'/>
  <target dev='hda'/>
  <driver name='qemu' type='qcow2' cache='none'/>
</disk>

something like:

<disk type='volume' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source pool='rootsquash' volume='disk.0'/>
  <target dev='hda'/>
</disk>

The volume name may be off, but it's perhaps close. I forget how to do the readonly bit for a pool (again, my focus is elsewhere). Of course you'd have to adjust the nfs.xml above to suit your environment and see what you get. The privileges for the pool and the volumes in the pool become the key to how libvirt decides to "request access" to the volume. "disk.1" having read access is probably not an issue since you seem to be using it as a CDROM; however, "disk.0" is going to be used for read/write and thus would have to be appropriately configured...
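As for the readonly bit above: a pool-backed disk should, in principle, take the same <readonly/> element as a file-backed one when the device is a cdrom. A sketch, untested, reusing the example pool name from the nfs.xml above and a hypothetical volume name:

<disk type='volume' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source pool='rootsquash' volume='disk.1'/>
  <target dev='hdb'/>
  <readonly/>
</disk>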
On 4/12/2016 11:45 AM, John Ferlan wrote:
On 04/12/2016 10:58 AM, TomK wrote:
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.

Can you provide some more details with respect to which libvirt version you have installed? I know I've made changes in this space in more recent versions (not the most recent). I'm no root_squash expert, but I was the last to change things in this space, so that makes me partially fluent ;-) in NFS/root_squash speak.
I'm always lost in how we handle *all* the corner cases that are not even used anywhere, while still caring about the conditions we have in the code. Especially when it's constantly changing. So thanks for jumping in. I only replied because nobody else did and I had only the tiniest clue as to what could happen.
I saw the post, but was heads down somewhere else. Suffice it to say trying to swap in root_squash is a painful exercise...

John

[...]

On 4/12/2016 5:08 PM, John Ferlan wrote:
[...]
Thanks John! Appreciated again. No worries, handle what's on the plate now and earmark this for checking once you have some free cycles. I can temporarily hop on one leg by using Martin Kletzander's workaround (it's a POC at the moment).

I'll have a look at your instructions further, but wanted to find out: that nfs.xml config is a one-time thing, correct? I'm spinning these up at will via the OpenNebula GUI, and if I have to update a config for each VM, that breaks the cloud provisioning. I'll go over your notes again. I'm optimistic. :)

Cheers, Tom Kacperski.

On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
[...]
The more I think about it, the more I am convinced that the workaround is actually not a workaround. The only thing you need to do is grant execute for others (precisely, for 'nobody' on the NFS share) on all directories in the path. Without that, even the pool won't be usable from libvirt. It does not pose any security issue, however, as it only allows others to traverse the path. When qemu is launched it has the proper "label", meaning the uid:gid to access the file, so it will be able to read/write or do whatever the permissions you set there allow. It's just that libvirt does some checks, for example that the path exists.

Hope that's understandable and that it resolves your issue permanently.

Have a nice day, Martin
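A quick way to confirm the whole path is traversable for the squashed identity before involving libvirt at all (an aside building on Martin's point, assuming the nfsnobody account exists on the client, as it does on EL7):

% sudo -u nfsnobody stat /var/lib/one/datastores/0/38/disk.1

If that fails with "Permission denied", some directory on the path is still missing o+x.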

On 4/13/2016 1:33 AM, Martin Kletzander wrote:
[...]
That fits in with what's happening, for sure. I'm just not sure how much of the work libvirtd does on the NFS mount is done as root vs. nobody vs. oneadmin. If there were a way to find that out, it would help a lot. I will give the nobody user setting a try, however.

Cheers, Tom K.
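As for finding out which identity is used on the wire (an aside, not something tried in this thread): every AUTH_SYS NFS request carries a uid:gid in its RPC credentials, so capturing the traffic during a failing create and inspecting the LOOKUP/GETATTR calls in wireshark answers the root-vs-nobody-vs-oneadmin question directly. A sketch, assuming the client's NFS-facing interface is eth0 and the server is 192.168.0.70 as in the mount output shown later in the thread:

% tcpdump -i eth0 -s 0 -w /tmp/nfs.pcap host 192.168.0.70 and port 2049

Then re-run the failing virsh create and open /tmp/nfs.pcap in wireshark.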

On 4/13/2016 1:33 AM, Martin Kletzander wrote:
[...]
The only reason I said that this might be a 'workaround' is that John Ferlan commented he'll look at this later on. Ideally the OpenNebula community keeps the other permissions at nil, and presumably that works on NFSv3 per the forum topic I included earlier from them. But if setting the permissions for nobody allows the functionality, I would be comfortable with that.

Cheers, Tom K.

On 04/13/2016 09:23 AM, TomK wrote:
On 4/13/2016 1:33 AM, Martin Kletzander wrote:
On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
On 4/12/2016 5:08 PM, John Ferlan wrote:
Having/using a root squash via an NFS pool is "easy" (famous last words)
Create some pool XML (taking the example I have)
% cat nfs.xml <pool type='netfs'> <name>rootsquash</name> <source> <host name='localhost'/> <dir path='/home/bzs/rootsquash/nfs'/> <format type='nfs'/> </source> <target> <path>/tmp/netfs-rootsquash-pool</path> <permissions> <mode>0755</mode> <owner>107</owner> <group>107</group> </permissions> </target> </pool>
In this case 107:107 is qemu:qemu and I used 'localhost' as the hostname, but that can be a fqdn or ip-addr to the NFS server.
You've already seen my /etc/exports
virsh pool-define nfs.xml virsh pool-build rootsquash virsh pool-start rootsquash virsh vol-list rootsquash
Now instead of
<disk type='file' device='disk'> <source file='/var/lib/one//datastores/0/38/disk.0'/> <target dev='hda'/> <driver name='qemu' type='qcow2' cache='none'/> </disk>
Something like:
<disk type='volume' device='disk'> <driver name='qemu' type='qemu' cache='none'/> <source pool='rootsquash' volume='disk.0'/> <target dev='hda'/> </disk>
The volume name may be off, but it's perhaps close. I forget how to do the readonly bit for a pool (again, my focus is elsewhere).
Of course you'd have to adjust the nfs.xml above to suit your environment and see what you see/get. The privileges for the pool and volumes in the pool become the key to how libvirt decides to "request access" to the volume. "disk.1" having read access is probably not an issue since you seem to be using it as a CDROM; however, "disk.0" is going to be used for read/write - thus would have to be appropriately configured...
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking once you have some free cycles. I can temporarily hop on one leg by using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if that config nfs.xml is a one time thing correct? I'm spinning these up at will via the OpenNebula GUI and if I have update for each VM, that breaks the Cloud provisioning. I'll go over your notes again. I'm optimistic. :)
The more I'm thinking about it, the more I am convinced that the workaround is actually not a workaround. The only thing you need to do is having execute for others (precisely for 'nobody' on the nfs share) in the whole path on all directories. Without that even the pool won't be usable from libvirt. However it does not pose any security issue as it only allows others to check the path. When qemu is launched, it has the proper "label", meaning uid:gid to access the file so it will be able to read/write or whatever permissions you set there. It's just that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day, Martin
_______________________________________________ libvirt-users mailing list libvirt-users@redhat.com https://www.redhat.com/mailman/listinfo/libvirt-users
The only reason I said that this might be a 'workaround' is due to John Farlan commenting that he'll look at this later on. Ideally the opennebula community keeps the other permissions to nill and presumably they work on NFSv3 per the forum topic I included earlier from them. But if setting the permissions on nobody to allow for the functionality, I would be comfortable with that.
Martin and I were taking different paths... But yes, it certainly makes sense given your error message about the canonical path and the need for eXecute permissions... I think I started wondering about that first, but then jumped into the NFS pool because that's my reference point for root-squash. Since root squash essentially sends root requests as "nfsnobody" (IOW: others, not the user or group), the "o+x" approach is the solution if you're going directly at the file.

John

On 4/13/2016 10:00 AM, John Ferlan wrote:
[...]
Yes, it appears o+x is the only way right now. It definitely tries to access the share as root, though, on CentOS 7, since I also tried adding nfsnobody and nobody to the oneadmin group and that did not work either. It seems OpenNebula doesn't have this issue with NFSv3 running on Ubuntu:

[root@mdskvm-p01 ~]# rmdir /tmp/netfs-rootsquash-pool
[root@mdskvm-p01 ~]# cat nfs.xml
<pool type='netfs'>
  <name>rootsquash</name>
  <source>
    <host name='opennebula01'/>
    <dir path='/var/lib/one'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/tmp/netfs-rootsquash-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>9869</owner>
      <group>9869</group>
    </permissions>
  </target>
</pool>
[root@mdskvm-p01 ~]# virsh pool-define nfs.xml
Pool rootsquash defined from nfs.xml
[root@mdskvm-p01 ~]# virsh pool-build rootsquash
Pool rootsquash built
[root@mdskvm-p01 ~]# virsh pool-start rootsquash
error: Failed to start pool rootsquash
error: cannot open path '/tmp/netfs-rootsquash-pool': Permission denied
[root@mdskvm-p01 ~]# virsh vol-list rootsquash
error: Failed to list volumes
error: Requested operation is not valid: storage pool 'rootsquash' is not active
[root@mdskvm-p01 ~]# ls -altri /tmp/netfs-rootsquash-pool
total 4
     133 drwxrwxrwt. 14 root     root     4096 Apr 14 00:05 ..
68785924 drwxr-xr-x   2 oneadmin oneadmin    6 Apr 14 00:05 .
[root@mdskvm-p01 ~]# id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
[root@mdskvm-p01 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody),9869(oneadmin)
[root@mdskvm-p01 ~]# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody),9869(oneadmin)
[root@mdskvm-p01 ~]# id root
uid=0(root) gid=0(root) groups=0(root)
[root@mdskvm-p01 ~]# ps -ef|grep -i libvirtd
root       352 31058  0 00:31 pts/1    00:00:00 grep --color=auto -i libvirtd
root      1459     1  0 Apr11 ?        00:07:40 /usr/sbin/libvirtd --listen --config /etc/libvirt/libvirtd.conf
[root@mdskvm-p01 ~]# umount /var/lib/one
[root@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
[root@mdskvm-p01 ~]# umount /var/lib/one
[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# mount|tail -n 1
192.168.0.70:/var/lib/one on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
[root@mdskvm-p01 ~]# umount /var/lib/one
[root@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
[root@mdskvm-p01 ~]# mount|tail -n 1
192.168.0.70:/var/lib/one on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
[root@mdskvm-p01 ~]# su - oneadmin
Last login: Thu Apr 14 00:27:59 EDT 2016 on pts/0
[oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create /var/lib/one//datastores/0/47/deployment.0
create: file(optdata): /var/lib/one//datastores/0/47/deployment.0
error: Failed to create domain from /var/lib/one//datastores/0/47/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/47/disk.1': Permission denied
[oneadmin@mdskvm-p01 ~]$

CONTROLLER (NFS server):

[oneadmin@opennebula01 one]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
drwxr-xr-x. 19 root     root       4096 Apr  4 21:26 /var
drwxr-xr-x. 28 root     root       4096 Apr 13 03:30 /var/lib
drwxr-x---. 12 oneadmin oneadmin   4096 Apr 14 00:40 /var/lib/one
drwxrwxr-x   6 oneadmin oneadmin     46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x   8 oneadmin oneadmin     60 Apr 13 23:31 /var/lib/one/datastores/0
drwxrwxr-x   2 oneadmin oneadmin     68 Apr 13 23:32 /var/lib/one/datastores/0/47
-rw-r--r--   1 oneadmin oneadmin 372736 Apr 13 23:32 /var/lib/one/datastores/0/47/disk.1
[oneadmin@opennebula01 one]$

NODE (NFS client):

[oneadmin@mdskvm-p01 ~]$ ls -ld /var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
drwxr-xr-x. 21 root     root       4096 Apr 11 07:10 /var
drwxr-xr-x. 45 root     root       4096 Apr 13 04:11 /var/lib
drwxr-x---  12 oneadmin oneadmin   4096 Apr 14 00:39 /var/lib/one
drwxrwxr-x   6 oneadmin oneadmin     46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x   8 oneadmin oneadmin     60 Apr 13 23:31 /var/lib/one/datastores/0
drwxrwxr-x   2 oneadmin oneadmin     68 Apr 13 23:32 /var/lib/one/datastores/0/47
-rw-r--r--   1 oneadmin oneadmin 372736 Apr 13 23:32 /var/lib/one/datastores/0/47/disk.1
[oneadmin@mdskvm-p01 ~]$

Cheers, Tom K.
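One more avenue that could make root_squash livable here (an aside, not something tried in the thread): exports(5) supports anonuid= and anongid=, which tell the server to map squashed requests to a chosen account instead of nfsnobody. Mapping them to oneadmin (9869:9869 here) would give libvirtd's root-originated lookups the same access the unprivileged user already has, while keeping root_squash in force. A sketch of the server-side export line, with the client network an assumption to adapt:

/var/lib/one 192.168.0.0/24(rw,sync,root_squash,anonuid=9869,anongid=9869)

followed by 'exportfs -ra' on the server to re-export.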

On 4/14/2016 1:01 AM, TomK wrote:
[...]
OpenNebula runs this as oneadmin:

Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 + echo 'Running as user oneadmin'
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 ++ virsh --connect qemu:///system create /var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error: Failed to create domain from /var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error: can't canonicalize path '/var/lib/one//datastores/0/47/disk.1': Permission denied

Cheers, TK

On Thu, Apr 07, 2016 at 08:26:20 -0400, Francesco Romani wrote:
Hi everyone,
If a VM is configured to have a console attached to it, like using
http://libvirt.org/formatdomain.html#elementCharConsole
Libvirt offers access to VM serial consoles using the virDomainOpenConsole API [1]. However, I didn't find a way to 1. list the existing connections to the console 2. kill an existing connection, without reconnecting using VIR_DOMAIN_CONSOLE_FORCE [2]
Am I missing something? How can I do that?
Neither of those is possible currently.
Rationale for my request: oVirt [3] offers a management interface for VMs, and we have recently integrated user-friendly VM serial console access [4] in the system; in a future release we want to enhance the administration capabilities, allowing admins to check existing connections and to terminate them (maybe because one got stuck).
I think the plan danpb has in this area is to use virtlogd to distribute the console output to almost any number of clients, which would solve this kind of problem. Additionally, doesn't oVirt use just one connection from VDSM for this purpose? In that case it's rather trivial to know which connection (to libvirt) currently has the console stream open. ;)

Peter
participants (5): Francesco Romani, John Ferlan, Martin Kletzander, Peter Krempa, TomK