Discussion:
[libvirt-users] how to list and kill existing console sessions to VMs?
Francesco Romani
2016-04-07 12:26:20 UTC
Hi everyone,

If a VM is configured to have a console attached to it, like using

http://libvirt.org/formatdomain.html#elementCharConsole

Libvirt offers access to VM serial consoles using the virDomainOpenConsole API [1].
However, I didn't find a way to
1. list the existing connections to the console
2. kill an existing connection, other than by reconnecting with VIR_DOMAIN_CONSOLE_FORCE [2]

Am I missing something? How can I do that?
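
For context, the only related knob I'm aware of today is the force flag at
(re)connect time, e.g. from virsh (just a sketch; "myvm" is a made-up domain name):

  # steals the console from whoever currently holds it; this maps to
  # VIR_DOMAIN_CONSOLE_FORCE, but it reconnects rather than merely killing
  # the other session, and nothing lists who is connected in the first place
  virsh console myvm --force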

Rationale for my request:
oVirt [3] offers a management interface for VMs, and we have recently integrated user-friendly
VM serial console access [4] into the system; in a future release we want to enhance the administration
capabilities, allowing admins to check existing connections and to terminate them (e.g. because a session got stuck).

+++

[1] http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainOpenConsole
[2] http://libvirt.org/html/libvirt-libvirt-domain.html#VIR_DOMAIN_CONSOLE_FORCE
[3] http://www.ovirt.org/
[4] https://www.ovirt.org/develop/release-management/features/engine/serial-console/ et al.
--
Francesco Romani
Red Hat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
TomK
2016-04-07 23:32:50 UTC
Hey All,

I've an issue where libvirtd tries to access an NFS mount but errors out
with: can't canonicalize path '/var/lib/one//datastores/0 . The
unprivileged user is able to read/write to the share just fine. root_squash
is used, and for security reasons no_root_squash cannot be used.

SELinux is disabled on both the controller and the node.

[***@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create
/var/lib/one//datastores/0/38/deployment.0
create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
error: Failed to create domain from
/var/lib/one//datastores/0/38/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/38/disk.1':
Permission denied

I added some debug flags to get more info and added -x to the deploy
script. The closest I get to more details is this:

2016-04-06 04:15:35.945+0000: 14072: debug :
virStorageFileBackendFileInit:1441 : initializing FS storage file
0x7f6aa4009000 (file:/var/lib/one//datastores/0/38/disk.1)[9869:9869]
2016-04-06 04:15:35.954+0000: 14072: error :
virStorageFileBackendFileGetUniqueIdentifier:1523 : can't canonicalize
path '/var/lib/one//datastores/0/38/disk.1':
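
(For reference, by "debug flags" I mean libvirtd's own logging knobs; I set
roughly the following in /etc/libvirt/libvirtd.conf and restarted libvirtd.
The exact filter list below is only an example, adjust as needed:)

  # /etc/libvirt/libvirtd.conf
  log_filters="1:storage 1:qemu 1:security"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"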

https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html

The comment there is: "The current implementation works for local
storage only and returns the canonical path of the volume."

But it seems that logic is also applied to NFS mounts. Perhaps it shouldn't
be? Is there any way to get around this problem? This is CentOS 7.

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
TomK
2016-04-09 15:08:19 UTC
Adding in libvir-list.

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Mobile: 416 618 8456
Home: 905 857 9652
Living on earth is expensive, but it includes a free trip around the sun.
TomK
2016-04-12 00:02:04 UTC
Hey All,

Wondering if anyone had any suggestions on this topic?

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
Martin Kletzander
2016-04-12 14:03:51 UTC
Post by TomK
Hey All,
Wondering if anyone had any suggestions on this topic?
The only thing I can come up with, looking at the error

  can't canonicalize path '/var/lib/one//datastores/0/38/disk.1': Permission denied

is that the path is being accessed under credentials that don't have access
to that file. Could you elaborate on that?

I think it's either:

a) you are running the domain as root or

b) we don't use the domain's uid/gid to canonicalize the path.

But if read access is enough for canonicalizing that path, I think the
problem is purely with permissions.
TomK
2016-04-12 14:58:43 UTC
Hey Martin,

Thanks very much. Appreciate you jumping in on this thread.

You see, that's just it. I've configured the libvirt .conf files to run as
oneadmin.oneadmin (non-privileged) for that NFS share, and I can access
all the files on that share as oneadmin without error, including the one
you listed. But libvirtd, by default, always starts as root. So it's
doing something as root, despite being configured to access the share as
oneadmin. As oneadmin I can access that file no problem. Here's how I
read the file on the node where the NFS share is mounted:

[***@mdskvm-p01 ~]$ ls -altri /var/lib/one//datastores/0/38/disk.1
34642274 -rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20
/var/lib/one//datastores/0/38/disk.1
[***@mdskvm-p01 ~]$ file /var/lib/one//datastores/0/38/disk.1
/var/lib/one//datastores/0/38/disk.1: ISO 9660 CD-ROM filesystem data
'CONTEXT'
[***@mdskvm-p01 ~]$ strings /var/lib/one//datastores/0/38/disk.1|head
CD001
LINUX CONTEXT
GENISOIMAGE ISO 9660/HFS FILESYSTEM CREATOR (C) 1993 E.YOUNGDALE (C)
1997-2006 J.PEARSON/J.SCHILLING (C) 2006-2007 CDRKIT TEAM 2016040500205600
2016040500205600
0000000000000000
2016040500205600

CD001
2016040500205600
2016040500205600
[***@mdskvm-p01 ~]$
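
(By "configured libvirt .conf files" I mean essentially the following in
/etc/libvirt/qemu.conf on the node; quoting from memory, so treat it as a
sketch rather than the exact file:)

  # /etc/libvirt/qemu.conf -- run the qemu processes as the unprivileged user
  user = "oneadmin"
  group = "oneadmin"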

My NFS mount looks as follows (I have to use root_squash for security
reasons; I'm sure it would work with no_root_squash, but that is simply
not an option here):

[***@mdskvm-p01 ~]# grep nfs /etc/fstab
# 192.168.0.70:/var/lib/one/ /var/lib/one/ nfs
context=system_u:object_r:nfs_t:s0,soft,intr,rsize=8192,wsize=8192,noauto
192.168.0.70:/var/lib/one/ /var/lib/one/ nfs
soft,intr,rsize=8192,wsize=8192,noauto
[***@mdskvm-p01 ~]#

[***@opennebula01 ~]# cat /etc/exports
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
[***@opennebula01 ~]#


So I dug deeper and see that there is a possibility libvirtd is trying
to access that NFS mount as root at some level, because as root I also
get a permission denied on the NFS share above. Rightly so, since I have
root_squash that I need to keep. But libvirtd should be able to access
the file as oneadmin, as I did above. It isn't, and this is what I read
on it:

https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html

The comment there is: "The current implementation works for local
storage only and returns the canonical path of the volume."

But it seems that logic is also applied to NFS mounts. Perhaps it shouldn't
be? Is there any way to get around this problem? This is CentOS 7.
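
(An easy way to reproduce just the canonicalization step outside libvirt,
for comparison as both users; with root_squash and no world access on the
share I'd expect the root one to fail:)

  # same path resolution libvirt is attempting, once as the share owner and
  # once as root (squashed to nobody over NFS)
  sudo -u oneadmin realpath /var/lib/one//datastores/0/38/disk.1
  realpath /var/lib/one//datastores/0/38/disk.1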

My OpenNebula forum post, from which this conversation originates, is here:
https://forum.opennebula.org/t/libvirtd-running-as-root-tries-to-access-oneadmin-nfs-mount-error-cant-canonicalize-path/2054/7

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
John Ferlan
2016-04-12 15:45:47 UTC
Post by TomK
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.
Can you provide some more details about which libvirt version you have
installed? I know I've made changes in this space in more recent versions
(though not the most recent). I'm no root_squash expert, but I was the
last to change things in this area, so that makes me partially
fluent ;-) in NFS/root_squash speak.

Using root_squash is very "finicky" (to say the least)... It wasn't
really clear from what you posted how you are attempting to reference
things. Does the "/var/lib/one//datastores/0/38/deployment.0" XML file
use a direct path to the NFS volume or does it use a pool? If a pool,
then what type of pool? It is beneficial to provide as many details as
possible about the configuration because (so to speak) those that are
helping you won't know your environment (I've never used OpenNebula) nor
do I have a 'oneadmin' uid:gid.

What got my attention was the error message "initializing FS storage
file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
trying to access the file (I assume that's oneadmin:oneadmin on your
system).

This says to me that you're trying to use a "file system" pool (e.g.
<pool type="fs">), perhaps, rather than the "NFS" pool (e.g. <pool
type="netfs">). Using an NFS pool certainly has the advantage of
"knowing how" to deal with the NFS environment. Since libvirt may
consider this to "just" be a FS file, it won't necessarily know to
try to access the file properly (OK, dependent upon the libvirt version too
perhaps - the details have been paged out of my memory while I do other
work).

One other thing that popped out at me:

My /etc/exports has:

/home/bzs/rootsquash/nfs *(rw,sync,root_squash)

which only differs from yours by the 'no_subtree_check'

Your environment, though, seems to have much more "depth" than mine; that
is, you have "//datastores/0/38/disk.1" appended as the (I assume)
disk to use. The question then becomes: does every directory in the
path to that file use "oneadmin:oneadmin", and of course does it have to,
with[out] that extra flag?

Again, I'm no expert just trying to provide ideas and help...

John
TomK
2016-04-12 16:07:50 UTC
Hey John,

Hehe, I got the right guy then. Very nice! And very good ideas, but I
may need more time to reread and try them out later tonight. I'm fully
in agreement about providing more details; a diagnosis can't be accurate
if there isn't much data to go on. This pool option is new to me. Please
tell me more about it. I can't find it in the file below, but maybe it's
defined elsewhere?

( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )


Alright, here are the details:

[***@mdskvm-p01 ~]# rpm -aq|grep -i libvir
libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
libvirt-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
[***@mdskvm-p01 ~]# cat /etc/release
cat: /etc/release: No such file or directory
[***@mdskvm-p01 ~]# cat /etc/*release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-***@listserv.fnal.gov"

REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
cpe:/o:scientificlinux:scientificlinux:7.2:ga
[***@mdskvm-p01 ~]#

[***@mdskvm-p01 ~]# mount /var/lib/one
[***@mdskvm-p01 ~]# su - oneadmin
Last login: Sat Apr 9 10:39:25 EDT 2016 on pts/0
Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on
ssh:notty
There were 9584 failed login attempts since the last successful login.
[***@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin)
groups=9869(oneadmin),992(libvirt),36(kvm)
[***@mdskvm-p01 ~]$ pwd
/var/lib/one
[***@mdskvm-p01 ~]$ ls -altriR|grep -i root
134320262 drwxr-xr-x. 45 root root 4096 Apr 12 07:58 ..
[***@mdskvm-p01 ~]$



[***@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-38</name>
<vcpu>1</vcpu>
<cputune>
<shares>1024</shares>
</cputune>
<memory>524288</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
</os>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source
file='/var/lib/one//datastores/0/38/disk.0'/>
<target dev='hda'/>
<driver name='qemu' type='qcow2' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source
file='/var/lib/one//datastores/0/38/disk.1'/>
<target dev='hdb'/>
<readonly/>
<driver name='qemu' type='raw'/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<mac address='02:00:c0:a8:00:64'/>
</interface>
<graphics type='vnc' listen='0.0.0.0' port='5938'/>
</devices>
<features>
<acpi/>
</features>
</domain>

[***@mdskvm-p01 ~]$ cat
/var/lib/one//datastores/0/38/deployment.0|grep -i nfs
[***@mdskvm-p01 ~]$



Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
Martin Kletzander
2016-04-12 19:40:08 UTC
[ It would be way easier to reply if you didn't top-post ]
Post by TomK
Hey John,
Hehe, I got the right guy then. Very nice! And very good ideas but I
may need more time to reread and try them out later tonight. I'm fully
in agreement about providing more details. Can't be accurate in a
diagnosis if there isn't much data to go on. This pool option is new to
me. Please tell me more on it. Can't find it in the file below but
maybe it's elsewhere?
( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )
Post by John Ferlan
Post by TomK
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.
Can you provide some more details with respect to which libvirt version
you have installed. I know I've made changes in this space in more
recent versions (not the most recent). I'm no root_squash expert, but I
was the last to change things in the space so that makes me partially
fluent ;-) in NFS/root_squash speak.
I'm always lost in how we handle *all* the corner cases that are not even
used anywhere at all, yet still have to care about the conditions we have
in the code, especially when it's constantly changing. So thanks for jumping
in. I only replied because nobody else did and I had only the tiniest
clue as to what could be happening.
Post by TomK
Post by John Ferlan
Using root_squash is very "finicky" (to say the least)... It wasn't
really clear from what you posted how you are attempting to reference
things. Does the "/var/lib/one//datastores/0/38/deployment.0" XML file
use a direct path to the NFS volume or does it use a pool? If a pool,
then what type of pool? It is beneficial to provide as many details as
possible about the configuration because (so to speak) those that are
helping you won't know your environment (I've never used OpenNebula) nor
do I have a 'oneadmin' uid:gid.
What got my attention was the error message "initializing FS storage
file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
trying to access the file (I assume that's oneadmin:oneadmin on your
system).
I totally missed this. So the only thing that popped on my mind now was
checking the whole path:

ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}

You can also run it as both root and oneadmin; however, after reading through
all the info again, I don't think that'll help.
TomK
2016-04-12 19:55:45 UTC
Post by Martin Kletzander
[ I would be way easier to reply if you didn't top-post ]
Post by TomK
Post by John Ferlan
Using root_squash is very "finicky" (to say the least)... It wasn't
really clear from what you posted how you are attempting to reference
things. Does the "/var/lib/one//datastores/0/38/deployment.0" XML file
use a direct path to the NFS volume or does it use a pool? If a pool,
then what type of pool? It is beneficial to provide as many details as
possible about the configuration because (so to speak) those that are
helping you won't know your environment (I've never used OpenNebula) nor
do I have a 'oneadmin' uid:gid.
What got my attention was the error message "initializing FS storage
file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
trying to access the file (I assume that's oneadmin:oneadmin on your
system).
I totally missed this. So the only thing that popped on my mind now was
ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
You can also run it as root and oneadmin, however after reading through
all the info again, I don't think that'll help.
I top-post by default in Thunderbird, and we have the same setup at work with
M$ LookOut. Old habits are to blame, I guess. I'll try to reply like
this instead. But yeah, top-posting is terrible for mailing lists.
Here's the output, and thanks again:

[***@mdskvm-p01 ~]$ ls -ld
/var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
drwxr-xr-x. 21 root root 4096 Apr 11 07:10 /var
drwxr-xr-x. 45 root root 4096 Apr 12 07:58 /var/lib
drwxr-x--- 12 oneadmin oneadmin 4096 Apr 12 15:50 /var/lib/one
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 6 oneadmin oneadmin 42 Apr 5 00:20
/var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 5 00:20
/var/lib/one/datastores/0/38
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20
/var/lib/one/datastores/0/38/disk.1
[***@mdskvm-p01 ~]$

That's the default setting, but I think I see what you're getting at:
that permissions get inherited?

Cheers,
Tom K.
-------------------------------------------------------------------------------------


Living on earth is expensive, but it includes a free trip around the sun.
Martin Kletzander
2016-04-12 20:29:29 UTC
Post by TomK
Post by Martin Kletzander
[ I would be way easier to reply if you didn't top-post ]
Post by John Ferlan
What got my attention was the error message "initializing FS storage
file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
trying to access the file (I assume that's oneadmin:oneadmin on your
system).
I totally missed this. So the only thing that popped on my mind now was
ls -ld /var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
You can also run it as root and oneadmin, however after reading through
all the info again, I don't think that'll help.
I top post by default in thunderbird and we have same setup at work with
M$ LookOut. Old habits are to blame I guess. I'll try to reply like
this instead. But yeah it's terrible for mailing lists to top post.
/var{,/lib{,/one{,/datastores{,/0{,/38{,/disk.1}}}}}}
drwxr-xr-x. 21 root root 4096 Apr 11 07:10 /var
drwxr-xr-x. 45 root root 4096 Apr 12 07:58 /var/lib
drwxr-x--- 12 oneadmin oneadmin 4096 Apr 12 15:50 /var/lib/one
Look ^^, maybe for a quick workaround you could try doing:

chmod o+rx /var/lib/one

Let me know if that does the trick (at least for now).
Post by TomK
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 6 oneadmin oneadmin 42 Apr 5 00:20
/var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 5 00:20
/var/lib/one/datastores/0/38
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20
/var/lib/one/datastores/0/38/disk.1
That's the default setting but I think I see what you're getting at that
permissions get inherited?
No, I just think you need eXecute on all parent directories. That
shouldn't hinder your security and could help.
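
A quick way to spot a missing execute bit anywhere along the path is namei
from util-linux, e.g.:

  # prints every component of the path with its permissions, owner and group,
  # so a directory lacking o+x for users outside the oneadmin group stands out
  namei -l /var/lib/one/datastores/0/38/disk.1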
Martin Kletzander
2016-04-12 20:36:45 UTC
Post by Martin Kletzander
chmod o+rx /var/lib/one
Actually, o+x ought to be enough.
TomK
2016-04-12 22:09:22 UTC
Post by Martin Kletzander
chmod o+rx /var/lib/one
Actually, o+x ought to be enough.
Post by Martin Kletzander
Let me know if that does the trick (at least for now).
Post by TomK
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44
/var/lib/one/datastores
drwxrwxr-x 6 oneadmin oneadmin 42 Apr 5 00:20
/var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 5 00:20
/var/lib/one/datastores/0/38
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20
/var/lib/one/datastores/0/38/disk.1
That's the default setting but I think I see what you're getting at that
permissions get inherited?
No, I just think you need eXecute on all parent directories. That
shouldn't hinder your security and could help.
The execute permission did the trick and allowed creation, so that's
good. There's still the question of writes, and I'm thinking you intend
this as a workaround, since oneadmin should be able to write in there
even with 'other' being ---. The automated deployment of cloud VMs would
still fail when writes are attempted.

[***@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create
/var/lib/one//datastores/0/38/deployment.0
create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
Domain one-38 created from /var/lib/one//datastores/0/38/deployment.0
[***@mdskvm-p01 ~]$

Now, should this work without any 'other' permissions for the
unprivileged user oneadmin? I'm thinking yes, per John Ferlan's reply?

[***@mdskvm-p01 0]$ virsh -d 1 --connect qemu:///system create
/var/lib/one//datastores/0/24/deployment.0
create: file(optdata): /var/lib/one//datastores/0/24/deployment.0
error: Failed to create domain from
/var/lib/one//datastores/0/24/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/24/disk.1':
Permission denied
[***@mdskvm-p01 0]$


Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
John Ferlan
2016-04-12 21:08:47 UTC
Post by Martin Kletzander
[ I would be way easier to reply if you didn't top-post ]
Post by TomK
Hey John,
Hehe, I got the right guy then. Very nice! And very good ideas but I
may need more time to reread and try them out later tonight. I'm fully
in agreement about providing more details. Can't be accurate in a
diagnosis if there isn't much data to go on. This pool option is new to
me. Please tell me more on it. Can't find it in the file below but
maybe it's elsewhere?
( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )
It'd take more time than I have at the present moment to root out what
changed when for NFS root-squash, but suffice to say there were some
corner cases. Some involving how qemu-img files are generated - I don't
have the details present in my short term memory.
Having/using a root squash via an NFS pool is "easy" (famous last words)

Create some pool XML (taking the example I have)

% cat nfs.xml
<pool type='netfs'>
<name>rootsquash</name>
<source>
<host name='localhost'/>
<dir path='/home/bzs/rootsquash/nfs'/>
<format type='nfs'/>
</source>
<target>
<path>/tmp/netfs-rootsquash-pool</path>
<permissions>
<mode>0755</mode>
<owner>107</owner>
<group>107</group>
</permissions>
</target>
</pool>

In this case 107:107 is qemu:qemu and I used 'localhost' as the
hostname, but that can be a fqdn or ip-addr to the NFS server.

You've already seen my /etc/exports

virsh pool-define nfs.xml
virsh pool-build rootsquash
virsh pool-start rootsquash
virsh vol-list rootsquash

Now instead of

<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/0/38/disk.0'/>
<target dev='hda'/>
<driver name='qemu' type='qcow2' cache='none'/>
</disk>

Something like:

<disk type='volume' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source pool='rootsquash' volume='disk.0'/>
<target dev='hda'/>
</disk>

The volume name may be off, but it's perhaps close. I forget how to do
the readonly bit for a pool (again, my focus is elsewhere).
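
(If memory serves, the plain <readonly/> sub-element should still apply to a
volume-type disk just as it does for a file-type one; untested sketch, and the
volume name is again possibly off:

 <disk type='volume' device='cdrom'>
   <driver name='qemu' type='raw'/>
   <source pool='rootsquash' volume='disk.1'/>
   <target dev='hdb'/>
   <readonly/>
 </disk>
)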

Of course you'd have to adjust the nfs.xml above to suit your
environment and see what you see/get. The privileges for the pool and
volumes in the pool become the key to how libvirt decides to "request
access" to the volume. "disk.1" having read access is probably not an
issue since you seem to be using it as a CDROM; however, "disk.0" is
going to be used for read/write - thus would have to be appropriately
configured...
Post by Martin Kletzander
Post by TomK
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
Post by John Ferlan
Post by TomK
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.
Can you provide some more details with respect to which libvirt version
you have installed. I know I've made changes in this space in more
recent versions (not the most recent). I'm no root_squash expert, but I
was the last to change things in the space so that makes me partially
fluent ;-) in NFS/root_squash speak.
I'm always lost in how do we handle *all* the corner cases that are not
even used anywhere at all, but care about the conditions we have in the
code. Especially when it's constantly changing. So thanks for jumping
in. I only replied because nobody else did and I had only the tiniest
clue as to what could happen.
I saw the post, but was heads down somewhere else. Suffice to say trying
to swap in root_squash is a painful exercise...


John

[...]
TomK
2016-04-12 22:24:16 UTC
Permalink
Post by John Ferlan
Post by Martin Kletzander
[ It would be way easier to reply if you didn't top-post ]
Post by TomK
Hey John,
Hehe, I got the right guy then. Very nice! And very good ideas but I
may need more time to reread and try them out later tonight. I'm fully
in agreement about providing more details. Can't be accurate in a
diagnosis if there isn't much data to go on. This pool option is new to
me. Please tell me more on it. Can't find it in the file below but
maybe it's elsewhere?
( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )
[...]
Having/using a root squash via an NFS pool is "easy" (famous last words)
Create some pool XML (taking the example I have)
% cat nfs.xml
<pool type='netfs'>
<name>rootsquash</name>
<source>
<host name='localhost'/>
<dir path='/home/bzs/rootsquash/nfs'/>
<format type='nfs'/>
</source>
<target>
<path>/tmp/netfs-rootsquash-pool</path>
<permissions>
<mode>0755</mode>
<owner>107</owner>
<group>107</group>
</permissions>
</target>
</pool>
In this case 107:107 is qemu:qemu and I used 'localhost' as the
hostname, but that can be a fqdn or ip-addr to the NFS server.
You've already seen my /etc/exports
virsh pool-define nfs.xml
virsh pool-build rootsquash
virsh pool-start rootsquash
virsh vol-list rootsquash
Now instead of
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/0/38/disk.0'/>
<target dev='hda'/>
<driver name='qemu' type='qcow2' cache='none'/>
</disk>
<disk type='volume' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source pool='rootsquash' volume='disk.0'/>
<target dev='hda'/>
</disk>
The volume name may be off, but it's perhaps close. I forget how to do
the readonly bit for a pool (again, my focus is elsewhere).
Of course you'd have to adjust the nfs.xml above to suit your
environment and see what you see/get. The privileges for the pool and
volumes in the pool become the key to how libvirt decides to "request
access" to the volume. "disk.1" having read access is probably not an
issue since you seem to be using it as a CDROM; however, "disk.0" is
going to be used for read/write - thus would have to be appropriately
configured...
Post by Martin Kletzander
Post by TomK
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
Post by John Ferlan
Post by TomK
Hey Martin,
Thanks very much. Appreciate you jumping in on this thread.
Can you provide some more details with respect to which libvirt version
you have installed. I know I've made changes in this space in more
recent versions (not the most recent). I'm no root_squash expert, but I
was the last to change things in the space so that makes me partially
fluent ;-) in NFS/root_squash speak.
I'm always lost in how do we handle *all* the corner cases that are not
even used anywhere at all, but care about the conditions we have in the
code. Especially when it's constantly changing. So thanks for jumping
in. I only replied because nobody else did and I had only the tiniest
clue as to what could happen.
I saw the post, but was heads down somewhere else. Suffice to say trying
to swap in root_squash is a painful exercise...
John
[...]
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
Thanks John! Appreciated again.

No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).

I'll have a look at your instructions further, but wanted to find out whether
that nfs.xml config is a one-time thing, correct? I'm spinning these up
at will via the OpenNebula GUI, and if I have to update it for each VM, that
breaks the cloud provisioning. I'll go over your notes again. I'm
optimistic. :)

Cheers,
Tom Kacperski.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
Martin Kletzander
2016-04-13 05:33:04 UTC
Permalink
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I think about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is to give execute permission to others (precisely, to 'nobody' on the NFS
share) on every directory along the whole path. Without that, even the
pool won't be usable from libvirt. However, it does not pose any security
issue, as it only allows others to traverse the path. When qemu is
launched, it has the proper "label", meaning the uid:gid needed to access
the file, so it will be able to read/write according to whatever
permissions you set there. It's just that libvirt does some checks, for
example that the path exists.
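
A minimal sketch of what that means in practice, assuming the export is
/var/lib/one as in this setup; only the execute (traversal) bit is added
for others, no read or write:

    # on the NFS server, for every directory component of the path
    chmod o+x /var/lib/one
    chmod o+x /var/lib/one/datastores
    chmod o+x /var/lib/one/datastores/0
    # the per-VM directories and the image files keep their existing modes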

Hope that's understandable and it will resolve your issue permanently.

Have a nice day,
Martin
TomK
2016-04-13 13:19:41 UTC
Permalink
Post by Martin Kletzander
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is having execute for others (precisely for 'nobody' on the nfs share)
in the whole path on all directories. Without that even the pool won't
be usable from libvirt. However it does not pose any security issue as
it only allows others to check the path. When qemu is launched, it has
the proper "label", meaning uid:gid to access the file so it will be
able to read/write or whatever permissions you set there. It's just
that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day,
Martin
That fits in with what's happening, for sure. I'm just not sure how much
of the work libvirtd does as root vs. nobody vs. oneadmin on the
NFS mount. If there were a way to find that out, it would help a lot. I
will give the nobody user setting a try, however.

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
TomK
2016-04-13 13:23:07 UTC
Permalink
Post by Martin Kletzander
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is having execute for others (precisely for 'nobody' on the nfs share)
in the whole path on all directories. Without that even the pool won't
be usable from libvirt. However it does not pose any security issue as
it only allows others to check the path. When qemu is launched, it has
the proper "label", meaning uid:gid to access the file so it will be
able to read/write or whatever permissions you set there. It's just
that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day,
Martin
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
The only reason I said that this might be a 'workaround' is that John
Ferlan commented that he'll look at this later on. Ideally the
OpenNebula community keeps the 'other' permissions at nil, and presumably
that works on NFSv3 per the forum topic I included earlier from them.
But if setting the permissions for 'nobody' is what allows the functionality,
I would be comfortable with that.

Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
John Ferlan
2016-04-13 14:00:17 UTC
Permalink
Post by TomK
Post by Martin Kletzander
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is having execute for others (precisely for 'nobody' on the nfs share)
in the whole path on all directories. Without that even the pool won't
be usable from libvirt. However it does not pose any security issue as
it only allows others to check the path. When qemu is launched, it has
the proper "label", meaning uid:gid to access the file so it will be
able to read/write or whatever permissions you set there. It's just
that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day,
Martin
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
The only reason I said that this might be a 'workaround' is due to John
Farlan commenting that he'll look at this later on. Ideally the
opennebula community keeps the other permissions to nill and presumably
they work on NFSv3 per the forum topic I included earlier from them.
But if setting the permissions on nobody to allow for the functionality,
I would be comfortable with that.
Martin and I were taking different paths... But yes, it certainly makes
sense given your error message about canonical path and the need for
eXecute permissions... I think I started wondering about that first, but
then jumped into the NFS pool because that's what my reference point is
for root-squash. Since root squash essentially sends root requests as
"nfsnobody" (IOW: as "others", not the user or group), the "o+x" approach
is the solution if you're going directly at the file.
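
A rough way to test that from the node, assuming coreutils' realpath is
available: resolve the path as a user that only gets the 'other' bits, which
approximates what the squashed root ends up with:

    # run as root on the KVM node; 'nobody' normally has a nologin shell, hence -s
    su -s /bin/sh -c "realpath /var/lib/one//datastores/0/38/disk.1" nobody
    # a "Permission denied" here points at a missing o+x somewhere along the path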

John
TomK
2016-04-14 05:01:32 UTC
Permalink
Post by John Ferlan
Post by TomK
Post by Martin Kletzander
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is having execute for others (precisely for 'nobody' on the nfs share)
in the whole path on all directories. Without that even the pool won't
be usable from libvirt. However it does not pose any security issue as
it only allows others to check the path. When qemu is launched, it has
the proper "label", meaning uid:gid to access the file so it will be
able to read/write or whatever permissions you set there. It's just
that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day,
Martin
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
The only reason I said that this might be a 'workaround' is due to John
Farlan commenting that he'll look at this later on. Ideally the
opennebula community keeps the other permissions to nill and presumably
they work on NFSv3 per the forum topic I included earlier from them.
But if setting the permissions on nobody to allow for the functionality,
I would be comfortable with that.
Martin and I were taking different paths... But yes, it certainly makes
sense given your error message about canonical path and the need for
eXecute permissions... I think I started wondering about that first, but
then jumped into the NFS pool because that's what my reference point is
for root-squash. Since root squash essentially sends root requests as
"nfsnobody" (IOW: others not the user or group), then the "o+x" approach
is the solution if you're going directly at the file.
John
Yes, it appears the o+x is the only way right now. It definitely tries to
access the share as root though, on CentOS 7, since I also tried adding
nfsnobody and nobody to the oneadmin group and that did not work
either. It seems OpenNebula doesn't have this issue with NFSv3 running on
Ubuntu:

[***@mdskvm-p01 ~]# rmdir /tmp/netfs-rootsquash-pool
[***@mdskvm-p01 ~]# cat nfs.xml
<pool type='netfs'>
<name>rootsquash</name>
<source>
<host name='opennebula01'/>
<dir path='/var/lib/one'/>
<format type='nfs'/>
</source>
<target>
<path>/tmp/netfs-rootsquash-pool</path>
<permissions>
<mode>0755</mode>
<owner>9869</owner>
<group>9869</group>
</permissions>
</target>
</pool>
[***@mdskvm-p01 ~]#
[***@mdskvm-p01 ~]#

[***@mdskvm-p01 ~]# virsh pool-define nfs.xml
Pool rootsquash defined from nfs.xml

[***@mdskvm-p01 ~]# virsh pool-build rootsquash
Pool rootsquash built

[***@mdskvm-p01 ~]# virsh pool-start rootsquash
error: Failed to start pool rootsquash
error: cannot open path '/tmp/netfs-rootsquash-pool': Permission denied

[***@mdskvm-p01 ~]# virsh vol-list rootsquash
error: Failed to list volumes
error: Requested operation is not valid: storage pool 'rootsquash' is
not active

[***@mdskvm-p01 ~]# ls -altri /tmp/netfs-rootsquash-pool
total 4
133 drwxrwxrwt. 14 root root 4096 Apr 14 00:05 ..
68785924 drwxr-xr-x 2 oneadmin oneadmin 6 Apr 14 00:05 .
[***@mdskvm-p01 ~]#

[***@mdskvm-p01 ~]# id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin)
groups=9869(oneadmin),992(libvirt),36(kvm)
[***@mdskvm-p01 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody),9869(oneadmin)
[***@mdskvm-p01 ~]# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody)
groups=65534(nfsnobody),9869(oneadmin)
[***@mdskvm-p01 ~]# id root
uid=0(root) gid=0(root) groups=0(root)
[***@mdskvm-p01 ~]#

[***@mdskvm-p01 ~]# ps -ef|grep -i libvirtd
root 352 31058 0 00:31 pts/1 00:00:00 grep --color=auto -i
libvirtd
root 1459 1 0 Apr11 ? 00:07:40 /usr/sbin/libvirtd
--listen --config /etc/libvirt/libvirtd.conf
[***@mdskvm-p01 ~]#



[***@mdskvm-p01 ~]# umount /var/lib/one
[***@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
[***@mdskvm-p01 ~]# umount /var/lib/one
[***@mdskvm-p01 ~]# mount /var/lib/one
[***@mdskvm-p01 ~]# mount|tail -n 1
192.168.0.70:/var/lib/one on /var/lib/one type nfs4
(rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
[***@mdskvm-p01 ~]# umount /var/lib/one
[***@mdskvm-p01 ~]# mount --no-canonicalize /var/lib/one
[***@mdskvm-p01 ~]# mount|tail -n 1
192.168.0.70:/var/lib/one on /var/lib/one type nfs4
(rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.60,local_lock=none,addr=192.168.0.70)
[***@mdskvm-p01 ~]# su - oneadmin
Last login: Thu Apr 14 00:27:59 EDT 2016 on pts/0
[***@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create
/var/lib/one//datastores/0/47/deployment.0
create: file(optdata): /var/lib/one//datastores/0/47/deployment.0
error: Failed to create domain from
/var/lib/one//datastores/0/47/deployment.0
error: can't canonicalize path '/var/lib/one//datastores/0/47/disk.1':
Permission denied
[***@mdskvm-p01 ~]$




CONTROLLER ( NFS Server )

[***@opennebula01 one]$ ls -ld
/var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
drwxr-xr-x. 19 root root 4096 Apr 4 21:26 /var
drwxr-xr-x. 28 root root 4096 Apr 13 03:30 /var/lib
drwxr-x---. 12 oneadmin oneadmin 4096 Apr 14 00:40 /var/lib/one
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 8 oneadmin oneadmin 60 Apr 13 23:31
/var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 13 23:32
/var/lib/one/datastores/0/47
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 13 23:32
/var/lib/one/datastores/0/47/disk.1
[***@opennebula01 one]$



NODE ( NFS Client )

[***@mdskvm-p01 ~]$ ls -ld
/var{,/lib{,/one{,/datastores{,/0{,/47{,/disk.1}}}}}}
drwxr-xr-x. 21 root root 4096 Apr 11 07:10 /var
drwxr-xr-x. 45 root root 4096 Apr 13 04:11 /var/lib
drwxr-x--- 12 oneadmin oneadmin 4096 Apr 14 00:39 /var/lib/one
drwxrwxr-x 6 oneadmin oneadmin 46 Mar 31 02:44 /var/lib/one/datastores
drwxrwxr-x 8 oneadmin oneadmin 60 Apr 13 23:31
/var/lib/one/datastores/0
drwxrwxr-x 2 oneadmin oneadmin 68 Apr 13 23:32
/var/lib/one/datastores/0/47
-rw-r--r-- 1 oneadmin oneadmin 372736 Apr 13 23:32
/var/lib/one/datastores/0/47/disk.1
[***@mdskvm-p01 ~]$



Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.
TomK
2016-04-14 05:24:04 UTC
Permalink
Post by TomK
Post by John Ferlan
Post by TomK
Post by Martin Kletzander
Post by TomK
Post by John Ferlan
[...]
Thanks John! Appreciated again.
No worries, handle what's on the plate now and earmark this for checking
once you have some free cycles. I can temporarily hop on one leg by
using Martin Kletzander's workaround (It's a POC at the moment).
I'll have a look at your instructions further but wanted to find out if
that config nfs.xml is a one time thing correct? I'm spinning these up
at will via the OpenNebula GUI and if I have update for each VM, that
breaks the Cloud provisioning. I'll go over your notes again. I'm
optimistic. :)
The more I'm thinking about it, the more I am convinced that the
workaround is actually not a workaround. The only thing you need to do
is having execute for others (precisely for 'nobody' on the nfs share)
in the whole path on all directories. Without that even the pool won't
be usable from libvirt. However it does not pose any security issue as
it only allows others to check the path. When qemu is launched, it has
the proper "label", meaning uid:gid to access the file so it will be
able to read/write or whatever permissions you set there. It's just
that libvirt does some checks that the path exists for example.
Hope that's understandable and it will resolve your issue permanently.
Have a nice day,
Martin
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
The only reason I said that this might be a 'workaround' is due to John
Farlan commenting that he'll look at this later on. Ideally the
opennebula community keeps the other permissions to nill and presumably
they work on NFSv3 per the forum topic I included earlier from them.
But if setting the permissions on nobody to allow for the
functionality,
I would be comfortable with that.
Martin and I were taking different paths... But yes, it certainly makes
sense given your error message about canonical path and the need for
eXecute permissions... I think I started wondering about that first, but
then jumped into the NFS pool because that's what my reference point is
for root-squash. Since root squash essentially sends root requests as
"nfsnobody" (IOW: others not the user or group), then the "o+x" approach
is the solution if you're going directly at the file.
John
[...]
_______________________________________________
libvirt-users mailing list
https://www.redhat.com/mailman/listinfo/libvirt-users
+ OpenNebula runs this as oneadmin:

Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 + echo
'Running as user oneadmin'
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 ++
virsh --connect qemu:///system create
/var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error:
Failed to create domain from /var/lib/one//datastores/0/47/deployment.0
Wed Apr 13 23:32:40 2016 [Z0][VMM][D]: Message received: LOG I 47 error:
can't canonicalize path '/var/lib/one//datastores/0/47/disk.1':
Permission denied

Cheers,
TK

Peter Krempa
2016-04-08 12:07:46 UTC
Permalink
Post by Francesco Romani
Hi everyone,
If a VM is configured to have a console attached to it, like using
http://libvirt.org/formatdomain.html#elementCharConsole
Libvirt offers access to VM serial console's using the virDomainOpenConsole API[1]
However, I didn't find a way to
1. list the existing connections to the console
2. kill an existing connection - without reconnecting using VIR_DOMAIN_CONSOLE_FORCE[2]
Am I missing something? How can I do that?
Neither of those is possible currently.
Post by Francesco Romani
Rationale for my request
oVirt [3] offers a management interface for VMs, and we have recently integrated user-friandly
VM serial console access [4] in the system; in the future release we want to enhance the administation
capabilities allowing to check existing connections and to terminate them (maybe because it got stuck).
I think the plan that danpb has in this area is to use virtlogd to
distribute the console output to almost any number of clients, which
would solve this kind of problem.

Additionally, doesn't oVirt use just one connection from VDSM for this
purpose? In that case it's rather trivial to know which connection
(to libvirt) currently has the console stream open. ;)
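
In the meantime, forcibly displacing whoever currently holds the console is
still done with the FORCE flag from the quoted question; with virsh that is
the --force switch (the domain name below is a placeholder):

    virsh console mydomain --force    # disconnects any already-connected session and takes over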

Peter