On Tue, Apr 12, 2016 at 06:24:16PM -0400, TomK wrote:
On 4/12/2016 5:08 PM, John Ferlan wrote:
Having/using a root squash via an NFS pool is "easy" (famous last words)

Create some pool XML (taking the example I have)

% cat nfs.xml
<pool type='netfs'>
     <name>rootsquash</name>
     <source>
         <host name='localhost'/>
         <dir path='/home/bzs/rootsquash/nfs'/>
         <format type='nfs'/>
     </source>
     <target>
         <path>/tmp/netfs-rootsquash-pool</path>
         <permissions>
             <mode>0755</mode>
             <owner>107</owner>
             <group>107</group>
         </permissions>
     </target>
</pool>

In this case 107:107 is qemu:qemu, and I used 'localhost' as the
hostname, but that can be an FQDN or IP address pointing at the NFS
server.
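
If you need to confirm which uid:gid qemu maps to on your host (it
isn't 107:107 on every distro), 'id' will tell you; expect output
along the lines of:

% id qemu
uid=107(qemu) gid=107(qemu) groups=107(qemu)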

You've already seen my /etc/exports
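
(For anyone following along at home, a root-squashing export of the
directory above would look something like this - the exact options
are a guess since the real exports file isn't shown here:)

/home/bzs/rootsquash/nfs *(rw,sync,root_squash)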

virsh pool-define nfs.xml     # define the pool from the XML
virsh pool-build rootsquash   # create the local mount point
virsh pool-start rootsquash   # mount the NFS export
virsh vol-list rootsquash     # list the volumes it found
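
If you want to double-check that the pool actually came up before
touching any domain XML, pool-info will show its state:

virsh pool-info rootsquash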

Now instead of

    <disk type='file' device='disk'>
      <source file='/var/lib/one//datastores/0/38/disk.0'/>
      <target dev='hda'/>
      <driver name='qemu' type='qcow2' cache='none'/>
    </disk>

Something like:

   <disk type='volume' device='disk'>
     <driver name='qemu' type='qcow2' cache='none'/>
     <source pool='rootsquash' volume='disk.0'/>
     <target dev='hda'/>
   </disk>

The volume name may be off, but it's probably close.  I forget how to
do the readonly bit for a pool-backed disk (again, my focus is
elsewhere).
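
If memory serves, it's just a <readonly/> sub-element on the disk
itself rather than anything on the pool; a sketch for your CDROM case
(the device, format, and volume name are guesses):

   <disk type='volume' device='cdrom'>
     <driver name='qemu' type='raw'/>
     <source pool='rootsquash' volume='disk.1'/>
     <target dev='hdc'/>
     <readonly/>
   </disk>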

Of course you'd have to adjust the nfs.xml above to suit your
environment and see what you get.  The permissions on the pool and on
the volumes in it are the key to how libvirt decides to "request
access" to a volume.  "disk.1" having read-only access is probably
not an issue since you seem to be using it as a CDROM; "disk.0",
however, is going to be used read/write and thus would have to be
configured appropriately...


Thanks John!  Appreciated again.

No worries, handle what's on your plate now and earmark this for
checking once you have some free cycles.  I can temporarily hop on
one leg by using Martin Kletzander's workaround (it's a POC at the
moment).

I'll have a look at your instructions further, but wanted to find
out: that nfs.xml config is a one-time thing, correct?  I'm spinning
these up at will via the OpenNebula GUI, and if I have to update a
config for each VM, that breaks the Cloud provisioning.  I'll go over
your notes again.  I'm optimistic.   :)


The more I think about it, the more I am convinced that the
workaround is actually not a workaround.  The only thing you need to
do is have execute permission for others (specifically for 'nobody'
on the NFS share) on every directory in the whole path.  Without that
even the pool won't be usable from libvirt.  However, it does not
pose any security issue, as it only allows others to traverse the
path.  When qemu is launched, it has the proper "label", meaning a
uid:gid that can access the file, so it will be able to read/write or
do whatever else the permissions you set there allow.  It's just that
libvirt does some checks, for example that the path exists.
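
Concretely, taking the example path from John's mail (adjust to your
own layout), that boils down to something like this, with namei(1) as
a quick way to verify the whole chain:

# give others execute (traverse) on every directory along the path
chmod o+x /home /home/bzs /home/bzs/rootsquash /home/bzs/rootsquash/nfs
# then verify the permissions of each path component
namei -l /home/bzs/rootsquash/nfs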

Hope that's understandable and that it resolves your issue permanently.

Have a nice day,
Martin
