On 10/25/2015 01:25 PM, Sage Weil wrote:
Hi everyone,

We're currently working on an improved mechanism for plumbing file access
into a VM or container; in most cases, some interaction and configuration
on the hypervisor/host is required to make it happen.  The goal is to
create something that generalizes beyond the NFS- and CIFS-only
assumptions baked into the current crop of Manila drivers.

The current target is to use the new zero-configuration VSOCK sockets,
which I will be talking about at the summit on Wednesday (a rough
host-side sketch follows the list):

  - mount distributed fs on host, export via knfsd to guest over VSOCK
  - run Ganesha with distributed fs FSAL on host, export to guest over VSOCK
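
A minimal sketch of the host-side plumbing for the knfsd variant (the
vsock export syntax below is a guess, since VSOCK support in knfsd is
still experimental; the CephFS monitor address and paths are
illustrative):

  # mount the distributed fs (CephFS here) on the hypervisor host
  mount -t ceph mon1.example.com:6789:/ /srv/manila/$SHARE_ID

  # export it to the guest over VSOCK; "vsock:*" is a hypothetical
  # client spec -- the real syntax depends on the knfsd vsock patches
  echo "/srv/manila/$SHARE_ID  vsock:*(rw,no_root_squash)" >> /etc/exports
  exportfs -ra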

NFS-over-vsock is a VERY interesting idea. I understood that it wasn't implemented yet in Linux... If that's no longer true then I'm excited to find a way to make it work.

(In short, VSOCK requires no configuration on the guest--so we can
continue to treat it as a black box--and only a simple network id
assignment on the host.  This reduces driver complexity, reduces the
potential for user error, and improves security.)

Or, the same can be done using a more configuration-intensive private
network (the Ganesha flavor is sketched after the list):

  - mount distributed fs on host, export via knfsd to guest over private
    host/guest network
  - run ganesha with distributed fs FSAL on host, export to guest over
    private host/guest network
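
The Ganesha flavor might look roughly like this (a minimal sketch; the
export options, service name, and private-network address are all
illustrative):

  # host side: point a Ganesha instance at the distributed fs via its
  # CephFS FSAL, serving NFS on the private host/guest network, with an
  # /etc/ganesha/ganesha.conf along these lines:
  #
  #   EXPORT {
  #       Export_Id = 1;
  #       Path = "/";
  #       Pseudo = "/share";
  #       Access_Type = RW;
  #       Protocols = 4;
  #       FSAL { Name = CEPH; }
  #   }
  systemctl start nfs-ganesha

  # guest side: a plain NFS mount over the private network
  mount -t nfs -o nfsvers=4.1 192.168.100.1:/share /mnt/share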

We are also interested in container use-cases (e.g., for Nova-managed
containers using lxc or nova-docker):

  - mount distributed fs on host, bind mount a volume/directory into the
    guest/container namespace (see the sketch below)
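
Concretely, something like this (paths are illustrative, following the
/dev/manila/$shareid convention suggested further down):

  # mount the distributed fs once on the host (shared across containers)
  mount -t ceph mon1.example.com:6789:/ /srv/cephfs

  # bind the share's directory into the container's mount namespace
  mkdir -p $GUEST_ROOT/dev/manila/$SHARE_ID
  mount --bind /srv/cephfs/shares/$SHARE_ID $GUEST_ROOT/dev/manila/$SHARE_ID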

From a plumbing perspective, these approaches appear to be the most
attractive as far as efficiency and security.  When it comes to making
openstack actually orchestrate it, however, things get complicated (and
as a developer in a peripheral project I don't have the clearest view of
how things should work).

The problem is that in any of these scenarios, the default/naive strategy
of having Manila go poking around on the host machine is very fragile:

  - What if the host is down when the Manila configuration is made?
  - What if the VM is migrated to another host?
  - What if Manila doesn't understand what (the) Nova (driver) is doing?

For block volumes, there is a simple separation of roles: Cinder manages
the volumes themselves, and Nova attaches them to guests.  This means
there is some volume-related code in Nova, but it also means that code
is running in a context where it can Do The Right Thing.  I believe the
same should be true of Manila file shares as well... as soon as you look
beyond the current assumption that all shares must be reached via a
network file protocol (and usually a Neutron network).

That is, we would love to see a new set of Nova APIs to attach a Manila
share to (and detach it from) a guest, along with a "get share info" call
that returns any details required for the user/tenant to do the final
mount within the guest.
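
To make that concrete, the workflow might look something like this (none
of these commands exist today; the names and output are made up to sketch
the proposed API surface):

  # attach an existing Manila share to an instance; Nova does whatever
  # host-side plumbing its driver calls for (knfsd, Ganesha, bind mount)
  nova share-attach <instance-id> <share-id>

  # ask how the share should be reached from inside the guest
  nova share-info <instance-id> <share-id>
  # -> returns whatever the tenant needs for the final in-guest mount
  #    (an NFS-over-vsock mount line, or a /dev/manila/$shareid path)

  # detach when done
  nova share-detach <instance-id> <share-id>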

This enables our new topologies:

  - Attach API might mount CephFS on the host and exportfs it over VSOCK
    to the guest VM.  Get-info API would direct the user to mount -t nfs
    vsock:1/ with -o nfsvers=4.1,proto=vsock,clientaddr=N .. (spelled out
    below)
  - Attach API might configure/start a Ganesha instance doing the same
  - Attach API might mount CephFS on the host and then mount --bind the
    share to $guestroot/dev/manila/$shareid.  Get-info API would direct
    the user to mount --bind /dev/manila/$shareid.
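
Spelled out, the guest-side mounts for those cases would look roughly
like this (addresses and paths follow the conventions above and are
illustrative):

  # VSOCK NFS, whether served by knfsd or Ganesha on the host
  mount -t nfs -o nfsvers=4.1,proto=vsock,clientaddr=N vsock:1/ /mnt/share

  # container case: the share is already in the namespace; just bind it
  mount --bind /dev/manila/$SHARE_ID /mnt/share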

I suspect it would also clean up some existing Manila topologies (though
I'm not sure).  For example, the current NFS-based Manila drivers will
help you configure a Neutron net that can access the storage server.  The
attach API might attach the guest to that same network.  (Maybe.. I'm
super fuzzy on how this stuff works!)

What do people (especially the Nova folks) think about this?  I would
love to sit down with anyone to talk about this this week in Tokyo, on
either/both the Nova and Manila sides of the picture.

Thanks!
sage
