Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-25 Thread Amar Tumballi
Responses inline.

On Fri, Nov 22, 2019 at 6:04 PM Niels de Vos  wrote:

> On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> > Hi All,
> >
> > As per the discussion on https://review.gluster.org/23645, recently we
> > changed the status of gNFS (gluster's native NFSv3 support) feature to
> > 'Deprecated / Orphan' state. (ref:
> > https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> > With this email, I am proposing to change the status again to 'Odd Fixes'
> > (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> I'd recommend against resurrecting gNFS. The server is not very
> extensible and adding new features is pretty tricky without breaking
> other (mostly undocumented) use-cases.


I too am against adding features/enhancements to gNFS; it doesn't make
sense. We are removing features from glusterfs itself, so adding features to
gNFS after 3 years wouldn't even be feasible.

I guess you missed the intention of my proposal. It was not about
'resurrecting' gNFS to 'Maintained' or 'Supported' status. It was about
taking it out of 'Orphan' status, because there are still users who are
'happy' with it. Hence I picked the status 'Odd Fixes': as per the
MAINTAINERS file, it was the only status that conveys 'this feature is
still shipped, but we are not adding any features and are not actively
maintaining it'.



> Even though NFSv3 is stateless,
> the actual usage of NFSv3, mounting and locking, is definitely not. The
> server keeps track of which clients have an export mounted, and which
> clients received grants for locks. These things are currently not very
> reliable in combination with high-availability. And there is also the
> duplicate-reply-cache (DRC), disabled by default, which has always been
> very buggy (and is not cluster-aware either).
>
> If we enable gNFS by default again, we're sending out an incorrect
> message to our users. gNFS works fine for certain workloads and
> environments, but it should not be advertised as 'clustered NFS'.
>
>
I wasn't suggesting, or intending, to go that route. I am not even talking
about making gNFS enabled by default. That would take away our focus from
glusterfs and the different problems we can solve with Gluster alone. I am
not sure why my email was read as proposing a renewed focus on gNFS.


> Instead of going the gNFS route, I suggest making it easier to deploy
> NFS-Ganesha, as it is more fully featured, well maintained, and can be
> configured for much more reliable high-availability than gNFS.
>
>
I believe this is critical, and we surely need to work on it. But it doesn't
get in the way of doing 1-2 bug fixes in gNFS (if any) per release.


> If someone really wants to maintain gNFS, I won't object much, but they
> should know that previous maintainers have had many difficulties just
> keeping it working well while other components evolved. Addressing some
> of the bugs/limitations will be extremely difficult and may require
> large rewrites of parts of gNFS.
>

Yes, that awareness is critical, and it should exist.


> Until now, I have not read convincing arguments in this thread that gNFS
> is stable enough to be consumed by anyone in the community. Users should
> be aware of its limitations and be careful what workloads to run on it.
>

In this thread, Xie mentioned that he has been managing gNFS on 1000+ servers
with 2000+ clients (more than 24 gluster clusters overall) for more than 2
years now. If that doesn't count as 'stability', I am not sure what does.

I agree that users should be careful to run gNFS only for the use cases it
properly handles. I am even open to adding a warning or console log in the
gluster CLI when 'gluster volume set <volname> nfs.disable false' is run,
saying that moving to the NFS-Ganesha based approach is advised, with a URL
for details in that message. But the whole point is: when we make a release,
we should still ship gNFS, because there are users who are very happy with
it and whose use cases are properly handled by gNFS in its current form. Why
make them unhappy, or push them to other projects?
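
To illustrate, the CLI interaction could look something like the sketch
below. The warning text and docs pointer are only an illustration of the
idea, not an implemented message, and 'myvol' is an illustrative volume name:

    $ gluster volume set myvol nfs.disable false

    # Hypothetical warning printed before the change is applied:
    Warning: gNFS is in 'Odd Fixes' state; no new features will be added.
             Consider the NFS-Ganesha based approach (see <docs URL>).
    volume set: success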

At the end of the day, as developers it is our duty to suggest the best
technologies to users, but the intention should always be to solve their
problems. If a problem is already solved, why resurface it in the name of
better technology?

So, again, my proposal is to keep gNFS in the codebase (not as Orphan), and
to continue shipping the gNFS binary when we make releases; it is not to
shift the project's focus to working on enhancements to gNFS.

Happy to answer if anyone has further queries.

I have sent a patch, https://review.gluster.org/23738, for the same, and I
see people already commenting on it. I agree that Xie's contributions to
Gluster may need to increase (specifically in the gNFS component) before he
can be called MAINTAINER. I am happy to introduce him as 'Peer' and change
the title later when the time comes. Jiffin, thanks for volunteering to have
a look at patches when you have time.

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-22 Thread Niels de Vos
On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> Hi All,
> 
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Deprecated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)

I'd recommend against resurrecting gNFS. The server is not very
extensible and adding new features is pretty tricky without breaking
other (mostly undocumented) use-cases. Even though NFSv3 is stateless,
the actual usage of NFSv3, mounting and locking, is definitely not. The
server keeps track of which clients have an export mounted, and which
clients received grants for locks. These things are currently not very
reliable in combination with high-availability. And there is also the
duplicate-reply-cache (DRC), disabled by default, which has always been
very buggy (and is not cluster-aware either).
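
As a small illustration of that server-side state: assuming the server
answers the standard MOUNT3 dump call, you can list which clients it
believes currently have an export mounted (host and volume names below are
illustrative). It is exactly this state that has to survive a failover:

    $ showmount -a gluster-server1
    All mount points on gluster-server1:
    client-a:/myvol
    client-b:/myvol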

If we enable gNFS by default again, we're sending out an incorrect
message to our users. gNFS works fine for certain workloads and
environments, but it should not be advertised as 'clustered NFS'.

Instead of going the gNFS route, I suggest making it easier to deploy
NFS-Ganesha, as it is more fully featured, well maintained, and can be
configured for much more reliable high-availability than gNFS.
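
For comparison, exporting a Gluster volume through NFS-Ganesha is mostly a
matter of one EXPORT block using the Gluster FSAL. A minimal sketch, with
illustrative host and volume names (a real HA deployment needs more than
this on top):

    # /etc/ganesha/ganesha.conf (minimal sketch)
    EXPORT {
        Export_Id = 1;               # unique id for this export
        Path = "/myvol";
        Pseudo = "/myvol";           # NFSv4 pseudo-filesystem path
        Access_Type = RW;
        Squash = No_root_squash;
        FSAL {
            Name = GLUSTER;          # Gluster FSAL, talks to the volume
            Hostname = "gluster-server1";
            Volume = "myvol";
        }
    }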

If someone really wants to maintain gNFS, I won't object much, but they
should know that previous maintainers have had many difficulties just
keeping it working well while other components evolved. Addressing some
of the bugs/limitations will be extremely difficult and may require
large rewrites of parts of gNFS.

Until now, I have not read convincing arguments in this thread that gNFS
is stable enough to be consumed by anyone in the community. Users should
be aware of its limitations and be careful what workloads to run on it.

HTH,
Niels




Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-21 Thread Kaleb Keithley
I personally wouldn't call three years ago — when we started to deprecate
it, in glusterfs-3.9 — a recent change.

As a community the decision was made to move to NFS-Ganesha as the
preferred NFS solution, but it was agreed to keep the old code in the tree
for those who wanted it. There have been plans to drop it from the
community packages for most of those three years, but we didn't follow
through across the board until fairly recently. Perhaps the most telling
piece of data is that it's been gone from the packages in the CentOS
Storage SIG in glusterfs-4.0, -4.1, -5, -6, and -7 with no complaints that I
can recall.

Ganesha is a preferable solution because it supports NFSv4, NFSv4.1,
NFSv4.2, and pNFS, in addition to legacy NFSv3. More importantly, it is
actively developed, maintained, and supported, both in the community and
commercially. There are several vendors selling it, or support for it; and
there are community packages for it for all the same distributions that
Gluster packages are available for.

Out in the world, the default these days is NFSv4, specifically v4.2 or
v4.1 depending on how recent your Linux kernel is. In the Linux kernel,
client mounts start by negotiating for v4.2 and work down to v4.1, v4.0,
and only as a last resort v3. NFSv3 client support in the Linux kernel
largely exists at this point only because of the large number of legacy
servers still running that can't do anything higher than v3. The Linux NFS
developers would drop the v3 support in a heartbeat if they could.
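
That negotiation is easy to observe from a client (server name and paths
below are illustrative; a gNFS export would need vers=3 pinned explicitly,
since gNFS only serves v3):

    # Default mount: the client negotiates the highest version offered
    $ mount -t nfs server:/export /mnt
    $ nfsstat -m    # shows the mount options, including the negotiated vers=

    # Pinning NFSv3 explicitly, e.g. against a gNFS export:
    $ mount -t nfs -o vers=3 server:/myvol /mnt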

IMO, providing it, and calling it maintained, only encourages people to
keep using a dead end solution. Anyone in favor of bringing back NFSv2,
SSHv1, or X10R4? No? I didn't think so.

The recent issue[1] where someone built gnfs in glusterfs-7.0 on CentOS7
strongly suggests to me that gnfs is not actually working well. Three years
of no maintenance seems to have taken its toll.

Other people are more than welcome to build their own packages from the
src.rpms and/or tarballs that are available from gluster — and support
them. It's still in the source and there are no plans to remove it. (Unlike
most of the other deprecated features which were recently removed in
glusterfs-7.)
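
For anyone who does build their own, the '--with gnfs' / '--enable-gnfs'
switches mentioned in the quoted proposal below are the relevant knobs. A
rough sketch, assuming the spec file still carries the gnfs conditional
(package version and file names are illustrative):

    # Rebuild the community src.rpm with gNFS enabled:
    $ rpmbuild --rebuild --with gnfs glusterfs-7.0-1.el7.src.rpm

    # Or from a release tarball:
    $ ./configure --enable-gnfs
    $ make && make install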



[1] https://github.com/gluster/glusterfs/issues/764

On Thu, Nov 21, 2019 at 5:31 AM Amar Tumballi  wrote:

> Hi All,
>
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Deprecated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> TL;DR;
>
> I understand the current maintainers are not able to focus on maintaining
> it, as the focus of the project, as described earlier, is on the
> NFS-Ganesha based integration with glusterfs. But I am volunteering, along
> with Xie Changlong (currently working at Chinamobile), to keep the feature
> running as it used to in previous versions. Hence the status of 'Odd
> Fixes'.
>
> Before sending the patch to make these changes, I am proposing it here
> now, as gNFS is not even shipped with the latest glusterfs-7.0 releases. I
> have heard from some users that it was working great for them with earlier
> releases, as all they wanted was NFSv3 support, and not many of gNFS's
> other features. Also note that, even though the packages are not built,
> none of the regression tests using gNFS have been stopped on latest
> master, so it has been working the same for at least the last 2 years.
>
> Through this email, I request the package maintainers to please add
> '--with gnfs' (or --enable-gnfs) back to their release scripts, so that
> users wanting to use gNFS can happily continue to use it. Also, a note to
> users/admins: the status is 'Odd Fixes', so don't expect any
> 'enhancements' to the features provided by gNFS.
>
> Happy to hear feedback, if any.
>
> Regards,
> Amar
>