Re: [Gluster-users] Unable to setup geo replication

2019-11-25 Thread Tan, Jian Chern
Rsync on both the master and slave is version 3.1.3, protocol version 31, so
both are up to date as far as I know.
The Gluster version on both machines is glusterfs 5.10.
The OS on both machines is Fedora 29 Server Edition.

From: Kotresh Hiremath Ravishankar 
Sent: Tuesday, November 26, 2019 3:04 PM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Error code 14 is an rsync IPC error, raised when a pipe/fork fails inside the
rsync code. Please upgrade rsync if you have not already. Also check that the
rsync versions on the master and slave are the same.
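For readers decoding the `return_code=14` above: rsync's exit codes are documented in rsync(1), and a small lookup table makes gsyncd's return codes readable at a glance. A minimal sketch (descriptions taken from the rsync man page; the helper name is illustrative, not part of gluster):

```python
# Subset of rsync exit codes, per the rsync(1) man page.
RSYNC_EXIT_CODES = {
    0: "Success",
    1: "Syntax or usage error",
    12: "Error in rsync protocol data stream",
    14: "Error in IPC code",
    23: "Partial transfer due to error",
    30: "Timeout in data send/receive",
}

def explain_rsync_exit(code):
    """Return a human-readable description for an rsync exit code."""
    return RSYNC_EXIT_CODES.get(code, "Unknown exit code %d" % code)

print(explain_rsync_exit(14))  # Error in IPC code
```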
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern <jian.chern@intel.com> wrote:
I'm new to GlusterFS and am trying to set up geo-replication, with a master
volume being mirrored to a slave volume on another machine. However, I just
can't seem to get it to work after starting the geo-replication volume, with
the logs showing rsync failing with error code 14. I can't seem to find any
info about this online, so any help would be much appreciated.


[Gluster-users] Minutes of Gluster Community Meeting (APAC) 26th Nov 2019

2019-11-25 Thread Shwetha Acharya
# Gluster Community Meeting -  26th Nov, 2019


### Previous Meeting minutes:

- http://github.com/gluster/community

### Date/Time: Check the [community calendar](
https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
)

### Bridge
* APAC friendly hours
  - Tuesday 26th Nov, 2019, 11:30AM IST
  - Bridge: https://bluejeans.com/441850968
* NA/EMEA
  - Every 1st and 3rd Tuesday at 01:00 PM EDT
  - Bridge: https://bluejeans.com/118564314


---

### Attendance
Name (#gluster-dev alias) - company
* Ravi (itisravi)- Red Hat
* Sheetal Pamecha(spamecha) - Red Hat
* Shwetha Acharya (sacharya) - Red Hat
* Amar Tumballi - Consultant
* Vishal Pandey - Red Hat
* Sunil Kumar - Red Hat
* Aravinda VK (aravindavk) - Red Hat
* Rishubh Jain (risjain) - Red Hat
* Rinku Kothiya (rkothiya) - Red Hat
* Ashish Pandey (apandey) - Red Hat
* Kotresh (kotreshhr) - Red Hat

### User stories
*


### Community

* Project metrics:

| Metric | Value |
| --- | --- |
| Coverity | 49 |
| Clang Scan | 58 |
| Test coverage | 70.9% |
| New bugs in last 14 days (master) | 10 |
| New bugs in last 14 days (7.x) | 7 |
| New bugs in last 14 days (6.x) | 6 |
| New bugs in last 14 days (5.x) | 2 |
| Gluster user queries in last 14 days | 48 |
| Total bugs | 343 |
| Total GitHub issues | 428 |


* Any release updates?

* Blocker issues across the project?

* Notable threads from the mailing list



### Conferences / Meetups

Devconf.cz
Important dates:
- CFP closes: closed
- Schedule announcement: TBA
- Event open for registration: Dec 9, 2019
- Last date of registration: TBA
- Event dates: January 24-26, 2020
- Venue: Brno, Czech Republic

[FOSDEM'20](https://fosdem.org/2020/)
Important dates:
- CFP closed: closed for the main track; for the storage devroom, 24 Nov 2019
- Schedule announcement: TBA
- Event open for registration:
- Last date of registration:
- Event dates: 1 & 2 February 2020
- Venue: Brussels, Belgium

Talks related to gluster:
* Evolution of Geo-replication in Gluster - Hari Gowtham
* Blockchain for decentralized storage with gluster - Prajith Prasad




### GlusterFS - v8.0 and beyond

* Proposal -
* Proposed Plan:



### Developer focus

* Any design specs to discuss?



### Component status
* Arbiter - No update
* AFR - no update, except that the lock-healing patch was merged; lock-less
heal-info work is in progress.
* DHT - memory corruption patch to be pushed in release 7 branch
* EC - rename issue design in progress
* FUSE - no update
* POSIX -  no update
* DOC - no update
* Geo Replication - changelog update (Patch:
https://review.gluster.org/#/c/glusterfs/+/23733/ Issue:
https://github.com/gluster/glusterfs/issues/154 )
* libglusterfs - no update
* Glusterd - Sanju is working on lockless `gluster volume info`; Vishal is
working on an issue where a brick fails when brick-mux is on, and a user case dealing with
* Snapshot - no update
* NFS - Proposal to change gNFS status - mai

Re: [Gluster-users] Unable to setup geo replication

2019-11-25 Thread Kotresh Hiremath Ravishankar
Error code 14 is an rsync IPC error, raised when a pipe/fork fails inside the
rsync code. Please upgrade rsync if you have not already. Also check that the
rsync versions on the master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
wrote:

> I’m new to GlusterFS and trying to setup geo-replication with a master
> volume being mirrored to a slave volume on another machine. However I just
> can’t seem to get it to work after starting the geo replication volume with
> the logs showing it failing rsync with error code 14. I can’t seem to find
> any info about this online so any help would be much appreciated.

[Gluster-users] Unable to setup geo replication

2019-11-25 Thread Tan, Jian Chern
I'm new to GlusterFS and am trying to set up geo-replication, with a master
volume being mirrored to a slave volume on another machine. However, I just
can't seem to get it to work after starting the geo-replication volume, with
the logs showing rsync failing with error code 14. I can't seem to find any
info about this online, so any help would be much appreciated.

[2019-11-26 05:46:31.24706] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change  status=Initializing...
[2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor: starting 
gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0  
slave_node=pgsotc11.png.intel.com
[2019-11-26 05:46:31.90935] I [gsyncd(agent 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.92105] I [changelogagent(agent 
/data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent 
listining...
[2019-11-26 05:46:31.93148] I [gsyncd(worker 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file   
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.102422] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH: Initializing 
SSH connection between master and slave...
[2019-11-26 05:46:50.355233] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH connection 
between master and slave established.duration=19.2526
[2019-11-26 05:46:50.355583] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting gluster 
volume locally...
[2019-11-26 05:46:51.404998] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted gluster 
volume duration=1.0492
[2019-11-26 05:46:51.405363] I [subcmds(worker 
/data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] : Worker spawn 
successful. Acknowledging back to monitor
[2019-11-26 05:46:53.431502] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working dir  
  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-glusterimagebrick-jfsotc22-gv0
[2019-11-26 05:46:53.431846] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1271:service_loop] GLUSTER: Register time 
time=1574747213
[2019-11-26 05:46:53.445589] I [gsyncdstatus(worker 
/data/glusterimagebrick/jfsotc22-gv0):281:set_active] GeorepStatus: Worker 
Status Changestatus=Active
[2019-11-26 05:46:53.446184] I [gsyncdstatus(worker 
/data/glusterimagebrick/jfsotc22-gv0):253:set_worker_crawl_status] 
GeorepStatus: Crawl Status Changestatus=History Crawl
[2019-11-26 05:46:53.446367] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1517:crawl] _GMaster: starting history 
crawlturns=1 stime=(1574669325, 0)   etime=1574747213
entry_stime=None
[2019-11-26 05:46:54.448994] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1546:crawl] _GMaster: slave's time  
stime=(1574669325, 0)
[2019-11-26 05:46:54.928395] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1954:syncjob] Syncer: Sync Time Taken 
  job=1   num_files=1 return_code=14  duration=0.0162
[2019-11-26 05:46:54.928607] E [syncdutils(worker 
/data/glusterimagebrick/jfsotc22-gv0):809:errlog] Popen: command returned error 
  cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids 
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-rgpu74f3/de0855b3336b4c3233934fcbeeb3674c.sock 
pgsotc11.png.intel.com:/proc/29549/cwd  error=14
[2019-11-26 05:46:54.935529] I [repce(agent 
/data/glusterimagebrick/jfsotc22-gv0):97:service_loop] RepceServer: terminating 
on reaching EOF.
[2019-11-26 05:46:55.410444] I [monitor(monitor):278:monitor] Monitor: worker 
died in startup phase brick=/data/glusterimagebrick/jfsotc22-gv0
[2019-11-26 05:46:55.412591] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Faulty
[2019-11-26 05:47:05.631944] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Initializing...
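When triaging logs like the above, the failing step can be pulled out programmatically. A minimal sketch, assuming the gsyncd log format shown here (bracketed timestamp, a severity letter, and a trailing `error=` field on failure lines); the function name is illustrative:

```python
import re

# Matches gsyncd log lines carrying an "error=" field, e.g.:
# [2019-11-26 05:46:54.928607] E [syncdutils(worker ...):809:errlog] Popen: ... error=14
LOG_RE = re.compile(
    r"^\[(?P<ts>[\d\- :.]+)\]\s+(?P<level>[IEW])\s+.*?error=(?P<code>\d+)"
)

def find_errors(lines):
    """Yield (timestamp, level, error_code) for lines with an error= field."""
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            yield m.group("ts"), m.group("level"), int(m.group("code"))

log = [
    "[2019-11-26 05:46:53.445589] I [gsyncdstatus(worker x):281:set_active] "
    "GeorepStatus: Worker Status Change status=Active",
    "[2019-11-26 05:46:54.928607] E [syncdutils(worker x):809:errlog] "
    "Popen: command returned error cmd=rsync ... error=14",
]
print(list(find_errors(log)))  # [('2019-11-26 05:46:54.928607', 'E', 14)]
```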


Thanks!
Jian Chern



Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-25 Thread Amar Tumballi
Responses inline.

On Fri, Nov 22, 2019 at 6:04 PM Niels de Vos  wrote:

> On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> > Hi All,
> >
> > As per the discussion on https://review.gluster.org/23645, recently we
> > changed the status of gNFS (gluster's native NFSv3 support) feature to
> > 'Deprecated / Orphan' state. (ref:
> > https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189
> ).
> > With this email, I am proposing to change the status again to 'Odd Fixes'
> > (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> I'd recommend against resurrecting gNFS. The server is not very
> extensible and adding new features is pretty tricky without breaking
> other (mostly undocumented) use-cases.


I too am against adding features/enhancements to gNFS; it doesn't make sense.
We are removing features from glusterfs itself, so adding features to gNFS
after 3 years wouldn't even be feasible.

I guess you missed the intention of my proposal. It was not about
'resurrecting' gNFS to 'Maintained' or 'Supported' status; it was about taking
it out of 'Orphan' status, because there are still users who are happy with it.
Hence I picked the status 'Odd Fixes': as per the MAINTAINERS file, there was
nothing else which would convey *'this feature is still shipped, but we are not
adding any features and are not actively maintaining it'*.



> Even though NFSv3 is stateless,
> the actual usage of NFSv3, mounting and locking is definitely not. The
> server keeps track of which clients have an export mounted, and which
> clients received grants for locks. These things are currently not very
> reliable in combination with high-availability. And there is also the by
> default disabled duplicate-reply-cache (DRC) that has always been very
> buggy (and neither cluster-aware).
>
> If we enable gNFS by default again, we're sending out an incorrect
> message to our users. gNFS works fine for certain workloads and
> environments, but it should not be advertised as 'clustered NFS'.
>
>
I wasn't talking about, or intending, going this route. I am not even talking
about enabling gNFS by default. That would take away our focus from glusterfs
and the different problems we can solve with Gluster alone. I am not sure why
my email was read as proposing a focus on gNFS.


> Instead of going the gNFS route, I suggest to make it easier to deploy
> NFS-Ganesha as that is a more featured, well maintained and can be
> configured for much more reliable high-availability than gNFS.
>
>
I believe this is critical, and we surely need to work on it. But it doesn't
stand in the way of doing 1-2 bug fixes in gNFS (if any) per release.


> If someone really wants to maintain gNFS, I won't object much, but they
> should know that previous maintainers have had many difficulties just
> keeping it working well while other components evolved. Addressing some
> of the bugs/limitations will be extremely difficult and may require
> large rewrites of parts of gNFS.
>

Yes, that awareness is critical, and it should exist.


> Until now, I have not read convincing arguments in this thread that gNFS
> is stable enough to be consumed by anyone in the community. Users should
> be aware of its limitations and be careful what workloads to run on it.
>

In this thread, Xie mentioned that he has been managing gNFS on 1000+ servers
with 2000+ clients (more than 24 gluster clusters overall) for more than 2
years now. If that doesn't count as 'stability', I'm not sure what does.

I agree that users should be careful to use gNFS only for appropriate use
cases. I am even open to adding a warning or console log in the gluster CLI
when 'gluster volume set <volname> nfs.disable false' is performed, saying it
is advised to move to an NFS-Ganesha based approach, and giving a URL in that
message. But the whole point is: when we make a release, we should still ship
gNFS, as there are some users who are very happy with gNFS and whose use cases
are properly handled by gNFS in its current form. Why make them unhappy, or
push them to other projects?
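The CLI warning suggested above could be as small as an intercept in option handling. A hedged sketch in Python (gluster's CLI is actually written in C; the function name, accepted value spellings, and message text here are illustrative assumptions, not gluster code):

```python
def warn_if_gnfs_enabled(option, value):
    """Return a deprecation-style warning string when gNFS is being switched
    on via 'nfs.disable false', else None. Hypothetical helper, not actual
    gluster CLI code."""
    if option == "nfs.disable" and str(value).lower() in ("false", "off", "no", "0"):
        return (
            "WARNING: gNFS is minimally maintained ('Odd Fixes'). "
            "NFS-Ganesha is the recommended NFS server for Gluster; "
            "see the Gluster documentation for migration details."
        )
    return None

# Enabling gNFS triggers the advisory; anything else passes silently.
print(warn_if_gnfs_enabled("nfs.disable", "false"))
```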

At the end of the day, as developers it is our duty to suggest the best
technologies to users, but the intention should always be to solve problems.
If problems are already solved, why resurface them in the name of a better
technology?

So, again, my proposal is to keep gNFS in the codebase (not as Orphan) and to
continue shipping the gNFS binary when we make releases, without making gNFS
enhancements a focus of the project.

Happy to answer if anyone has further queries.

I have sent a patch, https://review.gluster.org/23738, for the same, and I see
people commenting on it already. I agree that Xie's contributions to Gluster
may need to increase (specifically in the gNFS component) before he can be
called a MAINTAINER. Happy to introduce him as 'Peer' and change the title
later when it is time. Jiffin, thanks for volunteering to have a look at
patches when you have ti

[Gluster-users] Updated invitation: Gluster community meeting APAC @ Tue Nov 26, 2019 11:30am - 12:30pm (IST) (gluster-users@gluster.org)

2019-11-25 Thread sacharya
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20191126T06Z
DTEND:20191126T07Z
DTSTAMP:20191126T042520Z
ORGANIZER;CN=sacha...@redhat.com:mailto:sacha...@redhat.com
UID:5ejlkfb6675vd1o0gd741jj...@google.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=gluster-users@gluster.org;X-NUM-GUESTS=0:mailto:gluster-users@glust
 er.org
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED;RSVP=TRUE
 ;CN=sacha...@redhat.com;X-NUM-GUESTS=0:mailto:sacha...@redhat.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=gluster-de...@gluster.org;X-NUM-GUESTS=0:mailto:gluster-devel@glust
 er.org
ATTENDEE;CUTYPE=RESOURCE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TR
 UE;CN=bangalore-engg-vijayanagara-12-p-vc;X-NUM-GUESTS=0:mailto:redhat.com_
 62616e67616c6f72652d76696a6179616e61676172612d31342d364c5864426d4d5366@reso
 urce.calendar.google.com
X-MICROSOFT-CDO-OWNERAPPTID:-1643036435
CREATED:20191126T042141Z
DESCRIPTION:https://bluejeans.com/441850968Previous meeting minutes: ht
 tps://github.com/gluster/communityMeeting minutes: \;https://hackmd.io/o47IlFysRJeDW2H
 zzW5ptQ?both\n\n-::~:~::~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~
 :~:~:~:~:~:~:~:~:~:~:~:~::~:~::-\nPlease do not edit this section of the de
 scription.\n\nView your event at https://www.google.com/calendar/event?acti
 on=VIEW&eid=NWVqbGtmYjY2NzV2ZDFvMGdkNzQxampjYWYgZ2x1c3Rlci11c2Vyc0BnbHVzdGV
 yLm9yZw&tok=MTkjc2FjaGFyeWFAcmVkaGF0LmNvbTEzMzc3ZGNjM2QwY2ZkZGNhNjVlOGZjN2V
 hZDQ1YTViMjVkZGQzYWY&ctz=Asia%2FKolkata&hl=en&es=0.\n-::~:~::~:~:~:~:~:~:~:
 ~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~:~::~:~::-
LAST-MODIFIED:20191126T042520Z
LOCATION:bangalore-engg-vijayanagara-12-p-vc
SEQUENCE:1
STATUS:CONFIRMED
SUMMARY:Gluster community meeting APAC
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR



