Re: [Gluster-users] NFS-Ganesha question

2018-10-15 Thread Jiffin Tony Thottan

CCing ganesha list as well


On Monday 15 October 2018 07:44 PM, Renaud Fortier wrote:


Hi,

We are currently facing a strange behaviour with our cluster. Right now
I'm running a bitrot scrub against the volume, but I'm not sure it will
help in finding the problem. Anyway, my question is about nfs-ganesha and
NFSv4. Since this strange behaviour began, I have read a lot, and I found
that idmapd is needed for NFSv4. If I run rpcinfo or ps -ef | grep idmapd
on our nodes, I don't see it.


Is rpc.idmapd supposed to be running when using nfs-ganesha 2.6.3 with
gluster 4.1.5?




IMO rpc.idmapd as a service is not required for ganesha; ganesha uses APIs
from "libnfsidmap" for id mapping.

CCing the ganesha devel list as well to confirm the same.

--
Jiffin
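
A quick way to sanity-check this (a rough sketch; the binary path and the
idmapd.conf location are assumptions that vary by distribution) is to confirm
that ganesha is linked against libnfsidmap and that an id-mapping domain is
configured:

    # check that the ganesha daemon is linked against libnfsidmap
    ldd /usr/bin/ganesha.nfsd | grep -i nfsidmap

    # the id-mapping domain is usually read from /etc/idmapd.conf
    grep -i '^Domain' /etc/idmapd.conf

If the library shows up in the ldd output, id mapping happens inside the
ganesha process itself, so a separate rpc.idmapd daemon would not be expected
in rpcinfo or ps output.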


Thank you



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] Gluster client

2018-10-15 Thread Dmitry Melekhov

15.10.2018 23:33, Alfredo De Luca wrote:

Hi all.
I have 3 glusterfs server nodes and multiple clients, and as I am a bit of a
newbie on this, I am not sure how to set up the clients correctly.
1. The clients mount the glusterfs volume via fstab, but when I reboot them
they don't mount it automatically.
2. Not sure what exactly to put in the fstab, as right now someone had:
:/vol1 /volume1 glusterfs default,netdev 0 0




Dunno, we run gluster on the same nodes as the VMs, so we put localhost in
the domain definitions.

In your situation I'd use something like VRRP (keepalived, for instance).
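
A minimal keepalived sketch for such a floating mount address might look like
the one below (the interface name, router id, and virtual IP are placeholders,
not values from this setup):

    vrrp_instance gluster_vip {
        state MASTER
        interface eth0              # NIC carrying the gluster traffic
        virtual_router_id 51
        priority 100                # lower this on the BACKUP nodes
        advert_int 1
        virtual_ipaddress {
            192.168.100.10/24       # floating address the clients mount from
        }
    }

Keep in mind that the native FUSE client only contacts the mount host to
fetch the volume file; after that it talks to all bricks directly, so a
floating address (or the backup-volfile-servers mount option) mainly protects
the initial mount.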


But what happens when NODE1 is unavailable?

Clients are CentOS 7.5, and so are the servers.

Thanks

--
/*Alfredo*/



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] Gluster client

2018-10-15 Thread Alfredo De Luca
Hi Diego... sorry, it's a typo here in the email... but I've put _netdev in
the fstab.

Thanks


On Mon, Oct 15, 2018 at 9:33 PM Alfredo De Luca 
wrote:

> Hi all.
> I have 3 glusterfs server nodes and multiple clients, and as I am a bit of a
> newbie on this, I am not sure how to set up the clients correctly.
> 1. The clients mount the glusterfs volume via fstab, but when I reboot them
> they don't mount it automatically.
> 2. Not sure what exactly to put in the fstab, as right now someone had:
> :/vol1 /volume1 glusterfs default,netdev 0 0
>
> But what happens when NODE1 is unavailable?
>
> Clients are CentOS 7.5, and so are the servers.
>
> Thanks
>
> --
> *Alfredo*
>
>

-- 
*Alfredo*
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster client

2018-10-15 Thread Diego Remolina
You may have a typo: it should be "_netdev", and you are missing the "_".

Give that a try.

Diego
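
For reference, a working /etc/fstab line usually looks something like the
following (host and volume names are placeholders, not taken from this
thread):

    node1:/vol1  /volume1  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0

The backup-volfile-servers option (if your glusterfs version supports it)
gives the client other hosts to try for the volume file when the first one
is unreachable at mount time.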


On Mon, Oct 15, 2018, 15:33 Alfredo De Luca 
wrote:

> Hi all.
> I have 3 glusterfs server nodes and multiple clients, and as I am a bit of a
> newbie on this, I am not sure how to set up the clients correctly.
> 1. The clients mount the glusterfs volume via fstab, but when I reboot them
> they don't mount it automatically.
> 2. Not sure what exactly to put in the fstab, as right now someone had:
> :/vol1 /volume1 glusterfs default,netdev 0 0
>
> But what happens when NODE1 is unavailable?
>
> Clients are CentOS 7.5, and so are the servers.
>
> Thanks
>
> --
> *Alfredo*
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster client

2018-10-15 Thread Alfredo De Luca
Hi all.
I have 3 glusterfs server nodes and multiple clients, and as I am a bit of a
newbie on this, I am not sure how to set up the clients correctly.
1. The clients mount the glusterfs volume via fstab, but when I reboot them
they don't mount it automatically.
2. Not sure what exactly to put in the fstab, as right now someone had:
:/vol1 /volume1 glusterfs default,netdev 0 0

But what happens when NODE1 is unavailable?

Clients are CentOS 7.5, and so are the servers.

Thanks

-- 
*Alfredo*
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Maintainer meeting minutes : 15th Oct, 2018

2018-10-15 Thread Shyam Ranganathan
### BJ Link
* Bridge: https://bluejeans.com/217609845
* Watch: 

### Attendance
* Nigel, Nithya, Deepshikha, Akarsha, Kaleb, Shyam, Sunny

### Agenda
* AI from previous meeting:
  - Glusto-Test completion on release-5 branch - On Glusto team
  - Vijay will take this on.
  - He will be focusing on it next week.
  - Glusto for 5 may not be happening before the release, but it looks like
we'll do it right after the release.

- Release 6 Scope
- Will be sending out an email today/tomorrow for scope of release 6.
- Send a biweekly email focusing on the glusterfs release focus areas.

- GCS scope into release-6 scope and get issues marked against the same
- For release-6 we want a thinner stack. This means we'd be removing
xlators from the code that Amar has already sent an email about.
- Locking support for gluster-block. Design still WIP. One of the
big ticket items that should make it to release 6. Includes reflink
support and enough locking support to ensure snapshots are consistent.
- GD1 vs GD2. We've been talking about it since release-4.0. We need
to call this out and understand if we will have GD2 as the default. This is
a call-out for a plan for when we want to make this transition.

- Round Table
- [Nigel] Minimum build and CI health for all projects (including
sub-projects).
- This was primarily driven for GCS
- But, we need this even otherwise to sustain quality of projects
- AI: Call out on lists around release 6 scope, with a possible
list of sub-projects
- [Kaleb] SELinux package status
- Waiting on testing to understand if this is done right
- Can be released when required, as it is a separate package
- For release-5, the SELinux policies are in the Fedora packages
- Need to coordinate with Fedora release, as content is in 2
packages
- AI: Nigel to follow up and get updates by the next meeting

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] NFS-Ganesha question

2018-10-15 Thread Renaud Fortier
Hi,
We are currently facing a strange behaviour with our cluster. Right now I'm
running a bitrot scrub against the volume, but I'm not sure it will help in
finding the problem. Anyway, my question is about nfs-ganesha and NFSv4.
Since this strange behaviour began, I have read a lot, and I found that
idmapd is needed for NFSv4. If I run rpcinfo or ps -ef | grep idmapd on our
nodes, I don't see it.

Is rpc.idmapd supposed to be running when using nfs-ganesha 2.6.3 with gluster
4.1.5?

Thank you
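
As a side note on the scrub, its progress and any files it has flagged can
usually be checked with the bitrot CLI (the volume name below is a
placeholder):

    gluster volume bitrot <VOLNAME> scrub status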
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster connection interrupted during transfer

2018-10-15 Thread Richard Neuboeck
Hi Vijay,

sorry it took so long. I've upgraded the gluster server and client to
the latest packages (3.12.14-1.el7.x86_64) available in CentOS.

Incredibly, my first test after the update worked perfectly! I'll do
another couple of rsyncs, maybe apply the performance improvements again,
and do statedumps all the way.

I'll report back if there are any more problems or if they are resolved.

Thanks for the help so far!
Cheers
Richard
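
For the record, on a FUSE mount a client statedump can usually be triggered
by sending SIGUSR1 to the glusterfs client process; by default the dump is
written under /var/run/gluster (see the statedump docs referenced below).
A rough sketch:

    # find the glusterfs client process for the affected mount
    pgrep -af glusterfs

    # ask it to write a statedump into /var/run/gluster
    kill -USR1 <PID>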


On 25.09.18 00:39, Vijay Bellur wrote:
> Hello Richard,
> 
> Thank you for the logs.
> 
> I am wondering if this could be a different memory leak than the one
> addressed in the bug. Would it be possible for you to obtain a
> statedump of the client so that we can understand the memory allocation
> pattern better? Details about gathering a statedump can be found at [1].
> Please ensure that /var/run/gluster is present before triggering a
> statedump.
> 
> Regards,
> Vijay
> 
> [1] https://docs.gluster.org/en/v3/Troubleshooting/statedump/
> 
> 
> On Fri, Sep 21, 2018 at 12:14 AM Richard Neuboeck wrote:
> 
> Hi again,
> 
> in my limited, non-full-time-programmer understanding, it's a memory
> leak in the gluster fuse client.
> 
> Should I reopen the mentioned bug report or open a new one? Or would the
> community prefer an entirely different approach?
> 
> Thanks
> Richard
> 
> On 13.09.18 10:07, Richard Neuboeck wrote:
> > Hi,
> >
> > I've created excerpts from the brick and client logs +/- 1 minute around
> > the kill event. Still, the logs are ~400-500 MB, so I will put them
> > somewhere to download, since I have no idea what I should be looking
> > for and skimming them didn't reveal obvious problems to me.
> >
> > http://www.tbi.univie.ac.at/~hawk/gluster/brick_3min_excerpt.log
> > http://www.tbi.univie.ac.at/~hawk/gluster/mnt_3min_excerpt.log
> >
> > I was pointed in the direction of the following bug report:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1613512
> > It sounds right but seems to have been addressed already.
> >
> > If there is anything I can do to help solve this problem please let
> > me know. Thanks for your help!
> >
> > Cheers
> > Richard
> >
> >
> > On 9/11/18 10:10 AM, Richard Neuboeck wrote:
> >> Hi,
> >>
> >> Since I feared that the logs would fill up the partition (again), I
> >> checked the systems daily and finally found the reason. The glusterfs
> >> process on the client runs out of memory and gets killed by OOM
> >> after about four days. Since rsync runs for a couple of days longer
> >> till it ends, I never checked the whole time frame in the system logs
> >> and never stumbled upon the OOM message.
> >>
> >> Running out of memory on a 128GB RAM system even with a DB occupying
> >> ~40% of that is kind of strange though. Might there be a leak?
> >>
> >> But this would explain the erratic behavior I've experienced over the
> >> last 1.5 years while trying to work with our homes on glusterfs.
> >>
> >> Here is the kernel log message for the killed glusterfs process.
> >> https://gist.github.com/bleuchien/3d2b87985ecb944c60347d5e8660e36a
> >>
> >> I'm checking the brick and client trace logs. But those are
> >> respectively 1TB and 2TB in size, so searching in them takes a while.
> >> I'll be creating gists for both logs about the time when the process
> >> died.
> >>
> >> As soon as I have more details I'll post them.
> >>
> >> Here you can see a graphical representation of the memory usage of
> >> this system: https://imgur.com/a/4BINtfr
> >>
> >> Cheers
> >> Richard
> >>
> >>
> >>
> >> On 31.08.18 08:13, Raghavendra Gowdappa wrote:
> >>>
> >>>
> >>> On Fri, Aug 31, 2018 at 11:11 AM, Richard Neuboeck wrote:
> >>>
> >>>     On 08/31/2018 03:50 AM, Raghavendra Gowdappa wrote:
> >>>     > +Mohit. +Milind
> >>>     >
> >>>     > @Mohit/Milind,
> >>>     >
> >>>     > Can you check logs and see whether you can find anything
> >>>     > relevant?
> >>>
> >>>     From glancing at the system logs, nothing out of the ordinary
> >>>     occurred. However, I'll start another rsync and take a closer
> >>>     look. It will take a few days.
> >>>
> >>>     >
> >>>     > On Thu, Aug 30, 2018 at 7:04 PM, Richard Neuboeck