Re: [Gluster-users] Old georep files in /var/lib/misc/glusterfsd

2018-06-25 Thread Kotresh Hiremath Ravishankar
Hi Mabi,

You can safely delete old files under /var/lib/misc/glusterfsd.

Thanks,
Kotresh
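
A minimal sketch of the cleanup, assuming the geo-replication sessions were
already stopped and deleted as in the quoted mail below; run it on every
node, and only after confirming no sessions remain:

# gluster volume geo-replication status      (should list no sessions)
# ls -lR /var/lib/misc/glusterfsd            (inspect the leftover files first)
# rm -rf /var/lib/misc/glusterfsd/*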

On Mon, Jun 25, 2018 at 7:30 PM, mabi  wrote:

> Hi,
>
> In the past I was using geo-replication but later removed it from my two
> volumes using:
>
> gluster volume geo-replication ... stop
> gluster volume geo-replication ... delete
>
> Now I found out that I still have some old files in
> /var/lib/misc/glusterfsd belonging to my two volumes which were
> geo-replicated. Can I safely delete everything under
> /var/lib/misc/glusterfsd on all of my nodes? I have a replica 2 with an
> arbiter and I am using GlusterFS 3.12.9.
>
> Thanks,
> M.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users




-- 
Thanks and Regards,
Kotresh H R
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Slow write times to gluster disk

2018-06-25 Thread Raghavendra Gowdappa
On Tue, Jun 26, 2018 at 3:21 AM, Pat Haley  wrote:

>
> Hi Raghavendra,
>
> Setting performance.write-behind off gave a small (~3%) improvement in the
> write speed.
>
> We were unable to turn on "group metadata-cache".  When we try, we get
> errors like
>
> # gluster volume set data-volume group metadata-cache
> '/var/lib/glusterd/groups/metadata-cache' file format not valid.
>
> Was metadata-cache available for gluster 3.7.11? We ask because the
> release notes for 3.11 mention “Feature for metadata-caching/small file
> performance is production ready.”
> (https://gluster.readthedocs.io/en/latest/release-notes/3.11.0/).
>
> Do any of these results suggest anything?  If not, what further tests
> would be useful?
>

Group metadata-cache is just a bundle of options one sets on a volume, so
you can set them manually using the gluster CLI. Following are the options
and their values:

performance.md-cache-timeout=600
network.inode-lru-limit=50000
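
A minimal sketch of applying these by hand, assuming the volume is named
data-volume as in the error message quoted above (the CLI takes "option
value" pairs rather than "option=value"):

# gluster volume set data-volume performance.md-cache-timeout 600
# gluster volume set data-volume network.inode-lru-limit 50000

The stock metadata-cache group file in later releases also enables cache
invalidation; these extra settings are an assumption, not something stated
in this thread:

# gluster volume set data-volume features.cache-invalidation on
# gluster volume set data-volume features.cache-invalidation-timeout 600
# gluster volume set data-volume performance.stat-prefetch on
# gluster volume set data-volume performance.cache-invalidation on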



> Thanks
>
> Pat
>
>
>
>
> On 06/22/2018 07:51 AM, Raghavendra Gowdappa wrote:
>
>
>
> On Thu, Jun 21, 2018 at 8:41 PM, Pat Haley  wrote:
>
>>
>> Hi Raghavendra,
>>
>> Thanks for the suggestions.  Our technician will be in on Monday.  We'll
>> test then and let you know the results.
>>
>> One question I have: is the "group metadata-cache" option supposed to
>> directly impact performance, or is it to help collect data?  If the
>> latter, where will the data be located?
>>
>
> It impacts performance.
>
>
>> Thanks again.
>>
>> Pat
>>
>>
>>
>> On 06/21/2018 01:01 AM, Raghavendra Gowdappa wrote:
>>
>>
>>
>> On Thu, Jun 21, 2018 at 10:24 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> For the case of writes to glusterfs mount,
>>>
>>> I saw in earlier conversations that there are too many lookups but a
>>> small number of writes. Since writes cached in write-behind would
>>> invalidate the metadata cache, lookups won't be absorbed by md-cache. I am
>>> wondering what the results would look like if we turn off
>>> performance.write-behind.
>>>
>>> @Pat,
>>>
>>> Can you set,
>>>
>>> # gluster volume set <volname> performance.write-behind off
>>>
>>
>> Please turn on "group metadata-cache" for write tests too.
>>
>>
>>> and redo the tests writing to the glusterfs mount? Let us know about the
>>> results you see.
>>>
>>> regards,
>>> Raghavendra
>>>
>>> On Thu, Jun 21, 2018 at 8:33 AM, Raghavendra Gowdappa <
>>> rgowd...@redhat.com> wrote:
>>>


 On Thu, Jun 21, 2018 at 8:32 AM, Raghavendra Gowdappa <
 rgowd...@redhat.com> wrote:

> For the case of reading from a Glusterfs mount, read-ahead should help.
> However, there are known issues with read-ahead [1][2]. To work around
> these, can you try the following:
>
> 1. Turn off performance.open-behind
> # gluster volume set <volname> performance.open-behind off
>
> 2. Enable the group metadata-cache
> # gluster volume set <volname> group metadata-cache
>

 [1]  https://bugzilla.redhat.com/show_bug.cgi?id=1084508
 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1214489


>
>
> On Thu, Jun 21, 2018 at 5:00 AM, Pat Haley  wrote:
>
>>
>> Hi,
>>
>> We were recently revisiting our problems with the slowness of gluster
>> writes (http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html).
>> Specifically we were testing the suggestions in a recent post
>> (http://lists.gluster.org/pipermail/gluster-users/2018-March/033699.html).
>> The first two suggestions (specifying a negative-timeout in the mount
>> settings or adding rpc-auth-allow-insecure to glusterd.vol) did not improve
>> our performance, while setting "disperse.eager-lock off" provided a tiny
>> (5%) speed-up.
>>
>> Some of the various tests we have tried earlier can be seen in the
>> links below.  Do any of the above observations suggest what we could try
>> next to either improve the speed or debug the issue?  Thanks
>>
>> http://lists.gluster.org/pipermail/gluster-users/2017-June/031565.html
>> http://lists.gluster.org/pipermail/gluster-users/2017-May/030937.html
>>
>> Pat
>>
>> --
>>
>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>> Pat Haley  Email:  pha...@mit.edu
>> Center for Ocean Engineering   Phone:  (617) 253-6824
>> Dept. of Mechanical EngineeringFax:(617) 253-8125
>> MIT, Room 5-213http://web.mit.edu/phaley/www/
>> 77 Massachusetts Avenue
>> Cambridge, MA  02139-4301
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

>>>
>>
>> --
>>
>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>> Pat Haley  Email:  pha...@mit.edu
>> Center for Ocean Engineering   Phone:  (617) 253-6824
>> Dept. of Mechanical EngineeringFax:(617) 253-8125
>> MIT, Room 5-213http://web.mit.edu/phaley/www/
>> 77 Massachusetts Avenue
>> Cambridge, MA  02139-4301

Re: [Gluster-users] Slow write times to gluster disk

2018-06-25 Thread Pat Haley


Hi Raghavendra,

Setting performance.write-behind off gave a small (~3%) improvement in the
write speed.


We were unable to turn on "group metadata-cache".  When we try, we get
errors like


# gluster volume set data-volume group metadata-cache
'/var/lib/glusterd/groups/metadata-cache' file format not valid.

Was metadata-cache available for gluster 3.7.11? We ask because the
release notes for 3.11 mention “Feature for metadata-caching/small file
performance is production ready.”
(https://gluster.readthedocs.io/en/latest/release-notes/3.11.0/).


Do any of these results suggest anything?  If not, what further tests 
would be useful?


Thanks

Pat

Re: [Gluster-users] gluster becomes too slow, need frequent stop-start or reboot

2018-06-25 Thread Anh Vo
Anyone able to help us troubleshoot this issue? This is getting worse. We
are back to our 3-replica setup but the issue is still happening. What we
have found is that if I bring just one set of bricks offline, performance is
super fast. For example, with brick groups (0 1 2) (3 4 5) (6 7 8) (9 10 11),
taking bricks 0 3 6 9, or bricks 1 4 7 10, offline makes everything fast; the
moment all bricks are online, things become very slow. It seems like gluster
is having some sort of lock contention between its members. During the period
of slowness, a gluster profile shows excessive time spent in LOOKUP and
FINODELK:

 %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls        Fop
     11.60     752.64 us      10.00 us    2647757.00 us    272476323     LOOKUP
     15.83    6884.12 us      29.00 us    2190470.00 us     40626259      WRITE
     27.84   80480.22 us      40.00 us   11731910.00 us      6114072   FXATTROP
     37.83  105125.18 us      12.00 us  276088722.00 us      6359515   FINODELK
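
For reference, numbers like the above come from gluster's built-in profiler;
a minimal sketch of collecting them, with <volname> as a placeholder:

# gluster volume profile <volname> start
  (reproduce the slow workload)
# gluster volume profile <volname> info
# gluster volume profile <volname> stop

And a sketch of the bricks-offline experiment described above, assuming the
brick processes are simply killed by PID and later restarted:

# gluster volume status <volname>         (note the PIDs of bricks 0 3 6 9)
# kill <brick-pid>                        (repeat for each brick in the set)
# gluster volume start <volname> force    (brings the killed bricks back)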

We have about one or two months before we need to decide whether to keep
Gluster, and so far it has been a lot of headache.

On Thu, Jun 14, 2018 at 10:18 AM, Anh Vo  wrote:

> Our gluster keeps getting to a state where it becomes painfully slow and
> many of our applications time out on read/write calls. When this happens a
> simple ls at the top-level directory from the mount takes somewhere between
> 8-25s (normally it is very fast, at most 1-2s). The top-level directory
> only has about 10 folders.
>
> The two methods to mitigate this problem have been 1) restarting all GFS
> servers or 2) stopping/starting the volume (sketched after this message).
> With 2) it takes somewhere between half an hour and an hour for gluster to
> get back to its desired performance.
>
> So far the logs don't show anything unusual, but perhaps I don't know what
> I should be looking for in them. Even when gluster is fully functional we
> see lots of log messages; it is hard to tell which errors are harmless and
> which are not.
>
> This issue does not seem to happen with our 3-replica glusters, only with
> 2-replica-1-arbiter and 2-replica. However, our 3-replica glusters are only
> 30% full while the 2-replica ones are about 80% full.
> We're running 3.12.9 for the servers. The clients are 3.8.15, but we
> notice the slowness of operations on 3.12.9 clients as well.
>
> Configuration: 12 GFS servers, one brick per server, replica 2, 80T each
> brick. We used to have arbiters but thought the arbiters were causing the
> slowdown, so we took them out. Apparently that's not the case.
>
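
For reference, a minimal sketch of the stop/start mitigation mentioned
above, with <volname> as a placeholder; note that stopping a volume
disconnects all clients:

# gluster volume stop <volname>
# gluster volume start <volname>
# gluster volume status <volname>     (confirm all bricks are back online)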
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Old georep files in /var/lib/misc/glusterfsd

2018-06-25 Thread mabi
Hi,

In the past I was using geo-replication but later removed it from my two
volumes using:

gluster volume geo-replication ... stop
gluster volume geo-replication ... delete

Now I found out that I still have some old files in /var/lib/misc/glusterfsd
belonging to my two volumes which were geo-replicated. Can I safely delete 
everything under /var/lib/misc/glusterfsd on all of my nodes? I have a replica 
2 with an arbiter and I am using GlusterFS 3.12.9.

Thanks,
M.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] disable automatic share creation in smb.conf

2018-06-25 Thread 김지현 (Researcher)



You can disable it by removing the hook script at /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh.
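
A minimal sketch, assuming the hook path above; glusterd should skip hook
scripts that are not executable, so either of the following (on every node)
ought to stop the share from being re-created on volume start:

# chmod -x /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh

or move it aside:

# mv /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh \
     /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh.disabled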


- Original Message -
From: lejeczek 
To: gluster-users@gluster.org
Date: Mon Jun 25 19:06:59 GMT+09:00 2018
Subject: [Gluster-users] disable automatic share creation in smb.conf

hi guys

is that configurable somewhere as a global setting?

many thanks, L.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] disable automatic share creation in smb.conf

2018-06-25 Thread lejeczek

hi guys

is that configurable somewhere as a global setting?

many thanks, L.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users