Re: [Gluster-users] Slow write times to gluster disk

2017-04-07 Thread Ravishankar N

Hi Pat,

I'm assuming you are using gluster native (fuse mount). If it helps, you 
could try mounting it via gluster NFS (gnfs) and then see if there is an 
improvement in speed. Fuse mounts are slower than gnfs mounts but you 
get the benefit of avoiding a single point of failure: unlike fuse 
mounts, if the gluster node containing the gnfs server goes down, all 
mounts done using that node will fail. For fuse mounts, you could try 
tweaking the write-behind xlator settings to see if it helps. See the 
performance.write-behind and performance.write-behind-window-size 
options in `gluster volume set help`. Of course, even for gnfs mounts, 
you can achieve fail-over by using CTDB.
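
For reference, a minimal CLI sketch of inspecting and tuning those
write-behind options. The volume name "gdata" and the 4MB window size are
assumptions for illustration, not values confirmed in this thread:

# list the documented write-behind options and their defaults
gluster volume set help | grep -A 3 write-behind

# example tuning on a volume assumed to be named "gdata"
gluster volume set gdata performance.write-behind on
gluster volume set gdata performance.write-behind-window-size 4MB

# or disable write-behind entirely to see whether it is the bottleneck
gluster volume set gdata performance.write-behind off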


Thanks,
Ravi

On 04/08/2017 12:07 AM, Pat Haley wrote:


Hi,

We noticed a dramatic slowness when writing to a gluster disk when 
compared to writing to an NFS disk. Specifically when using dd (data 
duplicator) to write a 4.3 GB file of zeros:


  * on NFS disk (/home): 9.5 Gb/s
  * on gluster disk (/gdata): 508 Mb/s

The gluster disk is 2 bricks joined together, no replication or 
anything else. The hardware is (literally) the same:


  * one server with 70 hard disks  and a hardware RAID card.
  * 4 disks in a RAID-6 group (the NFS disk)
  * 32 disks in a RAID-6 group (the max allowed by the card, /mnt/brick1)
  * 32 disks in another RAID-6 group (/mnt/brick2)
  * 2 hot spares

Some additional information and more tests results (after changing the 
log level):


glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 
[Invader] (rev 02)




*Create the file to /gdata (gluster)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.91876 s, *546 MB/s*

*Create the file to /home (ext4)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.686021 s, *1.5 GB/s* - 3 times as fast



*Copy from /gdata to /gdata (gluster to gluster)*
[root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 101.052 s, *10.4 MB/s* - realllyyy 
slooowww



*Copy from /gdata to /gdata, 2nd time (gluster to gluster)*
[root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 92.4904 s, *11.3 MB/s* - realllyyy 
slooowww again




*Copy from /home to /home (ext4 to ext4)*
[root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 3.53263 s, *297 MB/s* - 30 times as fast


*Copy from /home to /home (ext4 to ext4)*
[root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero3
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 4.1737 s, *251 MB/s* - 30 times as fast


As a test, can we copy data directly to the xfs mountpoint 
(/mnt/brick1) and bypass gluster?



Any help you could give us would be appreciated.

Thanks

--

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  pha...@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] "Fake" distributed-replicated volume

2017-04-07 Thread Ravishankar N

On 04/07/2017 11:34 PM, Jamie Lawrence wrote:

Greetings, Glusterites -

I have a suboptimal situation, and am wondering if there is any way to create a 
replica-3 distributed/replicated volume with three machines. I saw in the docs 
that the create command will fail with multiple bricks on the same peer; is 
there a way around that/some other way to achieve this?

The reason for this terrible idea is entirely about deadlines interacting with 
reality. I need to go into production with a new volume I’d really like to be 
D/R before additional machines arrive.
You can use the force option at the end of the `volume create` command 
to make it work for whatever brick placement strategy you wish, but since 
you already have 3 machines, there shouldn't be a need to do that. The 
following layout should work:


Node1   Node2   Node3
-----   -----   -----
B1      B1'     B1"   <-- 3 bricks of a replica subvol
B2      B2'     B2"
B3      B3'     B3"

and so on. `Volume create` would complain only when you try to place 
bricks of the same replica subvol on the same node.
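
As an illustration of that layout, a create command along the following
lines should be accepted without the force option (the volume name,
hostnames and brick paths below are placeholders, not taken from this
thread):

gluster volume create myvol replica 3 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
    node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 \
    node1:/bricks/b3 node2:/bricks/b3 node3:/bricks/b3

Each consecutive group of three bricks forms one replica subvolume, so no
subvolume ends up with two of its bricks on the same node.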



Hope that helps,
Ravi

Thanks,

-j
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Server source code installation

2017-04-07 Thread Amar Tumballi
On Fri, Apr 7, 2017 at 11:24 PM, Tahereh Fattahi 
wrote:

> Hi
> I have 3 gluster servers and one volume. Now, I want to change the source
> code of the servers and install it again.
> Is it enough to stop and delete the volume, make and install the code? Or
> before installation I should delete some files or folders?
>

If you want the same volume to be running after the code change, just running
make / make install is good enough. If you have made changes in the volfile
reading section (i.e., init() handling new options, etc.), then you will need
to delete and recreate the volume.
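
A rough sketch of that workflow on one server, assuming the source tree has
already been configured and glusterd is managed by systemd (adjust for your
init system):

systemctl stop glusterd      # stop the management daemon on this node
make && make install         # rebuild and install the modified sources
systemctl start glusterd     # start again with the new binaries
gluster volume status        # confirm the existing volume is still intact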

-Amar

>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] hanging httpd processes.

2017-04-07 Thread Amar Tumballi
On Sat, Apr 8, 2017 at 12:02 AM, Alvin Starr  wrote:

> Thanks for the help.
>
> That seems to have fixed it.
>
> We were seeing hangs clocking up at a rate of a few hundred a day and for
> the last week there have been none.
>
>
>
Thanks for confirming this. Good to know one of the major hurdles for you is
resolved.

-Amar

>
> On 03/31/2017 05:54 AM, Mohit Agrawal wrote:
>
> Hi,
>
> As you have mentioned the client/server versions in the thread, it shows the
> package versions are different on both (client and server).
> We would recommend you to upgrade both servers and clients to rhs-3.10.1.
> If it is not possible to upgrade both (client and server), then in this case
> it is required to upgrade the client only.
>
> Thanks
> Mohit Agrawal
>
> On Fri, Mar 31, 2017 at 2:27 PM, Mohit Agrawal 
> wrote:
>
>> Hi,
>>
>> As per the attached glusterdump/stackdump it seems it is a known issue 
>> (https://bugzilla.redhat.com/show_bug.cgi?id=1372211) and the issue is already 
>> fixed by the patch (https://review.gluster.org/#/c/15380/).
>>
>> The issue happens in this case:
>> Assume a file is opened with fd1 and fd2.
>> 1. Some WRITE ops to fd1 got errors; they were added back to the 'todo' queue
>>because of those errors.
>> 2. fd2 closed; a FLUSH op is sent to write-behind.
>> 3. FLUSH cannot be unwound because it is not a legal waiter for those
>>failed writes (as the function __wb_request_waiting_on() says), and those failed
>>WRITEs also cannot be ended if fd1 is not closed. fd2 is stuck in the close
>>syscall.
>>
>> As per the statedump, it also shows the flush op fd is not the same as the write op fd.
>> Kindly upgrade the package to 3.10.1 and share the result.
>>
>>
>>
>> Thanks
>> Mohit Agrawal
>>
>> On Fri, Mar 31, 2017 at 12:29 PM, Amar Tumballi wrote:
>>
>> > Hi Alvin,
>> >
>> > Thanks for the dump output. It helped a bit.
>> >
>> > For now, recommend turning off open-behind and read-ahead performance
>> > translators for you to get rid of this situation, as I noticed hung FLUSH
>> > operations from these translators.
>>
>> Looks like I gave wrong advice by looking at the below snippet:
>>
>> [global.callpool.stack.61]
>> > stack=0x7f6c6f628f04
>> > uid=48
>> > gid=48
>> > pid=11077
>> > unique=10048797
>> > lk-owner=a73ae5bdb5fcd0d2
>> > op=FLUSH
>> > type=1
>> > cnt=5
>> >
>> > [global.callpool.stack.61.frame.1]
>> > frame=0x7f6c6f793d88
>> > ref_count=0
>> > translator=edocs-production-write-behind
>> > complete=0
>> > parent=edocs-production-read-ahead
>> > wind_from=ra_flush
>> > wind_to=FIRST_CHILD (this)->fops->flush
>> > unwind_to=ra_flush_cbk
>> >
>> > [global.callpool.stack.61.frame.2]
>> > frame=0x7f6c6f796c90
>> > ref_count=1
>> > translator=edocs-production-read-ahead
>> > complete=0
>> > parent=edocs-production-open-behind
>> > wind_from=default_flush_resume
>> > wind_to=FIRST_CHILD(this)->fops->flush
>> > unwind_to=default_flush_cbk
>> >
>> > [global.callpool.stack.61.frame.3]
>> > frame=0x7f6c6f79b724
>> > ref_count=1
>> > translator=edocs-production-open-behind
>> > complete=0
>> > parent=edocs-production
>> > wind_from=io_stats_flush
>> > wind_to=FIRST_CHILD(this)->fops->flush
>> > unwind_to=io_stats_flush_cbk
>> >
>> > [global.callpool.stack.61.frame.4]
>> > frame=0x7f6c6f79b474
>> > ref_count=1
>> > translator=edocs-production
>> > complete=0
>> > parent=fuse
>> > wind_from=fuse_flush_resume
>> > wind_to=FIRST_CHILD(this)->fops->flush
>> > unwind_to=fuse_err_cbk
>> >
>> > [global.callpool.stack.61.frame.5]
>> > frame=0x7f6c6f796684
>> > ref_count=1
>> > translator=fuse
>> > complete=0
>> Most probably, the issue is with write-behind's flush. So please turn off
>> write-behind and test. If you don't have any hung httpd processes, please
>> let us know.
>>
>> -Amar
>>
>>
>> > -Amar
>> >
>> > On Wed, Mar 29, 2017 at 6:56 AM, Alvin Starr wrote:
>> >> We are running gluster 3.8.9-1 on Centos 7.3.1611 for the servers and on
>> >> the clients 3.7.11-2 on Centos 6.8
>> >>
>> >> We are seeing httpd processes hang in fuse_request_send or sync_page.
>> >>
>> >> These calls are from PHP 5.3.3-48 scripts
>> >>
>> >> I am attaching a tgz file that contains the process dump from glusterfsd
>> >> and the hung pids along with the offending pid's stacks from
>> >> /proc/{pid}/stack.
>> >>
>> >> This has been a low level annoyance for a while but it has become a much
>> >> bigger issue because the number of hung processes went from a few a week to
>> >> a few hundred a day.
>> >>
>> >> --
>> >> Alvin Starr   ||   voice: (905)513-7688
>> >> Netvel Inc.   ||   Cell:  (416)806-0133
>> >> alvin at netvel.net ||
>> >>
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users at glu

[Gluster-users] Slow write times to gluster disk

2017-04-07 Thread Pat Haley


Hi,

We noticed a dramatic slowness when writing to a gluster disk when 
compared to writing to an NFS disk. Specifically when using dd (data 
duplicator) to write a 4.3 GB file of zeros:


 * on NFS disk (/home): 9.5 Gb/s
 * on gluster disk (/gdata): 508 Mb/s

The gluster disk is 2 bricks joined together, no replication or anything 
else. The hardware is (literally) the same:


 * one server with 70 hard disks  and a hardware RAID card.
 * 4 disks in a RAID-6 group (the NFS disk)
 * 32 disks in a RAID-6 group (the max allowed by the card, /mnt/brick1)
 * 32 disks in another RAID-6 group (/mnt/brick2)
 * 2 hot spares

Some additional information and more tests results (after changing the 
log level):


glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 
[Invader] (rev 02)




*Create the file to /gdata (gluster)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.91876 s, *546 MB/s*

*Create the file to /home (ext4)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 0.686021 s, *1.5 GB/s* - 3 times as fast


*Copy from /gdata to /gdata (gluster to gluster)*
[root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 101.052 s, *10.4 MB/s* - realllyyy 
slooowww



*Copy from /gdata to /gdata, 2nd time (gluster to gluster)*
[root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 92.4904 s, *11.3 MB/s* - realllyyy 
slooowww again




*Copy from /home to /home (ext4 to ext4)*
[root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero2
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 3.53263 s, *297 MB/s* - 30 times as fast


*Copy from /home to /home (ext4 to ext4)*
[root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero3
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 4.1737 s, *251 MB/s* - 30 times as fast


As a test, can we copy data directly to the xfs mountpoint (/mnt/brick1) 
and bypass gluster?
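
For that test, one point worth noting first: the copy runs above used dd's
default 512-byte block size (2048000 records for the 1.0 GB file), which is
especially punishing through FUSE, so repeating them with an explicit bs=1M
makes the comparison fairer. A sketch of such a test follows, with the
caveat that writing arbitrary files into a live brick directory is generally
discouraged (gluster will not know about them), so any test file should be
removed afterwards; the file names here are only examples:

# raw throughput of the underlying RAID-6/XFS, bypassing gluster
dd if=/dev/zero of=/mnt/brick1/ddtest.zero bs=1M count=1000 conv=fdatasync
rm /mnt/brick1/ddtest.zero

# the same write through the gluster mount, with a matching block size
dd if=/dev/zero of=/gdata/ddtest.zero bs=1M count=1000 conv=fdatasync

# gluster-to-gluster copy with an explicit block size, for comparison with
# the ~10 MB/s seen above
dd if=/gdata/zero1 of=/gdata/zero2.bs1M bs=1M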



Any help you could give us would be appreciated.

Thanks

--

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  pha...@mit.edu
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] hanging httpd processes.

2017-04-07 Thread Alvin Starr

Thanks for the help.

That seems to have fixed it.

We were seeing hangs clocking up at a rate of a few hundred a day and 
for the last week there have been none.




On 03/31/2017 05:54 AM, Mohit Agrawal wrote:

Hi,

As you have mentioned the client/server versions in the thread, it shows the 
package versions are different on both (client and server).

We would recommend you to upgrade both servers and clients to rhs-3.10.1.
If it is not possible to upgrade both (client and server), then in this case 
it is required to upgrade the client only.


Thanks
Mohit Agrawal

On Fri, Mar 31, 2017 at 2:27 PM, Mohit Agrawal wrote:


Hi,

As per the attached glusterdump/stackdump it seems it is a known
issue (https://bugzilla.redhat.com/show_bug.cgi?id=1372211) and the
issue is already fixed by the patch
(https://review.gluster.org/#/c/15380/).

The issue happens in this case:
Assume a file is opened with fd1 and fd2.
1. Some WRITE ops to fd1 got errors; they were added back to the
   'todo' queue because of those errors.
2. fd2 closed; a FLUSH op is sent to write-behind.
3. FLUSH cannot be unwound because it is not a legal waiter for those
   failed writes (as the function __wb_request_waiting_on() says), and
   those failed WRITEs also cannot be ended if fd1 is not closed. fd2
   is stuck in the close syscall.

As per the statedump, it also shows the flush op fd is not the same
as the write op fd. Kindly upgrade the package to 3.10.1 and share
the result. Thanks Mohit Agrawal

On Fri, Mar 31, 2017 at 12:29 PM, Amar Tumballi wrote:

> Hi Alvin,
>
> Thanks for the dump output. It helped a bit.
>
> For now, recommend turning off open-behind and read-ahead performance
> translators for you to get rid of this situation, as I noticed hung FLUSH
> operations from these translators.
>
Looks like I gave wrong advice by looking at the below snippet:

[global.callpool.stack.61]
> stack=0x7f6c6f628f04
> uid=48
> gid=48
> pid=11077
> unique=10048797
> lk-owner=a73ae5bdb5fcd0d2
> op=FLUSH
> type=1
> cnt=5
>
> [global.callpool.stack.61.frame.1]
> frame=0x7f6c6f793d88
> ref_count=0
> translator=edocs-production-write-behind
> complete=0
> parent=edocs-production-read-ahead
> wind_from=ra_flush
> wind_to=FIRST_CHILD (this)->fops->flush
> unwind_to=ra_flush_cbk
>
> [global.callpool.stack.61.frame.2]
> frame=0x7f6c6f796c90
> ref_count=1
> translator=edocs-production-read-ahead
> complete=0
> parent=edocs-production-open-behind
> wind_from=default_flush_resume
> wind_to=FIRST_CHILD(this)->fops->flush
> unwind_to=default_flush_cbk
>
> [global.callpool.stack.61.frame.3]
> frame=0x7f6c6f79b724
> ref_count=1
> translator=edocs-production-open-behind
> complete=0
> parent=edocs-production
> wind_from=io_stats_flush
> wind_to=FIRST_CHILD(this)->fops->flush
> unwind_to=io_stats_flush_cbk
>
> [global.callpool.stack.61.frame.4]
> frame=0x7f6c6f79b474
> ref_count=1
> translator=edocs-production
> complete=0
> parent=fuse
> wind_from=fuse_flush_resume
> wind_to=FIRST_CHILD(this)->fops->flush
> unwind_to=fuse_err_cbk
>
> [global.callpool.stack.61.frame.5]
> frame=0x7f6c6f796684
> ref_count=1
> translator=fuse
> complete=0
Most probably, the issue is with write-behind's flush. So please turn off
write-behind and test. If you don't have any hung httpd processes, please
let us know.

-Amar


> -Amar
>
> On Wed, Mar 29, 2017 at 6:56 AM, Alvin Starr wrote:
>> We are running gluster 3.8.9-1 on Centos 7.3.1611 for the servers and on
>> the clients 3.7.11-2 on Centos 6.8
>>
>> We are seeing httpd processes hang in fuse_request_send or sync_page.
>>
>> These calls are from PHP 5.3.3-48 scripts
>>
>> I am attaching a tgz file that contains the process dump from glusterfsd
>> and the hung pids along with the offending pid's stacks from
>> /proc/{pid}/stack.
>>
>> This has been a low level annoyance for a while but it has become a much
>> bigger issue because the number of hung processes went from a few a week to
>> a few hundred a day.
>>
>> --
>> Alvin Starr   ||   voice: (905)513-7688
>> Netvel Inc.   ||   Cell:  (416)806-0133
>> alvin at netvel.net ||
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Amar Tumballi (amarts)


--
Alvin Starr   ||   voice: (905)513-7688
Netvel Inc.   ||   Cell:  (416)806-0133
al...@net

[Gluster-users] "Fake" distributed-replicated volume

2017-04-07 Thread Jamie Lawrence
Greetings, Glusterites -

I have a suboptimal situation, and am wondering if there is any way to create a 
replica-3 distributed/replicated volume with three machines. I saw in the docs 
that the create command will fail with multiple bricks on the same peer; is 
there a way around that/some other way to achieve this?

The reason for this terrible idea is entirely about deadlines interacting with 
reality. I need to go into production with a new volume I’d really like to be 
D/R before additional machines arrive.

Thanks,

-j
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Server source code installation

2017-04-07 Thread Tahereh Fattahi
Hi
I have 3 gluster servers and one volume. Now, I want to change the source
code of the servers and install it again.
Is it enough to stop and delete the volume, make and install the code? Or
before installation I should delete some files or folders?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] rebalance fix layout necessary

2017-04-07 Thread Amudhan P
Volume type:
Disperse Volume  8+2  = 1080 bricks

The first time, I added 8+2 * 3 sets and it started giving issues when listing
folders, so I remounted the mount point and it was working fine.

The second time, I added 8+2 * 13 sets and it also had the same issue.

When listing a folder, it was returning an empty folder or not showing all the
folders.

When an ongoing write was interrupted, it threw an error that the destination
folder was not available.

Adding a few more lines from the log; let me know if you need the full log file.

[2017-04-05 13:40:03.702624] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2017-04-05 13:40:04.970055] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-123: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.971194] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-122: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.972144] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-121: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.973131] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-120: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.974072] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-119: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.975005] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-118: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.975936] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-117: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.976905] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-116: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.977825] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-115: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.978755] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-114: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.979689] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-113: Using 'sse' CPU
extensions
[2017-04-05 13:40:04.980626] I [MSGID: 122067]
[ec-code.c:1046:ec_code_detect] 2-gfs-vol-disperse-112: Using 'sse' CPU
extensions
[2017-04-05 13:40:07.270412] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-736: changing port to 49153 (from 0)
[2017-04-05 13:40:07.271902] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-746: changing port to 49154 (from 0)
[2017-04-05 13:40:07.272076] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-756: changing port to 49155 (from 0)
[2017-04-05 13:40:07.273154] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-766: changing port to 49156 (from 0)
[2017-04-05 13:40:07.273193] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-776: changing port to 49157 (from 0)
[2017-04-05 13:40:07.273371] I [MSGID: 114046]
[client-handshake.c:1216:client_setvolume_cbk] 2-gfs-vol-client-579:
Connected to gfs-vol-client-579, attached to remote volume
'/media/disk22/brick22'.
[2017-04-05 13:40:07.273388] I [MSGID: 114047]
[client-handshake.c:1227:client_setvolume_cbk] 2-gfs-vol-client-579: Server
and Client lk-version numbers are not same, reopening the fds
[2017-04-05 13:40:07.273435] I [MSGID: 114035]
[client-handshake.c:202:client_set_lk_version_cbk] 2-gfs-vol-client-433:
Server lk version = 1
[2017-04-05 13:40:07.275632] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-786: changing port to 49158 (from 0)
[2017-04-05 13:40:07.275685] I [MSGID: 114046]
[client-handshake.c:1216:client_setvolume_cbk] 2-gfs-vol-client-589:
Connected to gfs-vol-client-589, attached to remote volume
'/media/disk23/brick23'.
[2017-04-05 13:40:07.275707] I [MSGID: 114047]
[client-handshake.c:1227:client_setvolume_cbk] 2-gfs-vol-client-589: Server
and Client lk-version numbers are not same, reopening the fds
[2017-04-05 13:40:07.087011] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-811: changing port to 49161 (from 0)
[2017-04-05 13:40:07.087031] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-420: changing port to 49158 (from 0)
[2017-04-05 13:40:07.087045] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-521: changing port to 49168 (from 0)
[2017-04-05 13:40:07.087060] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-430: changing port to 49159 (from 0)
[2017-04-05 13:40:07.087074] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-531: changing port to 49169 (from 0)
[2017-04-05 13:40:07.087098] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-440: changing port to 49160 (from 0)
[2017-04-05 13:40:07.087105] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-821: changing port to 49162 (from 0)
[2017-04-05 13:40:07.087117] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-450: changing port to 49161 (from 0)
[2017-04-05 13:40:07.087131] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-831: changing port to 49163 (from 0)
[2017-04-05 13:40:07.087134] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
2-gfs-vol-client-460: changing port to 49162 (from 0)
[2017

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
You mean if old data is present in the brick and the volume is not present, then
it should be visible in our brick dir /opt/lvmdir/c2/brick?

On Fri, Apr 7, 2017 at 3:04 PM, Ashish Pandey  wrote:

>
> If you are creating a fresh volume, then it is your responsibility to have
> clean bricks.
> I don't think gluster will give you a guarantee that it will be cleaned up.
>
> So you have to investigate if you have any previous data on that brick.
> What I meant was that you have to find out the number of files you see on
> your mount point and the corresponding number of gfid are same.
> If you create a file on mount point test-file, it will have a gfid
> xx-yy- and in .glusterfs the path would be xx/yy/xx-yy-
> so one test-file = one xx/yy/xx-yy-j
> In this way you have to find out if you have anything  in .glusterfs which
> should not be. I would have started from the biggest entry I see in
> .glusterfs
> like "154M/opt/lvmdir/c2/brick/.glusterfs/08"
>
>
>
> --
> *From: *"ABHISHEK PALIWAL" 
> *To: *"Ashish Pandey" 
> *Cc: *"gluster-users" , "Gluster Devel" <
> gluster-de...@gluster.org>
> *Sent: *Friday, April 7, 2017 2:28:46 PM
> *Subject: *Re: [Gluster-users] [Gluster-devel] Glusterfs meta data
> spaceconsumption issue
>
>
> Hi Ashish,
>
> I don't think the count of files on the mount point and in .glusterfs/ will
> remain the same, because I created one file on the gluster mount point but
> in .glusterfs/ the count increased by 3. The reason behind that is it
> creates .glusterfs/xx/xx/x..., which is two parent dirs and one
> gfid file.
>
>
> Regards,
> Abhishek
>
> On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey  wrote:
>
>>
>> Are you sure that the bricks which you used for this volume was not
>> having any previous data?
>> Find out the total number of files and directories on your mount point
>> (recursively) and then see the number of entries on .glusterfs/
>>
>>
>> --
>> *From: *"ABHISHEK PALIWAL" 
>> *To: *"Gluster Devel" , "gluster-users" <
>> gluster-users@gluster.org>
>> *Sent: *Friday, April 7, 2017 12:15:22 PM
>> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption
>> issue
>>
>>
>>
>> Is there any update ??
>>
>> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We are currently experiencing a serious issue w.r.t volume space usage
>>> by glusterfs.
>>>
>>> In the below outputs, we can see that the size of the real data in /c
>>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>>
>>> Can you tell us why the volume space is fully used by glusterfs even
>>> though the real data size is around 1GB itself ?
>>>
>>> # gluster peer status
>>> Number of Peers: 0
>>> #
>>> #
>>> # gluster volume status
>>> Status of volume: c_glusterfs
>>> Gluster process TCP Port  RDMA Port
>>> Online  Pid
>>> 
>>> --
>>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0
>>> Y   1507
>>>
>>> Task Status of Volume c_glusterfs
>>> 
>>> --
>>> There are no active volume tasks
>>>
>>> # gluster volume info
>>>
>>> Volume Name: c_glusterfs
>>> Type: Distribute
>>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>>> Status: Started
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>> network.ping-timeout: 4
>>> performance.readdir-ahead: on
>>> #
>>> # ls -a /c/
>>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>>> toplog.txt  up  usr  xb
>>> # du -sh /c/.trashcan/
>>> 8.0K/c/.trashcan/
>>> # du -sh /c/*
>>> 11K /c/RNC_Exceptions
>>> 5.5K/c/configuration
>>> 62M /c/java
>>> 138K/c/license
>>> 609M/c/loadmodules
>>> 90M /c/loadmodules_norepl
>>> 246M/c/loadmodules_tftp
>>> 4.1M/c/logfiles
>>> 4.0K/c/lost+found
>>> 5.0K/c/node_id
>>> 8.0K/c/pm_data
>>> 4.5K/c/pmd
>>> 9.1M/c/public_html
>>> 113K/c/rnc
>>> 16K /c/security
>>> 1.3M/c/systemfiles
>>> 228K/c/tmp
>>> 75K /c/toplog.txt
>>> 1.5M/c/up
>>> 4.0K/c/usr
>>> 4.0K/c/xb
>>> # du -sh /c/
>>> 1022M   /c/
>>> # df -h /c/
>>> Filesystem  Size  Used Avail Use% Mounted on
>>> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
>>> #
>>> #
>>> #
>>> # du -sh /opt/lvmdir/c2/brick/
>>> 3.4G/opt/lvmdir/c2/brick/
>>> # du -sh /opt/lvmdir/c2/brick/*
>>> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
>>> 36K /opt/lvmdir/c2/brick/configuration
>>> 63M /opt/lvmdir/c2/brick/java
>>> 176K   

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread Ashish Pandey

If you are creating a fresh volume, then it is your responsibility to have 
clean bricks. 
I don't think gluster will give you a guarantee that it will be cleaned up. 

So you have to investigate whether you have any previous data on that brick. 
What I meant was that you have to find out the number of files you see on your 
mount point and check that the corresponding number of gfids is the same. 
If you create a file test-file on the mount point, it will have a gfid 
xx-yy-... and in .glusterfs the path would be xx/yy/xx-yy-..., 
so one test-file = one xx/yy/xx-yy-... 
In this way you have to find out if you have anything in .glusterfs which 
should not be there. I would have started from the biggest entry I see in 
.glusterfs, like "154M /opt/lvmdir/c2/brick/.glusterfs/08". 



- Original Message -

From: "ABHISHEK PALIWAL"  
To: "Ashish Pandey"  
Cc: "gluster-users" , "Gluster Devel" 
 
Sent: Friday, April 7, 2017 2:28:46 PM 
Subject: Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space 
consumption issue 

Hi Ashish, 

I don't think the count of files on the mount point and in .glusterfs/ will remain 
the same, because I created one file on the gluster mount point but in 
.glusterfs/ the count increased by 3. The reason behind that is it creates 
.glusterfs/xx/xx/x..., which is two parent dirs and one gfid file. 


Regards, 
Abhishek 

On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey < aspan...@redhat.com > wrote: 




Are you sure that the bricks which you used for this volume was not having any 
previous data? 
Find out the total number of files and directories on your mount point 
(recursively) and then see the number of entries on .glusterfs/ 



From: "ABHISHEK PALIWAL" < abhishpali...@gmail.com > 
To: "Gluster Devel" < gluster-de...@gluster.org >, "gluster-users" < 
gluster-users@gluster.org > 
Sent: Friday, April 7, 2017 12:15:22 PM 
Subject: Re: [Gluster-devel] Glusterfs meta data space consumption issue 



Is there any update ?? 

On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL < abhishpali...@gmail.com > 
wrote: 



Hi, 
We are currently experiencing a serious issue w.r.t volume space usage by 
glusterfs. 
In the below outputs, we can see that the size of the real data in /c 
(glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the 
brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB 
Can you tell us why the volume space is fully used by glusterfs even though the 
real data size is around 1GB itself ? 
# gluster peer status 
Number of Peers: 0 
# 
# 
# gluster volume status 
Status of volume: c_glusterfs 
Gluster process TCP Port RDMA Port Online Pid 
-- 
Brick 10.32.0.48:/opt/lvmdir/c2/brick 49152 0 Y 1507 
Task Status of Volume c_glusterfs 
-- 
There are no active volume tasks 
# gluster volume info 
Volume Name: c_glusterfs 
Type: Distribute 
Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc 
Status: Started 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick 
Options Reconfigured: 
nfs.disable: on 
network.ping-timeout: 4 
performance.readdir-ahead: on 
# 
# ls -a /c/ 
. .. .trashcan RNC_Exceptions configuration java license loadmodules 
loadmodules_norepl loadmodules_tftp logfiles lost+found node_id pm_data pmd 
public_html rnc security systemfiles tmp toplog.txt up usr xb 
# du -sh /c/.trashcan/ 
8.0K /c/.trashcan/ 
# du -sh /c/* 
11K /c/RNC_Exceptions 
5.5K /c/configuration 
62M /c/java 
138K /c/license 
609M /c/loadmodules 
90M /c/loadmodules_norepl 
246M /c/loadmodules_tftp 
4.1M /c/logfiles 
4.0K /c/lost+found 
5.0K /c/node_id 
8.0K /c/pm_data 
4.5K /c/pmd 
9.1M /c/public_html 
113K /c/rnc 
16K /c/security 
1.3M /c/systemfiles 
228K /c/tmp 
75K /c/toplog.txt 
1.5M /c/up 
4.0K /c/usr 
4.0K /c/xb 
# du -sh /c/ 
1022M /c/ 
# df -h /c/ 
Filesystem Size Used Avail Use% Mounted on 
10.32.0.48:c_glusterfs 3.6G 3.4G 0 100% /mnt/c 
# 
# 
# 
# du -sh /opt/lvmdir/c2/brick/ 
3.4G /opt/lvmdir/c2/brick/ 
# du -sh /opt/lvmdir/c2/brick/* 
112K /opt/lvmdir/c2/brick/RNC_Exceptions 
36K /opt/lvmdir/c2/brick/configuration 
63M /opt/lvmdir/c2/brick/java 
176K /opt/lvmdir/c2/brick/license 
610M /opt/lvmdir/c2/brick/loadmodules 
95M /opt/lvmdir/c2/brick/loadmodules_norepl 
246M /opt/lvmdir/c2/brick/loadmodules_tftp 
4.2M /opt/lvmdir/c2/brick/logfiles 
8.0K /opt/lvmdir/c2/brick/lost+found 
24K /opt/lvmdir/c2/brick/node_id 
16K /opt/lvmdir/c2/brick/pm_data 
16K /opt/lvmdir/c2/brick/pmd 
9.2M /opt/lvmdir/c2/brick/public_html 
268K /opt/lvmdir/c2/brick/rnc 
80K /opt/lvmdir/c2/brick/security 
1.4M /opt/lvmdir/c2/brick/systemfiles 
252K /opt/lvmdir/c2/brick/tmp 
80K /opt/lvmdir/c2/brick/toplog.txt 
1.5M /opt/lvmdir/c2/brick/up 
8.0K /opt/lvmdir/c2/brick/usr 
8.0K /opt/lvmdir/c2/brick/xb 
# du -sh /opt/lvmdir/c2/brick/.glusterfs/ 
3.4G /opt/lvmdir/c2/brick/.glusterfs/ 

Below are the static

[Gluster-users] gluster_to_smb_acl: Unknown gluster ACL version: 0

2017-04-07 Thread Pointner, Anton
Hello,

 

After upgrading from 3.8.9 to 3.10.1, I have thousands of errors in log.smbd
and /var/log/messages.

 

[2017/04/07 10:40:12.903220,  0]
../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)

  Unknown gluster ACL version: 0

[2017/04/07 10:40:12.903742,  0]
../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)

  Unknown gluster ACL version: 0

 

 

 

/etc/samba/smb.conf

[EPI2-Archiv]

comment = Hr. Linkohr EPI2

vfs objects = acl_xattr glusterfs

glusterfs:volume = vol1

glusterfs:logfile = /var/log/samba/glusterfs-vol1.%M.log

glusterfs:loglevel = 7

path = /EPI2-Archiv

read only = no

kernel share modes = no

 

 

on both nodes the same version:

[root@nas81i samba]# gluster volume get all cluster.max-op-version

Option  Value

--  -

cluster.max-op-version  31000

 

OR

[root@nas81i samba]# cat /var/lib/glusterd/glusterd.info

UUID=7b989894-ba5f-4f54-9db1-e5be1cee7587

operating-version=31000

 

The Operating System is CentOS 7.3 with all updates as of today.

[root@nas81i samba]# yum list installed |grep samba

samba.x86_64   4.4.4-12.el7_3
@updates

samba-client.x86_644.4.4-12.el7_3
@updates

samba-client-libs.x86_64   4.4.4-12.el7_3
@updates

samba-common.noarch4.4.4-12.el7_3
@updates

samba-common-libs.x86_64   4.4.4-12.el7_3
@updates

samba-common-tools.x86_64  4.4.4-12.el7_3
@updates

samba-libs.x86_64  4.4.4-12.el7_3
@updates

samba-vfs-glusterfs.x86_64 4.4.4-12.el7_3
@updates

samba-winbind.x86_64   4.4.4-12.el7_3
@updates

samba-winbind-clients.x86_64   4.4.4-12.el7_3
@updates

samba-winbind-modules.x86_64   4.4.4-12.el7_3
@updates

[root@nas81i samba]# yum list installed |grep gluster

centos-release-gluster310.noarch   1.0-1.el7.centos
@extras

centos-release-gluster38.noarch1.0-1.el7.centos
@extras

glusterfs.x86_64   3.10.1-1.el7
@centos-gluster310

glusterfs-api.x86_64   3.10.1-1.el7
@centos-gluster310

glusterfs-cli.x86_64   3.10.1-1.el7
@centos-gluster310

glusterfs-client-xlators.x86_643.10.1-1.el7
@centos-gluster310

glusterfs-fuse.x86_64  3.10.1-1.el7
@centos-gluster310

glusterfs-libs.x86_64  3.10.1-1.el7
@centos-gluster310

glusterfs-server.x86_643.10.1-1.el7
@centos-gluster310

samba-vfs-glusterfs.x86_64 4.4.4-12.el7_3
@updates

userspace-rcu.x86_64   0.7.16-3.el7
@centos-gluster310

 

 

Thanks for your help.

 

Regards

Anton

 


 
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
Hi Ashish,

I don't think the count of files on the mount point and in .glusterfs/ will
remain the same, because I created one file on the gluster mount point but
in .glusterfs/ the count increased by 3. The reason behind that is it
creates .glusterfs/xx/xx/x..., which is two parent dirs and one
gfid file.


Regards,
Abhishek

On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey  wrote:

>
> Are you sure that the bricks which you used for this volume was not having
> any previous data?
> Find out the total number of files and directories on your mount point
> (recursively) and then see the number of entries on .glusterfs/
>
>
> --
> *From: *"ABHISHEK PALIWAL" 
> *To: *"Gluster Devel" , "gluster-users" <
> gluster-users@gluster.org>
> *Sent: *Friday, April 7, 2017 12:15:22 PM
> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption issue
>
>
>
> Is there any update ??
>
> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL wrote:
>
>> Hi,
>>
>> We are currently experiencing a serious issue w.r.t volume space usage by
>> glusterfs.
>>
>> In the below outputs, we can see that the size of the real data in /c
>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>
>> Can you tell us why the volume space is fully used by glusterfs even
>> though the real data size is around 1GB itself ?
>>
>> # gluster peer status
>> Number of Peers: 0
>> #
>> #
>> # gluster volume status
>> Status of volume: c_glusterfs
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>> 
>> --
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y
>> 1507
>>
>> Task Status of Volume c_glusterfs
>> 
>> --
>> There are no active volume tasks
>>
>> # gluster volume info
>>
>> Volume Name: c_glusterfs
>> Type: Distribute
>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 4
>> performance.readdir-ahead: on
>> #
>> # ls -a /c/
>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>> toplog.txt  up  usr  xb
>> # du -sh /c/.trashcan/
>> 8.0K/c/.trashcan/
>> # du -sh /c/*
>> 11K /c/RNC_Exceptions
>> 5.5K/c/configuration
>> 62M /c/java
>> 138K/c/license
>> 609M/c/loadmodules
>> 90M /c/loadmodules_norepl
>> 246M/c/loadmodules_tftp
>> 4.1M/c/logfiles
>> 4.0K/c/lost+found
>> 5.0K/c/node_id
>> 8.0K/c/pm_data
>> 4.5K/c/pmd
>> 9.1M/c/public_html
>> 113K/c/rnc
>> 16K /c/security
>> 1.3M/c/systemfiles
>> 228K/c/tmp
>> 75K /c/toplog.txt
>> 1.5M/c/up
>> 4.0K/c/usr
>> 4.0K/c/xb
>> # du -sh /c/
>> 1022M   /c/
>> # df -h /c/
>> Filesystem  Size  Used Avail Use% Mounted on
>> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
>> #
>> #
>> #
>> # du -sh /opt/lvmdir/c2/brick/
>> 3.4G/opt/lvmdir/c2/brick/
>> # du -sh /opt/lvmdir/c2/brick/*
>> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
>> 36K /opt/lvmdir/c2/brick/configuration
>> 63M /opt/lvmdir/c2/brick/java
>> 176K/opt/lvmdir/c2/brick/license
>> 610M/opt/lvmdir/c2/brick/loadmodules
>> 95M /opt/lvmdir/c2/brick/loadmodules_norepl
>> 246M/opt/lvmdir/c2/brick/loadmodules_tftp
>> 4.2M/opt/lvmdir/c2/brick/logfiles
>> 8.0K/opt/lvmdir/c2/brick/lost+found
>> 24K /opt/lvmdir/c2/brick/node_id
>> 16K /opt/lvmdir/c2/brick/pm_data
>> 16K /opt/lvmdir/c2/brick/pmd
>> 9.2M/opt/lvmdir/c2/brick/public_html
>> 268K/opt/lvmdir/c2/brick/rnc
>> 80K /opt/lvmdir/c2/brick/security
>> 1.4M/opt/lvmdir/c2/brick/systemfiles
>> 252K/opt/lvmdir/c2/brick/tmp
>> 80K /opt/lvmdir/c2/brick/toplog.txt
>> 1.5M/opt/lvmdir/c2/brick/up
>> 8.0K/opt/lvmdir/c2/brick/usr
>> 8.0K/opt/lvmdir/c2/brick/xb
>> # du -sh /opt/lvmdir/c2/brick/.glusterfs/
>> 3.4G/opt/lvmdir/c2/brick/.glusterfs/
>>
>> Below are the statics of the below command "du -sh /opt/lvmdir/c2/brick/.
>> glusterfs/*"
>>
>> # du -sh /opt/lvmdir/c2/brick/.glusterfs/*
>> 14M /opt/lvmdir/c2/brick/.glusterfs/00
>> 8.3M/opt/lvmdir/c2/brick/.glusterfs/01
>> 23M /opt/lvmdir/c2/brick/.glusterfs/02
>> 17M /opt/lvmdir/c2/brick/.glusterfs/03
>> 7.1M/opt/lvmdir/c2/brick/.glusterfs/04
>> 336K/opt/lvmdir/c2/brick/.glusterfs/05
>> 3.5M/opt/lvmdir/c2/brick/.glusterfs/06
>> 1.7M/opt/lvmdir/c2/brick/.glusterfs/07
>> 154M/opt/lvmdir/c2/brick/.glusterfs/08
>> 14M /opt/lvmdir/c2/brick/.glusterfs/09
>> 9.5M/opt/lvmdir/c2/brick

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
HI Ashish,


Even if there is old data, then it should be cleared by gluster itself,
right? Or do you want to do it manually?

Regards,
Abhishek

On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey  wrote:

>
> Are you sure that the bricks which you used for this volume was not having
> any previous data?
> Find out the total number of files and directories on your mount point
> (recursively) and then see the number of entries on .glusterfs/
>
>
> --
> *From: *"ABHISHEK PALIWAL" 
> *To: *"Gluster Devel" , "gluster-users" <
> gluster-users@gluster.org>
> *Sent: *Friday, April 7, 2017 12:15:22 PM
> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption issue
>
>
>
> Is there any update ??
>
> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL wrote:
>
>> Hi,
>>
>> We are currently experiencing a serious issue w.r.t volume space usage by
>> glusterfs.
>>
>> In the below outputs, we can see that the size of the real data in /c
>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>
>> Can you tell us why the volume space is fully used by glusterfs even
>> though the real data size is around 1GB itself ?
>>
>> # gluster peer status
>> Number of Peers: 0
>> #
>> #
>> # gluster volume status
>> Status of volume: c_glusterfs
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>> 
>> --
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y
>> 1507
>>
>> Task Status of Volume c_glusterfs
>> 
>> --
>> There are no active volume tasks
>>
>> # gluster volume info
>>
>> Volume Name: c_glusterfs
>> Type: Distribute
>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 4
>> performance.readdir-ahead: on
>> #
>> # ls -a /c/
>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>> toplog.txt  up  usr  xb
>> # du -sh /c/.trashcan/
>> 8.0K/c/.trashcan/
>> # du -sh /c/*
>> 11K /c/RNC_Exceptions
>> 5.5K/c/configuration
>> 62M /c/java
>> 138K/c/license
>> 609M/c/loadmodules
>> 90M /c/loadmodules_norepl
>> 246M/c/loadmodules_tftp
>> 4.1M/c/logfiles
>> 4.0K/c/lost+found
>> 5.0K/c/node_id
>> 8.0K/c/pm_data
>> 4.5K/c/pmd
>> 9.1M/c/public_html
>> 113K/c/rnc
>> 16K /c/security
>> 1.3M/c/systemfiles
>> 228K/c/tmp
>> 75K /c/toplog.txt
>> 1.5M/c/up
>> 4.0K/c/usr
>> 4.0K/c/xb
>> # du -sh /c/
>> 1022M   /c/
>> # df -h /c/
>> Filesystem  Size  Used Avail Use% Mounted on
>> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
>> #
>> #
>> #
>> # du -sh /opt/lvmdir/c2/brick/
>> 3.4G/opt/lvmdir/c2/brick/
>> # du -sh /opt/lvmdir/c2/brick/*
>> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
>> 36K /opt/lvmdir/c2/brick/configuration
>> 63M /opt/lvmdir/c2/brick/java
>> 176K/opt/lvmdir/c2/brick/license
>> 610M/opt/lvmdir/c2/brick/loadmodules
>> 95M /opt/lvmdir/c2/brick/loadmodules_norepl
>> 246M/opt/lvmdir/c2/brick/loadmodules_tftp
>> 4.2M/opt/lvmdir/c2/brick/logfiles
>> 8.0K/opt/lvmdir/c2/brick/lost+found
>> 24K /opt/lvmdir/c2/brick/node_id
>> 16K /opt/lvmdir/c2/brick/pm_data
>> 16K /opt/lvmdir/c2/brick/pmd
>> 9.2M/opt/lvmdir/c2/brick/public_html
>> 268K/opt/lvmdir/c2/brick/rnc
>> 80K /opt/lvmdir/c2/brick/security
>> 1.4M/opt/lvmdir/c2/brick/systemfiles
>> 252K/opt/lvmdir/c2/brick/tmp
>> 80K /opt/lvmdir/c2/brick/toplog.txt
>> 1.5M/opt/lvmdir/c2/brick/up
>> 8.0K/opt/lvmdir/c2/brick/usr
>> 8.0K/opt/lvmdir/c2/brick/xb
>> # du -sh /opt/lvmdir/c2/brick/.glusterfs/
>> 3.4G/opt/lvmdir/c2/brick/.glusterfs/
>>
>> Below are the statics of the below command "du -sh /opt/lvmdir/c2/brick/.
>> glusterfs/*"
>>
>> # du -sh /opt/lvmdir/c2/brick/.glusterfs/*
>> 14M /opt/lvmdir/c2/brick/.glusterfs/00
>> 8.3M/opt/lvmdir/c2/brick/.glusterfs/01
>> 23M /opt/lvmdir/c2/brick/.glusterfs/02
>> 17M /opt/lvmdir/c2/brick/.glusterfs/03
>> 7.1M/opt/lvmdir/c2/brick/.glusterfs/04
>> 336K/opt/lvmdir/c2/brick/.glusterfs/05
>> 3.5M/opt/lvmdir/c2/brick/.glusterfs/06
>> 1.7M/opt/lvmdir/c2/brick/.glusterfs/07
>> 154M/opt/lvmdir/c2/brick/.glusterfs/08
>> 14M /opt/lvmdir/c2/brick/.glusterfs/09
>> 9.5M/opt/lvmdir/c2/brick/.glusterfs/0a
>> 5.5M/opt/lvmdir/c2/brick/.glusterfs/0b
>> 11M /opt/lvmdir/c2/brick/.glusterfs/0c
>> 764K/opt/lvmdir/c2/brick/.glusterfs/0d
>> 69M /opt/lvmdir/c2/brick/.glusterf