[Gluster-devel] Glusterfs meta data space consumption issue

2017-04-06 Thread ABHISHEK PALIWAL
Hi,

We are currently experiencing a serious issue with volume space usage by
GlusterFS.

In the outputs below, the real data in /c (the GlusterFS volume) is nearly
1 GB, but the “.glusterfs” directory inside the brick (i.e.,
“/opt/lvmdir/c2/brick”) is consuming around 3.4 GB.

Can you tell us why the volume space is fully used by GlusterFS even though
the real data is only around 1 GB?

# gluster peer status
Number of Peers: 0
#
#
# gluster volume status
Status of volume: c_glusterfs
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152     0          Y       1507

Task Status of Volume c_glusterfs
--
There are no active volume tasks

# gluster volume info

Volume Name: c_glusterfs
Type: Distribute
Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
Options Reconfigured:
nfs.disable: on
network.ping-timeout: 4
performance.readdir-ahead: on
#
# ls -a /c/
.  ..  .trashcan  RNC_Exceptions  configuration  java  license
loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
toplog.txt  up  usr  xb
# du -sh /c/.trashcan/
8.0K    /c/.trashcan/
# du -sh /c/*
11K     /c/RNC_Exceptions
5.5K    /c/configuration
62M     /c/java
138K    /c/license
609M    /c/loadmodules
90M     /c/loadmodules_norepl
246M    /c/loadmodules_tftp
4.1M    /c/logfiles
4.0K    /c/lost+found
5.0K    /c/node_id
8.0K    /c/pm_data
4.5K    /c/pmd
9.1M    /c/public_html
113K    /c/rnc
16K     /c/security
1.3M    /c/systemfiles
228K    /c/tmp
75K     /c/toplog.txt
1.5M    /c/up
4.0K    /c/usr
4.0K    /c/xb
# du -sh /c/
1022M   /c/
# df -h /c/
Filesystem  Size  Used Avail Use% Mounted on
10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
#
#
#
# du -sh /opt/lvmdir/c2/brick/
3.4G    /opt/lvmdir/c2/brick/
# du -sh /opt/lvmdir/c2/brick/*
112K    /opt/lvmdir/c2/brick/RNC_Exceptions
36K     /opt/lvmdir/c2/brick/configuration
63M     /opt/lvmdir/c2/brick/java
176K    /opt/lvmdir/c2/brick/license
610M    /opt/lvmdir/c2/brick/loadmodules
95M     /opt/lvmdir/c2/brick/loadmodules_norepl
246M    /opt/lvmdir/c2/brick/loadmodules_tftp
4.2M    /opt/lvmdir/c2/brick/logfiles
8.0K    /opt/lvmdir/c2/brick/lost+found
24K     /opt/lvmdir/c2/brick/node_id
16K     /opt/lvmdir/c2/brick/pm_data
16K     /opt/lvmdir/c2/brick/pmd
9.2M    /opt/lvmdir/c2/brick/public_html
268K    /opt/lvmdir/c2/brick/rnc
80K     /opt/lvmdir/c2/brick/security
1.4M    /opt/lvmdir/c2/brick/systemfiles
252K    /opt/lvmdir/c2/brick/tmp
80K     /opt/lvmdir/c2/brick/toplog.txt
1.5M    /opt/lvmdir/c2/brick/up
8.0K    /opt/lvmdir/c2/brick/usr
8.0K    /opt/lvmdir/c2/brick/xb
# du -sh /opt/lvmdir/c2/brick/.glusterfs/
3.4G    /opt/lvmdir/c2/brick/.glusterfs/

Below is the output of the command "du -sh
/opt/lvmdir/c2/brick/.glusterfs/*":

# du -sh /opt/lvmdir/c2/brick/.glusterfs/*
14M     /opt/lvmdir/c2/brick/.glusterfs/00
8.3M    /opt/lvmdir/c2/brick/.glusterfs/01
23M     /opt/lvmdir/c2/brick/.glusterfs/02
17M     /opt/lvmdir/c2/brick/.glusterfs/03
7.1M    /opt/lvmdir/c2/brick/.glusterfs/04
336K    /opt/lvmdir/c2/brick/.glusterfs/05
3.5M    /opt/lvmdir/c2/brick/.glusterfs/06
1.7M    /opt/lvmdir/c2/brick/.glusterfs/07
154M    /opt/lvmdir/c2/brick/.glusterfs/08
14M     /opt/lvmdir/c2/brick/.glusterfs/09
9.5M    /opt/lvmdir/c2/brick/.glusterfs/0a
5.5M    /opt/lvmdir/c2/brick/.glusterfs/0b
11M     /opt/lvmdir/c2/brick/.glusterfs/0c
764K    /opt/lvmdir/c2/brick/.glusterfs/0d
69M     /opt/lvmdir/c2/brick/.glusterfs/0e
3.7M    /opt/lvmdir/c2/brick/.glusterfs/0f
14M     /opt/lvmdir/c2/brick/.glusterfs/10
1.8M    /opt/lvmdir/c2/brick/.glusterfs/11
5.0M    /opt/lvmdir/c2/brick/.glusterfs/12
18M     /opt/lvmdir/c2/brick/.glusterfs/13
7.8M    /opt/lvmdir/c2/brick/.glusterfs/14
151M    /opt/lvmdir/c2/brick/.glusterfs/15
15M     /opt/lvmdir/c2/brick/.glusterfs/16
9.0M    /opt/lvmdir/c2/brick/.glusterfs/17
11M     /opt/lvmdir/c2/brick/.glusterfs/18
3.1M    /opt/lvmdir/c2/brick/.glusterfs/19
8.1M    /opt/lvmdir/c2/brick/.glusterfs/1a
82M     /opt/lvmdir/c2/brick/.glusterfs/1b
95M     /opt/lvmdir/c2/brick/.glusterfs/1c
2.1M    /opt/lvmdir/c2/brick/.glusterfs/1d
4.5M    /opt/lvmdir/c2/brick/.glusterfs/1e
9.7M    /opt/lvmdir/c2/brick/.glusterfs/1f
13M     /opt/lvmdir/c2/brick/.glusterfs/20
54M     /opt/lvmdir/c2/brick/.glusterfs/21
9.4M    /opt/lvmdir/c2/brick/.glusterfs/22
2.3M    /opt/lvmdir/c2/brick/.glusterfs/23
14M     /opt/lvmdir/c2/brick/.glusterfs/24
12M     /opt/lvmdir/c2/brick/.glusterfs/25
56M     /opt/lvmdir/c2/brick/.glusterfs/26
5.6M    /opt/lvmdir/c2/brick/.glusterfs/27
2.5M    /opt/lvmdir/c2/brick/.glusterfs/28
55M     /opt/lvmdir/c2/brick/.glusterfs/29
47M  
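
One check we can run on the brick node, if useful (a rough sketch, assuming
GNU find and du): .glusterfs normally keeps one hard link per regular file
on the brick, so regular files under .glusterfs whose link count is 1 no
longer back anything visible in the volume (apart from a few small
housekeeping files such as health_check). Counting them and summing their
size would show whether stale GFID entries account for the missing ~2.4 GB:

# find /opt/lvmdir/c2/brick/.glusterfs -type f -links 1 | wc -l
# find /opt/lvmdir/c2/brick/.glusterfs -type f -links 1 -print0 | du -ch --files0-from=- | tail -n 1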

[Gluster-devel] EC Healing Algorithm

2017-04-06 Thread jayakrishnan mm
Hi

I am using GlusterFS 3.7.15.
What type of algorithm is used in EC healing? If a brick fails during
writing and comes back online later, are all the bricks re-written, or is
only the failed brick written with the new data?

Best regards
JK
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] EC Healing Algorithm

2017-04-06 Thread Ashish Pandey

If the data was written on at least the minimum number of bricks, heal will
take place on the failed brick only.
Data will be read from the good bricks, re-encoded, and only the fragment
belonging to the failed brick will be written.
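
For what it's worth, you can watch this from the CLI on a disperse volume as
well (a sketch only; substitute your actual volume name for <volname>):
"heal info" lists the entries still pending heal after the brick comes back,
and a heal can be triggered manually if the self-heal daemon has not picked
them up yet:

# gluster volume heal <volname> info
# gluster volume heal <volname>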

- Original Message -

From: "jayakrishnan mm"  
To: "Gluster Devel"  
Sent: Thursday, April 6, 2017 2:21:26 PM 
Subject: [Gluster-devel] EC Healing Algorithm 

Hi 

I am using GlusterFS 3.7.15.
What type of algorithm is used in EC healing? If a brick fails during
writing and comes back online later, are all the bricks re-written, or is
only the failed brick written with the new data?

Best regards 
JK 


___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2017-04-05-93e3c9ab (master branch)

2017-04-06 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-05-93e3c9ab
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] commit hash update (a field like commit hash)

2017-04-06 Thread Tahereh Fattahi
Hi
I declared a new variable in the dht_layout->list structure, similar to
commit_hash, but I cannot update this field globally.
The update stays local: only the client that performs it sees the new
value; the servers and the other clients do not see the change.
To update the field I followed the commit_hash code
(dht_update_commit_hash_for_layout in dht-selfheal.c). Everything looks
fine and the setxattr completes with no error in STACK_WIND, but the change
never reaches the server.
How can I solve this problem? Where should I look for the cause?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] commit hash update (a field like commit hash)

2017-04-06 Thread Mohammed Rafi K C
As with commit_hash, I hope you are doing this on directories only. In any
case, it is worth looking at the brick logs and the client logs; if the
logs do not help, gdb definitely will.

You can share your code with us if possible so that more people can look at
it and help debug, or give us more detail, such as how you are sending the
xattrs.
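
For example, one quick check (a sketch; replace /path/to/brick/dir with the
directory as seen on the brick, and the key with whatever your code sets) is
to read the xattrs directly on the brick after your update; if I remember
correctly, the commit hash is carried in the same trusted.glusterfs.dht
xattr as the rest of the layout:

# getfattr -d -m . -e hex /path/to/brick/dir
# getfattr -n trusted.glusterfs.dht -e hex /path/to/brick/dir

If the change shows up on the client mount but not here, the setxattr is
being absorbed somewhere in the client stack before it reaches the posix
translator.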

Regards
Rafi KC

On 04/06/2017 07:37 PM, Tahereh Fattahi wrote:
> Hi
> I declared a new variable in the dht_layout->list structure, similar to
> commit_hash, but I cannot update this field globally.
> The update stays local: only the client that performs it sees the new
> value; the servers and the other clients do not see the change.
> To update the field I followed the commit_hash code
> (dht_update_commit_hash_for_layout in dht-selfheal.c). Everything looks
> fine and the setxattr completes with no error in STACK_WIND, but the
> change never reaches the server.
> How can I solve this problem? Where should I look for the cause?
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS+NFS-Ganesha longevity cluster

2017-04-06 Thread Kaleb S. KEITHLEY


The longevity cluster has been updated to glusterfs-3.10.1 (from 3.8.5).

General information on the longevity cluster is at [1].

In the previous update, sharding was enabled on the gluster volume. This 
time I have added an NFS-Ganesha NFS server on one server. Its memory 
usage is being sampled along with gluster's memory usage.


fsstress is used to run an I/O load over both the glusterfs and NFS mounts.
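
(The exact invocation varies per run; a representative sketch, with the
mount-point paths as placeholders, looks something like this:)

# fsstress -d /mnt/glusterfs-mount/stress -p 4 -n 10000
# fsstress -d /mnt/nfs-mount/stress -p 4 -n 10000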

Snapshots of RSZ and VSZ are collected hourly for glusterd, the 
glusterfsd brick processes, the glusterfs SHD processes, and the 
NFS-Ganesha ganesha.nfsd process. There are also hourly statedumps of 
the glusterfsd brick processes and the nfs-ganesha gluster FSAL which 
uses gfapi.
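
(Roughly speaking, each hourly sample boils down to something like the
following; this is a simplified sketch rather than the actual collection
script, and <volname> is a placeholder:)

# ps -C glusterd,glusterfsd,glusterfs,ganesha.nfsd -o pid,rsz,vsz,comm
# gluster volume statedump <volname>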


You can see the collected data at [2], or follow the link from [1].

[1] https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis
[2] 
https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/


--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-06 Thread ABHISHEK PALIWAL
Is there any update?

On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL wrote:

> Hi,
>
> We are currently experiencing a serious issue with volume space usage by
> GlusterFS.
>
> [rest of the original message trimmed; see the first post in this thread
> above]