Re: [Gluster-devel] performance test and a special workload

2017-04-16 Thread Vijay Bellur
On Sun, Apr 16, 2017 at 2:52 AM, Tahereh Fattahi 
wrote:

> Hi
> I want to create a performance test with a special workload:
> 1. create a file in a directory
> 2. setxattr on the directory of the previous file
> I could not merge these two in the gluster code and could not find a framework
> that generates this workload for me.
> I read the code of smallfile (a framework for performance testing that the
> gluster documentation introduces); maybe there is a way to change the code of
> this software to do a setxattr on the directory after creating a file.
> Which one is better to spend time on: changing the gluster code to merge the
> two operations, or changing the smallfile code? Can anyone help me?
>


What performance metrics are you interested in measuring?

If you are interested in measuring time, a small bash script like:

time for i in $(seq 1 $N)    # N is the number of files to create
do
    touch /mnt/glusterfs/foo/file.$i
    setfattr -n <name> -v <value> /mnt/glusterfs/foo
done

would be simpler than either approach.

Regards,
Vijay

Re: [Gluster-devel] [Nfs-ganesha-devel] Device or resource busy when runltp cleanup test-files

2017-04-16 Thread Vijay Bellur
On Wed, Apr 12, 2017 at 10:37 PM, Kinglong Mee 
wrote:

> Yes, this one is silly rename,
>
> >>> rm: cannot remove ‘/mnt/nfs/ltp-JEYAuky2dz/.nfsaa46457a6a72f8ea14f5’:
> Device or resource busy
>
> But, the other one isn't client's silly rename,
>
>  rm: cannot remove ‘/mnt/nfs/ltp-JEYAuky2dz/rmderQsjV’: Directory
> not empty
>
> Also, the second one often appears when testing ganesha nfs with ltp.
>
> I tried to get a tcpdump and found that, when running ls under the nfs client
> directory, the glusterfs client doesn't send readdir to the glusterfs server,
> so the nfs client gets empty directory information even though the directory
> is not empty on the underlying filesystem.
>
> When running ls on some other directory, the glusterfs client sends readdir to
> the glusterfs server, and the nfs client gets directory information that
> matches the underlying filesystem.
>
> So, I guess maybe there are some problems in MDCACHE or the glusterfs client?
>
> ]# cat /var/lib/glusterd/vols/gvtest/trusted-gvtest.tcp-fuse.vol
> volume gvtest-client-0
> type protocol/client
> option send-gids true
> option password d80765c8-95ae-46ed-912a-0c98fdd1e7cd
> option username 7bc6276f-245c-477a-99d0-7ead4fcb2968
> option transport.address-family inet
> option transport-type tcp
> option remote-subvolume /gluster-test/gvtest
> option remote-host 192.168.9.111
> option ping-timeout 42
> end-volume
>
> volume gvtest-client-1
> type protocol/client
> option send-gids true
> option password d80765c8-95ae-46ed-912a-0c98fdd1e7cd
> option username 7bc6276f-245c-477a-99d0-7ead4fcb2968
> option transport.address-family inet
> option transport-type tcp
> option remote-subvolume /gluster-test/gvtest
> option remote-host 192.168.9.111
> option ping-timeout 42
> end-volume
>
> volume gvtest-dht
> type cluster/distribute
> option lock-migration off
> subvolumes gvtest-client-0 gvtest-client-1
> end-volume
>
> volume gvtest
> type debug/io-stats
> option count-fop-hits off
> option latency-measurement off
> option log-level INFO
> subvolumes gvtest-dht
> end-volume
>
>
>

Have you checked the gluster export directories after receiving ENOTEMPTY?
If you observe any files there, can you please check if they are 0-byte
files with access mode 600 and sticky bit set?

When a file is being renamed, if the hash value of its new name happens to
fall in a range that is different from that of the old name, gluster
creates one such file. In the past we have seen instances of these files
not being cleaned up properly in some cases, and that causes an ENOTEMPTY
when rmdir lands on the parent directory. I am trying to see if we are
hitting the same problem here.
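
If it helps, the sketch below is one way to look for such link-to files on a
brick; /path/to/brick is a placeholder for the brick's export directory, and
the commands assume they are run as root so the trusted.* xattrs are visible:

# list 0-byte files that have the sticky bit set, skipping .glusterfs itself
find /path/to/brick -path '*/.glusterfs' -prune -o -type f -size 0 -perm -1000 -print

# a DHT link-to file also carries the trusted.glusterfs.dht.linkto xattr,
# which names the subvolume holding the real file
getfattr -d -m . -e text /path/to/brick/path/to/suspect-file

If such entries remain inside the directory that failed to be removed, that
would point to the stale link-to file problem described above.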

Regards,
Vijay

Re: [Gluster-devel] Is anyone else having trouble authenticating with review.gluster.org over ssh?

2017-04-16 Thread Raghavendra Talur
On Sun, Apr 16, 2017 at 9:07 PM, Raghavendra Talur  wrote:
> I am not able to login even after specifying the key file
>
> $ ssh -T -vvv -i ~/.ssh/gluster raghavendra-ta...@git.gluster.org
> OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
> debug1: Reading configuration data /home/rtalur/.ssh/config
> debug1: Reading configuration data /etc/ssh/ssh_config
> debug3: /etc/ssh/ssh_config line 56: Including file
> /etc/ssh/ssh_config.d/05-redhat.conf depth 0
> debug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf
> debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 2: include
> /etc/crypto-policies/back-ends/openssh.txt matched no files
> debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options for *
> debug1: auto-mux: Trying existing master
> debug1: Control socket
> "/tmp/ssh_mux_git.gluster.org_22_raghavendra-talur" does not exist
> debug2: resolving "git.gluster.org" port 22
> debug2: ssh_connect_direct: needpriv 0
> debug1: Connecting to git.gluster.org [8.43.85.171] port 22.
> debug1: Connection established.
> debug1: identity file /home/rtalur/.ssh/gluster type 1
> debug1: key_load_public: No such file or directory
> debug1: identity file /home/rtalur/.ssh/gluster-cert type -1
> debug1: Enabling compatibility mode for protocol 2.0
> debug1: Local version string SSH-2.0-OpenSSH_7.4
> ssh_exchange_identification: Connection closed by remote host

Confirmed with Pranith that he is facing the same issue.

Nigel/Misc,
Please have a look.


>
> Thanks,
> Raghavendra Talur


Re: [Gluster-devel] Is anyone else having trouble authenticating with review.gluster.org over ssh?

2017-04-16 Thread Vijay Bellur
On Sun, Apr 16, 2017 at 11:44 AM, Raghavendra Talur 
wrote:

> On Sun, Apr 16, 2017 at 9:07 PM, Raghavendra Talur 
> wrote:
> > I am not able to login even after specifying the key file
> >
> > $ ssh -T -vvv -i ~/.ssh/gluster raghavendra-ta...@git.gluster.org
> > OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
> > debug1: Reading configuration data /home/rtalur/.ssh/config
> > debug1: Reading configuration data /etc/ssh/ssh_config
> > debug3: /etc/ssh/ssh_config line 56: Including file
> > /etc/ssh/ssh_config.d/05-redhat.conf depth 0
> > debug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf
> > debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 2: include
> > /etc/crypto-policies/back-ends/openssh.txt matched no files
> > debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options
> for *
> > debug1: auto-mux: Trying existing master
> > debug1: Control socket
> > "/tmp/ssh_mux_git.gluster.org_22_raghavendra-talur" does not exist
> > debug2: resolving "git.gluster.org" port 22
> > debug2: ssh_connect_direct: needpriv 0
> > debug1: Connecting to git.gluster.org [8.43.85.171] port 22.
> > debug1: Connection established.
> > debug1: identity file /home/rtalur/.ssh/gluster type 1
> > debug1: key_load_public: No such file or directory
> > debug1: identity file /home/rtalur/.ssh/gluster-cert type -1
> > debug1: Enabling compatibility mode for protocol 2.0
> > debug1: Local version string SSH-2.0-OpenSSH_7.4
> > ssh_exchange_identification: Connection closed by remote host
>
> Confirmed with Pranith that he is facing same issue.
>


One of my jenkins jobs also bailed out since it was unable to clone from
r.g.o.

Thanks,
Vijay

[Gluster-devel] git pull failed!

2017-04-16 Thread Zhengping Zhou
Did anyone encounter this problem when executing "git pull"? The output is:


[zzp@42 gluster]$ git pull
ssh_exchange_identification: Connection closed by remote host
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.


I have changed my private key and added the new public key to my
configuration in the gerrit system.

But the problem is still not fixed!


Re: [Gluster-devel] [Gluster-users] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
Hi All,

Here we have below steps to reproduce the issue

Reproduction steps:



root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
- create the gluster volume

volume create: brick: success: please start the volume to access data

root@128:~# gluster volume set brick nfs.disable true

volume set: success

root@128:~# gluster volume start brick

volume start: brick: success

root@128:~# gluster volume info

Volume Name: brick

Type: Distribute

Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: 128.224.95.140:/tmp/brick

Options Reconfigured:

nfs.disable: true

performance.readdir-ahead: on

root@128:~# gluster volume status

Status of volume: brick

Gluster process TCP Port RDMA Port Online Pid


--

Brick 128.224.95.140:/tmp/brick 49155 0 Y 768



Task Status of Volume brick


--

There are no active volume tasks



root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/

root@128:~# cd gluster/

root@128:~/gluster# du -sh

0 .

root@128:~/gluster# mkdir -p test/

root@128:~/gluster# cp ~/tmp.file gluster/

root@128:~/gluster# cp tmp.file test

root@128:~/gluster# cd /tmp/brick

root@128:/tmp/brick# du -sh *

768K test

768K tmp.file

root@128:/tmp/brick# rm -rf test - delete the test directory and
data in the server side, not reasonable

root@128:/tmp/brick# ls

tmp.file

root@128:/tmp/brick# du -sh *

768K tmp.file

*root@128:/tmp/brick# du -sh (brick dir)*

*1.6M .*

root@128:/tmp/brick# cd .glusterfs/

root@128:/tmp/brick/.glusterfs# du -sh *

0 00

0 2a

0 bb

768K c8

0 c9

0 changelogs

768K d0

4.0K health_check

0 indices

0 landfill

*root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*

*1.6M .*

root@128:/tmp/brick# cd ~/gluster

root@128:~/gluster# ls

tmp.file

*root@128:~/gluster# du -sh * (Mount dir)*

*768K tmp.file*



In the reproduction steps, we delete the test directory on the server side,
not on the client side. I think this delete operation is not reasonable.
Please ask the customer to check whether they perform this unreasonable
operation.


*It seems that when data is deleted from the BRICK directly, the metadata is
not deleted from the .glusterfs directory.*


*I don't know whether this is a bug or a limitation; please let us know.*
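
For context on why the space stays accounted under .glusterfs: for every
regular file, glusterfs keeps a hard link on the brick under
.glusterfs/<aa>/<bb>/<gfid> (where aa and bb are the first two byte pairs of
the file's gfid), so removing a file directly from the brick path drops only
one of the two links and the data remains reachable through .glusterfs. A
minimal check against the reproduction above, run as root on the brick:

# read the gfid of a file on the brick
getfattr -n trusted.gfid -e hex /tmp/brick/tmp.file

# the same inode is hard-linked under .glusterfs (note the link count of 2)
ls -li /tmp/brick/tmp.file
find /tmp/brick/.glusterfs -samefile /tmp/brick/tmp.file

Deleting files through the mount point removes both links, which is why a
normal client-side rm does not leave anything behind under .glusterfs.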


Regards,

Abhishek


On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> yes it is ext4. but what is the impact of this.
>>
>
> Did you have a lot of data before and you deleted all that data? ext4 if I
> remember correctly doesn't decrease size of directory once it expands it.
> So in ext4 inside a directory if you create lots and lots of files and
> delete them all, the directory size would increase at the time of creation
> but won't decrease after deletion. I don't have any system with ext4 at the
> moment to test it now. This is something we faced 5-6 years back but not
> sure if it is fixed in ext4 in the latest releases.
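
As a side note, the ext4 behaviour described in the quoted paragraph above can
be checked on any scratch ext4 mount; /mnt/scratch is a placeholder, and
whether the directory ever shrinks back may depend on the kernel/ext4 version:

mkdir /mnt/scratch/dirtest
ls -ld /mnt/scratch/dirtest    # a fresh directory occupies a single 4.0K block
seq -f "/mnt/scratch/dirtest/f.%g" 1 100000 | xargs touch
ls -ld /mnt/scratch/dirtest    # the directory inode has grown to a few MB
find /mnt/scratch/dirtest -type f -delete
ls -ld /mnt/scratch/dirtest    # the size typically stays at the expanded value
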
>
>
>>
>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> Yes
>>>
>>> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Means the fs where this brick has been created?
 On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" 
 wrote:

> Is your backend filesystem ext4?
>
> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> No,we are not using sharding
>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi"  wrote:
>>
>>> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>>>
>>> I have did more investigation and find out that brick dir size is
>>> equivalent to gluster mount point but .glusterfs having too much 
>>> difference
>>>
>>>
>>> You are probably using sharding?
>>>
>>>
>>> Buon lavoro.
>>> *Alessandro Briosi*
>>>
>>> *METAL.it Nord S.r.l.*
>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>> www.metalit.com
>>>
>>>
>>>
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>

>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal

Re: [Gluster-devel] [Gluster-users] Glusterfs meta data space consumption issue

2017-04-16 Thread Atin Mukherjee
On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL 
wrote:

> Hi All,
>
> Here we have below steps to reproduce the issue
>
> Reproduction steps:
>
>
>
> root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
> - create the gluster volume
>
> volume create: brick: success: please start the volume to access data
>
> root@128:~# gluster volume set brick nfs.disable true
>
> volume set: success
>
> root@128:~# gluster volume start brick
>
> volume start: brick: success
>
> root@128:~# gluster volume info
>
> Volume Name: brick
>
> Type: Distribute
>
> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>
> Status: Started
>
> Number of Bricks: 1
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: 128.224.95.140:/tmp/brick
>
> Options Reconfigured:
>
> nfs.disable: true
>
> performance.readdir-ahead: on
>
> root@128:~# gluster volume status
>
> Status of volume: brick
>
> Gluster process TCP Port RDMA Port Online Pid
>
> 
> --
>
> Brick 128.224.95.140:/tmp/brick 49155 0 Y 768
>
>
>
> Task Status of Volume brick
>
> 
> --
>
> There are no active volume tasks
>
>
>
> root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>
> root@128:~# cd gluster/
>
> root@128:~/gluster# du -sh
>
> 0 .
>
> root@128:~/gluster# mkdir -p test/
>
> root@128:~/gluster# cp ~/tmp.file gluster/
>
> root@128:~/gluster# cp tmp.file test
>
> root@128:~/gluster# cd /tmp/brick
>
> root@128:/tmp/brick# du -sh *
>
> 768K test
>
> 768K tmp.file
>
> root@128:/tmp/brick# rm -rf test - delete the test directory and
> data in the server side, not reasonable
>
> root@128:/tmp/brick# ls
>
> tmp.file
>
> root@128:/tmp/brick# du -sh *
>
> 768K tmp.file
>
> *root@128:/tmp/brick# du -sh (brick dir)*
>
> *1.6M .*
>
> root@128:/tmp/brick# cd .glusterfs/
>
> root@128:/tmp/brick/.glusterfs# du -sh *
>
> 0 00
>
> 0 2a
>
> 0 bb
>
> 768K c8
>
> 0 c9
>
> 0 changelogs
>
> 768K d0
>
> 4.0K health_check
>
> 0 indices
>
> 0 landfill
>
> *root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>
> *1.6M .*
>
> root@128:/tmp/brick# cd ~/gluster
>
> root@128:~/gluster# ls
>
> tmp.file
>
> *root@128:~/gluster# du -sh * (Mount dir)*
>
> *768K tmp.file*
>
>
>
> In the reproduction steps, we delete the test directory on the server side,
> not on the client side. I think this delete operation is not reasonable.
> Please ask the customer to check whether they perform this unreasonable
> operation.
>

What's the need of deleting data from backend (i.e bricks) directly?


> *It seems that when data is deleted from the BRICK directly, the metadata is
> not deleted from the .glusterfs directory.*
>
>
> *I don't know whether this is a bug or a limitation; please let us know.*
>
>
> Regards,
>
> Abhishek
>
>
> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> yes it is ext4. but what is the impact of this.
>>>
>>
>> Did you have a lot of data before and you deleted all that data? ext4 if
>> I remember correctly doesn't decrease size of directory once it expands it.
>> So in ext4 inside a directory if you create lots and lots of files and
>> delete them all, the directory size would increase at the time of creation
>> but won't decrease after deletion. I don't have any system with ext4 at the
>> moment to test it now. This is something we faced 5-6 years back but not
>> sure if it is fixed in ext4 in the latest releases.
>>
>>
>>>
>>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 Yes

 On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> Means the fs where this brick has been created?
> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <
> pkara...@redhat.com> wrote:
>
>> Is your backend filesystem ext4?
>>
>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> No,we are not using sharding
>>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" 
>>> wrote:
>>>
 Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:

 I have did more investigation and find out that brick dir size is
 equivalent to gluster mount point but .glusterfs having too much 
 difference


 You are probably using sharding?


 Buon lavoro.
 *Alessandro Briosi*

 *METAL.it Nord S.r.l.*
 Via Maioliche 57/C - 38068 Rovereto (TN)
 Tel.+39.0464.430130 - Fax +39.0464.437393
 www.metalit.com



>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/glus

Re: [Gluster-devel] [Gluster-users] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
There is no need, but it could happen accidentally, and I think it should be
protected against or should not be permitted.



On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee  wrote:

>
>
> On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL 
> wrote:
>
>> Hi All,
>>
>> Here we have below steps to reproduce the issue
>>
>> Reproduction steps:
>>
>>
>>
>> root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
>> - create the gluster volume
>>
>> volume create: brick: success: please start the volume to access data
>>
>> root@128:~# gluster volume set brick nfs.disable true
>>
>> volume set: success
>>
>> root@128:~# gluster volume start brick
>>
>> volume start: brick: success
>>
>> root@128:~# gluster volume info
>>
>> Volume Name: brick
>>
>> Type: Distribute
>>
>> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>>
>> Status: Started
>>
>> Number of Bricks: 1
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: 128.224.95.140:/tmp/brick
>>
>> Options Reconfigured:
>>
>> nfs.disable: true
>>
>> performance.readdir-ahead: on
>>
>> root@128:~# gluster volume status
>>
>> Status of volume: brick
>>
>> Gluster process TCP Port RDMA Port Online Pid
>>
>> 
>> --
>>
>> Brick 128.224.95.140:/tmp/brick 49155 0 Y 768
>>
>>
>>
>> Task Status of Volume brick
>>
>> 
>> --
>>
>> There are no active volume tasks
>>
>>
>>
>> root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>>
>> root@128:~# cd gluster/
>>
>> root@128:~/gluster# du -sh
>>
>> 0 .
>>
>> root@128:~/gluster# mkdir -p test/
>>
>> root@128:~/gluster# cp ~/tmp.file gluster/
>>
>> root@128:~/gluster# cp tmp.file test
>>
>> root@128:~/gluster# cd /tmp/brick
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K test
>>
>> 768K tmp.file
>>
>> root@128:/tmp/brick# rm -rf test - delete the test directory and
>> data in the server side, not reasonable
>>
>> root@128:/tmp/brick# ls
>>
>> tmp.file
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K tmp.file
>>
>> *root@128:/tmp/brick# du -sh (brick dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd .glusterfs/
>>
>> root@128:/tmp/brick/.glusterfs# du -sh *
>>
>> 0 00
>>
>> 0 2a
>>
>> 0 bb
>>
>> 768K c8
>>
>> 0 c9
>>
>> 0 changelogs
>>
>> 768K d0
>>
>> 4.0K health_check
>>
>> 0 indices
>>
>> 0 landfill
>>
>> *root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd ~/gluster
>>
>> root@128:~/gluster# ls
>>
>> tmp.file
>>
>> *root@128:~/gluster# du -sh * (Mount dir)*
>>
>> *768K tmp.file*
>>
>>
>>
>> In the reproduction steps, we delete the test directory on the server side,
>> not on the client side. I think this delete operation is not reasonable.
>> Please ask the customer to check whether they perform this unreasonable
>> operation.
>>
>
> What's the need of deleting data from backend (i.e bricks) directly?
>
>
>> *It seems that when data is deleted from the BRICK directly, the metadata is
>> not deleted from the .glusterfs directory.*
>>
>>
>> *I don't know whether this is a bug or a limitation; please let us know.*
>>
>>
>> Regards,
>>
>> Abhishek
>>
>>
>> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 yes it is ext4. but what is the impact of this.

>>>
>>> Did you have a lot of data before and you deleted all that data? ext4 if
>>> I remember correctly doesn't decrease size of directory once it expands it.
>>> So in ext4 inside a directory if you create lots and lots of files and
>>> delete them all, the directory size would increase at the time of creation
>>> but won't decrease after deletion. I don't have any system with ext4 at the
>>> moment to test it now. This is something we faced 5-6 years back but not
>>> sure if it is fixed in ext4 in the latest releases.
>>>
>>>

 On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

> Yes
>
> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Means the fs where this brick has been created?
>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <
>> pkara...@redhat.com> wrote:
>>
>>> Is your backend filesystem ext4?
>>>
>>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 No,we are not using sharding
 On Apr 12, 2017 7:29 PM, "Alessandro Briosi" 
 wrote:

> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>
> I have did more investigation and find out that brick dir size is
> equivalent to gluster mount point but .glusterfs having too much 
> difference
>
>
> You are probably using sharding?
>
>
> Buon lavoro.

Re: [Gluster-devel] git pull failed!

2017-04-16 Thread Raghavendra Talur
Seems to be fixed now.

On Mon, Apr 17, 2017 at 5:37 AM, Zhengping Zhou
 wrote:
> Did anyone encounter this problem when executing "git pull"? The output is:
>
>
> [zzp@42 gluster]$ git pull
> ssh_exchange_identification: Connection closed by remote host
> fatal: Could not read from remote repository.
>
> Please make sure you have the correct access rights
> and the repository exists.
>
>
> I have changed my private key and added the new public key to my
> configuration in the gerrit system.
>
> But the problem is still not fixed!
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] git pull failed!

2017-04-16 Thread Nigel Babu
This should be fixed now. See
https://bugzilla.redhat.com/show_bug.cgi?id=1442672

On Mon, Apr 17, 2017 at 5:37 AM, Zhengping Zhou 
wrote:

> Did anyone encounter this problem when executing "git pull"? The output
> is:
>
>
> [zzp@42 gluster]$ git pull
> ssh_exchange_identification: Connection closed by remote host
> fatal: Could not read from remote repository.
>
> Please make sure you have the correct access rights
> and the repository exists.
>
>
> I have changed my private key and added the new public key to my
> configuration in the gerrit system.
>
> But the problem is still not fixed!
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
nigelb

Re: [Gluster-devel] Is anyone else having trouble authenticating with review.gluster.org over ssh?

2017-04-16 Thread Nigel Babu
This should be fixed now: https://bugzilla.redhat.com/show_bug.cgi?id=1442672

Vijay, can you link me to your failed Jenkins job? Jenkins should have been
able to clone since it uses the git protocol and not SSH.

On Sun, Apr 16, 2017 at 9:25 PM, Vijay Bellur  wrote:

>
>
> On Sun, Apr 16, 2017 at 11:44 AM, Raghavendra Talur 
> wrote:
>
>> On Sun, Apr 16, 2017 at 9:07 PM, Raghavendra Talur 
>> wrote:
>> > I am not able to login even after specifying the key file
>> >
>> > $ ssh -T -vvv -i ~/.ssh/gluster raghavendra-ta...@git.gluster.org
>> > OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
>> > debug1: Reading configuration data /home/rtalur/.ssh/config
>> > debug1: Reading configuration data /etc/ssh/ssh_config
>> > debug3: /etc/ssh/ssh_config line 56: Including file
>> > /etc/ssh/ssh_config.d/05-redhat.conf depth 0
>> > debug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf
>> > debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 2: include
>> > /etc/crypto-policies/back-ends/openssh.txt matched no files
>> > debug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options
>> for *
>> > debug1: auto-mux: Trying existing master
>> > debug1: Control socket
>> > "/tmp/ssh_mux_git.gluster.org_22_raghavendra-talur" does not exist
>> > debug2: resolving "git.gluster.org" port 22
>> > debug2: ssh_connect_direct: needpriv 0
>> > debug1: Connecting to git.gluster.org [8.43.85.171] port 22.
>> > debug1: Connection established.
>> > debug1: identity file /home/rtalur/.ssh/gluster type 1
>> > debug1: key_load_public: No such file or directory
>> > debug1: identity file /home/rtalur/.ssh/gluster-cert type -1
>> > debug1: Enabling compatibility mode for protocol 2.0
>> > debug1: Local version string SSH-2.0-OpenSSH_7.4
>> > ssh_exchange_identification: Connection closed by remote host
>>
>> Confirmed with Pranith that he is facing same issue.
>>
>
>
> One of my jenkins jobs also bailed out since it was unable to clone from
> r.g.o.
>
> Thanks,
> Vijay
>
>


-- 
nigelb

[Gluster-devel] Release 3.10.2: Scheduled for the 30th of April

2017-04-16 Thread Raghavendra Talur
Hi,

It's time to prepare the 3.10.2 release, which falls on the 30th of
the month, and hence would be April 30th, 2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.10.2? If so, mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail.

2) Pending reviews in the 3.10 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.10 and get
these going.

3) I have checked what went into 3.8 after the 3.10 release and whether
those fixes are included in the 3.10 branch; the status on this is *green*,
as all fixes ported to 3.8 have also been ported to 3.10.

4) Empty release notes are posted here [3]. If there are any specific
call-outs for 3.10 beyond bugs, please update the review, or leave a
comment in the review, for me to pick up.

Thanks,
Raghavendra Talur

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.2

[2] 3.10 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-10-dashboard

[3] Release notes WIP: https://review.gluster.org/#/c/17063/