[Gluster-devel] Just a thought: a better way to rebuild a replica when some bricks go down, rather than replace-brick

2017-05-26 Thread Jaden Liang
down, it can just modify the storage graphs of the files that lost a replica, then a rebuild can be run without replace-brick operations. Just a thought; any suggestion would be great! Best regards, Jaden Liang 5/25/2017

[Gluster-devel] In what kind of circumstance will the changelog trusted.afr.xxx of a file become 0xFFFFFFFF

2014-11-20 Thread Jaden Liang
Hi all, I have a glusterfs-3.4.5 build with a 6 x 2 Distributed-Replicate volume for KVM storage, and found that one of the files is not in a consistent state. I checked the extended attributes of every replica file on the bricks as below: # file: sf/data/vs/local/d2c2bf42-0206-43db-824b-d2d3872ea42d/98d0e5d0-4
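To make sense of a hex trusted.afr.* value from `getfattr -e hex` output like the above, it helps to know that AFR packs three big-endian 32-bit pending counters (data, metadata, entry) into the attribute. A minimal sketch of a decoder, assuming that layout (the helper name and sample value are illustrative, not from the thread):

```python
import struct

def decode_afr_changelog(hex_value: str) -> dict:
    """Split a trusted.afr.<volume>-client-N value (as printed by
    `getfattr -e hex`) into its three pending-operation counters."""
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    data, metadata, entry = struct.unpack(">III", raw[:12])
    return {"data": data, "metadata": metadata, "entry": entry}

# A data counter stuck at 0xFFFFFFFF, as in the subject line:
print(decode_afr_changelog("0xffffffff0000000000000000"))
# -> {'data': 4294967295, 'metadata': 0, 'entry': 0}
```

A nonzero counter means pending operations against that replica; an all-ones value is far beyond any plausible pending count, which is what makes the reported state suspicious.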

[Gluster-devel] Need some advice regarding glusterd memory leak up to 120GB

2014-11-12 Thread Jaden Liang
Hi all, I am running gluster-3.4.5 on 2 servers. Each of them has 7 2TB HDDs to build a 7 x 2 distributed + replicated volume. I just noticed that glusterd consumed about 120GB of memory and got a coredump today. I read the mempool code to try to identify which mempool ate the memory. Unfortunately, th

Re: [Gluster-devel] [Gluster-users] glusterfs crashed, led by liblvm2app.so with BD xlator

2014-11-10 Thread Jaden Liang
On Monday, November 10, 2014, Vijay Bellur wrote: > On 11/08/2014 03:50 PM, Jaden Liang wrote: > >> >> Hi all, >> >> We are testing BD xlator to verify the KVM running with gluster. After >> some >> simple tests, we encountered a coredump of glusterfs

Re: [Gluster-devel] [Gluster-users] glusterfs crashed, led by liblvm2app.so with BD xlator

2014-11-10 Thread Jaden Liang
On Monday, November 10, 2014, Vijay Bellur wrote: > On 11/08/2014 03:50 PM, Jaden Liang wrote: > >> >> Hi all, >> >> We are testing BD xlator to verify the KVM running with gluster. After >> some >> simple tests, we encountered a coredump of glusterfs

[Gluster-devel] [Gluster-users] glusterfs crashed, led by liblvm2app.so with BD xlator

2014-11-08 Thread Jaden Liang
Hi all, We are testing the BD xlator to verify KVM running with gluster. After some simple tests, we encountered a coredump of glusterfs led by liblvm2app.so. Hopefully someone here might give some advice about this issue. We have debugged for some time and found out this coredump is triggered by a t

Re: [Gluster-devel] Any review is appreciated. Reason gluster server_connection_cleanup cleans up uncleanly and file flocks leak on frequent network disconnection

2014-09-20 Thread Jaden Liang
gt; reviews till then. > > Cheers, > Vijay > > On 09/19/2014 12:44 PM, Jaden Liang wrote: > >> Hi all, >> >> Here is a patch for this file flocks uncleanly disconnect issue of >> gluster-3.4.5. >> I am totally new guy in the gluster development work f

Re: [Gluster-devel] Any review is appreciated. Reason gluster server_connection_cleanup cleans up uncleanly and file flocks leak on frequent network disconnection

2014-09-19 Thread Jaden Liang
the flags list of open() */
+uint32_t clnt_conn_id; /* connection id for each connection
+                          in process_uuid, start with 0,
+                          increase once a new conn

[Gluster-devel] Any review is appreciated. Reason gluster server_connection_cleanup cleans up uncleanly and file flocks leak on frequent network disconnection

2014-09-17 Thread Jaden Liang
Hi all, After several days of tracking, we finally pinpointed the reason glusterfs uncleanly detaches file flocks on frequent network disconnection. We are now working on a patch to submit. Here are the issue details. Any suggestions will be appreciated! First of all, as I mentioned in http://su

[Gluster-devel] Frequent network failures cause an FD leak after setting a short network.tcp-timeout

2014-09-11 Thread Jaden Liang
Hi all, First of all, I have sent a mail about an FD leak on network failure before. - http://supercolony.gluster.org/pipermail/gluster-devel/2014-August/041969.html 'About file descriptor leak in glusterfsd daemon after network failure' Thanks to Niels de Vos for telling me there is a bug 1129787 rep

Re: [Gluster-devel] [Gluster-users] Regarding the write performance in a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-10 Thread Jaden Liang
imum writing big files sequentially (Enterprise SATA disk spinning at > 7200rpm reaches around 115MBps). > > Can you, please, explain which type of bricks do you have on each server > node? > > I'll try to emulate your setup and test it. > > Thank you! > > >

Re: [Gluster-devel] [Gluster-users] Regarding the write performance in a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-03 Thread Jaden Liang
; In case of the gluster FUSE client, write data goes simultaneously to both > server nodes, using half the bandwidth of the client's 1GbE port for each, > because replication is done on the client side; that results in a writing > speed around 50MBps (<60MBps). > > I hope this helps. > >
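The bandwidth-halving argument quoted above can be checked with simple arithmetic (the function name and figures are illustrative, not from the thread):

```python
def replica_write_ceiling(link_mbit: float, replicas: int) -> float:
    """Upper bound on single-file write throughput in MB/s when the
    client sends every byte once per replica over one shared link."""
    wire_mb_per_s = link_mbit / 8          # 1000 Mbit/s -> 125 MB/s raw
    return wire_mb_per_s / replicas        # each copy gets a share

print(replica_write_ceiling(1000, 2))      # -> 62.5
```

A 62.5 MB/s theoretical ceiling, minus protocol and TCP overhead, is consistent with the observed ~50 MB/s (<60 MB/s) figure.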

Re: [Gluster-devel] [Gluster-users] Regarding the write performance in a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-02 Thread Jaden Liang
hatty process? On Tuesday, September 2, 2014, Jaden Liang wrote: > Hello, gluster-devel and gluster-users teams, > > We are running a performance test on a replica 1 volume and find that the > single-file sequential writing performance only gets about 50MB/s on 1Gbps > Ethernet.

[Gluster-devel] [Gluster-users] Regarding the write performance in a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-02 Thread Jaden Liang
. Now we are heading into the rpc mechanism in glusterfs. Still, we think this issue may have been encountered by the gluster-devel or gluster-users teams. Therefore, any suggestions would be appreciated. Or does anyone know of such an issue? Best regards, Jaden Liang 9/2/2014

Re: [Gluster-devel] About file descriptor leak in glusterfsd daemon after network failure

2014-08-25 Thread Jaden Liang
released even after stopping the file process. Why does glusterfsd open a new fd instead of reusing the original reopened fd? Does glusterfsd have any kind of mechanism to retrieve such fds? 2014-08-20 21:54 GMT+08:00 Niels de Vos : > On Wed, Aug 20, 2014 at 07:16:16PM +0800, Jaden Liang wrote: > > H

[Gluster-devel] About file descriptor leak in glusterfsd daemon after network failure

2014-08-20 Thread Jaden Liang
y job. So we want to look for some help here. Here are our questions: 1. Has this issue been solved, or is it a known issue? 2. Does anyone know the file descriptor maintenance logic in glusterfsd (server-side)? When will an fd be closed or held? Thank you very much. -- Best
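On Linux, a leak like the one described can be confirmed from outside the daemon by watching /proc/<pid>/fd grow across repeated network failures. A minimal sketch (the glusterfsd pid must be supplied; here it is demonstrated on the current process):

```python
import os

def open_fd_count(pid: str = "self") -> int:
    """Count open file descriptors of a process via /proc (Linux only).
    For the daemon in question one would pass the glusterfsd pid,
    e.g. open_fd_count("12345")."""
    return len(os.listdir(f"/proc/{pid}/fd"))

print("open fds in this process:", open_fd_count())
```

Sampling this count before and after each simulated network failure makes a leak visible as a monotonically growing number that never returns to baseline.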

[Gluster-devel] About restart glusterfs mount.fuse daemon

2014-08-18 Thread Jaden Liang
daemon crash, the directory cannot be remounted via mount.glusterfs unless I kill all the testing processes that reference files or directories in the volume. How can I restore the mount daemon without killing those referencing processes? Best regards, Jaden Liang 8/18/2014