down, it can just modify the storage graphs of the files which lost a replica,
and then a rebuild can be run with replace-brick operations.
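For reference, a minimal sketch of the replace-brick step meant here (the
volume name and brick paths are placeholders, and the exact syntax may differ
across releases):

    # swap the failed brick for a fresh one, then let self-heal rebuild its data
    gluster volume replace-brick VOLNAME server1:/bricks/dead server1:/bricks/new commit force
    gluster volume heal VOLNAME full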
Just a thought, any suggestion would be great!
Best regards,
Jaden Liang
5/25/2017
Hi all,
I have a glusterfs-3.4.5 build with a 6 x 2 Distributed-Replicate volume for
KVM storage, and found that one of the files is not in a consistent state. I
checked the extended attributes of every replica of the file on the bricks, as
below:
# file: sf/data/vs/local/d2c2bf42-0206-43db-824b-d2d3872ea42d/98d0e5d0-4
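For context, an attribute dump like the one above is typically produced with
getfattr on each brick that stores a copy of the file (the brick path below is
a placeholder):

    # run on every brick holding a replica of the file
    getfattr -d -m . -e hex /bricks/brick1/path/to/file
    # the trusted.afr.<volname>-client-N values indicate pending self-heal state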
Hi all,
I am running gluster-3.4.5 on 2 servers. Each of them has 7 x 2TB HDDs, used
to build a 7 x 2 distributed + replicated volume.
I just noticed that glusterd consumed about 120GB of memory and got a coredump
today. I read the mempool code to try to identify which mempool is eating the
memory. Unfortunately, th
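As a side note, one way to see per-mempool usage on a running gluster daemon
(not necessarily what we tried here) is the statedump facility:

    # ask the daemon to write a statedump, by default under /var/run/gluster
    kill -USR1 $(pidof glusterd)
    # then look at the pool-name / hot-count / cold-count entries in the dump
    grep -B1 -A4 pool-name /var/run/gluster/*dump*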
On Monday, November 10, 2014, Vijay Bellur wrote:
> On 11/08/2014 03:50 PM, Jaden Liang wrote:
>
>>
>> Hi all,
>>
>> We are testing the BD xlator to verify KVM running with gluster. After some
>> simple tests, we encountered a coredump of glusterfs
Hi all,
We are testing the BD xlator to verify KVM running with gluster. After some
simple tests, we encountered a coredump of glusterfs led by liblvm2app.so.
Hope someone here can give some advice about this issue.
We have been debugging for some time, and found out this coredump is triggered
by a
t
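For anyone trying to dig into this, a generic way to pull a backtrace from the
core (the binary and core paths here are illustrative):

    # load the core and show where inside liblvm2app.so it crashed
    gdb /usr/sbin/glusterfs /path/to/core
    (gdb) bt full
    (gdb) info sharedlibrary lvm2app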
> reviews till then.
>
> Cheers,
> Vijay
>
> On 09/19/2014 12:44 PM, Jaden Liang wrote:
>
>> Hi all,
>>
>> Here is a patch for the issue of file flocks being uncleanly detached on
>> disconnect in gluster-3.4.5.
>> I am a totally new guy in the gluster development work f
the flags list of open() */
+        uint32_t clnt_conn_id; /* connection id for each connection
+                                  in process_uuid, start with 0,
+                                  increase once a new conn
Hi all,
After several days of tracking, we finally pinpointed the reason why glusterfs
uncleanly detaches file flocks under frequent network disconnections. We are
now working on a patch to submit. Here are the details of the issue. Any
suggestions will be appreciated!
First of all, as I mentioned in
http://su
Hi all,
First of all, I have sent a mail about an FD leak on network failure before:
- http://supercolony.gluster.org/pipermail/gluster-devel/2014-August/041969.html
'About file descriptor leak in glusterfsd daemon after network failure'
Thanks to Niels de Vos for telling me there is a bug 1129787 rep
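For anyone trying to reproduce the leak, a crude but quick check is to watch
the brick daemon's descriptor count across the network failure (this assumes a
single glusterfsd process on the box):

    # count open fds held by the brick process before and after the network cut
    ls /proc/$(pgrep -x glusterfsd | head -n1)/fd | wc -l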
imum writing big files sequentially (Enterprise SATA disk spinning at
> 7200rpm reaches around 115MBps).
>
> Can you please explain which type of bricks you have on each server
> node?
>
> I'll try to emulate your setup and test it.
>
> Thank you!
>
>
>
> In the case of the gluster FUSE client, write data goes simultaneously to both
> server nodes, using half of the client's 1GbE port bandwidth for each copy,
> because replication is done on the client side; that results in a writing
> speed of around 50MBps (<60MBps).
>
> I hope this helps.
>
>
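Just to spell out the arithmetic behind that explanation: a 1GbE port carries
roughly 125MB/s of raw payload, and if the client pushes every byte to two
replicas over that same port, each copy can receive at most about 125 / 2 =
62MB/s before protocol overhead, which lines up with the quoted <60MBps ceiling
and the ~50MBps actually measured.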
hatty process?
On Tuesday, September 2, 2014, Jaden Liang wrote:
> Hello, gluster-devel and gluster-users team,
>
> We are running a performance test in a replica 1 volume and found that the
> single-file sequential write performance only gets about 50MB/s on 1Gbps
> Ethernet.
.
Now we are digging into the RPC mechanism in glusterfs. Still, we think this
issue may have been encountered by the gluster-devel or gluster-users teams.
Therefore, we would be grateful for any suggestions. Has anyone seen such an issue?
Best regards,
Jaden Liang
9/2/2014
--
Best regards,
Jaden Liang
released even after we stop the process using the file.
Why does glusterfsd open a new fd instead of reusing the original reopened
fd?
Does glusterfsd have any kind of mechanism to retrieve such fds?
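One possible way to inspect which fds the server side is actually holding (the
volume name is a placeholder, and the available statedump options may vary by
release):

    # dump the brick processes' state, including their fd tables, under /var/run/gluster
    gluster volume statedump VOLNAME fd
    # fd entries show up as fdentry[N] lines in the generated dump files
    grep -n fdentry /var/run/gluster/*.dump.*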
2014-08-20 21:54 GMT+08:00 Niels de Vos:
> On Wed, Aug 20, 2014 at 07:16:16PM +0800, Jaden Liang wrote:
> > H
y job.
So we want to look for some help here. Here are our questions:
1. Has this issue been solved? Or is it a known issue?
2. Does anyone know the file descriptor maintenance logic in
glusterfsd (server side)? When will the fd be closed or held?
Thank you very much.
--
Best
daemon crash, the directory cannot be remounted via mount.glusterfs unless I
kill all the testing processes that reference files or directories in the
volume. How can I restore the mount daemon without killing those processes?
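One possible workaround (not verified on this 3.4.5 setup) is a lazy unmount
followed by a fresh mount, which detaches the dead mount point without killing
the processes that still hold files open; the mount point and volume name below
are placeholders:

    # detach the stale mount point even though files are still open on it
    umount -l /mnt/glustervol
    # bring up a new client mount on the same path
    mount -t glusterfs server1:/VOLNAME /mnt/glustervol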
Best regards,
Jaden Liang
8/18/2014