Hi,
I have a question about the directory brick_dir/.glusterfs/00/00.
I created a replicate gluster volume, which has two bricks, on two nodes.
On one node I found that when I run "ls /gluster_mount_point" it shows no files.
But on the other node when I run "ls /gluster_mount_point" it shows all files.
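For reference, on a 3.7.x brick the 00/00 subdirectory normally holds just the entry for the root gfid, which links back to the brick root. A minimal way to inspect it (the brick path is an example, not from the mail):

# list the root gfid entry on a brick
ls -l /data/brick/gv0/.glusterfs/00/00/
# expected output is a symlink like:
# 00000000-0000-0000-0000-000000000001 -> ../../..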
Hi Kaushal,
That is great.
This patch could fix my issue.
Thanks,
Xin
At 2016-11-25 14:57:56, "Kaushal M" wrote:
>On Fri, Nov 25, 2016 at 12:03 PM, songxin wrote:
>> Hi Atin
>> I found a problem: the client (glusterfs) will not try to
>>
At 12:06 PM, songxin wrote:
Hi Atin,
Do you mean that you have a workaround available now?
Or will it take time to design the workaround?
If you have a workaround now, could you share it with me?
If you end up having a 0-byte info file, you'd need to copy the same info
file from the other node.
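In practice that workaround might look like the following sketch (the volume name gv0 and the hostname other-node are assumptions; run it on the node with the 0-byte file while glusterd is stopped there):

systemctl stop glusterd
# overwrite the empty file with the healthy copy from the other node
scp other-node:/var/lib/glusterd/vols/gv0/info /var/lib/glusterd/vols/gv0/info
systemctl start glusterd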
Hi Kaushal,
Thank you for your reply.
I will verify whether this patch fixes my problem.
Thanks,
Xin
At 2016-11-25 14:57:56, "Kaushal M" wrote:
>On Fri, Nov 25, 2016 at 12:03 PM, songxin wrote:
>> Hi Atin
>> I found a problem: the client (glus
me more time to pick this item up
from my backlog. I believe we have a workaround applicable here too.
On Thu, 24 Nov 2016 at 14:24, songxin wrote:
Hi Atin,
Actually, glusterfs is used in my project.
And our test team found this issue.
So I want to make sure whether you plan to fix
Hi Atin
I found a problem: the client (glusterfs) will not try to
reconnect to the server (glusterfsd) after a disconnect.
It seems to be caused by a race condition.
Precondition
The glusterfs version is 3.7.6.
I created a replicate volume using two nodes, an A node and a B node. One brick i
we recently added a validation to fail the delete
request if one of the glusterd instances is down. I'll get back to you on this.
On Mon, 21 Nov 2016 at 07:24, songxin wrote:
Hi Atin,
Thank you for your support.
Are there any conclusions about this issue?
Thanks,
Xin
At 2016-11-16 20:59:05, "Atin Mukherjee" wrote:
Hi everyone,
I created a replicate volume using two nodes, an A board and a B board.
A board ip:10.32.1.144
B board ip:10.32.0.48
One brick and mount point is on A board
Another brick is on B board
I found that I can't access the mount point because a disconnection happened
between the client and the two
from the cluster. However, we need to revisit this code to see if this
function is needed anymore, given that we recently added a validation to fail the
delete request if one of the glusterd instances is down. I'll get back to you on this.
On Mon, 21 Nov 2016 at 07:24, songxin wrote:
Hi Atin,
Thank you for your
Hi Atin,
Thank you for your support.
Are there any conclusions about this issue?
Thanks,
Xin
At 2016-11-16 20:59:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 1:53 PM, songxin wrote:
ok, thank you.
At 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
On Tue, Nov
At 2016-11-16 20:59:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 1:53 PM, songxin wrote:
ok, thank you.
At 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 12:47 PM, songxin wrote:
Hi Atin,
I think the root cause
OK, thank you for your reply.
At 2016-11-16 17:59:34, "Serkan Çoban" wrote:
>Below link has changes in each release.
>https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
>
>
>On Wed, Nov 16, 2016 at 11:49 AM, songxin wrote:
>> Hi,
Hi,
I am planning to migrate from gluster 3.7.6 to gluster 3.7.10.
So I have two questions below.
1.How can I find out what changed between gluster 3.7.6 and gluster 3.7.10?
2.Does my application need any NBC changes?
Thanks,
Xin
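Not an official procedure, but a rolling upgrade within the 3.7 line typically looks roughly like this sketch (one node at a time; the volume name gv0 is an assumption):

systemctl stop glusterd
pkill glusterfsd; pkill glusterfs   # stop remaining brick and mount processes
# ... upgrade the glusterfs packages here ...
systemctl start glusterd
gluster volume heal gv0 info        # confirm heals finish before the next node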
ok, thank you.
At 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 12:47 PM, songxin wrote:
Hi Atin,
I think the root cause is in the function glusterd_import_friend_volume, as
below.
int32_t
glusterd_import_friend_volume (dict_t *peer_data, si
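Since the suspicion is that this import path wipes the on-disk volume store, a quick hedged check for the symptom is to look for empty or missing store files after a glusterd restart (the volume name and paths are assumptions):

# flag any zero-length or missing store files for the volume
for f in /var/lib/glusterd/vols/gv0/info /var/lib/glusterd/vols/gv0/bricks/*; do
    [ -s "$f" ] || echo "empty or missing: $f"
done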
Any idea, Atin?
Thanks,
Xin
At 2016-11-15 12:07:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
Hi Atin,
I have some clues about this issue.
I could reproduce this issue using the script mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
Hi Atin,
Now I know that the info and bricks/* files are removed by the function
glusterd_delete_stale_volume().
But I do not know how to solve this issue.
Thanks,
Xin
At 2016-11-15 12:07:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
Hi Atin,
wrote:
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
Hi Atin,
I have some clues about this issue.
I could reproduce this issue using the script mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
I really appreciate your help in trying to nail down this issue. While I a
At 2016-11-11 20:34:05, "Atin Mukherjee" wrote:
On Fri, Nov 11, 2016 at 4:00 PM, songxin wrote:
Hi Atin,
Thank you for your support.
Sincerely wait for your reply.
By the way, could you confirm whether the issue (the info file being empty) is
caused by the rename being interrupted in the kernel?
As per my
Hi Atin,
Thank you for your support.
Sincerely wait for your reply.
By the way, could you confirm whether the issue (the info file being empty) is
caused by the rename being interrupted in the kernel?
Thanks,
Xin
At 2016-11-11 15:49:02, "Atin Mukherjee" wrote:
On Fri, Nov 11, 2016 at 1:15 PM, song
At 2016-11-11 15:27:03, "Atin Mukherjee" wrote:
On Fri, Nov 11, 2016 at 12:38 PM, songxin wrote:
Hi Atin,
Thank you for your reply.
As you said, the info file can only be changed sequentially in
glusterd_store_volinfo() because of the big lock.
I have found the similar
Xin
At 2016-11-11 14:36:40, "Atin Mukherjee" wrote:
On Fri, Nov 11, 2016 at 8:33 AM, songxin wrote:
Hi Atin,
Thank you for your reply.
I have two questions for you.
1.Are the two files info and info.tmp only created or changed in the
function glusterd_store_volinfo()? I did not
landed up once, but we couldn't reproduce it; my guess is that something is wrong
with the atomic update here. I'll be glad if you have a reproducer for
the same, and then we can dig into it further.
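For context, the "atomic update" in question is the usual write-to-temp-then-rename pattern; a minimal sketch of the idea in shell (the volume name and path are assumptions):

new_volinfo="...serialized volume metadata..."   # placeholder
# write the new contents to info.tmp first
printf '%s\n' "$new_volinfo" > /var/lib/glusterd/vols/gv0/info.tmp
# rename over the old file is atomic on the same filesystem, so readers see
# either the old or the new file in full; a 0-byte info file suggests the
# sequence was interrupted before the rename completed
mv /var/lib/glusterd/vols/gv0/info.tmp /var/lib/glusterd/vols/gv0/info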
On Thu, Nov 10, 2016 at 1:32 PM, songxin wrote:
Hi,
When I start the glusterd some
Hi,
When I start the glusterd some error happened.
And the log is following.
[2016-11-08 07:58:34.989365] I [MSGID: 100030] [glusterfsd.c:2318:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6 (args:
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2016-
Hi,
I have a question about files in .glusterfs/indices/xattrop/.
reproduce:
1.create a replicate volume using two bricks
2.kill A brick process
3.create some files in mount point
4.run "gluster volume heal gv0 info"; it shows some files that need healing
5.running "ls B_brick/.glusterfs/indices/xattrop/" shows the gfids of
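The entries in that directory are named after the gfids of files with pending changes. A hedged sketch of mapping one such gfid back to a brick path (the brick path is an example; this works for regular files, whose gfid entries under .glusterfs are hardlinks):

brick=/data/brick/gv0
# pick one gfid entry, skipping the xattrop-* base file
gfid=$(ls "$brick/.glusterfs/indices/xattrop/" | grep -v '^xattrop' | head -n 1)
# find the real path sharing an inode with the gfid hardlink
find "$brick" -path "$brick/.glusterfs" -prune -o \
    -samefile "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}" -print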
Hi,
I have a question about heal info split-brain.
I know that a gfid mismatch is a kind of split-brain and that the parent directory
should be shown as split-brain.
In my case, "gluster volume heal info split-brain" shows that no file is in
split-brain, though the same filename has different gfids on the two b
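One way to confirm the mismatch directly is to compare the gfid xattr of the file on each brick; a minimal check (the brick path and filename are assumptions):

# run on each node against that node's brick copy of the file
getfattr -n trusted.gfid -e hex /data/brick/gv0/path/to/file
# differing hex values across the two bricks confirm a gfid mismatch, even
# when "heal info split-brain" reports no entries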
the brick forcefully if the brick is not present there on the peers?
3. "Forcefully" means it should not show any error whether or not the brick is
present on the peers.
Thanks,
Xin
At 2016-03-21 16:12:41, "Gaurav Garg" wrote:
>Hi songxin,
>
>>> 1.what is the difference between running
Hi,
I see. Thank you for your reply.
Thanks,
Xin
At 2016-03-21 16:34:26, "Atin Mukherjee" wrote:
>
>
>On 03/21/2016 01:30 PM, songxin wrote:
>> Hi,
>> Thank you for your reply.
>> Could you help me to answer my questions as below.
>>
>> Now
Hi Gaurav,
Thank you very much. It is very helpful for me.
Thanks,
Xin
At 2016-03-21 16:12:41, "Gaurav Garg" wrote:
>Hi songxin,
>
>>> 1.what is the difference between running "gluster volume remove-brick gv0
>>> replica 1 128.224.162.255:/data/brick
Hi,
When I run the command "gluster volume remove-brick gv0 replica 1
128.224.162.255:/data/brick/gv1 force", it returns a failure.
My questions:
1.what is the difference between running "gluster volume remove-brick gv0
replica 1 128.224.162.255:/data/brick/gv1" and "gluster volume remove-brick
gv
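For reference, the two invocations being compared would look like this (addresses and paths as in the mail; the behavioral note is my understanding, not from the thread):

# without force: glusterd runs its safety checks before removing the brick
gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv1
# with force: the same removal, but the safety checks are skipped
gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv1 force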
full work?
3."heal full" must be run on the node whose uuid is biggest in volume?
Thanks,
Xin
At 2016-03-21 14:04:25, "Atin Mukherjee" wrote:
>
>
>On 03/19/2016 06:50 AM, songxin wrote:
>> Hi Gaurav Garg,
>>
>> Thank you for your reply. It is v
er node.
>
> i.e:
> On the unaffected node the peers directory should have an entry for the
> failed node containing the uuid of the failed node. The glusterd.info file
> should enable you to recreate the peer file on the failed node.
>
>
> On 16 March 2016 at 09:25,
>>6) your volume will recover.
>>
>above steps are mandatory steps to recover failed node.
>
>Thanks,
>
>Regards,
>Gaurav
>
>- Original Message -
>From: "songxin"
>To: "Alastair Neil"
>Cc: gluster-users@gluster.org
>Sent: Thursday, M
the command `gluster system:: uuid get`
Put this uuid as well into the text file.
Now execute
# cat | sort
The last uuid printed in this list is the one that corresponds to the highest
uuid in the cluster.
HTH,
Krutika
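A minimal sketch of that procedure end to end (uuids.txt is a hypothetical collection file, not named in the thread):

# on every node, append that node's uuid to one shared text file
gluster system:: uuid get >> uuids.txt
# then sort; the last line corresponds to the highest uuid in the cluster
sort uuids.txt | tail -n 1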
On Mon, Mar 14, 2016 at 12:49 PM, songxin wrote:
Hi,
I have cre
Hi,
Now I face a problem.
The reproduce steps are as below.
1.I create a replicate volume using two bricks on two boards
2.start the volume
3.one board breaks down and all
files in the rootfs, including /var/lib/glusterd/*, are lost.
4.reboot the board; the ip does not change.
My question:
How can I recover the replicate volume?
Thanks,
Xin
At 2016-03-19 02:25:58, "Gaurav Garg" wrote:
>Hi songxin,
>
>both methods are almost the same for recovering the replicated volume. I forgot to
>mention one step:
>
> #gluster volume heal $vol full
>
>IMO this solution should also apply
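Pulling the steps from this thread together, a condensed recovery sketch for the failed node might look like this (the uuid placeholder, peer ip, and volume name are assumptions; glusterd.info normally also carries an operating-version line):

# 1. restore the failed node's own uuid, read from the healthy node's
#    /var/lib/glusterd/peers/ entry for it
echo "UUID=<uuid-from-peer-file>" > /var/lib/glusterd/glusterd.info
systemctl start glusterd
# 2. re-establish peering so the volume configuration syncs back
gluster peer probe 128.224.95.140
systemctl restart glusterd
# 3. once the volume is up again, trigger a full heal
gluster volume heal gv0 full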
Hi,
I have created a replicate volume and I want to run "gluster volume heal gv0
full".
I found that if I run "gluster volume heal gv0 full" on one board, it always
outputs an error like below.
Launching heal operation to perform full self heal on volume gv0 has
been unsuccessful
But if I
It is a regular file, correct? Could you confirm that?
Answer:
Yes, it is a regular file.
On Thu, Mar 10, 2016 at 1:03 PM, songxin wrote:
Hi all,
I have a file with a gfid-mismatch problem, as below.
stat: cannot stat '/mnt/c//public_html/cello/ior_files/nameroot.ior':
Input/output error
Remote:
Hi all,
I have a file with a gfid-mismatch problem, as below.
stat: cannot stat '/mnt/c//public_html/cello/ior_files/nameroot.ior':
Input/output error
Remote:
getfattr -d -m . -e hex
opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior
# file: opt/lvmdir/c2/brick/public_html/cello/
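For regular files, the classic manual fix in 3.7.x is to delete the bad copy and its .glusterfs hardlink on one brick and then heal; a hedged sketch using the path from this mail (the gfid placeholder would be read from the trusted.gfid output, and the volume name gv0 is an assumption):

gfid=<gfid-as-uuid-string>   # hypothetical placeholder
# on the brick holding the copy you decided to discard:
rm /opt/lvmdir/c2/brick/public_html/cello/ior_files/nameroot.ior
rm /opt/lvmdir/c2/brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}
# then let self-heal recreate the file from the good copy
gluster volume heal gv0 full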
Hi all,
I have a problem about how to recover a replicate volume.
precondition:
glusterfs version:3.7.6
brick of A board: 128.224.95.140:/data/brick/gv0
brick of B board: 128.224.162.255:/data/brick/gv0
reproduce:
1.gluster peer probe 128.224.162.255
Hi,
Precondition:
glusterfs version is 3.7.6
A node:128.224.95.140
B node:128.224.162.255
brick on A node:/data/brick/gv0
brick on B node:/data/brick/gv0
reproduce steps:
1.gluster peer probe 128.224.162.255
Thank you very much for your reply. It is very helpful for me.
And I have one more question about "heal full" in glusterfs 3.7.6.
The reproduce steps:
A board:128.224.95.140
B board:128.224.162.255
1.gluster peer probe 128.224.162.255
:35, "Anuradha Talur" wrote:
>
>
>- Original Message -
>> From: "songxin"
>> To: "gluster-user"
>> Sent: Tuesday, March 1, 2016 7:19:23 PM
>> Subject: [Gluster-users] about tail command
>>
>> Hi,
>>
>
Hi,
Precondition:
A node:128.224.95.140
B node:128.224.162.255
brick on A node:/data/brick/gv0
brick on B node:/data/brick/gv0
reproduce steps:
1.gluster peer probe 128.224.162.255
If it's a file, then it
>was not placed there as part of snapshotting any volume. If it's a directory,
>then did you try creating a snapshot with such a name.
>
>Regards,
>Avra
>
>On 02/25/2016 05:10 PM, songxin wrote:
>> If I run "reboot" on the A n
Hi,
I want to know whether the command "gluster volume heal gv full" is sync or
async.
Is the volume heal complete when the command quits?
If it is async, how can I know when the heal is complete?
Thanks,
Xin
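For what it's worth, "heal full" only launches the crawl and returns; the heal itself runs in the background. A simple way to wait for it to drain (a sketch; the volume name gv0 is assumed):

# poll until no brick reports pending heal entries
while gluster volume heal gv0 info | grep -q 'Number of entries: [1-9]'; do
    sleep 10
done
echo "no pending heal entries"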
On Feb 25, 2016, at 19:05, Atin Mukherjee wrote:
>
> + Rajesh , Avra
>
>> On 02/25/2016 04:12 PM, songxin wrote:
>> Thanks for your reply.
>>
>> Do I need check all files in /var/lib/glusterd/*?
>> Must all files be same in A node and B node?
> Yes, they should be identical
> /var/lib/glusterd/* from board A?
>
>~Atin
>
>On 02/25/2016 03:48 PM, songxin wrote:
>> Hi,
>> I have a problem as below when I start gluster after rebooting a board.
>>
>> precondition:
>> I use two boards to do this test.
>> The version of glusterfs
Hi,
I have a problem as below when I start gluster after rebooting a board.
precondition:
I use two boards to do this test.
The version of glusterfs is 3.7.6.
A board ip:128.224.162.255
B board ip:128.224.95.140
reproduce steps:
1.systemctl start glusterd (A board)
2.systemctl start glusterd
//run on A node
8.gluster volume heal gv0 info
At step 7, should some split-brain entries be presented?
At 2016-02-24 12:55:40, "Ravishankar N" wrote:
On 02/24/2016 10:21 AM, songxin wrote:
Before step 6, there are some files (a, b, c) that were created at step 5,
At 2016-02-24 12:42:39, "Ravishankar N" wrote:
Hello,
On 02/24/2016 10:03 AM, songxin wrote:
Hi,
Thank you for answering my question. And I have another question to ask.
If there are already some files (c, d, e) in the B node brick before step 6, as
below. And the file c is different from file
and B brick?
At 2016-02-24 12:11:09, "Ravishankar N" wrote:
On 02/24/2016 07:16 AM, songxin wrote:
Hi all,
I have a question about replicate volume as below.
precondition:
1.A node ip: 128.224.162.163
2.B node ip:128.224.162.255
3.A node brick:/data/brick/gv0
4.B node brick:/data
Hi all,
I have a question about replicate volume as below.
precondition:
1.A node ip: 128.224.162.163
2.B node ip:128.224.162.255
3.A node brick:/data/brick/gv0
4.B node brick:/data/brick/gv0
reproduce step:
1.gluster peer probe 128.224.162.255
nodes.
>
>
> The following things will be very useful for analysing this issue.
>
> You can restart your glusterd as of now as a workaround but we need to
> analyse this issue further.
>
>
> Thanks,
>
> ~Gaurav
>
> - Original Message -
> From:
Hi,
I created a replicate volume with 2 bricks. And I frequently reboot my two nodes
and frequently run "peer detach", "peer probe", "add-brick", and "remove-brick".
A board ip: 10.32.0.48
B board ip: 10.32.1.144
After that, I run "gluster peer status" on the A board and it shows as below.
Number of Peer
Do you mean that I should delete the info file on the B node and then start
glusterd? Or copy it from the A node to the B node?
Sent from my iPhone
> On Feb 17, 2016, at 14:59, Atin Mukherjee wrote:
>
>
>
>> On 02/17/2016 11:44 AM, songxin wrote:
>> Hi,
>> The version of glusterfs on A
>
>- Original Message -
>> From: "songxin"
>> To: "Atin Mukherjee"
>> Cc: "Anuradha Talur" , gluster-users@gluster.org
>> Sent: Wednesday, February 17, 2016 11:44:14 AM
>> Subject: Re:Re: [Gluster-users] question about sync replicate
performance.readdir-ahead=on
brick-0=128.224.162.255:-data-brick-gv0
brick-1=128.224.162.163:-home-wrsadmin-work-tmp-data-brick-gv0
Thanks,
Xin
At 2016-02-17 12:01:37, "Atin Mukherjee" wrote:
>
>
>On 02/17/2016 08:23 AM, songxin wrote:
>> Hi,
>> Thank you for your immedia
ke glusterd and glusterfs.
Thanks,
Xin
At 2016-02-16 18:53:03, "Anuradha Talur" wrote:
>
>
>- Original Message -
>> From: "songxin"
>> To: gluster-users@gluster.org
>> Sent: Tuesday, February 16, 2016 3:59:50 PM
>> Subject: [Gluster-
Hi,
I have a question about how to sync the volume between two bricks after one node
is rebooted.
There are two nodes, an A node and a B node. The A node ip is 128.124.10.1 and
the B node ip is 128.124.10.2.
operation steps on A node as below
1.gluster peer probe 128.124.10.2
2.mkdir -p /data/brick/gv0
3.gluster
Hi,
I have a question about creating a replicate volume with two bricks, as below.
There are two nodes, an A node and a B node. The A node ip is 128.124.10.1 and
the B node ip is 128.124.10.2.
operation steps on A node as below
1.gluster peer probe 128.124.10.2
2.mkdir -p /data/brick/gv0
3.create two files,
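For completeness, a minimal end-to-end sketch of bringing up such a two-brick replica (ips from the mail; the volume name and mount point are assumptions):

# from the A node, after /data/brick/gv0 exists on both nodes
gluster peer probe 128.124.10.2
gluster volume create gv0 replica 2 \
    128.124.10.1:/data/brick/gv0 128.124.10.2:/data/brick/gv0
gluster volume start gv0
mount -t glusterfs 128.124.10.1:/gv0 /mnt/gv0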
This is regarding a glusterfs (3.7.6) issue we are facing at our end.
We have a logging file which saves logs of the events for two nodes, and this
file is kept in sync using a replica volume. When we restart the nodes, we see
that the log file of one board is not in sync.
How to reproduce:
1.Cr
Hi,
I use glusterfs (version 3.7.6) in replicate mode for syncing between two boards
in a node.
When one of the boards is locked, replaced with a new board, and restarted, we
see that sync is lost between the two boards. The mounted glusterfs volume is
not present on the replaced board.
Output of