Xin,
There is a patch [1] that attempts to handle this case; it is under review.
[1] http://review.gluster.org/#/c/16279
On Tue, Jan 10, 2017 at 7:15 AM, songxin wrote:
> Hi Atin,
>
> Have you fixed this issue?
>
> Thanks,
> Xin
>
> On 2016-11-25 15:46:25, "Atin Mukherjee" wrote:
On Fri, Nov 25, 2016 at 1:14 PM, songxin wrote:
> Hi Atin,
> It seems that this workaround has to be done manually.
> Is that right?
> And even the files in bricks/* may be empty too.
>
Yes, that's right.
>
> Do you have a workaround implemented in the glusterfs code?
>
Workaround is by
Hi Atin,
It seems that this workaround has to be done manually.
Is that right?
And even the files in bricks/* may be empty too.
Do you have a workaround implemented in the glusterfs code?
Thanks,
Xin
On 2016-11-25 15:36:29, "Atin Mukherjee" wrote:
On Fri, Nov 25, 2016 at 12:06 PM, songxin wrote:
> Hi Atin,
> Do you mean that you have the workaround available now?
> Or will it take time to design the workaround?
>
> If you have a workaround now, could you share it with me?
>
If you end up having a 0-byte info file you'd need to copy the s
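Atin's hint above (restore the zero-byte info file from a node that still holds a good copy) could be sketched roughly as follows. This is a minimal sketch, not an official procedure; `GLUSTERD_DIR`, `find_empty_info`, and the peer hostname `peer1` are illustrative names, not anything from gluster itself.

```shell
#!/bin/sh
# Detect volumes whose "info" file was truncated to zero bytes, the
# symptom discussed in this thread. $1 is the glusterd working directory
# (normally /var/lib/glusterd).
find_empty_info() {
    find "$1/vols" -maxdepth 2 -name info -size 0
}

# For each file this reports, the manual recovery sketched in the thread
# would be (peer1 is a placeholder for a node with a healthy copy):
#   systemctl stop glusterd
#   scp root@peer1:/var/lib/glusterd/vols/<vol>/info /var/lib/glusterd/vols/<vol>/info
#   systemctl start glusterd
```

Stopping glusterd first matters so the daemon cannot rewrite the file mid-copy.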
Hi Atin,
Do you mean that you have the workaround available now?
Or will it take time to design the workaround?
If you have a workaround now, could you share it with me?
Thanks,
Xin
On 2016-11-24 19:12:07, "Atin Mukherjee" wrote:
Xin - I appreciate your patience. I'd need some more time to pick this item
up from my backlog. I believe we have a workaround applicable here too.
On Thu, 24 Nov 2016 at 14:24, songxin wrote:
> Hi Atin,
> Actually, glusterfs is used in my project,
> and our test team found this issue.
Hi Atin,
Actually, glusterfs is used in my project,
and our test team found this issue.
So I want to make sure whether you plan to fix it.
If you have a plan I will wait for you, because your method should be better than mine.
Thanks,
Xin
Hi Atin,
Ok. Thank you for your reply.
Thanks,
Xin
On 2016-11-21 10:00:36, "Atin Mukherjee" wrote:
Hi Xin,
I've not got a chance to look into it yet. The delete-stale-volume function is
in place to take care of wiping off volume configuration data which has
been deleted from the cluster. However we need to revisit this code to see
if this function is anymore needed given we recently added a validat
Hi Atin,
Thank you for your support.
Any conclusions about this issue?
Thanks,
Xin
On 2016-11-16 20:59:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 1:53 PM, songxin wrote:
ok, thank you.
Hi Atin,
Thank you for your support.
I have a question for you.
glusterd_store_volinfo() already replaces info and bricks/* implicitly via rename().
Why must glusterd remove info and bricks/* in the function
glusterd_delete_stale_volume() before calling glusterd_store_volinfo()?
Thanks,
Xin
On Tue, Nov 15, 2016 at 12:47 PM, songxin wrote:
Hi Atin,
I think the root cause is in the function glusterd_import_friend_volume, as
below.

int32_t
glusterd_import_friend_volume (dict_t *peer_data, size_t count)
{
        ...
        ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
        if (0 == ret) {
                (vo
Hi Atin,
Now I know that info and bricks/* are removed by the function
glusterd_delete_stale_volume().
But I don't yet know how to solve this issue.
Thanks,
Xin
On 2016-11-15 12:07:05, "Atin Mukherjee" wrote:
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
Hi Atin,
I have two nodes, node A and node B, on which I created a replicate volume and
then started the volume.
I run the script below on node B.

#!/bin/bash
i=1
while(($i<100))
do
gluster
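The script above is cut off after "gluster", so the exact commands it repeats are not preserved here. The shape of such a stress loop can still be sketched; the `repeat` helper below is mine, and the commented gluster invocation is only an assumption modeled on the bug 1308487 scenario (repeated volume operations racing with glusterd), not the poster's actual script.

```shell
#!/bin/bash
# Run a command repeatedly, mirroring the while(($i<100)) loop above.
repeat() {
    # $1: loop bound; remaining args: the command to run each iteration.
    local n=$1 i=1
    shift
    while ((i < n)); do
        "$@"
        i=$((i + 1))
    done
}

# In the reproduction, the repeated command would be a gluster operation
# that rewrites the volume's info file, e.g. (gv0 and the option are
# placeholders):
#   repeat 100 gluster volume set gv0 cluster.min-free-disk 11%
```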
On Tue, Nov 15, 2016 at 8:58 AM, songxin wrote:
> Hi Atin,
> I have some clues about this issue.
> I could reproduce this issue using the script mentioned in
> https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
>
I really appreciate your help in trying to nail down this issue. While I am
at
Hi Atin,
I have some clues about this issue.
I could reproduce this issue using the script mentioned in
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
After I added some debug prints, like the ones below, in glusterd-store.c, I
found that the /var/lib/glusterd/vols/xxx/info and
/var/lib
On Fri, Nov 11, 2016 at 4:00 PM, songxin wrote:
> Hi Atin,
>
> Thank you for your support.
> Sincerely waiting for your reply.
>
> By the way, could you confirm that the issue of the info file being empty is
> caused by rename being interrupted in the kernel?
>
As per my RCA on that bug, it looked to be.
Hi Atin,
Thank you for your support.
Sincerely waiting for your reply.
By the way, could you confirm that the issue of the info file being empty is
caused by rename being interrupted in the kernel?
Thanks,
Xin
On 2016-11-11 15:49:02, "Atin Mukherjee" wrote:
On Fri, Nov 11, 2016 at 1:15 PM, songxin wrote:
> Hi Atin,
> Thank you for your reply.
> Actually it is very difficult to reproduce because I don't know when there
> was an ongoing commit happening. It is just a coincidence.
> But I want to make sure of the root cause.
>
I'll give it another try a
Hi Atin,
Thank you for your reply.
Actually it is very difficult to reproduce because I don't know when there was
an ongoing commit happening. It is just a coincidence.
But I want to make sure of the root cause.
So I would be grateful if you could answer my questions below.
You said that "This iss
On Fri, Nov 11, 2016 at 12:38 PM, songxin wrote:
>
> Hi Atin,
> Thank you for your reply.
>
> As you said, the info file can only be changed in
> glusterd_store_volinfo(),
> sequentially, because of the big lock.
>
> I have found the similar issue you mentioned, below.
> https://bug
Hi Atin,
Thank you for your reply.
As you said, the info file can only be changed in
glusterd_store_volinfo(), sequentially, because of the big lock.
I have found the similar issue you mentioned, below.
https://bugzilla.redhat.com/show_bug.cgi?id=1308487
You said that "This i
On Fri, Nov 11, 2016 at 8:33 AM, songxin wrote:
> Hi Atin,
>
> Thank you for your reply.
> I have two questions for you.
>
> 1. Are the two files info and info.tmp only created or changed in the
> function glusterd_store_volinfo()? I did not find any other point at which the
> two files are chang
Hi Atin,
Thank you for your reply.
I have two questions for you.
1. Are the two files info and info.tmp only created or changed in the
function glusterd_store_volinfo()? I did not find any other point at which the
two files are changed.
2. I found that glusterd_store_volinfo() will be called in
Did you run out of disk space by any chance? AFAIK, the code writes the new
content to the .tmp file and renames it back to the original file. In case
of a disk-space issue I would expect both files to be of non-zero size.
But having said that, I vaguely remember a similar issue (in the form of a
bug
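The write-to-.tmp-then-rename pattern described above can be sketched in shell. This is an illustration of the general technique, not glusterd's actual C implementation; `store_info` is a made-up name, and the explicit `sync` is my addition to show why a crash before data reaches disk can leave the renamed file empty.

```shell
#!/bin/sh
# Atomically replace $1/info with new contents read from stdin.
# rename(2) (which mv uses for a same-filesystem move) guarantees readers
# see either the old file or the new one, never a partial write -- but if
# the .tmp file's data blocks were never flushed before a node reset, the
# renamed file can come back zero bytes, the symptom in this thread.
store_info() {
    cat > "$1/info.tmp" &&
    sync &&                         # flush data before relying on the rename
    mv -f "$1/info.tmp" "$1/info"   # atomic replace via rename(2)
}
```

Usage would be `echo "..." | store_info /var/lib/glusterd/vols/<vol>` against a copy of the directory, never the live one while glusterd is running.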
Hi,
When I start glusterd some errors happen.
The log follows.
[2016-11-08 07:58:34.989365] I [MSGID: 100030] [glusterfsd.c:2318:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6 (args:
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2016-