> "flushed any more. So ignore the pending "
> " transactions to avoid blocking ocfs2 unmount.\n");
> atomic_set(&journal->j_num_trans, 0);
> }
>
> Thanks,
> Joseph
>
> On 17/1/11 10:16, Gechangwei
Hi,
As the prior e-mail described, umount would hang after a journal flush failure.
When the journal flush in ocfs2_commit_cache() fails, the subsequent umount
procedure may block at the journal-shutdown stage because the transaction
count is still non-zero.
Once jbd2_journal_flush() fails, the journal will be marked as ABORT
er put the logic into ocfs2_commit_thread,
> which can be aligned with kthread_should_stop to identify the case.
>
> Thanks,
> Joseph
>
> On 17/1/7 20:01, Gechangwei wrote:
>>
>> Hi,
>>
>> When journal flushing in ocfs2_commit_cache() fails, following umoun
Since the journal has been marked as ABORT and the journal flush failure frees
all corresponding buffer heads, it is safe to directly set the transaction
count to zero.
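The effect of the proposed reset can be illustrated with a minimal userspace sketch (C11 atomics standing in for the kernel's atomic_t; `mini_journal`, `can_shutdown`, and `handle_flush_failure` are hypothetical names for illustration, not ocfs2 APIs):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical miniature of the state involved; the real struct
 * ocfs2_journal has many more fields. */
struct mini_journal {
	atomic_int j_num_trans;	/* pending transactions, cf. journal->j_num_trans */
	bool aborted;		/* set once jbd2_journal_flush() has failed */
};

/* Shutdown gate: umount may tear the journal down only when no
 * transactions remain pending. */
static bool can_shutdown(struct mini_journal *j)
{
	return atomic_load(&j->j_num_trans) == 0;
}

/* The fix under discussion: once the journal is aborted, its buffer heads
 * are already freed, so the pending transactions can never be flushed;
 * drop the count so shutdown is not blocked forever. */
static void handle_flush_failure(struct mini_journal *j)
{
	j->aborted = true;
	atomic_store(&j->j_num_trans, 0);	/* mirrors atomic_set(&journal->j_num_trans, 0) */
}
```

Without the reset, `can_shutdown()` stays false forever after an abort, which is exactly the umount hang described above.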
From 98f42f5f52851ed84eb372a3e09a413a30ea2664 Mon Sep 17 00:00:00 2001
From: gechangwei <ge.chang...@h3c.com>
Date: S
On 2017/1/6 10:08, piaojun wrote:
>
> On 2017/1/5 15:44, Gechangwei wrote:
>> On 2017/1/5 15:28, gechangwei 12382 (Cloud) wrote:
>>
>> Hi Jun,
>> I suppose a defect hides in your patch.
>>
>>
>>> We found a dlm-blocked situation caused by cont
On 2017/1/5 15:28, gechangwei 12382 (Cloud) wrote:
Hi Jun,
I suppose a defect hides in your patch.
> We found a dlm-blocked situation caused by continuous breakdown of recovery
> masters, as described below. To solve this problem, we should purge the
> recovery lock once detecting recove
On 2016/11/21 9:14, Joseph Qi wrote:
> Mounting a device multiple times is supported.
>
> Actually, this is a general feature of Linux filesystems.
>
> Thanks,
>
> Joseph
>
That solves my problem.
Many thanks!
>
> On 16/11/19 16:32, Gechangwei wrote:
On 2016-08-31 18:59, Gechangwei wrote:
Hi,
I am asking for your help on OCFS2 again.
I can’t figure out a segment of code.
In the function dlm_register_domain, which is called during the mount procedure,
the code below appears:
dlm = __dlm_lookup_domain(domain);
if (dlm
ase.
Your advice is very important to me.
Thanks.
Changwei.
-----Original Message-----
From: Joseph Qi [jiangqi...@gmail.com]
Sent: 2016-11-17 17:18
To: gechangwei 12382 (CCPL); a...@linux-foundation.org
Cc: mfas...@versity.com; ocfs2-devel@oss.oracle.com
Subject: Re: Reply: [Ocfs2-devel] [PATCH] ocfs
Sent: 2016-11-17 15:00
To: gechangwei 12382 (CCPL); a...@linux-foundation.org
Cc: mfas...@versity.com; ocfs2-devel@oss.oracle.com
Subject: Re: [Ocfs2-devel] [PATCH] ocfs2/dlm: fix umount hang
Hi Changwei,
Why are the dead nodes still in the live map, according to your dlm_state file?
Thanks,
Joseph
On
Mon Sep 17 00:00:00 2001
From: gechangwei
Date: Thu, 17 Nov 2016 14:00:45 +0800
Subject: [PATCH] fix umount hang
Signed-off-by: gechangwei
---
fs/ocfs2/dlm/dlmmaster.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
index 6ea06f8..3c46
r and dlm_clean_block_mle
will decrease the MLE
reference count; thus, in the following get_resource procedure, the reference
count is going to be negative.
I propose a patch to solve this; please review it if you have time.
Signed-off-by: gechangwei
---
dlm/dlmmaster.c | 8 +++-
1 file changed, 6 inser
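The underflow described above can be demonstrated with a toy refcount (a userspace sketch; `mini_mle`, `mle_get`, and `mle_put` are hypothetical names, and the real kernel code uses kref-style helpers rather than a bare int):

```c
#include <assert.h>

/* Toy stand-in for a master list entry's reference count. The bug above
 * is effectively a double drop: two cleanup paths both release the
 * reference during recovery, so a later get can observe a negative count.
 * A guard in the put path makes the underflow visible instead of silent. */
struct mini_mle {
	int refs;
};

static int mle_get(struct mini_mle *mle)
{
	return ++mle->refs;
}

/* Returns the new count, or -1 if the drop would underflow. */
static int mle_put(struct mini_mle *mle)
{
	if (mle->refs <= 0)
		return -1;	/* extra drop detected: refuse it */
	return --mle->refs;
}
```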
Hi,
I am asking for your help on OCFS2 again.
I can’t figure out a segment of code.
In the function dlm_register_domain, which is called during the mount procedure,
the code below appears:
dlm = __dlm_lookup_domain(domain);
if (dlm) {
if (dlm->dlm_state != DLM_CTXT_JO
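The lookup-then-check pattern in the snippet can be sketched in userspace as follows (`lookup_domain` and `register_domain` are hypothetical names; the real dlm_register_domain also waits and retries while a context is still joining, which is omitted here):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Reduced stand-ins; the real enum and struct in fs/ocfs2/dlm/dlmcommon.h
 * carry many more states and fields. */
enum dlm_ctxt_state { DLM_CTXT_NEW, DLM_CTXT_JOINED, DLM_CTXT_LEAVING };

struct dlm_ctxt {
	const char *name;
	enum dlm_ctxt_state dlm_state;
	int refcount;
	struct dlm_ctxt *next;
};

/* Simplified __dlm_lookup_domain(): scan the list of live domains by name. */
static struct dlm_ctxt *lookup_domain(struct dlm_ctxt *head, const char *domain)
{
	for (struct dlm_ctxt *d = head; d; d = d->next)
		if (strcmp(d->name, domain) == 0)
			return d;
	return NULL;
}

/* Registration pattern: a repeated mount of the same device finds the
 * existing context and just takes a reference, but only once that context
 * has fully JOINED; otherwise the caller must wait or retry (here: NULL). */
static struct dlm_ctxt *register_domain(struct dlm_ctxt *head, const char *domain)
{
	struct dlm_ctxt *dlm = lookup_domain(head, domain);

	if (dlm && dlm->dlm_state == DLM_CTXT_JOINED) {
		dlm->refcount++;
		return dlm;
	}
	return NULL;	/* not found (allocate new) or not yet usable */
}
```

This is also consistent with the reply quoted earlier: a device can be mounted multiple times, and a repeated mount simply reuses the already-joined domain context.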
after receiving the remaster request, but has not yet responded to the
new master node.
That causes the new master node to wait forever.
I think the patch below can solve this problem. Please review it!
Subject: [PATCH] interrupt waiting for node's response if node dies
Signed-off-by: gechangwei
--
Hi,
I have a question on AST related procedure.
If a lock request was sent to the lock resource's owner node right before
that owner node crashed,
then no one will send an AST back to the requesting node, which causes the
requesting node to wait on a completion forever.
Is this an issue that
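The fix implied by the question is to make the wait predicate also observe node death, so the waiter is woken either by the AST or by recovery. A minimal sketch of that predicate (hypothetical names; the kernel would use a completion or wait_event() plus the node-down callback):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-request wait state. */
struct lock_wait {
	bool ast_received;	/* set by the AST handler */
	bool owner_dead;	/* set by the node-down / recovery path */
};

/* Wake condition: either event ends the wait, so a crashed owner no
 * longer leaves the requesting node blocked on the completion forever. */
static bool wait_done(const struct lock_wait *w)
{
	return w->ast_received || w->owner_dead;
}

/* Outcome once the wait ends: 0 for a granted lock, -1 when the owner
 * died and the request must be retried after recovery elects a new
 * master, 1 while still waiting. */
static int wait_result(const struct lock_wait *w)
{
	if (w->ast_received)
		return 0;
	if (w->owner_dead)
		return -1;
	return 1;
}
```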
Hi,
According to the current DLM recovery implementation, after another node's death
in the cluster, all resources owned by that dead node will be recovered by the
recovery master.
The MLE-related mastery procedure is not exclusive against DLM recovery. That
means when allocating a new lock resourc
Hi OCFS2 experts,
I found a strange segment of code in the FENCE procedure, in function
o2hb_stop_all_regions, cited below:
void o2hb_stop_all_regions(void)
{
	struct o2hb_region *reg;

	mlog(ML_ERROR, "stopping heartbeat on all active regions.\n");

	spin_lock(&o2hb_live_
Thanks. Your reply is a great help to me.
From: Srinivas Eeda [srinivas.e...@oracle.com]
Sent: 2016-05-04 0:36
To: gechangwei 12382 (CCPL)
Cc: ocfs2-devel@oss.oracle.com
Subject: Re: [Ocfs2-devel] what MLE wants to do?
In simple terms, MLEs come into life at the beginning of the lock ma
directly
give me a brief clue via email on what an MLE is meant to do.
Thanks a lot.
Br.
Gechangwei
H3C Technologies Co., Limited
consistency?
Thanks,
Best regards.
Gechangwei
H3C Technologies Co., Limited
-
This e-mail and its attachments contain confidential information from Hangzhou
H3C Technologies Co., Ltd., intended only for the individuals or groups listed
in the addresses above. It is forbidden for anyone else to use the information
in this e-mail in any form (including but not limited to disclosing or copying
it, in whole or in part
.
Many thanks.
Best regards.
Gechangwei
H3C Technologies Co., Limited
-
21 matches