On Mon, 19 Dec 2016 17:55:06 +0800, Joseph Qi wrote:
> > I'd like to see this quick and small fix merged at this moment,
> > because this issue is somewhat urgent for us.
> > Anyway, we can easily supersede this one if someone familiar with o2cb
> > works out a patch for o2cb in the future.
On 16/12/15 10:27, Eric Ren wrote:
Hi,

On 12/15/2016 09:46 AM, Joseph Qi wrote:
> In your description, this issue can only happen in the case of stack
> user + fsdlm.
Yes.

> So I feel we'd better make stack user and o2cb behave the same,
> rather than treating it as a special case.
Yes, I agree. But, actually, there is nothing wrong with ...
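For context, a minimal sketch of the fsdlm behavior under discussion.
This is illustrative only: struct dlm_lksb and DLM_SBF_VALNOTVALID are
real fsdlm interfaces, but the helper name is an assumption, not the
actual patch.

#include <linux/dlm.h>    /* struct dlm_lksb, DLM_SBF_VALNOTVALID */
#include <linux/types.h>

/*
 * With the "user" cluster stack, fsdlm reports a possibly stale lock
 * value block (LVB) by setting DLM_SBF_VALNOTVALID in the lksb after
 * recovery; the caller must not trust the LVB contents in that case.
 */
static bool lvb_is_trustable(struct dlm_lksb *lksb)
{
	/* VALNOTVALID means the LVB may not hold the latest value,
	 * e.g. because the nodes holding it were reset; callers should
	 * fall back to reading metadata from disk instead of the LVB. */
	return !(lksb->sb_flags & DLM_SBF_VALNOTVALID);
}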
In your description, this issue can only happen in the case of stack
user + fsdlm.
So I feel we'd better make stack user and o2cb behave the same,
rather than treating it as a special case.

Thanks,
Joseph

On 16/12/9 17:30, Eric Ren wrote:
> The crash happens rather often when we reset some cluster ...
Hi Gang,

On 12/12/2016 10:56 AM, Gang He wrote:
> Hi Eric,
> Looks good to me.
> Just one suggestion: please monitor whether the LVB sharing mechanism
> in the cluster still works well in the normal scenario, to avoid any
> performance regression.
Thanks for your review. I have done the test ...
Hi Eric,

Looks good to me.
Just one suggestion: please monitor whether the LVB sharing mechanism
in the cluster still works well in the normal scenario, to avoid any
performance regression.

Reviewed-by: Gang He

Thanks,
Gang

>>> The crash happens rather often when we reset some cluster ...
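A rough sketch of the "LVB sharing mechanism" Gang refers to. The
helper names refresh_from_lvb() and refresh_from_disk() are assumptions
for illustration, not verbatim kernel code; ocfs2_dlm_lvb_valid() is
the real stackglue validity check.

/*
 * ocfs2 caches inode metadata in the lock value block so that other
 * nodes can refresh it without hitting shared storage; that fast path
 * is what a fix in this area must be careful not to regress.
 */
static int refresh_inode_meta(struct inode *inode,
			      struct ocfs2_lock_res *lockres)
{
	if (ocfs2_dlm_lvb_valid(&lockres->l_lksb))
		/* Fast path: take i_size, timestamps, etc. from the LVB. */
		return refresh_from_lvb(inode, lockres);   /* assumed helper */

	/* Slow path: re-read the inode block from shared storage. */
	return refresh_from_disk(inode, lockres);          /* assumed helper */
}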
Sorry, this email was not delivered to Mark successfully because of a
weird character trailing his email address somehow.
So, resending...

Thanks,
Eric
On 12/09/2016 05:24 PM, Eric Ren wrote:
The crash happens rather often when we reset some cluster
nodes while nodes contend fiercely to do truncate and append.
The crash backtrace is below:
"
[ 245.197849] dlm: C21CBDA5E0774F4BA5A9D4F317717495: dlm_recover_grant 1 locks on 971 resources
[ 245.197859] dlm: C21CBDA5E0774F4BA5A9D4F31771...
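For readers following the backtrace: the dlm_recover_grant messages are
fsdlm regranting locks during recovery after the reset nodes leave the
lockspace. Below is an illustrative sketch of the failure mode being
described; the names and the exact assertion site are assumptions, not
the actual oops location.

#include <linux/bug.h>
#include <linux/types.h>

/*
 * Once recovery regrants the lock, a node that trusts the stale LVB
 * carries a wrong cached i_size; a later sanity check in the truncate
 * path then disagrees with the on-disk size and brings the node down.
 */
static void truncate_with_cached_size(u64 lvb_i_size, u64 disk_i_size)
{
	/* With a stale LVB the two disagree and the node crashes here. */
	BUG_ON(lvb_i_size != disk_i_size);
	/* ... proceed with truncate ... */
}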