Looks like I have a similar issue to the one described in this bug: 
http://tracker.ceph.com/issues/15255
The writer (dd in my case) can be restarted, after which writing continues, but 
until the restart dd appears to hang on the write.
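For what it's worth, this kind of hang is easy to detect programmatically by wrapping the write in a timeout and treating an expired timer as a stuck mount. A minimal sketch in Python (the probe path, size, and timeout are made-up illustration values, not from this thread; SIGALRM is Unix-only):

```python
import os
import signal

class WriteTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise WriteTimeout

def timed_write(path, data, timeout=30):
    """Write `data` to `path`, returning False if the write (including
    fsync) blocks longer than `timeout` seconds -- as a hung CephFS
    mount would -- and True on success."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout)  # deliver SIGALRM after `timeout` seconds
    try:
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data out, like dd conv=fsync
        return True
    except WriteTimeout:
        return False
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Running this periodically against a file on the CephFS mount gives a simple health probe; a False return corresponds to the hung-dd symptom above.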

> On 20 July 2017, at 16:12, Dmitry Glushenok <gl...@jet.msk.su> wrote:
> 
> Hi,
> 
> Repeated the test using kernel 4.12.0. An OSD node crash seems to be handled 
> fine now, but an MDS crash still leads to hung writes to CephFS. This time it 
> was enough just to crash the first MDS - failover didn't happen. At the same 
> time a FUSE client was running on another host - no problems with it.
> 
>> On 19 July 2017, at 13:20, Dmitry Glushenok <gl...@jet.msk.su> wrote:
>> 
>> You're right. I forgot to mention that the client was using kernel 4.9.9.
>> 
>>> On 19 July 2017, at 12:36, 许雪寒 <xuxue...@360.cn> wrote:
>>> 
>>> Hi, thanks for your sharing:-)
>>> 
>>> So I guess you have not put CephFS into a real production environment yet, 
>>> and it's still in the test phase, right?
>>> 
>>> Thanks again:-)
>>> 
>>> From: Dmitry Glushenok [mailto:gl...@jet.msk.su] 
>>> Sent: 19 July 2017 17:33
>>> To: 许雪寒
>>> Cc: ceph-users@lists.ceph.com
>>> Subject: Re: [ceph-users] How's cephfs going?
>>> 
>>> Hi,
>>> 
>>> I can share negative test results (on Jewel 10.2.6). All tests were 
>>> performed while actively writing to CephFS from a single client (about 1300 
>>> MB/s). The cluster consists of 8 nodes with 8 OSDs each (2 SSDs for journals 
>>> and metadata, 6 HDDs in RAID6 for data); MON/MDS are on dedicated nodes. 
>>> There are 2 MDS in total, active/standby.
>>> - Crashing one node resulted in writes hanging for 17 minutes. Repeating the 
>>> test resulted in CephFS hanging forever.
>>> - Restarting the active MDS resulted in a successful failover to the standby. 
>>> Then, after the standby became active and the restarted MDS became the 
>>> standby, the new active was restarted. CephFS hung for 12 minutes.
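A failover test like the one above can be bounded in a harness by polling the MDS state with a deadline instead of waiting open-endedly. A hedged sketch (the `ceph fs status --format=json` invocation and its JSON shape are from memory and vary by release; `fs_name` and the helper names are made up):

```python
import json
import subprocess
import time

def mds_is_active(fs_name="cephfs"):
    """Return True if the filesystem reports at least one active MDS.
    Shells out to the ceph CLI; assumes the JSON output carries an
    'mdsmap' list whose entries have a 'state' field."""
    out = subprocess.check_output(
        ["ceph", "fs", "status", fs_name, "--format=json"])
    status = json.loads(out)
    return any(m.get("state") == "active" for m in status.get("mdsmap", []))

def wait_for(predicate, timeout=300, interval=5):
    """Poll `predicate` until it returns True or `timeout` seconds
    elapse; returns False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

In a test run, `wait_for(mds_is_active, timeout=720)` would turn the observed 12-minute hang into a measurable pass/fail result rather than an indefinite wait.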
>>> 
>>> P.S. I plan to repeat the tests on 10.2.7 or higher.
>>> 
>>> On 19 July 2017, at 6:47, 许雪寒 <xuxue...@360.cn> wrote:
>>> 
>>> Is there anyone else willing to share some usage information about CephFS?
>>> Could the developers tell us whether CephFS is a major effort in overall 
>>> Ceph development?
>>> 
>>> From: 许雪寒 
>>> Sent: 17 July 2017 11:00
>>> To: ceph-users@lists.ceph.com
>>> Subject: How's cephfs going?
>>> 
>>> Hi, everyone.
>>> 
>>> We intend to use the Jewel version of CephFS; however, we don’t know its 
>>> status. Is it production ready in Jewel? Does it still have lots of bugs? 
>>> Is it a major focus of current Ceph development? And who is using CephFS 
>>> now?
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 
>>> --
>>> Dmitry Glushenok
>>> Jet Infosystems
>>> 
>> 
>> --
>> Dmitry Glushenok
>> Jet Infosystems
>> +7-910-453-2568
>> 
> 
> --
> Dmitry Glushenok
> Jet Infosystems
> +7-910-453-2568
> 

--
Dmitry Glushenok
Jet Infosystems

