Re: [ceph-users] HA and data recovery of CEPH

2019-12-03 Thread Aleksey Gutikov
That is true. When an OSD goes down, it will take a few seconds for its Placement Groups to re-peer with the other OSDs. During that period, writes to those PGs will stall for a couple of seconds. I wouldn't say it's 40s, but it can take ~10s. Hello, According to my experience, in case of
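For context, the timings involved here are bounded by the standard OSD failure-detection settings (osd_heartbeat_interval, osd_heartbeat_grace) and by how long the cluster waits before marking a down OSD out (mon_osd_down_out_interval). A minimal sketch for inspecting them is below, assuming a release that has the "ceph config get" command (Mimic or later); the values printed will of course depend on your cluster:

    #!/usr/bin/env python3
    # Inspect the settings that bound how quickly a dead OSD is detected and
    # how long writes can stall while its PGs re-peer. The option names are
    # standard Ceph settings; 'ceph config get' needs Mimic or later.
    import subprocess

    for who, opt in (("osd", "osd_heartbeat_interval"),     # how often OSDs ping their peers
                     ("osd", "osd_heartbeat_grace"),         # silence allowed before an OSD is reported down
                     ("mon", "mon_osd_down_out_interval")):  # delay before a down OSD is marked out
        value = subprocess.check_output(["ceph", "config", "get", who, opt])
        print(opt, "=", value.decode().strip())

Lowering osd_heartbeat_grace shortens the detection window, but it also risks marking busy OSDs down spuriously, so treat any change as something to test first.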

[ceph-users] Is it possible not to list rgw names in ceph status output?

2019-09-30 Thread Aleksey Gutikov
In Nautilus, ceph status writes "rgw: 50 daemons active" and then lists all 50 rgw daemon names. This takes up significant space in the terminal. Is it possible to disable the list of names and make the output like in Luminous: only the number of active daemons? Thanks, Aleksei
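I am not aware of a Nautilus option that suppresses the per-daemon rgw names, but as a workaround one can pull the status as JSON and print only the count. A minimal Python sketch, assuming the servicemap layout Nautilus uses (servicemap -> services -> rgw -> daemons); adjust the path if your release differs:

    #!/usr/bin/env python3
    # Print only the number of active rgw daemons instead of the full name list.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
    rgw = status.get("servicemap", {}).get("services", {}).get("rgw", {})
    # the daemons dict may carry a non-daemon "summary" entry, so filter it out
    names = [n for n in rgw.get("daemons", {}) if n != "summary"]
    print("rgw: %d daemons active" % len(names))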

Re: [ceph-users] RGW 4 MiB objects

2019-07-31 Thread Aleksey Gutikov
Hi Thomas, We did some investigation a while ago and came up with several rules for how to configure rgw and osd for big files stored on an erasure-coded pool. Hope it will be useful, and if I have made any mistakes, please let me know. S3 object saving pipeline: - The S3 object is divided into multipart
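To make the sizing concrete, here is a small sketch of the arithmetic. The 4 MiB figure matches the default rgw_obj_stripe_size / rgw_max_chunk_size, i.e. the size of the rados objects RGW writes; the k=4, m=2 erasure-code profile is only an example, not a recommendation:

    #!/usr/bin/env python3
    # Rough sketch: how one 4 MiB RGW rados object maps onto EC chunks.
    RGW_STRIPE = 4 * 1024 * 1024   # bytes per rados object written by RGW (default)
    K, M = 4, 2                    # erasure-code data / coding chunks (example profile)

    data_chunk = RGW_STRIPE // K          # payload stored on each data OSD
    raw_total = data_chunk * (K + M)      # raw space including coding chunks

    print("rados object size : %d MiB" % (RGW_STRIPE // 2**20))
    print("per-OSD data chunk: %d KiB" % (data_chunk // 1024))
    print("raw space used    : %d MiB" % (raw_total // 2**20))

The relationship between the RGW stripe size and the EC profile's chunk layout is the kind of thing these configuration rules are about.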