That is true. When an OSD goes down, it takes a few seconds for its
Placement Groups to re-peer with the other OSDs. During that period,
writes to those PGs will stall.
I wouldn't say it's 40s, but it can take ~10s.
Hello,
In my experience, in case of
In Nautilus, "ceph status" prints "rgw: 50 daemons active" and then lists
all 50 rgw daemon names.
This takes up significant space in the terminal.
Is it possible to disable the list of names and get output like
Luminous's: only the number of active daemons?
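As a possible workaround in the meantime, the daemon count can be derived from the machine-readable status ("ceph status -f json") instead of the human-readable one. The sketch below is illustrative only: the embedded JSON fragment mimics the Nautilus service-map layout as I understand it (a "daemons" map with a "summary" pseudo-entry alongside the real daemon names), and field names may differ on a real cluster.

```python
import json

# Illustrative fragment of `ceph status -f json` output. The structure
# (servicemap -> services -> rgw -> daemons) is an assumption based on
# the Nautilus service map; verify against your own cluster's output.
sample_status = json.dumps({
    "servicemap": {
        "services": {
            "rgw": {
                "daemons": {
                    "summary": "",
                    "rgw.gateway-a": {"start_epoch": 10},
                    "rgw.gateway-b": {"start_epoch": 12},
                }
            }
        }
    }
})

def count_rgw_daemons(status_json: str) -> int:
    """Count rgw daemons, skipping the 'summary' pseudo-entry."""
    status = json.loads(status_json)
    daemons = status["servicemap"]["services"]["rgw"]["daemons"]
    return sum(1 for name in daemons if name != "summary")

print(f"rgw: {count_rgw_daemons(sample_status)} daemons active")
```

In practice you would feed it the real output, e.g. `ceph status -f json | python3 count_rgw.py`, rather than the embedded sample.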
Thanks
Aleksei
Hi Thomas,
We did some investigation a while ago and came up with several rules for
configuring rgw and osd for big files stored on an erasure-coded pool.
I hope they are useful.
If I have made any mistakes, please let me know.
S3 object saving pipeline:
- S3 object is divided into multipart