Hi, dear ceph-devel,
I hit an issue: removing one OSD with ceph osd rm caused 30% of objects to become degraded.
Step 1:
# created an ssd root
ceph osd crush add-bucket ssd root
Step 2: installing osd.100 failed:
94 1 osd.94 up 1
95 1 osd.95 up 1
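
For context, this is the sequence I would normally use to drop a failed OSD; which commands were actually run in this case is an assumption on my part, and osd.100 below is just the OSD from step 2:

# see where the failed osd landed in the crush map
ceph osd tree
# mark it out so its data starts draining
ceph osd out 100
# remove it from the crush map; this changes the crush weights of its
# ancestors, which is what triggers remapping/backfill on the other osds
ceph osd crush remove osd.100
# finally drop its key and the osd id itself
ceph auth del osd.100
ceph osd rm 100

If the degraded count jumps right at the crush remove step, it may just be data being remapped to new locations while backfill runs, rather than anything actually lost.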
Hi:
I'm new to Ceph, and while reading the Ceph code I find it
hard to understand the semantics of PG::info::last_update and
PG::info::last_complete (even after reading the comments in the code),
so could you please explain the two fields, their semantics,
and their usage?
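
Not an authoritative answer, but if it helps to see the fields on a live cluster, ceph pg <pgid> query dumps the pg info that contains both; a minimal sketch, where 0.1 is just a placeholder pgid:

# dump the full info/peering state for one pg (0.1 is a placeholder)
ceph pg 0.1 query
# in the "info" section of the output, my (possibly wrong) reading is:
#   last_update   - newest version recorded in this pg's log
#   last_complete - version up to which this replica has all objects locally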
Please do!
Mark
On 11/19/2014 01:29 AM, Alexandre DERUMIER wrote:
Hi,
Can I open a tracker issue for this?
- Original Message -
From: Haomai Wang haomaiw...@gmail.com
To: Mark Nelson mark.nel...@inktank.com
Cc: Sage Weil s...@newdream.net, Alexandre DERUMIER aderum...@odiso.com, Somnath Roy
Sage's paper will give you what you want; you can find it on ceph.com:
http://ceph.com/papers/weil-thesis.pdf
On 19 November 2014 20:33, Ding Dinghua dingdinghu...@gmail.com wrote:
Hi:
I'm new to Ceph, and while reading the Ceph code I find it
hard to understand the semantics of
Please do!
http://tracker.ceph.com/issues/10139
I have put the perf report in there, along with the latest discussion from this mailing-list thread.
- Original Message -
From: Mark Nelson mark.nel...@inktank.com
To: Alexandre DERUMIER aderum...@odiso.com, Haomai Wang
haomaiw...@gmail.com
Cc: Sage Weil
Some more information:
After step 4, there are many "restarting backfill on osd.x" messages in ceph.log:
2014-11-19 16:03:37.766787 mon.0 10.16.40.40:6789/0 2460367 : [INF]
pgmap v9995708: 8192 pgs: 10 inactive, 15 peering, 8167 active+clean;
21280 GB data, 63334 GB used, 209 TB / 270 TB avail; 174 kB/s
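
If it helps to narrow down where the degraded objects come from, this is a hedged sketch of what I would watch while the backfill runs (nothing here is specific to this cluster):

# follow cluster/pg state changes live
ceph -w
# one-shot summary with the degraded object count
ceph -s
ceph health detail
# list pgs stuck in unclean states
ceph pg dump_stuck unclean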