Irek, 

have you changed the ceph.conf file to adjust the recovery priority? 

Options like these might help to deprioritise repair/rebuild I/O relative to 
client I/O: 

osd_recovery_max_chunk = 8388608 
osd_recovery_op_priority = 2 
osd_max_backfills = 1 
osd_recovery_max_active = 1 
osd_recovery_threads = 1 
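
These can also be injected into the running OSDs without a restart; a minimal 
sketch, assuming the same values should apply to all OSDs (exact injectargs 
behaviour varies slightly between Ceph releases): 

    # lower recovery/backfill impact on all OSDs at runtime 
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1' 
    ceph tell osd.* injectargs '--osd-recovery-op-priority 2 --osd-recovery-max-chunk 8388608' 

Putting the same values in the [osd] section of ceph.conf makes them persist 
across restarts. 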


Andrei 
----- Original Message -----

From: "Irek Fasikhov" <malm...@gmail.com> 
To: ceph-users@lists.ceph.com 
Sent: Thursday, 11 September, 2014 1:07:06 PM 
Subject: [ceph-users] Rebalancing slow I/O. 

Hi, all. 


DELL R720 x8, 96 OSDs, network 2x10Gbit LACP. 


When one of the nodes crashes, I get very slow I/O operations on the virtual 
machines. 
The cluster map is the default: 
[ceph@ceph08 ~]$ ceph osd tree 


# id weight type name up/down reweight 
-1 262.1 root defaults 
-2 32.76 host ceph01 
0 2.73 osd.0 up 1 
........................... 
11 2.73 osd.11 up 1 
-3 32.76 host ceph02 
13 2.73 osd.13 up 1 
.............................. 
12 2.73 osd.12 up 1 
-4 32.76 host ceph03 
24 2.73 osd.24 up 1 
............................ 
35 2.73 osd.35 up 1 
-5 32.76 host ceph04 
37 2.73 osd.37 up 1 
............................. 
47 2.73 osd.47 up 1 
-6 32.76 host ceph05 
48 2.73 osd.48 up 1 
............................... 
59 2.73 osd.59 up 1 
-7 32.76 host ceph06 
60 2.73 osd.60 down 0 
............................... 
71 2.73 osd.71 down 0 
-8 32.76 host ceph07 
72 2.73 osd.72 up 1 
................................ 
83 2.73 osd.83 up 1 
-9 32.76 host ceph08 
84 2.73 osd.84 up 1 
................................ 
95 2.73 osd.95 up 1 

If I change the cluster map to the following: 

root 
| 
|-- rack1 
|   |-- host ceph01 
|   |-- host ceph02 
|   |-- host ceph03 
|   `-- host ceph04 
| 
`-- rack2 
    |-- host ceph05 
    |-- host ceph06 
    |-- host ceph07 
    `-- host ceph08 
What will the cluster's behaviour be when one node fails over? And how much will 
it affect performance? 
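
For reference, a hierarchy like the above can be built with the standard CRUSH 
CLI. A minimal sketch, assuming rack bucket names rack1/rack2 and the existing 
root "defaults"; note the CRUSH rule must also be changed (e.g. chooseleaf 
across type rack) before replicas actually spread across racks: 

    # create the rack buckets under the root, then move the hosts into them 
    ceph osd crush add-bucket rack1 rack 
    ceph osd crush add-bucket rack2 rack 
    ceph osd crush move rack1 root=defaults 
    ceph osd crush move rack2 root=defaults 
    for h in ceph01 ceph02 ceph03 ceph04; do ceph osd crush move $h rack=rack1; done 
    for h in ceph05 ceph06 ceph07 ceph08; do ceph osd crush move $h rack=rack2; done 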
Thank you 


-- 

Best regards, Фасихов Ирек Нургаязович (Irek Fasikhov) 
Mob.: +79229045757 
_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
