Hi,

Some of the OSDs in my environment keep trying to connect to the monitors/Ceph nodes, but they get "connection refused" and are marked down/out. It gets even worse when I try to initialize 100+ OSDs (one 800 GB HDD per OSD): most of the OSDs run into the same problem connecting to the monitors. I checked the monitor status and it looks good; no monitors are down. I also disabled iptables and SELinux, and set "max open files = 131072" in ceph.conf. Could you let me know what else I should do to fix the problem?
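For reference, the ceph.conf fragment I added looks roughly like this (the section placement is my assumption; I put it under [global]):

```ini
[global]
; raise the per-daemon open file limit; each OSD daemon opens many
; sockets and file handles, especially on a node hosting 100+ OSDs
max open files = 131072
```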

BTW, for now I have 3 monitors in the Ceph cluster, and all of them are in good status.

OSD log:
-4633> 2014-06-03 10:37:55.359873 7fa894c2c7a0 10 monclient(hunting): auth_supported 2 method cephx
 -4632> 2014-06-03 10:37:55.360055 7fa894c2c7a0  2 auth: KeyRing::load: loaded key file /etc/ceph/keyring.osd.0
 -4631> 2014-06-03 10:37:55.360607 7fa894c2c7a0  5 asok(0x2660230) register_command objecter_requests hook 0x2610190
 -4630> 2014-06-03 10:37:55.360620 7fa87f4fa700  5 osd.0 0 heartbeat: osd_stat(33016 kB used, 837 GB avail, 837 GB total, peers []/[] op hist [])
 -4629> 2014-06-03 10:37:55.360679 7fa894c2c7a0 10 monclient(hunting): renew_subs
 -4628> 2014-06-03 10:37:55.360694 7fa894c2c7a0 10 monclient(hunting): _reopen_session rank -1 name
 -4627> 2014-06-03 10:37:55.360779 7fa894c2c7a0 10 monclient(hunting): picked mon.0 con 0x269dc20 addr 192.168.50.11:6789/0
 -4626> 2014-06-03 10:37:55.360804 7fa894c2c7a0 10 monclient(hunting): _send_mon_message to mon.0 at 192.168.50.11:6789/0
 -4625> 2014-06-03 10:37:55.360814 7fa894c2c7a0  1 -- 192.168.50.11:6800/7283 --> 192.168.50.11:6789/0 -- auth(proto 0 26 bytes epoch 0) v1 -- ?+0 0x2668900 con 0x269dc20
 -4624> 2014-06-03 10:37:55.360835 7fa894c2c7a0 10 monclient(hunting): renew_subs
 -4623> 2014-06-03 10:37:55.360904 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).connect error 192.168.50.11:6789/0, (111) Connection refused
 -4622> 2014-06-03 10:37:55.360980 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault (111) Connection refused
 -4621> 2014-06-03 10:37:55.361007 7fa87d4f6700  0 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault
 -4620> 2014-06-03 10:37:55.361072 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).connect error 192.168.50.11:6789/0, (111) Connection refused
 -4619> 2014-06-03 10:37:55.361101 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault (111) Connection refused
 -4618> 2014-06-03 10:37:55.561290 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).connect error 192.168.50.11:6789/0, (111) Connection refused
 -4617> 2014-06-03 10:37:55.561384 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault (111) Connection refused
 -4616> 2014-06-03 10:37:55.961583 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).connect error 192.168.50.11:6789/0, (111) Connection refused
 -4615> 2014-06-03 10:37:55.961641 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault (111) Connection refused
 -4614> 2014-06-03 10:37:56.761838 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).connect error 192.168.50.11:6789/0, (111) Connection refused
 -4613> 2014-06-03 10:37:56.761904 7fa87d4f6700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.11:6789/0 pipe(0x27b8000 sd=25 :0 s=1 pgs=0 cs=0 l=1 c=0x269dc20).fault (111) Connection refused

..................

-3482> 2014-06-03 10:40:37.377272 7fa882d01700 10 monclient(hunting): tick
 -3481> 2014-06-03 10:40:37.377286 7fa882d01700  1 monclient(hunting): continuing hunt
 -3480> 2014-06-03 10:40:37.377288 7fa882d01700 10 monclient(hunting): _reopen_session rank -1 name
 -3479> 2014-06-03 10:40:37.377294 7fa882d01700  1 -- 192.168.50.11:6800/7283 mark_down 0x269dc20 -- 0x27b8780
 -3478> 2014-06-03 10:40:37.377376 7fa882d01700 10 monclient(hunting): picked mon.2 con 0x269f380 addr 192.168.50.13:6789/0
 -3477> 2014-06-03 10:40:37.377401 7fa882d01700 10 monclient(hunting): _send_mon_message to mon.2 at 192.168.50.13:6789/0
 -3476> 2014-06-03 10:40:37.377405 7fa882d01700  1 -- 192.168.50.11:6800/7283 --> 192.168.50.13:6789/0 -- auth(proto 0 26 bytes epoch 0) v1 -- ?+0 0x266a880 con 0x269f380
 -3475> 2014-06-03 10:40:37.377415 7fa882d01700 10 monclient(hunting): renew_subs
 -3474> 2014-06-03 10:40:37.377387 7fa87c3f3700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.12:6789/0 pipe(0x27b8780 sd=25 :56999 s=4 pgs=344 cs=1 l=1 c=0x269dc20).reader couldn't read tag, (0) Success
 -3473> 2014-06-03 10:40:37.377463 7fa87c3f3700  2 -- 192.168.50.11:6800/7283 >> 192.168.50.12:6789/0 pipe(0x27b8780 sd=25 :56999 s=4 pgs=344 cs=1 l=1 c=0x269dc20).fault (0) Success
 -3472> 2014-06-03 10:40:37.377917 7fa88850c700 10 monclient(hunting): renew_subs
 -3471> 2014-06-03 10:40:37.378958 7fa891f47700  5 osd.0 0 tick
 -3470> 2014-06-03 10:40:38.379066 7fa891f47700  5 osd.0 0 tick
 -3469> 2014-06-03 10:40:38.969300 7fa87f4fa700  5 osd.0 0 heartbeat: osd_stat(33016 kB used, 837 GB avail, 837 GB total, peers []/[] op hist [])
 -3468> 2014-06-03 10:40:39.379171 7fa891f47700  5 osd.0 0 tick


-3423> 2014-06-03 10:40:47.380021 7fa891f47700  5 osd.0 0 tick
 -3422> 2014-06-03 10:40:48.380139 7fa891f47700  5 osd.0 0 tick
 -3421> 2014-06-03 10:40:48.669961 7fa87f4fa700  5 osd.0 0 heartbeat: osd_stat(33016 kB used, 837 GB avail, 837 GB total, peers []/[] op hist [])
 -3420> 2014-06-03 10:40:48.762568 7fa88850c700  1 -- 192.168.50.11:6800/7283 <== mon.0 192.168.50.11:6789/0 1 ==== mon_map v1 ==== 614+0+0 (4047117196 0 0) 0x27801e0 con 0x269f0c0
 -3419> 2014-06-03 10:40:48.762628 7fa88850c700 10 monclient(hunting): handle_monmap mon_map v1
 -3418> 2014-06-03 10:40:48.762672 7fa88850c700 10 monclient(hunting):  got monmap 1, mon.0 is now rank 0
 -3417> 2014-06-03 10:40:48.762680 7fa88850c700 10 monclient(hunting): dump:
epoch 1



-3353> 2014-06-03 10:40:49.378256 7fa882d01700 10 monclient: renew_subs
 -3352> 2014-06-03 10:40:49.378262 7fa882d01700 10 monclient: _send_mon_message to mon.0 at 192.168.50.11:6789/0
 -3351> 2014-06-03 10:40:49.378295 7fa882d01700  1 -- 192.168.50.11:6800/7283 --> 192.168.50.11:6789/0 -- mon_subscribe({monmap=2+,osd_pg_creates=0,osdmap=0}) v2 -- ?+0 0x2719c00 con 0x269f0c0
 -3350> 2014-06-03 10:40:49.380250 7fa891f47700  5 osd.0 0 tick
 -3349> 2014-06-03 10:40:49.436305 7fa88850c700  1 -- 192.168.50.11:6800/7283 
<== mon.0 192.168.50.11:6789/0 13 ==== osd_pg_create(pg0.103,1; pg0.15d,1; 
pg0.1af,1; pg0.1d4,1; pg0.291,1; pg0.2e1,1; pg0.324,1; pg0.38e,1; pg0.406,1; 
pg0.42e,1; pg0.4b1,1; pg0.55d,1; pg0.6d0,1; pg0.6e2,1; pg0.78e,1; pg0.7cd,1; 
pg0.7d4,1; pg0.9a3,1; pg0.9ce,1; pg0.a67,1; pg0.abd,1; pg0.b14,1; pg0.b20,1; 
pg0.b8d,1; pg0.ddb,1; pg0.e6c,1; pg0.e72,1; pg0.e9c,1; pg0.f3f,1; pg0.fc3,1; 
pg0.ff8,1; pg0.106c,1; pg0.1093,1; pg0.10b9,1; pg0.10bd,1; pg0.1230,1; 
pg0.12a0,1; pg0.12d8,1; pg0.133e,1; pg0.1342,1; pg0.13b0,1; pg0.146f,1; 
pg0.14ab,1; pg0.1540,1; pg0.158c,1; pg0.1593,1; pg0.15ef,1; pg0.1625,1; 
pg0.1693,1; pg0.17ff,1; pg0.1845,1; pg0.18ec,1; pg0.198c,1; pg0.19d2,1; 
pg0.1b6e,1; pg0.1cc1,1; pg0.1d05,1; pg0.1d20,1; pg0.1dc1,1; pg0.1dd4,1; 
pg0.1e03,1; pg0.1ecb,1; pg0.1edb,1; pg0.1fff,1; pg0.209a,1; pg0.20f3,1; 
pg0.2173,1; pg0.21ba,1; pg0.21d1,1; pg0.221b,1; pg0.227f,1; pg0.2336,1; 
pg0.23bf,1; pg0.2425,1; pg0.2470,1; pg1.60,1; pg1.98,1; pg1.c7,1; pg1.15b,1; pg1.165,1; pg1.1d6,1; 
pg1.31b,1; pg1.382,1; pg1.39c,1; pg1.3f7,1; pg1.5d8,1; pg1.604,1; pg1.67b,1; 
pg1.771,1; pg1.77a,1; pg1.92f,1; pg1.9b6,1; pg1.9c6,1; pg1.9ce,1; pg1.ae2,1; 
pg1.b0d,1; pg1.b1d,1; pg1.c1b,1; pg1.c38,1; pg1.c92,1; pg1.d1c,1; pg1.d37,1; 
pg1.de2,1; pg1.edf,1; pg1.eed,1; pg1.f19,1; pg1.f8d,1; pg1.f9c,1; pg1.fbb,1; 
pg1.1103,1; pg1.111d,1; pg1.11a4,1; pg1.11c7,1; pg1.1260,1; pg1.1316,1; 
pg1.1440,1; pg1.1480,1; pg1.14d6,1; pg1.14fc,1; pg1.151d,1; pg1.1539,1; 
pg1.1570,1; pg1.15d5,1; pg1.15f3,1; pg1.16de,1; pg1.1732,1; pg1.1815,1; 
pg1.1924,1; pg1.197e,1; pg1.198b,1; pg1.19e0,1; pg1.1a30,1; pg1.1ba9,1; 
pg1.1bbf,1; pg1.1bce,1; pg1.1bdb,1; pg1.1bee,1; pg1.1cae,1; pg1.1da9,1; 
pg1.1e29,1; pg1.1edc,1; pg1.1ee1,1; pg1.1fa5,1; pg1.1fed,1; pg1.211f,1; 
pg1.21ac,1; pg1.222a,1; pg1.22c2,1; pg1.2382,1; pg1.2460,1; pg1.246c,1; 
pg1.247a,1; pg2.7e,1; pg2.14a,1; pg2.1c0,1; pg2.20a,1; pg2.317,1; pg2.51b,1; 
pg2.533,1; pg2.560,1; pg2.675,1; pg2.6d0,1; pg2.847,1; pg2.85a,1; pg2.939,1; pg2.9b5,1; pg2.a21,1; 
pg2.b6e,1; pg2.bc1,1; pg2.bd0,1; pg2.c5b,1; pg2.c5d,1; pg2.cd6,1; pg2.d28,1; 
pg2.d2b,1; pg2.dfb,1; pg2.efd,1; pg2.fe5,1; pg2.1175,1; pg2.1204,1; pg2.1252,1; 
pg2.12b1,1; pg2.1315,1; pg2.1423,1; pg2.1438,1; pg2.14c0,1; pg2.15cc,1; 
pg2.16be,1; pg2.16df,1; pg2.173f,1; pg2.185d,1; pg2.1a57,1; pg2.1b17,1; 
pg2.1b2d,1; pg2.1b70,1; pg2.1bc4,1; pg2.1be2,1; pg2.1c36,1; pg2.1c65,1; 
pg2.1e0c,1; pg2.1e48,1; pg2.1f1f,1; pg2.1f44,1; pg2.1fe0,1; pg2.1ff4,1; 
pg2.200c,1; pg2.2048,1; pg2.2049,1; pg2.207e,1; pg2.20cd,1; pg2.2157,1; 
pg2.2177,1; pg2.223e,1; pg2.226f,1; pg2.22b8,1; pg2.2390,1; pg2.243c,1; ) v2 
==== 10428+0+0 (3346767542 0 0) 0x2780780 con 0x269f0c0



ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003)
 1: (Thread::create(unsigned long)+0x8a) [0xa82dea]
 2: (SimpleMessenger::add_accept_pipe(int)+0x6a) [0xa2950a]
 3: (Accepter::entry()+0x265) [0xb3b895]
 4: /lib64/libpthread.so.0() [0x3eebc079d1]
 5: (clone()+0x6d) [0x3eeb4e8b6d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 keyvaluestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.0.log
--- end dump of recent events ---
2014-06-03 10:40:51.018095 7fa884504700 -1 common/Thread.cc: In function 'void 
Thread::create(size_t)' thread 7fa884504700 time 2014-06-03 10:40:51.017073
common/Thread.cc: 110: FAILED assert(ret == 0)

 ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003)
 1: (Thread::create(unsigned long)+0x8a) [0xa82dea]
 2: (SimpleMessenger::add_accept_pipe(int)+0x6a) [0xa2950a]
 3: (Accepter::entry()+0x265) [0xb3b895]
 4: /lib64/libpthread.so.0() [0x3eebc079d1]
 5: (clone()+0x6d) [0x3eeb4e8b6d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.

2014-06-03 10:40:51.023039 7fa85cb14700 -1 common/Thread.cc: In function 'void 
Thread::create(size_t)' thread 7fa85cb14700 time 2014-06-03 10:40:51.021762
common/Thread.cc: 110: FAILED assert(ret == 0)

 ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003)
 1: (Thread::create(unsigned long)+0x8a) [0xa82dea]
 2: (Pipe::connect()+0x2efb) [0xb2735b]
 3: (Pipe::writer()+0x9f3) [0xb28eb3]
 4: (Pipe::Writer::entry()+0xd) [0xb3481d]
 5: /lib64/libpthread.so.0() [0x3eebc079d1]
 6: (clone()+0x6d) [0x3eeb4e8b6d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.

2014-06-03 10:40:51.030486 7fa84d721700 -1 common/Thread.cc: In function 'void 
Thread::create(size_t)' thread 7fa84d721700 time 2014-06-03 10:40:51.009986
common/Thread.cc: 110: FAILED assert(ret == 0)

 ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003)
 1: (Thread::create(unsigned long)+0x8a) [0xa82dea]
 2: (Pipe::connect()+0x2efb) [0xb2735b]
 3: (Pipe::writer()+0x9f3) [0xb28eb3]
 4: (Pipe::Writer::entry()+0xd) [0xb3481d]
 5: /lib64/libpthread.so.0() [0x3eebc079d1]
 6: (clone()+0x6d) [0x3eeb4e8b6d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.


2014-06-03 10:40:51.153498 7fa849de8700 -1 common/Thread.cc: In function 'void 
Thread::create(size_t)' thread 7fa849de8700 time 2014-06-03 10:40:51.152350
common/Thread.cc: 110: FAILED assert(ret == 0)

 ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003)
 1: (Thread::create(unsigned long)+0x8a) [0xa82dea]
 2: (Pipe::connect()+0x2efb) [0xb2735b]
 3: (Pipe::writer()+0x9f3) [0xb28eb3]
 4: (Pipe::Writer::entry()+0xd) [0xb3481d]
 5: /lib64/libpthread.so.0() [0x3eebc079d1]
 6: (clone()+0x6d) [0x3eeb4e8b6d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to 
interpret this.

2014-06-03 10:40:51.160413 7fa855fa9700 -1 common/Thread.cc: In function 'void 
Thread::create(size_t)' thread 7fa855fa9700 time 2014-06-03 10:40:51.159319
common/Thread.cc: 110: FAILED assert(ret == 0)


2014-06-03 14:23:04.706115 7f5617e63700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0xa3c2580 sd=388 :6800 s=0 pgs=0 cs=0 l=1 c=0x3e037900).accept replacing existing (lossy) channel (new one lossy=1)
2014-06-03 14:46:29.834146 7f562f2ce700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x3624f080 sd=200 :6800 s=0 pgs=0 cs=0 l=1 c=0x3e0309a0).accept replacing existing (lossy) channel (new one lossy=1)
2014-06-03 15:09:54.968059 7f5617a5f700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x36248f00 sd=388 :6800 s=0 pgs=0 cs=0 l=1 c=0xa3b6880).accept replacing existing (lossy) channel (new one lossy=1)
2014-06-03 15:33:20.107179 7f5617e63700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x2dba0c80 sd=203 :6800 s=0 pgs=0 cs=0 l=1 c=0xe5074e0).accept replacing existing (lossy) channel (new one lossy=1)
2014-06-03 15:46:45.770734 7f564b0bb700  0 log [WRN] : 1 slow requests, 1 included below; oldest blocked for > 805.662937 secs
2014-06-03 15:46:45.770775 7f564b0bb700  0 log [WRN] : slow request 805.662937 seconds old, received at 2014-06-03 15:33:20.107755: osd_op(mds.0.1:13 609.00000000 [??? 1~0,omap-set-header 0~222] 1.60b82d07 RETRY=11 ondisk+retry+write e353) v4 currently waiting for pg to exist locally

================
// Then it continues to hit the auth errors below:

2014-06-03 20:53:15.998422 7f561795e700  0 auth: could not find secret_id=10
2014-06-03 20:53:15.998432 7f561795e700  0 cephx: verify_authorizer could not get service secret for service osd secret_id=10
2014-06-03 20:53:15.998437 7f561795e700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x434dcb00 sd=222 :6800 s=0 pgs=0 cs=0 l=1 c=0xe501b80).accept: got bad authorizer
2014-06-03 20:53:15.998760 7f562f2ce700  0 auth: could not find secret_id=10
2014-06-03 20:53:15.998767 7f562f2ce700  0 cephx: verify_authorizer could not get service secret for service osd secret_id=10
2014-06-03 20:53:15.998773 7f562f2ce700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x434dd000 sd=388 :6800 s=0 pgs=0 cs=0 l=1 c=0xe5070c0).accept: got bad authorizer
2014-06-03 20:53:16.199280 7f562f2ce700  0 auth: could not find secret_id=10
2014-06-03 20:53:16.199290 7f562f2ce700  0 cephx: verify_authorizer could not get service secret for service osd secret_id=10
2014-06-03 20:53:16.199294 7f562f2ce700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x434dfd00 sd=222 :6800 s=0 pgs=0 cs=0 l=1 c=0x41f6720).accept: got bad authorizer
2014-06-03 20:53:16.599881 7f562f2ce700  0 auth: could not find secret_id=10
2014-06-03 20:53:16.599890 7f562f2ce700  0 cephx: verify_authorizer could not get service secret for service osd secret_id=10
2014-06-03 20:53:16.599895 7f562f2ce700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x434de180 sd=222 :6800 s=0 pgs=0 cs=0 l=1 c=0x96249080).accept: got bad authorizer
2014-06-03 20:53:17.400519 7f562f2ce700  0 auth: could not find secret_id=10
2014-06-03 20:53:17.400529 7f562f2ce700  0 cephx: verify_authorizer could not get service secret for service osd secret_id=10
2014-06-03 20:53:17.400533 7f562f2ce700  0 -- 192.168.50.11:6800/19918 >> 192.168.50.11:6914/11553 pipe(0x434da080 sd=222 :6800 s=0 pgs=0 cs=0 l=1 c=0x96249ce0).accept: got bad authorizer

........................



// Later on, the OSD goes down and retries the connection, but keeps getting connection refused.

   -6> 2014-06-04 08:12:15.561076 7ffb38ca6700  2 -- 192.168.50.11:6801/28628 >> 192.168.40.14:0/21316 pipe(0x484cb00 sd=241 :6801 s=2 pgs=27 cs=1 l=1 c=0x48e9b80).reader couldn't read tag, (0) Success
    -5> 2014-06-04 08:12:15.561108 7ffb38ca6700  2 -- 192.168.50.11:6801/28628 >> 192.168.40.14:0/21316 pipe(0x484cb00 sd=241 :6801 s=2 pgs=27 cs=1 l=1 c=0x48e9b80).fault (0) Success
    -4> 2014-06-04 08:12:15.561719 7ffb2bed9700  2 -- 192.168.40.11:0/28628 >> 192.168.40.12:6863/23000 pipe(0x4c59400 sd=485 :46442 s=1 pgs=0 cs=0 l=1 c=0x4c38580).connect read reply (0) Success
    -3> 2014-06-04 08:12:15.561751 7ffb2bed9700  2 -- 192.168.40.11:0/28628 >> 192.168.40.12:6863/23000 pipe(0x4c59400 sd=485 :46442 s=1 pgs=0 cs=0 l=1 c=0x4c38580).fault (0) Success
    -2> 2014-06-04 08:12:15.561772 7ffb2bed9700  0 -- 192.168.40.11:0/28628 >> 192.168.40.12:6863/23000 pipe(0x4c59400 sd=485 :46442 s=1 pgs=0 cs=0 l=1 c=0x4c38580).fault
    -1> 2014-06-04 08:12:15.561870 7ffb2bed9700  2 -- 192.168.40.11:0/28628 >> 192.168.40.12:6863/23000 pipe(0x4c59400 sd=485 :46442 s=1 pgs=0 cs=0 l=1 c=0x4c38580).connect error 192.168.40.12:6863/23000, (111) Connection refused
     0> 2014-06-04 08:12:15.561892 7ffb2bed9700  2 -- 192.168.40.11:0/28628 >> 192.168.40.12:6863/23000 pipe(0x4c59400 sd=485 :46442 s=1 pgs=0 cs=0 l=1 c=0x4c38580).fault (111) Connection refused
--- logging levels ---
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
