[ceph-users] Best way to setup a Ceph Cluster as Fileserver

2016-04-15 Thread Hauke Homburg


Hello,

I am installing a Ceph cluster as a file server with Jewel under CentOS 7.
At this moment it is not in production. I installed Jewel because I
need the CephFS option.
I also tried to install Jewel on a Debian Wheezy server, but that didn't
work, so I have three questions:

Is it important to have the same Ceph version on the client side and the
server side? I ask because in NFS and SMB environments I can use
different versions of client and server.

What would be your choice for exporting the files from a CephFS server as a file
server? On the client side I have Debian Wheezy and CentOS 6, so I think it
is best not to export the files with Ceph itself (the packages there are too
old), but with SMB or NFS.

I tried to install a kernel NFS server on CentOS 7, but I ran into performance
problems. After that I searched the internet and found two possible ways:

Install Samba 4.1 and try the ceph VFS module, or install NFS-Ganesha.
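For the Samba route, I imagine the share would be a minimal smb.conf section
along these lines (only a sketch; the share name and the CephX user "samba"
are placeholders I made up):

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no

For NFS-Ganesha the equivalent would be an export block using the CEPH FSAL,
but I have not tested either yet.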

Thanks for help

Hauke

-- 
www.w3-creative.de

www.westchat.de


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread hjcho616
Deleted old messages... Here is what I am seeing. My home directory contains the copy from OSD2.

root@OSD1:/var/lib/ceph/osd# find ./ceph-*/current/meta | grep osdmap | grep 16024
./ceph-0/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
./ceph-1/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
./ceph-1/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-2/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
./ceph-2/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
root@OSD1:/var/lib/ceph/osd# diff ./ceph-0/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none ./ceph-1/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none
root@OSD1:/var/lib/ceph/osd# diff ./ceph-0/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none ./ceph-2/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none
root@OSD1:/var/lib/ceph/osd# diff ./ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none ./ceph-1/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
root@OSD1:/var/lib/ceph/osd# diff ./ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none ./ceph-2/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
root@OSD1:/var/lib/ceph/osd# diff ./ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none ~/osdmap.16024__0_4E98A1D9__none
root@OSD1:/var/lib/ceph/osd#

root@OSD2:/var/lib/ceph/osd# find ./ceph-*/current/meta | grep osdmap | grep 16024
./ceph-3/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-3/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
./ceph-4/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
./ceph-4/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-5/current/meta/DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
./ceph-5/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
root@OSD2:/var/lib/ceph/osd# diff ./ceph-3/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none ./ceph-4/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none
root@OSD2:/var/lib/ceph/osd# diff ./ceph-3/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none ./ceph-5/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none
root@OSD2:/var/lib/ceph/osd# diff ./ceph-3/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none ./ceph-4/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none
root@OSD2:/var/lib/ceph/osd# diff ./ceph-3/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none ./ceph-5/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none

(None of the diffs produced any output, i.e. all copies are identical.)

Regards,
Hong
 

On Saturday, April 16, 2016 12:35 AM, hjcho616  wrote:
 

osd.3 did have a full version when I wrote last time. I just diffed those
two files across all OSDs and they all seem to match. What else can I try?

Regards,
Hong

On Saturday, April 16, 2016 12:30 AM, huang jun  wrote:
 

 can you find the full osdmap.16024 in other osd?
seems like the osd::init doesnt read the incremental osdmap but the full osdmap,
if you find it, then copy to osd.3.

2016-04-16 13:27 GMT+08:00 hjcho616 :
> I found below file missing on osd.3 so I copied over.  Still fails with the
> similar message.  What can I try next?
>
>    -1> 2016-04-16 00:22:32.579622 7f8d5c340800 20 osd.3 0 get_map 16024 -
> loading and decoding 0x7f8d65d04900
>      0> 2016-04-16 00:22:32.584406 7f8d5c340800 -1 osd/OSD.h: In function
> 'OSDMapRef OSDService::get_map(epoch_t)' thread 7f8d5c340800 time 2016-04-16
> 00:22:32.579890
> osd/OSD.h: 885: FAILED assert(ret)
>
>  ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x82) [0x7f8d5bdc64f2]
>  2: (OSDService::get_map(unsigned int)+0x3d) [0x7f8d5b74d83d]
>  3: (OSD::init()+0x1862) [0x7f8d5b6fba52]
>  4: (main()+0x2b05) [0x7f8d5b661735]
>  5: (__libc_start_main()+0xf5) [0x7f8d581f7b45]
>  6: (()+0x337197) [0x7f8d5b6ac197]
>  NOTE: a copy of the executable, or `objdump -rdS ` is needed to
> interpret this.
>
> Regards,
> Hong
>
>


   

  ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread huang jun
Can you find the full osdmap.16024 on another OSD?
It seems OSD::init() doesn't read the incremental osdmap but the full one,
so if you find it, copy it to osd.3.
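Roughly like this sketch, using the file names that appear elsewhere in this
thread (assumptions: both OSDs are stopped first, and osd.3's meta collection
uses the same DIR_9/DIR_D layout; adjust if the directory depth differs):

# on OSD1, push the full map for epoch 16024 to osd.3 on OSD2
scp /var/lib/ceph/osd/ceph-0/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none \
    OSD2:/var/lib/ceph/osd/ceph-3/current/meta/DIR_9/DIR_D/
# make sure the copy is owned by the ceph user before starting the OSD
ssh OSD2 chown ceph:ceph /var/lib/ceph/osd/ceph-3/current/meta/DIR_9/DIR_D/osdmap.16024__0_4E98A1D9__none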

2016-04-16 13:27 GMT+08:00 hjcho616 :
> I found below file missing on osd.3 so I copied over.  Still fails with the
> similar message.  What can I try next?
>
> -1> 2016-04-16 00:22:32.579622 7f8d5c340800 20 osd.3 0 get_map 16024 -
> loading and decoding 0x7f8d65d04900
>  0> 2016-04-16 00:22:32.584406 7f8d5c340800 -1 osd/OSD.h: In function
> 'OSDMapRef OSDService::get_map(epoch_t)' thread 7f8d5c340800 time 2016-04-16
> 00:22:32.579890
> osd/OSD.h: 885: FAILED assert(ret)
>
>  ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x82) [0x7f8d5bdc64f2]
>  2: (OSDService::get_map(unsigned int)+0x3d) [0x7f8d5b74d83d]
>  3: (OSD::init()+0x1862) [0x7f8d5b6fba52]
>  4: (main()+0x2b05) [0x7f8d5b661735]
>  5: (__libc_start_main()+0xf5) [0x7f8d581f7b45]
>  6: (()+0x337197) [0x7f8d5b6ac197]
>  NOTE: a copy of the executable, or `objdump -rdS ` is needed to
> interpret this.
>
> Regards,
> Hong
>
>
> On Saturday, April 16, 2016 12:11 AM, hjcho616  wrote:
>
>
> Is this it?
>
> root@OSD2:/var/lib/ceph/osd/ceph-3/current/meta# find ./ | grep osdmap |
> grep 16024
> ./DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
>
> Regards,
> Hong
>
>
> On Friday, April 15, 2016 11:53 PM, huang jun  wrote:
>
>
> First, you should check whether file osdmap.16024 exists in your
> osd.3/current/meta dir,
> if not, you can copy it from other OSD who has it.
>
>
> 2016-04-16 12:36 GMT+08:00 hjcho616 :
>> Here is what I get wtih debug_osd = 20.
>>
>> 2016-04-15 23:28:24.429063 7f9ca0a5b800  0 set uid:gid to 1001:1001
>> (ceph:ceph)
>> 2016-04-15 23:28:24.429167 7f9ca0a5b800  0 ceph version 10.1.2
>> (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 2092
>> 2016-04-15 23:28:24.432034 7f9ca0a5b800  0 pidfile_write: ignore empty
>> --pid-file
>> 2016-04-15 23:28:24.459417 7f9ca0a5b800 10
>> ErasureCodePluginSelectJerasure:
>> load: jerasure_sse3
>> 2016-04-15 23:28:24.470016 7f9ca0a5b800 10 load: jerasure load: lrc load:
>> isa
>> 2016-04-15 23:28:24.472013 7f9ca0a5b800  2 osd.3 0 mounting
>> /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
>> 2016-04-15 23:28:24.472292 7f9ca0a5b800  0
>> filestore(/var/lib/ceph/osd/ceph-3) backend xfs (magic 0x58465342)
>> 2016-04-15 23:28:24.473496 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: FIEMAP
>> ioctl is disabled via 'filestore fiemap' config option
>> 2016-04-15 23:28:24.473541 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
>> SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config
>> option
>> 2016-04-15 23:28:24.473615 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: splice
>> is
>> supported
>> 2016-04-15 23:28:24.494485 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
>> syncfs(2)
>> syscall fully supported (by glibc and kernel)
>> 2016-04-15 23:28:24.494802 7f9ca0a5b800  0
>> xfsfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_feature: extsize is
>> disabled by conf
>> 2016-04-15 23:28:24.499066 7f9ca0a5b800  1 leveldb: Recovering log #20901
>> 2016-04-15 23:28:24.782188 7f9ca0a5b800  1 leveldb: Delete type=0 #20901
>>
>> 2016-04-15 23:28:24.782420 7f9ca0a5b800  1 leveldb: Delete type=3 #20900
>>
>> 2016-04-15 23:28:24.784810 7f9ca0a5b800  0
>> filestore(/var/lib/ceph/osd/ceph-3) mount: enabling WRITEAHEAD journal
>> mode:
>> checkpoint is not enabled
>> 2016-04-15 23:28:24.792918 7f9ca0a5b800  1 journal _open
>> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
>> bytes, directio = 1, aio = 1
>> 2016-04-15 23:28:24.800583 7f9ca0a5b800  1 journal _open
>> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
>> bytes, directio = 1, aio = 1
>> 2016-04-15 23:28:24.808144 7f9ca0a5b800  1
>> filestore(/var/lib/ceph/osd/ceph-3) upgrade
>> 2016-04-15 23:28:24.808540 7f9ca0a5b800  2 osd.3 0 boot
>> 2016-04-15 23:28:24.809265 7f9ca0a5b800 10 osd.3 0 read_superblock
>> sb(9b2c9bca-112e-48b0-86fc-587ef9a52948 osd.3
>> 4f86a418-6c67-4cb4-83a1-6c123c890036 e16024 [15332,16024]
>> lci=[16010,16024])
>> 2016-04-15 23:28:24.810029 7f9ca0a5b800 10 open_all_classes
>> 2016-04-15 23:28:24.810433 7f9ca0a5b800 10 open_all_classes found journal
>> 2016-04-15 23:28:24.810746 7f9ca0a5b800 10 _get_class adding new class
>> name
>> journal 0x7f9caa628808
>> 2016-04-15 23:28:24.811059 7f9ca0a5b800 10 _load_class journal from
>> /usr/lib/rados-classes/libcls_journal.so
>> 2016-04-15 23:28:24.814498 7f9ca0a5b800 10 register_class journal status 3
>> 2016-04-15 23:28:24.814650 7f9ca0a5b800 10 register_cxx_method
>> journal.create flags 3 0x7f9c8dadac00
>> 2016-04-15 23:28:24.814745 7f9ca0a5b800 10 register_cxx_method
>> journal.ge

Re: [ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread huang jun
Yes, it's an incremental osdmap. Is the file size correct?
You can compare it with the same file on another OSD.
If it's not the same, you can overwrite it with the correct one.
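For example, a quick check across the OSDs on one host (just a sketch; note
that the backslash in the file name has to be escaped in the shell):

ls -l /var/lib/ceph/osd/ceph-*/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none
md5sum /var/lib/ceph/osd/ceph-*/current/meta/DIR_E/DIR_3/inc\\uosdmap.16024__0_46887E3E__none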

2016-04-16 13:11 GMT+08:00 hjcho616 :
> Is this it?
>
> root@OSD2:/var/lib/ceph/osd/ceph-3/current/meta# find ./ | grep osdmap |
> grep 16024
> ./DIR_E/DIR_3/inc\uosdmap.16024__0_46887E3E__none
>
> Regards,
> Hong
>
>
> On Friday, April 15, 2016 11:53 PM, huang jun  wrote:
>
>
> First, you should check whether file osdmap.16024 exists in your
> osd.3/current/meta dir,
> if not, you can copy it from other OSD who has it.
>
>
> 2016-04-16 12:36 GMT+08:00 hjcho616 :
>> Here is what I get wtih debug_osd = 20.
>>
>> 2016-04-15 23:28:24.429063 7f9ca0a5b800  0 set uid:gid to 1001:1001
>> (ceph:ceph)
>> 2016-04-15 23:28:24.429167 7f9ca0a5b800  0 ceph version 10.1.2
>> (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 2092
>> 2016-04-15 23:28:24.432034 7f9ca0a5b800  0 pidfile_write: ignore empty
>> --pid-file
>> 2016-04-15 23:28:24.459417 7f9ca0a5b800 10
>> ErasureCodePluginSelectJerasure:
>> load: jerasure_sse3
>> 2016-04-15 23:28:24.470016 7f9ca0a5b800 10 load: jerasure load: lrc load:
>> isa
>> 2016-04-15 23:28:24.472013 7f9ca0a5b800  2 osd.3 0 mounting
>> /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
>> 2016-04-15 23:28:24.472292 7f9ca0a5b800  0
>> filestore(/var/lib/ceph/osd/ceph-3) backend xfs (magic 0x58465342)
>> 2016-04-15 23:28:24.473496 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: FIEMAP
>> ioctl is disabled via 'filestore fiemap' config option
>> 2016-04-15 23:28:24.473541 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
>> SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config
>> option
>> 2016-04-15 23:28:24.473615 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: splice
>> is
>> supported
>> 2016-04-15 23:28:24.494485 7f9ca0a5b800  0
>> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
>> syncfs(2)
>> syscall fully supported (by glibc and kernel)
>> 2016-04-15 23:28:24.494802 7f9ca0a5b800  0
>> xfsfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_feature: extsize is
>> disabled by conf
>> 2016-04-15 23:28:24.499066 7f9ca0a5b800  1 leveldb: Recovering log #20901
>> 2016-04-15 23:28:24.782188 7f9ca0a5b800  1 leveldb: Delete type=0 #20901
>>
>> 2016-04-15 23:28:24.782420 7f9ca0a5b800  1 leveldb: Delete type=3 #20900
>>
>> 2016-04-15 23:28:24.784810 7f9ca0a5b800  0
>> filestore(/var/lib/ceph/osd/ceph-3) mount: enabling WRITEAHEAD journal
>> mode:
>> checkpoint is not enabled
>> 2016-04-15 23:28:24.792918 7f9ca0a5b800  1 journal _open
>> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
>> bytes, directio = 1, aio = 1
>> 2016-04-15 23:28:24.800583 7f9ca0a5b800  1 journal _open
>> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
>> bytes, directio = 1, aio = 1
>> 2016-04-15 23:28:24.808144 7f9ca0a5b800  1
>> filestore(/var/lib/ceph/osd/ceph-3) upgrade
>> 2016-04-15 23:28:24.808540 7f9ca0a5b800  2 osd.3 0 boot
>> 2016-04-15 23:28:24.809265 7f9ca0a5b800 10 osd.3 0 read_superblock
>> sb(9b2c9bca-112e-48b0-86fc-587ef9a52948 osd.3
>> 4f86a418-6c67-4cb4-83a1-6c123c890036 e16024 [15332,16024]
>> lci=[16010,16024])
>> 2016-04-15 23:28:24.810029 7f9ca0a5b800 10 open_all_classes
>> 2016-04-15 23:28:24.810433 7f9ca0a5b800 10 open_all_classes found journal
>> 2016-04-15 23:28:24.810746 7f9ca0a5b800 10 _get_class adding new class
>> name
>> journal 0x7f9caa628808
>> 2016-04-15 23:28:24.811059 7f9ca0a5b800 10 _load_class journal from
>> /usr/lib/rados-classes/libcls_journal.so
>> 2016-04-15 23:28:24.814498 7f9ca0a5b800 10 register_class journal status 3
>> 2016-04-15 23:28:24.814650 7f9ca0a5b800 10 register_cxx_method
>> journal.create flags 3 0x7f9c8dadac00
>> 2016-04-15 23:28:24.814745 7f9ca0a5b800 10 register_cxx_method
>> journal.get_order flags 1 0x7f9c8dada3c0
>> 2016-04-15 23:28:24.814838 7f9ca0a5b800 10 register_cxx_method
>> journal.get_splay_width flags 1 0x7f9c8dada360
>> 2016-04-15 23:28:24.814925 7f9ca0a5b800 10 register_cxx_method
>> journal.get_pool_id flags 1 0x7f9c8dadaa30
>> 2016-04-15 23:28:24.815062 7f9ca0a5b800 10 register_cxx_method
>> journal.get_minimum_set flags 1 0x7f9c8dada9c0
>> 2016-04-15 23:28:24.815162 7f9ca0a5b800 10 register_cxx_method
>> journal.set_minimum_set flags 3 0x7f9c8dada830
>> 2016-04-15 23:28:24.815246 7f9ca0a5b800 10 register_cxx_method
>> journal.get_active_set flags 1 0x7f9c8dada7c0
>> 2016-04-15 23:28:24.815336 7f9ca0a5b800 10 register_cxx_method
>> journal.set_active_set flags 3 0x7f9c8dada630
>> 2016-04-15 23:28:24.815417 7f9ca0a5b800 10 register_cxx_method
>> journal.get_client flags 1 0x7f9c8dadafb0
>> 2016-04-15 23:28:24.815501 7f9ca0a5b800 10 register_cxx_method
>> journal.client_register flags 3 0x7f9c8dadc140
>> 2016-04-15 23:28:24.815589 7f9ca0a5b800 10 register_cxx_

Re: [ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread huang jun
First, you should check whether the file osdmap.16024 exists in your
osd.3/current/meta dir;
if not, you can copy it from another OSD that has it.


2016-04-16 12:36 GMT+08:00 hjcho616 :
> Here is what I get wtih debug_osd = 20.
>
> 2016-04-15 23:28:24.429063 7f9ca0a5b800  0 set uid:gid to 1001:1001
> (ceph:ceph)
> 2016-04-15 23:28:24.429167 7f9ca0a5b800  0 ceph version 10.1.2
> (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 2092
> 2016-04-15 23:28:24.432034 7f9ca0a5b800  0 pidfile_write: ignore empty
> --pid-file
> 2016-04-15 23:28:24.459417 7f9ca0a5b800 10 ErasureCodePluginSelectJerasure:
> load: jerasure_sse3
> 2016-04-15 23:28:24.470016 7f9ca0a5b800 10 load: jerasure load: lrc load:
> isa
> 2016-04-15 23:28:24.472013 7f9ca0a5b800  2 osd.3 0 mounting
> /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
> 2016-04-15 23:28:24.472292 7f9ca0a5b800  0
> filestore(/var/lib/ceph/osd/ceph-3) backend xfs (magic 0x58465342)
> 2016-04-15 23:28:24.473496 7f9ca0a5b800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: FIEMAP
> ioctl is disabled via 'filestore fiemap' config option
> 2016-04-15 23:28:24.473541 7f9ca0a5b800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
> SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
> 2016-04-15 23:28:24.473615 7f9ca0a5b800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: splice is
> supported
> 2016-04-15 23:28:24.494485 7f9ca0a5b800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: syncfs(2)
> syscall fully supported (by glibc and kernel)
> 2016-04-15 23:28:24.494802 7f9ca0a5b800  0
> xfsfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_feature: extsize is
> disabled by conf
> 2016-04-15 23:28:24.499066 7f9ca0a5b800  1 leveldb: Recovering log #20901
> 2016-04-15 23:28:24.782188 7f9ca0a5b800  1 leveldb: Delete type=0 #20901
>
> 2016-04-15 23:28:24.782420 7f9ca0a5b800  1 leveldb: Delete type=3 #20900
>
> 2016-04-15 23:28:24.784810 7f9ca0a5b800  0
> filestore(/var/lib/ceph/osd/ceph-3) mount: enabling WRITEAHEAD journal mode:
> checkpoint is not enabled
> 2016-04-15 23:28:24.792918 7f9ca0a5b800  1 journal _open
> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
> bytes, directio = 1, aio = 1
> 2016-04-15 23:28:24.800583 7f9ca0a5b800  1 journal _open
> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
> bytes, directio = 1, aio = 1
> 2016-04-15 23:28:24.808144 7f9ca0a5b800  1
> filestore(/var/lib/ceph/osd/ceph-3) upgrade
> 2016-04-15 23:28:24.808540 7f9ca0a5b800  2 osd.3 0 boot
> 2016-04-15 23:28:24.809265 7f9ca0a5b800 10 osd.3 0 read_superblock
> sb(9b2c9bca-112e-48b0-86fc-587ef9a52948 osd.3
> 4f86a418-6c67-4cb4-83a1-6c123c890036 e16024 [15332,16024] lci=[16010,16024])
> 2016-04-15 23:28:24.810029 7f9ca0a5b800 10 open_all_classes
> 2016-04-15 23:28:24.810433 7f9ca0a5b800 10 open_all_classes found journal
> 2016-04-15 23:28:24.810746 7f9ca0a5b800 10 _get_class adding new class name
> journal 0x7f9caa628808
> 2016-04-15 23:28:24.811059 7f9ca0a5b800 10 _load_class journal from
> /usr/lib/rados-classes/libcls_journal.so
> 2016-04-15 23:28:24.814498 7f9ca0a5b800 10 register_class journal status 3
> 2016-04-15 23:28:24.814650 7f9ca0a5b800 10 register_cxx_method
> journal.create flags 3 0x7f9c8dadac00
> 2016-04-15 23:28:24.814745 7f9ca0a5b800 10 register_cxx_method
> journal.get_order flags 1 0x7f9c8dada3c0
> 2016-04-15 23:28:24.814838 7f9ca0a5b800 10 register_cxx_method
> journal.get_splay_width flags 1 0x7f9c8dada360
> 2016-04-15 23:28:24.814925 7f9ca0a5b800 10 register_cxx_method
> journal.get_pool_id flags 1 0x7f9c8dadaa30
> 2016-04-15 23:28:24.815062 7f9ca0a5b800 10 register_cxx_method
> journal.get_minimum_set flags 1 0x7f9c8dada9c0
> 2016-04-15 23:28:24.815162 7f9ca0a5b800 10 register_cxx_method
> journal.set_minimum_set flags 3 0x7f9c8dada830
> 2016-04-15 23:28:24.815246 7f9ca0a5b800 10 register_cxx_method
> journal.get_active_set flags 1 0x7f9c8dada7c0
> 2016-04-15 23:28:24.815336 7f9ca0a5b800 10 register_cxx_method
> journal.set_active_set flags 3 0x7f9c8dada630
> 2016-04-15 23:28:24.815417 7f9ca0a5b800 10 register_cxx_method
> journal.get_client flags 1 0x7f9c8dadafb0
> 2016-04-15 23:28:24.815501 7f9ca0a5b800 10 register_cxx_method
> journal.client_register flags 3 0x7f9c8dadc140
> 2016-04-15 23:28:24.815589 7f9ca0a5b800 10 register_cxx_method
> journal.client_update_data flags 3 0x7f9c8dadb730
> 2016-04-15 23:28:24.815679 7f9ca0a5b800 10 register_cxx_method
> journal.client_update_state flags 3 0x7f9c8dadb300
> 2016-04-15 23:28:24.815771 7f9ca0a5b800 10 register_cxx_method
> journal.client_unregister flags 3 0x7f9c8dadf060
> 2016-04-15 23:28:24.815854 7f9ca0a5b800 10 register_cxx_method
> journal.client_commit flags 3 0x7f9c8dadbc40
> 2016-04-15 23:28:24.815934 7f9ca0a5b800 10 register_cxx_method
> journal.client_list flags 1 0x7f9c8dadc9c0
> 2016-04-15 23:28:24.816019 7f9ca0a5b800 10 registe

Re: [ceph-users] howto delete a pg

2016-04-15 Thread huang jun
Regarding your cluster warning message: it means some objects in that PG are
inconsistent between the primary and the replicas,
so you can try 'ceph pg repair $PGID'.
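For example, with the PG from your output:

ceph pg repair 0.e6
# then watch the cluster log for the scrub/repair result
ceph -w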

2016-04-16 9:04 GMT+08:00 Oliver Dzombic :
> Hi,
>
> i meant of course
>
> 0.e6_head
> 0.e6_TEMP
>
> in
>
> /var/lib/ceph/osd/ceph-12/current
>
> sry...
>
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
> mailto:i...@ip-interactive.de
>
> Anschrift:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
>
> Steuer Nr.: 35 236 3622 1
> UST ID: DE274086107
>
>
> Am 16.04.2016 um 03:03 schrieb Oliver Dzombic:
>> Hi,
>>
>> pg 0.e6 is active+clean+inconsistent, acting [12,7]
>>
>> /var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
>> -1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
>> 4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
>> hit_set_archive, 0/0 whiteouts, 18467422208/18471616512 bytes,0/0
>> hit_set_archive bytes.
>>
>>
>> i tried to follow
>>
>> https://ceph.com/planet/ceph-manually-repair-object/
>>
>> did not really work for me.
>>
>> How do i kill this pg completely from osd.12 ?
>>
>> Can i simply delete
>>
>> 0.6_head
>> 0.6_TEMP
>>
>> in
>>
>> /var/lib/ceph/osd/ceph-12/current
>>
>> and ceph will take the other copy and multiply it again, and all is fine ?
>>
>> Or would that be the start of the end ? ^^;
>>
>> Thank you !
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Thank you!
HuangJun
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure coding after striping

2016-04-15 Thread huang jun
For striped objects, the main benefit is that your cluster's OSD capacity
usage gets more balanced,
and write/read requests are spread across the whole cluster, which
improves write/read performance.
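As a minimal sketch of the pool side (the profile values below are only an
example; the striping itself is done on the client, e.g. via libradosstriper
or RBD's --stripe-unit/--stripe-count, and the EC pool just stores the
resulting objects):

ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec42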

2016-04-15 22:17 GMT+08:00 Chandan Kumar Singh :
> Hi
>
> Is it a good practice to store striped objects in a EC pool? If yes, what
> are the pros and cons of such a pattern?
>
> Regards
> Chandan
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Thank you!
HuangJun
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread huang jun
Can you set 'debug_osd = 20' in ceph.conf and restart the osd again,
and post the corrupt log.
I doubt it's problem related to "0 byte osdmap" decode problem?
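For example (just a sketch; the systemd unit name assumes your Jewel install
runs the OSD as ceph-osd@3):

# in /etc/ceph/ceph.conf on the OSD host
[osd]
    debug osd = 20

systemctl restart ceph-osd@3
# the output will land in /var/log/ceph/ceph-osd.3.log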

2016-04-16 12:14 GMT+08:00 hjcho616 :
> I've been successfully running cephfs on my Debian Jessies for a while and
> one day after power outage, MDS wasn't happy.  MDS crashing after it was
> done loading, increasing the memory utilization quite a bit.  I was running
> infernalis 9.2.0 and did successful upgrade from Hammer before... so I
> thought I may have hit a bug and decided to try 9.2.1.
>
> In 9.2.1, it was not happy that my journal didn't have permission for user
> ceph.  So corrected it.  Then all of my OSDs are no longer starting.
> Failing with similar messages as below.  I upgraded to Jewel, as I didn't
> see too much more complexitiy to upgrade from Infernalis and am still seeing
> these errors.
>
> 2016-04-15 22:47:04.897500 7f65fbbb0800  0 set uid:gid to 1001:1001
> (ceph:ceph)
> 2016-04-15 22:47:04.897635 7f65fbbb0800  0 ceph version 10.1.2
> (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 1284
> 2016-04-15 22:47:04.900585 7f65fbbb0800  0 pidfile_write: ignore empty
> --pid-file
> 2016-04-15 22:47:05.467530 7f65fbbb0800  0
> filestore(/var/lib/ceph/osd/ceph-3) backend xfs (magic 0x58465342)
> 2016-04-15 22:47:05.477912 7f65fbbb0800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: FIEMAP
> ioctl is disabled via 'filestore fiemap' config option
> 2016-04-15 22:47:05.477999 7f65fbbb0800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features:
> SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
> 2016-04-15 22:47:05.478091 7f65fbbb0800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: splice is
> supported
> 2016-04-15 22:47:05.494593 7f65fbbb0800  0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: syncfs(2)
> syscall fully supported (by glibc and kernel)
> 2016-04-15 22:47:05.494785 7f65fbbb0800  0
> xfsfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_feature: extsize is
> disabled by conf
> 2016-04-15 22:47:05.596738 7f65fbbb0800  1 leveldb: Recovering log #20899
> 2016-04-15 22:47:05.825914 7f65fbbb0800  1 leveldb: Delete type=0 #20899
>
> 2016-04-15 22:47:05.826089 7f65fbbb0800  1 leveldb: Delete type=3 #20898
>
> 2016-04-15 22:47:05.900058 7f65fbbb0800  0
> filestore(/var/lib/ceph/osd/ceph-3) mount: enabling WRITEAHEAD journal mode:
> checkpoint is not enabled
> 2016-04-15 22:47:06.377878 7f65fbbb0800  1 journal _open
> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
> bytes, directio = 1, aio = 1
> 2016-04-15 22:47:06.381738 7f65fbbb0800  1 journal _open
> /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096
> bytes, directio = 1, aio = 1
> 2016-04-15 22:47:06.384954 7f65fbbb0800  1
> filestore(/var/lib/ceph/osd/ceph-3) upgrade
> 2016-04-15 22:47:06.415851 7f65fbbb0800  0 
> cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
> 2016-04-15 22:47:06.419654 7f65fbbb0800  0  cls/hello/cls_hello.cc:305:
> loading cls_hello
> 2016-04-15 22:47:06.498512 7f65fbbb0800 -1 osd/OSD.h: In function 'OSDMapRef
> OSDService::get_map(epoch_t)' thread 7f65fbbb0800 time 2016-04-15
> 22:47:06.494680
> osd/OSD.h: 885: FAILED assert(ret)
>
>  ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x82) [0x7f65fb6364f2]
>  2: (OSDService::get_map(unsigned int)+0x3d) [0x7f65fafbd83d]
>  3: (OSD::init()+0x1862) [0x7f65faf6ba52]
>  4: (main()+0x2b05) [0x7f65faed1735]
>  5: (__libc_start_main()+0xf5) [0x7f65f7a67b45]
>  6: (()+0x337197) [0x7f65faf1c197]
>  NOTE: a copy of the executable, or `objdump -rdS ` is needed to
> interpret this.
>
> --- begin dump of recent events ---
>-78> 2016-04-15 22:47:04.873688 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perfcounters_dump hook 0x7f66067e2030
>-77> 2016-04-15 22:47:04.873771 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command 1 hook 0x7f66067e2030
>-76> 2016-04-15 22:47:04.873804 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perf dump hook 0x7f66067e2030
>-75> 2016-04-15 22:47:04.873834 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perfcounters_schema hook 0x7f66067e2030
>-76> 2016-04-15 22:47:04.873804 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perf dump hook 0x7f66067e2030
>-75> 2016-04-15 22:47:04.873834 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perfcounters_schema hook 0x7f66067e2030
>-74> 2016-04-15 22:47:04.873860 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command 2 hook 0x7f66067e2030
>-73> 2016-04-15 22:47:04.873886 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perf schema hook 0x7f66067e2030
>-72> 2016-04-15 22:47:04.873916 7f65fbbb0800  5 asok(0x7f660689a000)
> register_command perf reset hook 0x7f66067e2030
>-71> 2016-04-15 2

[ceph-users] infernalis and jewel upgrades...

2016-04-15 Thread hjcho616
I've been successfully running CephFS on my Debian Jessie machines for a while, and one
day after a power outage the MDS wasn't happy: it kept crashing after it was done
loading, increasing its memory utilization quite a bit. I was running
Infernalis 9.2.0 and had done a successful upgrade from Hammer before, so I thought
I might have hit a bug and decided to try 9.2.1.
In 9.2.1 it was not happy that my journal didn't have permissions for the ceph user,
so I corrected that. Then none of my OSDs would start, all failing
with messages similar to the ones below. I upgraded to Jewel, since upgrading from
Infernalis didn't look much more complex, and I am still seeing these errors.
2016-04-15 22:47:04.897500 7f65fbbb0800  0 set uid:gid to 1001:1001 (ceph:ceph)
2016-04-15 22:47:04.897635 7f65fbbb0800  0 ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4), process ceph-osd, pid 1284
2016-04-15 22:47:04.900585 7f65fbbb0800  0 pidfile_write: ignore empty --pid-file
2016-04-15 22:47:05.467530 7f65fbbb0800  0 filestore(/var/lib/ceph/osd/ceph-3) backend xfs (magic 0x58465342)
2016-04-15 22:47:05.477912 7f65fbbb0800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2016-04-15 22:47:05.477999 7f65fbbb0800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2016-04-15 22:47:05.478091 7f65fbbb0800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: splice is supported
2016-04-15 22:47:05.494593 7f65fbbb0800  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2016-04-15 22:47:05.494785 7f65fbbb0800  0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-3) detect_feature: extsize is disabled by conf
2016-04-15 22:47:05.596738 7f65fbbb0800  1 leveldb: Recovering log #20899
2016-04-15 22:47:05.825914 7f65fbbb0800  1 leveldb: Delete type=0 #20899
2016-04-15 22:47:05.826089 7f65fbbb0800  1 leveldb: Delete type=3 #20898
2016-04-15 22:47:05.900058 7f65fbbb0800  0 filestore(/var/lib/ceph/osd/ceph-3) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2016-04-15 22:47:06.377878 7f65fbbb0800  1 journal _open /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096 bytes, directio = 1, aio = 1
2016-04-15 22:47:06.381738 7f65fbbb0800  1 journal _open /var/lib/ceph/osd/ceph-3/journal fd 18: 14998831104 bytes, block size 4096 bytes, directio = 1, aio = 1
2016-04-15 22:47:06.384954 7f65fbbb0800  1 filestore(/var/lib/ceph/osd/ceph-3) upgrade
2016-04-15 22:47:06.415851 7f65fbbb0800  0  cls/cephfs/cls_cephfs.cc:202: loading cephfs_size_scan
2016-04-15 22:47:06.419654 7f65fbbb0800  0  cls/hello/cls_hello.cc:305: loading cls_hello
2016-04-15 22:47:06.498512 7f65fbbb0800 -1 osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)' thread 7f65fbbb0800 time 2016-04-15 22:47:06.494680
osd/OSD.h: 885: FAILED assert(ret)

 ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x82) [0x7f65fb6364f2]
 2: (OSDService::get_map(unsigned int)+0x3d) [0x7f65fafbd83d]
 3: (OSD::init()+0x1862) [0x7f65faf6ba52]
 4: (main()+0x2b05) [0x7f65faed1735]
 5: (__libc_start_main()+0xf5) [0x7f65f7a67b45]
 6: (()+0x337197) [0x7f65faf1c197]
 NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.

--- begin dump of recent events ---
   -78> 2016-04-15 22:47:04.873688 7f65fbbb0800  5 asok(0x7f660689a000) register_command perfcounters_dump hook 0x7f66067e2030
   -77> 2016-04-15 22:47:04.873771 7f65fbbb0800  5 asok(0x7f660689a000) register_command 1 hook 0x7f66067e2030
   -76> 2016-04-15 22:47:04.873804 7f65fbbb0800  5 asok(0x7f660689a000) register_command perf dump hook 0x7f66067e2030
   -75> 2016-04-15 22:47:04.873834 7f65fbbb0800  5 asok(0x7f660689a000) register_command perfcounters_schema hook 0x7f66067e2030
   -76> 2016-04-15 22:47:04.873804 7f65fbbb0800  5 asok(0x7f660689a000) register_command perf dump hook 0x7f66067e2030
   -75> 2016-04-15 22:47:04.873834 7f65fbbb0800  5 asok(0x7f660689a000) register_command perfcounters_schema hook 0x7f66067e2030
   -74> 2016-04-15 22:47:04.873860 7f65fbbb0800  5 asok(0x7f660689a000) register_command 2 hook 0x7f66067e2030
   -73> 2016-04-15 22:47:04.873886 7f65fbbb0800  5 asok(0x7f660689a000) register_command perf schema hook 0x7f66067e2030
   -72> 2016-04-15 22:47:04.873916 7f65fbbb0800  5 asok(0x7f660689a000) register_command perf reset hook 0x7f66067e2030
   -71> 2016-04-15 22:47:04.873943 7f65fbbb0800  5 asok(0x7f660689a000) register_command config show hook 0x7f66067e2030
   -70> 2016-04-15 22:47:04.873974 7f65fbbb0800  5 asok(0x7f660689a000) register_command config set hook 0x7f66067e2030
   -69> 2016-04-15 22:47:04.874000 7f65fbbb0800  5 asok(0x7f660689a000) register_command config get hook 0x7f66067e2030
   -68> 20

Re: [ceph-users] howto delete a pg

2016-04-15 Thread Oliver Dzombic
Hi,

i meant of course

0.e6_head
0.e6_TEMP

in

/var/lib/ceph/osd/ceph-12/current

sry...


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


Am 16.04.2016 um 03:03 schrieb Oliver Dzombic:
> Hi,
> 
> pg 0.e6 is active+clean+inconsistent, acting [12,7]
> 
> /var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
> -1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
> 4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
> hit_set_archive, 0/0 whiteouts, 18467422208/18471616512 bytes,0/0
> hit_set_archive bytes.
> 
> 
> i tried to follow
> 
> https://ceph.com/planet/ceph-manually-repair-object/
> 
> did not really work for me.
> 
> How do i kill this pg completely from osd.12 ?
> 
> Can i simply delete
> 
> 0.6_head
> 0.6_TEMP
> 
> in
> 
> /var/lib/ceph/osd/ceph-12/current
> 
> and ceph will take the other copy and multiply it again, and all is fine ?
> 
> Or would that be the start of the end ? ^^;
> 
> Thank you !
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] howto delete a pg

2016-04-15 Thread Oliver Dzombic
Hi,

pg 0.e6 is active+clean+inconsistent, acting [12,7]

/var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
-1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
hit_set_archive, 0/0 whiteouts, 18467422208/18471616512 bytes,0/0
hit_set_archive bytes.


i tried to follow

https://ceph.com/planet/ceph-manually-repair-object/

did not really work for me.

How do i kill this pg completely from osd.12 ?

Can i simply delete

0.6_head
0.6_TEMP

in

/var/lib/ceph/osd/ceph-12/current

and ceph will take the other copy and multiply it again, and all is fine ?

Or would that be the start of the end ? ^^;

Thank you !

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:i...@ip-interactive.de

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSDs refuse to start, latest osdmap missing

2016-04-15 Thread David Zafman


The ceph-objectstore-tool set-osdmap operation updates existing 
osdmaps.  If a map doesn't already exist the --force option can be used 
to create it.  It appears safe in your case to use that option.
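Roughly along these lines (only a sketch; the paths are the ones from your
log, the epoch is the one your OSD asks for at startup, and the exact options
can differ by release, so check ceph-objectstore-tool --help first):

# grab the full map from the monitors
ceph osd getmap 276424 -o /tmp/osdmap.276424
# with the OSD stopped, write it into the OSD's store
ceph-objectstore-tool --data-path /local/ceph/osd.43 --journal-path /dev/ssd/journal.43 \
    --op set-osdmap --file /tmp/osdmap.276424 --force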


David

On 4/15/16 9:47 AM, Markus Blank-Burian wrote:

Hi,

  


we had a problem on our production cluster (running 9.2.1) which caused /proc,
/dev and /sys to be unmounted. During this time, we received the following error
on a large number of OSDs (for various osdmap epochs):

  


Apr 15 15:25:19 kaa-99 ceph-osd[4167]: 2016-04-15 15:25:19.457774 7f1c817fd700
0 filestore(/local/ceph/osd.43) write couldn't open
meta/-1/c188e154/osdmap.276293/0: (2) No such file or directory

  


After restarting the hosts, the OSDs now refuse to start with:

  


Apr 15 16:03:53 kaa-99 ceph-osd[4211]: -2> 2016-04-15 16:03:53.089842
7f8e9f840840 10 _load_class version success

Apr 15 16:03:53 kaa-99 ceph-osd[4211]: -1> 2016-04-15 16:03:53.089863
7f8e9f840840 20 osd.43 0 get_map 276424 - loading and decoding 0x7f8e9b841780

Apr 15 16:03:53 kaa-99 ceph-osd[4211]:  0> 2016-04-15 16:03:53.140754
7f8e9f840840 -1 osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)'
thread 7f8e9f840840 time 2016-04-15 16:03:53.139563

osd/OSD.h: 847: FAILED assert(ret)

  


Inserting the map with ceph-objectstore-tool –op set-osdmap does not work and
gives the following error:

  


osdmap (-1/c1882e94/osdmap.276507/0) does not exist.

2016-04-15 17:14:00.335751 7f4b4d75b840  1 journal close /dev/ssd/journal.43

  


How can I get the OSDs running again?

  


I also created an issue for this in the tracker:
http://tracker.ceph.com/issues/15520

There are some similar entries, but I could not find a solution without
recreating the OSD.

  

  


Markus

  


--

Markus Blank-Burian

AK Heuer, Institut für Physikalische Chemie, WWU Münster

Corrensstraße 28/30

Raum E005

Tel.: 0251 / 83 29178

E-Mail:   blankbur...@wwu.de

  

  





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSDs refuse to start, latest osdmap missing

2016-04-15 Thread Markus Blank-Burian
Hi,

 

we had a problem on our production cluster (running 9.2.1) which caused /proc,
/dev and /sys to be unmounted. During this time, we received the following error
on a large number of OSDs (for various osdmap epochs): 

 

Apr 15 15:25:19 kaa-99 ceph-osd[4167]: 2016-04-15 15:25:19.457774 7f1c817fd700
0 filestore(/local/ceph/osd.43) write couldn't open
meta/-1/c188e154/osdmap.276293/0: (2) No such file or directory

 

After restarting the hosts, the OSDs now refuse to start with:

 

Apr 15 16:03:53 kaa-99 ceph-osd[4211]: -2> 2016-04-15 16:03:53.089842
7f8e9f840840 10 _load_class version success

Apr 15 16:03:53 kaa-99 ceph-osd[4211]: -1> 2016-04-15 16:03:53.089863
7f8e9f840840 20 osd.43 0 get_map 276424 - loading and decoding 0x7f8e9b841780

Apr 15 16:03:53 kaa-99 ceph-osd[4211]:  0> 2016-04-15 16:03:53.140754
7f8e9f840840 -1 osd/OSD.h: In function 'OSDMapRef OSDService::get_map(epoch_t)'
thread 7f8e9f840840 time 2016-04-15 16:03:53.139563

osd/OSD.h: 847: FAILED assert(ret)

 

Inserting the map with ceph-objectstore-tool –op set-osdmap does not work and
gives the following error:

 

osdmap (-1/c1882e94/osdmap.276507/0) does not exist.

2016-04-15 17:14:00.335751 7f4b4d75b840  1 journal close /dev/ssd/journal.43

 

How can I get the OSDs running again?

 

I also created an issue for this in the tracker:
http://tracker.ceph.com/issues/15520

There are some similar entries, but I could not find a solution without
recreating the OSD.

 

 

Markus

 

--

Markus Blank-Burian

AK Heuer, Institut für Physikalische Chemie, WWU Münster

Corrensstraße 28/30

Raum E005

Tel.: 0251 / 83 29178

E-Mail:   blankbur...@wwu.de

 

 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Erasure coding after striping

2016-04-15 Thread Chandan Kumar Singh
Hi

Is it a good practice to store striped objects in a EC pool? If yes, what
are the pros and cons of such a pattern?

Regards
Chandan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Erasure coding for small files vs large files

2016-04-15 Thread Chandan Kumar Singh
Hi

I am evaluating EC for a ceph cluster where the objects are mostly of
smaller sizes (< 1 MB) and occasionally large (~ 100 - 500 MB). Besides the
general performance penalty of EC, is there any additional disadvantage of
storing small objects along with large objects in same EC pool.

More generally, should size be of any consideration for storing objects in
EC pool.

Regards
Chandan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Antw: Re: librados: client.admin authentication error

2016-04-15 Thread Steffen Weißgerber



>>> "leon...@gstarcloud.com"  schrieb am
Freitag, 15. April
2016 um 11:33:
> Hello Daniel, 
>  
> I'm a newbie to Ceph, and when i config the storage cluster on CentOS
7 VMs, 
> i encontered the same problem as you posted on 
>
[http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/041992.html].

> I've done lots of searching and trying, but still cannot make it
work.
> 
> Could you please tell me how you solved  the problem?
> Thanks in advance.
> 
> 
> # additional terminal info:
> [root@ceph3 ceph]# ceph --debug -v 
> ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
> [root@ceph3 ceph]# ceph --debug --verbose
> parsed_args: Namespace(admin_socket=None, admin_socket_nope=None, 
> cephconf=None, client_id=None, client_name=None, cluster=None, 
> cluster_timeout=None, completion=False, help=False, input_file=None,

> output_file=None, output_format=None, status=False, verbose=True, 
> version=False, watch=False, watch_debug=False, watch_error=False, 
> watch_info=False, watch_sec=False, watch_warn=False), childargs:
['--debug']
> 2016-04-15 17:24:37.909282 7f7169d4f700  0 librados: client.admin 
> authentication error (95) Operation not supported
> Error connecting to cluster: Error
> 
> 

Looks like you're doing this from outside of the cluster.

Check whether you have a copy of ceph.conf in /etc/ceph; what the client
is missing is the ceph.client.admin.keyring file for connecting to the
cluster as a cluster admin. That's the default if no --id parameter is given.

But if this is not your deploy or cluster admin host, you should avoid
using the admin authentication (just as you should avoid working with
root privileges by default on Linux).

Instead, create a separate Ceph user with the subset of capabilities
you need (see
http://docs.ceph.com/docs/hammer/rados/operations/user-management/)
and deploy the corresponding keyring to the client hosts.

To connect as a different user, insert an entry like

[client.health]
  keyring = /etc/ceph/ceph.client.health.keyring

into ceph.conf and execute 'ceph --id=health ...'.
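A sketch of creating such a user and keyring (the capabilities here are only
an example; adjust them to what the client actually needs):

ceph auth get-or-create client.health mon 'allow r' osd 'allow r' \
    -o /etc/ceph/ceph.client.health.keyring
ceph --id health status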

Regards

Steffen

> 
> Leon(蔡天雄)
> leon...@gstarcloud.com

-- 
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Geschaeftsfuehrerin: Gudrun Kappich
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph cluster upgrade - adding ceph osd server

2016-04-15 Thread Andrei Mikhailovsky
Hi all, 

Was wondering what is the best way to add a new osd server to the small ceph 
cluster? I am interested in minimising performance degradation as the cluster 
is live and actively used. 

At the moment i've got the following setup: 

2 osd servers (9 osds each) 
Journals on Intel 520/530 ssds 
3 mon servers 
Infiniband 40gbit/s interconnect (using ipoib) 
Replication: 2 


The cluster will be modified to become: 

3 osd servers (10 osds each) 
Journals on Intel S3710 ssds (one ssd to five osds) 
Replication: 2 

What I am interested to learn is the best way to upgrade my cluster without 
having much performance problems. Should I first add an additional osd to the 
existing two osd servers, wait for the cluster to sync and then add the 3rd osd 
server? Or should I add the third osd server with 9 osds, wait for the cluster 
to sync and replicate and add one osd to the three osd servers? Or should I 
just add the third osd server with 10 osds and additional osd to the existing 
osd servers and wait for everything to sync? 
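I assume that whichever order I choose, I can also soften the impact by 
throttling recovery and backfill while the new OSDs fill up, along the lines 
of the following (values just an example): 

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1' 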

Many thanks 

Andrei 


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread lin zhou
Thanks Ilya. The following is the solution.

3.8 is missing a *ton* of fixes, I'd strongy recommend an upgrade to
4.0+.

If the osdc output is still the same, try marking osd28 down with "ceph osd
down 28" (it'll come back automatically) and triggering some I/O (e.g.
a small read from a file you can open).  You should see

libceph: osd28 down
libceph: osd28 up

in the dmesg after the I/O is triggered.

Attach

# ceph -s
# find /sys/kernel/debug/ceph -type f -print -exec cat {} \;

when you are done.

2016-04-15 17:42 GMT+08:00 lin zhou :
> some thing goes well now.
> after I reboot osd.28.
>
> the rbd show from :
> /sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/osdc
> 155198393 osd28 2.2e56 rb.0.9e5ab.6b8b4567.0001f432 write
>
> I can read the file which I can not read before reboot osd.28
>
> 2016-04-15 17:37 GMT+08:00 lin zhou :
>> the increment log in dmesg is :
>> [25592034.504614] libceph: osd44 192.168.43.15:6823 socket closed (con
>> state OPEN)
>> [25592545.157129] libceph: osd17 192.168.43.13:6832 socket closed (con
>> state OPEN)
>> [25593569.346612] libceph: osd28 down
>> [25593573.750922] libceph: osd28 up
>> [25593578.317884] EXT4-fs (rbd17): re-mounted. Opts:
>> grpjquota=quota.group,usrjquota=quota.user,jqfmt=vfsv1
>>
>>
>> /sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/osdc
>> 155198396 osd38 2.2d68 rb.0.5b06d.6b8b4567.0001f433 write
>> 155198529 osd87 2.27d6 rb.0.578cc.6b8b4567.000c0021 write
>> 155198530 osd80 2.33e rb.0.578c3.6b8b4567.000c0021 write
>> 155198531 osd16 2.79ce rb.0.5486a.6b8b4567.6421 write
>> 155198532 osd22 2.b35e rb.0.899f7.6b8b4567.322f write
>> 155198533 osd26 2.ea40 rb.0.2b68d4.6b8b4567.00040022 write
>> 155198534 osd20 2.713d rb.0.578d5.6b8b4567.000c0021 write
>> 155198535 osd26 2.e436 rb.0.54935.6b8b4567.6421 write
>> 155198536 osd56 2.cc9d rb.0.56fb6.6b8b4567.0001f421 write
>> 155198537 osd80 2.936b rb.0.5486d.6b8b4567.6421 write
>> 155198539 osd51 2.d1bd rb.0.51ae2.6b8b4567.00018623 write
>> 155198586 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> 155198587 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> 155198597 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> 155198598 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> 155199106 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> 155199460 osd40 2.f093 rb.0.899f7.6b8b4567.0470 write
>> /sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/monc
>> have osdmap 32321
>> want next osdmap
>> root@musicgci5:~# dmesg |less
>> root@musicgci5:~# ceph -s
>>   cluster 409059ba-797e-46da-bc2f-83e3c7779094
>>health HEALTH_OK
>>monmap e1: 3 mons at
>> {musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.13:6789/0,musicgci4=192.168.43.14:6789/0},
>> election epoch 70, quorum 0,1,2 musicgci2,musicgci3,musicgci4
>>osdmap e32321: 69 osds: 69 up, 69 in
>> pgmap v39523780: 18748 pgs: 18748 active+clean; 48326 GB data, 141
>> TB used, 46976 GB / 187 TB avail; 1000KB/s wr, 10op/s
>>mdsmap e1: 0/0/1 up
>>
>> 2016-04-15 17:33 GMT+08:00 Ilya Dryomov :
>>> On Fri, Apr 15, 2016 at 11:18 AM, lin zhou  wrote:
 root@musicgci5:~# uname -a
 Linux musicgci5 3.8.0-31-generic #46~precise1 SMP Wed Sep 25 23:05:54
 CST 2013 x86_64 x86_64 x86_64 GNU/Linux
 root@musicgci5:~# lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description: Ubuntu 12.04.3 LTS
 Release: 12.04
 Codename: precise
 root@musicgci5:~# ceph -v
 ceph version 0.67.7 (d7ab4244396b57aac8b7e80812115bbd079e6b73)
>>>
>>> 3.8 is missing a *ton* of fixes, I'd strongy recommend an upgrade to
>>> 4.0+.
>>>
>>> If the osdc output is still the same, try marking osd28 down with "ceph osd
>>> down 28" (it'll come back automatically) and triggering some I/O (e.g.
>>> a small read from a file you can open).  You should see
>>>
>>> libceph: osd28 down
>>> libceph: osd28 up
>>>
>>> in the dmesg after the I/O is triggered.
>>>
>>> Attach
>>>
>>> # ceph -s
>>> # find /sys/kernel/debug/ceph -type f -print -exec cat {} \;
>>>
>>> when you are done.
>>>
>>> Thanks,
>>>
>>> Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Antw: Re: Deprecating ext4 support

2016-04-15 Thread Steffen Weißgerber



>>> Christian Balzer wrote on Thursday, 14 April 2016 at 17:00:

> Hello,
> 
> [reduced to ceph-users]
> 
> On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote:
> 
>> 
>> 
>> >>> Christian Balzer wrote on Tuesday, 12 April 2016 at 01:39:
>> 
>> > Hello,
>> > 
>> 
>> Hi,
>> 
>> > I'm officially only allowed to do (preventative) maintenance
during
>> > weekend nights on our main production cluster. 
>> > That would mean 13 ruined weekends at the realistic rate of 1 OSD
per
>> > night, so you can see where my lack of enthusiasm for OSD
recreation
>> > comes from.
>> > 
>> 
>> Wondering extremely about that. We introduced ceph for VM's on RBD
to not
>> have to move maintenance time to night shift.
>> 
> This is Japan. 
> It makes the most anal retentive people/rules in "der alten Heimat"
look
> like a bunch of hippies on drugs.
> 
> Note the preventative and I should have put "officially" in quotes,
like
> that.
> 
> I can do whatever I feel comfortable with on our other production
cluster,
> since there aren't hundreds of customers with very, VERY tight SLAs
on it.
> 
> So if I were to tell my boss that I want to renew all OSDs he'd say
"Sure,
> but at time that if anything goes wrong it will not impact any
customer
> unexpectedly" meaning the official maintenance windows...
> 

For "all OSD's" (at the same time), I would agree. But when we talk
about
changing one by one the effect to a cluster auf x OSD's on y nodes ...
Hmm.

>> My understanding of ceph is that it was also made as reliable
storage in
>> case of hardware failure.
>>
> Reliable, yes. With certain limitations, see below.
>  
>> So what's the difference between maintain an osd and it's failure
in
>> effect for the end user? In both cases it should be none.
>> 
> Ideally, yes.
> Note than an OSD failure can result in slow I/O (to the point of
what
> would be considered service interruption) depending on the failure
mode
> and the various timeout settings.
> 
> So planned and properly executed maintenance has less impact.
> None (or at least not noticeable) IF your cluster has enough
resources
> and/or all the tuning has been done correctly.
> 
>> Maintaining OSD's should be routine so that you're confident that
your
>> application stays save while hardware fails in a amount one
configured
>> unused reserve.
>> 
> IO is a very fickle beast, it may perform splendidly at 2000ops/s
just to
> totally go down the drain at 2100. 
> Knowing your capacity and reserve isn't straightforward, especially
not in
> a live environment as compared to synthetic tests. 
> 
> In short, could that cluster (now, after upgrades and adding a cache
tier)
> handle OSD renewals at any given time?
> Absolutely.
> Will I get an official blessing to do so?
> No effing way.
> 

Understood. A setup with cache tiering is more complex than simple OSDs
with journals on SSD.

But that reminds me of a keynote held by Kris Köhntopp at
the FFG of the GUUG in 2015, where he talked about restarting a huge
MySQL database that is part of the backend of booking.com. He had the choice
of regularly restarting the DB, which took 10-15 minutes or so, or killing
the DB process, after which the DB recovery took only 1-2 minutes.

Having this knowledge, he said, is one thing, but being confident enough
to do it with a good feeling only comes from the experience of having
done it routinely.

Please don't get me wrong, I'm not trying to push you to be reckless.

Another interesting fact Kris explained was that the IT department was
equipped with a budget for loss of business due to IT unavailability, and
management only intervened when this budget was exhausted.

That is also a kind of reserve an IT administrator can work with. But
having such a budget surely depends on a corresponding management mentality.

>> In the end what happens to your cluster, when a complete node
fails?
>> 

> Nothing much, in fact LESS than when an OSD should fail since it
won't
> trigger re-balancing (mon_osd_down_out_subtree_limit = host).
> 

Yes, but can a single OSD change trigger this in your configuration, and
is the amount of data really large enough to cause a relevant recovery load?

And you have the same problem when you extend your cluster, don't you?

For me, the kind of operation that would give me such worries would be
changing crushmap-related things (e.g. our tunables are already on the
bobtail profile), but mainly because I have never done it.

> Regards,
> 
> Christian

Regards

Steffen

> -- 
> Christian BalzerNetwork/Systems Engineer
> ch...@gol.com Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

-- 
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Geschaeftsfuehrerin: Gudrun Kappich
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] librados: client.admin authentication error

2016-04-15 Thread leon...@gstarcloud.com
Hello Daniel, 
 
I'm a newbie to Ceph, and when I configured the storage cluster on CentOS 7 VMs I 
encountered the same problem you posted on 
[http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/041992.html]. 
I've done lots of searching and trying, but still cannot make it work.

Could you please tell me how you solved the problem?
Thanks in advance.


# additional terminal info:
[root@ceph3 ceph]# ceph --debug -v 
ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
[root@ceph3 ceph]# ceph --debug --verbose
parsed_args: Namespace(admin_socket=None, admin_socket_nope=None, 
cephconf=None, client_id=None, client_name=None, cluster=None, 
cluster_timeout=None, completion=False, help=False, input_file=None, 
output_file=None, output_format=None, status=False, verbose=True, 
version=False, watch=False, watch_debug=False, watch_error=False, 
watch_info=False, watch_sec=False, watch_warn=False), childargs: ['--debug']
2016-04-15 17:24:37.909282 7f7169d4f700  0 librados: client.admin 
authentication error (95) Operation not supported
Error connecting to cluster: Error



Leon(蔡天雄)
leon...@gstarcloud.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread Ilya Dryomov
On Fri, Apr 15, 2016 at 10:59 AM, lin zhou  wrote:
> Yes,the output is the same.

(Dropped ceph-users.)

Can you attach compressed osd logs for OSDs 28 and 40?

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread lin zhou
Yes,the output is the same.

2016-04-15 16:55 GMT+08:00 Ilya Dryomov :
> On Fri, Apr 15, 2016 at 10:32 AM, lin zhou  wrote:
>> thanks for so fast reply.
>> output in one of the faulty host:
>>
>> root@musicgci5:~#  ceph -s
>>   cluster 409059ba-797e-46da-bc2f-83e3c7779094
>>health HEALTH_OK
>>monmap e1: 3 mons at
>> {musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.13:6789/0,musicgci4=192.168.43.14:6789/0},
>> election epoch 70, quorum 0,1,2 musicgci2,musicgci3,musicgci4
>>osdmap e32317: 69 osds: 69 up, 69 in
>> pgmap v39521976: 18748 pgs: 18748 active+clean; 48326 GB data, 141
>> TB used, 46977 GB / 187 TB avail; 319KB/s wr, 2op/s
>>mdsmap e1: 0/0/1 up
>>
>> root@musicgci5:~#  find /sys/kernel/debug/ceph -type f -print -exec cat {} \;
>> /sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/osdmap
>> epoch 32317
>> flags
>> pg_pool 0 pg_num 64 / 63, lpg_num 0 / 0
>> pg_pool 1 pg_num 64 / 63, lpg_num 0 / 0
>> pg_pool 2 pg_num 2048 / 2047, lpg_num 0 / 0
>> pg_pool 3 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 4 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 5 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 6 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 7 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 8 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 9 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 10 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 11 pg_num 1500 / 2047, lpg_num 0 / 0
>> pg_pool 13 pg_num 1024 / 1023, lpg_num 0 / 0
>> pg_pool 14 pg_num 1024 / 1023, lpg_num 0 / 0
>> pg_pool 15 pg_num 1024 / 1023, lpg_num 0 / 0
>> osd0 192.168.43.11:6808 100% (exists, up)
>> osd1 192.168.43.12:6808 100% (exists, up)
>> osd2 192.168.43.13:6828 100% (exists, up)
>> osd3 192.168.43.14:6804 100% (exists, up)
>> osd4 192.168.43.15:6805  0% (doesn't exist)
>> osd5 192.168.43.11:6816 100% (exists, up)
>> osd6 192.168.43.12:6816 100% (exists, up)
>> osd7 192.168.43.13:6808 100% (exists, up)
>> osd8 192.168.43.14:6816 100% (exists, up)
>> osd9 192.168.43.15:6800 100% (exists, up)
>> osd10 192.168.43.11:6832 100% (exists, up)
>> osd11 192.168.43.12:6800 100% (exists, up)
>> osd12 192.168.43.13:6800 100% (exists, up)
>> osd13 192.168.43.14:6836 100% (exists, up)
>> osd14 192.168.43.15:6809 100% (exists, up)
>> osd15 192.168.43.11:6828 100% (exists, up)
>> osd16 192.168.43.12:6820 100% (exists, up)
>> osd17 192.168.43.13:6832 100% (exists, up)
>> osd18 192.168.43.14:6800 100% (exists, up)
>> osd19 192.168.43.15:6810 100% (exists, up)
>> osd20 192.168.43.11:6804 100% (exists, up)
>> osd21 192.168.43.12:6804 100% (exists, up)
>> osd22 192.168.43.13:6816 100% (exists, up)
>> osd23 192.168.43.14:6812 100% (exists, up)
>> osd24 192.168.43.15:6852 100% (exists, up)
>> osd25 192.168.43.11:6836 100% (exists, up)
>> osd26 192.168.43.12:6812 100% (exists, up)
>> osd27 192.168.43.13:6824 100% (exists, up)
>> osd28 192.168.43.14:6832 100% (exists, up)
>> osd29 192.168.43.15:6836 100% (exists, up)
>> osd30 192.168.43.11:6812 100% (exists, up)
>> osd31 192.168.43.12:6824 100% (exists, up)
>> osd32 192.168.43.13:6812 83% (exists, up)
>> osd33 192.168.43.14:6808 100% (exists, up)
>> osd34 192.168.43.15:6801 89% (exists, up)
>> osd35 192.168.43.11:6820 100% (exists, up)
>> osd36 192.168.43.12:6832 100% (exists, up)
>> osd37 192.168.43.13:6836 79% (exists, up)
>> osd38 192.168.43.14:6828 100% (exists, up)
>> osd39 192.168.43.15:6818 86% (exists, up)
>> osd40 192.168.43.11:6800 83% (exists, up)
>> osd41 192.168.43.12:6836 100% (exists, up)
>> osd42 192.168.43.13:6820 100% (exists, up)
>> osd43 192.168.43.14:6824 100% (exists, up)
>> osd44 192.168.43.15:6823 100% (exists, up)
>> osd45 192.168.43.11:6824 100% (exists, up)
>> osd46 192.168.43.12:6828 100% (exists, up)
>> osd47 192.168.43.13:6804 100% (exists, up)
>> osd48 192.168.43.14:6820 100% (exists, up)
>> osd49 192.168.43.15:6805 100% (exists, up)
>> osd50 192.168.43.17:6832 86% (exists, up)
>> osd51 192.168.43.16:6820 100% (exists, up)
>> osd52 192.168.43.18:6820  0% (doesn't exist)
>> osd53 192.168.43.19:6808  0% (doesn't exist)
>> osd54 192.168.43.20:6816  0% (doesn't exist)
>> osd55 192.168.43.21:6836  0% (doesn't exist)
>> osd56 192.168.43.16:6828 100% (exists, up)
>> osd57 192.168.43.17:6808 100% (exists, up)
>> osd58 192.168.43.18:6832  0% (doesn't exist)
>> osd59 192.168.43.19:6800  0% (doesn't exist)
>> osd60 192.168.43.20:6824  0% (doesn't exist)
>> osd61 192.168.43.21:6816  0% (doesn't exist)
>> osd62 192.168.43.16:6812 100% (exists, up)
>> osd63 192.168.43.17:6816 100% (exists, up)
>> osd64 192.168.43.18:6800  0% (doesn't exist)
>> osd65 192.168.43.19:6828  0% (doesn't exist)
>> osd66 192.168.43.20:6808  0% (doesn't exist)
>> osd67 192.168.43.21:6832  0% (doesn't exist)
>> osd68 192.168.43.16:6808 100% (exists, up)
>> osd69 192.168.43.17:6847 100% (exists, up)
>> osd70 192.168.43.18:6812  0% (doesn't exist)
>> osd71 192.168.43.19:6824  0% (doesn't exist)
>> osd72 192.168.43.20:6804  0% (doesn't exist)
>> osd73 192.168.43.21:6800  0% (doesn't exist)

Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread Ilya Dryomov
On Fri, Apr 15, 2016 at 10:32 AM, lin zhou  wrote:
> thanks for so fast reply.
> output in one of the faulty host:
>
> root@musicgci5:~#  ceph -s
>   cluster 409059ba-797e-46da-bc2f-83e3c7779094
>health HEALTH_OK
>monmap e1: 3 mons at
> {musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.13:6789/0,musicgci4=192.168.43.14:6789/0},
> election epoch 70, quorum 0,1,2 musicgci2,musicgci3,musicgci4
>osdmap e32317: 69 osds: 69 up, 69 in
> pgmap v39521976: 18748 pgs: 18748 active+clean; 48326 GB data, 141
> TB used, 46977 GB / 187 TB avail; 319KB/s wr, 2op/s
>mdsmap e1: 0/0/1 up
>
> root@musicgci5:~#  find /sys/kernel/debug/ceph -type f -print -exec cat {} \;
> /sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/osdmap
> epoch 32317
> flags
> pg_pool 0 pg_num 64 / 63, lpg_num 0 / 0
> pg_pool 1 pg_num 64 / 63, lpg_num 0 / 0
> pg_pool 2 pg_num 2048 / 2047, lpg_num 0 / 0
> pg_pool 3 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 4 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 5 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 6 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 7 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 8 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 9 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 10 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 11 pg_num 1500 / 2047, lpg_num 0 / 0
> pg_pool 13 pg_num 1024 / 1023, lpg_num 0 / 0
> pg_pool 14 pg_num 1024 / 1023, lpg_num 0 / 0
> pg_pool 15 pg_num 1024 / 1023, lpg_num 0 / 0
> osd0 192.168.43.11:6808 100% (exists, up)
> osd1 192.168.43.12:6808 100% (exists, up)
> osd2 192.168.43.13:6828 100% (exists, up)
> osd3 192.168.43.14:6804 100% (exists, up)
> osd4 192.168.43.15:6805  0% (doesn't exist)
> osd5 192.168.43.11:6816 100% (exists, up)
> osd6 192.168.43.12:6816 100% (exists, up)
> osd7 192.168.43.13:6808 100% (exists, up)
> osd8 192.168.43.14:6816 100% (exists, up)
> osd9 192.168.43.15:6800 100% (exists, up)
> osd10 192.168.43.11:6832 100% (exists, up)
> osd11 192.168.43.12:6800 100% (exists, up)
> osd12 192.168.43.13:6800 100% (exists, up)
> osd13 192.168.43.14:6836 100% (exists, up)
> osd14 192.168.43.15:6809 100% (exists, up)
> osd15 192.168.43.11:6828 100% (exists, up)
> osd16 192.168.43.12:6820 100% (exists, up)
> osd17 192.168.43.13:6832 100% (exists, up)
> osd18 192.168.43.14:6800 100% (exists, up)
> osd19 192.168.43.15:6810 100% (exists, up)
> osd20 192.168.43.11:6804 100% (exists, up)
> osd21 192.168.43.12:6804 100% (exists, up)
> osd22 192.168.43.13:6816 100% (exists, up)
> osd23 192.168.43.14:6812 100% (exists, up)
> osd24 192.168.43.15:6852 100% (exists, up)
> osd25 192.168.43.11:6836 100% (exists, up)
> osd26 192.168.43.12:6812 100% (exists, up)
> osd27 192.168.43.13:6824 100% (exists, up)
> osd28 192.168.43.14:6832 100% (exists, up)
> osd29 192.168.43.15:6836 100% (exists, up)
> osd30 192.168.43.11:6812 100% (exists, up)
> osd31 192.168.43.12:6824 100% (exists, up)
> osd32 192.168.43.13:6812 83% (exists, up)
> osd33 192.168.43.14:6808 100% (exists, up)
> osd34 192.168.43.15:6801 89% (exists, up)
> osd35 192.168.43.11:6820 100% (exists, up)
> osd36 192.168.43.12:6832 100% (exists, up)
> osd37 192.168.43.13:6836 79% (exists, up)
> osd38 192.168.43.14:6828 100% (exists, up)
> osd39 192.168.43.15:6818 86% (exists, up)
> osd40 192.168.43.11:6800 83% (exists, up)
> osd41 192.168.43.12:6836 100% (exists, up)
> osd42 192.168.43.13:6820 100% (exists, up)
> osd43 192.168.43.14:6824 100% (exists, up)
> osd44 192.168.43.15:6823 100% (exists, up)
> osd45 192.168.43.11:6824 100% (exists, up)
> osd46 192.168.43.12:6828 100% (exists, up)
> osd47 192.168.43.13:6804 100% (exists, up)
> osd48 192.168.43.14:6820 100% (exists, up)
> osd49 192.168.43.15:6805 100% (exists, up)
> osd50 192.168.43.17:6832 86% (exists, up)
> osd51 192.168.43.16:6820 100% (exists, up)
> osd52 192.168.43.18:6820  0% (doesn't exist)
> osd53 192.168.43.19:6808  0% (doesn't exist)
> osd54 192.168.43.20:6816  0% (doesn't exist)
> osd55 192.168.43.21:6836  0% (doesn't exist)
> osd56 192.168.43.16:6828 100% (exists, up)
> osd57 192.168.43.17:6808 100% (exists, up)
> osd58 192.168.43.18:6832  0% (doesn't exist)
> osd59 192.168.43.19:6800  0% (doesn't exist)
> osd60 192.168.43.20:6824  0% (doesn't exist)
> osd61 192.168.43.21:6816  0% (doesn't exist)
> osd62 192.168.43.16:6812 100% (exists, up)
> osd63 192.168.43.17:6816 100% (exists, up)
> osd64 192.168.43.18:6800  0% (doesn't exist)
> osd65 192.168.43.19:6828  0% (doesn't exist)
> osd66 192.168.43.20:6808  0% (doesn't exist)
> osd67 192.168.43.21:6832  0% (doesn't exist)
> osd68 192.168.43.16:6808 100% (exists, up)
> osd69 192.168.43.17:6847 100% (exists, up)
> osd70 192.168.43.18:6812  0% (doesn't exist)
> osd71 192.168.43.19:6824  0% (doesn't exist)
> osd72 192.168.43.20:6804  0% (doesn't exist)
> osd73 192.168.43.21:6800  0% (doesn't exist)
> osd74 192.168.43.16:6806 100% (exists, up)
> osd75 192.168.43.17:6843 100% (exists, up)
> osd76 192.168.43.18:6801  0% (doesn't exist)
> osd77 192.168.43.19:6812  0% (doesn't ex

Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread lin zhou
Thanks for the fast reply.
Here is the output from one of the faulty hosts:

root@musicgci5:~#  ceph -s
  cluster 409059ba-797e-46da-bc2f-83e3c7779094
   health HEALTH_OK
   monmap e1: 3 mons at
{musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.13:6789/0,musicgci4=192.168.43.14:6789/0},
election epoch 70, quorum 0,1,2 musicgci2,musicgci3,musicgci4
   osdmap e32317: 69 osds: 69 up, 69 in
pgmap v39521976: 18748 pgs: 18748 active+clean; 48326 GB data, 141
TB used, 46977 GB / 187 TB avail; 319KB/s wr, 2op/s
   mdsmap e1: 0/0/1 up

root@musicgci5:~#  find /sys/kernel/debug/ceph -type f -print -exec cat {} \;
/sys/kernel/debug/ceph/409059ba-797e-46da-bc2f-83e3c7779094.client400179/osdmap
epoch 32317
flags
pg_pool 0 pg_num 64 / 63, lpg_num 0 / 0
pg_pool 1 pg_num 64 / 63, lpg_num 0 / 0
pg_pool 2 pg_num 2048 / 2047, lpg_num 0 / 0
pg_pool 3 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 4 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 5 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 6 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 7 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 8 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 9 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 10 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 11 pg_num 1500 / 2047, lpg_num 0 / 0
pg_pool 13 pg_num 1024 / 1023, lpg_num 0 / 0
pg_pool 14 pg_num 1024 / 1023, lpg_num 0 / 0
pg_pool 15 pg_num 1024 / 1023, lpg_num 0 / 0
osd0 192.168.43.11:6808 100% (exists, up)
osd1 192.168.43.12:6808 100% (exists, up)
osd2 192.168.43.13:6828 100% (exists, up)
osd3 192.168.43.14:6804 100% (exists, up)
osd4 192.168.43.15:6805  0% (doesn't exist)
osd5 192.168.43.11:6816 100% (exists, up)
osd6 192.168.43.12:6816 100% (exists, up)
osd7 192.168.43.13:6808 100% (exists, up)
osd8 192.168.43.14:6816 100% (exists, up)
osd9 192.168.43.15:6800 100% (exists, up)
osd10 192.168.43.11:6832 100% (exists, up)
osd11 192.168.43.12:6800 100% (exists, up)
osd12 192.168.43.13:6800 100% (exists, up)
osd13 192.168.43.14:6836 100% (exists, up)
osd14 192.168.43.15:6809 100% (exists, up)
osd15 192.168.43.11:6828 100% (exists, up)
osd16 192.168.43.12:6820 100% (exists, up)
osd17 192.168.43.13:6832 100% (exists, up)
osd18 192.168.43.14:6800 100% (exists, up)
osd19 192.168.43.15:6810 100% (exists, up)
osd20 192.168.43.11:6804 100% (exists, up)
osd21 192.168.43.12:6804 100% (exists, up)
osd22 192.168.43.13:6816 100% (exists, up)
osd23 192.168.43.14:6812 100% (exists, up)
osd24 192.168.43.15:6852 100% (exists, up)
osd25 192.168.43.11:6836 100% (exists, up)
osd26 192.168.43.12:6812 100% (exists, up)
osd27 192.168.43.13:6824 100% (exists, up)
osd28 192.168.43.14:6832 100% (exists, up)
osd29 192.168.43.15:6836 100% (exists, up)
osd30 192.168.43.11:6812 100% (exists, up)
osd31 192.168.43.12:6824 100% (exists, up)
osd32 192.168.43.13:6812 83% (exists, up)
osd33 192.168.43.14:6808 100% (exists, up)
osd34 192.168.43.15:6801 89% (exists, up)
osd35 192.168.43.11:6820 100% (exists, up)
osd36 192.168.43.12:6832 100% (exists, up)
osd37 192.168.43.13:6836 79% (exists, up)
osd38 192.168.43.14:6828 100% (exists, up)
osd39 192.168.43.15:6818 86% (exists, up)
osd40 192.168.43.11:6800 83% (exists, up)
osd41 192.168.43.12:6836 100% (exists, up)
osd42 192.168.43.13:6820 100% (exists, up)
osd43 192.168.43.14:6824 100% (exists, up)
osd44 192.168.43.15:6823 100% (exists, up)
osd45 192.168.43.11:6824 100% (exists, up)
osd46 192.168.43.12:6828 100% (exists, up)
osd47 192.168.43.13:6804 100% (exists, up)
osd48 192.168.43.14:6820 100% (exists, up)
osd49 192.168.43.15:6805 100% (exists, up)
osd50 192.168.43.17:6832 86% (exists, up)
osd51 192.168.43.16:6820 100% (exists, up)
osd52 192.168.43.18:6820  0% (doesn't exist)
osd53 192.168.43.19:6808  0% (doesn't exist)
osd54 192.168.43.20:6816  0% (doesn't exist)
osd55 192.168.43.21:6836  0% (doesn't exist)
osd56 192.168.43.16:6828 100% (exists, up)
osd57 192.168.43.17:6808 100% (exists, up)
osd58 192.168.43.18:6832  0% (doesn't exist)
osd59 192.168.43.19:6800  0% (doesn't exist)
osd60 192.168.43.20:6824  0% (doesn't exist)
osd61 192.168.43.21:6816  0% (doesn't exist)
osd62 192.168.43.16:6812 100% (exists, up)
osd63 192.168.43.17:6816 100% (exists, up)
osd64 192.168.43.18:6800  0% (doesn't exist)
osd65 192.168.43.19:6828  0% (doesn't exist)
osd66 192.168.43.20:6808  0% (doesn't exist)
osd67 192.168.43.21:6832  0% (doesn't exist)
osd68 192.168.43.16:6808 100% (exists, up)
osd69 192.168.43.17:6847 100% (exists, up)
osd70 192.168.43.18:6812  0% (doesn't exist)
osd71 192.168.43.19:6824  0% (doesn't exist)
osd72 192.168.43.20:6804  0% (doesn't exist)
osd73 192.168.43.21:6800  0% (doesn't exist)
osd74 192.168.43.16:6806 100% (exists, up)
osd75 192.168.43.17:6843 100% (exists, up)
osd76 192.168.43.18:6801  0% (doesn't exist)
osd77 192.168.43.19:6812  0% (doesn't exist)
osd78 192.168.43.20:6820  0% (doesn't exist)
osd79 192.168.43.21:6808  0% (doesn't exist)
osd80 192.168.43.16:6816 100% (exists, up)
osd81 192.168.43.17:6851 100% (exists, up)
osd82 192.168.43.18:6836  0% (doesn't exist)
osd83 192.168.43.19:6832  0% (doesn't exist)

Re: [ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread Ilya Dryomov
On Fri, Apr 15, 2016 at 10:18 AM, lin zhou  wrote:
> Hi,cephers:
> In one of my ceph cluster,we map rbd then mount it. in node1 and then
> using samba to share it to do backup for several vm,and some web root
> directory.
>
> Yesterday,one of the disk  in my cluster is full at 95%,then the
> cluster stop receive write request.
> I have solve the full problem.But these mount directory from mapped
> rbd in node1 do not work correctly.
>
> case 1:I can access some directory in node1 ,but some files can not open
> case 2:If I enter a directory,the cd command hang,the state in ps is
> D,and can not be killed.If I map this rbd in another host and then
> mount it,I can see all files.
>
> so does some option for rbd map and the following mount command to
> deal with this situation that ceph has a little volatility to prevent
> hang.

Can you provide the output of

# ceph -s
# find /sys/kernel/debug/ceph -type f -print -exec cat {} \;

from the faulty host?

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] directory hang which mount from a mapped rbd

2016-04-15 Thread lin zhou
Hi cephers,
In one of my Ceph clusters we map an rbd and mount it on node1, and then
use Samba to share it for backups of several VMs and for some web root
directories.

Yesterday one of the disks in my cluster became 95% full, and the
cluster stopped accepting write requests.
I have solved the full-disk problem, but the directories mounted from the
mapped rbd on node1 still do not work correctly.

Case 1: I can access some directories on node1, but some files cannot be opened.
Case 2: If I enter a directory, the cd command hangs; the process state in ps is
D and it cannot be killed. If I map this rbd on another host and
mount it there, I can see all the files.

So is there an option for rbd map, or for the subsequent mount command, to
deal with this situation, so that a little instability in Ceph does not lead
to a hang?
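
(As a hedged aside: when a kernel rbd client hangs like this, the requests it
is still waiting for can usually be inspected through debugfs, assuming
debugfs is mounted and you have root access:)

# requests the kernel client has sent but not yet completed;
# entries that never drain name the OSDs the client is stuck waiting on
cat /sys/kernel/debug/ceph/*/osdc

# the osdmap epoch and OSD states the client currently knows about
cat /sys/kernel/debug/ceph/*/osdmap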

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Antw: Re: remote logging

2016-04-15 Thread Steffen Weißgerber



>>> Wido den Hollander  wrote on Thursday, 14 April 2016 at 16:02:

>> On 14 April 2016 at 14:46, Steffen Weißgerber wrote:
>> 
>> 
>> Hello,
>> 
>> I tried to configure ceph logging to a remote syslog host based on
>> Sebastian Han's Blog
>> (http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/): 
>> 
>> ceph.conf
>> 
>> [global]
>> ...
>> log_file = none
>> log_to_syslog = true
>> err_to_syslog = true
>> 
>> [mon]
>> mon_cluster_log_to_syslog = true
>> mon_cluster_log_file = none
>> 
>> The remote logging works fine but nevertheless i find local logging
in the
>> file /none.
>> 
> 
> It has been a long time since I tried syslog. Could you try:
> 
> log_file = ""
> 
> See how that works out.
> 

Yes, that's it. At first I thought a 'ceph-deploy --overwrite-conf config
push ' would also notify the daemons on that host to reread the
configuration, but I had to restart the daemons so that the /none file is
no longer used.
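
(For reference, a minimal sketch of the combination that worked here,
assuming the same Hammer-on-Ubuntu-14.04 setup; that mon_cluster_log_file
also wants the empty string rather than 'none' is an assumption, not
something confirmed in this thread:)

[global]
log_file = ""
log_to_syslog = true
err_to_syslog = true

[mon]
mon_cluster_log_to_syslog = true
mon_cluster_log_file = ""

After pushing the config, restart the mon and osd daemons on each host so
they pick it up (e.g. 'restart ceph-mon-all' and 'restart ceph-osd-all' with
the Upstart jobs from the Ubuntu packages, or 'service ceph restart' with
sysvinit).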

Thank you and regards.

> Wido
> 
>> That means:
>> 
>> monitor and osd logging on mon hosts
>> 
>> 2016-04-14 14:40:46.379376 mon.0 2.1.1.92:6789/0 2643 : cluster
[INF] pgmap
>> v39837493: 4624 pgs: 4624 active+clean; 22634 GB data, 68107 GB
used, 122 TB 
> 
>> / 189 TB avail; 36972 kB/s rd, 6561 kB/s wr, 1733 op/s
>> 2016-04-14 14:40:48.824882 7f21de1e9700  0 -- 2.1.1.138:6812/8489
>>
>> 2.1.106.116:0/1407816754 pipe(0x1b69e000 sd=182 :6812 s=0 pgs=0 cs=0
l=0
>> c=0x1bb5a2c
>> 0).accept peer addr is really 2.1.106.116:0/1407816754 (socket is
>> 2.1.106.116:60963/0)
>> 2016-04-14 14:40:47.460665 mon.0 2.1.1.92:6789/0 2644 : cluster
[INF] pgmap
>> v39837494: 4624 pgs: 4624 active+clean; 22634 GB data, 68107 GB
used, 122 TB 
> 
>> / 189 TB avail; 34412 kB/s rd, 8085 kB/s wr, 1762 op/s
>> 
>> and osd logging on non mon hosts.
>> 
>> I configured this on Giant and now migrated to Hammer (based on
Ubuntu 
> 14.04.4
>> LTS) without change.
>> 
>> What I'm doing wrong?
>> 
>> Thanks in advance.
>> 
>> 
>> Regards
>> 
>> Steffen
>> 
>> 
>> 
>> -- 
>> Klinik-Service Neubrandenburg GmbH
>> Allendestr. 30, 17036 Neubrandenburg
>> Amtsgericht Neubrandenburg, HRB 2457
>> Geschaeftsfuehrerin: Gudrun Kappich
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Geschaeftsfuehrerin: Gudrun Kappich
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com