gentooserver ~ # ceph -s --debug-ms=1
2018-01-18 21:09:55.981886 7f9581f33700  1  Processor -- start
2018-01-18 21:09:55.981919 7f9581f33700  1 -- - start start
2018-01-18 21:09:55.982006 7f9581f33700  1 -- - -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- auth(proto 0 30 bytes
epoch 0) v1 -- 0x7f957c129110 con 0
2018-01-18 21:09:55.982701 7f957b7fe700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 learned_addr learned my
addr [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692
2018-01-18 21:09:55.983088 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 1 ==== mon_map magic: 0 v1
==== 208+0+0 (3367529148 0 0) 0x7f956c001130 con 0x7f957c124780
2018-01-18 21:09:55.983137 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 2 ==== auth_reply(proto 1 0
(0) Success) v1 ==== 24+0+0 (3149240224 0 0) 0x7f956c001490 con
0x7f957c124780
2018-01-18 21:09:55.983169 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 --
mon_subscribe({monmap=0+}) v2 -- 0x7f957c12a700 con 0
2018-01-18 21:09:55.983204 7f9581f33700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 --
mon_subscribe({mgrmap=0+}) v2 -- 0x7f957c129210 con 0
2018-01-18 21:09:55.983260 7f9581f33700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- mon_subscribe({osdmap=0})
v2 -- 0x7f957c129210 con 0
2018-01-18 21:09:55.983409 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 3 ==== mon_map magic: 0 v1
==== 208+0+0 (3367529148 0 0) 0x7f956c001050 con 0x7f957c124780
2018-01-18 21:09:55.983459 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 4 ==== mgrmap(e 1) v1 ====
103+0+0 (706778617 0 0) 0x7f956c001470 con 0x7f957c124780
2018-01-18 21:09:55.983562 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 5 ==== osd_map(1..1 src has
1..1) v3 ==== 638+0+0 (1741579187 0 0) 0x7f956c0011b0 con 0x7f957c124780
2018-01-18 21:09:55.985228 7f9581f33700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- mon_command({"prefix":
"get_command_descriptions"} v 0) v1 -- 0x7f957c12a0c0 con 0
2018-01-18 21:10:35.982605 7f9578ff9700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 >>
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 conn(0x7f957c124780 :-1
s=STATE_OPEN pgs=3 cs=1 l=1).mark_down
2018-01-18 21:10:35.982706 7f9578ff9700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- auth(proto 0 30 bytes
epoch 1) v1 -- 0x7f9560005af0 con 0
2018-01-18 21:10:35.983813 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 1 ==== auth_reply(proto 1 0
(0) Success) v1 ==== 24+0+0 (3149240224 0 0) 0x7f956c000d50 con
0x7f95600012a0
2018-01-18 21:10:35.983856 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 --
mon_subscribe({mgrmap=0+,monmap=2+}) v2 -- 0x7f9560007190 con 0
2018-01-18 21:10:35.983875 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- mon_command({"prefix":
"get_command_descriptions"} v 0) v1 -- 0x7f955c0033c0 con 0
2018-01-18 21:10:35.984094 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 2 ==== mgrmap(e 1) v1 ====
103+0+0 (706778617 0 0) 0x7f956c000fd0 con 0x7f95600012a0
2018-01-18 21:11:11.983389 7f9578ff9700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 >>
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 conn(0x7f95600012a0 :-1
s=STATE_OPEN pgs=4 cs=1 l=1).mark_down
2018-01-18 21:11:11.983464 7f9578ff9700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- auth(proto 0 30 bytes
epoch 1) v1 -- 0x7f95600062b0 con 0
2018-01-18 21:11:11.984386 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 1 ==== auth_reply(proto 1 0
(0) Success) v1 ==== 24+0+0 (3149240224 0 0) 0x7f956c000d50 con
0x7f9560007290
2018-01-18 21:11:11.984427 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 --
mon_subscribe({mgrmap=0+,monmap=2+}) v2 -- 0x7f95600068f0 con 0
2018-01-18 21:11:11.984458 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 -->
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 -- mon_command({"prefix":
"get_command_descriptions"} v 0) v1 -- 0x7f955c0041d0 con 0
2018-01-18 21:11:11.984678 7f9579ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 <== mon.0
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 2 ==== mgrmap(e 1) v1 ====
103+0+0 (706778617 0 0) 0x7f956c000fd0 con 0x7f9560007290
^CError EINTR: problem getting command descriptions from mon.
2018-01-18 21:11:41.999310 7f9571ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 >>
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 conn(0x7f9560007290 :-1
s=STATE_OPEN pgs=5 cs=1 l=1).mark_down
2018-01-18 21:11:41.999382 7f9571ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 shutdown_connections
2018-01-18 21:11:41.999464 7f9571ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 shutdown_connections
2018-01-18 21:11:41.999488 7f9571ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 wait complete.
2018-01-18 21:11:41.999494 7f9571ffb700  1 --
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 >>
[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:0/285566692 conn(0x7f957c118c90 :-1
s=STATE_NONE pgs=0 cs=0 l=0).mark_down

To my knowledge there are no firewall rules that would cause problems, and
I'm running ceph -s directly on the monitor node itself.
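
To rule out basic reachability outside of the ceph tooling, here is a quick
probe of the mon's TCP port (a hypothetical helper, not part of ceph; the
address is simply the one from the debug output above):

```python
# Sanity check that the mon's TCP port accepts connections at all,
# independent of the ceph client. Illustrative helper only.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        # create_connection() resolves both IPv4 and IPv6 addresses.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (the mon address from the debug output above):
# can_connect("2001:1c:d64b:91c5:3a84:dfce:8546:9982", 6789)
```

Since the debug log shows the auth exchange succeeding, I'd expect this to
return True; the hang seems to come later, at get_command_descriptions.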

On Tue, Jan 16, 2018 at 8:05 PM, Jean-Charles Lopez <jelo...@redhat.com>
wrote:

> Hi Nathan,
>
> I would have placed the mon_host parameter, assigned the IP address of
> your monitor host, in the global section so that the client (the ceph -s
> command) can find the monitor.
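
If I follow, the suggested layout would be something like this (a sketch
reusing the values from my config further down in the thread; untested):

```ini
[global]
fsid = a736559a-92d1-483e-9289-d2c7feed510f
ms bind ipv6 = true
mon initial members = mon0
mon host = [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789
```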
>
> Have you also checked your firewall setup on your MON box?
>
> To help diagnose, you can also run ceph -s --debug-ms=1 so you can follow
> the network exchange between your client machine and the MON.
>
> Regards
> JC
>
>
> On Jan 16, 2018, at 13:24, Nathan Dehnel <ncdeh...@gmail.com> wrote:
>
> I'm doing a manual setup following
> http://docs.ceph.com/docs/master/install/manual-deployment/
>
> The ceph command hangs until I kill it. I have 1 monitor service started.
> ==================================================
> gentooserver ~ # ceph -s
> ^CError EINTR: problem getting command descriptions from mon.
> =================================================
> gentooserver ~ # emerge -pqv ceph
> [ebuild   R   ] sys-cluster/ceph-12.2.1  USE="mgr radosgw ssl systemd
> tcmalloc -babeltrace -cephfs -fuse -jemalloc -ldap -lttng -nss -static-libs
> {-test} -xfs -zfs" CPU_FLAGS_X86="sse sse2 sse3 sse4_1 sse4_2 ssse3"
> PYTHON_TARGETS="python2_7 python3_5 -python3_4 -python3_6"
> ==================================================
> gentooserver ~ # cat /etc/ceph/ceph.conf
> [global]
> cluster = ceph
> fsid = a736559a-92d1-483e-9289-d2c7feed510f
> ms bind ipv6 = true
> #public network = 2001:1c:d64b:91c5:3a84:dfce:8546:9982/0
> auth cluster required = none
> auth service required = none
> auth client required = none
>
> [mon]
> mon initial members = mon0
> mon host = gentooserver
> mon addr = [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789
>
>
> [mon.mon0]
> host = gentooserver
> mon addr = [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789
>
> [osd]
> osd journal size = 10000
> osd crush chooseleaf type = 0
> ==================================================
> gentooserver ~ # monmaptool --print /tmp/monmap
> monmaptool: monmap file /tmp/monmap
> epoch 0
> fsid a736559a-92d1-483e-9289-d2c7feed510f
> last_changed 2018-01-14 16:50:59.838277
> created 2018-01-14 16:50:59.838277
> 0: [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 mon.mon0
> ==================================================
> gentooserver ~ # systemctl status ceph-mon@mon0 | cat
> ● ceph-mon@mon0.service - Ceph cluster monitor daemon
>    Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled;
> vendor preset: disabled)
>    Active: active (running) since Tue 2018-01-16 14:50:18 CST; 17min ago
>  Main PID: 75938 (ceph-mon)
>    CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@mon0.service
>            └─75938 /usr/bin/ceph-mon -f --cluster ceph --id mon0 --setuser
> ceph --setgroup ceph
>
> Jan 16 14:50:18 gentooserver systemd[1]: Started Ceph cluster monitor
> daemon.
> Jan 16 14:50:18 gentooserver ceph-mon[75938]: 2018-01-16 14:50:18.977494
> 7ff07d4cef80 -1 distro_detect - can't detect distro_version
> ===================================================
> gentooserver ~ # cat /var/log/ceph/ceph.log
> 2018-01-16 14:50:18.977541 mon.mon0 mon.0 
> [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0
> 1 : cluster [INF] mon.mon0@0 won leader election with quorum 0
> 2018-01-16 14:50:18.977656 mon.mon0 mon.0 
> [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0
> 2 : cluster [INF] monmap e1: 1 mons at {mon0=[2001:1c:d64b:91c5:3a84:
> dfce:8546:9982]:6789/0}
> -------------------------------------------------------------------------
> gentooserver ~ # cat /var/log/ceph/ceph-mon.mon0.log
> 2018-01-16 14:50:18.760533 7ff07d4cef80  0 set uid:gid to 110:239
> (ceph:ceph)
> 2018-01-16 14:50:18.760549 7ff07d4cef80  0 ceph version 12.2.1 (
> 3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable), process
> (unknown), pid 75938
> 2018-01-16 14:50:18.760591 7ff07d4cef80  0 pidfile_write: ignore empty
> --pid-file
> 2018-01-16 14:50:18.765642 7ff07d4cef80  0 load: jerasure load: lrc load:
> isa
> 2018-01-16 14:50:18.765702 7ff07d4cef80  0  set rocksdb option compression
> = kNoCompression
> 2018-01-16 14:50:18.765709 7ff07d4cef80  0  set rocksdb option
> write_buffer_size = 33554432
> 2018-01-16 14:50:18.765722 7ff07d4cef80  0  set rocksdb option compression
> = kNoCompression
> 2018-01-16 14:50:18.765726 7ff07d4cef80  0  set rocksdb option
> write_buffer_size = 33554432
> 2018-01-16 14:50:18.765798 7ff07d4cef80  4 rocksdb: RocksDB version: 5.4.0
>
> 2018-01-16 14:50:18.765804 7ff07d4cef80  4 rocksdb: Git sha
> rocksdb_build_git_sha:@0@
> 2018-01-16 14:50:18.765806 7ff07d4cef80  4 rocksdb: Compile date Dec 14
> 2017
> 2018-01-16 14:50:18.765808 7ff07d4cef80  4 rocksdb: DB SUMMARY
>
> 2018-01-16 14:50:18.765837 7ff07d4cef80  4 rocksdb: CURRENT file:  CURRENT
>
> 2018-01-16 14:50:18.765840 7ff07d4cef80  4 rocksdb: IDENTITY file:
> IDENTITY
>
> 2018-01-16 14:50:18.765843 7ff07d4cef80  4 rocksdb: MANIFEST file:
> MANIFEST-000011 size: 210 Bytes
>
> 2018-01-16 14:50:18.765845 7ff07d4cef80  4 rocksdb: SST files in
> /var/lib/ceph/mon/ceph-mon0/store.db dir, Total Num: 3, files: 000004.sst
> 000007.sst 000010.sst
>
> 2018-01-16 14:50:18.765846 7ff07d4cef80  4 rocksdb: Write Ahead Log file
> in /var/lib/ceph/mon/ceph-mon0/store.db: 000012.log size: 9161 ;
>
> 2018-01-16 14:50:18.765847 7ff07d4cef80  4 rocksdb:
>  Options.error_if_exists: 0
> 2018-01-16 14:50:18.765848 7ff07d4cef80  4 rocksdb:
>  Options.create_if_missing: 0
> 2018-01-16 14:50:18.765848 7ff07d4cef80  4 rocksdb:
>  Options.paranoid_checks: 1
> 2018-01-16 14:50:18.765849 7ff07d4cef80  4 rocksdb:
>              Options.env: 0x55bc0ff83a60
> 2018-01-16 14:50:18.765849 7ff07d4cef80  4 rocksdb:
>         Options.info_log: 0x55bc10a53ae0
> 2018-01-16 14:50:18.765850 7ff07d4cef80  4 rocksdb:
>   Options.max_open_files: -1
> 2018-01-16 14:50:18.765851 7ff07d4cef80  4 rocksdb:
> Options.max_file_opening_threads: 16
> 2018-01-16 14:50:18.765851 7ff07d4cef80  4 rocksdb:
>        Options.use_fsync: 0
> 2018-01-16 14:50:18.765852 7ff07d4cef80  4 rocksdb:
>  Options.max_log_file_size: 0
> 2018-01-16 14:50:18.765852 7ff07d4cef80  4 rocksdb:
> Options.max_manifest_file_size: 18446744073709551615
> 2018-01-16 14:50:18.765853 7ff07d4cef80  4 rocksdb:
>  Options.log_file_time_to_roll: 0
> 2018-01-16 14:50:18.765853 7ff07d4cef80  4 rocksdb:
>  Options.keep_log_file_num: 1000
> 2018-01-16 14:50:18.765854 7ff07d4cef80  4 rocksdb:
> Options.recycle_log_file_num: 0
> 2018-01-16 14:50:18.765854 7ff07d4cef80  4 rocksdb:
>  Options.allow_fallocate: 1
> 2018-01-16 14:50:18.765855 7ff07d4cef80  4 rocksdb:
> Options.allow_mmap_reads: 0
> 2018-01-16 14:50:18.765855 7ff07d4cef80  4 rocksdb:
>  Options.allow_mmap_writes: 0
> 2018-01-16 14:50:18.765856 7ff07d4cef80  4 rocksdb:
> Options.use_direct_reads: 0
> 2018-01-16 14:50:18.765856 7ff07d4cef80  4 rocksdb:
> Options.use_direct_io_for_flush_and_compaction: 0
> 2018-01-16 14:50:18.765860 7ff07d4cef80  4 rocksdb:
> Options.create_missing_column_families: 0
> 2018-01-16 14:50:18.765861 7ff07d4cef80  4 rocksdb:
>       Options.db_log_dir:
> 2018-01-16 14:50:18.765861 7ff07d4cef80  4 rocksdb:
>          Options.wal_dir: /var/lib/ceph/mon/ceph-mon0/store.db
> 2018-01-16 14:50:18.765862 7ff07d4cef80  4 rocksdb:
> Options.table_cache_numshardbits: 6
> 2018-01-16 14:50:18.765862 7ff07d4cef80  4 rocksdb:
> Options.max_subcompactions: 1
> 2018-01-16 14:50:18.765863 7ff07d4cef80  4 rocksdb:
> Options.max_background_flushes: 1
> 2018-01-16 14:50:18.765863 7ff07d4cef80  4 rocksdb:
>  Options.WAL_ttl_seconds: 0
> 2018-01-16 14:50:18.765864 7ff07d4cef80  4 rocksdb:
>  Options.WAL_size_limit_MB: 0
> 2018-01-16 14:50:18.765864 7ff07d4cef80  4 rocksdb:
>  Options.manifest_preallocation_size: 4194304
> 2018-01-16 14:50:18.765865 7ff07d4cef80  4 rocksdb:
>  Options.is_fd_close_on_exec: 1
> 2018-01-16 14:50:18.765866 7ff07d4cef80  4 rocksdb:
>  Options.advise_random_on_open: 1
> 2018-01-16 14:50:18.765866 7ff07d4cef80  4 rocksdb:
> Options.db_write_buffer_size: 0
> 2018-01-16 14:50:18.765866 7ff07d4cef80  4 rocksdb:
>  Options.access_hint_on_compaction_start: 1
> 2018-01-16 14:50:18.765867 7ff07d4cef80  4 rocksdb:
> Options.new_table_reader_for_compaction_inputs: 0
> 2018-01-16 14:50:18.765868 7ff07d4cef80  4 rocksdb:
>  Options.compaction_readahead_size: 0
> 2018-01-16 14:50:18.765868 7ff07d4cef80  4 rocksdb:
>  Options.random_access_max_buffer_size: 1048576
> 2018-01-16 14:50:18.765868 7ff07d4cef80  4 rocksdb:
>  Options.writable_file_max_buffer_size: 1048576
> 2018-01-16 14:50:18.765869 7ff07d4cef80  4 rocksdb:
> Options.use_adaptive_mutex: 0
> 2018-01-16 14:50:18.765870 7ff07d4cef80  4 rocksdb:
>     Options.rate_limiter: (nil)
> 2018-01-16 14:50:18.765871 7ff07d4cef80  4 rocksdb:
>  Options.sst_file_manager.rate_bytes_per_sec: 0
> 2018-01-16 14:50:18.765871 7ff07d4cef80  4 rocksdb:
>   Options.bytes_per_sync: 0
> 2018-01-16 14:50:18.765871 7ff07d4cef80  4 rocksdb:
> Options.wal_bytes_per_sync: 0
> 2018-01-16 14:50:18.765872 7ff07d4cef80  4 rocksdb:
>  Options.wal_recovery_mode: 2
> 2018-01-16 14:50:18.765872 7ff07d4cef80  4 rocksdb:
> Options.enable_thread_tracking: 0
> 2018-01-16 14:50:18.765873 7ff07d4cef80  4 rocksdb:
>  Options.allow_concurrent_memtable_write: 1
> 2018-01-16 14:50:18.765873 7ff07d4cef80  4 rocksdb:
> Options.enable_write_thread_adaptive_yield: 1
> 2018-01-16 14:50:18.765874 7ff07d4cef80  4 rocksdb:
>  Options.write_thread_max_yield_usec: 100
> 2018-01-16 14:50:18.765874 7ff07d4cef80  4 rocksdb:
> Options.write_thread_slow_yield_usec: 3
> 2018-01-16 14:50:18.765875 7ff07d4cef80  4 rocksdb:
>        Options.row_cache: None
> 2018-01-16 14:50:18.765875 7ff07d4cef80  4 rocksdb:
>       Options.wal_filter: None
> 2018-01-16 14:50:18.765876 7ff07d4cef80  4 rocksdb:
>  Options.avoid_flush_during_recovery: 0
> 2018-01-16 14:50:18.765876 7ff07d4cef80  4 rocksdb:
>  Options.base_background_compactions: 1
> 2018-01-16 14:50:18.765877 7ff07d4cef80  4 rocksdb:
>  Options.max_background_compactions: 1
> 2018-01-16 14:50:18.765877 7ff07d4cef80  4 rocksdb:
>  Options.avoid_flush_during_shutdown: 0
> 2018-01-16 14:50:18.765878 7ff07d4cef80  4 rocksdb:
>  Options.delayed_write_rate : 16777216
> 2018-01-16 14:50:18.765878 7ff07d4cef80  4 rocksdb:
>  Options.max_total_wal_size: 0
> 2018-01-16 14:50:18.765879 7ff07d4cef80  4 rocksdb:
>  Options.delete_obsolete_files_period_micros: 21600000000
> 2018-01-16 14:50:18.765881 7ff07d4cef80  4 rocksdb:
>  Options.stats_dump_period_sec: 600
> 2018-01-16 14:50:18.765882 7ff07d4cef80  4 rocksdb: Compression algorithms
> supported:
> 2018-01-16 14:50:18.765883 7ff07d4cef80  4 rocksdb: Snappy supported: 0
> 2018-01-16 14:50:18.765883 7ff07d4cef80  4 rocksdb: Zlib supported: 0
> 2018-01-16 14:50:18.765883 7ff07d4cef80  4 rocksdb: Bzip supported: 0
> 2018-01-16 14:50:18.765884 7ff07d4cef80  4 rocksdb: LZ4 supported: 0
> 2018-01-16 14:50:18.765884 7ff07d4cef80  4 rocksdb: ZSTD supported: 0
> 2018-01-16 14:50:18.765885 7ff07d4cef80  4 rocksdb: Fast CRC32 supported: 1
> 2018-01-16 14:50:18.765950 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> version_set.cc:2609] Recovering from manifest file: MANIFEST-000011
>
> 2018-01-16 14:50:18.765981 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> column_family.cc:407] --------------- Options for column family [default]:
>
> 2018-01-16 14:50:18.765983 7ff07d4cef80  4 rocksdb:
>  Options.comparator: leveldb.BytewiseComparator
> 2018-01-16 14:50:18.765984 7ff07d4cef80  4 rocksdb:
>  Options.merge_operator:
> 2018-01-16 14:50:18.765984 7ff07d4cef80  4 rocksdb:
> Options.compaction_filter: None
> 2018-01-16 14:50:18.765985 7ff07d4cef80  4 rocksdb:
> Options.compaction_filter_factory: None
> 2018-01-16 14:50:18.765985 7ff07d4cef80  4 rocksdb:
>  Options.memtable_factory: SkipListFactory
> 2018-01-16 14:50:18.765986 7ff07d4cef80  4 rocksdb:
> Options.table_factory: BlockBasedTable
> 2018-01-16 14:50:18.765998 7ff07d4cef80  4 rocksdb:
> table_factory options:   flush_block_policy_factory:
> FlushBlockBySizePolicyFactory (0x55bc1079c088)
>   cache_index_and_filter_blocks: 1
>   cache_index_and_filter_blocks_with_high_priority: 1
>   pin_l0_filter_and_index_blocks_in_cache: 1
>   index_type: 0
>   hash_index_allow_collision: 1
>   checksum: 1
>   no_block_cache: 0
>   block_cache: 0x55bc10a77510
>   block_cache_name: LRUCache
>   block_cache_options:
>     capacity : 134217728
>     num_shard_bits : 4
>     strict_capacity_limit : 0
>     high_pri_pool_ratio: 0.000
>   block_cache_compressed: (nil)
>   persistent_cache: (nil)
>   block_size: 4096
>   block_size_deviation: 10
>   block_restart_interval: 16
>   index_block_restart_interval: 1
>   filter_policy: rocksdb.BuiltinBloomFilter
>   whole_key_filtering: 1
>   format_version: 2
>
> 2018-01-16 14:50:18.766003 7ff07d4cef80  4 rocksdb:
> Options.write_buffer_size: 33554432
> 2018-01-16 14:50:18.766004 7ff07d4cef80  4 rocksdb:
> Options.max_write_buffer_number: 2
> 2018-01-16 14:50:18.766004 7ff07d4cef80  4 rocksdb:
> Options.compression: NoCompression
> 2018-01-16 14:50:18.766005 7ff07d4cef80  4 rocksdb:
> Options.bottommost_compression: Disabled
> 2018-01-16 14:50:18.766006 7ff07d4cef80  4 rocksdb:
>  Options.prefix_extractor: nullptr
> 2018-01-16 14:50:18.766006 7ff07d4cef80  4 rocksdb:
>  Options.memtable_insert_with_hint_prefix_extractor: nullptr
> 2018-01-16 14:50:18.766007 7ff07d4cef80  4 rocksdb:
>  Options.num_levels: 7
> 2018-01-16 14:50:18.766007 7ff07d4cef80  4 rocksdb:
> Options.min_write_buffer_number_to_merge: 1
> 2018-01-16 14:50:18.766008 7ff07d4cef80  4 rocksdb:
>  Options.max_write_buffer_number_to_maintain: 0
> 2018-01-16 14:50:18.766009 7ff07d4cef80  4 rocksdb:
> Options.compression_opts.window_bits: -14
> 2018-01-16 14:50:18.766009 7ff07d4cef80  4 rocksdb:
> Options.compression_opts.level: -1
> 2018-01-16 14:50:18.766010 7ff07d4cef80  4 rocksdb:
>  Options.compression_opts.strategy: 0
> 2018-01-16 14:50:18.766010 7ff07d4cef80  4 rocksdb:
>  Options.compression_opts.max_dict_bytes: 0
> 2018-01-16 14:50:18.766011 7ff07d4cef80  4 rocksdb:
> Options.level0_file_num_compaction_trigger: 4
> 2018-01-16 14:50:18.766011 7ff07d4cef80  4 rocksdb:
> Options.level0_slowdown_writes_trigger: 20
> 2018-01-16 14:50:18.766012 7ff07d4cef80  4 rocksdb:
> Options.level0_stop_writes_trigger: 36
> 2018-01-16 14:50:18.766013 7ff07d4cef80  4 rocksdb:
>  Options.target_file_size_base: 67108864
> 2018-01-16 14:50:18.766013 7ff07d4cef80  4 rocksdb:
>  Options.target_file_size_multiplier: 1
> 2018-01-16 14:50:18.766014 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_base: 268435456
> 2018-01-16 14:50:18.766014 7ff07d4cef80  4 rocksdb:
> Options.level_compaction_dynamic_level_bytes: 0
> 2018-01-16 14:50:18.766015 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier: 10.000000
> 2018-01-16 14:50:18.766018 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[0]: 1
> 2018-01-16 14:50:18.766018 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[1]: 1
> 2018-01-16 14:50:18.766019 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[2]: 1
> 2018-01-16 14:50:18.766019 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[3]: 1
> 2018-01-16 14:50:18.766020 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[4]: 1
> 2018-01-16 14:50:18.766021 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[5]: 1
> 2018-01-16 14:50:18.766021 7ff07d4cef80  4 rocksdb:
> Options.max_bytes_for_level_multiplier_addtl[6]: 1
> 2018-01-16 14:50:18.766022 7ff07d4cef80  4 rocksdb:
>  Options.max_sequential_skip_in_iterations: 8
> 2018-01-16 14:50:18.766022 7ff07d4cef80  4 rocksdb:
> Options.max_compaction_bytes: 1677721600
> 2018-01-16 14:50:18.766023 7ff07d4cef80  4 rocksdb:
> Options.arena_block_size: 4194304
> 2018-01-16 14:50:18.766023 7ff07d4cef80  4 rocksdb:   
> Options.soft_pending_compaction_bytes_limit:
> 68719476736
> 2018-01-16 14:50:18.766024 7ff07d4cef80  4 rocksdb:   
> Options.hard_pending_compaction_bytes_limit:
> 274877906944
> 2018-01-16 14:50:18.766024 7ff07d4cef80  4 rocksdb:
>  Options.rate_limit_delay_max_milliseconds: 100
> 2018-01-16 14:50:18.766025 7ff07d4cef80  4 rocksdb:
> Options.disable_auto_compactions: 0
> 2018-01-16 14:50:18.766026 7ff07d4cef80  4 rocksdb:
>  Options.compaction_style: kCompactionStyleLevel
> 2018-01-16 14:50:18.766026 7ff07d4cef80  4 rocksdb:
>    Options.compaction_pri: kByCompensatedSize
> 2018-01-16 14:50:18.766027 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_universal.size_ratio: 1
> 2018-01-16 14:50:18.766030 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_universal.min_merge_width: 2
> 2018-01-16 14:50:18.766031 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_universal.max_merge_width: 4294967295
> 2018-01-16 14:50:18.766032 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_universal.max_size_amplification_percent: 200
> 2018-01-16 14:50:18.766032 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_universal.compression_size_percent: -1
> 2018-01-16 14:50:18.766033 7ff07d4cef80  4 rocksdb:
> Options.compaction_options_fifo.max_table_files_size: 1073741824
> 2018-01-16 14:50:18.766033 7ff07d4cef80  4 rocksdb:
>  Options.table_properties_collectors:
> 2018-01-16 14:50:18.766034 7ff07d4cef80  4 rocksdb:
>  Options.inplace_update_support: 0
> 2018-01-16 14:50:18.766035 7ff07d4cef80  4 rocksdb:
>  Options.inplace_update_num_locks: 10000
> 2018-01-16 14:50:18.766035 7ff07d4cef80  4 rocksdb:
>  Options.memtable_prefix_bloom_size_ratio: 0.000000
> 2018-01-16 14:50:18.766036 7ff07d4cef80  4 rocksdb:
>  Options.memtable_huge_page_size: 0
> 2018-01-16 14:50:18.766037 7ff07d4cef80  4 rocksdb:
>    Options.bloom_locality: 0
> 2018-01-16 14:50:18.766037 7ff07d4cef80  4 rocksdb:
> Options.max_successive_merges: 0
> 2018-01-16 14:50:18.766038 7ff07d4cef80  4 rocksdb:
> Options.optimize_filters_for_hits: 0
> 2018-01-16 14:50:18.766038 7ff07d4cef80  4 rocksdb:
> Options.paranoid_file_checks: 0
> 2018-01-16 14:50:18.766040 7ff07d4cef80  4 rocksdb:
> Options.force_consistency_checks: 0
> 2018-01-16 14:50:18.766041 7ff07d4cef80  4 rocksdb:
> Options.report_bg_io_stats: 0
> 2018-01-16 14:50:18.766425 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> version_set.cc:2859] Recovered from manifest 
> file:/var/lib/ceph/mon/ceph-mon0/store.db/MANIFEST-000011
> succeeded,manifest_file_number is 11, next_file_number is 13, last_sequence
> is 448, log_number is 0,prev_log_number is 0,max_column_family is 0
>
> 2018-01-16 14:50:18.766434 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> version_set.cc:2867] Column family [default] (ID 0), log number is 10
>
> 2018-01-16 14:50:18.766479 7ff07d4cef80  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818766473, "job": 1, "event": "recovery_started",
> "log_files": [12]}
> 2018-01-16 14:50:18.766485 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> db_impl_open.cc:482] Recovering log #12 mode 2
> 2018-01-16 14:50:18.769893 7ff07d4cef80  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818769875, "cf_name": "default", "job": 1, "event":
> "table_file_creation", "file_number": 13, "file_size": 9896,
> "table_properties": {"data_size": 8840, "index_size": 75, "filter_size":
> 71, "raw_key_size": 356, "raw_average_key_size": 22, "raw_value_size":
> 8465, "raw_average_value_size": 529, "num_data_blocks": 3, "num_entries":
> 16, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys":
> "0", "kMergeOperands": "0"}}
> 2018-01-16 14:50:18.769936 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> version_set.cc:2395] Creating manifest 14
>
> 2018-01-16 14:50:18.771844 7ff07d4cef80  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818771839, "job": 1, "event": "recovery_finished"}
> 2018-01-16 14:50:18.972893 7ff07d4cef80  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> db_impl_open.cc:1063] DB pointer 0x55bc10882000
> 2018-01-16 14:50:18.972997 7ff06fe3e700  3 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> db_impl.cc:447] ------- DUMPING STATS -------
> 2018-01-16 14:50:18.973006 7ff06fe3e700  3 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> db_impl.cc:448]
> ** DB Stats **
> Uptime(secs): 0.2 total, 0.2 interval
> Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per
> commit group, ingest: 0.00 GB, 0.00 MB/s
> Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB,
> 0.00 MB/s
> Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
> Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit
> group, ingest: 0.00 MB, 0.00 MB/s
> Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB,
> 0.00 MB/s
> Interval stall: 00:00:0.000 H:M:S, 0.0 percent
>
> ** Compaction Stats [default] **
> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB)
> Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec)
> KeyIn KeyDrop
> ------------------------------------------------------------
> ------------------------------------------------------------
> ----------------------------------
>   L0      4/0   134.36 KB   1.0      0.0     0.0      0.0       0.0
> 0.0       0.0   1.0      0.0      2.8         0         1    0.003       0
>     0
>  Sum      4/0   134.36 KB   0.0      0.0     0.0      0.0       0.0
> 0.0       0.0   1.0      0.0      2.8         0         1    0.003       0
>     0
>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
> 0.0       0.0   1.0      0.0      2.8         0         1    0.003       0
>     0
> Uptime(secs): 0.2 total, 0.2 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00
> MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00
> MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0
> level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for
> pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0
> memtable_compaction, 0 memtable_slowdown, interval 0 total count
>
> ** File Read Latency Histogram By Level [default] **
>
> ** Compaction Stats [default] **
> Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB)
> Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec)
> KeyIn KeyDrop
> ------------------------------------------------------------
> ------------------------------------------------------------
> ----------------------------------
>   L0      4/0   134.36 KB   1.0      0.0     0.0      0.0       0.0
> 0.0       0.0   1.0      0.0      2.8         0         1    0.003       0
>     0
>  Sum      4/0   134.36 KB   0.0      0.0     0.0      0.0       0.0
> 0.0       0.0   1.0      0.0      2.8         0         1    0.003       0
>     0
>  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0
> 0.0       0.0   0.0      0.0      0.0         0         0    0.000       0
>     0
> Uptime(secs): 0.2 total, 0.2 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00
> MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00
> MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0
> level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for
> pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0
> memtable_compaction, 0 memtable_slowdown, interval 0 total count
>
> ** File Read Latency Histogram By Level [default] **
>
> 2018-01-16 14:50:18.973039 7ff06fe3e700  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> compaction_job.cc:1403] [default] [JOB 3] Compacting 4@0 files to L1,
> score 1.00
> 2018-01-16 14:50:18.973046 7ff06fe3e700  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> compaction_job.cc:1407] [default] Compaction start summary: Base version
> 2 Base level 0, inputs: [13(9896B) 10(10KB) 7(112KB) 4(1734B)]
>
> 2018-01-16 14:50:18.973060 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818973048, "job": 3, "event": "compaction_started",
> "files_L0": [13, 10, 7, 4], "score": 1, "input_data_size": 137585}
> 2018-01-16 14:50:18.973487 7ff07d4cef80  0 starting mon.mon0 rank 0 at
> public addr [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 at bind addr
> [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 mon_data
> /var/lib/ceph/mon/ceph-mon0 fsid a736559a-92d1-483e-9289-d2c7feed510f
> 2018-01-16 14:50:18.973584 7ff07d4cef80  0 starting mon.mon0 rank 0 at
> [2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0 mon_data
> /var/lib/ceph/mon/ceph-mon0 fsid a736559a-92d1-483e-9289-d2c7feed510f
> 2018-01-16 14:50:18.973998 7ff07d4cef80  1 mon.mon0@-1(probing) e1
> preinit fsid a736559a-92d1-483e-9289-d2c7feed510f
> 2018-01-16 14:50:18.974113 7ff07d4cef80  1 mon.mon0@-1(probing).mds e0
> Unable to load 'last_metadata'
> 2018-01-16 14:50:18.974150 7ff07d4cef80  1 
> mon.mon0@-1(probing).paxosservice(pgmap
> 1..2) refresh upgraded, format 0 -> 1
> 2018-01-16 14:50:18.974155 7ff07d4cef80  1 mon.mon0@-1(probing).pg v0
> on_upgrade discarding in-core PGMap
> 2018-01-16 14:50:18.974209 7ff07d4cef80  0 mon.mon0@-1(probing).mds e1
> print_map
> e1
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds
> uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
> legacy client fscid: -1
>
> No filesystems configured
>
> 2018-01-16 14:50:18.974292 7ff07d4cef80  0 mon.mon0@-1(probing).osd e1
> crush map has features 288514050185494528, adjusting msgr requires
> 2018-01-16 14:50:18.974299 7ff07d4cef80  0 mon.mon0@-1(probing).osd e1
> crush map has features 288514050185494528, adjusting msgr requires
> 2018-01-16 14:50:18.974301 7ff07d4cef80  0 mon.mon0@-1(probing).osd e1
> crush map has features 1009089990564790272, adjusting msgr requires
> 2018-01-16 14:50:18.974302 7ff07d4cef80  0 mon.mon0@-1(probing).osd e1
> crush map has features 288514050185494528, adjusting msgr requires
> 2018-01-16 14:50:18.974429 7ff07d4cef80  1 
> mon.mon0@-1(probing).paxosservice(auth
> 1..50) refresh upgraded, format 0 -> 2
> 2018-01-16 14:50:18.974684 7ff06fe3e700  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> compaction_job.cc:1116] [default] [JOB 3] Generated table #18: 171 keys,
> 132159 bytes
> 2018-01-16 14:50:18.974702 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818974695, "cf_name": "default", "job": 3, "event":
> "table_file_creation", "file_number": 18, "file_size": 132159,
> "table_properties": {"data_size": 129735, "index_size": 780, "filter_size":
> 726, "raw_key_size": 3154, "raw_average_key_size": 18, "raw_value_size":
> 126459, "raw_average_value_size": 739, "num_data_blocks": 29,
> "num_entries": 171, "filter_policy_name": "rocksdb.BuiltinBloomFilter",
> "kDeletedKeys": "0", "kMergeOperands": "0"}}
> 2018-01-16 14:50:18.975080 7ff07d4cef80  0 mon.mon0@-1(probing) e1  my
> rank is now 0 (was -1)
> 2018-01-16 14:50:18.975103 7ff07d4cef80  1 mon.mon0@0(probing) e1
> win_standalone_election
> 2018-01-16 14:50:18.975125 7ff07d4cef80  1 mon.mon0@0(probing).elector(7)
> init, last seen epoch 7, mid-election, bumping
> 2018-01-16 14:50:18.975499 7ff06fe3e700  4 rocksdb:
> [/var/tmp/portage/sys-cluster/ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/
> compaction_job.cc:1173] [default] [JOB 3] Compacted 4@0 files to L1 =>
> 132159 bytes
> 2018-01-16 14:50:18.976476 7ff06fe3e700  4 rocksdb: (Original Log Time
> 2018/01/16-14:50:18.976419) [/var/tmp/portage/sys-cluster/
> ceph-12.2.1/work/ceph-12.2.1/src/rocksdb/db/compaction_job.cc:621]
> [default] compacted to: base level 1 max bytes base 268435456 files[0 1 0 0
> 0 0 0] max score 0.00, MB/sec: 56.7 rd, 54.5 wr, level 1, files in(4, 0)
> out(1) MB in(0.1, 0.0) out(0.1), read-write-amplify(2.0) write-amplify(1.0)
> OK, records in: 201, records dropped: 30
>
> 2018-01-16 14:50:18.976484 7ff06fe3e700  4 rocksdb: (Original Log Time
> 2018/01/16-14:50:18.976434) EVENT_LOG_v1 {"time_micros": 1516135818976428,
> "job": 3, "event": "compaction_finished", "compaction_time_micros": 2426,
> "output_level": 1, "num_output_files": 1, "total_output_size": 132159,
> "num_input_records": 201, "num_output_records": 171, "num_subcompactions":
> 1, "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0,
> "lsm_state": [0, 1, 0, 0, 0, 0, 0]}
> 2018-01-16 14:50:18.976564 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818976562, "job": 3, "event": "table_file_deletion",
> "file_number": 13}
> 2018-01-16 14:50:18.976606 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818976604, "job": 3, "event": "table_file_deletion",
> "file_number": 10}
> 2018-01-16 14:50:18.976648 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818976647, "job": 3, "event": "table_file_deletion",
> "file_number": 7}
> 2018-01-16 14:50:18.976681 7ff06fe3e700  4 rocksdb: EVENT_LOG_v1
> {"time_micros": 1516135818976679, "job": 3, "event": "table_file_deletion",
> "file_number": 4}
> 2018-01-16 14:50:18.977494 7ff07d4cef80 -1 distro_detect - can't detect
> distro_version
> 2018-01-16 14:50:18.977538 7ff07d4cef80  0 log_channel(cluster) log [INF]
> : mon.mon0@0 won leader election with quorum 0
> 2018-01-16 14:50:18.977649 7ff07d4cef80  0 log_channel(cluster) log [INF]
> : monmap e1: 1 mons at {mon0=[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0}
> 2018-01-16 14:50:18.977705 7ff07d4cef80  0 log_channel(cluster) log [DBG]
> : fsmap
> 2018-01-16 14:50:18.977753 7ff07d4cef80  0 log_channel(cluster) log [DBG]
> : osdmap e1: 0 total, 0 up, 0 in
> 2018-01-16 14:50:18.977786 7ff07d4cef80  0 log_channel(cluster) log [DBG]
> : mgrmap e1: no daemons active
> 2018-01-16 14:51:18.974924 7ff07664b700  0 mon.mon0@0(leader).data_health(9)
> update_stats avail 91% total 57109 MB, used 4482 MB, avail 52012 MB
> 2018-01-16 14:51:30.147376 7ff078660700  0 log_channel(audit) log [DBG] :
> from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
> 2018-01-16 14:51:30.147445 7ff078660700  0 log_channel(audit) log [DBG] :
> from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
> 2018-01-16 14:52:18.975109 7ff07664b700  0 mon.mon0@0(leader).data_health(9)
> update_stats avail 91% total 57109 MB, used 4482 MB, avail 52012 MB
> [... identical mon.mon0@0(leader).data_health(9) update_stats lines (avail 91%, total 57109 MB, used 4482 MB, avail 52012 MB) repeated every 60 s from 14:53:18 through 15:16:18 ...]
> 2018-01-16 15:17:18.980064 7ff07664b700  0 mon.mon0@0(leader).data_health(9)
> update_stats avail 91% total 57109 MB, used 4482 MB, avail 52012 MB
> =====================================================
> gentooserver ~ # ceph --admin-daemon /var/run/ceph/ceph-mon.mon0.asok
> mon_status
> {
>     "name": "mon0",
>     "rank": 0,
>     "state": "leader",
>     "election_epoch": 9,
>     "quorum": [
>         0
>     ],
>     "features": {
>         "required_con": "153140804152475648",
>         "required_mon": [
>             "kraken",
>             "luminous"
>         ],
>         "quorum_con": "2305244844532236283",
>         "quorum_mon": [
>             "kraken",
>             "luminous"
>         ]
>     },
>     "outside_quorum": [],
>     "extra_probe_peers": [],
>     "sync_provider": [],
>     "monmap": {
>         "epoch": 1,
>         "fsid": "a736559a-92d1-483e-9289-d2c7feed510f",
>         "modified": "2018-01-14 16:50:59.838277",
>         "created": "2018-01-14 16:50:59.838277",
>         "features": {
>             "persistent": [
>                 "kraken",
>                 "luminous"
>             ],
>             "optional": []
>         },
>         "mons": [
>             {
>                 "rank": 0,
>                 "name": "mon0",
>                 "addr": "[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0",
>                 "public_addr": "[2001:1c:d64b:91c5:3a84:dfce:8546:9982]:6789/0"
>             }
>         ]
>     },
>     "feature_map": {
>         "mon": {
>             "group": {
>                 "features": "0x1ffddff8eea4fffb",
>                 "release": "luminous",
>                 "num": 1
>             }
>         }
>     }
> }
> ===========================================
> Any help would be appreciated.
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com