Thanks Igor, indeed it does match!
# cat ceph-osd.0.log | grep wal
2018-08-16 11:55:27.181950 7fa47c106e00 4 rocksdb: Options.wal_dir: db
2018-08-16 11:55:27.181983 7fa47c106e00 4 rocksdb: Options.wal_bytes_per_sync: 0
2018-08-16 11:55:27.181984 7fa47c106e00 4 rocksdb: Options.wal_recovery_mode: 0
2018-08-16 11:55:27.182000 7fa47c106e00 4 rocksdb: Options.wal_filter: None
2018-08-16 11:55:27.182011 7fa47c106e00 4 rocksdb: Options.max_total_wal_size: 0
# ceph daemon osd.0 perf dump
...
"bluefs": {
"gift_bytes": 0,
"reclaim_bytes": 0,
"db_total_bytes": 80015777792,
"db_used_bytes": 15728640,
* "wal_total_bytes": 0,**
** "wal_used_bytes": 0,*
"slow_total_bytes": 71988936704,
"slow_used_bytes": 0,
"num_files": 12,
"log_bytes": 491520,
"log_compactions": 0,
"logged_bytes": 114688,
"files_written_wal": 1,
"files_written_sst": 1,
"bytes_written_wal": 377013,
"bytes_written_sst": 4842
},
...
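For reference, a one-liner to pull just those two counters out of the dump (assuming jq is available on the node):
# ceph daemon osd.0 perf dump | jq '.bluefs | {wal_total_bytes, wal_used_bytes}'
Both are 0 here, as expected for the merged layout.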
Just an additional question: is it normal that in the OSD log I see max_total_wal_size set to 0?
I used the Ceph default values at this point:
# ceph-conf --show-config | grep wal
bluefs_preextend_wal_files = false
bluestore_block_wal_create = false
bluestore_block_wal_path =
bluestore_block_wal_size = 100663296
rocksdb_separate_wal_dir = false
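To see the values the running OSD actually uses (rather than the compiled-in defaults), I believe the admin socket can also be queried, e.g.:
# ceph daemon osd.0 config get bluestore_rocksdb_options
# ceph daemon osd.0 config show | grep wal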
Regards,
Hervé
On 16/08/2018 at 16:05, Igor Fedotov wrote:
Hi Hervé,
Actually, the absence of a block.wal symlink is a good enough indication that the DB and WAL are merged.
But you can also inspect the OSD startup log, or check the bluefs perf counters after some load: the corresponding WAL counters (total/used) should be zero.
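For example, a minimal check (assuming the admin socket for osd.0 is reachable on the OSD node):
# ceph daemon osd.0 perf dump | grep -E '"wal_(total|used)_bytes"'
With the DB and WAL merged, both lines should report 0.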
Thanks,
Igor
On 8/16/2018 4:55 PM, Hervé Ballans wrote:
Hi all,
I'm setting up my Ceph cluster (latest release of Luminous) and I'm currently configuring OSDs with the WAL and DB on an NVMe disk.
OSD data are on a SATA disk, and both the WAL and the DB are on the same partition of the NVMe disk.
After creating partitions on the NVMe (block partitions, without a filesystem), I created my first OSD from the admin node:
$ ceph-deploy osd create --debug --bluestore --data /dev/sda --block-db /dev/nvme0n1p1 node-osd0
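(For comparison, my understanding is that a separate WAL would be requested with an explicit --block-wal, e.g. with a hypothetical second partition:
$ ceph-deploy osd create --debug --bluestore --data /dev/sda --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2 node-osd0
)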
It works perfectly, but I just want to clarify a point regarding the WAL: I understood that if we specify a --block-db option without a --block-wal, the WAL is stored on the same partition as the DB.
OK, I'm fairly sure it works like that, but how can I check where the WAL is really stored? (There is no block.wal symbolic link in /var/lib/ceph/osd/ceph-0 [1].)
Is there a file somewhere, or a Ceph command, that lets me check this?
I just wanted to be sure of my options before starting deployment on
my 120 OSDs!
Thanks for your clarifications,
Hervé
[1] # ls -l /var/lib/ceph/osd/ceph-0/
total 48
-rw-r--r-- 1 ceph ceph 465 Aug 16 14:36 activate.monmap
lrwxrwxrwx 1 ceph ceph 93 Aug 16 14:36 block -> /dev/ceph-766bd78c-ed1a-4e27-8b4d-7adc4c4f2f0d/osd-block-98bfb597-009b-4e88-bc5e-dd22587d21fe
lrwxrwxrwx 1 ceph ceph 15 Aug 16 14:36 block.db -> /dev/nvme0n1p1
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 bluefs
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Aug 16 14:36 fsid
-rw------- 1 ceph ceph 55 Aug 16 14:36 keyring
-rw-r--r-- 1 ceph ceph 8 Aug 16 14:36 kv_backend
-rw-r--r-- 1 ceph ceph 21 Aug 16 14:36 magic
-rw-r--r-- 1 ceph ceph 4 Aug 16 14:36 mkfs_done
-rw-r--r-- 1 ceph ceph 41 Aug 16 14:36 osd_key
-rw-r--r-- 1 ceph ceph 6 Aug 16 14:36 ready
-rw-r--r-- 1 ceph ceph 10 Aug 16 14:36 type
-rw-r--r-- 1 ceph ceph 2 Aug 16 14:36 whoami