[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Jan Marek
Hello Eugen,

On Mon, Jul 10, 2023 at 10:02:58 CEST, Eugen Block wrote:
> It's fine, you don't need to worry about the WAL device; it is automatically
> created on the NVMe if the DB is there. Having a dedicated WAL device would
> only make sense if, for example, your data devices are on HDDs, your RocksDB is
> on "regular" SSDs, and you also have NVMe devices. But since you already use
> the NVMe for the DB, you don't need to specify a WAL device.

OK :-)
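
Just for my notes: if I ever have the mixed layout you describe (data on
HDDs, DB on "regular" SSDs, WAL on NVMe), I suppose the spec would just gain
a wal_devices section, roughly like this sketch (the service id and the
device filters are only illustrative):

service_type: osd
service_id: osd_spec_split_wal
placement:
  host_pattern: osd8
spec:
  objectstore: bluestore
  filter_logic: AND
  block_db_size: 64G
  data_devices:
    rotational: 1                # the HDDs
  db_devices:
    rotational: 0                # in practice add a size/model filter here so the NVMe is not matched
  wal_devices:
    paths:
    - /dev/nvme0n1               # hypothetical: the NVMe that should carry the WAL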

> 
> > Here I hit a problem:
> > 
> > # ceph daemon osd.8 perf dump bluefs
> > Can't get admin socket path: unable to get conf option admin_socket for
> > osd: b"error parsing 'osd': expected string of the form TYPE.ID, valid
> > types are: auth, mon, osd, mds, mgr, client\n"
> > 
> > I'm on the host where this OSD 8 is running.
> 
> I should have mentioned that you need to enter the container first
> 
> cephadm enter --name osd.8
> 
> and then
> 
> ceph daemon osd.8 perf dump bluefs

Yes, that was the problem:

 ceph daemon osd.8 perf dump bluefs | grep wal
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"files_written_wal": 535,
"bytes_written_wal": 121443819520,
"max_bytes_wal": 0,
"alloc_unit_wal": 0,
"read_random_disk_bytes_wal": 0,
"read_disk_bytes_wal": 0,

So now I can see that it uses the WAL.
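
As I understand it, "wal_total_bytes" stays at 0 only because there is no
separate WAL volume, while "files_written_wal" / "bytes_written_wal" count
the WAL files BlueFS writes to the shared DB device. To check the remaining
OSDs on this host in one go, something like this loop should work (assuming
cephadm enter passes a trailing command through to the container; otherwise
just repeat the two steps above per OSD):

for id in 8 9 10 11; do    # illustrative OSD ids on this host
  echo "osd.$id:"
  cephadm enter --name osd.$id ceph daemon osd.$id perf dump bluefs \
    | grep -E 'wal_total_bytes|bytes_written_wal'
done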

Once again, thanks a lot.

Sincerely
Jan Marek
-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html




[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Eugen Block
It's fine, you don't need to worry about the WAL device; it is
automatically created on the NVMe if the DB is there. Having a
dedicated WAL device would only make sense if, for example, your data
devices are on HDDs, your RocksDB is on "regular" SSDs, and you also
have NVMe devices. But since you already use the NVMe for the DB, you
don't need to specify a WAL device.



Here I hit a problem:

# ceph daemon osd.8 perf dump bluefs
Can't get admin socket path: unable to get conf option admin_socket  
for osd: b"error parsing 'osd': expected string of the form TYPE.ID,  
valid types are: auth, mon, osd, mds, mgr, client\n"


I'm on the host where this OSD 8 is running.


I should have mentioned that you need to enter the container first

cephadm enter --name osd.8

and then

ceph daemon osd.8 perf dump bluefs

Quoting Jan Marek:


Hello Eugen,

I've tried to specify a dedicated WAL device, but I have only
/dev/nvme0n1, so I cannot write a correct YAML file...

On Mon, Jul 10, 2023 at 09:12:29 CEST, Eugen Block wrote:

Yes, because you did *not* specify a dedicated WAL device. This is also
reflected in the OSD metadata:

$ ceph osd metadata 6 | grep dedicated
"bluefs_dedicated_db": "1",
"bluefs_dedicated_wal": "0"


Yes, it is exactly as you wrote.



> Only if you had specified a dedicated WAL device would you see it in the lvm
list output, so this is all as expected.
You can check out the perf dump of an OSD to see that it actually writes to
the WAL:

# ceph daemon osd.6 perf dump bluefs | grep wal
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"files_written_wal": 1588,
"bytes_written_wal": 1090677563392,
"max_bytes_wal": 0,


Here I hit a problem:

# ceph daemon osd.8 perf dump bluefs
Can't get admin socket path: unable to get conf option admin_socket  
for osd: b"error parsing 'osd': expected string of the form TYPE.ID,  
valid types are: auth, mon, osd, mds, mgr, client\n"


I'm on the host where this OSD 8 is running.

My CEPH version is the latest (I hope) Quincy: 17.2.6.

Thanks a lot for the help.

Sincerely
Jan Marek




Quoting Jan Marek:

> Hello,
>
> but when I try to list the device config with ceph-volume, I can see
> a DB device, but no WAL device:
>
> ceph-volume lvm list
>
> == osd.8 ===
>
>   [db]   
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

>
>   block device   
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

>   block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
>   cephx lockbox secret
>   cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
>   cluster name  ceph
>   crush device class
>   db device  
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

>   db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
>   encrypted 0
>   osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
>   osd id8
>   osdspec affinity  osd_spec_default
>   type  db
>   vdo   0
>   devices   /dev/nvme0n1
>
>   [block]
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

>
>   block device   
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

>   block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
>   cephx lockbox secret
>   cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
>   cluster name  ceph
>   crush device class
>   db device  
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

>   db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
>   encrypted 0
>   osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
>   osd id8
>   osdspec affinity  osd_spec_default
>   type  block
>   vdo   0
>   devices   /dev/sdi
>
> (part of listing...)
>
> Sincerely
> Jan Marek
>
>
> On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
> > Hi,
> >
> > if you don't specify a different device for WAL it will be automatically
> > colocated on the same device as the DB. So you're good with this
> > configuration.
> >
> > Regards,
> > Eugen
> >
> >
> > Quoting Jan Marek:
> >
> > > Hello,
> > >
> > > I've tried to add to the CEPH cluster an OSD node with 12 rotational
> > > disks and 1 NVMe. My YAML was this:
> > >
> > > service_type: osd
> > > service_id: osd_spec_default
> > > service_name: osd.osd_spec_default
> > > placement:
> > >   host_pattern: osd8
> > > spec:
> > >   block_db_size: 64G
> > >   data_devices:
> > > 

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Jan Marek
Hello Eugen,

I've tried to specify a dedicated WAL device, but I have only
/dev/nvme0n1, so I cannot write a correct YAML file...

On Mon, Jul 10, 2023 at 09:12:29 CEST, Eugen Block wrote:
> Yes, because you did *not* specify a dedicated WAL device. This is also
> reflected in the OSD metadata:
> 
> $ ceph osd metadata 6 | grep dedicated
> "bluefs_dedicated_db": "1",
> "bluefs_dedicated_wal": "0"

Yes, it is exactly as you wrote.

> 
> Only if you had specified a dedicated WAL device would you see it in the lvm
> list output, so this is all as expected.
> You can check out the perf dump of an OSD to see that it actually writes to
> the WAL:
> 
> # ceph daemon osd.6 perf dump bluefs | grep wal
> "wal_total_bytes": 0,
> "wal_used_bytes": 0,
> "files_written_wal": 1588,
> "bytes_written_wal": 1090677563392,
> "max_bytes_wal": 0,

Here I hit a problem:

# ceph daemon osd.8 perf dump bluefs
Can't get admin socket path: unable to get conf option admin_socket for osd: 
b"error parsing 'osd': expected string of the form TYPE.ID, valid types are: 
auth, mon, osd, mds, mgr, client\n"

I'm on the host where this OSD 8 is running.

My CEPH version is the latest (I hope) Quincy: 17.2.6.
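
For completeness, the versions of all running daemons can be listed with:

# ceph versions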

Thanks a lot for the help.

Sincerely
Jan Marek

> 
> 
> Quoting Jan Marek:
> 
> > Hello,
> > 
> > but when I try to list the device config with ceph-volume, I can see
> > a DB device, but no WAL device:
> > 
> > ceph-volume lvm list
> > 
> > == osd.8 ===
> > 
> >   [db]  
> > /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> > 
> >   block device  
> > /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> >   block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
> >   cephx lockbox secret
> >   cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
> >   cluster name  ceph
> >   crush device class
> >   db device 
> > /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> >   db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
> >   encrypted 0
> >   osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
> >   osd id8
> >   osdspec affinity  osd_spec_default
> >   type  db
> >   vdo   0
> >   devices   /dev/nvme0n1
> > 
> >   [block]   
> > /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> > 
> >   block device  
> > /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
> >   block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
> >   cephx lockbox secret
> >   cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
> >   cluster name  ceph
> >   crush device class
> >   db device 
> > /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
> >   db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
> >   encrypted 0
> >   osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
> >   osd id8
> >   osdspec affinity  osd_spec_default
> >   type  block
> >   vdo   0
> >   devices   /dev/sdi
> > 
> > (part of listing...)
> > 
> > Sincerely
> > Jan Marek
> > 
> > 
> > On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
> > > Hi,
> > > 
> > > if you don't specify a different device for WAL it will be automatically
> > > colocated on the same device as the DB. So you're good with this
> > > configuration.
> > > 
> > > Regards,
> > > Eugen
> > > 
> > > 
> > > Quoting Jan Marek:
> > > 
> > > > Hello,
> > > >
> > > > I've tried to add to the CEPH cluster an OSD node with 12 rotational
> > > > disks and 1 NVMe. My YAML was this:
> > > >
> > > > service_type: osd
> > > > service_id: osd_spec_default
> > > > service_name: osd.osd_spec_default
> > > > placement:
> > > >   host_pattern: osd8
> > > > spec:
> > > >   block_db_size: 64G
> > > >   data_devices:
> > > > rotational: 1
> > > >   db_devices:
> > > > paths:
> > > > - /dev/nvme0n1
> > > >   filter_logic: AND
> > > >   objectstore: bluestore
> > > >
> > > > Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
> > > > How can I add a WAL to these OSDs?
> > > >
> > > > The NVMe device still has 128 GB of free space.
> > > >
> > > > Thanks a lot.
> > > >
> > > > Sincerely
> > > > Jan Marek
> > > > --
> > > > Ing. Jan Marek
> > > > University of South Bohemia
> > > > Academic Computer Centre
> > > > Phone: +420389032080
> > > > http://www.gnu.org/philosophy/no-word-attachments.cs.html
> > > 
> > > 
> > > 

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Joachim Kraftmayer - ceph ambassador
you can also test directly with ceph bench whether the WAL is on the
flash device:


https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/
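
For example, a quick smoke test along those lines could be (the arguments
are total bytes and bytes per write; the values here are only illustrative,
see the blog post for a proper procedure):

ceph tell osd.8 bench 65536 4096    # small 4 KiB writes, which should mostly take the (deferred-write) WAL path
ceph tell osd.8 bench               # default 1 GiB in 4 MiB writes, for comparison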

Joachim


___
ceph ambassador DACH
ceph consultant since 2012

Clyso GmbH - Premier Ceph Foundation Member

https://www.clyso.com/

On 10.07.23 at 09:12, Eugen Block wrote:
Yes, because you did *not* specify a dedicated WAL device. This is 
also reflected in the OSD metadata:


$ ceph osd metadata 6 | grep dedicated
    "bluefs_dedicated_db": "1",
    "bluefs_dedicated_wal": "0"

Only if you had specified a dedicated WAL device would you see it in
the lvm list output, so this is all as expected.
You can check out the perf dump of an OSD to see that it actually 
writes to the WAL:


# ceph daemon osd.6 perf dump bluefs | grep wal
    "wal_total_bytes": 0,
    "wal_used_bytes": 0,
    "files_written_wal": 1588,
    "bytes_written_wal": 1090677563392,
    "max_bytes_wal": 0,


Quoting Jan Marek:


Hello,

but when I try to list the device config with ceph-volume, I can see
a DB device, but no WAL device:

ceph-volume lvm list

== osd.8 ===

  [db]          /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

      block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted                 0
      osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id                    8
      osdspec affinity          osd_spec_default
      type                      db
      vdo                       0
      devices                   /dev/nvme0n1

  [block]       /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

      block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted                 0
      osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id                    8
      osdspec affinity          osd_spec_default
      type                      block
      vdo                       0
      devices                   /dev/sdi

(part of listing...)

Sincerely
Jan Marek


On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:

Hi,

if you don't specify a different device for WAL it will be automatically
colocated on the same device as the DB. So you're good with this
configuration.

Regards,
Eugen


Quoting Jan Marek:

> Hello,
>
> I've tried to add to the CEPH cluster an OSD node with 12 rotational
> disks and 1 NVMe. My YAML was this:
>
> service_type: osd
> service_id: osd_spec_default
> service_name: osd.osd_spec_default
> placement:
>   host_pattern: osd8
> spec:
>   block_db_size: 64G
>   data_devices:
> rotational: 1
>   db_devices:
> paths:
> - /dev/nvme0n1
>   filter_logic: AND
>   objectstore: bluestore
>
> Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
> How can I add a WAL to these OSDs?
>
> The NVMe device still has 128 GB of free space.
>
> Thanks a lot.
>
> Sincerely
> Jan Marek
> --
> Ing. Jan Marek
> University of South Bohemia
> Academic Computer Centre
> Phone: +420389032080
> http://www.gnu.org/philosophy/no-word-attachments.cs.html




--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html





[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Eugen Block
Yes, because you did *not* specify a dedicated WAL device. This is  
also reflected in the OSD metadata:


$ ceph osd metadata 6 | grep dedicated
"bluefs_dedicated_db": "1",
"bluefs_dedicated_wal": "0"

Only if you had specified a dedicated WAL device would you see it in
the lvm list output, so this is all as expected.
You can check out the perf dump of an OSD to see that it actually  
writes to the WAL:


# ceph daemon osd.6 perf dump bluefs | grep wal
"wal_total_bytes": 0,
"wal_used_bytes": 0,
"files_written_wal": 1588,
"bytes_written_wal": 1090677563392,
"max_bytes_wal": 0,


Quoting Jan Marek:


Hello,

but when I try to list the device config with ceph-volume, I can see
a DB device, but no WAL device:

ceph-volume lvm list

== osd.8 ===

  [db]   
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9


  block device   
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

  block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
  cephx lockbox secret
  cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
  cluster name  ceph
  crush device class
  db device  
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

  db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
  encrypted 0
  osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
  osd id8
  osdspec affinity  osd_spec_default
  type  db
  vdo   0
  devices   /dev/nvme0n1

  [block]
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970


  block device   
/dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

  block uuidj4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
  cephx lockbox secret
  cluster fsid  2c565e24-7850-47dc-a751-a6357cbbaf2a
  cluster name  ceph
  crush device class
  db device  
/dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

  db uuid   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
  encrypted 0
  osd fsid  26b1d4b7-2425-4a2f-912b-111cf66a5970
  osd id8
  osdspec affinity  osd_spec_default
  type  block
  vdo   0
  devices   /dev/sdi

(part of listing...)

Sincerely
Jan Marek


On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:

Hi,

if you don't specify a different device for WAL it will be automatically
colocated on the same device as the DB. So you're good with this
configuration.

Regards,
Eugen


Quoting Jan Marek:

> Hello,
>
> I've tried to add to the CEPH cluster an OSD node with 12 rotational
> disks and 1 NVMe. My YAML was this:
>
> service_type: osd
> service_id: osd_spec_default
> service_name: osd.osd_spec_default
> placement:
>   host_pattern: osd8
> spec:
>   block_db_size: 64G
>   data_devices:
> rotational: 1
>   db_devices:
> paths:
> - /dev/nvme0n1
>   filter_logic: AND
>   objectstore: bluestore
>
> Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
> How can I add a WAL to these OSDs?
>
> The NVMe device still has 128 GB of free space.
>
> Thanks a lot.
>
> Sincerely
> Jan Marek
> --
> Ing. Jan Marek
> University of South Bohemia
> Academic Computer Centre
> Phone: +420389032080
> http://www.gnu.org/philosophy/no-word-attachments.cs.html




--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html





[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Jan Marek
Hello,

but when I try to list the device config with ceph-volume, I can see
a DB device, but no WAL device:

ceph-volume lvm list

== osd.8 ===

  [db]          /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9

      block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted                 0
      osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id                    8
      osdspec affinity          osd_spec_default
      type                      db
      vdo                       0
      devices                   /dev/nvme0n1

  [block]       /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970

      block device              /dev/ceph-eaf5f0d7-ad50-4009-9ee6-04b8204b5b1a/osd-block-26b1d4b7-2425-4a2f-912b-111cf66a5970
      block uuid                j4s9lv-wS9n-xg2W-I4Y0-fUSu-Vuvl-9gOB2P
      cephx lockbox secret
      cluster fsid              2c565e24-7850-47dc-a751-a6357cbbaf2a
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-5aa92e38-077b-48e2-bda6-5b7db7b7701c/osd-db-bfd11468-d109-4f85-9723-75976f51bfb9
      db uuid                   d9MZ2r-ImXX-Xod0-TNDS-tqi5-oG5Y-wrXFtW
      encrypted                 0
      osd fsid                  26b1d4b7-2425-4a2f-912b-111cf66a5970
      osd id                    8
      osdspec affinity          osd_spec_default
      type                      block
      vdo                       0
      devices                   /dev/sdi

(part of listing...)
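
The listing can also be narrowed to a single device, or dumped as JSON for
scripting, e.g.:

ceph-volume lvm list /dev/sdi           # only the OSD using this data disk
ceph-volume lvm list --format json      # complete listing, machine readable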

Sincerely
Jan Marek


On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
> Hi,
> 
> if you don't specify a different device for WAL it will be automatically
> colocated on the same device as the DB. So you're good with this
> configuration.
> 
> Regards,
> Eugen
> 
> 
> Quoting Jan Marek:
> 
> > Hello,
> > 
> > I've tried to add to the CEPH cluster an OSD node with 12 rotational
> > disks and 1 NVMe. My YAML was this:
> > 
> > service_type: osd
> > service_id: osd_spec_default
> > service_name: osd.osd_spec_default
> > placement:
> >   host_pattern: osd8
> > spec:
> >   block_db_size: 64G
> >   data_devices:
> > rotational: 1
> >   db_devices:
> > paths:
> > - /dev/nvme0n1
> >   filter_logic: AND
> >   objectstore: bluestore
> > 
> > Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
> > How can I add a WAL to these OSDs?
> >
> > The NVMe device still has 128 GB of free space.
> > 
> > Thanks a lot.
> > 
> > Sincerely
> > Jan Marek
> > --
> > Ing. Jan Marek
> > University of South Bohemia
> > Academic Computer Centre
> > Phone: +420389032080
> > http://www.gnu.org/philosophy/no-word-attachments.cs.html
> 
> 

-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html




[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Eugen Block

Hi,

if you don't specify a different device for WAL it will be  
automatically colocated on the same device as the DB. So you're good  
with this configuration.


Regards,
Eugen


Quoting Jan Marek:


Hello,

I've tried to add to the CEPH cluster an OSD node with 12 rotational
disks and 1 NVMe. My YAML was this:

service_type: osd
service_id: osd_spec_default
service_name: osd.osd_spec_default
placement:
  host_pattern: osd8
spec:
  block_db_size: 64G
  data_devices:
rotational: 1
  db_devices:
paths:
- /dev/nvme0n1
  filter_logic: AND
  objectstore: bluestore

Now I have 12 OSDs with the DB on the NVMe device, but without a WAL.
How can I add a WAL to these OSDs?

The NVMe device still has 128 GB of free space.
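
For reference, the free space left on the NVMe can be checked on the OSD
host with plain LVM tools, e.g.:

vgs -o vg_name,vg_size,vg_free          # look for the ceph-... VG holding the osd-db LVs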

Thanks a lot.

Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io