[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-29 Thread Thomas Roth
Trying to resolve this, at first I tried to pause the cephadm processes ('ceph config-key set 
mgr/cephadm/pause true'), which did not lead anywhere but to a loss of connectivity: how do you "resume"? 
It is not mentioned anywhere in the documentation!
Actually, there are quite a few things in Ceph that you can switch on but not off, or switch off but 
not on - such as rebooting a mgr node ...
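
For the record, the counterpart to the pause above seems to be either flipping the same key back or 
the orch-level commands (a sketch, untested on this cluster):

  ceph config-key set mgr/cephadm/pause false   # undo the pause set above
  ceph orch pause                               # orchestrator-level pause ...
  ceph orch resume                              # ... and resume, if available in this release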



In addition to `ceph orch host ls` showing every host as Offline, I thus also 
managed to get:
> ceph -s
>   id:     98e1e122-ebe3-11ec-b165-8208fe80
>   health: HEALTH_WARN
>           9 hosts fail cephadm check
>           21 stray daemon(s) not managed by cephadm
>
>   services:
>     mon: 5 daemons, quorum lxbk0374,lxbk0375,lxbk0376,lxbk0377,lxbk0378 (age 6d)
>     mgr: lxbk0375.qtgomh(active, since 6d), standbys: lxbk0376.jstndr, lxbk0374.hdvmvg
>     mds: 1/1 daemons up, 11 standby
>     osd: 24 osds: 24 up (since 5d), 24 in (since 5d)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   3 pools, 641 pgs
>     objects: 4.77k objects, 16 GiB
>     usage:   50 GiB used, 909 TiB / 910 TiB avail
>     pgs:     641 active+clean


The good thing is that neither ceph nor cephfs care about the orchestrator thingy - everything keeps 
working, it would seem ;-)



Finally, the workaround (or solution?):
Re-adding missing nodes is a bad idea in almost every system, but not in Ceph.

Go to lxbk0375 - since that is the active mgr, cf. above.

> ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374
> ceph orch host add lxbk0374 10.20.2.161

-> 'ceph orch host ls' shows that node as no longer Offline.
-> Repeat with all the other hosts, and everything looks fine from the orch view as well.
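
The same workaround as a small loop over all the offline hosts (a sketch; the host/IP pairs are taken 
from the 'ceph orch host ls' output quoted below):

  for entry in lxbk0374/10.20.2.161 lxbk0375/10.20.2.162 lxbk0376/10.20.2.163 \
               lxbk0377/10.20.2.164 lxbk0378/10.20.2.165 lxfs416/10.20.2.178 \
               lxfs417/10.20.2.179 lxfs418/10.20.2.222 lxmds23/10.20.6.72 \
               lxmds24/10.20.6.74; do
      host=${entry%/*}; addr=${entry#*/}
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@$host   # push the cluster ssh key again
      ceph orch host add $host $addr                    # re-adding marks the host online
  done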


My question: Did I miss this procedure in the manuals?


Cheers
Thomas

On 23/06/2022 18.29, Thomas Roth wrote:

Hi all,

found this bug https://tracker.ceph.com/issues/51629  (Octopus 15.2.13), reproduced it in Pacific and 
now again in Quincy:

- new cluster
- 3 mgr nodes
- reboot active mgr node
- (only in Quincy:) standby mgr node takes over, rebooted node becomes standby
- `ceph orch host ls` shows all hosts as `offline`
- add a new host: not offline

In my setup, hostnames and IPs are well known, thus

# ceph orch host ls
HOST  ADDR LABELS  STATUS
lxbk0374  10.20.2.161  _admin  Offline
lxbk0375  10.20.2.162  Offline
lxbk0376  10.20.2.163  Offline
lxbk0377  10.20.2.164  Offline
lxbk0378  10.20.2.165  Offline
lxfs416   10.20.2.178  Offline
lxfs417   10.20.2.179  Offline
lxfs418   10.20.2.222  Offline
lxmds22   10.20.6.67
lxmds23   10.20.6.72   Offline
lxmds24   10.20.6.74   Offline


(All lxbk are mon nodes, the first 3 are mgr, 'lxmds22' was added after the 
fatal reboot.)


Does this matter at all?
The old bug report is one year old, now with prio 'Low'. And some people must have rebooted one host 
or another in their clusters...


There is a cephfs on our cluster, operations seem to be unaffected.


Cheers
Thomas



--

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-27 Thread Thomas Roth

Hi Adam,

no, this is the 'feature' where the reboot of a mgr host causes all known 
hosts to become unmanaged.


> # lxbk0375 # ceph cephadm check-host lxbk0374 10.20.2.161
> mgr.server reply reply (1) Operation not permitted check-host failed:
> Host 'lxbk0374' not found. Use 'ceph orch host ls' to see all managed hosts.

In some email on this issue that I can't find at the moment, someone describes a workaround that allows 
restarting the entire orchestrator business.

But that sounded risky.
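
If it was the usual way of bouncing the orchestrator, it would be something along these lines (a sketch, 
and not necessarily the workaround meant above, so use with care):

  ceph mgr module disable cephadm   # stop the cephadm mgr module
  ceph mgr module enable cephadm    # and start it again
  ceph mgr fail                     # or simply force a mgr failover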

Regards
Thomas


On 23/06/2022 19.42, Adam King wrote:

Hi Thomas,

What happens if you run "ceph cephadm check-host <hostname>" for one of the
hosts that is offline (and if that fails "ceph cephadm check-host
<hostname> <addr>")? Usually, the hosts get marked offline when some ssh
connection to them fails. The check-host command will attempt a connection
and maybe let us see why it's failing, or, if there is no longer an issue
connecting to the host, should mark the host online again.
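
For the first offline host in your list, for example:

  ceph cephadm check-host lxbk0374
  ceph cephadm check-host lxbk0374 10.20.2.161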

Thanks,
   - Adam King

On Thu, Jun 23, 2022 at 12:30 PM Thomas Roth  wrote:


Hi all,

found this bug https://tracker.ceph.com/issues/51629  (Octopus 15.2.13),
reproduced it in Pacific and
now again in Quincy:
- new cluster
- 3 mgr nodes
- reboot active mgr node
- (only in Quincy:) standby mgr node takes over, rebooted node becomes
standby
- `ceph orch host ls` shows all hosts as `offline`
- add a new host: not offline

In my setup, hostnames and IPs are well known, thus

# ceph orch host ls
HOST  ADDR LABELS  STATUS
lxbk0374  10.20.2.161  _admin  Offline
lxbk0375  10.20.2.162  Offline
lxbk0376  10.20.2.163  Offline
lxbk0377  10.20.2.164  Offline
lxbk0378  10.20.2.165  Offline
lxfs416   10.20.2.178  Offline
lxfs417   10.20.2.179  Offline
lxfs418   10.20.2.222  Offline
lxmds22   10.20.6.67
lxmds23   10.20.6.72   Offline
lxmds24   10.20.6.74   Offline


(All lxbk are mon nodes, the first 3 are mgr, 'lxmds22' was added after
the fatal reboot.)


Does this matter at all?
The old bug report is one year old, now with prio 'Low'. And some people
must have rebooted one host or another in their clusters...

There is a cephfs on our cluster, operations seem to be unaffected.


Cheers
Thomas

--
----
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io





--
----
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
Phone: +49-6159-71 1453  Fax: +49-6159-71 2986


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephadm orch thinks hosts are offline

2022-06-23 Thread Thomas Roth

Hi all,

found this bug https://tracker.ceph.com/issues/51629  (Octopus 15.2.13), reproduced it in Pacific and 
now again in Quincy:

- new cluster
- 3 mgr nodes
- reboot active mgr node
- (only in Quincy:) standby mgr node takes over, rebooted node becomes standby
- `ceph orch host ls` shows all hosts as `offline`
- add a new host: not offline

In my setup, hostnames and IPs are well known, thus

# ceph orch host ls
HOST  ADDR LABELS  STATUS
lxbk0374  10.20.2.161  _admin  Offline
lxbk0375  10.20.2.162  Offline
lxbk0376  10.20.2.163  Offline
lxbk0377  10.20.2.164  Offline
lxbk0378  10.20.2.165  Offline
lxfs416   10.20.2.178  Offline
lxfs417   10.20.2.179  Offline
lxfs418   10.20.2.222  Offline
lxmds22   10.20.6.67
lxmds23   10.20.6.72   Offline
lxmds24   10.20.6.74   Offline


(All lxbk are mon nodes, the first 3 are mgr, 'lxmds22' was added after the 
fatal reboot.)


Does this matter at all?
The old bug report is one year old, now with prio 'Low'. And some people must have rebooted one host 
or another in their clusters...


There is a cephfs on our cluster, operations seem to be unaffected.


Cheers
Thomas

--

Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] active+undersized+degraded due to OSD size differences?

2022-06-19 Thread Thomas Roth

Hi all,

I have set up a cluster for use with cephfs. Trying to follow the recommendations for the MDS service, I picked two machines which provide SSD-based 
disk space, 2 TB each, to put the cephfs metadata pool there.

My ~20 HDD-based OSDs in the cluster have 43 TB each.

I created a CRUSH rule tied to this MDS hardware and then created the metadata pool by 
specifying the rule name, mds-ssd:

ceph osd pool create metadata0 128 128 replicated mds-ssd

whereas the data pool was just created as a standard replicated pool.
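
For reference, a class-restricted rule of that kind would be created roughly like this (a sketch; the 
'ssd' device class and the exact rule arguments are assumptions, the real rule may differ):

  ceph osd crush rule create-replicated mds-ssd default host ssd   # root 'default', failure domain 'host', class 'ssd'
  ceph osd pool create metadata0 128 128 replicated mds-ssd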

cephfs creation seemed to work with these, but now the system is stuck with

>pgs: 22/72 objects degraded (30.556%)
> 513 active+clean
> 110 active+undersized
> 18  active+undersized+degraded


What is the main reason here? I can think of these:
1. There are just two OSDs for the metadata pool - a replicated pool without 
further tweaks would need three OSDs/hosts?
2. Ceph might have placed the metadata pool onto those OSDs, but still considers them valid targets for other pools, hence tries to reconcile OSDs 
of 2 TB and 43 TB and fails?
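
A quick way to check both guesses (a sketch):

  ceph osd pool get metadata0 size    # replica count vs. the two SSD OSDs/hosts
  ceph osd crush rule dump mds-ssd    # failure domain and device class the rule actually selects
  ceph pg dump_stuck undersized       # which pool the stuck PGs belong to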



Btw, how can I change the default failure domain? osd, host, whatever?
This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did not 
find the command to inject my failure domain into the config database...
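
One way to do it without a ceph.conf is a new CRUSH rule with 'osd' as the failure domain and switching 
the pool over to it (a sketch; the rule name and the 'ssd' class restriction are assumptions):

  ceph osd crush rule create-replicated mds-ssd-osd default osd ssd
  ceph osd pool set metadata0 crush_rule mds-ssd-osd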


Regards
Thomas
--
----
Thomas Roth   IT-HPC-Linux
Location: SB3 2.291   Phone: 1453

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] ceph.pub not presistent over reboots?

2022-06-15 Thread Thomas Roth

Hi all,


while setting up a system with cephadm under Quincy, I bootstrapped from host A, added mons on hosts B 
and C, and rebooted host A.

Afterwards, ceph seemed to be in a healthy state (no OSDs yet, of course), but my host A 
was "offline".

I was afraid I had run into https://tracker.ceph.com/issues/51027, but no, my host A simply lacked the 
ceph.pub key.


Since this is Quincy (no support for non-root users), I had provided the key to the other mons via 'ssh-copy-id -f 
-i /etc/ceph/ceph.pub root@hostB' etc., but not to my host A.

After making up for that, my host A found itself to be online again ;-)


Then I prepared three machines to host OSDs. For some reason, one of them showed only the locked 
'/dev/sdX' devices and not the LVs that I intended to use as OSDs.
I rebooted, which did not change anything, then copied the key ('ssh-copy-id -f -i /etc/ceph/ceph.pub 
root@fileserverA'), which fixed everything, and now I am wondering if I should write a cron job that 
periodically copies the key to all involved machines...
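
Instead of a cron job, re-pushing the key by hand after such an incident is short enough (a sketch; 
the host names are placeholders):

  ceph cephadm get-pub-key > /tmp/ceph.pub          # export the cluster's ssh key from the mgr
  for h in hostA hostB hostC fileserverA; do
      ssh-copy-id -f -i /tmp/ceph.pub root@$h
  done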



Where could the keys get lost? Is this a container feature?


Is it really true that sites using cephadm never reboot their nodes? Can't 
really believe that.


Regards
Thomas


--
--------
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] set configuration options in the cephadm age

2022-06-14 Thread Thomas Roth

https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/

talks about changing 'osd_crush_chooseleaf_type' before creating monitors or OSDs, for the special 
case of a 1-node-cluster.


However, the documentation fails to explain how/where to set this option, seeing that with 'cephadm', 
there is (almost) no /etc/ceph/ceph.conf anymore.



If you search the web for various errors in Ceph, you will come across clever people explaining 
how to manipulate the DB on the fly, for example "ceph tell mon.* injectargs...".
There should be a paragraph in the documentation mentioning this, along with a corresponding 
paragraph on setting options permanently...
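
What I eventually pieced together, as a sketch (not verified on every release, and the caveat on 
osd_crush_chooseleaf_type is my reading of the docs):

  # persistent: stored in the mon config database, the cephadm-era replacement for ceph.conf;
  # as far as I understand, osd_crush_chooseleaf_type only matters when the initial crush map is created
  ceph config set global osd_crush_chooseleaf_type 0

  # runtime-only: the 'injectargs' style mentioned above
  ceph tell mon.* injectargs '--osd_crush_chooseleaf_type 0'

  # or hand an initial ceph.conf to bootstrap, before any mon or OSD exists
  cephadm bootstrap --mon-ip <mon-ip> --config initial-ceph.conf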



In fact, I would just like to have the failure domain 'OSD' instead of 'host'.
Any clever way of doing that?


Regards,
Thomas

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: v17.2.0 Quincy released

2022-05-25 Thread Thomas Roth

Hello,

just found that this "feature" is not restricted to upgrades - I just tried to bootstrap an entirely new cluster with Quincy, also with the fatal 
switch to the non-root user: adding the second mon results in

> Unable to write lxmon1:/etc/ceph/ceph.conf: scp: /tmp/etc/ceph/ceph.conf.new: Permission denied



By now, I go to ceph.io every day to see if the motd has been changed to "If it 
compiles at all, release it as stable".

Cheers,
Thomas


On 5/4/22 14:57, Jozef Rebjak wrote:

Hello, if there is somebody who is using a non-root user within Pacific and would 
like to upgrade to Quincy, read this first:

https://blog.jozefrebjak.com/why-to-wait-with-upgrade-from-ceph-pacific-with-non-root-user-to-quincy

or message me with a solution. For me it’s just about waiting for v17.2.1.

Thanks



On 4 May 2022 at 11:16, Ilya Dryomov wrote:

On Tue, May 3, 2022 at 9:31 PM Steve Taylor  wrote:


Just curious, is there any updated ETA on the 16.2.8 release? This
note implied that it was pretty close a couple of weeks ago, but the
release task seems to have several outstanding items before it's
wrapped up.

I'm just wondering if it's worth waiting a bit for new Pacific
deployments to try 16.2.8 or not. Thanks!


Hi Steve,

The last blocker PR just merged so it should be a matter of days now.

Thanks,

Ilya



Steve

On Wed, Apr 20, 2022 at 3:37 AM Ilya Dryomov  wrote:


On Wed, Apr 20, 2022 at 6:21 AM Harry G. Coin  wrote:


Great news!  Any notion when the many pending bug fixes will show up in
Pacific?  It's been a while.


Hi Harry,

The 16.2.8 release is planned within the next week or two.

Thanks,

Ilya
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Multipath and cephadm

2022-01-30 Thread Thomas Roth

Thanks, Peter, this works.

Before, I had the impression cephadm would only accept 'bare' disks as osd devices, but indeed it will swallow any kind of block device or LV that you 
prepare for it on the osd host.


Regards,
Thomas

On 1/25/22 20:21, Peter Childs wrote:

This came from a previous thread that I started last year, so you may want
to look in the archive.

https://www.mail-archive.com/ceph-users@ceph.io/msg11572.html

Although the doc page it refers to looks to have disappeared :(

You can use "ceph orch daemon add osd <host>:<device>"

I've been using

# $1 is the multipath device name, e.g. mpatha

pvcreate /dev/mapper/$1
vgcreate $1-vg /dev/mapper/$1
lvcreate -l 100%FREE -n $1-lv $1-vg
ceph orch daemon add osd dampwood48:$1-vg/$1-lv

to create OSDs on the multipath devices - a terrible dev-ops script, but it
works.

I've not currently got a method to use the yaml device description method
(which would be much more ideal),  hence there is no obvious way to use
separate db_devices, but this does look to work for me as far as it goes.

Hope that helps

Peter Childs




On Tue, 25 Jan 2022, 17:53 Thomas Roth,  wrote:


Would like to know that as well.

I have the same setup - cephadm, Pacific, CentOS8, and a host with a
number of HDDs which are all connected by 2 paths.
No way to use these without multipath

  > ceph orch daemon add osd serverX:/dev/sdax

  > Cannot update volume group ceph-51f8b9b0-2917-431d-8a6d-8ff90440641b
with duplicate PV devices

(because sdax == sdce, etc.)

and with multipath, it fails with

  > ceph orch daemon add osd serverX:/dev/mapper/mpathbq

  > podman: stderr -->  IndexError: list index out of range


Quite strange that the 'future of storage' does not know how to handle
multipath devices?

Regards,
Thomas


On 12/23/21 18:40, Michal Strnad wrote:

Hi all.

We have a problem using disks accessible via multipath. We are using
cephadm for deployment, the Pacific version for containers, CentOS 8 Stream on
the servers, and the following LVM configuration.

devices {
  multipath_component_detection = 1
}



We tried several methods.

1.) Direct approach.

cephadm shell


/mapper/mpatha


Errors are attached in 1.output file.



2.) With the help of OSD specifications where mpathX devices are used.


service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/mapper/mpathaj
      - /dev/mapper/mpathan
      - /dev/mapper/mpatham
  db_devices:
    paths:
      - /dev/sdc
  encrypted: true

Errors are attached in 2.output file.


3.) With the help of OSD specifications where dm-X devices are used.

service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/dm-1
      - /dev/dm-2
      - /dev/dm-3
      - /dev/dm-X
  db_devices:
    size: ':2TB'
  encrypted: true

Errors are attached in 3.output file.

What is the right method for multipath deployments? I didn't find much on this topic.


Thank you

Michal

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



--
----
Thomas Roth
HPC Department

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstr. 1, 64291 Darmstadt, http://www.gsi.de/

Gesellschaft mit beschraenkter Haftung

Sitz der Gesellschaft / Registered Office:Darmstadt
Handelsregister   / Commercial Register:
  Amtsgericht Darmstadt, HRB 1528

Geschaeftsfuehrung/ Managing Directors:
   Professor Dr. Paolo Giubellino, Ursula Weyrich, Jörg Blaurock

Vorsitzender des GSI-Aufsichtsrates /
Chairman of the Supervisory Board:
 Staatssekretaer / State Secretary Dr. Georg Schütte
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Multipath and cephadm

2022-01-25 Thread Thomas Roth

Would like to know that as well.

I have the same setup - cephadm, Pacific, CentOS8, and a host with a number of 
HDDs which are all connected by 2 paths.
No way to use these without multipath

> ceph orch daemon add osd serverX:/dev/sdax

> Cannot update volume group ceph-51f8b9b0-2917-431d-8a6d-8ff90440641b with 
duplicate PV devices

(because sdax == sdce, etc.)

and with multipath, it fails with

> ceph orch daemon add osd serverX:/dev/mapper/mpathbq

> podman: stderr -->  IndexError: list index out of range


Quite strange that the 'future of storage' does not know how to handle 
multipath devices?

Regards,
Thomas


On 12/23/21 18:40, Michal Strnad wrote:

Hi all.

We have a problem using disks accessible via multipath. We are using cephadm for deployment, the Pacific version for containers, CentOS 8 Stream on the servers, 
and the following LVM configuration.


devices {
     multipath_component_detection = 1
}



We tried several methods.

1.) Direct approach.

cephadm shell 


/mapper/mpatha


Errors are attached in 1.output file.



2.) With the help of OSD specifications where mpathX devices are used.

service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/mapper/mpathaj
      - /dev/mapper/mpathan
      - /dev/mapper/mpatham
  db_devices:
    paths:
      - /dev/sdc
  encrypted: true

Errors are attached in 2.output file.


3.) With the help of OSD specifications where dm-X devices are used.

service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/dm-1
      - /dev/dm-2
      - /dev/dm-3
      - /dev/dm-X
  db_devices:
    size: ':2TB'
  encrypted: true

Errors are attached in 3.output file.

What is the right method for multipath deployments? I didn't find much on this 
topic.

Thank you

Michal

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



--
--------
Thomas Roth
HPC Department

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstr. 1, 64291 Darmstadt, http://www.gsi.de/

Gesellschaft mit beschraenkter Haftung

Sitz der Gesellschaft / Registered Office:Darmstadt
Handelsregister   / Commercial Register:
Amtsgericht Darmstadt, HRB 1528

Geschaeftsfuehrung/ Managing Directors:
 Professor Dr. Paolo Giubellino, Ursula Weyrich, Jörg Blaurock

Vorsitzender des GSI-Aufsichtsrates /
  Chairman of the Supervisory Board:
   Staatssekretaer / State Secretary Dr. Georg Schütte
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Thomas Roth

Thank you all for the clarification!

I just did not grasp the concept before, probably because I am used to those systems that form a layer on top of the local file system. If ceph does 
it all, down to the magnetic platter, all the better.


Cheers
Thomas

On 6/22/21 12:15 PM, Marc wrote:


That is the idea, what is wrong with this concept? If you aggregate disks, you 
still aggregate 70 disks, and you still have 70 disks.
Everything you do that ceph can't be aware of creates a potential 
misinterpretation of reality and makes ceph act in a way it should not.
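
For example, this one-OSD-per-device model is exactly what ceph-volume's batch mode does (a sketch; 
the device names are made up):

  # one OSD, on its own LV, for each listed device; --report only shows what would be created
  ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde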




-Original Message-
Sent: Tuesday, 22 June 2021 11:55
To: ceph-users@ceph.io
Subject: [ceph-users] HDD <-> OSDs

Hi all,

newbie question:

The documentation seems to suggest that with ceph-volume, one OSD is
created for each HDD (cf. 4-HDD-example in
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-
ref/)

This seems odd: what if a server has a finite number of disks? I was
going to try cephfs on ~10 servers with 70 HDDs each. That would mean
each system has to deal with 70 OSDs, on 70 LVs?

Really no aggregation of the disks?


Regards,
Thomas
--
----
Thomas Roth
Department: IT

GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] HDD <-> OSDs

2021-06-22 Thread Thomas Roth

Hi all,

newbie question:

The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. 4-HDD-example in 
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/)


This seems odd: what if a server has a finite number of disks? I was going to try cephfs on ~10 servers with 70 HDDs each. That would mean each system 
has to deal with 70 OSDs, on 70 LVs?


Really no aggregation of the disks?


Regards,
Thomas
--

Thomas Roth
Department: IT

GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io