[ceph-users] hardware setup recommendations wanted

2023-08-28 Thread Kai Zimmer
Dear listers, my employer already has a production Ceph cluster running but we need a second one. I just wanted to ask your opinion on the following setup. It is planned for 500 TB net capacity, expandable to 2 PB. I expect the number of OSD servers to double in the next 4 years. Erasure Cod
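
As an illustration only (not part of the original message, where the erasure-coding details are cut off), a minimal sketch of creating an EC profile and pool for such a cluster; the k=4/m=2 layout, names and PG counts are assumptions:

  # Hypothetical profile: 4 data + 2 coding chunks, spread across hosts.
  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  # Create a data pool using that profile (pool name and PG numbers are illustrative).
  ceph osd pool create ec-data 256 256 erasure ec-4-2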

[ceph-users] Re: Windows 2016 RBD Driver install failure

2023-08-28 Thread Lucian Petrut
Hi, Windows Server 2019 is the minimum supported version for rbd-wnbd (https://github.com/cloudbase/wnbd#requirements). You may use ceph-dokan (cephfs) with Windows Server 2016 by disabling the WNBD driver when running the MSI installer. Regards, Lucian From: Robert Ford
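
For readers following along, a hedged sketch of mounting CephFS with ceph-dokan after installing without the WNBD driver; the drive letter is an assumption and the exact options should be checked against the installed version:

  # Mount the default CephFS as drive X: using the local ceph.conf and keyring.
  ceph-dokan.exe -l x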

[ceph-users] Status of diskprediction MGR module?

2023-08-28 Thread Robert Sander
Hi, Several years ago the diskprediction module was added to the MGR collecting SMART data from the OSDs. There were local and cloud modes available claiming different accuracies. Now only the local mode remains. What is the current status of that MGR module (diskprediction_local)? We have
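
For context, enabling and querying the local predictor looks roughly like this (the device ID is a placeholder):

  ceph mgr module enable diskprediction_local
  ceph config set global device_failure_prediction_mode local
  # Ask the predictor for the life expectancy of one device.
  ceph device predict-life-expectancy <devid>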

[ceph-users] Re: What does 'removed_snaps_queue' [d5~3] means?

2023-08-28 Thread Eugen Block
It would be helpful to know what exactly happened. Who creates the snapshots and how? What are your clients, openstack compute nodes? If an 'rbd ls' shows some output, does 'rbd status <pool>/<image>' display any info as well or does it return an error? This is a recurring issue if client connections b
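
The pool and image names below are placeholders; the point is simply to compare the image listing with the per-image watcher status:

  # List images in the pool, then check whether clients still hold watches on one.
  rbd ls <pool>
  rbd status <pool>/<image>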

[ceph-users] Re: Status of diskprediction MGR module?

2023-08-28 Thread Konstantin Shalygin
Hi, > On 28 Aug 2023, at 12:45, Robert Sander wrote: > > Several years ago the diskprediction module was added to the MGR collecting > SMART data from the OSDs. > > There were local and cloud modes available claiming different accuracies. Now > only the local mode remains. > > What is the cu

[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-28 Thread Adam King
cephadm piece of rados can be approved. Failures all look known to me. On Fri, Aug 25, 2023 at 4:06 PM Radoslaw Zarzynski wrote: > rados approved > > On Thu, Aug 24, 2023 at 12:33 AM Laura Flores wrote: > >> Rados summary is here: >> https://tracker.ceph.com/projects/rados/wiki/PACIFIC#Pacific-

[ceph-users] Re: Status of diskprediction MGR module?

2023-08-28 Thread Robert Sander
On 8/28/23 13:26, Konstantin Shalygin wrote: The module doesn't have new commits for more than two years So diskprediction_local is unmaintained. Will it be removed? It looks like a nice feature, but when you try to use it, it's useless. I suggest using smartctl_exporter [1] for monitoring drive
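
As an illustration only (not from the thread), a Prometheus scrape job for smartctl_exporter might look like the following; the job name, target hosts and port are assumptions to verify against the exporter version in use:

  scrape_configs:
    - job_name: smartctl          # illustrative name
      static_configs:
        - targets:
            - osd-host-1:9633     # assumed default smartctl_exporter port
            - osd-host-2:9633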

[ceph-users] Re: Status of diskprediction MGR module?

2023-08-28 Thread Anthony D'Atri
>> The module doesn't have new commits for more than two years > > So diskprediction_local is unmaintained. Will it be removed? > It looks like a nice feature but when you try to use it it's useless. IIRC it has only a specific set of drive models, and the binary blob from ProphetStor. >> I sugg

[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-28 Thread Yuri Weinstein
I am waiting for checks to pass and will merge one remaining PR, https://github.com/ceph/ceph/pull/53157, and will start the build as soon as it is merged. On Mon, Aug 28, 2023 at 4:57 AM Adam King wrote: > cephadm piece of rados can be approved. Failures all look known to me. > > On Fri, Aug 25,

[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-28 Thread Satish Patel
I have replaced the Samsung with an Intel P4600 6.4TB NVMe (I have created 3 OSDs on top of the NVMe). Here is the result: (venv-openstack) root@os-ctrl1:~# rados -p test-nvme -t 64 -b 4096 bench 10 write hints = 1 Maintaining 64 concurrent writes of 4096 bytes to objects of size 4096 for up to 10 seconds or
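
For completeness, the read-side counterpart of that benchmark would look roughly like this (same pool; write with --no-cleanup first so there are objects to read back):

  # Write phase, keeping the objects for a later read test.
  rados -p test-nvme -t 64 -b 4096 bench 10 write --no-cleanup
  # Sequential read of the objects written above, then remove them.
  rados -p test-nvme -t 64 bench 10 seq
  rados -p test-nvme cleanup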

[ceph-users] Re: A couple OSDs not starting after host reboot

2023-08-28 Thread apeisker
Hi, Thank you for your reply. I don’t think the device names changed, but ceph seems to be confused about which device the OSD is on. It’s reporting that there are 2 OSDs on the same device although this is not true. ceph device ls-by-host | grep sdu ATA_HGST_HUH728080ALN600_VJH4GLUX sdu osd.
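
One way to cross-check what the daemon itself registered against the health-tracking table, with a placeholder OSD id (the second command assumes a cephadm-managed host):

  # Backing device(s) as reported by the OSD daemon.
  ceph osd metadata <osd-id> | grep -E 'devices|device_paths'
  # LVs and devices as seen by ceph-volume on the OSD host.
  cephadm ceph-volume lvm list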

[ceph-users] two ways of adding OSDs? LVM vs ceph orch daemon add

2023-08-28 Thread Giuliano Maggi
Hi, I am learning about Ceph, and I found these two ways of adding OSDs: https://docs.ceph.com/en/quincy/install/manual-deployment/#short-form (via LVM) AND https://docs.ceph.com/en/quincy/cephadm/services/osd/#creating-new-
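
Side by side, the two forms look roughly like this (device path and hostname are placeholders):

  # Manual / ceph-volume short form, run on the OSD host itself.
  ceph-volume lvm create --data /dev/sdX
  # cephadm orchestrator form, run from a node with an admin keyring.
  ceph orch daemon add osd <host>:/dev/sdX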

[ceph-users] Questions since updating to 18.0.2

2023-08-28 Thread Curt
Hello, We recently upgraded our cluster to version 18 and I've noticed some things that I'd like feedback on before I go down a rabbit hole for non-issues. cephadm was used for the upgrade and there were no issues. The cluster is 56 OSDs (spinners), for right now only used for RBD images. I've noticed

[ceph-users] Reef - what happened to OSD spec?

2023-08-28 Thread Nigel Williams
We upgraded to Reef from Quincy, all went smoothly (thanks Ceph developers!) When adding OSDs, the process seems to have changed: the docs no longer mention OSD spec, and giving it a try, it fails when it bumps into the root drive (which has an active LVM). I expect I can add a filter to avoid it.
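
For reference, a drive-group style OSD spec applied with 'ceph orch apply -i osd-spec.yaml' can still be used to filter devices; the service id and the rotational filter below are illustrative assumptions (they only skip the root drive if it is non-rotational):

  service_type: osd
  service_id: hdd-osds            # illustrative name
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1               # example filter to exclude SSD/NVMe devices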

[ceph-users] Re: Reef - what happened to OSD spec?

2023-08-28 Thread Nigel Williams
On Tue, 29 Aug 2023 at 10:09, Nigel Williams wrote: > and giving it a try it fails when it bumps into the root drive (which has > an active LVM). I expect I can add a filter to avoid it. > I found the cause of this initial failure when applying the spec from the web-gui. Even though I (thought)