[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-16 Thread Drew Weaver
> Groovy. Channel drives are IMHO a pain, though in the case of certain manufacturers it can be the only way to get firmware updates. Channel drives often only have a 3-year warranty, vs 5 for generic drives.
Yes, we have run into this with Kioxia as far as being able to find new firmware.

[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-16 Thread Drew Weaver
By HBA I suspect you mean a non-RAID HBA? Yes, something like the HBA355. NVMe SSDs shouldn't cost significantly more than SATA SSDs. Hint: certain tier-one chassis manufacturers mark both the fsck up. You can get a better warranty and pricing by buying drives from a VAR. We

[ceph-users] Re: recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-15 Thread Drew Weaver
On Fri, Jan 12, 2024 at 02:32:12PM +0000, Drew Weaver wrote: > Hello, > So we were going to replace a Ceph cluster with some hardware we had laying around using SATA HBAs, but I was told that the only right way to build Ceph in 2023 is with direct attach NVMe.

[ceph-users] recommendation for barebones server with 8-12 direct attach NVMe?

2024-01-12 Thread Drew Weaver
Hello, So we were going to replace a Ceph cluster with some hardware we had laying around using SATA HBAs, but I was told that the only right way to build Ceph in 2023 is with direct attach NVMe. Does anyone have any recommendation for a 1U barebones server (we just drop in RAM, disks, and CPUs)

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-26 Thread Drew Weaver
Sorry, I thought of one more thing. I was actually re-reading the hardware recommendations for Ceph and it seems to imply that both RAID controllers as well as HBAs

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Drew Weaver
actually build a cluster with many disks, 12-14 each per server, without any HBAs in the servers. Are there certain HBAs that are worse than others? Sorry, I am just confused. Thanks, -Drew -Original Message- From: Drew Weaver Sent: Thursday, December 21, 2023 8:51 AM To: 'ceph-users@ceph.io

[ceph-users] Building new cluster had a couple of questions

2023-12-21 Thread Drew Weaver
Howdy, I am going to be replacing an old cluster pretty soon and I am looking for a few suggestions.
#1 cephadm or ceph-ansible for management?
#2 Since the whole... CentOS thing... what distro appears to be the most straightforward to use with Ceph? I was going to try and deploy it on Rocky
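Not from the thread, but for reference: if cephadm is the answer to #1, a minimal bootstrap on a fresh host looks roughly like the sketch below (addresses and hostnames are placeholders).

```
# Minimal cephadm bootstrap sketch; addresses and hostnames are placeholders.
cephadm bootstrap --mon-ip 192.0.2.10        # first monitor/manager host
ceph orch host add node2 192.0.2.11          # enroll additional hosts
ceph orch apply osd --all-available-devices  # create OSDs on every unused disk
```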

[ceph-users] Re: iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around? [EXT]

2023-02-14 Thread Drew Weaver
Thanks, -Drew -Original Message- From: Dave Holland Sent: Tuesday, February 14, 2023 11:39 AM To: Drew Weaver Cc: 'ceph-users@ceph.io' Subject: Re: [ceph-users] iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around? [EXT] On Tue, Feb 14, 2023 at 04:00

[ceph-users] iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around?

2023-02-14 Thread Drew Weaver
Hello, After upgrading a lot of iDRAC9 modules to version 6.10 in servers that are involved in a Ceph cluster, we noticed that the iDRAC9 shows the write endurance as 0% on any non-certified disk. OMSA still shows the correct remaining write endurance, but I am assuming that they are working
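Not part of the original message, but a vendor-independent way to read drive wear directly with smartmontools, assuming the drives are reachable from the OS (device names and controller IDs below are placeholders):

```
# NVMe drives: wear shows up as "Percentage Used" in the SMART health log
smartctl -a /dev/nvme0 | grep -i 'percentage used'
# SATA/SAS drives behind a PERC/MegaRAID controller: address them by device ID
smartctl -a -d megaraid,0 /dev/sda | grep -iE 'wear|percent'
```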

[ceph-users] Recommended SSDs for Ceph

2022-09-29 Thread Drew Weaver
Hello, We had been using Intel SSD D3 S4610/20 SSDs but Solidigm is... having problems. Bottom line is they haven't shipped an order in a year. Does anyone have any recommendations on SATA SSDs that have a fairly good mix of performance/endurance/cost? I know that they should all just work

[ceph-users] Re: RGW performance as a Veeam capacity tier

2021-09-30 Thread Drew Weaver
of performance out of it. So on a 4-disk RAID 10 you get about 30 MB/s when offloading. -Original Message- From: Konstantin Shalygin Sent: Saturday, July 10, 2021 10:28 AM To: Nathan Fish Cc: Drew Weaver; ceph-users@ceph.io Subject: Re: [ceph-users] Re: RGW performance as a Veeam capacity tier

[ceph-users] Migrating CEPH OS looking for suggestions

2021-09-30 Thread Drew Weaver
Hi, I am going to migrate our Ceph cluster to a new OS and I am trying to choose the right one so that I won't have to replace it again when python4 becomes a requirement mid-cycle [or whatever]. Has anyone seen any recommendations from the devs as to what distro they are targeting for, let's

[ceph-users] RGW performance as a Veeam capacity tier

2021-07-09 Thread Drew Weaver
Greetings. I've begun testing Ceph 14.2.9 as a capacity tier for a scale-out backup repository in Veeam 11. The backup host and the RGW server are connected directly at 10 Gbps. It would appear that the maximum throughput that Veeam is able to achieve while archiving data to this cluster
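Not from the thread: to separate RGW's limits from Veeam's offload behaviour, it can help to benchmark the same endpoint with a generic S3 client. A sketch using rclone; the remote and bucket names are placeholders, and the remote is assumed to already be configured with `rclone config`:

```
# Push a local data set at the RGW endpoint outside of Veeam to measure raw S3 throughput.
rclone copy ./testdata rgw:benchbucket --transfers 16 --progress
```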

[ceph-users] Rolling upgrade model to new OS

2021-06-04 Thread Drew Weaver
Hello, I need to upgrade the OS that our Ceph cluster is running on to support new versions of Ceph. Has anyone devised a model for how you handle this? Do you just:
- Install some new nodes with the new OS
- Install the old version of Ceph on the new nodes
- Add those nodes/OSDs to the cluster
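Not from the thread, but the other common pattern is to reinstall the OS host by host while keeping the OSD data intact; a minimal sketch, assuming BlueStore/LVM OSDs whose data disks are untouched by the reinstall:

```
ceph osd set noout                 # keep CRUSH from rebalancing while the host is down
# ...reinstall the OS, leaving the OSD data disks untouched...
# then, with Ceph packages, ceph.conf and keyrings restored on the host:
ceph-volume lvm activate --all     # rediscover and start the existing OSDs
ceph osd unset noout               # once the host's OSDs are back up and in
```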

[ceph-users] Replacing disk with xfs on it, documentation?

2021-03-09 Thread Drew Weaver
Hello, I haven't needed to replace a disk in a while and it seems that I have misplaced my quick little guide on how to do it. When searching the docs, it is now recommending that you should use ceph-volume to create OSDs; when doing that it creates an LV: Disk /dev/sde: 4000.2 GB, 4000225165312
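The message is cut off here; for reference, a minimal ceph-volume based replacement might look like the sketch below. The OSD id and device name are placeholders, and it assumes the failed OSD's daemon has already been stopped.

```
# OSD id 12 and /dev/sde are placeholders for the failed OSD and its replacement disk.
ceph osd out 12
ceph osd purge 12 --yes-i-really-mean-it   # remove the OSD from CRUSH, auth and the osdmap
ceph-volume lvm zap /dev/sde --destroy     # wipe the replacement disk, removing any old LVs
ceph-volume lvm create --data /dev/sde     # create a new BlueStore OSD on it
```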

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
.dir.2b67ef7c-2015-4ca0-bf50-b7595d01e46e.74194.637.14 10413
.dir.2b67ef7c-2015-4ca0-bf50-b7595d01e46e.74194.637.9 10356
.dir.2b67ef7c-2015-4ca0-bf50-b7595d01e46e.74194.637.11 10410
-Original Message- From: Benoît Knecht Sent: Friday, March 5, 2021 12:00 PM To: Drew Weaver Cc: 'ceph-use
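The rest of the reply is truncated in this preview; for reference, the usual next step when RGW bucket index shards carry too many omap keys is to reshard the affected bucket. A hedged sketch, with bucket name and shard count as placeholders:

```
# Show object counts per index shard for every bucket
radosgw-admin bucket limit check
# Reshard the offending bucket so its index keys spread over more shards
radosgw-admin bucket reshard --bucket=mybucket --num-shards=64
```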

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
\n" $obj $(rados -p default.rgw.buckets.index listomapkeys $obj | wc -l) done ``` returns this: -bash: command substitution: line 4: syntax error: unexpected end of file I figured perhaps you were using ``` to denote code so I tried running it without that and also on one line and nei

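The script quoted in the 2021-03-05 reply above is truncated in this preview; a complete, runnable version of the same idea (count the omap keys of every object in the RGW bucket index pool and list the largest first; the pool name is assumed to be the default) would be roughly:

```
#!/usr/bin/env bash
# Count omap keys per object in the RGW bucket index pool, largest first.
pool=default.rgw.buckets.index
for obj in $(rados -p "$pool" ls); do
    printf '%s %d\n' "$obj" "$(rados -p "$pool" listomapkeys "$obj" | wc -l)"
done | sort -k2 -nr | head -20
```
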
[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
whether or not it's enabled in the running environment? Thanks, -Drew -Original Message- From: Benoît Knecht Sent: Thursday, March 4, 2021 11:46 AM To: Drew Weaver Cc: 'ceph-users@ceph.io' Subject: Re: [ceph-users] Resolving LARGE_OMAP_OBJECTS Hi Drew, On Thursday, March 4th, 2021
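The preview cuts off what "it" refers to; if the question is whether RGW dynamic bucket index resharding is enabled on the running daemons (an assumption on my part), one way to check a live radosgw is via its admin socket (the socket path below is an example):

```
# Assumption: "it" means rgw_dynamic_resharding; ask a running radosgw for its live value.
ceph --admin-daemon /var/run/ceph/ceph-client.rgw.myhost.asok config get rgw_dynamic_resharding
```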

[ceph-users] Resolving LARGE_OMAP_OBJECTS

2021-03-04 Thread Drew Weaver
Howdy, the dashboard on our cluster keeps showing LARGE_OMAP_OBJECTS. I went through this document: https://www.suse.com/support/kb/doc/?id=19698 I've found that we have a total of 5 buckets, each one owned by a different user. From what I have read on this issue, it seems to flip-flop
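Not part of the original message, but for locating which objects actually trip the warning, the cluster itself records them; a sketch (the log path is the default on a monitor host and may differ):

```
# Which pool(s) the warning refers to:
ceph health detail
# Deep scrub logs the exact object names, e.g. "Large omap object found. Object: ..."
grep -i 'large omap object' /var/log/ceph/ceph.log
```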

[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Drew Weaver
> As I understand it right now Ceph 14 is the last version that will run on CentOS/EL7 but CentOS 8 was "killed off".
> This is wrong. Ceph 15 runs on CentOS 7 just fine, but without the dashboard.
Oh, what I should have said is that I want it to be fully functional.

[ceph-users] Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Drew Weaver
Howdy, After the IBM acquisition of Red Hat, the landscape for CentOS quickly changed. As I understand it, right now Ceph 14 is the last version that will run on CentOS/EL7, but CentOS 8 was "killed off". So given that, if you were going to build a Ceph cluster today, would you even bother doing it

[ceph-users] Re: Trying to upgrade to octopus removes current version of ceph release and tries to install older version...

2020-06-08 Thread Drew Weaver
Never mind, I didn't see that Octopus isn't really supported on C7, so I'll just stick with what I have until I want to upgrade to C8. Thanks, -Drew -Original Message- From: Drew Weaver Sent: Monday, June 8, 2020 1:38 PM To: 'ceph-users@ceph.io' Subject: [ceph-users] Trying to upgrade

[ceph-users] Trying to upgrade to octopus removes current version of ceph release and tries to install older version...

2020-06-08 Thread Drew Weaver
Hi, the cluster is version 14.2.9 with ceph-deploy v2.0.1. Using the command: ceph-deploy install --release octopus mon0 mon1 mon2, the result is this command being run: sudo yum remove -y ceph-release, which removes this package: ceph-release noarch 1-1.el7 @/ceph-release-1-0.el7.noarch. Then it tries to
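Not from the thread, but when ceph-deploy pulls in an unexpected release, it can help to check what the ceph-release repo package actually points at and which versions the enabled repos offer; a sketch:

```
# Inspect the installed repo package and the repo file it drops
rpm -qi ceph-release
cat /etc/yum.repos.d/ceph.repo
# List every Ceph version the currently enabled repos can provide
yum --showduplicates list ceph
```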

[ceph-users] Re: Choosing suitable SSD for Ceph cluster

2019-10-25 Thread Drew Weaver
Not related to the original topic, but the Micron case in that article is fascinating and a little surprising. With pretty much best-in-class hardware in a lab environment:
Potential 25,899,072 4KiB random write IOPS goes to 477K
Potential 23,826,216 4KiB random read IOPS goes to 2,000,000
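For scale, here is the delivered-versus-raw ratio implied by those two numbers (a back-of-the-envelope check using only the figures quoted above):

```
# Delivered cluster IOPS as a share of the summed raw drive IOPS quoted above.
awk 'BEGIN {
  printf "write: %.1f%%\n", 477000  / 25899072 * 100;   # roughly 1.8%
  printf "read:  %.1f%%\n", 2000000 / 23826216 * 100;   # roughly 8.4%
}'
```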

[ceph-users] Re: iSCSI write performance

2019-10-24 Thread Drew Weaver
I was told by someone at Red Hat that iSCSI performance is still several orders of magnitude behind using the native client/driver. Thanks, -Drew -Original Message- From: Nathan Fish Sent: Thursday, October 24, 2019 1:27 PM To: Ryan Cc: ceph-users Subject: [ceph-users] Re: iSCSI write