[ceph-users] Re: Where is a simple getting started guide for a very basic cluster?

2023-11-28 Thread Robert Sander
On 11/28/23 17:50, Leo28C wrote:
> Problem is I don't have the cephadm command on every node. Do I need it on
> all nodes or just one of them? I tried installing it via curl, but my ceph
> version is 14.2.22, which is not in the archive anymore, so the curl
> command returns a 404 error HTML file. How do
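For reference, a minimal sketch of the kind of fetch being discussed, assuming a cephadm-era release is targeted (cephadm ships with Octopus 15.x and later, so Nautilus 14.2.x has no matching script); the branch name below is an illustrative assumption, not a fix for 14.2.22:

    # Assumption: pull the standalone cephadm script from a release branch
    # that still ships it as a single file (pacific used here as an example).
    curl --silent --remote-name --location \
        https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm version

    # cephadm itself only needs to be on the node you bootstrap from; further
    # hosts are added over SSH afterwards with "ceph orch host add".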

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Anthony D'Atri
Sent too quickly. Also note that consumer / client SSDs often don’t have
power-loss protection, so if your whole cluster were to lose power at the
wrong time, you might lose data.

> On Nov 28, 2023, at 8:16 PM, Anthony D'Atri wrote:
>
>>> 1) They’re client aka desktop SSDs, not

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Anthony D'Atri
>> 1) They’re client aka desktop SSDs, not “enterprise”
>> 2) They’re a partition of a larger OSD shared with other purposes
>
> Yup. They're a mix of SATA SSDs and NVMes, but everything is
> consumer-grade. They're only 10% full on average and I'm not
> super-concerned with performance.

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Rich Freeman
On Tue, Nov 28, 2023 at 6:25 PM Anthony D'Atri wrote:
> Looks like one 100GB SSD OSD per host? This is AIUI the screaming minimum
> size for an OSD. With WAL, DB, cluster maps, and other overhead there
> doesn’t end up being much space left for payload data. On larger OSDs the
> overhead is

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Anthony D'Atri
>> Very small and/or non-uniform clusters can be corner cases for many
>> things, especially if they don’t have enough PGs. What is your failure
>> domain — host or OSD?
>
> Failure domain is host,

Your host buckets do vary in weight by roughly a factor of two. They
naturally will get PGs

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Rich Freeman
On Tue, Nov 28, 2023 at 3:52 PM Anthony D'Atri wrote:
> Very small and/or non-uniform clusters can be corner cases for many things,
> especially if they don’t have enough PGs. What is your failure domain —
> host or OSD?

Failure domain is host, and PG number should be fairly reasonable.

>

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Wesley Dillingham
It's a complicated topic and there is no one answer; it varies for each
cluster. You have a good lay of the land. I just wanted to mention that the
correct "foundation" for equally utilized OSDs within a cluster relies on two
important factors:

- Symmetry of disk/osd quantity and
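A quick sketch of how that symmetry and the resulting utilization can be inspected with the standard CLI (nothing cluster-specific assumed):

    # Per-host and per-OSD weight, size, %use and PG count in one view.
    ceph osd df tree

    # Overall utilization plus the min/max OSD fill spread.
    ceph osd df
    ceph df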

[ceph-users] Re: Best Practice for OSD Balancing

2023-11-28 Thread Anthony D'Atri
> I'm fairly new to Ceph and running Rook on a fairly small cluster
> (half a dozen nodes, about 15 OSDs).

Very small and/or non-uniform clusters can be corner cases for many things,
especially if they don’t have enough PGs. What is your failure domain — host
or OSD? Are your OSDs sized
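For readers following along, a sketch of how those questions can be answered on a running cluster; the rule name below is an assumption (the default replicated rule):

    # Failure domain: the "type" in the chooseleaf step of the CRUSH rule
    # your pools use (host vs osd).
    ceph osd crush rule dump replicated_rule

    # OSD sizing and per-host layout: CRUSH weights roughly track capacity.
    ceph osd tree

    # PG counts per pool, to judge whether there are enough PGs.
    ceph osd pool ls detail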

[ceph-users] Best Practice for OSD Balancing

2023-11-28 Thread Rich Freeman
I'm fairly new to Ceph and running Rook on a fairly small cluster (half a dozen nodes, about 15 OSDs). I notice that OSD space use can vary quite a bit - upwards of 10-20%. In the documentation I see multiple ways of managing this, but no guidance on what the "correct" or best way to go about
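A hedged sketch of the most commonly recommended of those ways, the built-in balancer in upmap mode; whether it fits a given cluster depends on the points raised in the replies, and the min-compat step matters only if older clients might connect:

    # Check whether the balancer is already running and in which mode.
    ceph balancer status

    # upmap mode generally gives the most even PG distribution, but requires
    # all clients to be Luminous or newer.
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on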

[ceph-users] Bucket/object create/update/delete notification

2023-11-28 Thread Rok Jaklič
Hi, I would like to know whether a bucket or an object got updated. I can get
this from an object's changed etag, but I cannot get an etag for a bucket, so
I am looking at https://docs.ceph.com/en/latest/radosgw/notifications/
How do I create a topic, and where do I send the request with
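A minimal sketch of the flow described on that page, using the AWS CLI against the RGW endpoint; the endpoint, bucket, topic name, push endpoint, and zonegroup ("default") below are made-up placeholders:

    # Create a topic on RGW via its SNS-compatible API, pointing at an HTTP
    # consumer that will receive the events.
    aws --endpoint-url http://rgw.example.com:8000 sns create-topic \
        --name bucket-events \
        --attributes '{"push-endpoint": "http://consumer.example.com:8080"}'

    # Attach a notification for create/delete events on an existing bucket.
    aws --endpoint-url http://rgw.example.com:8000 s3api \
        put-bucket-notification-configuration --bucket mybucket \
        --notification-configuration '{
          "TopicConfigurations": [{
            "Id": "notif1",
            "TopicArn": "arn:aws:sns:default::bucket-events",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
          }]
        }'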

[ceph-users] Re: Rook-Ceph OSD Deployment Error

2023-11-28 Thread P Wagner-Beccard
(Again to the mailing list, oops.) Hi Travis, thanks for your input – it's
greatly appreciated. I assume that my deployment was using v17.2.6, as I
hadn't explicitly specified a version in my provided
rook-ceph-cluster/values.yaml
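For context, pinning the Ceph version in that chart is typically done along these lines; this is a sketch assuming the chart's cephClusterSpec passthrough, so the exact key layout should be checked against the chart version in use:

    # rook-ceph-cluster/values.yaml (excerpt)
    cephClusterSpec:
      cephVersion:
        # Pin an explicit Ceph image instead of relying on the chart default.
        image: quay.io/ceph/ceph:v17.2.6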

[ceph-users] Re: How to speed up rgw lifecycle

2023-11-28 Thread Kai Stian Olstad
On Tue, Nov 28, 2023 at 02:55:56PM +0700, VÔ VI wrote:
> My ceph cluster is using S3 with three pools at approximately 4.5k obj/s,
> and the rgw lifecycle delete rate per pool is only 60-70 objects/s. How can
> I speed up the rgw lc process? 60-70 objects/s is too slow.

It is explained in the
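For reference, the lifecycle tuning options usually discussed in this context are the LC worker settings (a sketch; the values are purely illustrative and the daemons generally need a restart to pick them up):

    # More parallel lifecycle workers / work-pool threads per RGW instance.
    ceph config set client.rgw rgw_lc_max_worker 5
    ceph config set client.rgw rgw_lc_max_wp_worker 9

    # After restarting the RGW daemons, check lifecycle progress with:
    radosgw-admin lc list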