[ceph-users] Docker & CEPH-CRASH

2021-09-14 Thread Guilherme Geronimo
Hey Guys! I'm running my entire cluster (12 hosts / 89 OSDs - v15.2.22) on Docker and everything runs smoothly. But I'm kind of "blind" here: ceph-crash is not running inside the containers, and there's nothing related to "ceph-crash" in the docker logs either. Is there a special way to
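For reference, a minimal sketch of how the crash reports can still be reached, assuming a standard ceph/ceph image and the usual host paths (container names, tags and the osd/crash ids below are placeholders, not from the thread):

    # <mon-or-mgr-container> and <crash-id> are placeholders.
    # Crash reports are handled by the mgr "crash" module, so any container
    # with a client keyring can list and inspect them:
    docker exec <mon-or-mgr-container> ceph crash ls
    docker exec <mon-or-mgr-container> ceph crash info <crash-id>

    # ceph-crash itself only watches /var/lib/ceph/crash and posts new dumps,
    # so it could be run as its own small container with that directory and a
    # keyring mounted in (image tag is illustrative):
    docker run -d --name ceph-crash \
      -v /var/lib/ceph/crash:/var/lib/ceph/crash \
      -v /etc/ceph:/etc/ceph:ro \
      --entrypoint /usr/bin/ceph-crash \
      ceph/ceph:v15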

[ceph-users] Re: OSD based ec-code

2021-09-14 Thread Szabo, Istvan (Agoda)
Yeah, I understand this point as well. So you keep 3 nodes as a kind of 'spare' for data rebalancing. Do you use their space, or do you maximize usage at the 11-node cluster level? The reason I'm asking: I have a cluster with 6 hosts on a 4:2 EC pool, and I'm planning to add 1 more node, but as a spare only, so

[ceph-users] Re: cephfs small files expansion

2021-09-14 Thread Josh Baergen
Hey Seb, > I have a test cluster on which I created rbd and cephfs pools (Octopus). When I copy a directory containing many small files onto the RBD pool, the USED column of the ceph df output looks normal; on CephFS, on the other hand, the USED column looks really abnormal. I tried to change the
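Josh's reply is truncated above, but going by the follow-ups in this thread it concerns the BlueStore allocation unit, which defaults to 64K for HDD OSDs on Octopus. A quick way to check (the osd id is a placeholder):

    # Per-pool space accounting:
    ceph df detail

    # Allocation unit the OSDs are configured with, via the admin socket:
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    ceph daemon osd.0 config get bluestore_min_alloc_size_ssd

Each small file stored on CephFS occupies at least one allocation unit per replica, which is where the apparent expansion comes from.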

[ceph-users] Re: Metrics for object sizes

2021-09-14 Thread Szabo, Istvan (Agoda)
Definitely still pending from my side, thank you very much, I'll give it a try. Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com

[ceph-users] Re: rbd info flags

2021-09-14 Thread Anthony D'Atri
These are known as “feature flags” or just “features”: https://docs.ceph.com/en/pacific/rbd/rbd-config-ref/ > On Sep 14, 2021, at 4:02 AM, Budai Laszlo wrote: > Hello all! What are the flags shown in the rbd info output? Where can I read more about them? > Thank you, > Laszlo
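For anyone landing here later, a quick sketch (pool and image names are placeholders):

    # Shows the image's features and any runtime flags:
    rbd info mypool/myimage

    # Features can be toggled per image, e.g. for older kernel clients:
    rbd feature disable mypool/myimage object-map fast-diff deep-flatten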

[ceph-users] Re: cephfs small files expansion

2021-09-14 Thread Anthony D'Atri
Yes, RBD volumes are typically much larger than bluestore_min_alloc_size. Your client filesystem is typically built *within* an RBD volume, but to Ceph it’s a single, monolithic image. > On Sep 14, 2021, at 7:27 AM, Sebastien Feminier wrote: > Thanks Josh, my cluster is Octopus on

[ceph-users] Re: OSD based ec-code

2021-09-14 Thread David Orman
We don't allow usage to grow past the threshold at which losing those servers would impact the cluster. We keep usage low enough (we remove two hosts' worth of capacity from the overall cluster allocation limit in our provisioning and management systems) to tolerate at least 2 failures while still
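Illustrative arithmetic only (the numbers are not from the thread): with 12 equal hosts and headroom reserved for 2 host failures, usable capacity would be capped at roughly (12 - 2) / 12 ≈ 83% of raw, and in practice somewhat lower so that recovery does not push the surviving OSDs toward the full ratios:

    ceph osd df tree            # per-host / per-OSD utilisation
    ceph osd dump | grep ratio  # current nearfull/backfillfull/full ratios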

[ceph-users] Re: cephfs small files expansion

2021-09-14 Thread Sebastien Feminier
Thanks Josh, my cluster is Octopus on HDD (for testing), so I have to change bluestore_min_alloc_size and then re-create the OSDs? Is it normal that my RBD pool shows no size amplification? > Hey Seb, >> I have a test cluster on which I created pools rbd and
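A sketch of the usual procedure, assuming a disposable test cluster (the osd id is a placeholder): the option only takes effect at mkfs time, so it is set first and the OSDs are re-created afterwards.

    # Value in bytes; only applies to OSDs created after the change:
    ceph config set osd bluestore_min_alloc_size_hdd 4096

    # Then take each OSD out, stop its daemon/container, destroy and redeploy:
    ceph osd out osd.3
    # stop the osd.3 daemon, then:
    ceph osd purge osd.3 --yes-i-really-mean-it
    # ...recreate with ceph-volume or your orchestrator...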

[ceph-users] Re: OSD based ec-code

2021-09-14 Thread David Orman
Keep in mind performance, as well. Once you start getting into higher 'k' values with EC, you've got a lot more drives involved that need to return completions for operations, and on rotational drives this becomes especially painful. We use 8+3 for a lot of our purposes, as it's a good balance of
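A sketch of such a profile with host as the failure domain (profile and pool names and the device class are illustrative):

    ceph osd erasure-code-profile set ec-8-3 \
        k=8 m=3 \
        crush-failure-domain=host \
        crush-device-class=hdd
    ceph osd pool create ecpool 32 32 erasure ec-8-3

Note that k+m=11 means every write and every deep-scrub touches 11 hosts, which is where the latency concern on rotational media comes from.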

[ceph-users] osd: mkfs: bluestore_stored > 235GiB from start

2021-09-14 Thread Konstantin Shalygin
Hi, one of the OSDs allocates ~235 GiB of space right after deployment, on its first (mkfs) run. OSD perf dump after boot, without any PGs: "bluestore_allocated": 253647523840, "bluestore_stored": 252952705757. I created a tracker for this [1]; maybe someone has already encountered a similar problem? Thanks,
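For anyone wanting to reproduce the check, those counters come from the OSD's admin socket (the osd id is a placeholder):

    ceph daemon osd.0 perf dump | grep -E '"bluestore_(allocated|stored)"'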

[ceph-users] Re: OSD based ec-code

2021-09-14 Thread Eugen Block
Hi, consider yourself lucky that you haven't had a host failure. But I would not draw the wrong conclusions here and change the failure domain based on luck. In our production cluster we have an EC pool for archive purposes; it all went well for quite some time, and last Sunday one of the
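To see which failure domain an existing EC pool actually uses, the profile and the CRUSH rule it was created with can be inspected (pool, profile and rule names are placeholders):

    ceph osd pool get mypool erasure_code_profile
    ceph osd erasure-code-profile get myprofile   # includes crush-failure-domain
    ceph osd pool get mypool crush_rule
    ceph osd crush rule dump myrule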

[ceph-users] Re: Metrics for object sizes

2021-09-14 Thread Yuval Lifshitz
Hi Istvan, Hope this is still relevant... but you may want to have a look at this example: https://github.com/ceph/ceph/blob/master/examples/lua/prometheus_adapter.lua https://github.com/ceph/ceph/blob/master/examples/lua/prometheus_adapter.md where we log RGW object sizes to Prometheus. would
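For reference, assuming the RGW Lua scripting support (Pacific onwards, if I recall correctly), the example script is loaded roughly like this:

    radosgw-admin script put --infile=prometheus_adapter.lua --context=postRequest
    # and removed again with:
    radosgw-admin script rm --context=postRequest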

[ceph-users] cephfs small files expansion

2021-09-14 Thread Sebastien Feminier
Hi Ceph folks, I have a test cluster on which I created rbd and cephfs pools (Octopus). When I copy a directory containing many small files onto the RBD pool, the USED column of the ceph df output looks normal; on CephFS, on the other hand, the USED column looks really abnormal. I tried to change the

[ceph-users] rbd info flags

2021-09-14 Thread Budai Laszlo
Hello all! What are the flags shown in the rbd info output? Where can I read more about them? Thank you, Laszlo

[ceph-users] Re: Ignore Ethernet interface

2021-09-14 Thread Robert Sander
Hi Dominik, on 14.09.21 at 09:20 Dominik Baack wrote: "For now, we do not use IPoIB. If I remember correctly, bonding interfaces over two distinct protocols could be problematic, but I will look into it." Ah, I had not seen that two of the interfaces are InfiniBand. But then it will not
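Since Ceph binds to networks rather than to interface names, the usual way to make it ignore an interface is to restrict the public/cluster networks to the subnets you actually want (the subnets below are placeholders):

    # Placeholder subnets; use the ranges of the interfaces Ceph should use:
    ceph config set global public_network  192.168.10.0/24
    ceph config set global cluster_network 192.168.20.0/24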

[ceph-users] Re: How many concurrent users can be supported by a single Rados gateway

2021-09-14 Thread Szabo, Istvan (Agoda)
Haven’t found the proper metrics yet, but something is maxing out and can’t handle the traffic; we are always facing this with Harbor and GitLab. Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e:

[ceph-users] Re: The best way of backup S3 buckets

2021-09-14 Thread mhnx
You're right, but when I reached out to NCW he published a new metadata build of rclone, which you can download here: https://beta.rclone.org/branch/fix-111-metadata/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata/ I've transferred almost 500M objects with the metadata build and I didn't hit a single
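For context, a bucket-to-bucket copy with rclone looks roughly like this (remote and bucket names are placeholders; the metadata handling from that branch later landed in mainline rclone as the --metadata/-M flag, as far as I know):

    # "src" and "dst" are S3 remotes already configured in rclone.conf:
    rclone sync src:source-bucket dst:dest-bucket \
        --metadata \
        --transfers 64 --checkers 128 \
        --fast-list --progress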

[ceph-users] Re: Ignore Ethernet interface

2021-09-14 Thread Dominik Baack
Thank you very much for your response. For now, we do not use IPoIB. If I remember correctly, bonding interfaces over two distinct protocols could be problematic, but I will look into it. Cheers, Dominik Baack. On 13.09.2021 at 09:38 Robert Sander wrote: On 10.09.21 at 20:06 Dominik

[ceph-users] Re: Problem with multi zonegroup configuration

2021-09-14 Thread Boris Behrens
Does anyone have ideas? I am not even sure I am thinking about this correctly. I only want the users and bucket names synced, so that they are unique, but not the data; I don't want redundancy. The documentation reads as if I need multiple zonegroups with a single zone each. On Mon, 13 Sept.
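That reading matches the multisite model: metadata (users, buckets) syncs realm-wide, while data only replicates between zones inside the same zonegroup, so one zonegroup per site with a single zone each gives unique users and bucket names without data redundancy. A rough sketch (realm, zonegroup and zone names, endpoints and keys are all placeholders):

    # Metadata master site:
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=zg-a --rgw-realm=myrealm \
        --endpoints=http://rgw-a:8080 --master --default
    radosgw-admin zone create --rgw-zonegroup=zg-a --rgw-zone=zone-a \
        --endpoints=http://rgw-a:8080 --master --default
    radosgw-admin period update --commit

    # Second site: pull the realm, then add its own zonegroup and zone:
    radosgw-admin realm pull --url=http://rgw-a:8080 --access-key=<key> --secret=<secret>
    radosgw-admin zonegroup create --rgw-zonegroup=zg-b --rgw-realm=myrealm \
        --endpoints=http://rgw-b:8080 --default
    radosgw-admin zone create --rgw-zonegroup=zg-b --rgw-zone=zone-b \
        --endpoints=http://rgw-b:8080 --default
    radosgw-admin period update --commit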