[ceph-users] Re: mclock and massive reads

2024-03-28 Thread Luis Domingues
Luis Domingues Proton AG On Thursday, 28 March 2024 at 10:10, Sridhar Seshasayee wrote: > Hi Luis, > > > So our question, is mClock taking into account the reads as well as the > > writes? Or are the reads calculated to be less expensive than the writes? > >

[ceph-users] mclock and massive reads

2024-03-26 Thread Luis Domingues
, the global speed slows down, and the slow ops disappear. So our question, is mClock taking into account the reads as well as the writes? Or are the reads calculated to be less expensive than the writes? Thanks, Luis Domingues Proton AG
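For readers hitting the same question: the active mClock profile and the capacity estimate an OSD works with can be inspected per OSD. A minimal sketch (osd.0 is a placeholder id; the options are the standard mClock settings):

    # Show the running mClock profile and HDD capacity estimate for one OSD
    ceph config show osd.0 osd_mclock_profile
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd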

[ceph-users] Re: osd_mclock_max_capacity_iops_hdd in Reef

2024-01-08 Thread Luis Domingues
deploy of Pacific, each OSD pushes its own osd_mclock_max_capacity_iops_hdd, but a Reef deployment does not. We did not see any values for the OSDs in the ceph config db. In conclusion, we could say, at least from our pre-update tests, that mClock seems to behave a lot better in Reef than in Pacific. Luis Domingues
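To verify whether OSDs pushed per-OSD capacity values into the config database (as Pacific did on this cluster), something like the following can be used; a sketch, nothing will be listed if no overrides were stored:

    # List any per-OSD mClock capacity overrides stored in the config db
    ceph config dump | grep osd_mclock_max_capacity_iops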

[ceph-users] osd_mclock_max_capacity_iops_hdd in Reef

2024-01-08 Thread Luis Domingues
performances. Did osd_mclock_max_capacity_iops_hdd become useless? I did not find anything regarding it in the changelogs, but I could have missed something. Luis Domingues Proton AG

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Luis Domingues
> Why? The public network should not have any restrictions between the > Ceph nodes. Same with the cluster network. Internal policies and network rules. Luis Domingues Proton AG On Wednesday, 3 January 2024 at 16:15, Robert Sander wrote: > Hi Luis, > > On 1/3/24 16:12,

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Luis Domingues
to add mon1 to the list of hosts. When I apply a spec afterwards with my list of hosts and the IPs where cephadm can reach them, it works fine. But that means that I need to create the client-keyring rule for the _admin label manually as well. Luis Domingues Proton AG On Wednesday, 3 January
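For reference, a host spec of the kind described can be applied like this; a sketch, with hostname, address and label as placeholders:

    # Declare a host with the explicit address cephadm should use to reach it
    cat > host-mon1.yaml <<'EOF'
    service_type: host
    hostname: mon1
    addr: 192.0.2.11
    labels:
      - _admin
    EOF
    ceph orch apply -i host-mon1.yaml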

[ceph-users] cephadm bootstrap on 3 network clusters

2024-01-03 Thread Luis Domingues
to it anyway, and then fails. Thanks, Luis Domingues Proton AG

[ceph-users] Re: cephadm user on cephadm rpm package

2023-11-17 Thread Luis Domingues
a bit redundant to have the cephadm package create that user, when we still need to figure out how to enable cephadm's access to the machines. Anyway, thanks for your reply. Luis Domingues Proton AG On Friday, 17 November 2023 at 13:55, David C. wrote: > Hi, > > You can use the cephadm

[ceph-users] cephadm user on cephadm rpm package

2023-11-17 Thread Luis Domingues
Hi, I noticed when installing the cephadm rpm package, to bootstrap a cluster for example, that a user cephadm was created. But I do not see it used anywhere. What is the purpose of creating a user on the machine where we install the local binary of cephadm? Luis Domingues Proton AG

[ceph-users] Cephadm specs application order

2023-09-27 Thread Luis Domingues
understand that for a specific spec, cephadm will try to match nodes by host, label and then host_pattern. Our question is more at the spec level, and about the order in which cephadm will "loop" over the specs. I hope I was clear enough. Thanks, Luis Domingues
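To make the question concrete, here are two OSD specs whose placements could overlap on the same host; a sketch with made-up service_ids, labels and patterns:

    # Spec A: placement by label
    cat > osd-by-label.yaml <<'EOF'
    service_type: osd
    service_id: hdd_by_label
    placement:
      label: osd
    spec:
      data_devices:
        rotational: 1
    EOF
    # Spec B: placement by host pattern
    cat > osd-by-pattern.yaml <<'EOF'
    service_type: osd
    service_id: hdd_by_pattern
    placement:
      host_pattern: 'node*'
    spec:
      data_devices:
        rotational: 1
    EOF
    ceph orch apply -i osd-by-label.yaml
    ceph orch apply -i osd-by-pattern.yaml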

[ceph-users] Re: cephadm logs

2023-07-30 Thread Luis Domingues
Hi, We are interested in having cephadm log to journald, so I created the ticket: https://tracker.ceph.com/issues/62233 Thanks Luis Domingues Proton AG --- Original Message --- On Saturday, July 29th, 2023 at 20:55, John Mulligan wrote: > On Friday, July 28, 2023 11:51:06 AM

[ceph-users] cephadm logs

2023-07-28 Thread Luis Domingues
? Or if not, is there any reason to log to a file while everything else logs to journald? Thanks Luis Domingues Proton AG

[ceph-users] Re: cephadm and kernel memory usage

2023-07-26 Thread Luis Domingues
or not. The only difference I can see, using smem, is in the noncache kernel memory on containerized machines. Maybe it's a podman issue, maybe a kernel one. It does not seem related to ceph directly. I just asked here to see if anyone got the same issue. Anyway, thanks for your time. Luis Domingues Proton AG
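For anyone wanting to reproduce the comparison, the kernel/userspace split can be read from smem's system view; a sketch, and output rows vary by smem version:

    # Whole-system view: splits memory into firmware, kernel image,
    # kernel dynamic memory (cache vs noncache) and userspace
    smem -w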

[ceph-users] Re: cephadm and kernel memory usage

2023-07-25 Thread Luis Domingues
, it is more around 10G, and it can go up to 40G-50G. Does anyone know if this is expected and why this is the case? Maybe this is a podman-related question and ceph-dev is not the best place to ask this kind of question, but maybe someone using cephadm saw similar behavior. Luis Domingues Proton AG

[ceph-users] Re: cephadm and kernel memory usage

2023-07-24 Thread Luis Domingues
Of course: free -h

                  total   used    free    shared  buff/cache  available
    Mem:          125Gi   96Gi    9.8Gi   4.0Gi   19Gi        7.6Gi
    Swap:         0B      0B      0B

Luis Domingues Proton AG --- Original Message --- On Monday

[ceph-users] cephadm and kernel memory usage

2023-07-24 Thread Luis Domingues
containers with podman the kernel needs a lot more memory? Luis Domingues Proton AG

[ceph-users] Re: cephadm does not redeploy OSD

2023-07-20 Thread Luis Domingues
in my previous e-mail, I am not sure this was the reason why, as I did not find any clear message saying the db_device was ignored. And I did not try to replicate this behavior yet. So yeah, I fixed my issue, but I am not sure if it was just luck or not. Luis Domingues Proton AG

[ceph-users] Re: cephadm does not redeploy OSD

2023-07-19 Thread Luis Domingues
purged OSD.1 with --replace and --zap, and once the disks were empty and ready to go, cephadm just added back OSD.1 and OSD.2 using the db_device as specified. I do not know if this is the intended behavior, or if I was just lucky, but all my OSDs are back in the cluster. Luis Domingues Proton AG
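For reference, the purge step described above maps to the orchestrator commands below; a sketch, with OSD id 1 being the example from the thread:

    # Mark the OSD destroyed (keeping its id for reuse) and zap the disks
    ceph orch osd rm 1 --replace --zap
    # Watch the removal queue and, afterwards, the devices becoming available
    ceph orch osd rm status
    ceph orch device ls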

[ceph-users] Re: cephadm does not redeploy OSD

2023-07-18 Thread Luis Domingues
4-7Sxe-80GE-EcywDb",
    "name": "osd-block-db-5cb8edda-30f9-539f-b4c5-dbe420927911",
    "osd_fsid": "089894cf-1782-4a3a-8ac0-9dd043f80c71",
    "osd_id": "7",
    "osdspec_affinity": "

[ceph-users] cephadm does not redeploy OSD

2023-07-18 Thread Luis Domingues
ing, another disk was replaced on the same machine, and it went without any issues. Luis Domingues Proton AG

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-17 Thread Luis Domingues
It does indeed look to be that bug that I hit. Thanks. Luis Domingues Proton AG --- Original Message --- On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee wrote: > Hello Luis, > > Please see my response below: > > But when I took a look on the memory usage

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-16 Thread Luis Domingues
get the running config:

    "osd_memory_target": "4294967296",
    "osd_memory_target_autotune": "true",
    "osd_memory_target_cgroup_limit_ratio": "0.80",

Which is not the value I observe from the config. I have 4294967296 instead of so
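A quick way to compare what the config database says with what the daemon is actually running; a sketch, osd.10 is a placeholder id:

    # Value stored in the config database
    ceph config get osd.10 osd_memory_target
    # Value the running daemon is actually using
    ceph tell osd.10 config get osd_memory_target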

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-11 Thread Luis Domingues
avgtime": 0.0 } }, "throttle-msgr_dispatch_throttler-client": { "val": 0, "max": 104857600, "get_started": 0, "get": 292633, "get_sum": 39290356304, "get_or_fail_fail

[ceph-users] OSD memory usage after cephadm adoption

2023-07-11 Thread Luis Domingues
7068M 6400M 16.2.13 327f301eff51 6223ed8e34e9

    osd.10   running (5d)  10m ago  5d  7235M  6400M  16.2.13  327f301eff51  073ddc0d7391
    osd.100  running (5d)   2m ago  5d  7118M  6400M  16.2.13  327f301eff51  b7f9238c0c24

Does anybody know why OSDs would use more memory than the limit? Thanks Luis Domingues Proton

[ceph-users] Re: Quincy osd bench in order to define osd_mclock_max_capacity_iops_[hdd|ssd]

2023-06-30 Thread Luis Domingues
more consistent than ceph bench or fio. Hope this will help you. Luis Domingues Proton AG --- Original Message --- On Friday, June 30th, 2023 at 12:15, Rafael Diaz Maurin wrote: > Hello, > > I've just upgraded a Pacific cluster into Quincy, and all my osd have > t
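For completeness, the measure-then-override cycle discussed in this thread usually looks like the following; a sketch, where the bench arguments are the ones suggested in the mClock documentation and the IOPS value is a placeholder:

    # Run the OSD bench (total bytes, block size, object size, object count)
    ceph tell osd.0 bench 12288000 4096 4194304 100
    # Pin the capacity estimate if the measured value is unrealistic
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 350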

[ceph-users] Keepalived configuration with cephadm

2023-06-12 Thread Luis Domingues
on the configuration? Or should we add some kind of option to generate keepalived's config without `unicast_src_ip` and `unicast_peer`? Thanks, Luis Domingues Proton AG
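For context, keepalived instances under cephadm come from an ingress spec like the one below; a sketch with placeholder service names, VIP and ports:

    cat > ingress.yaml <<'EOF'
    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw
      virtual_ip: 192.0.2.100/24
      frontend_port: 8080
      monitor_port: 1967
    EOF
    ceph orch apply -i ingress.yaml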

[ceph-users] Re: How mClock profile calculation works, and IOPS

2023-04-03 Thread Luis Domingues
osd_mclock_max_capacity_iops_hdd set to 2000 on that lab setup is the value that gives me the most performance in my rados experiments, with both Seagate and Toshiba disks. Luis Domingues Proton AG --- Original Message --- On Monday, April 3rd, 2023 at 08:44, Sridhar Seshasayee wrote: > Why was it done that way? I

[ceph-users] Re: How mClock profile calculation works, and IOPS

2023-04-03 Thread Luis Domingues
This means that with the default parameters we will always be far from reaching the OSD limit, right? Luis Domingues Proton AG --- Original Message --- On Monday, April 3rd, 2023 at 07:43, Sridhar Seshasayee wrote: > Hi Luis, > > > I am reading some documentation about mClock

[ceph-users] How mClock profile calculation works, and IOPS

2023-03-31 Thread Luis Domingues
t;, "osd_mclock_scheduler_background_best_effort_wgt": "2", "osd_mclock_scheduler_background_recovery_lim": "135", "osd_mclock_scheduler_background_recovery_res": "36", "osd_mclock_scheduler_background_recovery_wgt":

[ceph-users] Re: how ceph OSD bench works?

2023-03-31 Thread Luis Domingues
> > OSD bench performs IOs at the objectstore level and the stats are > > reported > > based on the response from those transactions. It performs either > > sequential > > or random IOs (i.e. a random offset into an object) based on the > > arguments > > passed to it. IIRC if number of objects and

[ceph-users] how ceph OSD bench works?

2023-03-30 Thread Luis Domingues
on that, it is impactful performance-wise. On our cluster we can reach much better performance if we tweak those values, instead of letting the cluster do its own measurements. And this looks to impact certain disk vendors more than others. Luis Domingues Proton AG

[ceph-users] mclock and backgourd best effort

2022-02-28 Thread Luis Domingues
. Could someone tell me what kind of load is included in best effort? Regards, Luis Domingues Proton AG

[ceph-users] cephadm cluster behing a proxy

2021-10-14 Thread Luis Domingues
I have not found anything in the documentation, and I want to avoid having http_proxy environment variables set system-wide in the shell. Or should I use a local container registry mirroring the ceph images? Thanks, Luis Domingues
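If the local-mirror route is taken, cephadm can be pointed at it at bootstrap time; a sketch, where the registry URL, image tag and credentials are placeholders:

    cephadm --image registry.example.internal/ceph/ceph:v16.2.13 \
      bootstrap --mon-ip 192.0.2.10 \
      --registry-url registry.example.internal \
      --registry-username mirroruser \
      --registry-password mirrorpass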

[ceph-users] Re: Adopting "unmanaged" OSDs into OSD service specification

2021-10-13 Thread Luis Domingues
cluster. We are still looking for a smoother way to do that. Luis Domingues ‐‐‐ Original Message ‐‐‐ On Monday, October 4th, 2021 at 10:01 PM, David Orman wrote: > We have an older cluster which has been iterated on many times. It's > > always been cephadm deployed, but I a
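One starting point for moving unmanaged OSDs under a spec is to export what the orchestrator currently tracks and re-apply an edited spec; a sketch, not the exact procedure used in the thread:

    # Dump the OSD service specs cephadm currently tracks
    ceph orch ls osd --export > osd-specs.yaml
    # Edit the spec so its filters match the existing drives, then re-apply
    ceph orch apply -i osd-specs.yaml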

[ceph-users] Re: cephadm adopt with another user than root

2021-10-11 Thread Luis Domingues
I tried your advice today, and it worked well. Thanks, Luis Domingues ‐‐‐ Original Message ‐‐‐ On Friday, October 8th, 2021 at 9:50 PM, Daniel Pivonka wrote: > Id have to test this to make sure it works but i believe you can run 'ceph > > cephadm set-user ' > > https://d
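The command referenced above, spelled out; the username is a placeholder:

    # Tell cephadm to SSH to hosts as a non-root user instead of root
    ceph cephadm set-user cephadm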

[ceph-users] cephadm adopt with another user than root

2021-10-08 Thread Luis Domingues
cluster. Is there any way to set the ssh-user when adopting a cluster? I did not find a way to change the ssh-user in the documentation. Thanks, Luis Domingues

[ceph-users] Re: Drop of performance after Nautilus to Pacific upgrade

2021-09-20 Thread Luis Domingues
We tested Ceph 16.2.6, and indeed, performance came back to what we expect from this cluster. Luis Domingues ‐‐‐ Original Message ‐‐‐ On Saturday, September 11th, 2021 at 9:55 AM, Luis Domingues wrote: > Hi Igor, > > I have a SSD for the physical DB volume. And inde

[ceph-users] Re: Drop of performance after Nautilus to Pacific upgrade

2021-09-11 Thread Luis Domingues
Hi Igor, I have a SSD for the physical DB volume. And indeed it has very high utilisation during the benchmark. I will test 16.2.6. Thanks, Luis Domingues ‐‐‐ Original Message ‐‐‐ On Friday, September 10th, 2021 at 5:57 PM, Igor Fedotov wrote: > Hi Luis, > > som

[ceph-users] Re: Drop of performance after Nautilus to Pacific upgrade

2021-09-10 Thread Luis Domingues
EOF eventually. The source of this performance drop is still a mystery to me. Luis Domingues ‐‐‐ Original Message ‐‐‐ On Tuesday, September 7th, 2021 at 10:51 AM, Martin Mlynář wrote: > Hello, > > we've noticed similar issue after upgrading our test 3 node cluster from >

[ceph-users] Drop of performance after Nautilus to Pacific upgrade

2021-09-05 Thread Luis Domingues
ceph-ansible to deploy and/or upgrade), my performance drops to ~400 MB/s of bandwidth doing the same rados bench. I am kind of clueless about what makes the performance drop so much. Does someone have some ideas where I can dig to find the root of this difference? Thanks, Luis Domingues
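For anyone wanting to reproduce the comparison, a typical rados bench invocation of the kind referenced; a sketch, with pool name and parameters as placeholders:

    # 60s of 4 MiB writes with 16 concurrent ops; keep objects for a read pass
    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup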