Luis Domingues
Proton AG
On Thursday, 28 March 2024 at 10:10, Sridhar Seshasayee
wrote:
> Hi Luis,
>
> > So our question is: does mClock take reads into account as well as writes?
> > Or are reads calculated to be less expensive than writes?
>
>
, the
global speed slows down, and the slow ops disappear.
So our question is: does mClock take reads into account as well as writes?
Or are reads calculated to be less expensive than writes?
Thanks,
Luis Domingues
Proton AG
deploy of Pacific, each OSD pushes its own osd_mclock_max_capacity_iops_hdd,
but a Reef deployment does not. We did not see any values for the OSDs in the
ceph config db.
In conclusion, we could say, at least on our pre-update tests, that mClock
seems to behave a lot better in Reef than in Pacific.
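As a side note, the check we did can be sketched roughly like this (commands run against a live cluster; `osd.0` stands in for any OSD id):

```shell
# List any per-OSD capacity values stored in the config db
# (on our Reef test cluster this returned nothing):
ceph config dump | grep osd_mclock_max_capacity_iops_hdd

# Ask one OSD which value it is actually using:
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
```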
Lu
performances. Did
osd_mclock_max_capacity_iops_hdd become useless?
I did not find anything about it in the changelogs, but I could have missed
something.
Luis Domingues
Proton AG
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email
> Why? The public network should not have any restrictions between the
> Ceph nodes. Same with the cluster network.
Internal policies and network rules.
Luis Domingues
Proton AG
On Wednesday, 3 January 2024 at 16:15, Robert Sander
wrote:
> Hi Luis,
>
> On 1/3/24 16:12,
to add mon1 to the list of hosts.
When I apply a spec afterwards with my list of hosts with their IPs where
cephadm can reach them, it works fine. But that means that I need to create the
client-keyring rule for _admin label manually as well.
Luis Domingues
Proton AG
On Wednesday, 3 January
to it anyway, and then fails.
Thanks,
Luis Domingues
Proton AG
bit redundant to have the cephadm package create that user, when we still need
to figure out how to enable cephadm's access to the machines.
Anyway, thanks for your reply.
Luis Domingues
Proton AG
On Friday, 17 November 2023 at 13:55, David C. wrote:
> Hi,
>
> You can use the cephad
Hi,
I noticed, when installing the cephadm rpm package to bootstrap a cluster for
example, that a cephadm user was created. But I do not see it used anywhere.
What is the purpose of creating a user on the machine where we install the
local cephadm binary?
Luis Domingues
Proton AG
understand that for a specific spec, cephadm will try to match nodes by
host, label and then host_pattern. Our question is more at the spec level, and
the order in which cephadm will "loop" over the specs.
I hope I was clear enough.
Thanks,
Luis Domingues
Hi,
We are interested in having cephadm log to journald. So I created the ticket:
https://tracker.ceph.com/issues/62233
Thanks
Luis Domingues
Proton AG
--- Original Message ---
On Saturday, July 29th, 2023 at 20:55, John Mulligan
wrote:
> On Friday, July 28, 2023 11:51:06 AM
? Or if not, is there any reason to log to a file
while everything else logs to journald?
Thanks
Luis Domingues
Proton AG
or not.
The only difference I can see, using smem, is in the non-cache kernel memory on
containerized machines.
Maybe it's a podman issue, maybe a kernel one. It does not seem related to ceph
directly. I just asked here to see if anyone has hit the same issue.
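For reference, the comparison was done roughly like this (smem must be installed; a sketch, not the exact invocation we ran):

```shell
# System-wide overview splitting memory into firmware, kernel image,
# kernel dynamic memory and userspace, with totals; the "noncache"
# column of the kernel dynamic memory row is what differed on the
# containerized machines:
smem -tw
```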
Anyway, thanks for your time.
Luis Domingues
Proton
, it is more around 10G, and it can go up to 40G-50G. Does
anyone know if this is expected and why this is the case?
Maybe this is a podman related question and ceph-dev is not the best place to
ask this kind of question, but maybe someone using cephadm saw similar behavior.
Luis Domingues
Proton AG
Of course:
free -h
              total        used        free      shared  buff/cache   available
Mem:          125Gi        96Gi       9.8Gi       4.0Gi        19Gi       7.6Gi
Swap:            0B          0B          0B
Luis Domingues
Proton AG
--- Original Message ---
On Monday
containers with podman the kernel needs
a lot more memory?
Luis Domingues
Proton AG
in my previous e-mail, I am not sure this was the reason why, as I
did not find any clear message saying the db_device was ignored. And I have not
tried to replicate this behavior yet.
So yeah, I fixed my issue, but I am not sure whether it was just luck or not.
Luis Domingues
Proton AG
purged OSD.1 with --replace and --zap, and once the disks were empty and ready
to go, cephadm just added back OSD.1 and OSD.2 using the db_device as specified.
I do not know if this is the intended behavior, or if I was just lucky, but all
my OSDs are back in the cluster.
Luis Domingues
Proton AG
4-7Sxe-80GE-EcywDb",
"name": "osd-block-db-5cb8edda-30f9-539f-b4c5-dbe420927911",
"osd_fsid": "089894cf-1782-4a3a-8ac0-9dd043f80c71",
"osd_id": "7",
"osdspec_affinity": "
ing, another disk was
replaced on the same machine, and it went without any issues.
Luis Domingues
Proton AG
It looks indeed to be that bug that I hit.
Thanks.
Luis Domingues
Proton AG
--- Original Message ---
On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee
wrote:
> Hello Luis,
>
> Please see my response below:
>
> But when I took a look on the memory usag
get the running config:
"osd_memory_target": "4294967296",
"osd_memory_target_autotune": "true",
"osd_memory_target_cgroup_limit_ratio": "0.80",
Which is not the value I observe from the config. I have 4294967296 instead of
so
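The two values can be compared like this (a sketch; `osd.0` stands in for any OSD, and the `ceph daemon` form has to run on the host holding that OSD's admin socket):

```shell
# Value stored in the config db (what the autotuner set):
ceph config get osd.0 osd_memory_target

# Value the running daemon actually holds:
ceph daemon osd.0 config get osd_memory_target
```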
avgtime": 0.0
}
},
"throttle-msgr_dispatch_throttler-client": {
"val": 0,
"max": 104857600,
"get_started": 0,
"get": 292633,
"get_sum": 39290356304,
"get_or_fail_fail
                         7068M  6400M  16.2.13  327f301eff51  6223ed8e34e9
osd.10   running (5d)  10m ago  5d     7235M  6400M  16.2.13  327f301eff51  073ddc0d7391
osd.100  running (5d)   2m ago  5d     7118M  6400M  16.2.13  327f301eff51  b7f9238c0c24
Does anybody know why OSDs would use more memory than the limit?
Thanks
Luis Domingues
Proton
more consistent than ceph bench or fio.
Hope this will help you.
Luis Domingues
Proton AG
--- Original Message ---
On Friday, June 30th, 2023 at 12:15, Rafael Diaz Maurin
wrote:
> Hello,
>
> I've just upgraded a Pacific cluster into Quincy, and all my osd have
> t
on the configuration? Or should we add some
kind of option to generate keepalived's config without `unicast_src_ip` and
`unicast_peer`?
Thanks,
Luis Domingues
Proton AG
osd_mclock_max_capacity_iops_hdd set to 2000 on that lab setup is the
value I get the most performance from in my rados experiments, both with Seagate
and Toshiba disks.
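For completeness, this is roughly how the value gets pinned (2000 comes from our lab measurements and is not a general recommendation):

```shell
# Override the measured capacity for all HDD OSDs at once:
ceph config set osd osd_mclock_max_capacity_iops_hdd 2000

# Or per OSD, if disks differ between vendors:
ceph config set osd.12 osd_mclock_max_capacity_iops_hdd 2000
```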
Luis Domingues
Proton AG
--- Original Message ---
On Monday, April 3rd, 2023 at 08:44, Sridhar Seshasayee
wrote:
> Why was it done that way? I
This means that with the default parameters we will always be far from reaching
the OSD limit, right?
Luis Domingues
Proton AG
--- Original Message ---
On Monday, April 3rd, 2023 at 07:43, Sridhar Seshasayee
wrote:
> Hi Luis,
>
>
> I am reading some documentation about mClock
",
"osd_mclock_scheduler_background_best_effort_wgt": "2",
"osd_mclock_scheduler_background_recovery_lim": "135",
"osd_mclock_scheduler_background_recovery_res": "36",
"osd_mclock_scheduler_background_recovery_wgt":
> > OSD bench performs IOs at the objectstore level and the stats are
> > reported
> > based on the response from those transactions. It performs either
> > sequential
> > or random IOs (i.e. a random offset into an object) based on the
> > arguments
> > passed to it. IIRC if number of objects and
on that, it is impactful performance-wise. On our cluster
we can reach much better performance if we tweak those values, instead of
letting the cluster do its own measurements. And this seems to impact certain
disk vendors more than others.
Luis Domingues
Proton AG
.
Could someone tell me what kind of load is included in best effort?
Regards,
Luis Domingues
Proton AG
not found anything
in the documentation, and I want to avoid having http_proxy environment
variables set system-wide in the shell.
Or should I use a local container registry mirroring the ceph images?
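If we go the mirror route, a sketch of what would go into /etc/containers/registries.conf on each host (the mirror hostname and port are hypothetical, ours to fill in):

```shell
# Make podman pull quay.io images through a local mirror first,
# falling back to quay.io if the mirror misses:
cat >> /etc/containers/registries.conf <<'EOF'
[[registry]]
prefix = "quay.io"
location = "quay.io"

[[registry.mirror]]
location = "registry.internal.example:5000"
EOF
```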
Thanks,
Luis Domingues
cluster. We are still
looking for a smoother way to do that.
Luis Domingues
‐‐‐ Original Message ‐‐‐
On Monday, October 4th, 2021 at 10:01 PM, David Orman
wrote:
> We have an older cluster which has been iterated on many times. It's
>
> always been cephadm deployed, but I a
I tried your advice today, and it worked well.
Thanks,
Luis Domingues
‐‐‐ Original Message ‐‐‐
On Friday, October 8th, 2021 at 9:50 PM, Daniel Pivonka
wrote:
> Id have to test this to make sure it works but i believe you can run 'ceph
>
> cephadm set-user '
>
> https://d
cluster.
Is there any way to set the ssh-user when adopting a cluster? I did not find how
to change the ssh-user in the documentation.
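What we were after looks roughly like this (the user name is an example; worth verifying the command on the release being adopted):

```shell
# Tell cephadm which ssh user to use when managing hosts:
ceph cephadm set-user cephuser

# The matching public key, to distribute to the hosts:
ceph cephadm get-pub-key
```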
Thanks,
Luis Domingues
We tested Ceph 16.2.6, and indeed performance came back to what we expect
for this cluster.
Luis Domingues
‐‐‐ Original Message ‐‐‐
On Saturday, September 11th, 2021 at 9:55 AM, Luis Domingues
wrote:
> Hi Igor,
>
> I have a SSD for the physical DB volume. And inde
Hi Igor,
I have a SSD for the physical DB volume. And indeed it has very high
utilisation during the benchmark. I will test 16.2.6.
Thanks,
Luis Domingues
‐‐‐ Original Message ‐‐‐
On Friday, September 10th, 2021 at 5:57 PM, Igor Fedotov
wrote:
> Hi Luis,
>
> som
eventually. The source of this performance drop is still a mystery to me.
Luis Domingues
‐‐‐ Original Message ‐‐‐
On Tuesday, September 7th, 2021 at 10:51 AM, Martin Mlynář
wrote:
> Hello,
>
> we've noticed similar issue after upgrading our test 3 node cluster from
>
ceph-ansible to deploy and/or
upgrade), my performance drops to ~400 MB/s of bandwidth doing the same rados
bench.
I am kind of clueless as to what makes the performance drop so much. Does someone
have ideas about where I can dig to find the root of this difference?
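For context, the bench invocation was of this shape (pool name and duration are examples, not the exact ones used):

```shell
# 60-second write benchmark; --no-cleanup keeps the objects so a
# read bench can follow, then the objects are removed:
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup
```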
Thanks,
Luis Domingues