I am considering indexless buckets to increase small-file performance
and the maximum number of files using just HDDs.
When I checked the constraints of indexless buckets in the Ceph docs,
they indicate a possible problem with bucket listing, so I understand
this also causes problems with versioning and sync.
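For context, an indexless placement target is created roughly like
this per the docs (a sketch; zone, zonegroup and pool names below are
the defaults and may differ):
radosgw-admin zonegroup placement add --rgw-zonegroup="default" \
  --placement-id="indexless-placement"
radosgw-admin zone placement add --rgw-zone="default" \
  --placement-id="indexless-placement" \
  --data-pool="default.rgw.buckets.data" \
  --index-pool="default.rgw.buckets.index" \
  --data-extra-pool="default.rgw.buckets.non-ec" \
  --placement-index-type="indexless"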
Hello, Ceph users,
what is the correct location of the keyring for ceph-crash?
I tried to follow this document:
https://docs.ceph.com/en/latest/mgr/crash/
# ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash' > /etc/ceph/ceph.client.crash.keyring
and copy this
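For what it's worth, a minimal sketch of the copy step, assuming
ceph-crash on each node reads /etc/ceph/ceph.client.crash.keyring
(hostnames are placeholders):
# for h in node1 node2 node3; do \
    scp /etc/ceph/ceph.client.crash.keyring $h:/etc/ceph/; done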
Hi all,
Thanks to Joel, Matt, and everyone else for a great discussion today.
As promised, here is a ticket to track progress on this feature request:
https://tracker.ceph.com/issues/64087
Check back on our archive [1] to rewatch the recording or view the
presentation slides.
Thanks,
Laura
1.
Hi Sridhar
Thank you for the suggestion and the link. We'll stick with wpq for now
since it seems to work OK, then upgrade to Reef once we are HEALTH_OK,
and then switch back to mclock.
Best regards,
Torkil
On 18-01-2024 13:06, Sridhar Seshasayee wrote:
Hi,
Given that the first host added had 19 OSDs,
Nice improvement with wpq:
"
data:
volumes: 1/1 healthy
pools: 13 pools, 11153 pgs
objects: 313.18M objects, 1003 TiB
usage: 1.6 PiB used, 1.6 PiB / 3.2 PiB avail
pgs: 366564427/1681736248 objects misplaced (21.797%)
5905 active+clean
5139
On 1/18/24 14:28, Eugen Block wrote:
Is your admin keyring under management?
There is no issue with the admin keyring, but there is with ceph.conf.
The config option mgr/cephadm/manage_etc_ceph_ceph_conf is set to
true and mgr/cephadm/manage_etc_ceph_ceph_conf_hosts was at "*",
so all
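For reference, a sketch of how to inspect the two settings in question
(both are existing cephadm mgr module options):
# ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf
# ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts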
Is your admin keyring under management? The docs say [1]:
When a client keyring is placed under management, cephadm will:
build a list of target hosts based on the specified placement
spec (see Daemon Placement)
store a copy of the /etc/ceph/ceph.conf file on the specified
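Placing the admin keyring under management would look roughly like
this sketch, based on the client-keyring docs (placement via the
_admin label):
# ceph orch client-keyring set client.admin label:_admin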
Hi,
On 1/18/24 14:07, Eugen Block wrote:
I just tried that in my test cluster, removed the ceph.conf and admin
keyring from /etc/ceph and then added the _admin label to the host via
'ceph orch' and both were created immediately.
This is strange; I only get this:
Hi,
I just tried that in my test cluster, removed the ceph.conf and admin
keyring from /etc/ceph and then added the _admin label to the host via
'ceph orch' and both were created immediately.
# no label for quincy-2
quincy-1:~ # ceph orch host ls
HOST      ADDR  LABELS  STATUS
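Adding the label for the test was a single command (a sketch; quincy-2
as in the host list above):
quincy-1:~ # ceph orch host label add quincy-2 _admin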
Np. Thanks, we'll try wpq instead as the next step.
Out of curiosity, how does that work in the interim, since it requires
restarting OSDs? For a period of time will we have some OSDs on mclock
and some on wpq?
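I assume the rolling change would look something like this sketch
(osd.0 is a placeholder, repeated per OSD):
# ceph config set osd osd_op_queue wpq
# ceph orch daemon restart osd.0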
Best regards,
Torkil
On 18/01/2024 13:11, Eugen Block wrote:
Oh, I missed that line with
On 1/17/24 20:49, Chris Palmer wrote:
>
>
> On 17/01/2024 16:11, kefu chai wrote:
>>
>>
>> On Tue, Jan 16, 2024 at 12:11 AM Chris Palmer wrote:
>>
>> Updates on both problems:
>>
>> Problem 1
>> ---------
>>
>> The bookworm/reef cephadm package needs updating to accommodate
Oh, I missed that line with the mclock profile, sorry.
Quoting Eugen Block:
Hi,
what is your current mclock profile? The default is "balanced":
quincy-1:~ # ceph config get osd osd_mclock_profile
balanced
You could try setting it to high_recovery_ops [1], or disable it
altogether
Hi,
> Given that the first host added had 19 OSDs, with none of them anywhere
> near the target capacity, and the one we just added has 22 empty OSDs,
> having just 22 PGs backfilling and 1 recovering seems somewhat
> underwhelming.
>
> Is this to be expected with such a pool? Mclock profile is
Hi,
what is your current mclock profile? The default is "balanced":
quincy-1:~ # ceph config get osd osd_mclock_profile
balanced
You could try setting it to high_recovery_ops [1], or disable it
altogether [2]:
quincy-1:~ # ceph config set osd osd_op_queue wpq
[1]
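For completeness, switching the profile itself is also a single
config change (a sketch; high_recovery_ops is one of the built-in
mclock profiles):
quincy-1:~ # ceph config set osd osd_mclock_profile high_recovery_ops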
Hi,
After upgrading from Quincy to Reef, the ceph-mgr daemon is no longer exposing some throughput OSD metrics like ceph_osd_op_*:
curl http://localhost:9283/metrics | grep -i ceph_osd_op
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
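A sketch of the same check with curl's progress meter suppressed:
curl -s http://localhost:9283/metrics | grep -i ceph_osd_op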
Hi
Our 17.2.7 cluster:
"
-33 886.00842 datacenter 714
-7 209.93135 host ceph-hdd1
-69 69.86389 host ceph-flash1
-6 188.09579 host ceph-hdd2
-3 233.57649 host ceph-hdd3
-12 184.54091 host
Hi,
According to the documentation¹, the special host label _admin
instructs the cephadm orchestrator to place a valid ceph.conf and the
ceph.client.admin.keyring into /etc/ceph on the host.
I noticed that (at least) on 17.2.7 only the keyring file is placed in
/etc/ceph, but not ceph.conf.
On 18/01/2024 10:46, Frank Schilder wrote:
Hi, maybe this is related. On a system with many disks I also had aio problems
causing OSDs to hang. Here it was the kernel parameter fs.aio-max-nr that was
way too low by default. I bumped it to fs.aio-max-nr = 1048576 (sysctl/tuned)
and OSDs came
Hi, maybe this is related. On a system with many disks I also had aio problems
causing OSDs to hang. Here it was the kernel parameter fs.aio-max-nr that was
way too low by default. I bumped it to fs.aio-max-nr = 1048576 (sysctl/tuned)
and OSDs came up right away.
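For reference, roughly how I applied it (the sysctl.d file name is
arbitrary):
# sysctl -w fs.aio-max-nr=1048576
# echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/90-ceph-aio.conf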
Best regards,
For multi- vs. single-OSD per flash drive decision the following test might be
useful:
We found dramatic improvements using multiple OSDs per flash drive with octopus
*if* the bottleneck is the kv_sync_thread. Apparently, each OSD has only one,
and this thread is effectively serializing
I'm glad to hear (or read) that it worked for you as well. :-)
Quoting Torkil Svensgaard:
On 18/01/2024 09:30, Eugen Block wrote:
Hi,
[ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio
osd_max_pg_per_osd_hard_ratio = 3.00
I don't think this is the
On 18/01/2024 09:30, Eugen Block wrote:
Hi,
[ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio
osd_max_pg_per_osd_hard_ratio = 3.00
I don't think this is the right tool; it says:
--show-config-value Print the corresponding ceph.conf value
Hi,
[ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio
osd_max_pg_per_osd_hard_ratio = 3.00
I don't think this is the right tool; it says:
--show-config-value    Print the corresponding ceph.conf value
that matches
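A sketch of an alternative that asks the cluster configuration
directly instead:
# ceph config get osd osd_max_pg_per_osd_hard_ratio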
Hi,
the mgr caps for 'profile rbd' were introduced in Nautilus:
The MGR now accepts 'profile rbd' and 'profile rbd-read-only' user caps.
These caps can be used to provide users access to MGR-based RBD
functionality such as rbd perf image iostat and rbd perf image iotop.
So if you don't need that
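For illustration, a cap set including the mgr profile might look like
this (client name and pool are placeholders):
# ceph auth get-or-create client.rbd-user mon 'profile rbd' \
    osd 'profile rbd pool=rbd' mgr 'profile rbd'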