This can be subtle and is easy to mix up.
The “PG ratio” is intended to be the number of PGs hosted on each OSD, plus or
minus a few.
Note how I phrased that: it’s not the number of PGs divided by the number of
OSDs. Remember that PGs are replicated.
While each PG belongs to exactly one
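To put rough numbers on it (the pool size and OSD count below are made-up for illustration):
```
# A pool with pg_num=256 and size=3 places 256 * 3 = 768 PG replicas.
# Spread over 80 OSDs that is roughly 10 PG replicas per OSD from this pool alone,
# not 256 / 80 ≈ 3. The actual per-OSD count is the PGS column of:
ceph osd df
```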
Hi,
You can try the geesefs project [1]; the presentation for this code is here [2]
[1] https://github.com/yandex-cloud/geesefs
[2]
https://yourcmc-ru.translate.goog/geesefs-2022/highload.html?_x_tr_sl=ru&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=wapp
k
> On 28 Feb 2023, at 22:31, Marc wrote:
>
>
Hello
Looking to get some official guidance on PG and PGP sizing.
Is the goal to maintain approximately 100 PGs per OSD per pool, or for the
cluster in general?
Assume the following scenario:
Cluster with 80 OSDs across 8 nodes;
3 Pools:
- Pool1 = Replicated 3x
- Pool2 =
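For reference, the commonly cited heuristic (the old pgcalc-style rule of thumb, not official guidance; the numbers below are placeholders for this scenario):
```
# pg_num per pool ≈ (OSDs * target PG replicas per OSD * expected share of data) / replica size,
# rounded to a power of two.
# e.g. 80 OSDs, target 100 PG replicas per OSD, a pool expected to hold half the data, size 3:
echo $(( 80 * 100 / 2 / 3 ))   # ~1333 -> round to 1024 or 2048
```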
One thing to watch out for with bluefs_buffered_io is that disabling it
can greatly impact certain rocksdb workloads. From what I remember it
was a huge problem during certain iteration workloads for things like
collection listing. I think the block cache was being invalidated or
simply
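For reference, a minimal sketch of how the option can be checked and toggled at runtime:
```
ceph config get osd bluefs_buffered_io
ceph config set osd bluefs_buffered_io false   # set back to true to re-enable
```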
> Yeah, there seems to be a fear that attempting to repair those will
> negatively impact performance even more. I disagree and think we should do
> them immediately.
There shouldn’t really be too much of a noticeable performance hit.
Some good documentation here
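For reference, a minimal sketch of the repair flow being discussed (the PG id is a placeholder taken from `ceph health detail`):
```
rados list-inconsistent-obj <pgid> --format=json-pretty   # inspect what scrub flagged
ceph pg repair <pgid>                                     # ask the primary OSD to repair it
```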
Hi Josh,
thanks a lot for the breakdown and the links.
I disabled the write cache but it didn't change anything. Tomorrow I will
try to disable bluefs_buffered_io.
It doesn't sound like I can mitigate the problem with more SSDs.
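For reference, a sketch of how the volatile write cache is typically disabled on drives (device paths are placeholders; the setting usually needs a udev rule or similar to survive reboots):
```
hdparm -W 0 /dev/sdX            # SATA drives
sdparm --set WCE=0 /dev/sdX     # SAS drives
```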
On Tue, Feb 28, 2023 at 3:42 PM Josh Baergen <
Hello,
Setting up first ceph cluster in lab.
Rocky 8.6
Ceph quincy
Using curl install method
Following cephadm deployment steps
Everything works as expected except
ceph orch device ls --refresh
Only displays NVMe devices and not the SATA SSDs on the Ceph host.
Tried
sgdisk
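For reference, a sketch of wiping a drive so ceph-volume reports it as available again (device path and host are placeholders; this destroys all data on the device):
```
sgdisk --zap-all /dev/sdX
wipefs -a /dev/sdX
ceph orch device ls --refresh
# or via the orchestrator:
ceph orch device zap <host> /dev/sdX --force
```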
On 2/28/23 13:11, Dave Ingram wrote:
On Tue, Feb 28, 2023 at 12:56 PM Reed Dier wrote:
I think a few other things that could help would be `ceph osd df tree`
which will show the hierarchy across different crush domains.
Good idea: https://pastebin.com/y07TKt52
Yeah, it looks like
MinIO no longer lets you read/write from the POSIX side, only through MinIO
itself. :(
Haven't found a replacement yet. If you do, please let me know.
Thanks,
Kevin
From: Robert Sander
Sent: Tuesday, February 28, 2023 9:37 AM
To: ceph-users@ceph.io
On Tue, Feb 28, 2023 at 12:56 PM Reed Dier wrote:
> I think a few other things that could help would be `ceph osd df tree`
> which will show the hierarchy across different crush domains.
>
Good idea: https://pastebin.com/y07TKt52
> And if you’re doing something like erasure coded pools, or
When I suggested this to the senior admin here I was told that was a bad
idea because it would negatively impact performance.
Is that true? I thought all that would do was accept the information from
the other 2 OSDs and the one with the errors would rebuild the record.
The underlying disks
I think a few other things that could help would be `ceph osd df tree` which
will show the hierarchy across different crush domains.
And if you’re doing something like erasure coded pools, or something other than
replication 3, maybe `ceph osd crush rule dump` may provide some further
context
On Tue, Feb 28, 2023 at 6:13 PM Dave Ingram wrote:
> There are also several
> scrub errors. In short, it's a complete wreck.
>
> health: HEALTH_ERR
> 3 scrub errors
> Possible data damage: 3 pgs inconsistent
> [root@ceph-admin davei]# ceph health detail
> HEALTH_ERR 3
Also look at TrueNAS Core, which has MinIO built in.
Kind Regards,
Jens Galsgaard
Gitservice.dk
+45 28864340
-- Original Message --
From: Robert Sander
Sent: February 28, 2023 18:38
To: ceph-users@ceph.io
Subject: [ceph-users] Re: s3 compatible
A bit late to the game, but I'm not sure if it is your drives. I had a very
similar issue to yours on enterprise drives (not that that means much outside
of support).
What I was seeing is that a rebuild would kick off, PGs would instantly start
to become laggy, and then our clients (OpenStack RBD)
Even the documentation at
https://www.kernel.org/doc/html/v5.14/filesystems/ceph.html#mount-options is
incomplete and doesn’t list options like “secret” and “mds_namespace”
Thanks
Shawn
> On Feb 28, 2023, at 11:03 AM, Shawn Weeks wrote:
>
> I’m trying to find documentation for which mount
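For reference, a minimal sketch of a kernel-client mount using those options (monitor address, client name and key are placeholders):
```
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
  -o name=admin,secret=<base64 key>,mds_namespace=cephfs
```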
On 28.02.23 16:31, Marc wrote:
Anyone know of an S3-compatible interface that I can just run, and that reads/writes
files from a local file system and not from object storage?
Have a look at Minio:
https://min.io/product/overview#architecture
Regards
--
Robert Sander
Heinlein Support GmbH
Linux:
Hello,
Our ceph cluster performance has become horrifically slow over the past few
months.
Nobody here is terribly familiar with ceph and we're inheriting this
cluster without much direction.
Architecture: 40Gbps QDR IB fabric between all ceph nodes and our ovirt VM
hosts. 11 OSD nodes with a
I’m trying to find documentation for which mount options are supported directly
by the kernel module. For example in the kernel module included in Rocky Linux
8 and 9 the secretfile option isn’t supported even though the documentation
seems to imply it is. It seems like the documentation
Hi Cephers,
I have large OMAP objects on one of my clusters (most likely due to a big
bucket deletion, with things not completely purged).
Since there is no tool to either reconstruct the index from data or purge the
unused index, I thought I could use multisite replication.
As I am in a multisite
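For reference, a sketch of locating the large-omap objects (the index pool name is the common default and may differ on your cluster):
```
ceph health detail | grep -i 'large omap'
radosgw-admin bucket stats --bucket=<bucket>    # shard and object counts
rados -p default.rgw.buckets.index ls | head    # raw bucket index objects
```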
It doesn't really help to create multiple threads for the same issue.
I don't see a reason why the MDS went read-only in your log output
from [1]; could you please add the startup log from the MDS in debug
mode so we can actually see why it's going into read-only?
[1]
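For reference, a minimal sketch of raising MDS verbosity before restarting the daemon so the startup is captured (lower it again afterwards, the logs grow quickly):
```
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1
# restart the MDS, collect the log, then reset:
ceph config set mds debug_mds 1/5
ceph config set mds debug_ms 0
```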
Anyone know of an S3-compatible interface that I can just run, and that reads/writes
files from a local file system and not from object storage?
So I decided to proceed and everything went very well, with the cluster
remaining up and running during the whole process.
Hi to all, and thanks for sharing your experience on Ceph!
We have a simple setup with 9 OSDs, all HDD, across 3 nodes (3 OSDs per node).
We started the cluster to test how it works with HDDs using the default, easy
bootstrap. Then we decided to add SSDs and create a pool that uses only SSDs.
In order to
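For reference, a minimal sketch of an SSD-only pool via a device-class CRUSH rule (rule name, pool name and PG counts are placeholders):
```
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create ssd-pool 128 128 replicated ssd-only
```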
Hello. We are trying to resolve an issue with Ceph. Our OpenShift cluster is
blocked and we have tried almost everything.
The current state is:
MDS_ALL_DOWN: 1 filesystem is offline
MDS_DAMAGE: 1 mds daemon damaged
FS_DEGRADED: 1 filesystem is degraded
MON_DISK_LOW: mon be is low on available space
Hi,
When I was looking at a complete multipart upload request, I found that the
response returned an empty ETag entry in the XML.
If I query the key's metadata after the Complete is done, it returns the
expected ETag, so it looks like it is calculated correctly.
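For reference, a sketch of the follow-up check described (endpoint, bucket and key are placeholders):
```
aws --endpoint-url http://rgw.example.local s3api head-object \
    --bucket mybucket --key mykey --query ETag
```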
Hello to everyone
When I use this command to see bucket usage
radosgw-admin bucket stats --bucket=
It works only when the owner of the bucket is activated.
How can I see the usage even when the owner is suspended?
Here are 2 examples, one with the owner activated and the other one with the owner
Hey all!
I'm a first time ceph user trying to learn how to set up a cluster. I've
gotten a basic cluster created using the following:
```
cephadm bootstrap --mon-ip
ceph orch host add server-2 _admin
```
I've created and mounted an fs on a host, everything is going well, but I
have noticed
Dear Ceph community,
for about two or three weeks now, we have had CephFS clients regularly failing
to respond to capability releases, accompanied by OSD slow ops. By now, this
happens daily, every time clients get more active (e.g. during nightly
backups).
We mostly observe it with a handful of highly
Hey Ilya,
I'm not sure if the things I find in the logs are actually anything related or
useful.
But I'm not really sure, if I'm looking in the right places.
I enabled "debug_ms 1" for the OSDs as suggested above.
But this filled up our host disks pretty fast, leading to e.g. monitors
Hi Cephers, We have two octopus 15.2.17 clusters in a multisite
configuration. Every once in a while we have to perform a bucket reshard (most
recently on 613 shards) and this practically kills our replication for a few
days. Does anyone know of any priority mechanics within sync to give
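For reference, a sketch of the manual reshard and the sync check being described (the bucket name is a placeholder):
```
radosgw-admin bucket reshard --bucket=<bucket> --num-shards=613
radosgw-admin sync status        # watch the replication backlog afterwards
```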
Hi Boris,
OK, what I'm wondering is whether
https://tracker.ceph.com/issues/58530 is involved. There are two
aspects to that ticket:
* A measurable increase in the number of bytes written to disk in
Pacific as compared to Nautilus
* The same, but for IOPS
Per the current theory, both are due to
Hi Josh,
we upgraded 15.2.17 -> 16.2.11 and we only run RBD workloads.
On Tue, Feb 28, 2023 at 3:00 PM Josh Baergen <
jbaer...@digitalocean.com>:
> Hi Boris,
>
> Which version did you upgrade from and to, specifically? And what
> workload are you running (RBD, etc.)?
>
> Josh
>
> On
Hi Boris,
Which version did you upgrade from and to, specifically? And what
workload are you running (RBD, etc.)?
Josh
On Tue, Feb 28, 2023 at 6:51 AM Boris Behrens wrote:
>
> Hi,
> today I did the first update from octopus to pacific, and it looks like the
> avg apply latency went up from 1ms
Hi,
the same on my side - destroyed and replaced by bluestore.
JP
On 28/02/2023 14.17, Mark Schouten wrote:
Hi,
I just destroyed the filestore osd and added it as a bluestore osd. Worked fine.
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
Hi,
today I did the first update from octopus to pacific, and it looks like the
avg apply latency went up from 1ms to 2ms.
All 36 OSDs are 4TB SSDs and nothing else changed.
Does anyone know if this is an issue, or am I just missing a config value?
Cheers
Boris
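For reference, the per-OSD commit/apply latencies being compared can be sampled with:
```
ceph osd perf
```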
On Tue, Feb 28, 2023 at 8:19 AM Lars Dunemark wrote:
>
> Hi,
>
> I notice that CompleteMultipartUploadResult does return an empty ETag
> field when completing an multipart upload in v17.2.3.
>
> I haven't had the possibility to verify from which version this changed
> and can't find in the
Hi,
I just destroyed the filestore osd and added it as a bluestore osd.
Worked fine.
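For reference, a sketch of the destroy-and-recreate flow, keeping the OSD id (id and device path are placeholders; this wipes the device):
```
ceph osd destroy 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12
```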
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
From "Jan Pekař - Imatic"
To m...@tuxis.nl; ceph-users@ceph.io
Date 2/25/2023 4:14:54 PM
Subject Re:
Hi,
I notice that CompleteMultipartUploadResult returns an empty ETag
field when completing a multipart upload in v17.2.3.
I haven't had the chance to verify in which version this changed, and I can't
find anything in the changelog saying it should be fixed in a newer version.
The response