[ceph-users] Re: CephFS single file size limit and performance impact

2021-12-10 Thread Yan, Zheng
On Sat, Dec 11, 2021 at 2:21 AM huxia...@horebdata.cn wrote: > > Dear Ceph experts, > > I encounter a use case wherein the size of a single file may go beyond 50TB, > and would like to know whether CephFS can support a single file with size > over 50TB? Furthermore, if multiple clients, say
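CephFS enforces a configurable per-file size cap that defaults to 1 TiB, so a 50 TB file requires raising it first. A minimal sketch, assuming a filesystem named "cephfs" (the name and the target size are placeholders):

```
# Inspect the current cap (max_file_size defaults to 1 TiB).
ceph fs get cephfs | grep max_file_size

# Raise it above 50 TB, e.g. to 64 TiB (the value is in bytes).
ceph fs set cephfs max_file_size 70368744177664
```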

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-10 Thread Wesley Dillingham
I would avoid doing this. Size 2 is not where you want to be. Maybe you can give more details about your cluster size and shape and what you are trying to accomplish, and another solution could be proposed. The contents of "ceph osd tree" and "ceph df" would help. Respectfully, *Wes Dillingham*
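For reference, the commands whose output is being requested, plus a combined view that is often handy here:

```
ceph osd tree      # cluster topology: hosts, OSDs, CRUSH weights
ceph df            # raw and per-pool capacity usage
ceph osd df tree   # both views combined, with per-OSD utilization
```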

[ceph-users] Re: Cephalocon 2022 deadline extended?

2021-12-10 Thread Bobby
One typing mistake: I meant 19 December 2021. On Fri, Dec 10, 2021 at 8:21 PM Bobby wrote: > > Hi all, > > Has the CfP deadline for Cephalocon 2022 been extended to 19 December > 2022? Please confirm if anyone knows it... > > > Thanks

[ceph-users] Re: Cephalocon 2022 deadline extended?

2021-12-10 Thread Matt Vandermeulen
It appears to have been, and we have an application that's pending an internal review before we can submit... so we're hopeful that it has been! On 2021-12-10 15:21, Bobby wrote: Hi all, Has the CfP deadline for Cephalocon 2022 been extended to 19 December 2022? Please confirm if anyone

[ceph-users] Cephalocon 2022 deadline extended?

2021-12-10 Thread Bobby
Hi all, Has the CfP deadline for Cephalocon 2022 been extended to 19 December 2022? Please confirm if anyone knows it... Thanks

[ceph-users] CephFS single file size limit and performance impact

2021-12-10 Thread huxia...@horebdata.cn
Dear Ceph experts, I encounter a use case wherein the size of a single file may go beyond 50TB, and would like to know whether CephFS can support a single file with size over 50TB? Furthermore, if multiple clients, say 50, want to access (read/modify) this big file, do we expect any
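On the multi-client side, CephFS stripes each file across many RADOS objects, so 50 clients reading one file hit many OSDs rather than one; the striping can also be tuned per file before any data is written. A rough sketch, with a placeholder mount point and illustrative values:

```
# Layouts can only be changed while the file is still empty.
touch /mnt/cephfs/bigfile
setfattr -n ceph.file.layout.stripe_count -v 8 /mnt/cephfs/bigfile
setfattr -n ceph.file.layout.object_size -v 4194304 /mnt/cephfs/bigfile
getfattr -n ceph.file.layout /mnt/cephfs/bigfile   # verify the result
```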

[ceph-users] Re: cephfs kernel client + snapshots slowness

2021-12-10 Thread Sebastian Knust
Hi, I also see this behaviour and can more or less reproduce it by running rsync or Bareos backup tasks (anything stat-intensive should do) on a specific directory. Unmounting and then remounting the filesystem fixes it, until it is caused again by a stat-intensive task. For me, I only saw two
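The workaround described above, as a sketch; the mount point, monitor names, and credentials are placeholders:

```
# Drop and re-establish the kernel mount to clear the slowness.
umount /mnt/cephfs
mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
    -o name=backup,secretfile=/etc/ceph/backup.secret
```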

[ceph-users] Experience reducing size 3 to 2 on production cluster?

2021-12-10 Thread Marco Pizzolo
Hello, As part of a migration process where we will be swinging Ceph hosts from one cluster to another we need to reduce the size from 3 to 2 in order to shrink the footprint sufficiently to allow safe removal of an OSD/Mon node. The cluster has about 500M objects as per dashboard, and is about
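For context, the change under discussion boils down to the following; <pool> is a placeholder, and the min_size check matters because size 2 with min_size 1 is where most data-loss stories start:

```
ceph osd pool set <pool> size 2       # the reduction being proposed
ceph osd pool get <pool> min_size     # verify; min_size 1 is risky
```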

[ceph-users] Re: 16.2.6 Convert Docker to Podman?

2021-12-10 Thread Marco Pizzolo
Forgot to confirm: was this process non-destructive in terms of the data in the OSDs? Thanks again, On Fri, Dec 10, 2021 at 9:23 AM Marco Pizzolo wrote: > Robert, Roman and Weiwen Hu, > > Thank you very much for your responses. I presume one host at a time, and > the redeploy will take care of any

[ceph-users] Re: 16.2.6 Convert Docker to Podman?

2021-12-10 Thread Marco Pizzolo
Robert, Roman and Weiwen Hu, Thank you very much for your responses. I presume one host at a time, and the redeploy will take care of any configuration, with nothing further being necessary? Thank you. Marco On Fri, Dec 10, 2021 at 7:36 AM 胡玮文 wrote: > On Fri, Dec 10, 2021 at 01:12:56AM

[ceph-users] Re: v16.2.6 PG peering indefinitely after cluster power outage

2021-12-10 Thread Eric Alba
So I did an export of the PG using ceph-objectstore-tool in hopes that I could push Ceph to forget about the rest of the data there. It was a successful export, but we'll see how the import goes. I tried to import on one OSD already but got the message that the PG already exists; am I doing
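A sketch of the export/import cycle being attempted; OSD ids, paths, and the pgid are placeholders, and both OSDs must be stopped while the tool runs. The "PG already exists" error suggests the target OSD still holds a (possibly empty) copy of the PG that has to be removed before the import:

```
# Export the PG from the source OSD.
systemctl stop ceph-osd@7
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --pgid 1.2f --op export --file /root/pg1.2f.export

# Remove the existing copy on the target OSD, then import.
systemctl stop ceph-osd@9
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 \
    --pgid 1.2f --op remove --force
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 \
    --pgid 1.2f --op import --file /root/pg1.2f.export
```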

[ceph-users] OSD storage not balancing properly when crush map uses multiple device classes

2021-12-10 Thread Erik Lindahl
Hi, We are experimenting with manually created CRUSH maps that pick one SSD as primary and two HDD devices as replicas. Since all our HDDs have the DB & WAL on NVMe drives, this gives us a nice combination of pretty good write performance and great read performance while keeping costs manageable for
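A minimal sketch of the kind of hybrid rule the thread describes, taking the primary from the ssd device class and the remaining replicas from hdd (the rule id and name are arbitrary):

```
rule ssd-primary {
    id 5
    type replicated
    step take default class ssd
    step chooseleaf firstn 1 type host    # primary on an SSD host
    step emit
    step take default class hdd
    step chooseleaf firstn -1 type host   # remaining replicas on HDD
    step emit
}
```

After compiling and injecting the map, a pool opts in with `ceph osd pool set <pool> crush_rule ssd-primary`.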

[ceph-users] Re: 16.2.6 Convert Docker to Podman?

2021-12-10 Thread 胡玮文
On Fri, Dec 10, 2021 at 01:12:56AM +0100, Roman Steinhart wrote: > hi, > > recently I had to switch the other way around (from podman to docker). > I just... > - stopped all daemons on a host with "systemctl stop ceph-{uuid}@*" > - purged podman > - triggered a redeploy for every daemon with
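Reversing those steps for the docker-to-podman direction the thread asks about, a hedged per-host sketch (<fsid> and the daemon names are placeholders, and package commands vary by distro):

```
systemctl stop "ceph-<fsid>@*"        # stop all Ceph daemons on this host
apt purge docker-ce docker-ce-cli     # remove the old container engine
apt install podman
# From a node with an admin keyring, redeploy each daemon on that host:
ceph orch daemon redeploy osd.12
ceph orch daemon redeploy mon.host1
```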

[ceph-users] Re: reinstalled node with OSD

2021-12-10 Thread bbk
Hi, I'll answer my own question :-) I finally found the rest of my documentation... So after reinstalling the OS, the OSD config must also be created. Here is what I have done; maybe this helps someone: -- Get the information: ``` cephadm ceph-volume lvm list ceph config
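A condensed sketch of that procedure; the host name is a placeholder, and it assumes cephadm and the host's keys have been restored first:

```
cephadm ceph-volume lvm list            # confirm the OSD LVs survived the reinstall
ceph config generate-minimal-conf > /etc/ceph/ceph.conf
# Recent cephadm releases can then recreate the daemon dirs and units:
ceph cephadm osd activate <host>
```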

[ceph-users] Re: Local NTP servers on monitor node's.

2021-12-10 Thread mhnx
It's nice to hear I'm on the right track. Thanks for the answers. Anthony D'Atri wrote on Wed, 8 Dec 2021 at 12:13: > > I've had good success with this strategy: have the mons chime each other, and > perhaps have OSD / other nodes against the mons too. > Chrony >> ntpd > With modern
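A sketch of what that advice can look like in chrony.conf on a mon node; the hostnames, upstream server, and subnet are placeholders:

```
server 0.pool.ntp.org iburst   # optional upstream, if reachable
peer mon2 iburst               # mons chime with each other
peer mon3 iburst
local stratum 10               # keep serving time if upstream is lost
allow 10.0.0.0/24              # let OSD and other nodes sync from this mon
```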

[ceph-users] Re: CephFS Metadata Pool bandwidth usage

2021-12-10 Thread Andras Sali
Hi Greg, As a follow-up, we see items similar to this pop up in the objecter_requests (when it's not empty). Not sure if I'm reading it right, but some appear quite large (in the MB range?): { "ops": [ { "tid": 9532804, "pg": "3.f9c235d7", "osd": 2,
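The output quoted above comes from the MDS admin socket; "mds.a" is a placeholder daemon name:

```
# Run on the MDS host; dumps in-flight OSD operations as JSON.
ceph daemon mds.a objecter_requests
```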