[ceph-users] Re: Cannot create CephFS subvolume

2023-01-02 Thread Venky Shankar
Hi Daniel, On Wed, Dec 28, 2022 at 3:17 AM Daniel Kovacs wrote: > Hello! > I'd like to create a CephFS subvolume with this command: ceph fs subvolume create cephfs_ssd subvol_1 > I got this error: Error EINVAL: invalid value specified for ceph.dir.subvolume > If I use another cephfs
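
For reference, a hedged sketch of the commands involved; the group name and mount point are placeholders, not taken from the thread:

    # Create a subvolume, optionally inside a subvolume group
    ceph fs subvolume create cephfs_ssd subvol_1 --group_name mygroup
    # The EINVAL usually means the ceph.dir.subvolume vxattr could not be
    # applied to the backing directory; it can be inspected from a mount:
    getfattr -n ceph.dir.subvolume /mnt/cephfs/volumes/mygroup/subvol_1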

[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Venky Shankar
Hi Jonas, On Mon, Jan 2, 2023 at 10:52 PM Jonas Schwab wrote: > Thank you very much! Works like a charm, except for one thing: I gave my clients the MDS caps 'allow rws path=' to also be able to create snapshots from the client, but `mkdir .snap/test` still returns mkdir: cannot
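
A minimal sketch of granting snapshot-capable caps on a path via `ceph fs authorize`; the client name, filesystem name, and path are placeholders:

    # 'rws' grants read, write, and snapshot-creation permission below the path
    ceph fs authorize cephfs client.foo /isolated/dir rws
    # Inspect the caps that were actually stored
    ceph auth get client.foo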

[ceph-users] Re: pg deep scrubbing issue

2023-01-02 Thread Anthony D'Atri
Look closely at your output. The PGs with 0 objects are only “every other” due to how the command happened to order the output. Note that the empty PGs all have IDs matching “3.*”. The numeric prefix of a PG ID reflects the cardinal ID of the pool to which it belongs. I strongly suspect
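
A quick way to confirm that pool-ID mapping from the CLI (the pool name is a placeholder):

    # Pool IDs are listed alongside pool names; PGs named 3.* belong to pool 3
    ceph osd pool ls detail
    # List the PGs (with object counts) for one specific pool
    ceph pg ls-by-pool <poolname>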

[ceph-users] Re: pg deep scrubbing issue

2023-01-02 Thread Jeffrey Turmelle
Thanks for the reply. I’ll give that a try; I wasn’t using the balancer. > On Jan 2, 2023, at 1:55 AM, Pavin Joseph wrote: > Hi Jeff, > Might be worth checking the balancer [0] status; also you probably want to use upmap mode [1] if possible. > [0]:
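
For anyone following along, a sketch of the balancer commands being suggested; upmap mode additionally requires all clients to be luminous or newer:

    ceph balancer status
    # Only needed once, and only if the cluster still permits older clients
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on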

[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Robert Gallop
One side effect of using subvolumes is that you can then only take a snapshot at the subvolume level, nothing further down the tree. I find you can use the same path in the auth caps without the subvolume, unless I’m missing something in this thread. On Mon, Jan 2, 2023 at 10:21 AM Jonas Schwab <
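
A sketch of the two snapshot styles being compared; the volume, subvolume, and directory names are placeholders:

    # With subvolumes, snapshots are taken at the subvolume root only
    ceph fs subvolume snapshot create cephfs_ssd subvol_1 snap_1
    # With a plain directory plus path-restricted caps, any directory below
    # the authorized path can be snapshotted from the client
    mkdir /mnt/cephfs/project/data/.snap/snap_1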

[ceph-users] Re: CephFS: Isolating folders for different users

2023-01-02 Thread Jonas Schwab
Thank you very much! Works like a charm, except for one thing: I gave my clients the MDS caps 'allow rws path=' to also be able to create snapshots from the client, but `mkdir .snap/test` still returns     mkdir: cannot create directory ‘.snap/test’: Operation not permitted Do you have an idea
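
Two checks that are commonly relevant to this symptom; the client and filesystem names are placeholders, and neither is confirmed as the cause in this thread:

    # Confirm the stored MDS cap really carries the 's' flag on the intended path
    ceph auth get client.foo
    # Filesystem-wide snapshot switch; worth confirming it is enabled
    ceph fs set cephfs allow_new_snaps true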

[ceph-users] Re: Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread Anthony D'Atri
Sent prematurely. I meant to add that after ~3 years of service, the 1 DWPD drives in the clusters I mentioned mostly reported <10% of endurance burned. Required endurance is in part a function of how long you expect the drives to last. >> Having said that, for a storage cluster where write
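
As a rough worked example of that point (illustrative numbers, not figures from the thread): a 1 DWPD, 1.8 TB drive rated over a 5-year warranty allows roughly 1.8 TB x 365 x 5 ≈ 3.3 PB written, so burning under 10% of that in 3 years leaves a comfortable margin. Remaining endurance can be read off the drives, though attribute names vary by vendor:

    # SATA/SAS SSDs: look for a wear-leveling / endurance attribute
    smartctl -a /dev/sdX | grep -i -e wear -e endurance
    # NVMe devices report "Percentage Used" directly
    smartctl -a /dev/nvme0 | grep -i 'percentage used'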

[ceph-users] Re: Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread Anthony D'Atri
> Having said that, for a storage cluster where write performance is expected > to be the main bottleneck, I would be hesitant to use drives that only have > 1DWPD endurance since Ceph has fairly high write amplification factors. If > you use 3-fold replication, this cluster might only be
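
A rough back-of-the-envelope for the amplification argument, with illustrative assumptions rather than figures from the thread: 3-fold replication turns every client byte into three device writes, and WAL/DB double-writes plus RocksDB compaction can add roughly another 2x, so 1 TB of client writes may consume on the order of 6 TB of the cluster's aggregate flash endurance, making a 1 DWPD drive behave more like ~0.17 DWPD in client-visible terms.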

[ceph-users] Re: Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread Mevludin Blazevic
Hi all, I have a similar question regarding a cluster configuration consisting of HDDs, SSDs and NVMes. Let's say I would set up an OSD configuration in a YAML file like this:
    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        model: HDD-Model-XY
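
A hedged sketch of how such a spec is usually extended to push RocksDB/WAL onto faster devices and then applied with cephadm; the NVMe model string and file name are placeholders, not from the thread:

    cat > osd_spec.yaml <<'EOF'
    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        model: HDD-Model-XY
      db_devices:
        model: NVMe-Model-XY   # WAL is colocated with the DB unless wal_devices is also set
    EOF
    # Preview which disks the spec would consume before committing to it
    ceph orch apply -i osd_spec.yaml --dry-run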

[ceph-users] Re: Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread Erik Lindahl
Depends. In theory, each OSD will have access to 1/4 of the separate WAL/DB device, so to get better performance you need to find an NVMe device that delivers significantly more than 4x the IOPS rate of the pm1643 drives, which is not common. That assumes the pm1643 devices are connected to a
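
One way to ground that comparison is to measure small sync-write IOPS on both device types directly; a hedged fio sketch (device paths are placeholders, and writing to a raw device destroys its data):

    fio --name=synctest --filename=/dev/nvme0n1 --ioengine=libaio \
        --direct=1 --sync=1 --rw=randwrite --bs=4k --iodepth=1 \
        --numjobs=4 --runtime=60 --time_based --group_reporting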

[ceph-users] Re: ceph failing to write data - MDSs read only

2023-01-02 Thread Amudhan P
Hi Kotresh, The issue is fixed for now; I followed the steps below. I unmounted the kernel client and restarted the MDS service, which brought the MDS back to normal. But even after this, the "1 MDSs behind on trimming" issue didn't resolve; I waited for about 20-30 mins, which automatically fixed the
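
For anyone searching for this later, a hedged sketch of the kind of commands those steps map to; the mount point and daemon name are placeholders:

    # Watch the read-only / trimming warnings
    ceph health detail
    ceph fs status
    # Unmount the stuck kernel client, then restart the affected MDS
    umount -f /mnt/cephfs
    ceph orch daemon restart mds.cephfs.host1.abcdef   # or systemctl on non-cephadm clusters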

[ceph-users] Re: max pool size (amount of data/number of OSDs)

2023-01-02 Thread Konstantin Shalygin
Hi Chris, The actual limits are not software. Usually Ceph teams at cloud providers or universities run out of physical resources first: racks, rack power, or network (ports, EOL switches that can't be upgraded), or hardware lifetime (there is no point in buying old hardware, and the

[ceph-users] Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread hosseinz8...@yahoo.com
Hi Experts, I am trying to find out whether significant write performance improvements are achievable by separating WAL/DB in a Ceph cluster where all OSDs are SSDs. I have a cluster with 40 SSDs (Samsung PM1643 1.8 TB enterprise SSDs), 10 storage nodes each with 4 OSDs. I want to know whether I can get
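
For context, separating the DB/WAL at OSD-creation time usually looks something like this (a hedged ceph-volume sketch; device paths are placeholders, and whether it actually helps on an all-SSD cluster is exactly the question being asked):

    # Data on the SAS SSD, RocksDB (and by default the WAL) on a faster device
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1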