[ceph-users] Re: [EXTERN] Re: cache pressure?

2024-05-07 Thread Dietmar Rieder
On 4/26/24 23:51, Erich Weiler wrote: As Dietmar said, VS Code may cause this. Quite funny to read, actually, because we've been dealing with this issue for over a year, and yesterday was the very first time Ceph complained about a client and we saw VS Code's remote stuff running. Coincidence.

[ceph-users] Re: [EXTERN] cache pressure?

2024-04-29 Thread Dietmar Rieder
dules/*/**": true, "**/.cache/**": true, "**/.conda/**": true, "**/.local/**": true, "**/.nextflow/**": true,      "**/work/**": true, "**/cephfs/**": true    } } On 4/27/24 12:24 AM, Dietmar Rieder wrote: Hi

[ceph-users] Re: [EXTERN] cache pressure?

2024-04-27 Thread Dietmar Rieder
. I'll suggest they make the mods you referenced! Thanks for the tip. > cheers, > erich > On 4/24/24 12:58 PM, Dietmar Rieder wrote: >> Hi Erich, >> in our case the "client failing to respond to cache pressure" situation >> is/was often caused by u

[ceph-users] Re: [EXTERN] cache pressure?

2024-04-24 Thread Dietmar Rieder
Hi Erich, in our case the "client failing to respond to cache pressure" situation is/was often caused by users who have vscode connecting via ssh to our HPC head node. vscode makes heavy use of file watchers and we have seen users with > 400k watchers. All these watched files must be held in
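
As a rough way to spot such clients on the head node, something like this counts inotify watches per user (a sketch, not taken from the original thread; needs root and assumes a Linux /proc layout):

  # every "inotify wd:" line in an inotify fd's fdinfo is one watched file/dir
  for fdlink in $(find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null); do
    pid=${fdlink#/proc/}; pid=${pid%%/*}
    user=$(stat -c %U "/proc/$pid" 2>/dev/null) || continue
    grep -c '^inotify' "${fdlink/fd/fdinfo}" 2>/dev/null | sed "s/^/$user /"
  done | awk '{n[$1]+=$2} END {for (u in n) print n[u], u}' | sort -rn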

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-17 Thread Dietmar Rieder
(we see massive writes as well there)? Unfortunately, I can't comment on Reef as we're still using Pacific. /Z On Tue, 16 Apr 2024 at 18:08, Dietmar Rieder <dietmar.rie...@i-med.ac.at> wrote: Hi Zakhar, hello List, I just wanted to follow up on this and ask a few quesi

[ceph-users] Re: [EXTERN] Re: Ceph 16.2.x mon compactions, disk writes

2024-04-16 Thread Dietmar Rieder
Hi Zakhar, hello List, I just wanted to follow up on this and ask a few questions: Did you notice any downsides with your compression settings so far? Do you have all mons now on compression? Did release updates go through without issues? Do you know if this also works with reef (we see
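
For context, the compression discussed in this thread is applied through the mons' RocksDB options, along these lines (a sketch only; the exact option string is illustrative, and the mons need a restart, possibly with the option also in ceph.conf, to pick it up):

  # enable LZ4 compression for the mon RocksDB store (illustrative values)
  ceph config set mon mon_rocksdb_options \
    "write_buffer_size=33554432,compression=kLZ4Compression"
  # restart the mons one at a time; e.g. if deployed with cephadm:
  ceph orch daemon restart mon.$(hostname -s)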

[ceph-users] Re: [EXTERN] cephFS on CentOS7

2024-04-16 Thread Dietmar Rieder
Hello, we run a CentOS 7.9 client to access cephfs on a Ceph Reef (18.2.2) cluster and it works just fine using the kernel client that comes with CentOS 7.9 + updates. Best Dietmar On 4/15/24 16:17, Dario Graña wrote: Hello everyone! We deployed a platform with Ceph Quincy and now we
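
For completeness, that access is just the stock el7 kernel client plus ceph-common; a minimal sketch (monitor names, client name and mount point are placeholders, not taken from the original setup):

  yum install -y ceph-common
  mkdir -p /mnt/cephfs
  # el7 kernels predate the newer mount.ceph syntax, so the classic form:
  mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
    -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret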

[ceph-users] Re: [EXTERN] Re: cephfs inode backtrace information

2024-01-31 Thread Dietmar Rieder
ue, Jan 30, 2024 at 2:03 AM Dietmar Rieder wrote: Hello, I have a question regarding the default pool of a cephfs. According to the docs it is recommended to use a fast ssd replicated pool as default pool for cephfs. I'm asking what are the space requirements for storing the inode backtrace i

[ceph-users] Re: [EXTERN] Re: cephfs inode backtrace information

2024-01-31 Thread Dietmar Rieder
On 1/31/24 20:13, Patrick Donnelly wrote: On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder wrote: Hello, I have a question regarding the default pool of a cephfs. According to the docs it is recommended to use a fast ssd replicated pool as default pool for cephfs. I'm asking what

[ceph-users] cephfs inode backtrace information

2024-01-30 Thread Dietmar Rieder
Hello, I have a question regarding the default pool of a cephfs. According to the docs it is recommended to use a fast ssd replicated pool as default pool for cephfs. I'm asking what are the space requirements for storing the inode backtrace information? Let's say I have an 85 TiB replicated
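
The layout the question refers to is, roughly, a small fast replicated default data pool that ends up holding only the per-file backtrace xattrs, with the bulk of the data on an EC pool; a sketch with illustrative pool names, PG counts and mount point:

  ceph osd pool create cephfs_metadata 64 replicated
  ceph osd pool create cephfs_default 64 replicated    # fast/SSD, backtraces only
  ceph osd pool create cephfs_ec_data 256 erasure      # bulk data (pass an EC profile for e.g. 6+3)
  ceph osd pool set cephfs_ec_data allow_ec_overwrites true
  ceph fs new cephfs cephfs_metadata cephfs_default
  ceph fs add_data_pool cephfs cephfs_ec_data
  # on a mounted client, point the root layout at the EC pool so file data lands there
  setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs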

[ceph-users] Re: [EXTERN] No metrics shown in dashboard (18.2.1)

2024-01-06 Thread Dietmar Rieder
...nevermind, after restart of the managers I was getting the metrics. sorry for the noise Dietmar On 1/6/24 13:45, Dietmar Rieder wrote: Hi, I just freshly deployed a new cluster (v18.2.1) using cephadm. Now before creating pools, cephfs and so on I wanted to check if the dashboard
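
For anyone searching later: on a cephadm cluster the "restart of the managers" amounts to something like this (a sketch):

  ceph mgr fail          # hand over to a standby mgr
  # or restart the cephadm-managed mgr daemons outright:
  ceph orch restart mgr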

[ceph-users] No metrics shown in dashboard (18.2.1)

2024-01-06 Thread Dietmar Rieder
Hi, I just freshly deployed a new cluster (v18.2.1) using cephadm. Now before creating pools, cephfs and so on I wanted to check if the dashboard is working and if I get some metrics. If I navigate to Cluster >> Hosts and open one of the OSD hosts the "Performance Details" tab is shown but

[ceph-users] Re: [EXTERN] Please help collecting stats of Ceph monitor disk writes

2023-10-13 Thread Dietmar Rieder
Hi, this is on our nautilus cluster, not sure if it is relevant, however here are the results: 1) iotop results (columns: TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND): 1801 be/4 ceph 0.00 B
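
The figures above come from iotop's accumulated mode; a sketch of the kind of collection that produces them (interval, duration and filtering are illustrative, not the exact command requested in the thread):

  # accumulate per-process I/O totals over ~30 minutes in batch mode,
  # keeping the header line and the ceph daemons
  iotop -b -o -a -k -d 60 -n 30 | grep -E 'DISK WRITE|ceph-' > iotop-ceph.txt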

[ceph-users] Re: [EXTERN] Re: Ceph Quincy and liburing.so.2 on Rocky Linux 9

2023-08-04 Thread Dietmar Rieder
I thought so too, but now I'm a bit confused. We are planning to set up a new ceph cluster and initially opted for an el9 system, which is supposed to be stable; should we rather use a CentOS Stream version? Dietmar On 8/4/23 09:04, Marc wrote: But Rocky Linux 9 is the continuation of what

[ceph-users] Re: [EXTERN] Re: cephfs max_file_size

2023-05-24 Thread Dietmar Rieder
On 5/23/23 15:58, Gregory Farnum wrote: On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder wrote: Hi, can the cephfs "max_file_size" setting be changed at any point in the lifetime of a cephfs? Or is it critical for existing data if it is changed after some time? Is there anything t

[ceph-users] Re: [EXTERN] Re: cephfs max_file_size

2023-05-24 Thread Dietmar Rieder
On 5/23/23 15:53, Konstantin Shalygin wrote: Hi, On 23 May 2023, at 13:27, Dietmar Rieder wrote: can the cephfs "max_file_size" setting be changed at any point in the lifetime of a cephfs? Or is it critical for existing data if it is changed after some time? Is there anything t

[ceph-users] cephfs max_file_size

2023-05-23 Thread Dietmar Rieder
Hi, can the cephfs "max_file_size" setting be changed at any point in the lifetime of a cephfs? Or is it critical for existing data if it is changed after some time? Is there anything to consider when changing, let's say, from 1 TB (default) to 4 TB? We are running the latest Nautilus
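
For reference, the change itself is a single filesystem setting that takes the limit in bytes; a sketch of the 1 TB to 4 TB change being asked about (filesystem name is illustrative):

  ceph fs get cephfs | grep max_file_size        # current limit, in bytes
  ceph fs set cephfs max_file_size 4398046511104 # 4 TiB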

[ceph-users] Re: cephfs:: store files on different pools?

2021-05-27 Thread Dietmar Rieder
On 5/27/21 2:33 PM, Adrian Sevcenco wrote: Hi! is it (technically) possible to instruct cephfs to store files < 1 MiB on a (replicated) pool and the other files on another (ec) pool? And even more, is it possible to take the same kind of decision on the path of the file? (let's say that
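
For reference, what CephFS offers here is path-based placement via file layouts on an extra data pool, as in the sketch below (pool names and paths are illustrative); a size-based rule such as "files < 1 MiB to the replicated pool" is not something a layout can express:

  # per-directory data pool selection; applies to files created after the attribute is set
  ceph fs add_data_pool cephfs cephfs_ec_data
  setfattr -n ceph.dir.layout.pool -v cephfs_ec_data /mnt/cephfs/archive
  getfattr -n ceph.dir.layout /mnt/cephfs/archive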

[ceph-users] Re: cephfs: massive drop in MDS requests per second with increasing number of caps

2021-01-20 Thread Dietmar Rieder
: Dietmar Rieder Sent: 19 January 2021 13:24:15 To: Frank Schilder; ceph-users@ceph.io Subject: Re: [ceph-users] Re: cephfs: massive drop in MDS requests per second with increasing number of caps Hi Frank, you don't need to remount the fs. The kernel driver should react to the change on the MDS

[ceph-users] Re: cephfs: massive drop in MDS requests per second with increasing number of caps

2021-01-19 Thread Dietmar Rieder
ing config settings in the manual pages. I would be most interested in further updates in this matter and also if you find other flags with positive performance impact. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14 From: Diet

[ceph-users] Re: cephfs: massive drop in MDS requests per second with increasing number of caps

2021-01-18 Thread Dietmar Rieder
if it has any impact on other operations or situations. Still I wonder why a higher number (i.e. >64k) of caps on the client destroys the performance completely. Thanks again Dietmar On 1/18/21 6:20 PM, Dietmar Rieder wrote: Hi Burkhard, thanks so much for the quick reply and the explanat

[ceph-users] Re: cephfs: massive drop in MDS requests per second with increasing number of caps

2021-01-18 Thread Dietmar Rieder
Hi Burkhard, thanks so much for the quick reply and the explanation and suggestions. I'll check these settings and eventually change them and report back. Best Dietmar On 1/18/21 6:00 PM, Burkhard Linke wrote: Hi, On 1/18/21 5:46 PM, Dietmar Rieder wrote: Hi all, we noticed a massive

[ceph-users] cephfs: massive drop in MDS requests per second with increasing number of caps

2021-01-18 Thread Dietmar Rieder
Hi all, we noticed a massive drop in requests per second a cephfs client is able to perform when we do a recursive chown over a directory with millions of files. As soon as we see about 170k caps on the MDS, the client performance drops from about 660 reqs/sec to 70 reqs/sec. When we then
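
A sketch of how the per-session cap count can be watched while such a recursive operation runs (the MDS rank and the jq post-processing are illustrative, not from the thread):

  # number of caps each client session currently holds on MDS rank 0
  ceph tell mds.0 session ls | jq -r '.[] | "\(.id) \(.num_caps)"' | sort -k2 -rn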

[ceph-users] Re: ceph on rhel7 / centos7 till eol?

2020-06-12 Thread Dietmar Rieder
On 2020-06-12 16:35, Marc Roos wrote: > > Will there be a ceph release available on rhel7 until the eol of rhel7? much needed here as well +1 Would be really great, Thanks a lot. Dietmar -- _ D i e t m a r R i e d e r, Mag.Dr. Innsbruck Medical

[ceph-users] Re: v15.2.0 Octopus released

2020-04-02 Thread Dietmar Rieder
On 2020-04-02 16:40, konstantin.ilya...@mediascope.net wrote: > I have done it. > I am not sure, if i didn’t miss something, but i upgraded test cluster from > CentOs7.7.1908+Ceph14.2.8 to Debian10.3+Ceph15.2.0. > > Preparations: > - 6 nodes with OS CentOs7.7.1908, Ceph14.2.8: > -

[ceph-users] Re: LARGE_OMAP_OBJECTS 1 large omap objects

2020-04-02 Thread Dietmar Rieder
On 2020-04-02 12:24, Paul Emmerich wrote: > Safe to ignore/increase the warning threshold. You are seeing this > because the warning level was reduced to 200k from 2M recently. > > The file will be sharded in a newer version which will clean this up > Thanks Paul, would that "newer version" be

[ceph-users] LARGE_OMAP_OBJECTS 1 large omap objects

2020-04-02 Thread Dietmar Rieder
Hi, I'm trying to understand the "LARGE_OMAP_OBJECTS 1 large omap objects" warning for our cephfs metadata pool. It seems that pg 5.26 has a large omap object with > 200k keys [WRN] : Large omap object found. Object: 5:654134d2:::mds0_openfiles.0:head PG: 5.4b2c82a6 (5.26) Key count: 286083
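
Following the reply above (ignore or raise the threshold), the raise-and-rescrub route looks roughly like this (the new value is illustrative):

  # raise the per-object omap key threshold above the ~286k keys reported
  ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 300000
  # deep-scrub the PG holding the object so the warning is re-evaluated
  ceph pg deep-scrub 5.26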

[ceph-users] Re: v15.2.0 Octopus released

2020-03-25 Thread Dietmar Rieder
On 2020-03-24 23:37, Sage Weil wrote: > On Tue, 24 Mar 2020, konstantin.ilya...@mediascope.net wrote: >> Is it poosible to provide instructions about upgrading from CentOs7+ >> ceph 14.2.8 to CentOs8+ceph 15.2.0 ? > > You have ~2 options: > > - First, upgrade Ceph packages to 15.2.0. Note that

[ceph-users] Re: v14.2.8 Nautilus released

2020-03-18 Thread Dietmar Rieder
pus ready. > > ta ta > > Jake > > > On 3/16/20 4:58 PM, Dietmar Rieder wrote: >> On 2020-03-03 13:36, Abhishek Lekshmanan wrote: >>> >>> This is the eighth update to the Ceph Nautilus release series. This release >>> fixes issues across a ra

[ceph-users] Re: v14.2.8 Nautilus released

2020-03-16 Thread Dietmar Rieder
On 2020-03-03 13:36, Abhishek Lekshmanan wrote: > > This is the eighth update to the Ceph Nautilus release series. This release > fixes issues across a range of subsystems. We recommend that all users upgrade > to this release. Please note the following important changes in this > release; as

[ceph-users] Re: HEALTH_WARN 1 pools have too few placement groups

2020-03-16 Thread Dietmar Rieder
Oh, didn't realize, Thanks Dietmar On 2020-03-16 09:44, Ashley Merrick wrote: > This was a bug in 14.2.7 and calculation for EC pools. > It has been fixed in 14.2.8 > On Mon, 16 Mar 2020 16:21:41 +0800 Dietmar Rieder wrote: > Hi, >

[ceph-users] HEALTH_WARN 1 pools have too few placement groups

2020-03-16 Thread Dietmar Rieder
Hi, I was planning to activate the pg_autoscaler on an EC (6+3) pool which I created two years ago. Back then I calculated the total # of pgs for this pool with a target per-OSD pg # of 150 (this was the recommended per-OSD pg number as far as I recall). I used the RedHat ceph pg per pool calculator
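
A cautious way to try this on a single pool (pool name illustrative) is to review the autoscaler's plan first and start in warn mode:

  ceph osd pool autoscale-status                           # review target vs. actual pg_num
  ceph osd pool set cephfs_ec_data pg_autoscale_mode warn  # warn only, take no action
  ceph osd pool set cephfs_ec_data pg_autoscale_mode on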

[ceph-users] Re: CephFS hangs with access denied

2020-02-17 Thread Dietmar Rieder
will keep an eye on it. Thanks again Dietmar On 2020-02-13 09:37, Dietmar Rieder wrote: > Hi, > > they were not down as far as I can tell form the affected osd logs at > the time in question. > I'll try to play with those values, thanks. Is there anything else that > might help?

[ceph-users] Re: CephFS hangs with access denied

2020-02-12 Thread Dietmar Rieder
[truncated kernel oops from dmesg: [281203.017510] RSP ..., [281203.019743] CR2: ...0010] # uname -a Linux zeus.icbi.local 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux More dmesg extract attached. Should I file a bug report? Dietmar On 2020-02-12 13:32, Dietmar Rieder wrote: >

[ceph-users] CephFS hangs with access denied

2020-02-12 Thread Dietmar Rieder
Hi, we sometimes lose access to our cephfs mount and get permission denied if we try to cd into it. This happens apparently only on some of our HPC cephfs-client nodes (fs mounted via kernel client) when they are busy with calculation and I/O. When we then manually force unmount the fs and
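
The manual recovery described here looks roughly like this (a sketch; monitor list, client name and mount point are placeholders for whatever the nodes normally use):

  dmesg | tail -n 50                               # look for MDS/session errors first
  umount -f /mnt/cephfs || umount -l /mnt/cephfs   # force, then lazy as fallback
  mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
    -o name=hpc_client,secretfile=/etc/ceph/hpc_client.secret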

[ceph-users] Re: luminous -> nautilus upgrade path

2020-02-12 Thread Dietmar Rieder
worked fine for us as well D. On 2020-02-12 09:33, Massimo Sgaravatto wrote: > We skipped from Luminous to Nautilus, skipping Mimic > This is supported and documented > > On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote: > >> Hi, >> >> we also skipped Mimic when upgrading from L --> N and it