On 4/26/24 23:51, Erich Weiler wrote:
As Dietmar said, VS Code may cause this. Quite funny to read,
actually, because we've been dealing with this issue for over a year,
and yesterday was the very first time Ceph complained about a client
and we saw VS Code's remote stuff running. Coincidence.
dules/*/**": true,
"**/.cache/**": true,
"**/.conda/**": true,
"**/.local/**": true,
"**/.nextflow/**": true,
"**/work/**": true,
"**/cephfs/**": true
}
}
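For anyone hitting the same thing: a minimal sketch of what the complete block might look like in a user's settings.json, assuming the fragment above belongs to VS Code's "files.watcherExclude" setting (the cut-off first entry is presumably the default "**/node_modules/*/**" pattern; the remaining globs are site-specific and only illustrative):

    {
        // tell VS Code's remote server not to register file watchers below these paths
        "files.watcherExclude": {
            "**/node_modules/*/**": true,
            "**/.cache/**": true,
            "**/.conda/**": true,
            "**/.local/**": true,
            "**/.nextflow/**": true,
            "**/work/**": true,
            "**/cephfs/**": true
        }
    }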
On 4/27/24 12:24 AM, Dietmar Rieder wrote:
Hi
. I'll suggest they make the mods you referenced! Thanks for the tip.
>
>cheers,
>erich
>
>On 4/24/24 12:58 PM, Dietmar Rieder wrote:
>> Hi Erich,
>>
>> in our case the "client failing to respond to cache pressure" situation
>> is/was often caused by u
Hi Erich,
in our case the "client failing to respond to cache pressure" situation
is/was often caused by users who have vscode connecting via ssh to our
HPC head node. vscode makes heavy use of file watchers and we have seen
users with > 400k watchers. All these watched files must be held in
(we see massive writes as
well there)?
Unfortunately, I can't comment on Reef as we're still using Pacific.
/Z
On Tue, 16 Apr 2024 at 18:08, Dietmar Rieder <dietmar.rie...@i-med.ac.at> wrote:
Hi Zakhar, hello List,
I just wanted to follow up on this and ask a few quesi
Hi Zakhar, hello List,
I just wanted to follow up on this and ask a few questions:
Did you notice any downsides with your compression settings so far?
Do you have all mons now on compression?
Did release updates go through without issues?
Do you know if this also works with Reef (we see
Hello,
we run a CentOS 7.9 client to access cephfs on a Ceph Reef (18.2.2)
Cluster and it works just fine using the kernel client that comes with
CentOS 7.9 + updates.
Best
Dietmar
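For illustration only, a kernel-client mount on such a host looks roughly like this (monitor addresses, client name and paths are placeholders, not taken from the thread):

    # mount cephfs with the in-kernel client shipped with the distro
    mount -t ceph 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/ /mnt/cephfs \
        -o name=hpcclient,secretfile=/etc/ceph/hpcclient.secret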
On 4/15/24 16:17, Dario Graña wrote:
Hello everyone!
We deployed a platform with Ceph Quincy and now we
ue, Jan 30, 2024 at 2:03 AM Dietmar Rieder
wrote:
Hello,
I have a question regarding the default pool of a cephfs.
According to the docs it is recommended to use a fast ssd replicated
pool as default pool for cephfs. I'm asking what are the space
requirements for storing the inode backtrace i
On 1/31/24 20:13, Patrick Donnelly wrote:
On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder
wrote:
Hello,
I have a question regarding the default pool of a cephfs.
According to the docs it is recommended to use a fast ssd replicated
pool as default pool for cephfs. I'm asking what
Hello,
I have a question regarding the default pool of a cephfs.
According to the docs it is recommended to use a fast ssd replicated
pool as default pool for cephfs. I'm asking what are the space
requirements for storing the inode backtrace information?
Let's say I have a 85 TiB replicated
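For orientation, the layout the docs recommend (a small, fast replicated default data pool for the backtrace objects plus a big bulk pool, here EC, added afterwards) is created roughly like this; pool names, pg counts and the EC profile are placeholders:

    # replicated pools for metadata and for the default data pool (backtraces)
    ceph osd pool create cephfs_metadata 64 replicated
    ceph osd pool create cephfs_default 64 replicated
    ceph fs new cephfs cephfs_metadata cephfs_default
    # bulk EC pool, added as an additional data pool and used via a layout on the root
    ceph osd pool create cephfs_data_ec 256 erasure ec63profile
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    ceph fs add_data_pool cephfs cephfs_data_ec
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs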
...nevermind, after restart of the managers I was getting the metrics.
sorry for the noise
Dietmar
On 1/6/24 13:45, Dietmar Rieder wrote:
Hi,
I just freshly deployed a new cluster (v18.2.1) using cephadm. Now
before creating pools, cephfs and so on I wanted to check if the
dashboard
Hi,
I just freshly deployed a new cluster (v18.2.1) using cephadm. Now
before creating pools, cephfs and so on I wanted to check if the
dashboard is working and if I get some metrics.
If I navigate to Cluster >> Hosts and open one of the OSD hosts the
"Performance Details" tab is shown but
Hi,
this is on our nautilus cluster, not sure if it is relevant, however
here are the results:
1) iotop results:
TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO    COMMAND
1801 be/4 ceph 0.00 B
I thought so too, but now I'm a bit confused. We are planning to
set up a new ceph cluster and initially opted for an el9 system, which is
supposed to be stable; should we rather use a stream trail version?
Dietmar
On 8/4/23 09:04, Marc wrote:
But Rocky Linux 9 is the continuation of what
On 5/23/23 15:58, Gregory Farnum wrote:
On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder
wrote:
Hi,
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time? Is
there anything t
On 5/23/23 15:53, Konstantin Shalygin wrote:
Hi,
On 23 May 2023, at 13:27, Dietmar Rieder
wrote:
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time?
Is there anything t
Hi,
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time? Is
there anything to consider when changing, let's say, from 1TB (default)
to 4TB?
We are running the latest Nautilus
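For reference, the setting is per filesystem and is changed with `ceph fs set`; a sketch (the filesystem name is a placeholder):

    # show the current limit (in bytes, default 1 TiB)
    ceph fs get cephfs | grep max_file_size
    # raise it to 4 TiB
    ceph fs set cephfs max_file_size 4398046511104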
On 5/27/21 2:33 PM, Adrian Sevcenco wrote:
Hi! Is it (technically) possible to instruct cephfs to store files <
1MiB on a (replicated) pool
and the other files on another (EC) pool?
And even more, is it possible to take the same kind of decision on the
path of the file?
(let's say that
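Path-based placement is what cephfs file layouts are for (there is no size-based rule); a sketch, with pool and path names as placeholders:

    # make the EC pool usable by the filesystem
    ceph fs add_data_pool cephfs cephfs_data_ec
    # files created under this directory from now on go to the EC pool
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive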
: Dietmar Rieder
Sent: 19 January 2021 13:24:15
To: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: cephfs: massive drop in MDS requests per second
with increasing number of caps
Hi Frank,
you don't need to remount the fs. The kernel driver should react to the
change on the MDS
ing config settings in the manual pages.
I would be most interested in further updates in this matter and also if you
find other flags with positive performance impact.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
____
From: Diet
if it has any impact on other
operations or situations.
Still I wonder why a higher number (i.e. >64k) of caps on the client
destroys the performance completely.
Thanks again
Dietmar
On 1/18/21 6:20 PM, Dietmar Rieder wrote:
Hi Burkhard,
thanks so much for the quick reply and the explanat
Hi Burkhard,
thanks so much for the quick reply and the explanation and suggestions.
I'll check these settings and eventually change them and report back.
Best
Dietmar
On 1/18/21 6:00 PM, Burkhard Linke wrote:
Hi,
On 1/18/21 5:46 PM, Dietmar Rieder wrote:
Hi all,
we noticed a massive
Hi all,
we noticed a massive drop in requests per second a cephfs client is able
to perform when we do a recursive chown over a directory with millions
of files. As soon as we see about 170k caps on the MDS, the client
performance drops from about 660 reqs/sec to 70 reqs/sec.
When we then
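A couple of knobs that come up in this context, shown only as a sketch (the daemon rank and the cap limit are examples, not values from the thread):

    # inspect how many caps each client session currently holds
    ceph tell mds.0 session ls | grep -E '"id"|num_caps'
    # cap the number of caps a single client may hold
    ceph config set mds mds_max_caps_per_client 65536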
On 2020-06-12 16:35, Marc Roos wrote:
>
> Will there be a ceph release available on rhel7 until the eol of rhel7?
much needed here as well
+1
Would be really great, Thanks a lot.
Dietmar
--
_
D i e t m a r R i e d e r, Mag.Dr.
Innsbruck Medical
On 2020-04-02 16:40, konstantin.ilya...@mediascope.net wrote:
> I have done it.
> I am not sure if I didn’t miss something, but I upgraded a test cluster from
> CentOs7.7.1908+Ceph14.2.8 to Debian10.3+Ceph15.2.0.
>
> Preparations:
> - 6 nodes with OS CentOs7.7.1908, Ceph14.2.8:
> -
On 2020-04-02 12:24, Paul Emmerich wrote:
> Safe to ignore/increase the warning threshold. You are seeing this
> because the warning level was reduced to 200k from 2M recently.
>
> The file will be sharded in a newer version which will clean this up
>
Thanks Paul,
would that "newer version" be
Hi,
I'm trying to understand the "LARGE_OMAP_OBJECTS 1 large omap objects"
warning for our cephfs metadata pool.
It seems that pg 5.26 has a large omap object with > 200k keys
[WRN] : Large omap object found. Object:
5:654134d2:::mds0_openfiles.0:head PG: 5.4b2c82a6 (5.26) Key count:
286083
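If one follows the advice to raise the warning threshold until the openfiles object gets sharded, the relevant option is the deep-scrub key-count threshold; a sketch (the new value is arbitrary, just above the observed key count):

    # default threshold is 200000 keys
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 350000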
On 2020-03-24 23:37, Sage Weil wrote:
> On Tue, 24 Mar 2020, konstantin.ilya...@mediascope.net wrote:
>> Is it possible to provide instructions about upgrading from CentOs7+
>> ceph 14.2.8 to CentOs8+ceph 15.2.0 ?
>
> You have ~2 options:
>
> - First, upgrade Ceph packages to 15.2.0. Note that
pus ready.
>
> ta ta
>
> Jake
>
>
> On 3/16/20 4:58 PM, Dietmar Rieder wrote:
>> On 2020-03-03 13:36, Abhishek Lekshmanan wrote:
>>>
>>> This is the eighth update to the Ceph Nautilus release series. This release
>>> fixes issues across a ra
On 2020-03-03 13:36, Abhishek Lekshmanan wrote:
>
> This is the eighth update to the Ceph Nautilus release series. This release
> fixes issues across a range of subsystems. We recommend that all users upgrade
> to this release. Please note the following important changes in this
> release; as
Oh, didn't realize, Thanks
Dietmar
On 2020-03-16 09:44, Ashley Merrick wrote:
> This was a bug in 14.2.7 and calculation for EC pools.
>
> It has been fixed in 14.2.8
>
>
> On Mon, 16 Mar 2020 16:21:41 +0800 Dietmar Rieder wrote
>
> Hi,
>
>
Hi,
I was planning to activate the pg_autoscaler on an EC (6+3) pool which I
created two years ago.
Back then I calculated the total # of pgs for this pool with a target
per-osd pg # of 150 (this was the recommended per-osd pg number as far as I
recall).
I used the RedHat ceph pg per pool calculator
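For reference, the autoscaler can be consulted before it is allowed to act; a sketch with a placeholder pool name:

    # show what the autoscaler would do without changing anything
    ceph osd pool autoscale-status
    # then opt the EC pool in
    ceph osd pool set mypool_ec pg_autoscale_mode on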
will keep an eye on it.
Thanks again
Dietmar
On 2020-02-13 09:37, Dietmar Rieder wrote:
> Hi,
>
> they were not down as far as I can tell from the affected osd logs at
> the time in question.
> I'll try to play with those values, thanks. Is there anything else that
> might help?
]
[281203.017510] RSP
[281203.019743] CR2: 0010
# uname -a
Linux zeus.icbi.local 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4
23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
More dmesg extract attached.
Should I file a bug report?
Dietmar
On 2020-02-12 13:32, Dietmar Rieder wrote:
>
Hi,
we sometimes lose access to our cephfs mount and get permission denied
if we try to cd into it. This happens apparently only on some of our HPC
cephfs-client nodes (fs mounted via kernel client) when they are busy
with calculation and I/O.
When we then manually force unmount the fs and
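For completeness, the manual recovery hinted at above (force unmount, then remount) looks roughly like this; mount point, monitors and options are placeholders:

    # force/lazy unmount the hung mount point, then mount it again
    umount -f -l /mnt/cephfs
    mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
        -o name=hpc,secretfile=/etc/ceph/hpc.secret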
worked fine for us as well
D.
On 2020-02-12 09:33, Massimo Sgaravatto wrote:
> We skipped from Luminous to Nautilus, skipping Mimic
> This is supported and documented
>
> On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote:
>
>> Hi,
>>
>> we also skipped Mimic when upgrading from L --> N and it