Hi George,
Also, I should have mentioned this before: the results I shared were with a
lowered cache pressure value (in an attempt to keep inodes in cache),
vm.vfs_cache_pressure = 10 (down from the default of 100). The results were
a little ambiguous, but it seemed like that did help somewhat. We haven't ...
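For anyone who wants to reproduce this, here is a minimal sketch (assuming
Linux and root privileges) of checking and lowering the value via /proc/sys;
it is equivalent to sysctl -w vm.vfs_cache_pressure=10:

    # Sketch: inspect and lower vm.vfs_cache_pressure to favour cached inodes/dentries.
    PATH = "/proc/sys/vm/vfs_cache_pressure"

    with open(PATH) as f:
        print("current value:", f.read().strip())   # kernel default is 100

    with open(PATH, "w") as f:
        f.write("10")   # lower pressure -> reclaim dentries/inodes less aggressively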
That's interesting, Mark. It would be great if anyone has a definitive
answer on the potential syncfs-related downside of caching a lot of
inodes. A lot of our testing so far has been on the assumption that
more cached inodes is a pure good.
On Tue, May 2, 2017 at 9:19 AM, Mark Nelson wrote:
I used to advocate that users favor the dentry/inode cache, but it turns out
that's not necessarily a good idea if you are also using syncfs. When syncfs
is used, the kernel will iterate through all cached inodes, rather than just
the dirty ones. With high numbers of cached inodes ...
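A rough illustration of the difference (assuming glibc exposes syncfs(); the
path below is just a made-up example):

    import ctypes, os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    fd = os.open("/var/lib/ceph/osd/ceph-0/current", os.O_RDONLY)  # hypothetical path

    # fsync(2): flushes dirty data/metadata for this one file only.
    os.fsync(fd)

    # syncfs(2): flushes the whole filesystem containing fd. The kernel walks the
    # superblock's inode list, so a very large cached-inode count slows this down
    # even when almost none of those inodes are dirty.
    if libc.syncfs(fd) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    os.close(fd)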
Hi Patrick,
You could add more RAM to the servers, which probably would not increase the
cost too much.
You could change the swappiness value, or use something like
https://hoytech.com/vmtouch/ to pre-cache inode entries.
You could maybe tarball the smaller files before loading them into Ceph.
How ...
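On the tarball idea, a very rough sketch (assuming the python-rados bindings;
the directory, pool, and object names are made up):

    import io, tarfile
    import rados  # python-rados

    # Bundle a directory of small files into a single tar held in memory,
    # then store it as one RADOS object instead of many tiny ones.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add("/data/incoming/batch-0001", arcname="batch-0001")

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("smallobj-pool")      # hypothetical pool name
        ioctx.write_full("batch-0001.tar", buf.getvalue())
        ioctx.close()
    finally:
        cluster.shutdown()

The trade-off is that individual files then have to be located and extracted
from the tar on read, so it mainly helps if the ingest pattern is write-heavy.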
... lot easier to use with Ceph.
Nick
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Patrick Dinnen
Sent: 01 May 2017 19:07
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Maintaining write performance under a steady intake of
small objects
Hello Ceph-users,
One additional detail: we also did filestore testing using Jewel and saw
substantially similar results to those on Kraken.
On Mon, May 1, 2017 at 2:07 PM, Patrick Dinnen wrote:
Hello Ceph-users,
Florian has been helping with some issues we've been experiencing on our
proof-of-concept cluster. Thanks for the replies so far; I wanted to jump in
with some extra details.
All of our testing has been with scrubbing turned off, to remove that as a
factor ...
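For completeness, the usual way to keep scrubbing out of the picture during a
benchmark is to set the cluster-wide noscrub/nodeep-scrub flags and unset them
afterwards; a minimal sketch:

    import subprocess

    # Disable scrubbing cluster-wide for the duration of the test.
    for flag in ("noscrub", "nodeep-scrub"):
        subprocess.run(["ceph", "osd", "set", flag], check=True)

    # ... run the benchmark ...

    # Re-enable scrubbing when done.
    for flag in ("noscrub", "nodeep-scrub"):
        subprocess.run(["ceph", "osd", "unset", flag], check=True)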
On 28/04/2017 at 15:19, Frédéric Nass wrote:
Hi Florian, Wido,
That's interesting. I ran some bluestore benchmarks a few weeks ago on
Luminous dev (1st release) and came to the same (early) conclusion regarding
the performance drop with many small objects on bluestore, regardless of the
number of PGs in the pool. Here is the graph I generated ...
On 24 April 2017 at 19:52, Florian Haas wrote:
Hi everyone,
so this will be a long email — it's a summary of several off-list
conversations I've had over the last couple of weeks, but the TL;DR
version is this question:
How can a Ceph cluster maintain near-constant performance
characteristics while supporting a steady intake of a large number of
small objects?