Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Patrick Dinnen
Hi George, also, something I should have mentioned before: the results I shared were with a lowered cache pressure value (in an attempt to keep inodes in cache), vm.vfs_cache_pressure = 10 (down from the default 100). The results were a little ambiguous, but it seemed like that did help somewhat. We haven't
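For anyone reproducing this, a minimal sketch of applying that setting by writing the procfs knob directly (the thread doesn't say exactly how it was set in these tests; persisting it via sysctl.conf or similar is left to the deployment):

    # Sketch: lower vm.vfs_cache_pressure at runtime, equivalent to
    # `sysctl -w vm.vfs_cache_pressure=10`. Requires root.
    KNOB = "/proc/sys/vm/vfs_cache_pressure"

    def set_cache_pressure(value: int) -> None:
        with open(KNOB, "w") as f:
            f.write(str(value))

    def get_cache_pressure() -> int:
        with open(KNOB) as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        set_cache_pressure(10)  # value used above; the kernel default is 100
        print("vm.vfs_cache_pressure =", get_cache_pressure())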

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Patrick Dinnen
That's interesting, Mark. It would be great if anyone has a definitive answer on the potential syncfs-related downside of caching a lot of inodes. A lot of our testing so far has been based on the assumption that more cached inodes are purely a good thing. On Tue, May 2, 2017 at 9:19 AM, Mark Nelson wrote: > I u

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Mark Nelson
I used to advocate that users favor the dentry/inode cache, but it turns out that's not necessarily a good idea if you are also using syncfs: when syncfs is used, the kernel will iterate through all cached inodes, rather than just the dirty ones. With high numbers of cached ino
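To make that behaviour concrete, here is a minimal sketch of issuing syncfs(2) against the filesystem backing one OSD. Python's os module does not expose syncfs, so ctypes is used; the mount point is a hypothetical example, not a path from this thread:

    # Sketch: call syncfs(2) on the filesystem containing mount_point.
    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def syncfs(mount_point: str) -> None:
        fd = os.open(mount_point, os.O_RDONLY)
        try:
            # Per the observation above, the kernel walks the cached (not just
            # dirty) inodes of this filesystem, so a very large inode cache
            # can make this call expensive.
            if libc.syncfs(fd) != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))
        finally:
            os.close(fd)

    syncfs("/var/lib/ceph/osd/ceph-0")  # hypothetical filestore OSD mount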

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread George Mihaiescu
Hi Patrick, You could add more RAM to the servers, which probably would not increase the cost too much. You could change the swappiness value, or use something like https://hoytech.com/vmtouch/ to pre-cache inode entries. You could maybe tarball the smaller files before loading them into Ceph. How
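A rough sketch of the tarball suggestion, assuming the python-rados bindings and an illustrative pool and object name (none of these details were specified in the thread):

    # Sketch: bundle many small files into one in-memory tar archive and store
    # it as a single RADOS object, so the cluster sees one larger write
    # instead of thousands of tiny ones.
    import io
    import tarfile
    import rados

    def store_as_bundle(paths, pool="smallfiles", bundle_name="bundle-0001"):
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w") as tar:
            for p in paths:
                tar.add(p)
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                ioctx.write_full(bundle_name, buf.getvalue())
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()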

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-02 Thread Mark Nelson
On 05/02/2017 01:32 AM, Frédéric Nass wrote: On 28/04/2017 at 17:03, Mark Nelson wrote: On 04/28/2017 08:23 AM, Frédéric Nass wrote: On 28/04/2017 at 15:19, Frédéric Nass wrote: Hi Florian, Wido, That's interesting. I ran some bluestore benchmarks a few weeks ago on Luminous dev (1st

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Frédéric Nass
On 28/04/2017 at 17:03, Mark Nelson wrote: On 04/28/2017 08:23 AM, Frédéric Nass wrote: On 28/04/2017 at 15:19, Frédéric Nass wrote: Hi Florian, Wido, That's interesting. I ran some bluestore benchmarks a few weeks ago on Luminous dev (1st release) and came to the same (early) conclusi

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Nick Fisk
lot easier to use with Ceph. Nick From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Patrick Dinnen Sent: 01 May 2017 19:07 To: ceph-users@lists.ceph.com Subject: [ceph-users] Maintaining write performance under a steady intake of small objects Hello Ceph-users

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Patrick Dinnen
One additional detail: we also did filestore testing using Jewel and saw substantially similar results to those on Kraken. On Mon, May 1, 2017 at 2:07 PM, Patrick Dinnen wrote: > Hello Ceph-users, > > Florian has been helping with some issues on our proof-of-concept cluster, > where we've been e

[ceph-users] Maintaining write performance under a steady intake of small objects

2017-05-01 Thread Patrick Dinnen
Hello Ceph-users, Florian has been helping out on our proof-of-concept cluster, where we've been experiencing these issues. Thanks for the replies so far. I wanted to jump in with some extra details. All of our testing has been with scrubbing turned off, to remove that as a fact
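For context, the usual way to keep scrubbing out of a benchmark is the cluster-wide noscrub/nodeep-scrub flags; a small sketch of that approach (the thread does not say exactly how scrubbing was disabled here):

    # Sketch: set the standard scrub-disabling flags before a test run.
    import subprocess

    for flag in ("noscrub", "nodeep-scrub"):
        subprocess.run(["ceph", "osd", "set", flag], check=True)

    # To restore normal behaviour afterwards:
    #   subprocess.run(["ceph", "osd", "unset", flag], check=True)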

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-04-28 Thread Mark Nelson
On 04/28/2017 08:23 AM, Frédéric Nass wrote: On 28/04/2017 at 15:19, Frédéric Nass wrote: Hi Florian, Wido, That's interesting. I ran some bluestore benchmarks a few weeks ago on Luminous dev (1st release) and came to the same (early) conclusion regarding the performance drop with many smal

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-04-28 Thread Frédéric Nass
On 28/04/2017 at 15:19, Frédéric Nass wrote: Hi Florian, Wido, That's interesting. I ran some bluestore benchmarks a few weeks ago on Luminous dev (1st release) and came to the same (early) conclusion regarding the performance drop with many small objects on bluestore, whatever the number

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-04-28 Thread Frédéric Nass
Hi Florian, Wido, That's interesting. I ran some bluestore benchmarks a few weeks ago on Luminous dev (1st release) and came to the same (early) conclusion regarding the performance drop with many small objects on bluestore, regardless of the number of PGs in the pool. Here is the graph I generate
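For readers who want to reproduce this kind of curve, a sketch of a small-object ingest loop using the python-rados bindings; the pool name, object size and counts are illustrative assumptions, not the actual benchmark parameters:

    # Sketch: write a steady stream of small objects and print the rolling
    # write rate, so any drop-off over time becomes visible. Assumes
    # python-rados and an existing pool named "bench-small".
    import os
    import time
    import rados

    POOL = "bench-small"
    OBJ_SIZE = 4 * 1024   # 4 KiB payloads, for illustration
    TOTAL = 200000
    BATCH = 10000

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)
    try:
        payload = os.urandom(OBJ_SIZE)
        t0 = time.time()
        for i in range(1, TOTAL + 1):
            ioctx.write_full("obj-%012d" % i, payload)
            if i % BATCH == 0:
                rate = BATCH / (time.time() - t0)
                print("%d objects written, %.0f obj/s over last batch" % (i, rate))
                t0 = time.time()
    finally:
        ioctx.close()
        cluster.shutdown()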

Re: [ceph-users] Maintaining write performance under a steady intake of small objects

2017-04-26 Thread Wido den Hollander
> On 24 April 2017 at 19:52, Florian Haas wrote: > > > Hi everyone, > > so this will be a long email — it's a summary of several off-list > conversations I've had over the last couple of weeks, but the TL;DR > version is this question: > > How can a Ceph cluster maintain near-constant perfor

[ceph-users] Maintaining write performance under a steady intake of small objects

2017-04-24 Thread Florian Haas
Hi everyone, so this will be a long email — it's a summary of several off-list conversations I've had over the last couple of weeks, but the TL;DR version is this question: How can a Ceph cluster maintain near-constant performance characteristics while supporting a steady intake of a large number