Justin,
NVMe drives have their own I/O queueing mechanism, and there is a huge
performance difference versus the legacy Linux queue.
Besides a properly configured file system and scheduler, try setting
"scsi_mod.use_blk_mq=1"
on the GRUB command line.
If you are looking for the BFQ scheduler, it's probably built as a module, so
you will have to load it first.
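A minimal sketch of how that might look on a Debian-style system (the file
path and update command are assumptions; adjust for your distro, and reboot
afterwards):

```shell
# In /etc/default/grub, add the parameter to the kernel command line:
GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"

# Then regenerate grub.cfg and reboot:
sudo update-grub
```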
> In regards to setting read ahead, how is this set for NVMe drives? Also,
> below are our compression settings for the table. They are the same as in
> the tests we are running against SAS SSDs, so I don't think the compression
> settings would be the issue.
Check blockdev --report on both devices; it shows the current read-ahead settings.
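For reference, a sketch of checking and changing read-ahead with blockdev (the
device name /dev/nvme0n1 is an assumption; run as root):

```shell
# Report read-ahead (RA column, in 512-byte sectors) for all block devices.
blockdev --report

# Read-ahead for one device.
blockdev --getra /dev/nvme0n1

# Disable read-ahead, as suggested elsewhere in this thread.
blockdev --setra 0 /dev/nvme0n1
```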
Cassandra timeout during write query at consistency ONE (1 replica were
required but only 0 acknowledged the write)
Any insight would be very helpful.
Thank you,
Justin Sanciangco
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, January 5, 2018 5:50 PM
To: user@cassandra.apache.org
Subject: Re: NVMe SSD benchmarking with Cassandra
[inline image: table compression settings]
Please let me know
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, January 5, 2018 5:50 PM
To: user@cassandra.apache.org
Subject: Re: NVMe SSD benchmarking with Cassandra
Second the note about compression chunk size in particular.
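For anyone following along, chunk size is a per-table compression option in
CQL. A sketch with hypothetical keyspace/table names, dropping the default
64 KB chunk to 4 KB for a read-heavy table:

```shell
# Hypothetical names; assumes cqlsh can reach the cluster.
cqlsh -e "ALTER TABLE my_ks.my_table
  WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};"
```

Only SSTables written after the change use the new chunk size; existing files
keep the old one until they are rewritten (e.g. by compaction).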
--
Jeff Jirsa
> On Jan 5, 2018, at 5:48 PM, Jon Haddad wrote:
>
> Generally speaking, disable readahead. After that, it's very likely the
> issue isn't in the disk settings you're using, but is actually in your
> Cassandra config or the data model. How are you measuring things? Are you
> saturating your disks? What resource is your bottleneck?
Oh, I should have added, my compression settings comment only applies to read
heavy workloads, as reading 64KB off disk in order to return a handful of bytes
is incredibly wasteful by orders of magnitude but doesn’t really cause any
problems on write heavy workloads.
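To put rough numbers on that (the 200-byte average row is a made-up example):
a point read has to decompress a whole chunk, so the read amplification is
roughly the chunk size divided by the row size:

```shell
# Approximate read amplification = chunk_bytes / row_bytes (integer math).
row_bytes=200                     # hypothetical average row size
for chunk_kb in 64 4; do
  chunk_bytes=$((chunk_kb * 1024))
  echo "chunk ${chunk_kb}KB -> ~$((chunk_bytes / row_bytes))x read amplification"
done
# chunk 64KB -> ~327x read amplification
# chunk 4KB -> ~20x read amplification
```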
> On Jan 5, 2018, at 5:48 PM, Jon Haddad wrote:
> Generally speaking, disable readahead. After that, it's very likely the
> issue isn't in the disk settings you're using, but is actually in your
> Cassandra config or the data model. How are you measuring things? Are you
> saturating your disks? What resource is your bottleneck?
Can you quantify very bad performance?
--
Jeff Jirsa
> On Jan 5, 2018, at 5:41 PM, Justin Sanciangco wrote:
>
> Hello,
>
> I am currently benchmarking NVMe SSDs with Cassandra and am getting very bad
> performance when my workload exceeds the memory size. What mount settings
> for NVMe drives should I be using?