Whether 600m rows per hour is good or bad depends on the hardware you are
using (do you have 1 node or 1000? 2 cores each or 16?) and the data you
are writing (is it 10 bytes per row or 100kb?).
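To put the target rate in perspective, here is a quick back-of-the-envelope conversion; the node count and row size below are hypothetical examples, not measurements from any real cluster:

```python
# Convert the 600M rows/hour target into per-second and per-node rates.
# The node count and row size are illustrative assumptions only.
rows_per_hour = 600_000_000
rows_per_sec = rows_per_hour / 3600          # ~166,667 writes/sec cluster-wide
nodes = 6                                    # hypothetical cluster size
per_node = rows_per_sec / nodes              # ~27,778 writes/sec per node
row_bytes = 100                              # hypothetical average row size
mb_per_sec = rows_per_sec * row_bytes / 1e6  # ~16.7 MB/sec ingest cluster-wide
print(round(rows_per_sec), round(per_node), round(mb_per_sec, 1))
```

Whether ~167k writes/sec is trivial or demanding depends entirely on those two assumed numbers, which is exactly why the hardware and row-size context matters.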
In general, I think you will need to supply a lot more context about your
use case and setup to get an
Thanks for the feedback.
Just to elaborate: I am currently writing 600m rows per hour and need to
understand whether this is on target, or whether there are better ways to
write or perhaps to structure the keyspaces and tables.
And I can use the Cassandra St
If you’re after some benchmark that someone else has already run to help
estimate sizing, we pretty regularly publish benchmarking on various cloud
provider instances.
For example, see:
https://www.instaclustr.com/announcing-instaclustr-support-for-aws-i3en-instances/
and https://www.instaclustr.c
Cassandra, being a scale-out database, can load any arbitrary number of
records per hour.
The best way to do this is, for your given data model, to find your max
throughput on a single node by scaling the number of clients until you
start seeing errors (or hit your latency SLA), then pull back
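That ramp-up-then-pull-back procedure can be sketched as a simple loop. The `measure` function here is a hypothetical stand-in for running cassandra-stress (or any load client) at a given client count and reading back throughput, latency, and errors; the toy model at the bottom only exists to make the sketch runnable:

```python
# Sketch of the single-node capacity search described above.
# measure() is a placeholder: in practice it would drive the cluster with
# cassandra-stress (or your own client) at `clients` threads and report results.
from typing import Callable, NamedTuple

class Result(NamedTuple):
    throughput: float  # ops/sec achieved
    p99_ms: float      # 99th-percentile latency in milliseconds
    errors: int        # failed/timed-out operations

def find_max_clients(measure: Callable[[int], Result],
                     latency_sla_ms: float,
                     start: int = 8, max_clients: int = 4096) -> int:
    """Double the client count until errors or an SLA breach, then pull back
    to the last client count that was still healthy."""
    best = start
    clients = start
    while clients <= max_clients:
        r = measure(clients)
        if r.errors > 0 or r.p99_ms > latency_sla_ms:
            break  # overloaded: the last good count is `best`
        best = clients
        clients *= 2
    return best

# Toy stand-in for a real node: latency rises linearly with client count.
def fake_measure(clients: int) -> Result:
    return Result(throughput=clients * 900.0,
                  p99_ms=2.0 + clients * 0.05,
                  errors=0 if clients <= 256 else clients)

print(find_max_clients(fake_measure, latency_sla_ms=10.0))
```

Once you have the single-node number, multiplying by node count gives a first-order cluster estimate, though replication factor and consistency level will eat into it.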
Have you tried YCSB?
It is a tool from Yahoo for stress testing NoSQL databases.
On Tue, Aug 20, 2019 at 3:34 AM wrote:
> Hi Everyone,
>
>
>
> Has anyone here used cassandra-stress before? I want to test whether it's
> possible to load 600 million records per hour into Cassandra or
>
> Find a better