Cassandra, being a scale-out database, can load an arbitrary number of
records per hour, given enough nodes.

The best way to do this is, for your given data model, to find your maximum
throughput on a single node by scaling the number of clients until you
start seeing errors (or hit your latency SLA), then pull back by 15-20%.
From there, it's a matter of linearly scaling clients and nodes until you
hit your desired throughput.

I recommend taking a look at tlp-stress, as it's a bit easier to use and
understand: https://thelastpickle.com/blog/2018/10/31/tlp-stress-intro.html
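As a rough sketch, a single-node client ramp with cassandra-stress (which ships with Cassandra) might look something like the following; the thread counts, duration, and node address are placeholder values, not a recommendation:

```shell
# Ramp up client threads against one node; inspect op rate and p99 latency
# in each log. Stop increasing once you see errors or blow your latency SLA,
# then back off 15-20% to find your sustainable per-node throughput.
for threads in 50 100 200 400 800; do
  cassandra-stress write duration=5m \
    -rate threads=${threads} \
    -node 10.0.0.1 \
    -log file=stress_${threads}threads.log
done
```

The built-in `write` workload uses a synthetic table, so for a realistic number you'd want a user profile (`cassandra-stress user profile=...`) matching your actual data model.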


Best,

Marc Selwan | DataStax | PM, Server Team | (925) 413-7079 |
Twitter <https://twitter.com/MarcSelwan>

Quick links | DataStax <http://www.datastax.com> | Training
<http://www.academy.datastax.com> | Documentation
<http://www.datastax.com/documentation/getting_started/doc/getting_started/gettingStartedIntro_r.html>
| Downloads <http://www.datastax.com/download>



On Tue, Aug 20, 2019 at 7:16 AM Surbhi Gupta <surbhi.gupt...@gmail.com>
wrote:

> Have you tried YCSB?
> It is a tool from Yahoo for stress testing NoSQL databases.
>
> On Tue, Aug 20, 2019 at 3:34 AM <yanga.zuke...@condorgreen.com> wrote:
>
>> Hi Everyone,
>>
>>
>>
>> Has anyone here used cassandra-stress before? I want to test whether it's
>> possible to load 600 million records per hour into Cassandra, or
>>
>> find a better way to optimize Cassandra for this case.
>>
>> Any help will be highly appreciated.
>>
>>
>>
>> Sent from Mail for Windows
>>
>
