You’re basically looking to query and aggregate the data arbitrarily. You 
may have better luck using Spark or Solr pointing at a single backing table 
in Cassandra.
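
For example, with the DataStax Spark Cassandra Connector you can read that 
one table and express each of your per-query tables as an ordinary Spark 
query instead. A minimal sketch (the keyspace "ks", table "events", 
connection host, and filter value below are placeholders, not your real 
schema):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.desc

    val spark = SparkSession.builder()
      .appName("adhoc-aggregation")
      .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder
      .getOrCreate()

    // Read the single backing table once...
    val events = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "ks", "table" -> "events"))
      .load()

    // ...then answer e.g. your query2 with a plain filter/group-by
    // instead of a second, differently-keyed Cassandra table.
    events
      .where(events("column1") === "some-value")
      .groupBy("column3", "column4")
      .count()
      .orderBy(desc("count"))
      .show()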



-- 
Jeff Jirsa


> On Feb 18, 2018, at 3:38 AM, onmstester onmstester <onmstes...@zoho.com> 
> wrote:
> 
> I have a single structured row type as input, arriving at a rate of 10K 
> rows per second. Each row has 20 columns, and several queries must be 
> answered over these inputs. Because most of the queries need a different 
> WHERE, GROUP BY, or ORDER BY, the final data model ended up like this:
>     primary key for the table serving query1 : ((column1,column2),column3,column4)
>     primary key for the table serving query2 : ((column3,column4),column2,column1)
>     and so on
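> 
> A minimal CQL sketch of two of these tables (the text types and the 
> remaining 16 columns are omitted placeholders, not my real schema):
> 
>     CREATE TABLE query1 (
>         column1 text,
>         column2 text,
>         column3 text,
>         column4 text,
>         PRIMARY KEY ((column1, column2), column3, column4)
>     );
>     CREATE TABLE query2 (
>         column1 text,
>         column2 text,
>         column3 text,
>         column4 text,
>         PRIMARY KEY ((column3, column4), column2, column1)
>     );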
> 
> I am aware of the practical limit on the number of tables in a Cassandra 
> cluster (200 tables triggers a warning, 500 fails). Because every input row 
> must be inserted into every one of these tables, the total write rate 
> becomes very large:
> 
> writes per second = 10K (input rate) * number of tables (queries) * 
> replication factor
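> 
> For example (the table count and replication factor here are hypothetical): 
> with 10 query tables and a replication factor of 3, that is 
> 10,000 * 10 * 3 = 300,000 writes per second across the cluster.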
> 
> The main question: am I on the right path? Is it normal to have a table per 
> query even when the input rate is already this high? Shouldn't I use 
> something like Spark or Hadoop on top instead of relying on the bare data 
> model, or even HBase instead of Cassandra?
> 
> 
