Re: Wide row column slicing - row size shard limit

2012-02-19 Thread aaron morton
> I know the hard limit is 2 billion columns per row. My question is at what size it will slow down read/write performance and maintenance. The blog I reference said the row size should be less than 10MB.

A look at read performance with different row sizes… http://thelastpickle.com/2011/10/0
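For concreteness, here is a minimal sketch (not from the thread) of the row-sharding idea in the subject line: keep each physical row near the ~10MB rule of thumb by spilling a logical wide row into numbered shards. The target size, the assumed average column size, and the "key:shard" format are all illustrative assumptions.

# A minimal sketch of capping physical row size by sharding a logical wide row.
TARGET_ROW_BYTES = 10 * 1024 * 1024   # the ~10MB rule of thumb discussed above
AVG_COLUMN_BYTES = 64                 # assumed average size of column name + value

COLUMNS_PER_SHARD = TARGET_ROW_BYTES // AVG_COLUMN_BYTES

def shard_key(logical_key, column_index):
    """Physical row key for the Nth column of a logical wide row."""
    shard = column_index // COLUMNS_PER_SHARD
    return "%s:%d" % (logical_key, shard)

# The 200,000th column for logical key "42" lands in physical row "42:1",
# keeping each physical row near the target size.
print(shard_key("42", 200000))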

Re: Wide row column slicing - row size shard limit

2012-02-16 Thread Data Craftsman
Hi Aaron Morton and R. Verlangen, thanks for the quick answer. It's good to know Thrift's limit on the amount of data it will accept/send. I know the hard limit is 2 billion columns per row. My question is at what size it will slow down read/write performance and maintenance. The blog I reference said the row size should be less than 10MB.

Re: Wide row column slicing - row size shard limit

2012-02-16 Thread aaron morton
> Based on this blog post on basic time series data modeling with Cassandra,
> http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/

I've not read that one, but it sounds right. Matt Dennis knows his stuff: http://www.slideshare.net/mattdennis/cassandra-nyc-2011-data-modeling

> There…
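For readers skimming the archive, a minimal sketch of the time-bucketing pattern the rubyscale post describes: one physical row per (series, bucket), so no single row grows without bound. The "series:YYYYMMDD" key format and the daily bucket are illustrative assumptions, not something prescribed in the thread.

# A minimal sketch of time-bucketed row keys for time series data:
# one physical row per (series, day), columns keyed by timestamp within it.
from datetime import datetime

def bucketed_row_key(series_id, ts, bucket_fmt="%Y%m%d"):
    """Row key of the time bucket containing ts."""
    return "%s:%s" % (series_id, ts.strftime(bucket_fmt))

ts = datetime(2012, 2, 16, 12, 30)
print(bucketed_row_key("sensor-42", ts))   # -> "sensor-42:20120216"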

Re: Wide row column slicing - row size shard limit

2012-02-16 Thread R. Verlangen
Things you should know:
- Thrift has a limit on the amount of data it will accept/send; you can configure this in Cassandra: 64 MB should still work fine (1). See the paging sketch below.
- Rows should not become huge: this will make "perfect" load balancing impossible in your cluster
- A single row should fit on a disk
- T…
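Tying the first point back to the subject line, here is a minimal sketch of reading a wide row in bounded column slices so that no single response approaches Thrift's configured message size (thrift_framed_transport_size_in_mb in cassandra.yaml). fetch_slice is a hypothetical stand-in for the client's slice query, not a real API from the thread; it is assumed to return (name, value) pairs ordered by column name, starting at `start` inclusively (Cassandra slice semantics).

# A minimal sketch of paging a wide row in bounded slices.
def iter_columns(row_key, fetch_slice, page_size=1000):
    """Yield every (name, value) column of a wide row, one bounded page at a time."""
    start = ""            # empty start = beginning of the row
    skip_first = False
    while True:
        page = fetch_slice(row_key, start, page_size)
        cols = page[1:] if skip_first else page
        for name, value in cols:
            yield name, value
        if len(page) < page_size:   # short page: end of the row reached
            return
        start = page[-1][0]         # resume from the last column seen...
        skip_first = True           # ...and drop it on the next page (inclusive start)

Choosing page_size so that page_size times the typical column size stays well under the configured frame size keeps each response comfortably within the Thrift limit, even for very wide rows.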