We store objects that are a couple of tens of K, sometimes 100K, and we
store quite a few of these per row, sometimes hundreds of thousands.
One problem we encountered early on was that these rows would become so big
that C* couldn't compact them in memory and had to fall back to slow
two-pass compaction.
So was the point of breaking the data into 36 parts to bring each row under
the 64 or 128MB threshold?
On Tue, Jul 9, 2013 at 3:18 AM, Theo Hultberg t...@iconara.net wrote:
We store objects that are a couple of tens of K, sometimes 100K, and we
store quite a few of these per row, sometimes hundreds of thousands.
yes, by splitting the rows into 36 parts it's very rare that any single part
gets big enough to impact the cluster's performance. there are still rows
that are bigger than the in-memory compaction limit, but when it's only a few
of them it doesn't matter as much.
T#
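
A minimal sketch of that kind of row splitting (none of the names below come
from the thread; the bucket count, key format, and class names are just
illustrative). The idea is to append a deterministic bucket number to the row
key so one logical row becomes 36 physical rows:

    // Sketch: split one logical wide row into 36 physical rows so no single row
    // grows past the in-memory compaction limit (in_memory_compaction_limit_in_mb
    // in cassandra.yaml, 64MB by default).
    public final class RowBucketing {
        private static final int NUM_BUCKETS = 36;

        // Stable bucket derived from the object id, so reads know which row to hit.
        static int bucketFor(String objectId) {
            return (objectId.hashCode() & 0x7fffffff) % NUM_BUCKETS;
        }

        // Physical row key = logical key + bucket suffix, e.g. "user123:17".
        static String rowKeyFor(String logicalKey, String objectId) {
            return logicalKey + ":" + bucketFor(objectId);
        }
    }

Reads for a single object go straight to the bucket derived from its id;
scanning a whole logical row means fanning out over all 36 buckets.
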
On Tue, Jul 9, 2013 at 5:43 PM, S Ahmed wrote:
I'm guessing that most people use cassandra to store relatively smaller
payloads like 1-5kb in size.
Is there anyone using it to store say 100kb (1/10 of a megabyte) and if so,
was there any tweaking or gotchas that you ran into?
100kb should be fine. For larger values, a lot of people have been doing
file chunking for a while now. The Astyanax folks have a recipe for this:
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
The DataStax Enterprise CassandraFileSystem impl works similarly.
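
For reference, usage of that recipe looks roughly like this. This is a sketch
only: it assumes an already-connected Astyanax Keyspace and a column family
named "chunked_objects", and the class and method names are recalled from the
linked wiki page, so they may differ between Astyanax versions:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.recipes.storage.CassandraChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ChunkedStorage;
    import com.netflix.astyanax.recipes.storage.ChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ObjectMetadata;

    public class ChunkedObjectSketch {
        static void storeAndFetch(Keyspace keyspace, byte[] document) throws Exception {
            ChunkedStorageProvider provider =
                    new CassandraChunkedStorageProvider(keyspace, "chunked_objects");

            // Write: the recipe splits the stream into fixed-size chunk columns.
            ChunkedStorage.newWriter(provider, "doc-123", new ByteArrayInputStream(document))
                    .withChunkSize(0x10000)   // 64KB chunks; tune to your payload size
                    .call();

            // Read: fetch the metadata for the object size, then stream the chunks back.
            ObjectMetadata meta = ChunkedStorage.newInfoReader(provider, "doc-123").call();
            ByteArrayOutputStream out = new ByteArrayOutputStream(meta.getObjectSize().intValue());
            ChunkedStorage.newReader(provider, "doc-123", out)
                    .withBatchSize(8)
                    .withConcurrencyLevel(2)
                    .call();
        }
    }

Keeping each individual column at chunk size sidesteps the large-row issues
discussed earlier in the thread.
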
Your intuition is
I regularly store word and pdf docs in cassandra without any issues.
On Mon, Jul 8, 2013 at 1:46 PM, S Ahmed sahmed1...@gmail.com wrote:
I'm guessing that most people use cassandra to store relatively smaller
payloads like 1-5kb in size.
Is there anyone using it to store say 100kb (1/10 of a megabyte) and if so,
was there any tweaking or gotchas that you ran into?
Hi Peter,
Can you describe your environment, # of documents and what kind of usage
pattern you have?
On Mon, Jul 8, 2013 at 2:06 PM, Peter Lin wool...@gmail.com wrote:
I regularly store word and pdf docs in cassandra without any issues.
On Mon, Jul 8, 2013 at 1:46 PM, S Ahmed sahmed1...@gmail.com wrote: