In the CQL3 protocol the sizes of collections are unsigned shorts, so the
maximum number of elements in a LIST<...> is 65,536. There's no check, as
far as I know, that stops you from creating lists that are bigger than
that, but the protocol doesn't handle returning them (you get the first
N % 65536 items).
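The wrap-around described above can be sketched as simple modular arithmetic; this is an illustration of the behavior, not driver code, and the function name is hypothetical:

```python
# The CQL3 native protocol encodes collection sizes as an unsigned
# 16-bit short, so an oversized list's size field wraps modulo 2**16.
PROTOCOL_SIZE_LIMIT = 2 ** 16  # 65,536

def elements_returned(n: int) -> int:
    """Number of elements a client would see for a list of n elements
    when the size field silently wraps (a sketch, not real driver code)."""
    return n % PROTOCOL_SIZE_LIMIT

# A list of ~400,000 UUIDs, as in the thread, comes back badly truncated:
print(elements_returned(400_000))  # 6784
```

So not only is most of the list lost, the truncation point looks arbitrary to the client.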
2 billion is the theoretical maximum number of columns under a row. It is
NOT the maximum limit of a CQL collection. The design of CQL collections
currently requires retrieving the entire collection on read.
Fix the number of mappers and reducers according to your hardware.
*Thanks & Regards*
∞
Shashwat Shriparv
I designed a data model for my data that uses a list of UUIDs in a
column. When I designed my data model, my expectation was that most of the
lists would have fewer than a hundred elements, with a few having several
thousand. I discovered in my data a list that has nearly 400,000 items in
it. When
Iterating through lots of records is not a primary use of my data.
However, there are a number of scenarios where scanning the entire contents
of a column family is an interesting and useful exercise. Here are a few:
removal of orphaned records, checking the integrity of a data set, and
analytics.
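A common pattern for scanning a whole column family without loading everything at once is to page through it, asking for the next batch of rows after the last key seen. The sketch below shows that pattern in Python; `fetch_page` is a hypothetical stand-in for a real query (e.g. a CQL `SELECT ... WHERE token(key) > token(?) LIMIT ?`), not an actual driver API:

```python
def scan_all(fetch_page, page_size=1000):
    """Yield every row by repeatedly fetching the next page.

    fetch_page(last_key, limit) is a hypothetical callback that returns
    up to `limit` (key, value) rows strictly after last_key, or an
    empty list when the scan is finished.
    """
    last_key = None
    while True:
        page = fetch_page(last_key, page_size)
        if not page:
            return
        for key, value in page:
            yield key, value
        last_key = page[-1][0]

# Usage with an in-memory stand-in for the database:
rows = [(i, f"v{i}") for i in range(25)]

def fake_fetch(last_key, limit):
    start = 0 if last_key is None else last_key + 1
    return rows[start:start + limit]

assert list(scan_all(fake_fetch, page_size=10)) == rows
```

The key design point is that each query is bounded by `page_size`, so memory use stays flat regardless of how large the column family is.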
On 2013-05-11 14:42:32 +, Robert Wille said:
I'm using the JDBC driver to access Cassandra. I'm wondering if it's
possible to iterate through a large number of records (e.g. to perform
maintenance on a large column family). I tried calling
Connection.createStatement(ResultSet.TYPE_FORWARD_ONLY
On 12.5.2013 at 2:28, Techy Teck wrote:
I am running Cassandra 1.2.2 in production. What kind of problems are you
talking about? Maybe I can find the root cause of why I am seeing bad read
performance with the Astyanax client in the production cluster.
no support for full cassandra 1.2 feature set
no/bad sup