The key is this line:

Read 827 live and 6948 tombstoned cells


That means the query had to scan past a large number of deleted or TTLed
cells in that row just to return the 827 live ones, which is why the read
is slow. One option to help with that is to set a lower gc_grace_seconds
for the table and repair more frequently; this lets tombstones get purged
more quickly. Another option is to adjust your data model so that you
periodically switch to a new row (for example, by including a time bucket
in the row key), so tombstones don't accumulate indefinitely in one row.
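As a sketch of the first option (the table name is taken from your query
below; the value here is just an illustration, and the safe value depends
on how often you actually run repair):

```sql
-- Lower gc_grace_seconds (default is 864000, i.e. 10 days) so tombstones
-- become eligible for purging during compaction sooner.
-- Only safe if every node is repaired within this window; otherwise
-- deleted data can resurrect.
ALTER TABLE fc.co WITH gc_grace_seconds = 86400;  -- 1 day
```

After changing it, the tombstones still have to be compacted away before
reads get cheaper, so the improvement isn't immediate.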


On Wed, Oct 23, 2013 at 7:20 PM, Matt Mankins <mmank...@fastcompany.com> wrote:

> Hi.
>
> I have a table with about 300k rows in it, and am doing a query that
> returns about 800 results.
>
> select * from fc.co WHERE thread_key = 'fastcompany:3000619';
>
> The read latencies seem really high (upwards of 500ms)? Or is this
> expected? Is this bad schema, or…? What's the best way to trace the
> bottleneck, besides this tracing query:
>
> http://pastebin.com/sherFpgY
>
> Or, how would you interpret that?
>
> I'm not sure that row caches are being used, despite them being turned on
> in the cassandra.yaml file.
>
> I'm using a 3 node cluster on amazon, using datastax community edition,
> cassandra 2.0.1, in the same EC2 availability zone.
>
> Many thanks,
> @Mankins
>
>


-- 
Tyler Hobbs
DataStax <http://datastax.com/>
