[ https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13113326#comment-13113326 ]
Mck SembWever commented on CASSANDRA-3150:
------------------------------------------

This continues to be a problem, and it gets steadily worse through the day. I can see roughly half my cf being read from one split. It doesn't matter whether it's ByteOrderedPartitioner or RandomPartitioner. I've attached two screenshots from the hadoop webpages.

> ColumnFormatRecordReader loops forever (StorageService.getSplits(..) out of
> whack)
> ----------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3150
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3150
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 0.8.4
>            Reporter: Mck SembWever
>            Assignee: Mck SembWever
>            Priority: Critical
>         Attachments: CASSANDRA-3150.patch, attempt_201109071357_0044_m_003040_0.grep-get_range_slices.log
>
>
> From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039
> {quote}
> bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner
> bq. CFIF's inputSplitSize=196608
> bq. 3 map tasks (from 4013) is still running after read 25 million rows.
> bq. Can this be a bug in StorageService.getSplits(..) ?
> getSplits looks pretty foolproof to me but I guess we'd need to add
> more debug logging to rule out a bug there for sure.
> I guess the main alternative would be a bug in the recordreader paging.
> {quote}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
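For context on the "recordreader paging" suspicion raised in the quoted thread: a range-scanning record reader typically fetches rows in batches and restarts the next batch from the last key it saw. A minimal sketch of that pattern, not Cassandra's actual ColumnFormatRecordReader code (all names here are illustrative), shows why an inclusive restart without advancing past the last key can loop forever:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of record-reader paging over a sorted key range.
// Not the real Cassandra implementation; names and data are hypothetical.
public class PagingSketch {
    // Simulated sorted row keys within one input split.
    static final List<String> ROWS = Arrays.asList("a", "b", "c", "d", "e");
    static final int BATCH = 2; // rows fetched per get_range_slices-style call

    // Fetch up to 'limit' keys starting at 'start' (inclusive),
    // mimicking a range slice over [start, end-of-split].
    static List<String> fetchBatch(String start, int limit) {
        List<String> out = new ArrayList<>();
        for (String k : ROWS) {
            if (k.compareTo(start) >= 0) {
                out.add(k);
                if (out.size() == limit) break;
            }
        }
        return out;
    }

    // Paging loop that guarantees progress by restarting strictly after
    // the last key seen. If it instead restarted AT the last key and the
    // batch ever shrank to that one repeated key, the reader would spin
    // forever -- the suspected "loops forever" failure mode.
    static int scan() {
        int seen = 0;
        String start = "a";
        while (true) {
            List<String> batch = fetchBatch(start, BATCH);
            if (batch.isEmpty()) return seen;
            seen += batch.size();
            if (batch.size() < BATCH) return seen; // short batch: range exhausted
            String last = batch.get(batch.size() - 1);
            start = last + "\0"; // smallest key strictly greater than 'last'
        }
    }

    public static void main(String[] args) {
        System.out.println(scan()); // all 5 simulated rows are read exactly once
    }
}
```

The other failure mode discussed here, wildly uneven splits from StorageService.getSplits(..), would show up differently: the paging terminates, but one task's range covers a disproportionate share of the keyspace, matching the attached screenshots.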