Re: Cass 1.1.11 out of memory during compaction ?

2013-11-04 Thread Takenori Sato
I would go with cleanup.

Be careful of this bug:
https://issues.apache.org/jira/browse/CASSANDRA-5454
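For reference, a cleanup run is typically a per-node nodetool invocation; the keyspace and column family names below are placeholders, and on 1.1 it is usual to run it one node at a time since it rewrites sstables:

```shell
# Drop data this node no longer owns; placeholder keyspace/CF names.
nodetool -h localhost cleanup MyKeyspace MyColumnFamily

# Or clean up all keyspaces on the node:
nodetool -h localhost cleanup
```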


On Mon, Nov 4, 2013 at 9:05 PM, Oleg Dulin  wrote:

> If I do that, wouldn't I need to scrub my sstables?
>
>
> Takenori Sato  wrote:
> > Try increasing column_index_size_in_kb.
> >
> > A slice query to get some ranges (SliceFromReadCommand) requires reading
> > all the column indexes for the row, and thus could hit OOM if you have a
> > very wide row.
> >
> > On Sun, Nov 3, 2013 at 11:54 PM, Oleg Dulin 
> wrote:
> >
> > Cass 1.1.11 ran out of memory on me with this exception (see below).
> >
> > My parameters are 8gig heap, new gen is 1200M.
> >
> > ERROR [ReadStage:55887] 2013-11-02 23:35:18,419
> > AbstractCassandraDaemon.java (line 132) Exception in thread
> > Thread[ReadStage:55887,5,main]
> > java.lang.OutOfMemoryError: Java heap space
> >    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
> >    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
> >    at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
> >    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
> >    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
> >    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
> >    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
> >    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
> >    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
> >    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
> >    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
> >    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
> >    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
> >    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
> >    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
> >    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
> >    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
> >    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
> >    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
> >    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
> >    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
> >    at org.apache.cassandra.db.Table.getRow(Table.java:378)
> >    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
> >    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
> >    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
> >    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >    at java.lang.Thread.run(Thread.java:722)
> >
> > Any thoughts?
> >
> > This is a dual data center setup, with 4 nodes in each DC and RF=2 in each.
> >
> > --
> > Regards,
> > Oleg Dulin
> > http://www.olegdulin.com
> 
>
>


Re: Cass 1.1.11 out of memory during compaction ?

2013-11-04 Thread Oleg Dulin
If I do that, wouldn't I need to scrub my sstables?


Takenori Sato  wrote:
> Try increasing column_index_size_in_kb.
>
> A slice query to get some ranges (SliceFromReadCommand) requires reading
> all the column indexes for the row, and thus could hit OOM if you have a
> very wide row.
> 
> On Sun, Nov 3, 2013 at 11:54 PM, Oleg Dulin  wrote:
> 
> Cass 1.1.11 ran out of memory on me with this exception (see below).
> 
> My parameters are 8gig heap, new gen is 1200M.
> 
> ERROR [ReadStage:55887] 2013-11-02 23:35:18,419
> AbstractCassandraDaemon.java (line 132) Exception in thread
> Thread[ReadStage:55887,5,main]
> java.lang.OutOfMemoryError: Java heap space
>    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
>    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
>    at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
>    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
>    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
>    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
>    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
>    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
>    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
>    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
>    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
>    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
>    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
>    at org.apache.cassandra.db.Table.getRow(Table.java:378)
>    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>    at java.lang.Thread.run(Thread.java:722)
> 
> Any thoughts?
>
> This is a dual data center setup, with 4 nodes in each DC and RF=2 in each.
> 
> --
> Regards,
> Oleg Dulin
> http://www.olegdulin.com



Re: Cass 1.1.11 out of memory during compaction ?

2013-11-03 Thread Takenori Sato
Try increasing column_index_size_in_kb.

A slice query to get some ranges (SliceFromReadCommand) requires reading all
the column indexes for the row, and thus could hit OOM if you have a very
wide row.
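As a sketch of that suggestion: column_index_size_in_kb lives in cassandra.yaml, and an index entry is written for roughly every column_index_size_in_kb of serialized row data, so a larger value means fewer index entries (and less heap) per wide row, at the cost of coarser-grained slice reads. The value below is illustrative, not a recommendation:

```yaml
# cassandra.yaml -- illustrative value; the 1.1 default is 64.
# Larger values mean fewer column-index entries per wide row, so the
# per-row index costs less heap to deserialize, but slice reads must
# scan larger chunks of the row.
column_index_size_in_kb: 256
```

Note that existing sstables keep their old index granularity until they are rewritten (e.g. by compaction or a scrub), which is presumably what the scrub question elsewhere in the thread is about.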



On Sun, Nov 3, 2013 at 11:54 PM, Oleg Dulin  wrote:

> Cass 1.1.11 ran out of memory on me with this exception (see below).
>
> My parameters are 8gig heap, new gen is 1200M.
>
> ERROR [ReadStage:55887] 2013-11-02 23:35:18,419
> AbstractCassandraDaemon.java (line 132) Exception in thread
> Thread[ReadStage:55887,5,main]
> java.lang.OutOfMemoryError: Java heap space
>    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
>    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
>    at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
>    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
>    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
>    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
>    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
>    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
>    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
>    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
>    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
>    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
>    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
>    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
>    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
>    at org.apache.cassandra.db.Table.getRow(Table.java:378)
>    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>    at java.lang.Thread.run(Thread.java:722)
>
>
> Any thoughts?
>
> This is a dual data center setup, with 4 nodes in each DC and RF=2 in
> each.
>
>
> --
> Regards,
> Oleg Dulin
> http://www.olegdulin.com
>
>
>


Re: Cass 1.1.11 out of memory during compaction ?

2013-11-03 Thread Mohit Anchlia
Post your GC logs.
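If GC logging isn't already on, a typical way to enable it for the Java 6/7 JVMs used with Cassandra 1.1 is via conf/cassandra-env.sh; the flags are standard HotSpot options, and the log path is a placeholder:

```shell
# conf/cassandra-env.sh -- enable verbose GC logging (restart required).
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"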

Sent from my iPhone

On Nov 3, 2013, at 6:54 AM, Oleg Dulin   wrote:

> Cass 1.1.11 ran out of memory on me with this exception (see below).
> 
> My parameters are 8gig heap, new gen is 1200M.
> 
> ERROR [ReadStage:55887] 2013-11-02 23:35:18,419 AbstractCassandraDaemon.java
> (line 132) Exception in thread Thread[ReadStage:55887,5,main]
> java.lang.OutOfMemoryError: Java heap space
>   at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
>   at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
>   at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
>   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
>   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
>   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
>   at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
>   at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
>   at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
>   at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
>   at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>   at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>   at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
>   at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
>   at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
>   at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
>   at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>   at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>   at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
>   at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
>   at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
>   at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>   at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
>   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
>   at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
>   at org.apache.cassandra.db.Table.getRow(Table.java:378)
>   at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>   at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>   at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> 
> 
> Any thoughts?
>
> This is a dual data center setup, with 4 nodes in each DC and RF=2 in each.
> 
> 
> -- 
> Regards,
> Oleg Dulin
> http://www.olegdulin.com
> 
>