Cleanup would have the same effect, I think, in exchange for a minor
amount of extra CPU spent rewriting the sstables.
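
For the record, that's just running cleanup on the column family in
question; something along these lines (double-check the syntax against
your nodetool):

  nodetool -h <host> cleanup <keyspace> <column_family>

Cleanup rewrites every sstable it touches, so the rewritten files pick
up the column family's current compression settings, including the new
chunk_length_kb; the extra CPU is the cost of recompressing the data.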

On Mon, Oct 31, 2011 at 4:08 AM, Sylvain Lebresne <sylv...@datastax.com> wrote:
> On Mon, Oct 31, 2011 at 9:07 AM, Mick Semb Wever <m...@apache.org> wrote:
>> On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
>>> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>       at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
>>>       at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:66)
>>>       at org.apache.cassandra.io.compress.CompressedRandomAccessReader.metadata(CompressedRandomAccessReader.java:53)
>>>       at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:63)
>>>       at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:896)
>>>       at org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:72)
>>>       at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:748)
>>>       at org.apache.cassandra.db.RowIteratorFactory.getIterator(RowIteratorFactory.java:88)
>>>       at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1310)
>>>       at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:840)
>>>       at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:698)
>>>
>>>
>>> I set chunk_length_kb to 16 as my rows are very skinny (typically 100 bytes).
>>
>>
>> I see now this was a bad choice.
>> The read pattern of these rows is always in bulk, so the chunk_length
>> could have been much higher, which would reduce memory usage (my largest
>> sstable is 61G).
>>
>> After changing the chunk_length, is there any way to rebuild just some
>> sstables rather than having to do a full nodetool scrub?
>
> Provided you're using SizeTieredCompaction (i.e., the default), you can
> trigger a "user defined compaction" through JMX on each of the sstables
> you want to rebuild. Not necessarily a fun process, though. Also note that
> you can scrub just an individual column family, if that was the question.
>
> --
> Sylvain
>
>>
>> ~mck
>>
>> --
>> “An idea is a point of departure and no more. As soon as you elaborate
>> it, it becomes transformed by thought.” - Pablo Picasso
>>
>> | http://semb.wever.org | http://sesat.no |
>> | http://tech.finn.no   | Java XSS Filter |
>>
>
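
On Mick's sizing question above: with chunk_length_kb=16, a 61G sstable
works out to roughly 61GiB / 16KiB ≈ 4 million chunks, and
CompressionMetadata keeps an 8-byte offset per chunk, so that's on the
order of 32MB of chunk offsets for that one file alone. get_range_slices
opens a scanner (and with it a CompressedRandomAccessReader and its
metadata) per sstable, so a handful of large sstables touched at once
can plausibly blow the heap. Going to 64KB chunks would cut the offset
tables by 4x, and since the reads are bulk anyway the larger
decompression unit shouldn't hurt.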
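
On the JMX route Sylvain mentions: below is a rough sketch of what that
looks like from a standalone Java client. It assumes the 1.0-era
CompactionManager MBean operation
forceUserDefinedCompaction(keyspace, commaSeparatedDataFiles) and the
default JMX port 7199; the keyspace, column family and sstable file
names are hypothetical, so verify the operation signature and the file
names against your own node before running it.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RebuildSSTables
{
    public static void main(String[] args) throws Exception
    {
        // Connect to the node's JMX port (7199 is the default).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName compactionManager =
                    new ObjectName("org.apache.cassandra.db:type=CompactionManager");

            // Comma-separated sstable data file names to rewrite
            // (hypothetical names -- substitute the ones you want rebuilt).
            String dataFiles =
                    "MyKeyspace-MyCF-hc-123-Data.db,MyKeyspace-MyCF-hc-124-Data.db";

            // Invoke forceUserDefinedCompaction(keyspace, dataFiles); the
            // rewritten sstables are compressed with the column family's
            // current chunk_length_kb setting.
            mbs.invoke(compactionManager,
                       "forceUserDefinedCompaction",
                       new Object[]{ "MyKeyspace", dataFiles },
                       new String[]{ "java.lang.String", "java.lang.String" });
        }
        finally
        {
            connector.close();
        }
    }
}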



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
