[ https://issues.apache.org/jira/browse/CASSANDRA-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13202433#comment-13202433 ]

Sylvain Lebresne commented on CASSANDRA-3861:
---------------------------------------------

But what if your indexes generally return ~10 rows, but you know some very rare 
ones can be bigger, yet still fit reasonably in memory? Typically it's hard for 
you to set a limit lower than, say, 10000 for the query (maybe the rows are 
skinny), but 99% of the queries will return <= 10 rows. It's then a waste of 
memory to always allocate those 10000 entries.

Don't get me wrong, I agree that people should implement paging as soon as they 
have any doubt that a given index could return an unbounded number of rows, but 
it feels weird to arbitrarily force people to pretty much *always* implement 
paging, and to make bigger allocations than necessary in every case.
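For reference, the client-side paging being discussed can be sketched as below. This is a self-contained toy, not Cassandra's API: fetchPage stands in for an indexed query issued with a small count and a resume key, and every name in it is illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {
    // Stand-in for the full indexed result set held server-side.
    static final List<String> INDEXED_KEYS = List.of("a", "b", "c", "d", "e", "f", "g");

    // Stand-in for one indexed query: up to pageSize keys starting at
    // (and including) startKey; null startKey means "from the beginning".
    static List<String> fetchPage(String startKey, int pageSize) {
        int from = startKey == null ? 0 : INDEXED_KEYS.indexOf(startKey);
        int to = Math.min(from + pageSize, INDEXED_KEYS.size());
        return new ArrayList<>(INDEXED_KEYS.subList(from, to));
    }

    static List<String> fetchAll(int pageSize) {
        List<String> result = new ArrayList<>();  // grows on demand, no upfront count
        String startKey = null;
        while (true) {
            List<String> page = fetchPage(startKey, pageSize);
            // Every page after the first repeats the resume key as its
            // first row, so skip it to avoid duplicates.
            int skip = startKey == null ? 0 : 1;
            for (int i = skip; i < page.size(); i++) {
                result.add(page.get(i));
            }
            if (page.size() < pageSize) {
                break;                            // short page: nothing left
            }
            startKey = page.get(page.size() - 1); // resume from the last key seen
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fetchAll(3));
    }
}
```

The point of the sketch is only that each iteration's allocation is bounded by pageSize, regardless of how many rows the index ultimately matches.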

As for protecting people when going from testing to production: if someone does 
badly estimate the maximum size of a given indexed row, then you can be sure 
their code doesn't handle paging (since they were sure the indexed row couldn't 
be that big), and in that case I'd rather OOM (i.e. indicate that the user made 
a mistake) than silently return what is, for the application, a wrong result.
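The allocation at issue is ArrayList's eager one: its int constructor creates the entire backing array immediately, before a single row has been read. A minimal sketch of that failure mode in plain Java, with no Cassandra involved:

```java
import java.util.ArrayList;

public class EagerAllocation {
    // Returns true if merely constructing an ArrayList with the given
    // capacity fails with OutOfMemoryError, false if it succeeds.
    static boolean allocationFails(int capacity) {
        try {
            new ArrayList<Object>(capacity);  // backing Object[capacity] is created here
            return false;
        } catch (OutOfMemoryError e) {
            return true;                      // the upfront allocation itself blew up
        }
    }

    public static void main(String[] args) {
        // A small capacity costs next to nothing...
        System.out.println(allocationFails(10));
        // ...but Integer.MAX_VALUE demands a multi-gigabyte reference array
        // at once, matching the OutOfMemoryError at ArrayList.<init> in the
        // stack trace quoted below.
        System.out.println(allocationFails(Integer.MAX_VALUE));
    }
}
```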
                
> get_indexed_slices throws OOM Error when is called with too big 
> indexClause.count
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3861
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3861
>             Project: Cassandra
>          Issue Type: Bug
>          Components: API, Core
>    Affects Versions: 1.0.7
>            Reporter: Vladimir Tsanev
>            Assignee: Sylvain Lebresne
>             Fix For: 1.0.8
>
>         Attachments: 3861.patch
>
>
> I tried to call get_indexed_slices with Integer.MAX_VALUE as IndexClause.count. 
> Unfortunately the node died with OOM. In the log there is the following error:
> ERROR [Thrift:4] 2012-02-06 17:43:39,224 Cassandra.java (line 3252) Internal 
> error processing get_indexed_slices
> java.lang.OutOfMemoryError: Java heap space
>       at java.util.ArrayList.<init>(ArrayList.java:112)
>       at 
> org.apache.cassandra.service.StorageProxy.scan(StorageProxy.java:1067)
>       at 
> org.apache.cassandra.thrift.CassandraServer.get_indexed_slices(CassandraServer.java:746)
>       at 
> org.apache.cassandra.thrift.Cassandra$Processor$get_indexed_slices.process(Cassandra.java:3244)
>       at 
> org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
>       at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
> Is it necessary to allocate all the memory in advance? I only have 3 keys 
> that match my clause. I do not know the exact number, but in general I am 
> sure that they will fit in memory.
> I can/will implement some calls with paging, but I wanted to test, and I am 
> not happy with the fact that the node disconnected.
> I wonder why ArrayList is used here?
> I think the result is never accessed by index (only iterated), and 
> subList for non-RandomAccess lists (for example LinkedList) will do the 
> same job if no operations other than iteration are used.
> Is this related to the problem described in CASSANDRA-691?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
