Anders,
how many items are you holding in that representation list? Also, it could
be that the deserialization is taking up memory. Can you test just
returning a constant string (so no response is built up in memory) and
check if that changes anything? If it does, we could narrow the problem
down to the REST string building ...
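One way to run that experiment is to put the constant behind a switch, so the normal path and the test path share everything else. This is only a sketch with made-up names (`nextPageResponse` and `returnConstant` are not from the plugin); the point is just to stub out the per-item string building:

```java
import java.util.Arrays;
import java.util.Iterator;

public class StubResponseDemo {

    // Builds the page response from the iterator, or returns a constant
    // when returnConstant is set. If the heap growth disappears on the
    // constant path, the response building is the likely memory hog.
    static String nextPageResponse(Iterator<String> results, int pageSize,
                                   boolean returnConstant) {
        if (returnConstant) {
            return "constant"; // test path: no per-item string building
        }
        StringBuilder sb = new StringBuilder(); // normal path
        for (int i = 0; i < pageSize && results.hasNext(); i++) {
            sb.append(results.next()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Iterator<String> it = Arrays.asList("a", "b", "c").iterator();
        System.out.println(nextPageResponse(it, 2, false)); // "a\nb\n"
        System.out.println(nextPageResponse(it, 2, true));  // "constant"
    }
}
```

Running both paths against the same workload keeps the comparison fair.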

Cheers,

/peter neubauer

GTalk:      neubauer.peter
Skype       peter.neubauer
Phone       +46 704 106975
LinkedIn   http://www.linkedin.com/in/neubauer
Twitter      http://twitter.com/peterneubauer

http://www.neo4j.org              - NOSQL for the Enterprise.
http://startupbootcamp.org/    - Öresund - Innovation happens HERE.


2011/11/15 Anders Lindström <andli...@hotmail.com>

>
> Hi all,
> I'm currently writing a server plugin. I need it to make some specialized
> queries that are not supported by the standard REST API. The important
> methods I expose are 'query' and 'get_next_page', the latter to support
> results pagination (i.e. the plugin is stateful).
> In 'query', I run my query against the Neo4j backend, and store a Node
> iterator to the query results (this is either an iterator originating from
> 'getAllNodes', or a Lucene IndexHits<Node> instance). In 'get_next_page', I
> run through the next N items of the iterator and return these as a
> ListRepresentation. The same iterator object is kept across all page
> retrievals, but of course stepped forward N steps for every invocation.
> After having gone through all pages, the reference to the Node iterator is
> removed.
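The paging pattern described above can be sketched in plain Java; `Paginator` and `nextPage` are hypothetical names standing in for the plugin's 'get_next_page', and a real plugin would hold a Neo4j Node iterator rather than a plain `Iterator`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Keeps one iterator alive across page requests and materializes at most
// pageSize items per call, so each call's allocation is bounded by the
// page size rather than the full result set.
class Paginator<T> {
    private final Iterator<T> results;
    private final int pageSize;

    Paginator(Iterator<T> results, int pageSize) {
        this.results = results;
        this.pageSize = pageSize;
    }

    // Returns the next page, or an empty list once the iterator is drained.
    List<T> nextPage() {
        List<T> page = new ArrayList<T>(pageSize);
        while (page.size() < pageSize && results.hasNext()) {
            page.add(results.next());
        }
        return page;
    }
}

public class PagingDemo {
    public static void main(String[] args) {
        Paginator<Integer> p =
            new Paginator<Integer>(Arrays.asList(1, 2, 3, 4, 5).iterator(), 2);
        System.out.println(p.nextPage()); // [1, 2]
        System.out.println(p.nextPage()); // [3, 4]
        System.out.println(p.nextPage()); // [5]
    }
}
```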
> Now, as I understand it, the only heap space I should be concerned about
> is what I allocate locally in my methods, since the stored
> reference to the iterator object is just a tiny reference, and iterator
> results are fetched lazily (i.e., even though the iterator covers a result
> set greater than the allotted heap size, I shall be able to page through it
> within given heap space if the page size is small enough). But when I run
> my plugin, this does not seem to be the case. I can make several successful
> calls in a row to 'get_next_page', but then after a while bump into "GC
> overhead limit exceeded" which I cannot quite understand. I am rather
> certain the size of each page returned is within the allotted heap size.
> For some reason the heap usage seems to grow with successive calls to
> 'get_next_page', which I cannot explain given my understanding of the
> Node iterators in Neo4j.
> How do I avoid hitting this GC overhead limit? Am I missing something?
> (And yes, I've tried using different values of the allowed heap space by
> fiddling in the conf-files, and sure I can give tons of memory to the
> instance, and then it works, but I shouldn't have to give more heap space
> than what Neo4j "needs", plus my page size).
> Thanks!
> Regards,
> Anders
>
> _______________________________________________
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>
