Since a single request for 10K rows works and a single request for 100K rows 
does not, I very strongly suggest that you use the request size that works. 
Make ten requests of 10K rows each, or even better, 100 requests of 1K rows 
each.
Large requests make large memory demands.
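
If it helps, here is a minimal SolrJ sketch of that paging loop. The server
URL, the query, and the total count are placeholders, not anything taken
from your setup:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrDocumentList;

  public class PagedFetch {
      public static void main(String[] args) throws Exception {
          // Placeholder URL; point this at your own collection.
          HttpSolrServer server =
              new HttpSolrServer("http://localhost:8983/solr/collection1");
          final int pageSize = 1000;           // 1K rows per request
          for (int start = 0; start < 100000; start += pageSize) {
              SolrQuery q = new SolrQuery("*:*");
              q.setStart(start);               // offset into the result set
              q.setRows(pageSize);             // small page, small memory demand
              SolrDocumentList page = server.query(q).getResults();
              if (page.isEmpty()) break;       // fewer matches than expected
              // ... process this page of documents ...
          }
          server.shutdown();
      }
  }

Keep in mind that very deep start offsets get expensive in their own right
(that is what the SOLR-1726 issue Shalin links to below is about), so keep
the total you page through modest.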

wunder

On Jul 5, 2013, at 7:58 AM, Shalin Shekhar Mangar wrote:

> Oops, I actually meant to say that search engines *are not* optimized
> for large pages. See https://issues.apache.org/jira/browse/SOLR-1726
> 
> Well, one of the shards involved in the request is throwing an error.
> Check the logs of your shards. You can also add a shards.info=true
> param to your search, which should return the responses of each shard.
> 
> On Fri, Jul 5, 2013 at 8:18 PM, eakarsu <eaka...@gmail.com> wrote:
>> Thanks for your answer,
>> 
>> I can fetch 10K documents without any issue. I don't think we are hitting an
>> out-of-memory exception, because each Tomcat server in the cluster has 8GB
>> of memory allocated.
>> 
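
For reference, the shards.info param Shalin mentions goes on the query
string like any other parameter; the host, collection, and query here are
placeholders:

  http://localhost:8983/solr/collection1/select?q=*:*&rows=1000&shards.info=true

Each shard then gets its own section in the response, so a shard that is
erroring out should stand out right away.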


