I can't say too much about the performance, but what I would recommend is
that you loop over the results.

The first query gives you the number of groups (ngroups):

q=*:*&group=true&group.ngroups=true&group.field=myfield&start=0&rows=10000

After that, you execute the other queries, simply changing the start parameter:

q=*:*&group=true&group.field=myfield&start=10000&rows=10000
q=*:*&group=true&group.field=myfield&start=20000&rows=10000
q=*:*&group=true&group.field=myfield&start=30000&rows=10000

and so on...
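The paging scheme above can be sketched in a few lines. This is only an illustration of how the query strings vary: the field name "myfield", the page size of 10000, and the helper function names are assumptions, and the snippet builds the URLs rather than executing them against a live Solr instance.

```python
from urllib.parse import urlencode

def first_query(field, rows):
    # Initial request: group=true plus group.ngroups=true to get the group count.
    params = {
        "q": "*:*",
        "group": "true",
        "group.ngroups": "true",
        "group.field": field,
        "start": 0,
        "rows": rows,
    }
    return "/select?" + urlencode(params)

def page_queries(field, ngroups, rows):
    # Follow-up requests: identical query, only the start offset changes,
    # stepping by the page size until all ngroups are covered.
    queries = []
    for start in range(rows, ngroups, rows):
        params = {
            "q": "*:*",
            "group": "true",
            "group.field": field,
            "start": start,
            "rows": rows,
        }
        queries.append("/select?" + urlencode(params))
    return queries

# Example: with 25000 groups and pages of 10000, two follow-up queries are needed.
print(first_query("myfield", 10000))
for q in page_queries("myfield", 25000, 10000):
    print(q)
```

The point of the first request is only to read ngroups from the response; every later request drops group.ngroups=true, since recomputing the count on each page would add unnecessary work.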

I think this way you can avoid the Solr out-of-memory exception.
PS: But be careful: if you store all the rows on the client side, you need
memory there too :-)

Best, Sandro


-----Original Message-----
From: Alok Bhandari [mailto:alokomprakashbhand...@gmail.com]
Sent: Thursday, 3 October 2013 13:02
To: solr-user@lucene.apache.org
Subject: Re: AW: Solr grouping performace

Thanks for the reply, Sandro.

My requirement is that I need all groups and then build compact data from them
to send to the server. I am not sure how much RAM should be allocated to the
JVM instance to make it serve requests faster; any inputs on that are welcome.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-grouping-performace-tp4093300p4093311.html
Sent from the Solr - User mailing list archive at Nabble.com.