Have you tried asking for CSV as the output format? Then you don't have
any XML wrapper and you get your IDs one per line. I tried it returning
about 400,000 rows and it was just fine.
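
Something like this, for example (assuming the unique key field is named
"id" and the default select handler; the host, core, and field names here
are illustrative, not from your setup):

  curl "http://localhost:8983/solr/select?q=<your query>&fl=id&rows=400000&wt=csv&csv.header=false"

With fl limited to that one field and csv.header=false, every line of the
response body is just one id.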

Regards,
   Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all
at once. Lately, it doesn't seem to be working. (Anonymous - via GTD
book)


On Wed, Sep 12, 2012 at 9:52 AM, Paul Libbrecht <p...@hoplahup.net> wrote:
> Isn't XSLT the bottleneck here?
> I have not yet met an incremental XSLT processor, although I have heard
> it claimed that XSLT 1 could, in principle, be processed incrementally.
>
> If you start to do this kind of processing, I think you have no choice
> but to write your own output method, as sketched below.
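>
> A rough, untested sketch against the Solr 4.x plugin API (assuming the
> unique key field is named "id"; class and field names are illustrative)
> of a response writer that streams one id per line:
>
>   import java.io.IOException;
>   import java.io.Writer;
>   import java.util.HashSet;
>   import java.util.Set;
>
>   import org.apache.lucene.document.Document;
>   import org.apache.solr.common.util.NamedList;
>   import org.apache.solr.request.SolrQueryRequest;
>   import org.apache.solr.response.QueryResponseWriter;
>   import org.apache.solr.response.ResultContext;
>   import org.apache.solr.response.SolrQueryResponse;
>   import org.apache.solr.search.DocIterator;
>   import org.apache.solr.search.DocList;
>
>   public class IdListResponseWriter implements QueryResponseWriter {
>
>     public void init(NamedList args) {}
>
>     public String getContentType(SolrQueryRequest req,
>         SolrQueryResponse rsp) {
>       return "text/plain";
>     }
>
>     public void write(Writer out, SolrQueryRequest req,
>         SolrQueryResponse rsp) throws IOException {
>       // In Solr 4.x the "response" entry holds the search result.
>       DocList docs = ((ResultContext) rsp.getValues().get("response")).docs;
>       Set<String> fields = new HashSet<String>();
>       fields.add("id"); // assumption: "id" is the unique key field
>       DocIterator it = docs.iterator();
>       while (it.hasNext()) {
>         // Load only the id field for each hit and stream it out directly,
>         // instead of building the whole XML response and transforming it.
>         Document d = req.getSearcher().doc(it.nextDoc(), fields);
>         out.write(d.get("id"));
>         out.write('\n');
>       }
>     }
>   }
>
> Register it in solrconfig.xml with a <queryResponseWriter> element and
> select it with wt=<the name you registered>.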
>
> Paul
>
>
> On 12 Sep 2012, at 15:47, Rohit Harchandani wrote:
>
>> Hi all,
>> I have a Solr index with 5,000,000 documents and my index size is 38GB.
>> When I query for about 400,000 documents based on certain criteria, Solr
>> finds them really quickly but does not return the data for close to 2
>> minutes. The unique key field is the only field I am requesting. Also, I
>> apply an XSLT transformation to the response to get a comma-separated
>> list of unique keys. Is there a way to improve this speed? Would
>> sharding help in this case?
>> I am currently using Solr 4.0 beta in my application.
>> Thanks,
>> Rohit
>
