That's odd. What are your autocommit parameters? And are you either
committing or optimizing as part of your program? I'd bump the
autocommit parameters up and NOT commit (or optimize) from your
client if you are....
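
For reference, the knobs I mean are in solrconfig.xml, something like
this (the numbers here are just illustrative, tune them for your load):

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10000</maxDocs>  <!-- commit after this many docs -->
      <maxTime>60000</maxTime>  <!-- or after this many ms -->
    </autoCommit>
  </updateHandler>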

Best
Erick

On Tue, Nov 15, 2011 at 2:17 PM, Tod <listac...@gmail.com> wrote:
> Otis,
>
> The files are only part of the payload.  The supporting metadata exists in a
> database.  I'm pulling that information, as well as the name and location of
> the file, from the database and then sending it to a remote Solr instance to
> be indexed.
>
> I've heard Solr would rather receive the documents it needs to index
> in chunks than one at a time as I'm doing now.  The one-at-a-time
> approach is locking up the Solr server at around 700 entries.  My
> thought was that if I could send them a batch at a time the lockups
> would stop and indexing performance would improve.
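>
> Roughly what I'm picturing for the metadata side (untested sketch;
> "Row" is just my placeholder for the database record, "server" is a
> SolrJ SolrServer, and the chunk size is arbitrary):
>
>   Collection<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
>   for (Row row : rowsFromDb) {
>       SolrInputDocument doc = new SolrInputDocument();
>       doc.addField("id", row.getId());
>       doc.addField("filename", row.getFilePath());
>       // ... other metadata fields from the db ...
>       batch.add(doc);
>       if (batch.size() >= 100) {      // send a chunk at a time
>           server.add(batch);          // one request per chunk
>           batch.clear();
>       }
>   }
>   if (!batch.isEmpty()) server.add(batch);  // flush the remainder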
>
>
> Thanks - Tod
>
> On 11/15/2011 12:13 PM, Otis Gospodnetic wrote:
>>
>> Hi,
>>
>> How about just concatenating your files into one? Would that work for
>> you?
>>
>> Otis
>> ----
>>
>> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
>> Lucene ecosystem search :: http://search-lucene.com/
>>
>>
>>> ________________________________
>>> From: Tod <listac...@gmail.com>
>>> To: solr-user@lucene.apache.org
>>> Sent: Monday, November 14, 2011 4:24 PM
>>> Subject: Help! - ContentStreamUpdateRequest
>>>
>>> Could someone take a look at this page:
>>>
>>> http://wiki.apache.org/solr/ContentStreamUpdateRequestExample
>>>
>>> ... and tell me what code changes I would need to make to be able to
>>> stream a LOT of files at once rather than just one?  It has to be something
>>> simple like a collection of some sort but I just can't get it figured out.
>>> Maybe I'm using the wrong class altogether?
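>>>
>>> Here's essentially what I have now, adapted from that page (one file
>>> per request; pathFromDb and idFromDb are placeholders for values
>>> pulled from my database):
>>>
>>>   ContentStreamUpdateRequest up =
>>>       new ContentStreamUpdateRequest("/update/extract");
>>>   up.addFile(new File(pathFromDb));           // one file per request
>>>   up.setParam("literal.id", idFromDb);
>>>   up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
>>>   server.request(up);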
>>>
>>>
>>> TIA
>>>
>>>
>>>
>
>
