Probably the bulk indexing feature is not implemented for Tika processing,
but you can easily put together a script yourself:

Extract in a loop over the Word files in a directory:

curl "http://localhost:8983/solr/update/extract?literal.id=doc5&defaultField=text" \
  --data-binary @tutorial.html -H 'Content-type:text/html'

Notice literal.id, which you need to set to a unique value for each file in the loop.
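For example, a minimal bash sketch of such a loop (the docs/ directory, the
filename-based id, and the application/msword content type are just
assumptions here, not part of the example above; adjust them to your setup):

#!/bin/bash
# Post every .doc file in docs/ to the ExtractingRequestHandler.
for f in docs/*.doc; do
  # Use the file name (without extension) as the unique literal.id;
  # ids containing spaces or special characters would need URL-encoding.
  id=$(basename "$f" .doc)
  curl "http://localhost:8983/solr/update/extract?literal.id=$id&defaultField=text" \
    --data-binary @"$f" -H 'Content-type:application/msword'
done
# Commit once at the end so the extracted documents become searchable.
curl "http://localhost:8983/solr/update" --data-binary '<commit/>' \
  -H 'Content-type:text/xml'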

Or submit to Solr using the recursive directory scanning feature of post.jar
(you need Solr 4.x for this):

java -Dauto -Drecursive -jar post.jar .
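
If I remember right, the 4.x post.jar also understands a filetypes property,
so you could limit the crawl to Word documents; the property name below is an
assumption on my part, so check the tool's usage output first:

# Assumption: -Dfiletypes is accepted by the 4.x post.jar (SimplePostTool).
java -Dauto -Drecursive -Dfiletypes=doc,docx -jar post.jar /path/to/word/docs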

HTH,
Dmitry

On Tue, Mar 5, 2013 at 12:15 PM, anarchos78
<rigasathanasio...@hotmail.com> wrote:

> Thank you for your reply. Reading the Solr Wiki at
> http://wiki.apache.org/solr/ExtractingRequestHandler I don't see how to
> handle bulk Word doc indexing. Is there any other source for that?
