I ran into lost documents when sending bulk requests to my local 
server.
I was sending 1000 documents per request and losing around 80% of them.
Dropping to 10 per request solved it.
Is there any other solution? I have to load 11 million documents, and even 
with multithreading it is slow at 10 documents per request.
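
For reference, this is roughly what I am trying, as a minimal sketch: it 
assumes the Python "elasticsearch" client and a hypothetical "products" 
index. The bulk response reports each rejected item, so inspecting it 
should show whether the missing documents are actually being rejected 
(for example by a full bulk queue) rather than silently dropped.

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

def generate_actions(docs):
    # Wrap each source document in a bulk index action.
    for i, doc in enumerate(docs):
        yield {"_index": "products", "_id": str(i), "_source": doc}

# Placeholder documents standing in for the real 11 million.
docs = ({"field": "value %d" % i} for i in range(11000000))

failed = 0
for ok, item in helpers.streaming_bulk(
        es,
        generate_actions(docs),
        chunk_size=500,        # tune this; smaller chunks put less pressure on the node
        raise_on_error=False,  # report failures per item instead of aborting
):
    if not ok:
        failed += 1
        print("rejected:", item)

print("failed items:", failed)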

Thanks.

On Tuesday, May 1, 2012 at 9:21:27 PM UTC-7, Ivan Brusic wrote:
>
> Just finished bulk indexing 36 million documents to a single 
> node with 5 shards. However, there are only 30 million products in the 
> index. The node stats are: 
>
> docs": { 
> "count": 30287500, 
> "deleted": 0 
> }, 
> "indexing": { 
> "index_total": 38177500, 
> "index_time": "1.6d", 
> "index_time_in_millis": 146190895, 
> "index_current": 0, 
> "delete_total": 0, 
> "delete_time": "0s", 
> "delete_time_in_millis": 0, 
> "delete_current": 0 
> } 
>
> Why the large discrepancy between the expected count, the doc count, 
> and the index_total? 
>
> -- 
> Ivan 
>
