Thanks a lot. I understand the bottlenecks involved while a client is
initialized. We have a tight schema for our docs: every doc follows a
strict schema that is enforced before indexing takes place.
You can imagine a sample of our docs having 32-odd keys, with 20 of …
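(Side note, in case it helps: that kind of fixed schema can also be enforced on
the Elasticsearch side with a "dynamic": "strict" mapping, which makes the
cluster reject any doc carrying undeclared keys. A minimal sketch with the
elasticsearch-ruby client; the index name "docs" and the two fields are
placeholders, not your actual schema:

    require 'elasticsearch'

    client = Elasticsearch::Client.new(url: 'http://localhost:9200')

    # dynamic: 'strict' makes ES reject docs whose keys are not in the mapping,
    # mirroring a schema that is already enforced upstream.
    client.indices.create index: 'docs', body: {
      mappings: {
        doc: {
          dynamic: 'strict',
          properties: {
            title:      { type: 'string' },
            updated_at: { type: 'date' }
          }
        }
      }
    }
)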
Thanks a lot for the response.
On Saturday, August 9, 2014 1:15:17 AM UTC+5:30, abhiji...@housing.com
wrote:
Hello everyone,
I wanted to know if it is possible to index the docs through a stream
that pushes data to the Elasticsearch cluster.
Our current problem is to index a huge set …
Thanks for the reply. Cursors are what we used for fetching large data
sets from Postgres. The sequel_pg Ruby gem has also added streaming
support for Postgres versions 9.2 and above. The Ruby code doesn't
complain when we deal with limited data.
But getting batches of 1000 rows from …
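For reference, a minimal sketch of the streaming-plus-batching approach with
Sequel and sequel_pg, assuming a local database URL, a docs table, and a
hypothetical process_batch helper standing in for the in-between processing:

    require 'sequel'

    DB = Sequel.connect('postgres://localhost/mydb')
    DB.extension(:pg_streaming)    # provided by sequel_pg; needs Postgres >= 9.2

    batch = []
    DB[:docs].stream.each do |row|
      batch << row
      if batch.size >= 1000        # hand off every 1000 rows
        process_batch(batch)       # hypothetical per-batch processing
        batch = []
      end
    end
    process_batch(batch) unless batch.empty?

The stream call keeps memory flat because rows are fetched from Postgres
incrementally rather than materializing the whole result set at once.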
Hello everyone,
I wanted to know if it is possible to index the docs through a stream
that pushes data to the Elasticsearch cluster.
Our current problem is to index a huge set of data from Postgres into
Elasticsearch while processing the data in between. We have been able to
stream data out …
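As far as I know, the common approach is to consume the stream yourself and
push the docs to Elasticsearch in bulk batches. A sketch combining the
streaming dataset above with the elasticsearch-ruby gem's bulk API; the
index/type names and the id column are placeholders, not your real schema:

    require 'sequel'
    require 'elasticsearch'

    DB = Sequel.connect('postgres://localhost/mydb')
    DB.extension(:pg_streaming)
    client = Elasticsearch::Client.new(url: 'http://localhost:9200')

    batch = []
    DB[:docs].stream.each do |row|
      doc = row                    # transform/process the row here
      batch << { index: { _index: 'docs', _type: 'doc', _id: doc[:id], data: doc } }
      if batch.size >= 1000
        client.bulk body: batch    # one bulk request per 1000 docs
        batch = []
      end
    end
    client.bulk body: batch unless batch.empty?

Batching through the bulk endpoint avoids one HTTP round trip per doc, which
is usually the bottleneck when moving a large table into the cluster.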