It's not a dynamic setting, afaik.
Sorry, I don't know for sure what happens when a translog grows forever.
For my purposes, I decided to handle the challenge in front of ES, with
better timing control, and with archive files for replay that I can use
outside of ES too.
Jörg
On Sun, Feb 23, 2014 at 9:21 PM, vinee
Hi ,
I tried the below too without any luck -
curl -XPUT 'localhost:9200/documents/_settings' -d '{
  "index" : {
    "translog" : {
      "disable_flush" : true
    }
  }
}'
Thanks
Vineeth
On Mon, Feb 24, 2014 at 1:42 AM, vineeth mohan wrote:
Hello Joerg,
Your config doesn't seem to work.
I gave the following parameter, and while I was doing some inserts there
was no unusual behavior. The head plugin showed the total number of
documents I had inserted, and it was searchable.
index.translog.disable_flush : true
ES version - 0.90.9
Is there
Hello Joerg,
I was still wondering how well this will handle cases where I have, say, 10
million documents to insert into the translog and I ask ES to index them all
in a single flush.
Is a heap dump likely to happen?
Thanks
Vineeth
On Mon, Feb 24, 2014 at 1:08 AM, vineeth mohan wrote:
Hello Joerg,
So if I disable it, ES won't write the feeds to Lucene until I make a
manual flush...
I believe the translog is written to a file and is not resident in memory.
This also means that translogs are maintained between restarts and we will
never lose data.
If all the above are right,
Oops, the correct parameter is index.translog.disable_flush : true
index.gateway.local.flush: -1 controls the gateway.
Jörg
On Sun, Feb 23, 2014 at 8:21 PM, joergpra...@gmail.com <
joergpra...@gmail.com> wrote:
Yes, it is possible to disable the translog sync (the component where the
operations are passed from ES to Lucene) with index.gateway.local.flush: -1
and use the flush action for "manual commit" instead.
I have never done that practically, though.
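The "manual commit" mentioned above would be an explicit call to the flush
action; a sketch, assuming an index named documents (like Jörg, I have not
verified this in practice against 0.90.x):

```shell
# Flush one index: persists pending operations to Lucene
# and clears the translog. "documents" is an example index name.
curl -XPOST 'localhost:9200/documents/_flush'

# Or flush all indices at once:
curl -XPOST 'localhost:9200/_flush'
```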
Jörg
On Sun, Feb 23, 2014 at 5:42 PM, vineeth
Hello Michael - Thanks for the configuration.
Hello Jörg - I was thinking more along the lines of the translog -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-translog.html
I believe the index operation is first written to the translog (which I am
not sure is a part of
Also, if there are no other clients wanting a faster refresh, you can
set index.refresh_interval to a higher value than the 1s default either in
general for your index or just during the times when you're doing your bulk
updates.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current
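As a sketch of the suggestion above (refresh_interval is a dynamic setting,
so it can go through the settings API; the index name documents and the 30s
value are just examples):

```shell
# Raise the refresh interval before the bulk load so ES refreshes
# less often while indexing.
curl -XPUT 'localhost:9200/documents/_settings' -d '{
  "index" : { "refresh_interval" : "30s" }
}'

# ... run the bulk inserts ...

# Restore the 1s default afterwards.
curl -XPUT 'localhost:9200/documents/_settings' -d '{
  "index" : { "refresh_interval" : "1s" }
}'
```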
The best method to achieve this would be to implement it in front of ES, so
the bulk indexing client runs only at the times it should run.
For the gathering plugin which I am working on, I plan to separate the two
phases of gathering documents and indexing documents. So, by giving a
scheduling option,
Hi,
I am doing a lot of bulk inserts into Elasticsearch and at the same time
doing lots of reads on another index.
Because of the bulk inserts, my searches on the other index are slow.
It is not very urgent that these bulk inserts actually get indexed and
become immediately searchable.
Is there any way,
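For reference, bulk inserts like the ones described here typically go
through the _bulk endpoint, which takes newline-delimited JSON; a sketch
with example index, type, and document values:

```shell
# Each operation is an action line followed by a source line;
# the request body must end with a newline.
curl -XPOST 'localhost:9200/_bulk' --data-binary '{ "index" : { "_index" : "documents", "_type" : "doc", "_id" : "1" } }
{ "title" : "first document" }
{ "index" : { "_index" : "documents", "_type" : "doc", "_id" : "2" } }
{ "title" : "second document" }
'
```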