I also see exploding segment counts in a quick local test on 1.1.0: thousands
of segments are created, and I'm not sure whether this behavior is expected.
But it does not look like it should happen, and it slows down bulk indexing
significantly.
Issuing an optimize just stalls...
curl
Hello,
We are experiencing a related problem with 1.1.0. Segments do not seem to
merge as they should during indexing, and the optimize API does practically
nothing to lower the segment count either. The problem persists through a
cluster restart. The vast amount of segments seem to
For reference, I'm also on 1.1.0, but I'm not seeing more segments than I
expect. I see an average of ~28 per shard on an index I write to
constantly. I don't write all that quickly, about 50 updates a second.
Nik
On Fri, Apr 11, 2014 at 5:46 AM, Adrien Grand
adrien.gr...@elasticsearch.com wrote:
Adrien,
Just an FYI, after resetting the cluster, things seem to have improved.
Optimize calls now lead to CPU/IO activity over their duration.
max_num_segments=1 does not seem to work for me on any given call, as
each call only reduces the segment count by about 600-700. I ran
Thanks for reporting this, the behavior is definitely unexpected. I'll test
_optimize on very large numbers of shards to see if I can reproduce the
issue.
On Thu, Apr 10, 2014 at 2:10 PM, Elliott Bradshaw ebradsh...@gmail.com wrote:
Any other thoughts on this? Would 1500 segments per shard be significantly
impacting performance? Have you guys noticed this behavior elsewhere?
Thanks.
On Monday, April 7, 2014 8:56:38 AM UTC-4, Elliott Bradshaw wrote:
Hi Elliott,
1500 segments per shard is certainly way too much, and it is not normal
that optimize doesn't manage to reduce the number of segments.
- Is there anything suspicious in the logs?
- Have you customized the merge policy or scheduler?[1]
- Does the issue still reproduce if you restart
Hi Adrien,
I did customize my merge policy, although I did so only because I was so
surprised by the number of segments left over after the load. I'm pretty
sure the optimize problem was happening before I made this change, but
either way here are my settings:
index : {
  merge : {
    policy : {
Hi Adrien,
I kept the logs up over the last optimize call, and I did see an
exception. I Ctrl-C'd a curl optimize call before making another one, but
I don't think that caused this exception. The error is essentially as
follows:
netty - Caught exception while handling client http
The exception is just a side effect of pressing Ctrl-C: the response could
not be transmitted back. It does not point to the problem.
You should use
http://localhost:9200/index/_optimize?max_num_segments=1
instead of
http://localhost:9200/index/_optimize
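For reference, a complete call would look roughly like the sketch below (assuming an index literally named `index` on localhost, as in the URLs above):

```shell
# Sketch, not verbatim from this thread: force-merge down to one segment
# per shard. On 1.x, _optimize blocks until the merge finishes unless
# wait_for_merge=false is passed.
curl -XPOST 'http://localhost:9200/index/_optimize?max_num_segments=1'
```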
Jörg
Thanks Jörg. That makes sense. I am actually using max_num_segments=1,
just forgot to add it...
On Wed, Apr 9, 2014 at 11:20 AM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
Adrien,
I ran the following command:
curl -XPUT http://localhost:9200/_settings -d
'{ "indices.store.throttle.max_bytes_per_sec" : "10gb" }'
and received a { "acknowledged" : true } response. The logs showed
cluster state updated.
I did have to close my index prior to changing the setting and reopen
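As an aside (my assumption, not something confirmed in this thread): on 1.x this throttle is also exposed as a dynamic cluster-wide setting, so a sketch like the following should apply it transiently without closing any index:

```shell
# Hypothetical sketch: apply the store/merge throttle via the cluster
# settings API. "transient" values last until the next full cluster restart.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "indices.store.throttle.max_bytes_per_sec" : "10gb"
  }
}'
```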
Any thoughts on this? I've run optimize several more times, and the number
of segments falls each time, but I'm still over 1000 segments per shard.
Has anyone else run into something similar?
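One way to verify the per-shard segment counts directly is the segments API; a rough sketch (again assuming an index named `index`, which is my placeholder):

```shell
# Sketch: _segments lists every segment of every shard. Each segment
# entry carries a "num_docs" field, so counting those occurrences gives
# a rough total segment count for the index.
curl -s 'http://localhost:9200/index/_segments?pretty' | grep -c '"num_docs"'
```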
On Thursday, April 3, 2014 11:21:29 AM UTC-4, Elliott Bradshaw wrote:
Elasticsearch throttles merges by default so that they don't slow search
down too much. This is usually preferable for read/write loads, but in
your case it looks like you batch-indexed a lot of documents at once and
merges couldn't keep up with the indexing rate, so you ended up with a very
high
Have you tried max_num_segments=1 on your optimize?
On Fri, Apr 4, 2014 at 11:27 AM, Elliott Bradshaw ebradsh...@gmail.com wrote:
Yes. I have run max_num_segments=1 every time.
On Fri, Apr 4, 2014 at 12:26 PM, Michael Sick
michael.s...@serenesoftware.com wrote:
Have you tried max_num_segments=1 on your optimize?
Did you see a message in the logs confirming that the setting has been
updated? It would be interesting to see the output of hot threads[1] to see
what your node is doing.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html
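For completeness, the hot threads endpoint referenced above can be queried directly; a minimal sketch:

```shell
# Sketch: dump the hottest threads on every node. Useful here to see
# whether merge threads are actually doing work during a stalled _optimize.
curl 'http://localhost:9200/_nodes/hot_threads'
```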
On Fri, Apr 4,
Hi All,
I've recently upgraded to Elasticsearch 1.1.0. I've got a 4-node cluster,
each node with 64GB of RAM, 24GB of which is allocated to Elasticsearch. I've
batch-loaded approximately 86 million documents into a single index (4
shards) and have started benchmarking cross_field/multi_match
OK. Optimize finally returned, so I suppose something was happening in the
background, but I'm still seeing over 6500 segments, even after setting
max_num_segments=5. Does this seem right? Queries are a little faster
(350-400ms) but still not great. Bigdesk is still showing a fair amount