I have been collecting stats using _cat/allocation and I noticed that a new
cluster showed a lot of disk being used for a few shards.
When I checked, I found that the figures from _cat/allocation were very different from those reported by df.
For one node _cat/allocation gives:
12 51.2gb 956.5gb 1007.8gb 5 esekilx5170
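For anyone wanting to reproduce the comparison, here is a hedged sketch (host, port and data path are assumptions for a default 1.x install; adjust for your setup):

```shell
# What ES believes each node is using (?v prints column headers)
curl -s 'localhost:9200/_cat/allocation?v'

# What the OS reports for the same filesystem
df -h /var/lib/elasticsearch

# Per-shard usage on disk, to see where the space actually went
du -sh /var/lib/elasticsearch/*/nodes/0/indices/*/*
```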
moving replicas, ES actually makes a new copy of the primary, which protects against exactly these kinds of situations:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html#cluster-reroute
Cheers,
Boaz
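For reference, a hedged sketch of the reroute "move" command from that page (index and node names are placeholders):

```shell
# Explicitly move shard 0 of "myindex" between two nodes.
# ES builds a fresh copy on the target before removing the source copy.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    { "move": {
        "index": "myindex", "shard": 0,
        "from_node": "node1", "to_node": "node2"
    } }
  ]
}'
```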
On Tuesday, June 10, 2014 9:23:56 AM UTC+2, Michael
I had a problem with corrupted shards so I restarted my cluster with
index.shard.check_on_startup: fix and the corrupted shards were fixed
(i.e. deleted). Unfortunately the replicas and primaries then had differing
numbers of documents despite them all being green. Fortunately the
primaries
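For anyone wanting to try the same thing, a sketch of how the startup check was enabled, assuming a default 1.x config path (in 1.x the documented values include false, checksum, true and fix; "fix" drops corrupted segments, which is why documents disappeared):

```shell
# Enable shard checking on startup; the setting is read when shards recover,
# so the node must be restarted afterwards.
echo 'index.shard.check_on_startup: fix' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
```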
Recently I tried cluster.routing.allocation.exclude._ip and it worked once
I had actually set the IP address; I had presumed that ES would look up the
address for me. I also tried setting the parameter to two names in an array
and, while this was accepted, it didn't seem to do anything.
In the end
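On the two-names point: the allocation filters take a comma-separated string rather than a JSON array, which would explain why the array was accepted but ignored. A sketch (addresses are placeholders):

```shell
# Exclude two nodes by IP with one comma-separated value
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2"
  }
}'
```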
Our cluster has been running for about 6 months now and we've collected a
few settings in /_cluster/settings, even the transients aren't transient as
I hardly ever take the entire cluster down. I vaguely remember reading that
entries could be deleted now but I can't find the article again. Any
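In more recent ES versions a setting can be reset by writing null to it; I am not certain this works on the 1.x series, so treat this as a hedged sketch (setting name is just an example):

```shell
# Reset (remove) a persistent cluster setting by assigning null to it
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": null
  }
}'
```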
checkIndex. This works as expected.
On Wednesday, 9 April 2014 11:41:24 UTC+2, Michael Salmon wrote:
I recently had a problem with an index and after searching the net I
decided to give checkIndex a try. I found the class in the right jar but I
haven't been able to get it to check an index
Every time I see mb (milli-bit) instead of MB (mega-byte) in a printout I
wonder why. Is there any particular reason for using the abbreviations that
are used? Most of the time it isn't a problem, as milli-bits aren't all that
common, but mb is a possible value for data-transfer rates.
/Michael
The guide says that indices.store.throttle.type can be "merge", "all" or
"not", but I think that the Lucene code says it is "all", "merge" or "none".
Does anyone know which is correct?
/Michael
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
Mark Walkom wrote:
Where exactly are you seeing this?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 29 April 2014 17:43, Michael Salmon michael...@inovia.nu
wrote:
Every time I see mb
I answered my own query by trying to set it to not:
org.elasticsearch.ElasticsearchIllegalArgumentException: rate limiting type
[not] not valid, can be one of [all|merge|none]
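So the error message settles it: the valid values are "all", "merge" and "none". For completeness, a sketch of setting one of them dynamically:

```shell
# Disable store-level merge throttling cluster-wide
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "indices.store.throttle.type": "none" }
}'
```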
On Tuesday, 29 April 2014 10:25:22 UTC+2, Michael Salmon wrote:
The guide says that indices.store.throttle.type can
I would suggest that you install something like bigdesk or marvel to check
your usage, in particular heap, threads and file descriptors.
Every shard is a Lucene index, so the more shards you have the more
searches you can run in parallel, but each shard also needs memory and file
descriptors
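If you want the raw numbers without installing a plugin, the stats APIs expose what bigdesk and marvel chart; a hedged sketch for a local 1.x node:

```shell
# Heap used/max per node
curl -s 'localhost:9200/_nodes/stats/jvm?pretty'
# Open file descriptors per node
curl -s 'localhost:9200/_nodes/stats/process?pretty'
# Thread pool activity (active/queued/rejected)
curl -s 'localhost:9200/_cat/thread_pool?v'
```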
On Friday, 25 April 2014 18:09:28 UTC+2, Jilles van Gurp wrote:
I've been using the elasticsearch rpms (1.1.1) on our centos 6.5 setup and
I've been wondering about the recommended way to configure it given that it
deploys an init.d script with defaults.
I figured out that I can use
I have started getting some timeouts during replication and I am unsure of
how to proceed. The index is about 500 million documents or 45GB spread
over 8 shards and created by a jdbc river. The timeout is occurring
during index/shard/recovery/prepareTranslog. It seems that the limit of 15
if this is related to tight resources.
In the next JDBC river version there will be more convenient control of
bulk index settings (automatically setting the replica level to 0, disabling
refresh, and re-enabling refresh and replicas afterwards).
Jörg
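Until that version lands, the same settings can be applied by hand around the bulk load; a sketch (index name is a placeholder, and the restored values are assumptions about your normal settings):

```shell
# Relax settings for the duration of the bulk load
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index": { "number_of_replicas": 0, "refresh_interval": "-1" }
}'
# ... run the bulk load / river here ...
# Restore replicas and refresh afterwards
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index": { "number_of_replicas": 1, "refresh_interval": "1s" }
}'
```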
On Mon, Apr 28, 2014 at 12:22 PM, Michael Salmon wrote:
Basically you can't do this with no stop at all. ES holds indexes open so that
it can access the data quickly. You could close the index, copy the data
and then open the index again, but it is probably best to stop ES, move the
data, change the data path and then start ES again. You can have multiple
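A sketch of the stop/move/start approach (service name and paths are assumptions for a package-based install):

```shell
# Stop the node so nothing holds the index files open
sudo service elasticsearch stop
# Copy the data to the new location, preserving permissions
sudo rsync -a /var/lib/elasticsearch/ /data/elasticsearch/
# Point path.data at /data/elasticsearch in /etc/elasticsearch/elasticsearch.yml,
# then bring the node back up
sudo service elasticsearch start
```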
I'm planning on trying out multiple nodes on one host and I'd like to be able
to control the node id, but as far as I can see this is set in NodeEnvironment
to the first unused value. The reason for setting the id is that I would
like to include it in the node name, which I currently set to
I recently had a problem with an index and after searching the net I decided to
give checkIndex a try. I found the class in the right jar but I haven't been
able to get it to check an index. For example when I run
checkIndex -verbose ...heat-analyzer/7/index
I get:
ERROR: could not read any
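For what it's worth, CheckIndex is usually run straight from the JVM with the Lucene core jar on the classpath; a hedged sketch (jar version and paths are assumptions for an ES 1.x install, which shipped Lucene 4.7.x):

```shell
# Run Lucene's index checker against one shard's index directory.
# Add -fix only if you accept losing corrupted segments.
java -cp /usr/share/elasticsearch/lib/lucene-core-4.7.2.jar \
  org.apache.lucene.index.CheckIndex \
  /var/lib/elasticsearch/mycluster/nodes/0/indices/myindex/0/index \
  -verbose
```

"could not read any" errors often mean the path points at the shard directory rather than the index subdirectory inside it, or that the files aren't readable by the invoking user.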