Trying to download latest elasticsearch-js, and download link does not work:
http://www.elasticsearch.org/guide/en/elasticsearch/client/javascript-api/current/browser-builds.html#_download
--
You received this message because you are subscribed to the Google Groups elasticsearch group.
So it seems that we have somehow got some invalid events into our index.
The effect is that two shards are always in a state of initializing, never
completing.
In the Elasticsearch logs I see entries like this:
[2015-01-22 12:09:01,124][WARN ][index.engine.internal] [es113-es1]
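A quick way to see which shards are stuck - a sketch, assuming curl on one of the nodes:

```
# List all shards and filter out the healthy ones; anything left is
# INITIALIZING, RELOCATING or UNASSIGNED.
curl -s 'http://localhost:9200/_cat/shards?v' | grep -v STARTED

# Shard-level health shows which indices the stuck shards belong to.
curl -s 'http://localhost:9200/_cluster/health?level=shards&pretty'
```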
My current setup is with 10 nodes with ample space on spinning disks, and
20 nodes with smaller SSD disks.
I would like my workflow to be that all data is initially indexed on the
SSD nodes, after 10 days is reallocated to the spinning disks, after a
further 10 days the index is closed, and
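One way to get the SSD-to-spinning part of that workflow is shard allocation filtering on a custom node attribute - a sketch, assuming an attribute named `disk_type` (the attribute name is arbitrary, and the index name below is just an example):

```yaml
# elasticsearch.yml on the SSD nodes:
node.disk_type: ssd

# elasticsearch.yml on the spinning-disk nodes:
node.disk_type: spinning
```

New indices would get `index.routing.allocation.require.disk_type: ssd` (e.g. via an index template), and a nightly job could flip 10-day-old indices over:

```
curl -XPUT 'http://localhost:9200/logstash-2015.01.12/_settings' -d '{
  "index.routing.allocation.require.disk_type": "spinning"
}'
```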
I have a machine with 132GB of memory and 32 cores on which I am running
two elasticsearch nodes. Each node should have only half the total number
of CPU cores available so that both nodes can work at full capacity and not
block each other.
I believe the correct configuration option would be:

    queue_size: 200

...
It seems this cannot be set using the config API
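For the cores themselves, the setting I'd look at is `processors` - Elasticsearch sizes its thread pools from the detected core count, so each node would otherwise assume it owns all 32. A sketch of the relevant elasticsearch.yml lines, assuming it's the search pool being tuned (index/bulk pools take a queue_size as well):

```yaml
# Tell each node it owns only half the machine's 32 cores,
# so thread pools are sized for 16 instead of 32.
processors: 16

# Bound the search queue.
threadpool.search.queue_size: 200
```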
Cheers,
-Robin-
On Tuesday, 16 December 2014 12:04:03 UTC+1, Robin Clarke wrote:
I have a machine with 132GB of memory and 32 cores on which I am running
two elasticsearch nodes. Each node should have only half the total number
You might also post the Dockerfile you've built (I assume you must, to
incorporate your customizations) which would provide a clearer picture as
well as a sample run command complete with the switches you're using.
Tony
On Tuesday, November 18, 2014 5:38:48 AM UTC-8, Robin Clarke
I am trying to do a setup like this:

machine1 - 192.168.0.10
  host port 9300 -> container port 9300: Docker container elasticsearch1, internal vlan 172.17.0.68
    Elasticsearch transport running on port 9300
    network.publish_host: 192.168.10:9300
  host port 9301 -> container port 9300: Docker container elasticsearch2, internal vlan 172.17.0.69
We are putting together a new cluster of 10 machines. Each machine has
132GB memory, so we want to put two Elasticsearch nodes (each with 30GB) on
each machine, each node in its own docker container - that should result in
20 nodes on 10 machines.
We have got docker to allow the two nodes on
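A sketch of how I'd publish the two transports, assuming the stock image plus a mounted per-node config directory (container name and host paths are made up):

```
# Node 1: host 9300 -> container 9300, host 9200 -> container 9200
docker run -d --name elasticsearch1 -p 9300:9300 -p 9200:9200 \
  -v /etc/es/node1:/usr/share/elasticsearch/config elasticsearch

# Node 2: host 9301 -> container 9300, host 9201 -> container 9200
docker run -d --name elasticsearch2 -p 9301:9300 -p 9201:9200 \
  -v /etc/es/node2:/usr/share/elasticsearch/config elasticsearch
```

Each node's elasticsearch.yml would then set network.publish_host to the host machine's address, so other machines connect via the mapped host port rather than the 172.17.x container address.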
I'm still having this problem... has anybody got an idea what the cause /
solution might be?
Thank you! :)
On Tuesday, 7 October 2014 14:29:22 UTC+2, Robin Clarke wrote:
I'm getting a lot of these errors in my Elasticsearch logs, and am also
experiencing a lot of slowness on the cluster...
New used memory 7670582710 [7.1gb] from field [machineName.raw] would be larger than configured breaker: 7666532352 [7.1gb], breaking
...
New used memory 7674188379 [7.1gb]
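Two things that have helped me with this class of error - a sketch, assuming ES 1.x defaults (the breaker is tripping before fielddata for `machineName.raw` can load):

```
# Cap the fielddata cache so old entries get evicted instead of growing
# until the breaker trips (static setting, goes in elasticsearch.yml):
#   indices.fielddata.cache.size: 40%

# Or adjust the breaker itself at runtime:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": { "indices.breaker.fielddata.limit": "60%" }
}'

# Check which fields are actually holding fielddata, per node:
curl -s 'http://localhost:9200/_cat/fielddata?v'
```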
On Saturday, May 24, 2014 5:05:55 AM UTC-4, Robin Clarke wrote:
And found this error too in one of the nodes which left the cluster:
java.lang.NullPointerException
    at org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.clusterChanged(LocalGatewayMetaState.java:185)
A telltale sign that things are about to go south is when the old GC count starts to rise and old GC duration increases.
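Those old-generation GC numbers are exposed per node, so they can be polled - a sketch:

```
# Old-generation collection count and total time per node; a rising
# "old" count/time is the early warning described above.
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty' | grep -A3 '"old"'
```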
On Wednesday, June 25, 2014 9:54:27 AM UTC-4, Robin Clarke wrote:
I have a 10 machine cluster where frequently (about once per day when
indexing and querying is at its height) one elasticsearch node goes OOM...
It usually recovers, but by this time the cluster is redistributing the
lost shards, which causes more load, which often in turn causes an OOM on
Is there any way to configure Elasticsearch to output its logs in JSON
(custom log format, or configuration option)? This would make it much
easier to import the logs via logstash...
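I don't believe the stock log4j setup can emit JSON, but the multiline logs can still be shipped cleanly - a sketch of a logstash input, assuming the default log location:

```
input {
  file {
    path => "/var/log/elasticsearch/*.log"
    codec => multiline {
      # Lines not starting with "[" (stack traces etc.) belong to the
      # previous "[timestamp]..." event.
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
```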
Cheers,
-Robin-
I am trying to delete a month of logstash indexes, and fire off e.g. this
command:
curl -XDELETE 'http://localhost:9200/logstash-2014.05*?pretty'

(the URL needs quoting, or the shell will try to glob the * and ?) which returns within a few seconds (less than a minute - the default timeout afaik) with:

{
  "acknowledged" : true
}
But when I look at the indexes,
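acknowledged: true only means the master accepted the request - a sketch of how to check what actually remains, assuming the same index pattern:

```
# List any logstash-2014.05 indices still present; re-run until empty.
curl -s 'http://localhost:9200/_cat/indices/logstash-2014.05*?v'
```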
I have a 10 machine cluster (named es101-es110) with 32GB RAM per machine.
I've allocated 12GB per machine to Elasticsearch. Memory usage on the machines looks OK, and CPU and iowait are not dramatic; nonetheless the cluster is frequently becoming unstable and losing nodes...
In the logs I am
And found this error too in one of the nodes which left the cluster:
java.lang.NullPointerException
    at org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.clusterChanged(LocalGatewayMetaState.java:185)
    at
I am writing a small script to create a snapshot of my kibana-int index,
and hit an odd race condition.
I delete the old snapshot if it exists:
curl -XDELETE 'http://localhost:9200/_snapshot/backup/snapshot_kibana?pretty'
Then make the new snapshot
curl -XPUT
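One way around the race is to make the create call block until the snapshot has finished, and to tolerate a missing old snapshot on delete - a sketch:

```
# Delete the old snapshot if present (a 404 here just means none existed).
curl -XDELETE 'http://localhost:9200/_snapshot/backup/snapshot_kibana'

# Create the new one and wait for it to complete before the script moves on.
curl -XPUT 'http://localhost:9200/_snapshot/backup/snapshot_kibana?wait_for_completion=true'
```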
I just pasted output from uname there - that's kernel 3.2.54-2 you're
reading there.
I think the Debian release is Wheezy (7.0)
-Robin-
On 26 March 2014 12:17, Steinar Bang s...@dod.no wrote:
Robin Clarke ro...@robinclarke.net:
Thanks for the tip with the number of masters!
java version
In that case, *7.4*
-Robin-
On 26 March 2014 12:43, Steinar Bang s...@dod.no wrote:
Robin Clarke ro...@robinclarke.net:
I just pasted output from uname there - that's kernel 3.2.54-2 you're
reading there.
I think the Debian release is Wheezy (7.0)
OK. The OS version can be found
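(For reference, on Debian the release number lives in a flat file - a sketch:)

```
cat /etc/debian_version
```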
On 25 March 2014 16:11, Robin Clarke robi...@gmail.com wrote:
I did some intensive tests last week on a 20-node cluster and had the
following insights - I'd be interested if anyone has similar/dissimilar
experience.
The 20 nodes had 8 cores each
quorum.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 25 March 2014 17:35, Robin Clarke ro...@robinclarke.net wrote:
Each node had 8 cores (2.4GHz Xeon), 32GB RAM, SSD disks (I never saw
IOWait
I did some intensive tests last week on a 20-node cluster and had the
following insights - I'd be interested if anyone has similar/dissimilar
experience.
The 20 nodes had 8 cores each, and 32GB memory each. I set up Elasticsearch to use 15GB of that memory.
The sample events I was using