Bouncing? Does it allocate or just sit in an allocating state?
On 20 January 2015 at 00:06, daaku gee daa...@gmail.com wrote:
I am running 3 node Elasticsearch-1.3 cluster. One of the primary shards
for an index that has no more documents being indexed to it, keeps bouncing
from node to node.
Did you upgrade Java as well as ES? What version are you on?
On 20 January 2015 at 04:55, Eike Dehling e...@buzzcapture.com wrote:
Hi List,
we have recently upgraded our ES cluster (12 nodes, ~900GB data) from
0.90.9 to 1.4.2. We did a restart-upgrade, so backup data and then start
the new
? If so,
how many, and how powerful?
Depends, sounds like you need a few client nodes if you are OOMing your
masters (which, is a bad thing to happen to masters).
On 18 January 2015 at 10:23, Justin Zhu haoranj...@gmail.com wrote:
We have a 9 node cluster, 3 masters, 6 data. We've been using the java
transport client, which
You've got too many replicas and shards. One shard per node (maybe 2) and
one replica is enough.
You should be using the bulk API as well.
What's your heap set to?
Also consider combining customers into one index, it'll reduce the work you
need to do.
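For reference, a minimal bulk request in 1.x looks roughly like this (the index, type, and field names here are made up):

```sh
# requests.json: one action-metadata line, then the document source, newline-terminated
cat > requests.json <<'EOF'
{"index":{"_index":"customers","_type":"order","_id":"1"}}
{"customer":"acme","total":42}
{"index":{"_index":"customers","_type":"order","_id":"2"}}
{"customer":"globex","total":7}
EOF
curl -s -XPOST 'localhost:9200/_bulk' --data-binary @requests.json
```

This is a sketch against a local node; batching a few thousand documents per request is usually a good starting point.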
On 17/01/2015 4:07 am, Nawaaz Soogund
Can you put your complete call into a gist or similar for us to check?
On 17 January 2015 at 05:34, Gabriele Angeli g.angeli...@gmail.com wrote:
Hi guys, I try to put a new template in ES 1.3.6 but I always obtain the
same result: {error:ActionRequestValidationException[Validation Failed:
1:
that could cache settings for future
clusters if I tear them down and rebuild them.
Thanks
Albion
Unfortunately there isn't.
Feel free to raise an enhancement request on github though, as it could be
useful for others :)
On 17 January 2015 at 03:52, Darren McDaniel gizm...@gmail.com wrote:
Short of restarting the node.. Is there any thought of giving a user the
ability to reset the node
The data is there, it's just closed, and there are no actions taken on
closed indexes.
You need to reopen them -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/indices-open-close.html#indices-open-close
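Per that doc, reopening is a single call (index name is a placeholder):

```sh
# Reopen a closed index so it can serve reads/writes again
curl -XPOST 'localhost:9200/my_index/_open'
```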
On 17 January 2015 at 02:35, Russell Butturini tcst...@gmail.com wrote:
Hi Traci,
This is a community based technical list. We'd greatly appreciate it if you
didn't post job ads.
On 16 January 2015 at 03:38, Traci Martin traci@gmail.com wrote:
Hello All!
I am a recruiter in Austin, TX trying to fill a Director of Data
Engineering for my client, also in
This is a known issue, see
https://github.com/elasticsearch/elasticsearch/issues/6732
On 15 January 2015 at 22:01, Gary Gao garygaow...@gmail.com wrote:
why this didn't work on my es :
GET /_cluster/settings
{
  "persistent": {
    "discovery": {
      "zen": {
You could use snapshot and restore, or even Logstash.
On 15 January 2015 at 10:07, Todd Nine tn...@apigee.com wrote:
Hi all,
We have a deployment scenario I can't seem to find any examples of, and
any help would be greatly appreciated. We're running ElasticSearch in 3
AWS regions. We
FYI once you get to 32GB heap you lose some efficiency, try to keep heap
under 32GB, so 31GB or less.
Are you using the bulk API?
On 15 January 2015 at 10:03, Bhumir Jhaveri bhumi...@gmail.com wrote:
I have just migrated to ES 1.4.2 - I have 5 data nodes and 1 master node -
all these ES
You really should have an odd number of master-eligible nodes; it makes
quorum easier to achieve and helps prevent split brain.
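A sketch of the quorum arithmetic behind that advice: discovery.zen.minimum_master_nodes should be floor(N/2) + 1 for N master-eligible nodes.

```python
def quorum(master_eligible: int) -> int:
    """Majority needed to elect a master: floor(N/2) + 1."""
    return master_eligible // 2 + 1

# With 2 masters, quorum is 2: losing either node halts elections.
# With 3 masters, quorum is still 2: you can lose one node and keep going.
print(quorum(2), quorum(3))  # → 2 2
```

This is why an even master count buys no extra fault tolerance over the next-lower odd count.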
On 15 January 2015 at 01:16, Marek Dabrowski marek.dabrow...@gmail.com
wrote:
Hello
My configuration is 2 nodes with the master role and 6 for data. I changed the path
in the config
1 - It's not worth worrying about; you cannot replicate from an older
version to a newer one when going between minor versions (e.g. 1.3 → 1.4).
2 - If an upgraded node rejoins the cluster, then it's good to go.
On 15 January 2015 at 01:31, Max Charas max.cha...@gmail.com wrote:
Thanks for the
It doesn't cache this sort of info, it'll read what is in the config file.
Are you using puppet/chef/other for config management perchance? These
could be over writing your config.
On 15 January 2015 at 06:22, Albion Baucom albi...@gene.com wrote:
I am new to ELK and I am still using a dev
There is nothing in ES that can do this, because it's essentially invisible
data loss, which is bad :)
On 15 January 2015 at 05:15, Eric Fontana e...@fontanas.net wrote:
Someone's redis queue was really backed up, and was trying to send (using
logstash elasticsearch_http plugin) messages
to a
You could do this, but it's a lot of manual overhead to have to deal with.
However ES does have some disk space awareness during allocation, take a
look at
http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/index-modules-allocation.html#disk
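The disk-awareness settings from that page can be applied dynamically; a sketch (the thresholds are illustrative, not recommendations):

```sh
# Enable disk-based allocation decisions with low/high watermarks
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'
```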
On 15 January 2015 at 10:57, Matías
This might be better raised as an issue on github as one of the devs can
comment directly on the code you're interested in.
On 14 January 2015 at 23:10, Meidan meidan.a...@gmail.com wrote:
Hi,
We're in the process of upgrading from 0.90.5 to 1.4.1 and we see a
significant performance
cluster.routing.allocation.same_shard.host: true but nothing
related to rack awareness. That is something we will look into.
On Tue, Jan 13, 2015 at 2:12 PM, Mark Walkom markwal...@gmail.com wrote:
How many nodes did/do you have? What do your logs show?
You should look at using
http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/modules-cluster.html#allocation-awareness
if you are running multiple nodes per physical machine.
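A minimal sketch of that awareness config in elasticsearch.yml; the attribute name (host_id here) is arbitrary, pick one value per physical machine:

```yaml
# On each node, tag it with the physical machine it runs on:
node.host_id: machine-01

# Tell the allocator to spread shard copies across that attribute,
# so a primary and its replica never share a physical host:
cluster.routing.allocation.awareness.attributes: host_id
```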
On 14 January 2015 at 10:22, Darsh
I'd guess that'd be virtual memory, that the OS handles.
On 14 January 2015 at 10:14, Itai Frenkel itaifren...@live.com wrote:
I would like to reduce the amount of non-heap memory used by a client
node. I would like to reclaim as much as I can from these 280MB, what is it
used for?
On
Yes but once you move a shard to a newer node then (usually) you can't
shift it back. This is due to changes in the underlying lucene segment.
You'll also need to make sure you have the same Java version on all nodes
or you'll see problems.
But yes, snapshot and restore would be a better way to
Honestly, with this sort of scale you should be thinking about support
(disclaimer: I work for Elasticsearch support).
However let's see what we can do;
What version of ES, java?
What are you using to monitor your cluster?
How many GB is that index?
Is it in one massive index?
How many GB in your
It'd help if you could gist/pastebin/etc your nginx and kibana configs.
On 10 January 2015 at 07:57, William Tarrant tarrant.will...@gmail.com
wrote:
Tried a clean install with ES 1.3.7 and Kibana 3.1.2 and still the same
issue. I must be missing something with permissions or nginx, as I
Yes and yes -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/setup-upgrade.html#rolling-upgrades
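The rolling-upgrade procedure in that doc boils down to disabling allocation around each node restart; a sketch for 1.x:

```sh
# Before stopping a node, stop shard reallocation:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {"cluster.routing.allocation.enable": "none"}
}'

# ... upgrade and restart the node, wait for it to rejoin ...

# Re-enable allocation and wait for green before moving to the next node:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {"cluster.routing.allocation.enable": "all"}
}'
```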
On 8 January 2015 at 09:50, Ankit Jain an...@quettra.com wrote:
Quick question: Does ES support clusters with mixed versions (to do a
rolling restart during an upgrade)? Is
even allocate a primary (until I
reduce the number of replicas)
On Wednesday, January 7, 2015 4:58:58 PM UTC+13, Mark Walkom wrote:
It's not recommended to run an Elasticsearch cluster across
geographically dispersed locations.
You cannot assign both primaries and replicas to a single node
This would be better asked on
https://groups.google.com/forum/?hl=en-GB#!forum/logstash-users :)
On 8 January 2015 at 05:57, k...@fuelpowered.com wrote:
I am wondering where I can find more information regarding verbosity.
http://logstash.net/docs/1.4.2/flags
I see there are two options:
Yes it auto distributes existing, and new, shards.
On 8 January 2015 at 05:55, Bhumir Jhaveri bhumi...@gmail.com wrote:
Also one more question - let's say initially I have one node architecture -
i.e. everything on one single node and additional mount is having all the
ES data (index, documents
No it's not possible.
On 6 January 2015 at 18:45, phani.nadimi...@goktree.com wrote:
Hi All,
Can we maintain common data repository (data folder) for all the data
nodes in a cluster?
can we maintain common data folder for dedicated data nodes ? will
this be possible (common
The best way is to add more nodes.
There isn't much you can do with that amount of data!
On 7 January 2015 at 06:09, David Mavashev crypti...@gmail.com wrote:
Hi,
I have a cluster of 20 nodes, 1 TB/day of data indexed, right now we only
keep the last 3 days opened but the customer wants us
Nice spot, I've raised an issue for it
https://github.com/elasticsearch/elasticsearch/issues/9170
On 7 January 2015 at 02:24, ajay.bh...@gmail.com wrote:
As per the documentation cat api will display output similar to
curl 192.168.56.10:9200/_cat/nodes
SP4H 4727 192.168.56.30 9300 1.4.2
Are both running the same ES and java versions?
Can you telnet between the data and master nodes on 9300?
On 7 January 2015 at 09:56, sh...@gethashed.com wrote:
Howdy - I cannot get two ec2 servers to connect to one another as a
cluster. The servers are successfully discovering themselves
It's not recommended to run an Elasticsearch cluster across geographically
dispersed locations.
You cannot assign both primaries and replicas to a single node, it defeats
the purpose! So it's by design.
On 7 January 2015 at 14:08, Mathew D mathew.degerh...@gmail.com wrote:
Hi all,
I've
index level settings will override cluster level ones.
On 6 January 2015 at 15:11, Chris Neal chris.n...@derbysoft.net wrote:
Hi all.
My elasticsearch.yml file has these settings with regards to merging:
index:
codec:
bloom:
load: false
merge:
policy:
It sounds like, because that isn't a local interface that ES is bound to,
it tries to access it. Are you using NAT at a higher layer?
On 6 January 2015 at 01:59, Matt Hughes hughes.m...@gmail.com wrote:
In my VM environment, a VM can't actually see its public IP address. I
have the following
One shard per node is ideal as you spread the load.
Reducing the shard count can help but it depends on a few things.
How much data do you have in your cluster, how many indexes?
On 6 January 2015 at 08:51, mike.giardine...@gmail.com wrote:
Hi All,
We have started noticing in our environment
I've since been informed this is a known issue and a bug has been raised
for it, so a fix is on the way.
Try setting it in transient instead of persistent.
Persistent settings are usually read from the config file only.
On 6 January 2015 at 09:16, rogthefrog roger...@amino.com wrote:
Can this setting be updated dynamically? It doesn't look that way:
$ curl -XPUT localhost:9200/_cluster/settings
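Following the suggestion above, a transient update would look something like this (the setting shown is just an example of a dynamic setting; substitute your own):

```sh
# Transient settings survive until the next full cluster restart
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {"indices.recovery.max_bytes_per_sec": "40mb"}
}'
```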
It won't work, the snapshot is run against any node that has shards of the
index and doesn't funnel data back to the node you ran the command on.
On 6 January 2015 at 02:40, bitsofinf...@gmail.com wrote:
I have a cluster (1.3.2) of 10 data nodes and 5 master nodes.
I want to take a snapshot
Can you elaborate what you mean by becoming an issue?
When you add a node into the cluster it will automatically start to
reallocate shards to the new node, you can't have a node sitting there idle
and with lots of disk space free waiting for the other nodes to fill up
before being called upon.
It'd be great if you could raise this as an issue on github for this
behaviour to be checked - https://github.com/elasticsearch/elasticsearch
On 6 January 2015 at 00:06, Paul Scott p...@duedil.com wrote:
Regarding the behaviour of Sense to automatically choose POST regardless
of the user
You set marvel.agent.exporter.es.hosts
in elasticsearch.yml.
It'd let you put some kind of proxy layer between Marvel and ES but still
allow Marvel to operate.
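A sketch of that setting in elasticsearch.yml (host and port are placeholders):

```yaml
# Point the Marvel agent at a proxy instead of straight at the cluster:
marvel.agent.exporter.es.hosts: ["proxy-host:9200"]
```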
On 5 January 2015 at 21:26, John Bohne johnboh...@gmail.com wrote:
Why would I want to do that? I'm using Apache by the way.
I saw
Depends on your setup.
Increasing shard count is only going to be useful if you add more nodes.
On 5 January 2015 at 16:21, phani.nadimi...@goktree.com wrote:
Hi All,
I have an index with 51 millions records i have 2 nodes in my
cluster.
no of shards for the above index is
There are settings you can change, see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-indices.html#recovery
On 5 January 2015 at 12:50, Salman ahmed.sal...@gmail.com wrote:
On ES 1.4.2 cluster, post cluster restart, 90% of shards are unassigned. Is
there a way to
Because you have to load data from the shard when you get a query, so the
larger the shard the more data you load, and OOM or slower response times
happen.
It also helps recovery and reallocation if they are smaller.
On 5 January 2015 at 13:13, Jinyuan Zhou zhou.jiny...@gmail.com wrote:
I read
and size of the events? If you are using logstash and Redis is the
queue backing up because it can't index?
On Thu, Jan 1, 2015 at 4:49 PM, Mark Walkom markwal...@gmail.com wrote:
How much data do you have on the node? How many indexes? Have you
checked the logs for GC issues?
You can use nice
Good to hear :)
Also, you seemed to type YS instead of ES, not sure if that is a typo or
not!?
On 2 January 2015 at 06:46, Steve Johnson st...@parisgroup.net wrote:
Dude!
Thanks for sticking with me on this one! With your recent comments, I
turned off replication and then tried my tests
You could do this with a script, you'd have to develop that yourself though.
But ultimately you may better off doing this in Logstash, just head over to
https://groups.google.com/forum/?hl=en-GB#!forum/logstash-users and ask for
help on sorting it out :)
On 2 January 2015 at 15:00, Mike
Except the docs there make the assumption that you have multiples of each
node type (ie strong/medium) to spread the data across. That is the key
thing, it's not just one of each!
Unless you remove the replica or add more nodes, you will always have data
on both nodes. However you should probably
If the second node cannot see the first one then you have either a ES or
network configuration problem.
It's hard to tell based on the info you have provided.
On 1 January 2015 at 03:23, Praveen Kumar praveen.pade...@gmail.com wrote:
We are using elastic search cluster with 2 nodes. The index
Take a look at
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html#shards-allocation
On 31 December 2014 at 00:55, Arun arungn...@gmail.com wrote:
Hello,
I have a 3-node ES (v1.2.1) cluster with 1 replica. When there is a
node failure, the cluster
for the response Mark!
However, I am trying to understand how massive index can be a problem if
everytime I know which type to query ? Any explanation or link to some
documentation regarding this ?
On Tuesday, December 30, 2014 3:42:20 AM UTC+5:30, Mark Walkom wrote:
Ideally you want
You're probably at the limits of a single node. You should upgrade to a
later version of ES, as there are always performance improvements, or add more
heap or nodes. The default settings of ES are more than suitable up to tens
of nodes; you shouldn't need to change anything there in the immediate
As you have found so far, you cannot do this.
On 30 December 2014 at 19:44, Ashutosh Parab ashush...@gmail.com wrote:
I want to configure Kibana in such a way that my different panels have
different indexes. For example, histogram panel uses index 'X'' and table
panel uses index 'Y'.
Is
How slow?
Is the load on your system high?
On 31 December 2014 at 05:04, psk...@gmail.com wrote:
I have about 50 GB of data (1 mil docs) in a single node--8 cores with 32
GB (24 GB heap). I just upgraded from 1.2.4 to 1.4.2, and I noticed that a
few commands take a long time to return, and
I just installed ES 1.4.2 from repos on CentOS and it created both user and
group;
[root@vagrant-centos65 ~]# getent passwd | grep elasticsearch
elasticsearch:x:497:497:elasticsearch user:/usr/share/elasticsearch:/sbin/nologin
It also set the directories it needs to write to to the correct
It doesn't matter where the primary or replicas live, they are the same
thing.
If you only want to query the second node then send your queries to it and
use local preference -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-preference.html
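A sketch of such a query, sent to the second node directly (host and index names are placeholders):

```sh
# _local prefers shard copies held by the node that receives the request
curl -XGET 'node2:9200/my_index/_search?preference=_local' -d '{
  "query": {"match_all": {}}
}'
```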
On 29 December
You don't need to close the index to assign a tag, just assign it
dynamically.
But you have replicas, so there will always be shards assigned to both
nodes. You need to drop the replica count to 0 for this to work the way you
are looking for.
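Both changes are dynamic index settings, so no close/reopen is needed; a sketch (index name and tag value are made up):

```sh
# Drop replicas and pin the index to nodes tagged "node1", all while open:
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
  "index.number_of_replicas": 0,
  "index.routing.allocation.require.tag": "node1"
}'
```

This assumes the target node was started with a matching node.tag attribute.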
On 28 December 2014 at 12:40, Steve Johnson
Check your ES logs, you are probably running into GC issues.
How many nodes, how much heap, how much data is on the nodes - both GB and
index count, how many shards, how many replicas, what java version?
On 26 December 2014 at 17:21, chris85l...@googlemail.com wrote:
Hello,
we use a two node
You really need to upgrade, 0.90.X is no longer supported!
On 26 December 2014 at 17:19, Xiaoliang Tian xiaoliang.t...@gmail.com
wrote:
Thanks. And I'm using 0.9.13. Can I enable auto-balancing manually?
2014-12-26 14:17 GMT+08:00 Michael deMan (ES) elasticsea...@deman.com:
Why did you set it to -1? I'd set it back and see if that fixes it.
On 26 December 2014 at 23:47, Piyush Rai piyushra...@gmail.com wrote:
I fired the following command on elasticsearch
PUT /_cluster/settings
{
  "persistent": {
    "threadpool.index.queue_size": -1
:05:22 AM UTC+5:30, Mark Walkom wrote:
That's a pretty big number of shards, why is it so high?
The recommendation there is one shard per node, so you should (ideally)
have closer to 6600 shards.
On 25 December 2014 at 07:07, Pat Wright sqla...@gmail.com wrote:
Mark,
I work on the cluster
You should drop your heap to 31GB, over that and you lose some performance
and actual heap stack due to uncompressed pointers.
it looks like a node, or nodes, dropped out due to GC. How much data, how
many indexes do you have? What ES and java versions?
On 24 December 2014 at 22:29, Abhishek
.
Data: 580GB
Shards: 10K
Indices: 347
ES version: 1.3.2
Not sure the Java version.
Thanks for getting back!
pat
Can you elaborate on your dataset and structure; how many indexes, how many
shards, how big they are etc.
On 24 December 2014 at 07:36, Chris Moore cmo...@perceivant.com wrote:
Updating again:
If we reduce the number of shards per node to below ~350, the system
operates fine. Once we go
1. That is ok, but just make sure you size the heap to account for large
queries (ie aggs) or your master could still OOM (which is bad). You may
find as your cluster grows it'll make sense to split the masters and
clients.
2. Should be ok, the master doesn't need much heap. But you
Your email is a little unclear.
What exactly is the problem?
On 23 December 2014 at 16:47, sandeep kaushal verma
sandeepkaushalve...@gmail.com wrote:
Current server architecture/configuration
1 Master Node (m3.large)
1 Search Node (m3.large)
3 Data Node (c3.xlarge)
20 shard
user has
The recommended max shard size is 40-50GB.
To figure out the best performance point though would require you to test a
number of things, as this is dependent on your setup.
On 22 December 2014 at 14:05, Costya Regev cos...@totango.com wrote:
Hello,
We keep data in monthly indices that grew to
You should reduce your heap to half your system RAM, ie 30GB, so it's not
more than 31GB. Above that your java pointers aren't compressed and you get
less efficient heap use.
What sort of data is it, how many shards in your index, are your queries
heavy (ie lots of aggs), are you using
of aggregations but no parent child
relationships. We currently have 36 shards for the index.
I haven't seen it, but 1.4.2 has only just come out :)
Are you using Marvel at all? It might help shed some light on what is
happening.
Also just to confirm, did you restart all your nodes after the upgrade? Is
there anything in the logs on each node that might be of use?
On 21 December 2014 at
.
—
Thanks
On Sun, Dec 21, 2014 at 3:47 PM, Mark Walkom markwal...@gmail.com wrote:
That's way too many shards, you only really need 1 shard per node,
unless you're expecting to dramatically increase your node count in the
near future.
More info on what data it is would be useful
(build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
We have installed Marvel on the cluster and on the original message I
posted a screen shot from the dashboard.
Thanks
On Sun, Dec 21, 2014 at 4:45 PM, Mark Walkom markwal...@gmail.com wrote:
My original
Firstly, ES is not a time series data store, it can definitely be leveraged
as one but it does a lot, lot more!
1 - If that is referring to using document level TTLs then you may want to
avoid that for 1PB of data, TTLs can be resource intensive and at that
level you might find them very
Ok that makes a bit more sense, but it seems the amount of CPU you will
save isn't worth the effort.
You could create an index template that matches fields with pattern * and
sets index: not_analyzed, that'd be easiest.
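A sketch of such a template using a dynamic mapping (template and field patterns here are illustrative):

```sh
# Map every string field in every new index as not_analyzed
curl -XPUT 'localhost:9200/_template/no_analysis' -d '{
  "template": "*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [{
        "strings_not_analyzed": {
          "match": "*",
          "match_mapping_type": "string",
          "mapping": {"type": "string", "index": "not_analyzed"}
        }
      }]
    }
  }
}'
```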
On 19 December 2014 at 10:12, Eran Duchan pav...@gmail.com wrote:
We use
Sorry but there isn't anything public on this, however 1.5 is looking like
it will land sometime in January.
On 18 December 2014 at 04:41, Peter Portante peter.a.porta...@gmail.com
wrote:
Looking for a possible timeline for the release of 1.5 (anticipating
inner_hits support). Is there a
You should really have the same heap size on both nodes.
What ES and java versions are you on?
On 18 December 2014 at 19:54, shriyansh jain shriyanshaj...@gmail.com
wrote:
Hi All,
I am seeing some warning messages in the elasticsearch log files about
garbage collection taking a pretty long time.
Can you elaborate on where you are seeing the extra 50k of entries? ie how
did you get this count.
There are currently no ODBC/JDBC connector plugins for Logstash, so you need to
use a river.
You may also want to ask further Logstash questions at
It feels like you're almost defeating the whole purpose of using
Elasticsearch with this approach! Is it really that much of a problem?
On 19 December 2014 at 08:15, Eran Duchan pav...@gmail.com wrote:
I'd like not to use analysis across my schema to save a bit of CPU (I know
the penalty this
It doesn't matter, the value after node. is just an arbitrary label.
On 17 December 2014 at 10:26, panfei cnwe...@gmail.com wrote:
in the default elasticsearch.yml file I see there is a node.rack
configuration parameter:
*# A node can have generic attributes associated with it, which can
How many nodes, how much data and in how many indexes? What ES version?
On 17 December 2014 at 11:47, Wilfred Hughes yowilf...@gmail.com wrote:
Hi folks
After a few hours/days of uptime, our elasticsearch cluster is spending
all its time in GC. We're forced to restart nodes to bring response
Try it with the -n and -D flags and see if that provides more info.
On 17 December 2014 at 14:12, Chetan Dev cheten@carwale.com wrote:
Hi,
after executing the command curator delete --older-than 3 i got the following
response
2014-12-17 18:39:02,088 INFO Job starting...
(two data and one dataless) and using ES 1.2.4,
for storing logstash data. 500 GiB data total, 49 indexes, 5 shards per
index.
Did you take a backup?
Did you go from 0.90.0 to 1.3.4 directly?
On 17 December 2014 at 19:21, Peter Portante peter.a.porta...@gmail.com
wrote:
On Wednesday, December 17, 2014 9:23:28 AM UTC-5, Grzegorz K wrote:
Hello,
I have updated ElasticSearch from ver 0.90.3 to ver 1.3.4 ( OS -
It looks like this -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk
What is your actual disk usage? Can you run a curl -XGET
localhost:9200/_cluster/settings and see if it mentions those settings?
On 16 December 2014 at 23:28, Pauline
Unfortunately you lose all data on the node as ES will stripe segments
across the disks/mount points.
On 15 December 2014 at 11:45, Elvar Böðvarsson elv...@gmail.com wrote:
If you have a node that has 4x disks as JBOD and you configure
Elasticsearch to use all of them, so it will write to as
Master only nodes are very light, you can probably get away with 1 or 2GB
for heap.
Of course this will depend on your cluster topology and a few other things,
so it might be best to trial it.
On 14 December 2014 at 10:01, Yoav Melamed yo...@exelate.com wrote:
Hello,
I run Elasticsearch
As I found out yesterday, the problem with shard splitting in ES is that
the algorithms used to round-robin data allocation during indexing are
based on a pre-determined hash. So if you suddenly alter the hash you may
end up with shards that are overloaded compared to others.
It's still undergoing beta testing, I haven't heard a GA release yet but
I'll ask!
(If you're not aware, Shield is a paid product, like Marvel it's not an
open source product like the core ELK stack.)
On 12 December 2014 at 00:04, Ben McCann benjamin.j.mcc...@gmail.com
wrote:
Hi,
Is there
If you can architect around the loss of a node and subsequent recovery,
then I reckon it's worth testing the notion of not running RAID.
On 12 December 2014 at 14:30, Nikolas Everett nik9...@gmail.com wrote:
Striping raid is viable for 2 or 3 disks because of the redundancy.
Software raid
is that it should scale and should be easy to
use. Having no headaches around the shard count, once it is set, is easy.
Jörg
It's a risk thing, you need to be comfy with the risk of losing one disk
and all that it entails.
If you can mitigate that through a process and you are happy with the
remaining risk, then :)
On 12 December 2014 at 16:13, Elvar Böðvarsson elv...@gmail.com wrote:
This would be for log storage,
Shield is still in closed beta and it is not accessible to the general
public at the moment.
I don't have an ETA on a general release either sorry!
On 11 December 2014 at 08:48, Deepak Kumar deepmun1...@gmail.com wrote:
Hi Friends,
Tried downloading Shield. it is mentioned that You can also
1 - Depends on how much data you have.
2 - Yes, two replicas will mean one will never be assigned. This is because
you have 2 nodes but 3 copies of the data. Set replica to just 1.
3 - That sounds very unusual. Have you tried to fetch one of these
documents via id?
On 11 December 2014 at 15:50,
You will need to proxy ES with something else, like nginx, to get that info.
On 10 December 2014 at 10:55, Gabesz Gabesz boss.gab...@gmail.com wrote:
Anybody help me?
Kind Regards
Gabesz