@rmuir Interesting, it sounds like my gains may be better than previously
expected, given the server is constantly evicting from heap. If I'm able,
I'll post some performance metrics back here when I'm done.
--
Please update your bookmarks! We have moved to https://discuss.elastic.co/
---
Thanks for the clarification Adrien. If that's the case, is there such a
flag that can enable them by default for all fields (excluding non-analyzed
strings; using ~1.4.3 here)?
Also, do you guys have more performance metrics on using Doc Values vs FDC?
I've seen the 10-25% slower figure cited.
Just to correct myself, I misstated; a 1/3 increase in index size, not 3x.
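There isn't a single global flag in 1.4 as far as I know, but a dynamic template in a _default_ mapping comes close. A sketch for one value type (the template name is illustrative; you would repeat it per value type, and analyzed string fields can't use doc values in 1.x):

```json
{
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "longs_with_doc_values": {
            "match_mapping_type": "long",
            "mapping": { "type": "long", "doc_values": true }
          }
        }
      ]
    }
  }
}
```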
---
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop
I'm doing some overall testing on my cluster, debating if I should switch
to Doc Values. I have about 15 fields for each document, with 83 million
documents spread across 60 indices. All the fields are dynamically mapped,
and all of them can migrate to Doc Values. So, I have one copy of the
I believe that we're seeing the same issue. We're using Ubuntu 14.04, ES
1.4.4 and the AWS plugin. We get these random failures every few weeks and
have to restart our cluster:
Caused by: org.elasticsearch.transport.NodeNotConnectedException:
Is there an ETA for 2.0?
--
Thanks,
Matt Weber
Thanks Adrien!
On Mon, Apr 20, 2015 at 3:38 PM, Adrien Grand adr...@elastic.co wrote:
Hi Matt,
We have this meta issue which tracks what remains to be done before we
release 2.0: https://github.com/elastic/elasticsearch/issues/9970. We
plan to release as soon as we can but some
Is the source for the Logstash Shield Plugin open source / available
anywhere? The plugin adds SSL mutual-auth support for the transport
ports. I was hoping to do the same for HTTPS. Currently the HTTP output
is only server-auth.
Great thanks!
I hadn't realised there were indices-templates - sometimes you can't see
the wood for the trees.
Editing elasticsearch.yml did the trick though.
On Wednesday, 11 March 2015 10:58:42 UTC, Magnus Bäck wrote:
On Wednesday, March 11, 2015 at 10:56 CET,
Matt Stibbs matts
Hi there - I have searched all manner of online resources but can't seem to
find an answer to:
*How do I change the default number of replicas for new indices in my ES
cluster?*
I am running a single node cluster in development, and want to stop it from
creating unallocated replica shards
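The answer that worked elsewhere in this thread was editing elasticsearch.yml; a minimal sketch for a single-node development cluster:

```yaml
# elasticsearch.yml -- new indices default to zero replicas (single-node dev)
index.number_of_replicas: 0
```

An index template scoped to a name pattern would also work if only some indices should skip replicas.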
duplicated that rule for ‘docker0’.
sudo iptables -I INPUT 3 -i docker0 -j ACCEPT
sudo service iptables save
The line ‘3’ may be different depending on any other rules you may have added.
On March 2, 2015 at 6:37:29 AM, wzcwts521 (wzcwts...@gmail.com) wrote:
Hi Matt
I am wondering if you have
Hi all,
I am currently indexing tags (industries) for an entity with a data
structure like this:
industry: [Consulting Recruitment,Professional Services,Education
Training]
I am applying a termsAggregation to the query as:
AggregationBuilders.terms("industry").field("industry");
What I
Found the answer. The /_cluster/state endpoint has a list of all snapshots
in progress. For whatever reason, mine was stuck in INIT for a very long
time. I deleted via the /_snapshot/repo/snapshot_name endpoint.
On Wednesday, February 11, 2015 at 9:31:10 AM UTC-5, Matt Hughes wrote:
My ES cluster seems to think it is in the middle of creating a snapshot yet
I don't see any IN_PROGRESS snapshots in my repository. It's supposed to
be an hourly snapshot, but I don't see anything that has started within a
few days.
Yet every time I try and do anything with the repository, it
connectivity to other host in the cluster
}
}
On Monday, January 5, 2015 5:42:55 PM UTC-5, Mark Walkom wrote:
It sounds like ES tries to access it because that isn't a local interface
that ES is bound to. Are you using NAT on a higher layer?
On 6 January 2015 at 01:59, Matt Hughes hughe
In my VM environment, a VM can't actually see its public IP address. I
have the following setup:
network.publish_host: 10.255.207.123
discovery.zen.ping.unicast.hosts: 10.255.207.123,10.255.207.124,10.255.
207.125
My VM can see 124 and 125 just fine, but due to issues completely unrelated
No there isn't.
On 10 December 2014 at 21:38, Matt Hughes hughes.m...@gmail.com wrote:
Is there a mechanism inside ES to specify multiple config files? I'd like to
have something like:
defaults.yml
overrides.yml
That way, it's much easier for me to see exactly what's different about one box
vs another
Is there a mechanism inside ES to specify multiple config files? I'd like
to have something like:
defaults.yml
overrides.yml
That way, it's much easier for me to see exactly what's different about one
box vs another.
Could someone clarify the difference between the 'status' field on the root
URL vs the 'status' field on /_cluster/health? In this system, the root
URL returns '200' even though the cluster is in 'yellow' as reported by the
cluster health check. What does 200 mean here? What are other
Could someone detail exactly what is re(stored) when you set this value to
true? Some subset of values returned by /_cluster/state?
Why would you ever want to set this to true?
Those settings look correct to me. You can set using kb,mb,gb, etc.
On Wednesday, December 3, 2014 4:25:38 AM UTC-5, Johan Öhr wrote:
Hi,
I have 12 elasticsearch nodes, with 10gb eth
I've been having a lot of problems with the performance of snapshots; it
throttles to 20 MB/s even though I
in one repository, and daily
snapshots of your optimized indices in another. This prevents the
slow-down by reducing the number of segments the repositories must search
through for both hourly *and* daily snapshots.
--Aaron
On Tuesday, December 2, 2014 1:20:25 PM UTC-5, Matt Hughes wrote
AM, Matt Hughes hughe...@gmail.com wrote:
Thanks for the speedy reply.
As for 1, I understand that ES optimizes for *storage* as snapshots of
the same index accumulate; I just wish it could also optimize for
performance. Right now, with a measly 4.5 gig cluster, the difference
to 300 segments per index or more. 30 days of that
is a rather large number of segments to test, especially over TCP/IP to Amazon
S3. It has to test each segment before it can ignore it.
—Aaron
On Wed, Dec 3, 2014 at 10:24 AM, Matt Hughes hughes.m...@gmail.com wrote:
I understand that the segments are only backed
As noted here --
https://groups.google.com/forum/#!searchin/elasticsearch/snapshot$20duration/elasticsearch/bCKenCVFf2o/TFK-Es0wxSwJ
-- the time it takes to perform a snapshot increases the more snapshots you
take. This eventually can become untenable. So far, the only solution
seems to be
/issues/1991
when I asked if it was possible.
I guess I liked that it was a static webapp as it allows for easier
integration on our end.
Same, I didn't want to run Yet Another Webserver that has fewer features,
etc. etc. and I liked how v3 was easy to deploy and integrate.
Matt
Trying to use the curator API. I want to do a backup of all my indices and
only if the snapshot backup is successful, trim any snapshots older than 5
days.
First part is simple enough, but I don't see any return for create_snapshot:
bash-4.1# cat test.py
#!/usr/bin/env python
import
-20141121160846 is within the
threshold period (10 days).
2014-11-21 16:38:32,959 INFO Specified snapshots deleted.
2014-11-21 16:38:32,959 INFO Done in 0:00:00.016456.
On Friday, November 21, 2014 11:09:46 AM UTC-5, Matt Hughes wrote:
Trying to use the curator API. I want to do a backup
Hey Adrien,
Say I have two fields in my index with values:
genre = {Action, Adventure}
actor = {Tom Cruise, Jason Statham}
I'm looking for a way to get the distinct combinations of values with doc
counts, so I use a sub-aggregation:
"aggs": {
  "genreAgg": {
    "terms": {
      "field":
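A complete version of that nested aggregation might look like the following (field and aggregation names taken from the example above; assumes both fields are not_analyzed so the values aren't split into tokens):

```json
{
  "size": 0,
  "aggs": {
    "genreAgg": {
      "terms": { "field": "genre" },
      "aggs": {
        "actorAgg": {
          "terms": { "field": "actor" }
        }
      }
    }
  }
}
```

Each genre bucket then carries actor sub-buckets with doc counts, which gives the distinct combinations.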
Two of my three nodes had catastrophic disk loss. Cluster was set up with
1 replica, 5 shards per index. Obviously the remaining node does not have
all shards for each index.
The system still responds to queries though it obviously has holes in the
data.
If I do nothing, my cluster
Thoughts, anybody? I saw that you can somewhat do this with scripts and
letting the top aggregation encompass all term fields, but is that any more
performant?
I have a field with values like:
foo
bar
bar-one
Unfortunately, when I set up this index, I didn't realize that I wanted to
turn off tokenization (index: not_analyzed). Now when I try and do
terms aggregation, I get back the tokenized values:
foo
bar (2)
one
Is there any way to do an
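A common workaround in 1.x is a multi-field that keeps an untouched copy next to the analyzed one; a sketch (my_field and the raw sub-field name are illustrative, and existing documents must be reindexed before the sub-field is populated):

```json
{
  "properties": {
    "my_field": {
      "type": "string",
      "fields": {
        "raw": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

The terms aggregation would then target my_field.raw instead of my_field.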
Currently have three-node cluster in a single data center; all nodes are
both master/data eligible. I'm trying to migrate to a six-node cluster
across three separate DCs for increased reliability with this topology:
DC1:
1. Master-only
2. Data-only
DC2:
1. Master-only
2. Data-only
DC3:
1.
Unclear how that would work. Example?
On Thursday, October 16, 2014 6:49:13 PM UTC-4, Mark Walkom wrote:
You could try using an if statement in your env variable?
On 17 October 2014 06:50, Matt Hughes hughe...@gmail.com wrote:
I have parameterized my elasticsearch.yml file
Here's a bit of background info:
I'm interested in using aggregations to produce distinct keys for multiple
term fields and then getting a measure value for those keys. This can
be accomplished by tree-ing term aggregations together and whatever
measure terms are applied to the lowest
I have parameterized my elasticsearch.yml file like so:
node.name: ${NODE_NAME}
This lets me pass in NODE_NAME as an environment variable when running
ES. I was *hoping* that if NODE_NAME was undefined, ES would fall back to
the default of picking a dynamic name, but it just dies. Is
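If the interpolation can't supply a default, one workaround is to resolve the fallback in a small wrapper before launching ES; a sketch (the es-hostname fallback is illustrative, it does not reproduce ES's own dynamic-name behavior):

```shell
# Give NODE_NAME a fallback before ES reads elasticsearch.yml.
# The "es-<hostname>" default here is an illustration, not an ES convention.
NODE_NAME="${NODE_NAME:-es-$(hostname -s)}"
export NODE_NAME
echo "launching with node.name=${NODE_NAME}"
# ...then start bin/elasticsearch as usual
```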
I have an ES cluster of 3 servers where indexes are configured with 5
shards and 2 replicas (so, every index has 5 primary shards and 10 replica
shards, with 5 shards allocated to each server). I have just upgraded from
1.0.0 to 1.3.4 by stopping one server at a time, updating ES then
this is because the file downloaded contains uncompiled source
code and not the binary. Assuming that guess is correct, is there a hosted
source for this plugin or some other workaround, or do I need to compile
and serve the plugin myself?
Thanks for any help,
Matt
Thanks David! I appreciate the quick response. Please keep me posted.
Best,
Matt
On Tuesday, October 7, 2014 12:53:17 PM UTC-4, David Pilato wrote:
At the moment working on the plugin.
It has not been released yet.
I hope to release it tomorrow or so.
Best
--
*David Pilato
I have a logstash-forwarder client sending events to lumberjack -
elasticsearch to timestamped logstash indices. How does logstash decide
what *day* index to put the document in. Does it look at @timestamp?
@timestamp is just generated when the document is received, correct? So if
you
at 2:55:32 PM, Brian (brian.from...@gmail.com) wrote:
Matt,
Assuming your logstash configurations correctly set the @timestamp field, then
logstash will store the document in the day that is specified by the @timestamp
field.
I have verified this behavior by observation over the time we have been
performance hits that are obvious. Is this a known issue?
On Tuesday, September 16, 2014 5:15:11 PM UTC-4, Matt Hughes wrote:
I have logstash indices that go back thirty days. I have logs in those
indices from today.
If I do a search with:
size: 500,
sort: [
{
@timestamp
I'm running curator on every node in an N-node ELK cluster. Is there any
reason I *wouldn't* want to have the --master-only flag turned on?
If you delete an index from the master, it's still going to get deleted
from the other nodes right? I'm trying to understand why you would ever
not want
://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_shard_size
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_minimum_document_count
Thanks,
Matt Weber
On Tue, Sep 16
I have logstash indices that go back thirty days. I have logs in those
indices from today.
If I do a search with:
"size": 500,
"sort": [
  {
    "@timestamp": {
      "order": "desc",
      "ignore_unmapped": true
    }
  }
]
I don't get any logs from today. If I limit the search
With a standard LB in front of an N-node cluster, what's the best URL in
the ES API to check the health of a particular node (so as to know to
remove it at least temporarily).
There is the node info API:
curl -XGET 'http://localhost:9200/_nodes'
Lots of ES best practice articles recommend having dedicated master nodes.
Specifically, that would involve setting these flags:
node.master: true
node.data: false
Say, you had 7 index nodes and 3 master nodes
(https://blog.hipchat.com/2013/10/16/how-hipchat-scales-to-1-billion-messages/)
As a followup, any ratio guidelines for indexing nodes vs dedicated
masters. From what I can tell, it wouldn't make much sense to run with
only one dedicated master node; if that node goes down, your whole cluster
becomes unavailable.
On Monday, August 11, 2014 2:18:08 PM UTC-4, Matt Hughes
You will be able to do this soon. See:
https://github.com/elasticsearch/elasticsearch/pull/7075
Thanks,
Matt Weber
On Aug 9, 2014 10:44 AM, James Cook djamesc...@gmail.com wrote:
There seems to be some reluctance by the ES team to provide scriptable
aggregators, or perhaps it's on a roadmap
I'm running elasticsearch in a somewhat locked down environment and the
fewer ports I have open, the better.
I guess my questions boil down to:
1) Why do you need multiple ports open in the 93xx range?
2) Will anything bad happen if I just have 9300 open? Will performance
suffer, and if so, in
-port-numbers/
for an unassigned port number to avoid conflicts.
Jörg
On Sat, Aug 2, 2014 at 6:26 PM, Matt Hughes hughe...@gmail.com wrote:
I'm running elasticsearch in a somewhat locked down environment and the
fewer ports I have open, the better.
I guess my questions boil
[{}],
serverChannel.getLocalAddress());
First port in the port range that is open gets bound. And this appears to
only happen once. Thanks.
On Saturday, August 2, 2014 2:19:33 PM UTC-4, Matt Hughes wrote:
Thanks for the reply.
Ok, I guess this comment in the config still confuses me:
# Elasticsearch, by default
Originally I was getting a bunch of No Route to Host errors and tracked
it down to being out of file handles. I have fixed the file handle
problem, but I still keep getting No route to host errors; the odd thing
is, the error says it can't connect to itself: These logs are *from*
...@campaignmonitor.com
web: www.campaignmonitor.com
On 2 August 2014 11:51, Matt Hughes hughe...@gmail.com wrote:
Originally I was getting a bunch of No Route to Host errors and tracked
it down to being out of file handles. I have fixed the file handle
problem
:::444 :::*
On Friday, August 1, 2014 11:30:06 PM UTC-4, Matt Hughes wrote:
No, not loopback. I should caveat that this is running in a docker
container.
The only values I specify in my config are:
network.publish_host: 10.52.207.36
I have a time-series index. I create a new version once a day using the
format index-name-yyyy-mm-dd.
When creating the index, I assign it a constant alias of 'index-name'.
This way, clients can just refer to the 'index-name', and not have to
append the timestamp. I create the alias when I
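One way to avoid any window where the alias is missing is to attach it in the index-creation request itself; a sketch of the body for PUT /index-name-2015-04-20 (the date suffix is illustrative):

```json
{
  "aliases": {
    "index-name": {}
  }
}
```

Clients keep querying index-name while the dated index rolls over daily.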
It's currently blocked until we can figure out a way to prevent a bad query
from triggering an OOM error. The goal (as far as I've been told) is to
get this in, but no ETA. I need to update the PR to the latest master as
there have been significant changes as well.
Thanks,
Matt Weber
On Jul 25
will be the
default, but you can write native transform scripts as well [2].
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-transform.html#mapping-transform
[2] https://github.com/imotov/elasticsearch-native-script-example/pull/7
Thanks,
Matt Weber
job!
Thanks,
Matt Weber
On Thu, Jul 24, 2014 at 7:01 PM, Nikolas Everett nik9...@gmail.com wrote:
I wanted to do conditional copy_to and Andrian suggested implementing
scripted transforms instead. Much more flexible. They mesh well with the
shift to groovy too because groovy is much more
I have a three-node cluster that has gone red. The nodes in the cluster
were shut down abruptly as a result of power loss.
Cluster Health is:
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 43,
See PR #3278. Hopefully it will get merged into one of the next releases.
https://github.com/elasticsearch/elasticsearch/pull/3278
Thanks,
Matt Weber
On Thu, Jun 26, 2014 at 12:10 AM, Thomas thomas.bo...@gmail.com wrote:
Hi,
Unfortunately this is not supported by elasticsearch
I have not tested routing but I did put that functionality in so it should
work fine. Let me know if you have any issues!
Thanks,
Matt Weber
On Thu, Jun 26, 2014 at 7:20 PM, Drew Kutcharian d...@venarc.com wrote:
Thanks Matt, that feature is exactly what we need. One thing I couldn’t
the real count.
Any help would be appreciated.
-Matt
client.admin().indices().prepareCreate(indexName).addMapping("product",
mappingJson).execute().actionGet();
-Matt
On Tuesday, June 24, 2014 6:35:32 AM UTC+12, Brian wrote:
Perhaps you need to insert the execute().actionGet() method calls, as
below?
CountRequestBuilder builder = client.prepareCount
Fixed that one by parsing the json as Map<String,Object> rather than a
string.
On Tuesday, June 24, 2014 7:41:42 AM UTC+12, Matt Chambers wrote:
I think get() is just a shortcut for execute().actionGet()
Anyway, problem solved, it was a problem with my mapping not applying
correctly
Unfortunately, that version of the sqlite driver does not work on OSX:
java.lang.NoClassDefFoundError: org/sqlite/NativeDB
See:
https://bitbucket.org/xerial/sqlite-jdbc/issue/127
On Thursday, 24 April 2014 07:59:11 UTC+1, Jörg Prante wrote:
You must use a JDBC4 driver (jdbc sqlite
Ahh, I just realised that if I solve this, I just bump into the next
problem regarding the readonly flag:
https://github.com/jprante/elasticsearch-river-jdbc/issues/250
Humph :(
On Thursday, 5 June 2014 12:05:24 UTC+1, Matt Burns wrote:
Unfortunately, that version of the sqlite driver does
Yes that works. Docs could really use work there though. The same string
in elasticsearch.yml is [one, two, three], so I assumed I'd need to
pass in brackets.
On Thursday, May 29, 2014 4:09:02 PM UTC-4, InquiringMind wrote:
I believe that the host names must be comma-separated and no
I'm trying to set up unicast discovery. I want to pass in the hosts via
environment variable and am relying on elasticsearch.yml support of
environment variable interpolation.
Tried two formats without any luck:
First approach (pass in contents of array):
export ES_HOSTS='one, two'
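Per the reply, the value has to be a plain comma-separated list with no brackets; a sketch:

```yaml
# elasticsearch.yml -- unicast hosts interpolated from the environment
# export ES_HOSTS='one,two,three'   (comma-separated, no brackets)
discovery.zen.ping.unicast.hosts: ${ES_HOSTS}
```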
to reverse your query string as the wildcard query
is not analyzed.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-reverse-tokenfilter.html#analysis-reverse-tokenfilter
Thanks,
Matt Weber
On Thu, May 22, 2014 at 11:09 AM, Erik Rose grinche...@gmail.com wrote:
Martijn
dashboards?
Matt Wise
Sr. Systems Architect
Nextdoor.com
It would be best to manage elasticsearch outside of tomcat and use the java
or rest api to communicate with ES from within your app. If you absolutely
must run ES within tomcat, have a look at the wares transport[1].
[1] https://github.com/elasticsearch/elasticsearch-transport-wares
Thanks,
Matt
There is an open issue to add a blocking close method to BulkProcessor
https://github.com/elasticsearch/elasticsearch/pull/4180
Matt
On Wednesday, 23 April 2014 10:51:50 UTC+1, Jörg Prante wrote:
You must flush the BulkProcessor and wait until your code has received all
responses from
Hi,
yes, I've tested it with bulk failures - it seems to work well.
Internally, BulkProcessor releases semaphores for bulk failures and other
exceptions that can be thrown by the client, like NoNodeAvailableException,
so it shouldn't ever block forever.
Matt
On Wednesday, 23 April 2014 13
/en/elasticsearch/reference/current/query-dsl-function-score-query.html
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_native_java_scripts
Thanks,
Matt Weber
On Thu, Apr 17, 2014 at 11:54 PM, Srinivasan Ramaswamy
ursva...@gmail.com wrote:
I would
and make the script return as quick as possible.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-all-query.html
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-rescore.html
Thanks,
Matt Weber
On Fri, Apr 18, 2014
Trying to compose a query and filter combination to no avail:
{
  "from": 0,
  "size": 200,
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "fields": [
            "_all"
          ],
          "query": "\"Test message\""
        }
      },
at
http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#string
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-bool-filter.html
Thanks,
Matt
escaping the terms. Is there
anything special about having nested field names like that
'where.processId'?
On Friday, April 18, 2014 4:07:31 PM UTC-4, Matt Weber wrote:
Chances are your appId and processId fields are analyzed so it is breaking
up the id's. Update your mapping
Did you reindex your docs after updating the mapping? Can you post your
mapping and original docs?
On Friday, April 18, 2014, Matt Hughes hughes.m...@gmail.com wrote:
Thanks for the quick reply!
I updated the mappings and confirmed both types read not_analyzed. I
also updated the query
Nevermind. It was an error on my part; these changes worked. Thanks again!
On Friday, April 18, 2014 5:51:31 PM UTC-4, Matt Hughes wrote:
Thanks for the quick reply!
I updated the mappings and confirmed both types read not_analyzed. I
also updated the query to use bool/must
* Matt Dainty m...@bodgit-n-scarper.com [2014-04-11 11:01:33]:
With the cluster up I tried the tribe node again, now it just logs this
every 20 seconds:
[2014-04-11 15:38:25,497][INFO ][discovery.zen]
[es-tribe-01/sydney] failed to send join request to master
[[es-master-02
* David Pilato da...@pilato.fr [2014-04-11 06:51:56]:
Sorry Matt
I did not get a chance to look at it yet in details.
Just to make sure it's not related to the jetty plugin, could you try remove
it from the tribe node?
I just pared back the config on the tribe node to just the minimum
the private IP addresses again and I'm in
the same situation as before.
Matt
discovery.zen.ping_timeout to 10s but it doesn't help.
All the other timeouts I can find seem to default to 30s which is more
than enough.
Matt
We use Flume 1.4 to pass logs into HDFS as well as ElasticSearch for
storage. The pipeline looks roughly like this:
Client to Server Flow...
(local_app -> local_host_flume_agent) AVRO/SSL
(remote_flume_agent)...
Agent Server Flow ...
(inbound avro -> FC1 -> ElasticSearch)
(inbound avro
:03:25 AM UTC-7, Matt wrote:
We use Flume 1.4 to pass logs into HDFS as well as ElasticSearch for
storage. The pipeline looks roughly like this:
Client to Server Flow...
(local_app -> local_host_flume_agent) AVRO/SSL
(remote_flume_agent)...
Agent Server Flow ...
(inbound avro
* David Pilato da...@pilato.fr [2014-04-08 03:50:52]:
Hey Matt,
I'd like to understand better what is happening here.
Could you gist your elasticsearch.yml files (the ones for elasticsearch
standard nodes and the tribe node one)?
Of course, replace your EC2 credentials by dummy values
this is supposed to work? Is there an
alternative way to attach a remote tribe node to this cluster easily?
Thanks
Matt
feature? Bug? Thoughts?
Matt Wise
Sr. Systems Architect
Nextdoor.com
anything in nodes logs?
How much HEAP did you give to your nodes?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 21 March 2014 at 22:26, Matt Greenfield matthew.d@gmail.com wrote:
Hi,
We have been seeing sporadic NodeDisconnectedException
If this is a concern, why not have your clients use the REST API so they
don't need to worry about matching their Java version with the Java version
of the search cluster?
Thanks,
Matt Weber
On Fri, Mar 21, 2014 at 12:07 PM, kimchy kim...@gmail.com wrote:
Not trivializing the bug at all
Hi,
We have been seeing sporadic NodeDisconnectedException and
NoNodeAvailableException in our ES cluster (0.90.7).
Our cluster is made up of 2 data nodes. One data node has a single primary
shard and one data node has a single replica shard. We connect using the
Java TransportClient
on your computer and then publish them to Kibana. I swear in
3.0.0 there used to be a simple method for loading them from a GIST or
something, but now we can't find it. Help?
--Matt
How about using parent/child functionality?
https://gist.github.com/mattweber/96f3515fc4453a5cb0db
Thanks,
Matt Weber
On Wed, Feb 26, 2014 at 7:45 PM, Jayesh Bhoyar jsbonline2...@gmail.com wrote:
Hi Binh,
Thanks for the answer.
Is there any case if I index this data into same index
stress free.
Has anyone encountered this issue before or have any suggestions on
improving the insert time for a query?
Note that we are not seeing this behavior with bulk indexing, etc. so the
problem seems to be just confined to the percolator itself.
Thanks,
Matt
: 1,
version : {
created : 199
}
}
}
}
}
where we have 10 nodes serving as our data nodes
On Saturday, February 22, 2014 4:31:32 PM UTC-5, Matt Price wrote:
Hi,
Today we starting noticing that we have a long response delay from ES when
inserting
at least three master eligible nodes,
in case you get network disruptions. Additionally, you should set up at
least two data nodes, otherwise your data is not fault tolerant against
data loss if one server fails. Replica level 1 requires two data nodes.
Jörg
On Mon, Feb 10, 2014 at 10:35 PM, Matt
Hey all!
Background: I am using elasticsearch with logstash to do some log analysis.
My use-case is write-heavy, and I have configured ES accordingly. After
experimenting with different setups, I am considering the following
implementation:
*separate log processing from ES cluster*
1x
Thanks for the responses! After reading up on the split brain problem, I
am moving to a three-node cluster with one master-only (on logstash
server), one master/data, and one data-only server