@rmuir Interesting, it sounds like my gains may be better than previously
expected, given the server is constantly evicting from heap. If I'm able,
I'll post some performance metrics back here when I'm done.
--
Please update your bookmarks! We have moved to https://discuss.elastic.co/
---
Thanks for the clarification Adrien. If that's the case, is there such a
flag that can enable them by default for all fields (excluding non-analyzed
strings; using ~1.4.3 here)?
Also, do you guys have more performance metrics on using Doc Values vs FDC?
I've seen the "10-25%" slower value thr
Just to correct myself, I misstated; a 1/3 increase in index size, not 3x.
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop r
I'm doing some overall testing on my cluster, debating if I should switch
to Doc Values. I have about 15 fields for each document, with 83 million
documents spread across 60 indices. All the fields are dynamically mapped,
and all of them can migrate to Doc Values. So, I have one copy of the d
I believe that we're seeing the same issue. We're using Ubuntu 14.04, ES
1.4.4 and the AWS plugin. We get these random failures every few weeks and
have to restart our cluster:
Caused by: org.elasticsearch.transport.NodeNotConnectedException:
[prod-flume-vpc-es-useast1-58-i-1ff961e3-flume-elasti
Thanks Adrien!
On Mon, Apr 20, 2015 at 3:38 PM, Adrien Grand wrote:
> Hi Matt,
>
> We have this meta issue which tracks what remains to be done before we
> release 2.0: https://github.com/elastic/elasticsearch/issues/9970. We
> plan to release as soon as we can but some of t
Is there an ETA for 2.0?
--
Thanks,
Matt Weber
+1 it looks really good. Would the mailing list mode be enabled so we can
still get everything in our inbox if desired?
Thanks,
Matt Weber
On Thu, Apr 9, 2015 at 11:21 AM, Leslie Hawthorn wrote:
> Thank you for your feedback, Glen! We're currently planning to use the
> host
Is the source for the Logstash Shield Plugin open source / available
anywhere? The plugin adds SSL mutual-auth support for the transport
ports. I was hoping to do the same for HTTPS. Currently the HTTP output
is only server-auth.
Great thanks!
I hadn't realised there were index templates - sometimes you can't see
the wood for the trees.
Editing elasticsearch.yml did the trick though.
On Wednesday, 11 March 2015 10:58:42 UTC, Magnus Bäck wrote:
>
> On Wednesday, March 11, 2015 at 10:56 CET,
>
Hi there - I have searched all manner of online resources but can't seem to
find an answer to:
*How do I change the default number of replicas for new indices in my ES
cluster?*
I am running a single node cluster in development, and want to stop it from
creating unallocated replica shards whic
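For reference, a minimal sketch of the two usual fixes (the index name and config path here are illustrative, and a node is assumed on localhost:9200): set the default in elasticsearch.yml for indices created from now on, or change an existing index through the settings API.

```shell
# Default for new indices (append to elasticsearch.yml, then restart the node):
echo 'index.number_of_replicas: 0' | sudo tee -a /etc/elasticsearch/elasticsearch.yml

# Change an already-created index at runtime:
curl -XPUT 'http://localhost:9200/my-index/_settings' -d '
{ "index": { "number_of_replicas": 0 } }'
```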
Just duplicated that rule for ‘docker0’.
> sudo iptables -I INPUT 3 -i docker0 -j ACCEPT && sudo service iptables save
The line ‘3’ may be different depending on any other rules you may have added.
On March 2, 2015 at 6:37:29 AM, wzcwts521 (wzcwts...@gmail.com) wrote:
Hi Matt
I am wo
Hi all,
I am currently indexing tags (industries) for an entity with a data
structure like this:
industry: ["Consulting & Recruitment","Professional Services","Education &
Training"]
I am applying a termsAggregation to the query as:
AggregationBuilders.terms("industry").field("industry");
W
Found the answer. The /_cluster/state endpoint has a list of all snapshots
in progress. For whatever reason, mine was stuck in INIT for a very long
time. I deleted via the /_snapshot/repo/snapshot_name endpoint.
On Wednesday, February 11, 2015 at 9:31:10 AM UTC-5, Matt Hughes wrote:
>
&
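For anyone else hitting this, the two calls described above look roughly like the following sketch (repository and snapshot names are illustrative):

```shell
# Inspect in-flight snapshots; stuck ones show up under "snapshots"
# in the cluster state.
curl -s 'http://localhost:9200/_cluster/state?pretty'

# Deleting an in-progress snapshot also aborts it (ES 1.x behavior).
curl -XDELETE 'http://localhost:9200/_snapshot/my_repo/stuck_snapshot_name'
```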
My ES cluster seems to think it is in my middle of creating a snapshot yet
I don't see any IN_PROGRESS snapshots in my repository. It's supposed to
be an hourly snapshot, but I don't see anything that has started within a
few days.
Yet every time I try and do anything with the repository, it s
lishHostIp) {
// check connectivity to other host in the cluster
}
}
On Monday, January 5, 2015 5:42:55 PM UTC-5, Mark Walkom wrote:
>
> It sounds like because that isn't a local interface that ES is bound to it
> tries to access it. Are you using NAT on a higher layer?
>
In my VM environment, a VM can't actually see its public IP address. I
have the following setup:
network.publish_host: 10.255.207.123
discovery.zen.ping.unicast.hosts: 10.255.207.123,10.255.207.124,10.255.207.125
My VM can see 124 and 125 just fine, but due to issues completely unrelated
to
No there isn't.
On 10 December 2014 at 21:38, Matt Hughes wrote:
Is there a mechanism inside ES to specify multiple config files? I'd like to
have something like:
defaults.yml
overrides.yml
That way, it's much easier for me to see exactly what's different about one box
vs a
Is there a mechanism inside ES to specify multiple config files? I'd like
to have something like:
defaults.yml
overrides.yml
That way, it's much easier for me to see exactly what's different about one
box vs another.
Could someone clarify the difference between the 'status' field on the root
URL vs the 'status' field on /_cluster/health? In this system, the root
URL returns '200' even though the cluster is in 'yellow' as reported by the
cluster health check. What does 200 mean here? What are other possibl
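As a rough sketch of the difference: a 200 from the root URL only means the node is up and serving HTTP, while the green/yellow/red 'status' in /_cluster/health describes shard allocation (yellow means all primaries are allocated but some replicas are not). Host and port are illustrative:

```shell
# Node liveness: prints the HTTP status code only (200 = node is up).
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:9200/'

# Cluster state: "status" here is green/yellow/red, independent of the 200 above.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```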
Those settings look correct to me. You can set using kb,mb,gb, etc.
On Wednesday, December 3, 2014 4:25:38 AM UTC-5, Johan Öhr wrote:
>
> Hi,
>
> I have 12 elasticsearch nodes, with 10gb eth
>
> I've been having a lot of problems with the performance of snapshots; it
> throttles to 20 mb/s even
Could someone detail exactly what is re(stored) when you set this value to
true? Some subset of values returned by /_cluster/state?
Why would you ever want to set this to true?
300 segments per index or more. 30 days of that
is a rather large number of segments to test, especially over TCP/IP to Amazon
S3. It has to test each segment before it can ignore it.
—Aaron
On Wed, Dec 3, 2014 at 10:24 AM, Matt Hughes wrote:
I understand that the segments are only backed up once. But
” I referenced before), and that’s
> only useful with the --older-than flag that is already there.
>
> —Aaron
>
>
> On Wed, Dec 3, 2014 at 8:57 AM, Matt Hughes > wrote:
>
>> Thanks for the speedy reply.
>>
>> As for 1, I understand that ES optimizes for *storage
ts. In this way you could keep hourly
> snapshots of the last few daily indices in one repository, and daily
> snapshots of your optimized indices in another. This prevents the
> slow-down by reducing the number of segments the repositories must search
> through for both hourly *and
As noted here --
https://groups.google.com/forum/#!searchin/elasticsearch/snapshot$20duration/elasticsearch/bCKenCVFf2o/TFK-Es0wxSwJ
-- the time it takes to perform a snapshot increases the more snapshots you
take. This eventually can become untenable. So far, the only solution
seems to be e
thub.com/elasticsearch/kibana/issues/1991
when I asked if it was possible.
> I guess I liked that it was a static webapp as it allows for easier
> integration on our end.
Same, I didn't want to run Yet Another Webserver that has fewer features,
etc. etc. and I liked how v3 was easy t
21160724 is within the
threshold period (10 days).
2014-11-21 16:38:32,959 INFO curator-20141121160846 is within the
threshold period (10 days).
2014-11-21 16:38:32,959 INFO Specified snapshots deleted.
2014-11-21 16:38:32,959 INFO Done in 0:00:00.016456.
On Friday, November 21, 2014 11:09:
Trying to use the curator API. I want to do a backup of all my indices and
only if the snapshot backup is successful, trim any snapshots older than 5
days.
First part is simple enough, but I don't see any return for create_snapshot:
bash-4.1# cat test.py
#!/usr/bin/env python
import elasticse
Two of my three nodes had catastrophic disk loss. Cluster was set up with
1 replica, 5 shards per index. Obviously the remaining node does not have
all shards for each index.
The system still responds to queries though it obviously has holes in the
data.
If I do nothing, my cluster statu
Hey Adrien,
Say I have two fields in my index with values:
genre = {Action, Adventure}
actor = {Tom Cruise, Jason Statham}
I'm looking for a way to get the distinct combinations of values with doc
counts, so I use a sub-aggregation:
"aggs":{
"genreAgg": {
"terms": {
"
I have a field with values like:
foo
bar
bar-one
Unfortunately, when I set up this index, I didn't realize that I wanted to
turn off tokenization ("index": "not_analyzed"). Now when I try and do
terms aggregation, I get back the tokenized values:
foo
bar (2)
one
Is there any way to do an ag
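A common workaround for this (field, type, and index names here are illustrative, and it requires reindexing existing docs): map the field as a multi-field with a not_analyzed sub-field, then aggregate on the sub-field so whole values like "bar-one" survive.

```shell
# Add a not_analyzed "raw" sub-field alongside the analyzed one (ES 1.x mapping).
curl -XPUT 'http://localhost:9200/my-index/_mapping/my-type' -d '
{
  "properties": {
    "my_field": {
      "type": "string",
      "fields": {
        "raw": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

# After reindexing, aggregate on the raw sub-field to get untokenized values.
curl -XPOST 'http://localhost:9200/my-index/_search?search_type=count' -d '
{ "aggs": { "values": { "terms": { "field": "my_field.raw" } } } }'
```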
Thoughts, anybody? I saw that you can somewhat do this with "scripts" and
letting the top aggregation encompass all term fields, but is that any more
performant?
Unclear how that would work. Example?
On Thursday, October 16, 2014 6:49:13 PM UTC-4, Mark Walkom wrote:
>
> You could try using an if statement in your env variable?
>
> On 17 October 2014 06:50, Matt Hughes >
> wrote:
>
>>
>>
>> I have paramete
Currently have three-node cluster in a single data center; all nodes are
both master/data eligible. I'm trying to migrate to six-node cluster
across three separate DCs for increased reliability with this topology:
DC1:
1. Master-only
2. Data-only
DC2:
1. Master-only
2. Data-only
DC3:
1. Master-o
Here's a bit of background info:
I'm interested in using aggregations to produce distinct keys for multiple
"term" fields and then getting a "measure" value for those keys. This can
be accomplished by "tree"-ing term aggregations together and whatever
"measure" terms are applied to the lowest
I have parameterized my elasticsearch.yml file like so:
node.name: ${NODE_NAME}
This lets me pass in NODE_NAME as an environment variable when running
ES. I was *hoping* that if NODE_NAME was undefined, ES would fall back to
the default of picking a dynamic name, but it just dies. Is there
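One workaround (a sketch, not a built-in ES feature) is a small wrapper script that gives the variable a fallback before starting ES, using the shell's own default expansion:

```shell
# Wrapper sketch: if NODE_NAME is unset or empty, fall back to the hostname,
# so the ${NODE_NAME} placeholder in elasticsearch.yml always resolves.
NODE_NAME="${NODE_NAME:-$(hostname)}"
export NODE_NAME
echo "starting with node.name=$NODE_NAME"
```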
I have an ES cluster of 3 servers where indexes are configured with 5
shards and 2 replicas (so, every index has 5 primary shards and 10 replica
shards, with 5 shards allocated to each server). I have just upgraded from
1.0.0 to 1.3.4 by stopping one server at a time, updating ES then
re-starti
Thanks David! I appreciate the quick response. Please keep me posted.
Best,
Matt
On Tuesday, October 7, 2014 12:53:17 PM UTC-4, David Pilato wrote:
>
> At the moment working on the plugin.
>
> It has not been released yet.
> I hope to release it tomorrow or so.
>
>
ssing this is because the file downloaded contains uncompiled source
code and not the binary. Assuming that guess is correct, is there a hosted
source for this plugin or some other workaround, or do I need to compile
and serve the plugin myself?
Thanks for any help,
Matt
2:55:32 PM, Brian (brian.from...@gmail.com) wrote:
Matt,
Assuming your logstash configurations correctly set the @timestamp field, then
logstash will store the document in the day that is specified by the @timestamp
field.
I have verified this behavior by observation over the time we have been
I have a logstash-forwarder client sending events to lumberjack ->
elasticsearch to timestamped logstash indices. How does logstash decide
what *day* index to put the document in? Does it look at @timestamp?
@timestamp is just generated when the document is received, correct? So if
you lo
I'm running curator in every node in an N-node ELK cluster. Is there any
reason I *wouldn't* want to have the --master-only flag turned on?
If you delete an index from the master, it's still going to get deleted
from the other nodes right? I'm trying to understand why you would ever
not want
;t see any
performance hits that are obvious. Is this a known issue?
On Tuesday, September 16, 2014 5:15:11 PM UTC-4, Matt Hughes wrote:
>
> I have logstash indices that go back thirty days. I have logs in those
> indices from today.
>
> If I do a search with:
&
I have logstash indices that go back thirty days. I have logs in those
indices from today.
If I do a search with:
"size": 500,
"sort": [
{
"@timestamp": {
"order": "desc",
"ignore_unmapped": true
}
}
]
I don't get any logs from today. If I limit the s
://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_shard_size
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_minimum_document_count
Thanks,
Matt Weber
On Tue, Sep 16
With a standard LB in front of an N-node cluster, what's the best URL in
the ES API to check the health of a particular node (so as to know to
remove it at least temporarily).
There is the node info API:
curl -XGET 'http://localhost:9200/_nodes'
http://www.elasticsearch.org/guide/en/elasticsea
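A sketch of two checks an LB could use (host and port illustrative): the root URL answers per-node, and cluster health with ?local=true is answered by the node you hit rather than being forwarded to the master.

```shell
# Per-node liveness: 200 means this particular node is up and serving HTTP.
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:9200/'

# Cluster status as seen by this node (local=true avoids asking the master).
curl -s 'http://localhost:9200/_cluster/health?local=true&pretty'
```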
As a followup, any ratio guidelines for indexing nodes vs dedicated
masters. From what I can tell, it wouldn't make much sense to run with
only one-dedicated master node; if that node goes down, your whole cluster
becomes unavailable.
On Monday, August 11, 2014 2:18:08 PM UTC-4, Matt H
Lots of ES best practice articles recommend having dedicated master nodes.
Specifically, that would involve setting these flags:
node.master: true
node.data: false
Say, you had 7 index nodes and 3 master nodes
(https://blog.hipchat.com/2013/10/16/how-hipchat-scales-to-1-billion-messages/)
in
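A sketch of the elasticsearch.yml fragment for a dedicated master (path illustrative); with 3 master-eligible nodes, minimum_master_nodes should be 2 (a quorum) to avoid split brain:

```shell
# Append dedicated-master settings to the node config (sketch).
sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
node.master: true
node.data: false
discovery.zen.minimum_master_nodes: 2
EOF
```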
You will be able to do this soon. See:
https://github.com/elasticsearch/elasticsearch/pull/7075
Thanks,
Matt Weber
On Aug 9, 2014 10:44 AM, "James Cook" wrote:
> There seems to be some reluctance by the ES team to provide scriptable
> aggregators, or perhaps it's on a road
er.debug("Bound to address [{}]",
serverChannel.getLocalAddress());
First port in the port range that is open gets bound. And this appears to
only happen once. Thanks.
On Saturday, August 2, 2014 2:19:33 PM UTC-4, Matt Hughes wrote:
>
> Thanks for the reply.
>
> Ok, I guess this comment in the c
if you want to declare your port
> number as "official", you might want to consult
>
> http://www.iana.org/assignments/service-names-port-numbers/
>
> for an unassigned port number to avoid conflicts.
>
> Jörg
>
>
>
> On Sat, Aug 2, 2014 at 6:26 PM, Matt Hughes &
I'm running elasticsearch in a somewhat locked down environment and the
fewer ports I have open, the better.
I guess my questions boil down to:
1) Why do you need multiple ports open in the 93xx range?
2) Will anything bad happen if I just have 9300 open? Will performance
suffer, and if so, in
:::444 :::*
On Friday, August 1, 2014 11:30:06 PM UTC-4, Matt Hughes wrote:
>
> No, not loopback. I should caveat that this is running in a docker
> container.
>
> The only values I specify in my config are:
> network.publish_
ineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 2 August 2014 11:51, Matt Hughes >
> wrote:
>
>> Originally I was getting a bunch of "No Route to Host" errors and tracked
>> it down to being
Originally I was getting a bunch of "No Route to Host" errors and tracked
it down to being out of file handles. I have fixed the file handle
problem, but I still keep getting "No route to host" errors; the odd thing
is, the error says it can't connect to itself: These logs are *from*
10.52.2
I have a time-series index. I create a new version once a day using the
format index-name--mm-dd.
When creating the index, I assign it a constant alias of 'index-name'.
This way, clients can just refer to the 'index-name', and not have to
append the timestamp. I create the alias when I c
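That setup could be scripted roughly like this (index names illustrative): create the day's index, then move the constant alias to it atomically in one _aliases call. If the alias doesn't exist yet, the remove action may error on the very first run and can be dropped for that run.

```shell
DAY=$(date +%Y-%m-%d)

# Create today's index.
curl -XPUT "http://localhost:9200/index-name-$DAY"

# Atomically repoint the constant alias at the new index.
curl -XPOST 'http://localhost:9200/_aliases' -d "
{
  \"actions\": [
    { \"remove\": { \"index\": \"index-name-*\",    \"alias\": \"index-name\" } },
    { \"add\":    { \"index\": \"index-name-$DAY\", \"alias\": \"index-name\" } }
  ]
}"
```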
It's currently blocked until we can figure out a way to prevent a bad query
from triggering an OOM error. The goal (as far as I've been told) is to
get this in, but no ETA. I need to update the PR to the latest master as
there have been significant changes as well.
Thanks,
Matt Weber
job!
Thanks,
Matt Weber
On Thu, Jul 24, 2014 at 7:01 PM, Nikolas Everett wrote:
> I wanted to do conditional copy_to and Andrian suggested implementing
> scripted transforms instead. Much more flexible. They mesh well with the
> shift to groovy too because groovy is much more stabl
default, but you can write native transform scripts as well [2].
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-transform.html#mapping-transform
[2] https://github.com/imotov/elasticsearch-native-script-example/pull/7
Thanks,
Matt Weber
I have a three-node cluster that has gone red. The nodes in the cluster
were shut down abruptly as a result of power loss.
Cluster Health is:
{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_
I have not tested routing but I did put that functionality in so it should
work fine. Let me know if you have any issues!
Thanks,
Matt Weber
On Thu, Jun 26, 2014 at 7:20 PM, Drew Kutcharian wrote:
> Thanks Matt, that feature is exactly what we need. One thing I couldn’t
> figure o
See PR #3278. Hopefully it will get merged into one of the next releases.
https://github.com/elasticsearch/elasticsearch/pull/3278
Thanks,
Matt Weber
On Thu, Jun 26, 2014 at 12:10 AM, Thomas wrote:
> Hi,
>
> Unfortunately this is not supported by elasticsearch, the parent docum
Fixed that one by parsing the json as Map rather than a
string.
On Tuesday, June 24, 2014 7:41:42 AM UTC+12, Matt Chambers wrote:
>
> I think get() is just a shortcut for execute().actionGet()
>
> Anyway, problem solved, it was a problem with my mapping not applying
> correctl
plied.
client.admin().indices().prepareCreate(indexName).addMapping("product",
mappingJson).execute().actionGet();
-Matt
On Tuesday, June 24, 2014 6:35:32 AM UTC+12, Brian wrote:
>
> Perhaps you need to insert the execute().actionGet() method calls, as
> below?
>
&g
return builder.get().getCount();
I get a count of 0 with the Java API. If I take that request and hit the
server with curl I get the real count.
Any help would be appreciated.
-Matt
Ahh, I just realised that if I solve this, I just bump into the next
problem regarding the readonly flag:
https://github.com/jprante/elasticsearch-river-jdbc/issues/250
Humph :(
On Thursday, 5 June 2014 12:05:24 UTC+1, Matt Burns wrote:
>
> Unfortunately, that version of the sqlite drive
Unfortunately, that version of the sqlite driver does not work on OSX:
java.lang.NoClassDefFoundError: org/sqlite/NativeDB
See:
https://bitbucket.org/xerial/sqlite-jdbc/issue/127
On Thursday, 24 April 2014 07:59:11 UTC+1, Jörg Prante wrote:
>
> You must use a JDBC4 driver (jdbc sqlite 3.8.2-SNAP
Yes that works. Docs could really use work there though. The same string
in elasticsearch.yml is ["one", "two", "three"], so I assumed I'd need to
pass in brackets.
On Thursday, May 29, 2014 4:09:02 PM UTC-4, InquiringMind wrote:
>
> I believe that the host names must be comma-separated and
I'm trying to set up unicast discovery. I want to pass in the hosts via
environment variable and am relying on elasticsearch.yml support of
environment variable interpolation.
Tried two formats without any luck:
First approach (pass in contents of array):
export ES_HOSTS='"one", "two"'
disc
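As resolved in the reply above, a plain comma-separated string is what the placeholder expects, with no brackets or extra quoting. A sketch (hostnames illustrative):

```shell
# Comma-separated, no brackets: this is the format the ${ES_HOSTS}
# placeholder in elasticsearch.yml resolves correctly.
export ES_HOSTS='one,two,three'
# elasticsearch.yml would then contain:
#   discovery.zen.ping.unicast.hosts: ${ES_HOSTS}
echo "$ES_HOSTS"
```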
dcard": {"content_rev": "txeN*nerdlihC*"}}
]
}
}
}
}
}
Note that you will need to reverse your query string as the wildcard query
is not analyzed.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-revers
we can run to upgrade our
dashboards?
Matt Wise
Sr. Systems Architect
Nextdoor.com
It would be best to manage elasticsearch outside of tomcat and use the java
or rest api to communicate with ES from within your app. If you absolutely
must run ES within tomcat, have a look at the wares transport[1].
[1] https://github.com/elasticsearch/elasticsearch-transport-wares
Thanks,
Matt
Hi,
yes, I've tested it with bulk failures - it seems to work well.
Internally, BulkProcessor releases semaphores for bulk failures and other
exceptions that can be thrown by the client, like NoNodeAvailableException,
so it shouldn't ever block forever.
Matt
On Wednesday, 23 Apr
There is an open issue to add a blocking close method to BulkProcessor
https://github.com/elasticsearch/elasticsearch/pull/4180
Matt
On Wednesday, 23 April 2014 10:51:50 UTC+1, Jörg Prante wrote:
>
> You must flush the BulkProcessor and wait until your code has received all
> respo
Nevermind. It was an error on my part; these changes worked. Thanks again!
On Friday, April 18, 2014 5:51:31 PM UTC-4, Matt Hughes wrote:
>
> Thanks for the quick reply!
>
> I updated the mappings and confirmed both types read not_analyzed. I
> also updated the query t
Did you reindex your docs after updating the mapping? Can you post your
mapping and original docs?
On Friday, April 18, 2014, Matt Hughes wrote:
> Thanks for the quick reply!
>
> I updated the mappings and confirmed both types read not_analyzed. I
> also updated the query to u
}
},
{
"term":{
"where.processId":
"bd13dbe5-0a4c-4469-a645-44cb3fde280a"
}
}
]
}
}
}
}
}
Still not getting any hits though. Tried escaping the terms. Is there
anything special about hav
ance. Read
why at
http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#string
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-bool-filter.htm
Trying to compose a query and filter combination to no avail:
{
"from":0,
"size":200,
"query":{
"filtered":{
"query":{
"query_string":{
"fields":[
"_all"
],
"query":"\"Test message\""
}
s and make the script return as quick as possible.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-all-query.html
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-rescore.html
Thanks,
Matt Weber
On Fri, Apr 18, 20
/en/elasticsearch/reference/current/query-dsl-function-score-query.html
[2]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html#_native_java_scripts
Thanks,
Matt Weber
On Thu, Apr 17, 2014 at 11:54 PM, Srinivasan Ramaswamy
wrote:
> I would like
I have a field in my documents that consists of a URL.
{...
"url":"http://example.com/2014/04/15/foo-bar-baz/";
...}
I would like to use a regexp query/filter to find documents in my index
with urls matching a regex pattern.
For example: "http://example\.com/\d{4}/\d{2}/\d{2}/([^/]+)/$"
I'm a
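A sketch of such a query (index name illustrative). Two things to keep in mind: the url field has to be not_analyzed so the whole URL is a single term, and Lucene regexps match the entire term with no ^/$ anchors and no \d shorthand, so digit classes are written as [0-9].

```shell
curl -XPOST 'http://localhost:9200/my-index/_search' -d '
{
  "query": {
    "regexp": {
      "url": "http://example\\.com/[0-9]{4}/[0-9]{2}/[0-9]{2}/[^/]+/"
    }
  }
}'
```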
* Matt Dainty [2014-04-11 11:01:33]:
>
> With the cluster up I tried the tribe node again, now it just logs this
> every 20 seconds:
>
> [2014-04-11 15:38:25,497][INFO ][discovery.zen]
> [es-tribe-01/sydney] failed to send join request to master
ng discovery.zen.ping_timeout to 10s but it doesn't help.
All the other timeouts I can find seem to default to 30s which is more
than enough.
Matt
ee it makes a query to find all the nodes in the cluster, with
their public DNS names and IP addresses but then when it connects to
them it starts trying to use the private IP addresses again and I'm in
the same situation as before.
Matt
* David Pilato [2014-04-11 06:51:56]:
> Sorry Matt
>
>
> I did not get a chance to look at it yet in details.
> Just to make sure it's not related to the jetty plugin, could you try remove
> it from the tribe node?
I just pared back the config on the tribe node to
* Matt Dainty [2014-04-08 04:58:46]:
> * David Pilato [2014-04-08 03:50:52]:
> > Hey Matt,
> >
> >
> > I'd like to understand better what is happening here.
> >
> > Could you gist your elasticsearch.yml files (the ones for elasticsearch
> &
On Thursday, April 10, 2014 9:03:25 AM UTC-7, Matt wrote:
>
> We use Flume 1.4 to pass logs into HDFS as well as ElasticSearch for
> storage. The pipeline looks roughly like this:
>
> Client to Server Flow...
> (local_app -> local_host_flume_agent) AVRO/SSL >
We use Flume 1.4 to pass logs into HDFS as well as ElasticSearch for
storage. The pipeline looks roughly like this:
Client to Server Flow...
(local_app -> local_host_flume_agent) AVRO/SSL >
(remote_flume_agent)...
Agent Server Flow ...
(inbound avro -> FC1 -> ElasticSearch)
(inbound av
* David Pilato [2014-04-08 03:50:52]:
> Hey Matt,
>
>
> I'd like to understand better what is happening here.
>
> Could you gist your elasticsearch.yml files (the ones for elasticsearch
> standard nodes and the tribe node one)?
> Of course, replace your EC
address?
Am I misunderstanding how this is supposed to work? Is there an
alternative way to attach a remote tribe node to this cluster easily?
Thanks
Matt
pet to think it needed to start
up ElasticSearch. When this happened, we ended up running two ES daemons on
each of our nodes, and a whole ton of "reshuffling" occurred.
Is this a design feature? Bug? Thoughts?
Matt Wise
Sr. Systems Architect
Nextdoor.com
on or 1.0.1 although
> it probably won't solve your "network" issue.
>
> I suppose you don't have anything in nodes logs?
> How much HEAP did you give to your nodes?
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> Le 2
do
it locally on your computer and then publish them to Kibana. I swear in
3.0.0 there used to be a simple method for loading them from a GIST or
something, but now we can't find it. Help?
--Matt
Hi,
We have been seeing sporadic NodeDisconnectedException and
NoNodeAvailableException in our ES cluster (0.90.7).
Our cluster is made up of 2 data nodes. One data node has a single primary
shard and one data node has a single replica shard. We connect to using the
Java TransportClient config
If this is a concern, why not have your clients use the REST API so they
don't need to worry about matching their java version with the java version
of the search cluster?
Thanks,
Matt Weber
On Fri, Mar 21, 2014 at 12:07 PM, kimchy wrote:
> Not trivializing the bug at all, god
1. The histogram aggregation (and facet) work on indexed values not based
on the current time or "now". So, if the last indexed document timestamp
is 3/15/14T16:15 you will not get empty buckets between 3/15/14T16:15 and the
current time. It would be interesting to be able to set the "to" and
"f
How about using parent/child functionality?
https://gist.github.com/mattweber/96f3515fc4453a5cb0db
Thanks,
Matt Weber
On Wed, Feb 26, 2014 at 7:45 PM, Jayesh Bhoyar wrote:
> Hi Binh,
>
> Thanks for the answer.
>
> Is there any case if I index this data into same index
"uuid" : "_Flxtpi9R8C3wiCe3O04kA",
"number_of_replicas" : "9",
"number_of_shards" : "1",
"version" : {
"created" : "199"
}
}
}
}
}
where we have 10 nodes s