Good morning
Can anyone guide me on how to set up two or more Elasticsearch cluster instances
on two different Windows Server machines on the same network?
Thanks & Regards,
Jonbon Dash
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe
Hi
For performance improvement I'm trying to combine
Elasticsearch/Logstash/Kibana with Hadoop (cdh4) and configure opensource
alternative to Hunk. Unfortunately I'm familiar only with HDFS where I
store logs. In my opinion the combination of Elasticsearch and Hadoop
should use HDFS as
You can use a different cluster name.
Thanks,
Pulkit Agrawal
Sent from my iPhone
On 19-Jun-2014, at 11:58 AM, JONBON DASH jonbonwo...@gmail.com wrote:
Good morning
Can anyone guide me on how to set up two or more Elasticsearch cluster instances
on two different Windows Server machines on the same
Hi,all:
I'm using Elasticsearch to get the top 10 items by count (out of around 500), but the result isn't valid.
I would be glad if you could let me know anything you know about this, thanks.
{
  "facets": {
    "terms": {
      "terms": {
        "field": "agent",
        "size": 10
      }
    }
  }
}
Hi Pulkit,
Thanks for the early response.
I want to maintain the same cluster name between two different machines.
For more clarification,
Suppose NodeA is started with cluster name elasticsearch_RM on machine 1
as the master, and NodeB is started with the same cluster name elasticsearch_RM
on machine 2.
Have a search for unicast zen discovery in the docs and you will be good to
go.
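A minimal sketch of what that looks like in each node's elasticsearch.yml (the IP addresses are placeholders; substitute your two machines):

```yaml
# Same cluster name on both machines so the nodes join one cluster
cluster.name: elasticsearch_RM

# Disable multicast and list the nodes explicitly (unicast zen discovery)
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]
```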
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 19 June 2014 17:52, JONBON DASH jonbonwo...@gmail.com wrote:
Hi Pulkit,
Thanks for
http://www.elasticsearch.org/resources/ has videos and documentation that
will help.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 19 June 2014 17:44, srinu konda konda.srin...@gmail.com wrote:
Hi,
I
Lots of GC isn't bad; you want to see a lot of small GCs rather than
the stop-the-world sort, which can bring your cluster down.
You can try increasing the index refresh interval (index.refresh_interval).
If you don't require live access, then increasing it to 60 seconds or
more will help.
If
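As a sketch, on 1.x the refresh interval can be changed at runtime; a body like the one below would be sent with a PUT to /your_index/_settings (the index name is a placeholder):

```json
{
  "index": {
    "refresh_interval": "60s"
  }
}
```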
Hi,
I am playing around with puppet-elasticsearch 0.4.0, which works well so far
(thanks!), but I am missing a few options I haven't seen in the
documentation. As I couldn't figure it out immediately by reading the
scripts, maybe someone can help me quickly with this:
- there is an option to change the
Hey,
you could potentially use the term vectors API for this, see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-termvectors.html
Not sure if this is exactly what you are after... maybe explain your
use case a bit more
--Alex
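For illustration only (the index, type, id, and field name here are invented), a term vectors request on 1.x is a GET to /myindex/mytype/1/_termvector with a body such as:

```json
{
  "fields": ["body"],
  "offsets": true,
  "positions": true,
  "term_statistics": true,
  "field_statistics": true
}
```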
On Tue, Jun 17, 2014 at 2:19 PM,
Hey,
your request is already ambiguous: you specify a single ID but expect two
results to be returned, since you did not specify a type. However, to identify a
document uniquely you need the tuple of index/type/id, of which
the type is missing here.
So, either specify all three, or maybe
Hey,
just a wild guess: are you using more than one type in your mapping, and
not every type has this field configured as a completion field?
If that is not the cause, can you create a full-blown recreation as
mentioned in http://elasticsearch.org/help?
Thanks a lot!
--Alex
On Wed, Jun 18,
Hi Elasticsearch list :)
I'm having some trouble while running Elasticsearch on r3.large (HVM
virtualization) instances in AWS. The short story is that, as soon as I put
any significant load on them, some requests take a very long time (for
example, Indices Stats) and I see disconnected/timeout
Hi
I am replying to my query I sent last week for the benefit of all. Here is
the answer:
{
  "query": {
    "match": { "status": "ERROR" }
  },
  "filter": {
    "not": {
      "filter": {
        "has_child": {
          "type": "redelivery",
          "query": {
            "match_all": {}
          }
        }
      }
    }
  }
}
No strange entries in the log.
As a temporary solution I rebuilt the index through the Ruby gem Tire:
ModelName.rebuild_index
We had a 2.2TB/day installation of Splunk and ran it on VMware with 12
indexers and 2 search heads. Each indexer had 1000 IOPS guaranteed.
The system is slow but OK to use.
We tried Elasticsearch and were able to get the same performance with
the same number of machines. Unfortunately
I recently learned that ES's default for http.compression is false (no
compression). A quick search through the archives finds several instances of
folks turning this on. Are there any counter-indications to enabling
compression? My main goal is to enable some remote scan queries executed
via the
It seems an exact-match query is not working on an embedded index in
Elasticsearch.
Is this an issue with Elasticsearch?
Is there anyone who can help me with this?
Thanks,
Samanth
On Wednesday, June 18, 2014 11:06:38 PM UTC+5:30, K.Samanth Kumar Reddy
wrote:
Hi,
I have been using Lucene for the last one
Further to (2): would it be an improvement to have a different kind of
request for a scrolling search? That way the API could exclude items that
don't make sense (e.g. aggregations, facets, etc.).
On Wednesday, 18 June 2014 10:28:06 UTC+1, mooky wrote:
Many thanks Jörg.
Further
Hi All,
One thing I forgot to mention is that my 3rd query, which takes input from
the 2nd query, gets close to 500-1000 values from it. So the *terms* query
gets 500-1000 values. The 90th percentile for the third query comes out to
be ~350 ms.
Thanks!
Ravi
On Wednesday, 18 June
The count request does not support [filter]. Why? How can I count with the same
filter and query (except for size, fields, and from) that I'm probably going
to use to search hits after counting?
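One common workaround (a sketch; the index, field names, and values are invented) is to move the filter into a filtered query, since the body of a GET/POST to /my_index/_count does accept a query element:

```json
{
  "query": {
    "filtered": {
      "query": { "match": { "status": "ERROR" } },
      "filter": { "term": { "user": "kimchy" } }
    }
  }
}
```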
Hello,
we have a cluster of two nodes. Every index in this cluster consists of 2
shards with one replica. We want to make use of snapshot/restore to
transfer data between two clusters. When we take our snapshots on node one,
only the primary shard is included; the replica shard is missing.
@Zennet: I was thinking of doing something similar via a reverse proxy in
front of Kibana; however, I believe Kibana still uses DELETE, PUT, and POST
requests to save its dashboards, so I'm not sure what to block exactly.
@Jaguar: jetty plugin looks interesting, especially the
I've found another alternative. The new 3.6 version of HUE supports
data visualization in Kibana style:
http://gethue.com/hadoop-ui-hue-3-6-and-the-search-dashboards-are-out/
On Thursday, 19 June 2014 10:42:50 UTC+4, kay rus wrote:
Hi
For performance improvement I'm trying to
The url repository type is used in combination with the fs type: some machines can
write/read snapshots to an fs-type repository, and some machines can only
read from a url repository which points to the same location the fs
repository points at.
Is this behavior by any chance possible using S3
When I want to list the snapshots that are within a certain repository, I
issue the following command:
curl -XGET http://localhost:9200/_snapshot/repository_name/_all
As I understand it, this is the only way of doing it.
However, chances are that while I'm issuing that command, some other
Hello
I have an issue when I index a particular field, note_source (SQL
longtext).
I use the same analyzer for each field (except date_source and id_source),
but with note_source I get a monitor.jvm warning.
When I remove note_source, everything is fine. If I don't use an analyzer on
note_source,
Hello,
Can you isolate your slow queries and check if they are slow even when
running them independently? Check how many documents are matched by these
queries; if there are millions, that would explain it.
Also, you are using a terms filter with hundreds of entries. If these
entries are
Hi,
we have a somewhat complex type holding some nested docs with arrays (let's
assume a hierarchy of books, and for each book we have an array of pages
containing its metadata).
We want to search for the nested doc (search for all the books that have
the term XYZ in one of their pages) but
This is usually something that's solved using parent-child, but the
question here really is what you mean by needing to retrieve both the books
and their pages.
Can you describe the actual scenario and what you are trying to achieve?
--
Itamar Syn-Hershko
http://code972.com | @synhershko
Heya,
We are pleased to announce the release of the Elasticsearch Azure cloud plugin,
version 2.3.0.
The Azure Cloud plugin allows using the Azure API for the unicast discovery
mechanism and adds Azure storage repositories.
https://github.com/elasticsearch/elasticsearch-cloud-azure/
Release
Bump
On Wednesday, June 18, 2014 6:20:58 PM UTC-7, sai...@roblox.com wrote:
One out of 4 nodes always spikes to 100% CPU when we do some load tests
using JMeter (50 threads, 50 loops) with any query (match_all, filtered
query, etc.). That particular node has 3 shards, 2 of which are primaries.
You mean that you want to snapshot your data to S3 and then expose your
snapshot over HTTP, right?
I think, though I might be wrong, that in that case using an fs repository to read
your HTTP snapshot (even if it had been built with S3) should work.
But maybe I misunderstood your case here?
I
Hi
I am running a script in MVEL (scripted fields) that returns a computed
value by reading the payload information.
When I run the script, the first few times I get the following error.
After executing the query 3-4 times, the MVEL script starts working
perfectly fine until I change the
Hi,
I've had a few issues with MVEL scripting, so I was looking at other
languages.
1) Is it true that I can't execute more than one statement in the Python
language?
2) When using the JavaScript language plugin, scripts calling functions
like doc['field_name'].value or _source.obj1.test work well.
Well, assuming we have a book type: the book holds a lot of metadata, let's
say something like the following:
{
  "author": {
    "name": "Jose",
    "lastName": "Martin"
  },
  "sections": [{
    "chapters": [{
      "pages": [{
        "pageNum": 1,
        "numOfChars": 1000,
        "text": "let my people...",
        "numofWords": 125
      },
      {
        "pageNum": 2,
        "numOfChars": 1005,
        "text":
Java 8 with G1GC perhaps? It'll have more overhead, but perhaps it'll be
more consistent with respect to pauses.
On Wednesday, June 18, 2014 2:02:24 PM UTC-4, Eric Brandes wrote:
I'd just like to chime in with a me too. Is the answer just more
nodes? In my case this is happening every week or so.
Hi,
*Situation:*
We are using ES 1.2.1 on a machine with 32GB RAM, fast SSD and 12 cores. The
machine runs Ubuntu 14.0.x LTS.
The ES process has 12GB of RAM allocated.
We have an index in which we inserted 105 million small documents so the ES
data folder is around 50GB in size
(we see this by
See this if interested in Elasticsearch performance on different hardware
configurations.
http://www.slideshare.net/bigstep-infrastructure/bigstep-partners-elasticsearch-scaling-benchmarks
Span queries are another option, but the main drawback is that they use
non-analyzed term queries.
--
Ivan
On Thu, Jun 19, 2014 at 2:11 AM, Alexander Reelsen a...@spinscale.de wrote:
Hey,
you potentially could use the termvectors API for this, see
It is very hard to give you concrete advice without knowing more about your
domain and use cases, but here are 2 points that came to mind:
1. You can make use of the highlighting features to show the content that
matched. Highlighters can return whole blocks of text, and by using
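A sketch of that first point (the field name is invented); a search body requesting highlighted fragments might look like:

```json
{
  "query": { "match": { "content": "XYZ" } },
  "highlight": {
    "fields": {
      "content": {
        "fragment_size": 150,
        "number_of_fragments": 3
      }
    }
  }
}
```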
Hi,
How would I define a custom mapping if I wanted to index/store only the
keys in the map and discard the values?
Prasanna
On Monday, March 17, 2014 2:28:39 AM UTC-7, Tomislav Poljak wrote:
Hi,
I think first you need to define what the requirement (or
expectation) of the hashMap
I'd be interested in knowing what problems you had with ELK, if you don't
mind sharing.
I understand the ease of Splunk, but ELK isn't that difficult if you have
some in-house Linux skills.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web:
Thanks for your response, Mark.
I think I've finally fine-tuned my scenario...
For starters, it helped me A LOT to set Xms on Logstash to the same value
as LS_HEAP_SIZE. It really reduced the GC.
Second, I followed some tips from
I am using the phrase suggester to implement did-you-mean functionality. My
source field is named did_you_mean_source and is a combination of first
and last name with a space in the middle.
When I search for, say, "allex blak", I do get fairly decent suggestions,
including the "alex black" I am
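For context, a minimal phrase-suggest request against that kind of setup might look like this, sent to /my_index/_suggest (the suggester name is arbitrary; the index name is invented):

```json
{
  "did_you_mean": {
    "text": "allex blak",
    "phrase": {
      "field": "did_you_mean_source",
      "size": 3
    }
  }
}
```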
Hey,
We all know that the terms aggregation groups counts for a specific field and
gives them to us. But suppose I want to display an extra term that doesn't
exist in the field counts, and display it as a 0 count. Do we have any
provision for that?
-Raghav
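One thing possibly worth checking (a sketch; the field name is invented): setting min_doc_count to 0 on a terms aggregation makes it return zero-count buckets for terms that exist in the index but match no documents in the current query. A term that never appears in the index at all still won't show up.

```json
{
  "aggs": {
    "agents": {
      "terms": {
        "field": "agent",
        "min_doc_count": 0
      }
    }
  }
}
```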
How does the shingle filter work with match_phrase at query time?
After analyzing the phrase "t1 t2 t3", the shingle filter produces five tokens:
t1
t2
t3
t1 t2
t2 t3
Will match_phrase still give "t1 t2 t3" a match? How does it work? Thank you.
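For reference, a shingle setup that produces exactly those five tokens (unigrams plus 2-word shingles) can be sketched in the index settings like this (the filter and analyzer names are invented):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "my_shingles": {
          "type": "shingle",
          "min_shingle_size": 2,
          "max_shingle_size": 2,
          "output_unigrams": true
        }
      },
      "analyzer": {
        "shingle_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "my_shingles"]
        }
      }
    }
  }
}
```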