Hello,
I would like to ask for advice about our index, which is built on the
parent/child principle.
A single search currently takes a very long time, about 3 minutes.
Below is the index structure.
Our database (shown below the index schema) contains information about
purchases and reviews in
Hi,
I just upgraded my four node ES cluster to 1.4.2. After the restart I can't
retrieve data using Kibana3 unless I manually create an alias for each
index. I still have one ES 1.3.x node left in the cluster where Kibana works fine.
If I run _aliases?pretty on an ES 1.4 node, it returns only {} and
I'm looking to build a search across URLs that orders the results by urls
that end in the search criteria.
For example, across the 3 URLs below: z/a/b/c; a/z/b/c; a/b/c/z
searching for z would result in the order
a/b/c/z; a/z/b/c; z/a/b/c
and searching for a in
z/a/b/c; a/b/c/z; a/z/b/c
I'm
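One way to sketch this kind of suffix-first ordering (assuming the url field is indexed as not_analyzed; the field name url is an assumption) is to require a match anywhere but boost documents where the term is a suffix:

```json
{
  "query": {
    "bool": {
      "must":   { "wildcard": { "url": "*z*" } },
      "should": { "wildcard": { "url": "*z" } }
    }
  }
}
```

Note that leading-wildcard queries are expensive on large indices; this is only a sketch of the ranking idea, not a production approach.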
There are several concepts:
- filter operation (bool, range/geo/script)
- filter composition (composable or not, composable means bitsets are used)
- filter caching (ES stores filter results or not, if not cached, ES must
walk doc-by-doc to apply filter)
#1 says you should take care what kind of
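For illustration, a minimal filtered query of that era, with hypothetical field names: in ES 1.x, bitset-backed filters such as term/range could be cached explicitly with the _cache flag, while script filters had to be applied doc-by-doc:

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "range": {
          "age": { "gte": 21 },
          "_cache": true
        }
      }
    }
  }
}
```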
Hi
I am new to elasticsearch-hadoop. I am currently trying to index data stored
in HDFS into an Elasticsearch index, and I am able to do that successfully. While
doing that, my only concern is that when we index the data by code,
each document is given an id field, which is randomly
Currently I have an item in my elasticsearch index with the title:
*testing123*
When I search for it, I can only get it returned if I search for *testing123*
exactly. However, I want to be able to search for *testing* and have it
returned too.
How can I have it so the match only has to start with that term
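A minimal sketch of one way to get this (assuming *testing123* is indexed as a single token in a field named title) is a prefix query, which matches any term starting with the given text:

```json
{
  "query": {
    "prefix": { "title": "testing" }
  }
}
```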
Hi
I am trying to index data stored in HDFS into an Elasticsearch index, and I am
able to do it successfully. My concern is that currently the id provided for
each document in the index is a randomly generated alphanumeric value. Is there any
way by which we can assign the id field a value which will be
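For reference, elasticsearch-hadoop can take the document id from a field of the data itself via the es.mapping.id setting (the index/type and the field name docId here are hypothetical):

```
es.resource = myindex/mytype
es.mapping.id = docId
```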
You should use this then:
http://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-word-delimiter-tokenfilter.html
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Lucene.NET committer and PMC member
On Thu, Mar
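As a sketch (the analyzer and filter names below are made up), the linked word_delimiter filter splits a token like testing123 into testing and 123, optionally keeping the original token as well:

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "my_word_delimiter": {
          "type": "word_delimiter",
          "preserve_original": true
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": ["my_word_delimiter", "lowercase"]
        }
      }
    }
  }
}
```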
This boils down to Lucene fundamentals, in particular what search tokens
are created and then searched. I've explained this in depth here:
https://www.youtube.com/watch?v=QI566fe9Svs
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer
Hi All
Any updates
On Friday, 6 March 2015 19:27:07 UTC+5:30, Shanmugam wrote:
Hi All
I have 3 separate servers for Elasticsearch, and I set up the cluster on
them.
Every index has 5 shards and 2 replicas. I have an index with 4
lakh (400,000) documents in it. While performing a search
Thank you for the reply. I thought it was more about making my search query
not require an exact match, rather than about splitting up the words I am
searching against?
On 19 March 2015 at 12:30, Itamar Syn-Hershko ita...@code972.com wrote:
You should use this then:
Hello good people,
We have an ELK setup for our nginx / postfix etc. logs. It's great.
Now we'd like to be able to alert based on various criteria. Icinga is
great; we just installed it to play with.
Is there a plugin that we can use to query Elasticsearch from within
Icinga, to create
Hello experts,
I have a problem with a red error title in Kibana3: *Oops!*
SearchPhaseExecutionException[Failed to execute phase [query], all shards
failed]
I also checked on Elasticsearch and see the notification message: [Mentallo] All
shards failed for phase: [query]
Here is my configuration:
Jörg,
Looks like I have found a solution: making a singleton wrapper around the
MorphAnalyzer object has solved the issue (to be tested on larger scale
still).
Here is the code:
[code]
public class MorphAnalyzerSingleton {
private static MorphAnalyzer INSTANCE = null;
private final
I am using the ELK stack for analyzing logs. As per the default
configuration, a new index named logstash-YYYY.MM.DD is created by ES.
So if I have configured logstash to read like this:
/var/log/rsyslog/**/2014-12-0[1-7]/auditd.log
then it is reading old logs, and the index name created will be
Hi,
You can use a script
filter (http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-script-filter.html).
Checking doc['manager'].value == doc['teamMember'].value should work.
Alternatively, you can precompute it and add managerIsAlsoTeamMember field to
documents.
Masaru
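Put together, a minimal request using that script filter might look like this (ES 1.x filtered-query syntax):

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "script": {
          "script": "doc['manager'].value == doc['teamMember'].value"
        }
      }
    }
  }
}
```

Note that script filters are evaluated document by document, so they are considerably slower than precomputing a managerIsAlsoTeamMember field at index time.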
Not sure I understand the difference between composable vs. cacheable. Can
filters be cached without using bitsets? What format are the results
stored in, if not as bitsets?
In the example below, would the string range filter on field y be evaluated
on every document in the index, or just on
I have the same problem after upgrading to ES 1.4.2. How did you solve it?
Br
Mathias
On Thursday, 11 December 2014 at 14:09:18 UTC+1, Erick Blanchard wrote:
Hello All,
First, thanks a lot for the great work on the ELK stack!
This post is both about elasticsearch and kibana3. There is
Dear Elasticsearch,
We use Elasticsearch's _ttl feature, which is not clear to us. The following
is written in the latest documentation:
Expired documents will be automatically deleted regularly. You can
dynamically set the indices.ttl.interval to fit your needs. The default
value is 60s.
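For context, _ttl first has to be enabled in the type mapping before expiry applies; a minimal sketch (the type name is hypothetical):

```json
{
  "mappings": {
    "mytype": {
      "_ttl": { "enabled": true, "default": "1d" }
    }
  }
}
```

The indices.ttl.interval setting mentioned in the documentation only controls how often the background sweep that deletes expired documents runs.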
come on guys, someone must know this :)
On Wednesday, March 18, 2015 at 6:23:50 PM UTC+2, Lior Goldemberg wrote:
hi all
How would you handle the following problem.
I save for each back office user the list of pages he visited.
Now I want to fetch all the users who visited page a and
Hi Lior,
Have you tried an aggregation
http://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html
query?
On Wednesday, March 18, 2015 at 6:23:50 PM UTC+2, Lior Goldemberg wrote:
hi all
How would you handle the following problem.
I save for each back office
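As a sketch of the aggregations approach (the field names page and userId are assumptions, and it presumes one document per page visit): filter to the pages of interest, then require a user to appear in at least two matching documents:

```json
{
  "query": { "terms": { "page": ["a", "b"] } },
  "aggs": {
    "users": {
      "terms": { "field": "userId", "min_doc_count": 2 }
    }
  }
}
```

Caveat: a user who visited page a twice would also match; an exact "visited both pages" check needs something stronger, such as a per-user cardinality of page.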
Hi,
I wish to use Elasticsearch and Kibana in a commercial product.
I wish to understand the licensing concerns if I replace the Kibana logo with my
own logo. Also, some text may be modified to suit my needs.
Is it fine if I do this by giving proper attribution in source code and not on
UI?
Can anybody comment on this? Does this look ok?
Thanks,
Drew
On Wednesday, March 11, 2015 at 3:02:56 PM UTC-5, Drew Town wrote:
Hello all,
Just want to make sure my field data settings are going to work in a way
that will protect my cluster from a bad query.
elasticsearch.yml
Hello Mark,
You can see I already raised it on github and received a response. This
issue will be temporary and is related to Elastic changing their domain and
not updating the CORS header in the definitive guide.
https://github.com/elastic/elasticsearch-definitive-guide/issues/330
Thanks for
KB reads data from Elasticsearch, so yeah an index is the same thing for
both.
Basically you either need a timestamp in your docs to use KB3, or move to
KB4.
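For example, a minimal mapping with a date field that Kibana 3 can use as its time field (the type name logs is an assumption):

```json
{
  "mappings": {
    "logs": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}
```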
On 18 March 2015 at 19:10, Karthik Sharma karthik.sha...@gmail.com wrote:
I have inserted some data into elastic search using REST
Elasticsearch uses the Apache 2 license -
http://www.apache.org/licenses/LICENSE-2.0
The relevant part of that license says:
*4. Redistribution*. You may reproduce and distribute copies of the Work or
Derivative Works thereof in any medium, with or without modifications, and
in Source or
Interesting. Thanks for the heads up! That worked. Looks like I'll have to
escape the @ as well.
Really appreciate that.
On Thu, Mar 19, 2015 at 11:59 AM, Nikolas Everett nik9...@gmail.com wrote:
Try escaping the hash tag. It has a special meaning in the Lucene
Dialect of Regular Expression
Thanks for the reply. Going by the mentioned clauses, it should be possible to
white-label the software, because I would only need to specify the modifications
in the source code and not in the UI.
I hope this makes sense. It would be great if someone can share their
experience of doing the
You need the server in there. I don't know why it doesn't add that
automatically, but that's why you are getting the CORS error; it is a
misleading response.
It'd be worth raising this as an issue on Github against the Elasticsearch
project :)
On 19 March 2015 at 09:43, Zelfapp n...@usamm.com
What are the various logger.* settings? I couldn't find a list of available
settings in the documentation.
I am looking to turn on log level to DEBUG for a particular class and route the
debug messages to a different appender.
I am aware of the logger.cluster.service : DEBUG setting ...
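In ES 1.x, per-class log levels live in config/logging.yml; a sketch (the appender name debug_file is made up) of raising one logger to DEBUG and routing it to its own file:

```yaml
logger:
  cluster.service: DEBUG, debug_file

additivity:
  cluster.service: false

appender:
  debug_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_debug.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
```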
I understand that we can sort a text field (e.g. the title of a book)
alphabetically only if it is mapped as not_analyzed. Is it possible to
ignore stop words in the text during sorting?
Does ES provide any such functionality, or should our app index the field
after removing all stop words and store
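One sketch of doing this inside ES (all names here are made up): a sort sub-field whose custom analyzer strips a leading stop word with a pattern_replace char filter and then emits the whole title as a single keyword token, so it stays sortable:

```json
{
  "settings": {
    "analysis": {
      "char_filter": {
        "strip_leading_stop": {
          "type": "pattern_replace",
          "pattern": "(?i)^(the|a|an)\\s+",
          "replacement": ""
        }
      },
      "analyzer": {
        "sort_analyzer": {
          "type": "custom",
          "char_filter": ["strip_leading_stop"],
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "books": {
      "properties": {
        "title": {
          "type": "string",
          "fields": {
            "sort": { "type": "string", "analyzer": "sort_analyzer" }
          }
        }
      }
    }
  }
}
```

Sorting would then target title.sort; this only handles stop words at the start of the title, which is usually what alphabetical shelving needs.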
for Costin...? I enjoyed the talk at Spark Summit East on
spark-elasticsearch integration in Spark 1.3 (the sparkContext.esRDD and
rdd.saveToEs APIs). Will these APIs eventually be available for a pyspark
context/rdd? Cheers, JH
Hi,
We've noticed that one task seems to fail with an OOME when it's reading
from ES on 5 partitions and trying to repartition out 200 partitions. Our
job uses 50 cores and allocates 2GB per executor.
ES Setup:
-1.4.1
-4 Nodes (r3.2xlarge)
-5 Shards
-We are using the default configurations
Hi all,
I am seeing an issue running consecutive tests that start up 2 Elasticsearch
clusters in the setup for the test class and close the clients and
nodes down when the test class finishes. Individually, by themselves, the
tests run clean. It seems that during the closure of the client
You mean a specific type?
On Thursday, March 19, 2015, Yarden Bar ayash.jor...@gmail.com wrote:
Hi Lior,
Have you tried an aggregation
http://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html
query?
On Wednesday, March 18, 2015 at 6:23:50 PM UTC+2, Lior
Hi, has someone else noticed there is a difference between the bulk API and the
multi percolate API? In bulk you need to use _type and _index, while in
the multi percolate API you use type and index. Is there a reason for this,
or is it just an inconsistency?
{"percolate" : {"index" : "twitter", "type" :
Thanks Uri, like the documentation mentions:
Elasticsearch clusters with the Shield security plugin installed do not
honor the ignore_unavailable option.
On Thursday, 5 March 2015 at 13:18:37 UTC+1, uboness wrote:
Hey Jettro,
Indeed with shield the behaviour of the cluster when it comes
Hello,
I'm going through the Definitive Guide to learn Elasticsearch. I've
got Marvel up and running on my local Windows machine. I'm able to run commands
against Elasticsearch, etc. In the Definitive Guide there are View in
Sense links.
E.g.
Try escaping the hash tag. It has a special meaning in the Lucene Dialect
of Regular Expression
https://lucene.apache.org/core/4_1_0/core/org/apache/lucene/util/automaton/RegExp.html?is-external=true
.
On Thu, Mar 19, 2015 at 11:44 AM, Mahesh Kommareddi
mahesh.kommare...@gmail.com wrote:
Hi,
Scores are based on the docs in the particular shard that is queried, so
it's relative.
On 19 March 2015 at 05:19, Shanmugam shanmuth...@gmail.com wrote:
Hi All
Any updates
On Friday, 6 March 2015 19:27:07 UTC+5:30, Shanmugam wrote:
Hi All
I have 3 unique servers for elasticsearch,
Thanks, I tested the settings and the speed got better. I also found out ES
1.4.0 has a bug in snapshot/restore; after upgrading to 1.4.4 my
speed tripled.
On Monday, March 16, 2015 at 8:44:32 AM UTC-5, Alejandro Calderon wrote:
Hello,
I am using elasticsearch-cloud-aws to create
This is one reason why we don't recommend such deployments; no matter what
you pick here, you risk a split or unavailable cluster.
Thus, there's no good answer, and you need to decide which site you are
willing to live without.
On 18 March 2015 at 21:03, Gobin Sougrakpam gobinsougrak...@gmail.com
Hi,
I'm trying to do a regexp filter to match on .*#.* to find all the items (in
a field) that contain a hash tag. I looked around and thought maybe the
analyzer was stripping the character, so I took a cue from a previous post (
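Following the escaping advice from the same thread, a sketch of such a filter (the field name is an assumption); note the double backslash, since JSON itself also needs the backslash escaped before it reaches Lucene's regexp parser:

```json
{
  "query": {
    "filtered": {
      "filter": {
        "regexp": { "myfield": ".*\\#.*" }
      }
    }
  }
}
```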
There are a few Nagios scripts people have written, e.g.
https://github.com/anchor/nagios-plugin-elasticsearch
On 19 March 2015 at 05:48, Yarden Bar ayash.jor...@gmail.com wrote:
Hello good people,
We have an ELK setup for our nginx / postfix etc logs. it's great.
Now we'd like to be able to
I've created a YouTube video demoing the issue I'm seeing. It's kind of
annoying that I can't figure this out. I'm sure the answer is stupid simple, but
I'm not finding it, though I have searched, which is ironic.
https://youtu.be/4Ksb0t_9Ym0
On Thursday, March 19, 2015 at 9:28:21 AM UTC-7, Zelfapp wrote:
It's not really worth flushing things from cache based on time;
Elasticsearch uses LRU (least recently used) eviction to clear things, and it's
best to just let that handle it.
Don't forget that allowing that much fielddata in your heap means there is
less for actual good things to happen, as the JVM cannot
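For reference, the usual guardrails in this version range are a bounded fielddata cache plus the fielddata circuit breaker, set in elasticsearch.yml (the percentages here are illustrative, not recommendations):

```yaml
indices.fielddata.cache.size: 40%
indices.breaker.fielddata.limit: 60%
```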