Once you have your mapping set up, create an application that constructs
the analyzer you need. Then feed it your real words and let it
generate the stemmed versions.
I don't think that ES can be told to do this; but it provides the classes
you need to do it yourself.
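For illustration, a minimal settings fragment (the index, analyzer, and filter names are hypothetical, and the stemmer language is just an example) that defines such an analyzer; in ES 1.x you can then run words through it with the _analyze API (e.g. GET /my_index/_analyze?analyzer=my_stemming_analyzer&text=running) and collect the stemmed tokens:

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stemmer": { "type": "stemmer", "language": "english" }
      },
      "analyzer": {
        "my_stemming_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_stemmer" ]
        }
      }
    }
  }
}
```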
For my own sy
Especially when feeding log data via logstash, I have never used store:true
and have found no need to specify it at all. The logstash JSON will be
stored as the _source and retrieved by the query so there is no need to use
store at all.
Anyway, that's my experience.
Brian
--
You received thi
David,
On each machine on which either ES or a client is deployed, we have the
following directory which contains all of the jars that are packaged with
ES:
/opt/db/current/elasticsearch-1.3.4/lib
Then the java command's -classpath includes
/opt/db/current/elasticsearch-1.3.4/lib/* (along with
Hi Brian,
I think I'm missing something.
At the end you still have the full elasticsearch jars, right?
What is the difference with having that as a maven dependency?
Is it a way to avoid pulling in all the Elasticsearch dependencies that are
shaded into the elasticsearch jar, such as Jackson, Guice, ...?
Dav
I have an analysis chain like this for some Spanish text:
standard asciifolding lowercase es_stop_filter es_stem_filter es_synonyms
With synonyms at the end, after all the other filters, I have to define my
synonyms in their stemmed, ASCII-folded, lowercase forms. So instead of
defining a synony
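To make that concrete, a sketch of what the synonym filter ends up looking like (the exact stemmed forms here are hypothetical and depend on what the Spanish stemmer actually emits; verify them with the _analyze API first):

```json
{
  "filter": {
    "es_synonyms": {
      "type": "synonym",
      "synonyms": [
        "automovil, coche, carr"
      ]
    }
  }
}
```

Each entry must be the lowercase, ASCII-folded, stemmed token, not the surface form of the word.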
I *think* that the length of the field impacts the score (at least that was
the case last time I used Lucene, which is what's underneath
elasticsearch). The second entry is longer.
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscri
Hiya - I'm seeing the following on Elasticsearch 1.0, with this query:
{
  "query_string": {
    "fields": [ "name", "content", "comment" ],
    "query": "\\um",
    "boost": 50
Filip,
Or, just put all of the Elasticsearch jars on your local client system,
then add their containing directory (with "/*" appended to it) to your
-classpath, and your client can use the TransportClient. Java will pull in
exactly what it needs and nothing it doesn't. And your client code sta
I highly recommend that you use the HTTP output. Works great, is immune to
the ES version, and there are no performance issues that I've seen. It Just
Works.
For example, here's my sample logstash configuration's output settings:
output {
# Uncomment for testing only:
# stdout { codec => r
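A complete output section of that shape might look like this (the host name is hypothetical; elasticsearch_http was the HTTP output plugin in Logstash 1.x):

```
output {
  # Uncomment for testing only:
  # stdout { codec => rubydebug }
  elasticsearch_http {
    host => "es-host.example.com"
    port => 9200
  }
}
```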
My setup:
Logstash Node parsing NXLog eventlogs from windows servers and sending them
to a 3 node ES Cluster.
Here's my logstash conf file:
input {
  tcp {
    type => "eventlog"
    host => "logstash01"
    port => 3515
    codec => 'json'
  }
}
output {
  elasti
Hi, I'm trying to build an aggregation and show the number in Kibana. I
have the agg working in Sense, but when I try to move it to my .js
dashboard, I can't get it to publish the aggregation (in this case an
expected value of "2"). I guess overall my question is how do I integrate an aggregation
Oh, ok, I guess I'll just have to send requests via REST. Thank you!
On Friday, November 14, 2014 at 18:06:50 UTC-2, David Pilato wrote:
>
> No. :)
>
> David
>
> On 14 Nov 2014, at 21:00, Filip wrote:
>
> Hey there,
>
> I am planning to use only transportClient to connect to a remote E
No. :)
David
> On 14 Nov 2014, at 21:00, Filip wrote:
>
> Hey there,
>
> I am planning to use only transportClient to connect to a remote ES cluster.
> Hence I am guessing I don't need the whole elasticsearch jar because I am not
> going to have a local node on my application.
> Is there
Hey there,
I am planning to use only transportClient to connect to a remote ES
cluster. Hence I am guessing I don't need the whole elasticsearch jar
because I am not going to have a local node on my application.
Is there a mvn artifact to import only java client and not the whole
elasticsearc
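For the record, the answer given elsewhere in the thread is "No": there is no client-only artifact for 1.x, and the TransportClient ships inside the full elasticsearch artifact. The Maven dependency is simply (the version here is an assumption, chosen to match the 1.3.4 install mentioned in the thread):

```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <version>1.3.4</version>
</dependency>
```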
Hi,
I have a query parser plugin. This plugin also implements a transport action.
Both the query parser and the transport action should share the same data
source object (with cache).
Unfortunately, when I run a query and the parser is called, I see one
instance of the data source. When the transport action is called I se
On Fri, Nov 14, 2014 at 3:41 AM, wrote:
> I'm also seeing this problem when a 1.4.0 node tries joining a 1.3.4 cluster
> with cloud-aws plugin version 2.4.0. Is there a workaround to use during
> upgrade, since I assume it's not a problem when they're all upgraded to
> 1.4.0.
I ended up starting
Great suggestion, I didn't know about that function. And it's something I
will incorporate into my larger config that has multiple filters.
But even with the addition of tag_on_failure, it still returns nothing
except the _grokparsefailure. There are no other grok filters in this
config except thi
Two of my three nodes had catastrophic disk loss. Cluster was set up with
1 replica, 5 shards per index. Obviously the remaining node does not have
all shards for each index.
The system still responds to queries though it obviously has holes in the
data.
If I do nothing, my cluster statu
Hey Adrien,
Say I have two fields in my index with values:
genre = {Action, Adventure}
actor = {Tom Cruise, Jason Statham}
I'm looking for a way to get the distinct combinations of values with doc
counts, so I use a sub-aggregation:
"aggs":{
"genreAgg": {
"terms": {
"
Hey All,
I have a question about the internal implementation of geo hashes and
distance filters. Here is my current understanding, I'm struggling to
figure out how to apply these to our queries internally in ES.
Bool queries are very efficient. Internally they
perform bitmap union, i
Hi,
I posted this elsewhere but I thought some people on this list could be
interested in this too.
This is rather complex (at least to me). I have a load of items; let's say
they are meals. It could look something like:
'index' : 'meals',
'body' : {
'name' : {'type' : 'string'},
'di
Hey guys,
We're testing ES 1.4.0, coming from 1.3.2. I'm noticing some strange
behavior in our clients in our integration tests. They perform the
following logic:
Create the first index in the cluster (single node) with a custom
__default__ dynamic mapping
Add 3 documents, each of a new type
Hi,
We are using a terms aggregation on a high-cardinality field and limiting
results to 5000 (using the “size” parameter). We also have a cardinality
sub-aggregation on this terms aggregation to get the number of unique
values in a separate field for each term returned. Such a combination of aggregati
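For reference, such a request might look like the sketch below (both field names are hypothetical placeholders):

```json
{
  "size": 0,
  "aggs": {
    "byTerm": {
      "terms": { "field": "my_high_cardinality_field", "size": 5000 },
      "aggs": {
        "uniqueValues": {
          "cardinality": { "field": "my_other_field" }
        }
      }
    }
  }
}
```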
Try to add a tag_on_failure on each grok filter to identify which grok
filter is failing.
I had the same issue and by explicitly setting a tag on each grok I could
determine the one causing issues.
On Friday, November 14, 2014 at 16:23:41 UTC+1, Billy F wrote:
>
> I have a message that is driving m
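In Logstash 1.4 syntax, the suggestion looks roughly like this (the pattern and tag are hypothetical; the point is that each grok filter gets its own distinctive failure tag):

```
filter {
  grok {
    match => [ "message", "%{SYSLOGLINE}" ]
    tag_on_failure => [ "_grokparsefailure_syslog" ]
  }
}
```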
I have a message that is driving me nuts and I don't know how to fix it.
For some reason I'm getting a _grokparsefailure for every one of the entries I
have for one of our blade enclosures. Everything that I can see shows that
it's actually working the right way and doing everything I ask it to.
Hi,
I'm trying to calculate a distance metric using a native scoring script on
my Elasticsearch matches, between my query string terms and the terms of
my indexed field.
Therefore I have to retrieve *all* of the indexed field's terms.
But it seems to me that I can just get them *sepa
Hello
I have to create a mapping to a type that will have a text field with
values:
- that are huge (more than 32KB),
- that are very badly structured, and will have snippets like "elas tic
search" and I need to find it when the user searches for "elasticsearch" or
"elastic search"
I can't modi
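One possible approach (a sketch only, not something from this thread, and the names are invented): a shingle filter with an empty token_separator joins adjacent tokens at index time, so "elas tic search" also produces the tokens "elastic", "ticsearch", and "elasticsearch", which a query for "elasticsearch" can then match:

```json
{
  "analysis": {
    "filter": {
      "joined_shingles": {
        "type": "shingle",
        "min_shingle_size": 2,
        "max_shingle_size": 3,
        "token_separator": ""
      }
    },
    "analyzer": {
      "join_fragments": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [ "lowercase", "joined_shingles" ]
      }
    }
  }
}
```

This inflates the token count, so it is worth testing against the real data before committing to it.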
Settings beginning with "es.*" are new to me in config files. They are only
for the command line.
So I think es.nodes and es.port are ignored.
Beside that, the settings for the TransportClient do not set the host(s) to
connect to, they set the node of the TransportClient itself (which is quite
menani
1.4.0 trying to join a 1.3.5 cluster with cloud-aws also fails.
On Friday, November 14, 2014 12:41:08 PM UTC+1, madsm...@colourbox.com
wrote:
>
> I'm also seeing this problem when a 1.4.0 node tries joining a 1.3.4
> cluster with cloud-aws plugin version 2.4.0. Is there a workaround to use
> dur
Hi All,
I built an index with the help of the following GitHub issue, to get
dynamic fields as facets.
https://github.com/elasticsearch/elasticsearch/issues/5789#issuecomment-59472925
I have the following schema to index two values PROPERTY1 and PROPERTY2.
"props": {
"t
I'm also seeing this problem when a 1.4.0 node tries joining a 1.3.4 cluster
with cloud-aws plugin version 2.4.0. Is there a workaround to use during
upgrade, since I assume it's not a problem when they're all upgraded to
1.4.0.
On Friday, November 14, 2014 11:33:45 AM UTC+1, Jörg Prante wrote:
Hi,
I was trying to configure the mapping for my logstash objects in the
following way:
"my_type":{
"dynamic_templates": [{
"string_fields": {
"mapping": {
"index": "no",
"type": "string"
},
"match_mapping_type": "string",
I think this is only related to unicast. But, nevertheless, it *should*
work... not sure if this is a bug or a feature
Jörg
On Fri, Nov 14, 2014 at 12:58 AM, Eric Jain wrote:
> On Thu, Nov 13, 2014 at 10:05 AM, joergpra...@gmail.com
> wrote:
> > Do not mix 1.3 with 1.4 nodes, it does not w
Hello,
I am trying to create a transport client for our remote 2-node ES cluster
(version 1.3) in Java. When I use "hardcoded" cluster details, all works
just fine with the following code:
Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name",
"elasticsearch").build();
final
We've configured logstash in combination with elasticsearch and Kibana to
centralize our server logs.
In Kibana I want to set up a table which groups all error messages so we can
create a top 10 of the most occurring errors.
We tried to set up a terms table grouped on a specific field (which contai
Yes, I am now seeing the snapshots complete in about 2 minutes after
switching to a new, empty bucket.
I'm not sure why the initial request to snapshot to the empty repo was
hanging because the snapshot did in fact complete in about 2 minutes,
according to the S3 timestamp.
Time to automate dele