search cascading in ES

2014-04-01 Thread Chetana
We are developing an application that requires cascaded (flow-based) 
search, where the result of one search becomes the input criteria for 
the next search.
 
Is there a way to do this in ES? If not, can you suggest a third-party 
library that can provide cascading functionality on top of ES search?
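Elasticsearch has no server-side primitive for this; the usual approach is to chain the searches in the client, feeding values extracted from the first result set into the next query. A minimal sketch (field and function names are illustrative, and the actual HTTP call is left to whatever client you use):

```python
def build_followup_query(first_hits, source_field, target_field, size=10):
    """Compose the second search of a cascade: collect source_field values
    from the first search's hits and use them as a terms filter."""
    values = [hit["_source"][source_field]
              for hit in first_hits if source_field in hit.get("_source", {})]
    return {
        "size": size,
        "query": {
            "filtered": {  # ES 1.x filtered-query form
                "query": {"match_all": {}},
                "filter": {"terms": {target_field: values}},
            }
        },
    }

# hits as returned by a hypothetical first search:
hits = [{"_source": {"dept_id": 7}}, {"_source": {"dept_id": 9}}]
second_query = build_followup_query(hits, "dept_id", "department")
```

Each stage is a full round trip, so cascades are best kept shallow.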
 
 
Thanks
 
 

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/ce66247d-f8d3-4d0e-8dc0-ecc848542240%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: using java get document api within a script field

2014-04-01 Thread mat taylor
Yes. I wanted to join documents via a script to avoid multiple round trips 
to the client; it could be used, for example, to autoload parent docs from 
child docs. I don't want to start a new node and client on each script 
call, for obvious reasons. I suppose a native script plugin could do the 
trick, but I was wondering whether there is any other way to invoke a 
shared node client from a script that could be reused across multiple 
requests.
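The shared-client idea itself is ordinary lazy initialization: build the client once and hand the same instance to every call. A language-neutral sketch in Python (the factory stands in for whatever actually constructs the node client):

```python
_client = None  # module-level slot for the one shared client


def get_shared_client(factory):
    """Return the shared client, constructing it only on first use."""
    global _client
    if _client is None:
        _client = factory()
    return _client


calls = []
c1 = get_shared_client(lambda: calls.append(1) or object())
c2 = get_shared_client(lambda: calls.append(1) or object())  # reuses c1
```

Whether a script context can safely hold such a singleton is exactly the open question in this thread; a native script plugin, as suggested above, is the more supported place for it in ES 1.x.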

On Tuesday, April 1, 2014 9:46:17 PM UTC-7, David Pilato wrote:
>
> O_o starting a Node from a script? First time I see that... Looks like a 
> hack to me! :-)
>
> That said what is the use case here? Are you trying to perform some JOIN 
> using scripts?
>
>
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 2 Apr 2014, at 05:46, mat taylor wrote:
>
> Figured this out; however, the code below seems to start a new node on 
> each request and is not cleaning up after itself.
> Is there a better way to do this, for example by exposing a shared node 
> object to the MVEL script plugin?
>
> PUT /test/user/1 
> { "name":"jane"
> , "partner":2
> }
>
> PUT /test/user/2
> { "name":"john"
> , "partner":1
> }
>
> GET /test/user/_search
> { "_source":true
> , "script_fields": 
> { "partner": 
>   { 
> "script": "org.elasticsearch.node.NodeBuilder.nodeBuilder().node().client().prepareGet('test', 'user', _source.partner).execute().actionGet().getSource()"
> }
>   }
> }
>
>
>
>
>
> On Tuesday, April 1, 2014 4:44:40 PM UTC-7, mat taylor wrote:
>>
>> Is it possible to query the database from a script field by instantiating 
>> a java client and issuing a get request? 
>> Are there any examples of this? 
>>
>> Thanks
>> Mat
>>



Re: using java get document api within a script field

2014-04-01 Thread David Pilato
O_o starting a Node from a script? First time I see that... Looks like a hack 
to me! :-)

That said what is the use case here? Are you trying to perform some JOIN using 
scripts?



--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


On 2 Apr 2014, at 05:46, mat taylor wrote:

Figured this out; however, the code below seems to start a new node on each 
request and is not cleaning up after itself.
Is there a better way to do this, for example by exposing a shared node 
object to the MVEL script plugin?

PUT /test/user/1 
{ "name":"jane"
, "partner":2
}

PUT /test/user/2
{ "name":"john"
, "partner":1
}

GET /test/user/_search
{ "_source":true
, "script_fields": 
{ "partner": 
  { 
"script": "org.elasticsearch.node.NodeBuilder.nodeBuilder().node().client().prepareGet('test', 'user', _source.partner).execute().actionGet().getSource()"
}
  }
}





> On Tuesday, April 1, 2014 4:44:40 PM UTC-7, mat taylor wrote:
> Is it possible to query the database from a script field by instantiating a 
> java client and issuing a get request? 
> Are there any examples of this?
> Thanks
> Mat
> 




Nodes randomly disconnected

2014-04-01 Thread Hans Krijger


We have a cluster running 1.0.0 in Azure using unicast discovery. Recently 
we started seeing exceptions like these in the logs:

[2014-04-01 21:40:22,720][DEBUG][action.admin.indices.status] [ES2PROD-M01] 
[usg-2014-03-04][4], node[3cCeFKJrTMWaIhE3R6tlZA], [P], s[STARTED]: Failed 
to execute [
org.elasticsearch.action.admin.indices.status.IndicesStatusRequest@2c06e67
]
org.elasticsearch.transport.NodeDisconnectedException: 
[ES2PROD-D07][inet[/10.0.64.68:9300]][indices/status/s] disconnected

In this case D07 is still up and running. After several dozen of these 
exceptions, D07 is disconnected:

[2014-04-01 21:40:24,096][INFO ][cluster.service ] [ES2PROD-M01] removed 
{[ES2PROD-D07][3cCeFKJrTMWaIhE3R6tlZA][es2prod-d07][inet[/10.0.64.68:9300]]{master=false},},
reason: 
zen-disco-node_failed([ES2PROD-D07][3cCeFKJrTMWaIhE3R6tlZA][es2prod-d07][inet[/10.0.64.68:9300]]{master=false}),
reason transport disconnected (with verified connect)

Four seconds later the same node is added back:

[2014-04-01 21:40:28,712][INFO ][cluster.service ] [ES2PROD-M01] added 
{[ES2PROD-D07][3cCeFKJrTMWaIhE3R6tlZA][es2prod-d07][inet[/10.0.64.68:9300]]{master=false},},
reason: zen-disco-receive(join from 
node[[ES2PROD-D07][3cCeFKJrTMWaIhE3R6tlZA][es2prod-d07][inet[/10.0.64.68:9300]]{master=false}])

In the meantime the cluster goes yellow and starts recovery. This does not 
seem like a timeout issue, since it happens so quickly and the disconnected 
node is added right back.

Any ideas how we can get more info on the root cause and prevent this from 
happening?
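If the drops were ping-related, the zen fault-detection settings can be relaxed in elasticsearch.yml; the values below are illustrative, not a recommendation:

```yaml
# Zen fault-detection knobs (1.0 defaults: ping_interval 1s,
# ping_timeout 30s, ping_retries 3)
discovery.zen.fd.ping_interval: 5s
discovery.zen.fd.ping_timeout: 60s
discovery.zen.fd.ping_retries: 5
```

That said, "transport disconnected (with verified connect)" points at the TCP connection itself dropping rather than a ping timeout, so network-level evidence on the Azure side (and DEBUG logging on org.elasticsearch.transport) may be more telling.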



Elasticsearch-Memcache

2014-04-01 Thread Bharvi Dixit
Hi,
Could someone tell me how to use the memcached transport plugin with 
Elasticsearch? I am not able to understand the documentation for this 
plugin, found 
here: 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-memcached.html

I could not find anything relevant about this plugin after searching the 
internet at length.
I have downloaded the plugin into the Elasticsearch plugin directory, but I 
don't know what step to take after this. 
Does anything need to be specified in the elasticsearch.yml file?
How do I make connections to Elasticsearch to use this plugin?
Can this plugin be used with Kibana?

Thanks in advance
Bharvi Dixit



Re: how to modify term frequency formula?

2014-04-01 Thread Ivan Brusic
It has been a while since I used a custom similarity, but what you have
looks right. Can you try a full class name instead? Use
org.elasticsearch.index.similarity.tfCappedSimilarityProvider.
According to the error, it is looking for
org.elasticsearch.index.similarity.tfcappedsimilarity.tfCappedSimilaritySimilarityProvider.

-- 
Ivan
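The doubled "Similarity" is a clue to how the lookup works: Elasticsearch appears to lower-case the type value into a package segment and append "SimilarityProvider" to the class name, so a type whose name already ends in "Similarity" yields ...SimilaritySimilarityProvider. A toy reconstruction of that derivation, matching the error text in this thread (the real resolution logic differs in detail):

```python
def guess_provider_class(type_value,
                         prefix="org.elasticsearch.index.similarity"):
    """Reproduce the provider class name the error message suggests ES
    derives from a similarity 'type' setting."""
    package = "%s.%s" % (prefix, type_value.lower())
    return "%s.%sSimilarityProvider" % (package, type_value)


name = guess_provider_class("tfCappedSimilarity")
```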


On Tue, Apr 1, 2014 at 7:00 AM, geantbrun  wrote:

> Sure.
>
> {
>  "settings" : {
>   "index" : {
>"similarity" : {
> "my_similarity" : {
>  "type" : "tfCappedSimilarity"
> }
>}
>   }
>  },
>  "mappings" : {
>   "post" : {
>"properties" : {
> "id" : { "type" : "long", "store" : "yes", "precision_step" : "0" },
> "name" : { "type" : "string", "store" : "yes", "index" : "analyzed"},
> "contents" : { "type" : "string", "store" : "no", "index" :
> "analyzed", "similarity" : "my_similarity"}
>}
>   }
>  }
> }
>
> If I substitute tfCappedSimilarity for tfCapped in the mapping, the error
> is the same except that the provider is referred to as
> tfCappedSimilarityProvider and not as
> tfCappedSimilaritySimilarityProvider.
> Cheers,
> Patrick
>
>
> On Monday, March 31, 2014 17:13:24 UTC-4, Ivan Brusic wrote:
>>
>> Can you also post your mapping where you defined the similarity?
>>
>> --
>> Ivan
>>
>>
>> On Mon, Mar 31, 2014 at 10:36 AM, geantbrun  wrote:
>>
>>> I realize that I probably have to define the similarity property of my
>>> field as "my_similarity" (and not as "tfCappedSimilarity") and define in
>>> the settings my_similarity as being of type tfCappedSimilarity.
>>> When I do that, I get the following error at the index/mapping creation:
>>>
>>> {"error":"IndexCreationException[[exbd] failed to create index];
>>> nested: NoClassSettingsException[Failed to load class setting [type]
>>> with value [tfCappedSimilarity]]; nested: ClassNotFoundException[org.
>>> elasticsearch.index.similarity.tfcappedsimilarity.
>>> tfCappedSimilaritySimilarityProvider]; ","status":500}]
>>>
>>> Note that the provider is referred in the error as
>>> tfCappedSimilaritySimilarityProvider (similarity repeated 2 times). Is
>>> it normal?
>>> Patrick
>>>
>>> On Monday, March 31, 2014 13:06:00 UTC-4, geantbrun wrote:
>>>
 Hi Ivan,
 I followed your instructions but it does not seem to work; I must be
 wrong somewhere. I created the jar file from the following two Java files.
 Could you tell me if they are OK?

 tfCappedSimilarity.java
 ***
 package org.elasticsearch.index.similarity;

 import org.apache.lucene.search.similarities.DefaultSimilarity;
 import org.elasticsearch.common.logging.ESLogger;
 import org.elasticsearch.common.logging.Loggers;

 public class tfCappedSimilarity extends DefaultSimilarity {

 private ESLogger logger;

 public tfCappedSimilarity() {
 logger = Loggers.getLogger(getClass());
 }

 /**
  * Capped tf value
  */
 @Override
 public float tf(float freq) {
 return (float)Math.sqrt(Math.min(9, freq));
 }
 }

 tfCappedSimilarityProvider.java
 *
 package org.elasticsearch.index.similarity;

 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.inject.assistedinject.Assisted;
 import org.elasticsearch.common.settings.Settings;

 public class tfCappedSimilarityProvider extends
 AbstractSimilarityProvider {

 private tfCappedSimilarity similarity;

 @Inject
 public tfCappedSimilarityProvider(@Assisted String name,
 @Assisted Settings settings) {
 super(name);
 this.similarity = new tfCappedSimilarity();
 }

 /**
  * {@inheritDoc}
  */
 @Override
 public tfCappedSimilarity get() {
 return similarity;
 }
 }


 In my mapping, I define the similarity property of my field as
 tfCappedSimilarity, is it ok?

 What makes me say that it does not work: I insert a doc with a word
 repeated 16 times in my field. When I search for that word, the result
 shows a tf of 4 (the square root of 16) and not 3 as I was expecting. Is
 there a way to know whether the similarity was loaded (maybe in a log
 file?).

 Cheers,
 Patrick

 On Wednesday, March 26, 2014 17:16:36 UTC-4, Ivan Brusic wrote:
>
> I updated my gist to illustrate the SimilarityProvider that goes along
> with it. Similarities are easier to add to Elasticsearch than most 
> plugins.
> You just need to compile the two files into a jar and then add that jar
> into Elasticsearch's classpath ($ES_HOME/lib most likely). The code will
> scan for every SimilarityProvider defined and 

Match query with "the" in the text parsed using "OR" even though "AND" is specified in the operator field

2014-04-01 Thread cyrilforce
Hi,

I have a match query that searches a field with the AND operator. When the 
text contains "the", e.g. "the house", it returns all matches of "house" 
instead of "the house", even though I specified operator: "AND". However, 
it behaves well if the text doesn't contain "the", e.g. "crowded house". 
To me it seems like it ignores the "the" in the query text.

The query with "the house":

{
  "explain": true,
  "query": {
    "match_phrase": {
      "PERFORMER": {
        "query": "the house",
        "operator": "and"
      }
    }
  }
}


Result:
1)
 "PERFORMER": "Crowded House",

 "_explanation": {
"value": 6.4987273,
"description": "weight(PERFORMER:house in 1171039) 
[PerFieldSimilarity], result of:",
"details": [

2)
  "PERFORMER": "House Of Downtown",

"_explanation": {
"value": 6.4987273,
"description": "weight(PERFORMER:house in 381) 
[PerFieldSimilarity], result of:",




The query with "crowded house":

{
  "explain": true,
  "query": {
    "match_phrase": {
      "PERFORMER": {
        "query": "crowded house",
        "operator": "and"
      }
    }
  }
}


Result:

1) 
 "PERFORMER": "Crowded House",
 "_explanation": {
"value": 14.409363,
"description": "weight(PERFORMER:\"crowded house\" in 
3183) [PerFieldSimilarity], result of:",

2)
"PERFORMER": "Crowded House",
"_explanation": {
"value": 13.537326,
"description": "weight(PERFORMER:\"crowded house\" in 
23752) [PerFieldSimilarity], result of:",
"details": [



Thanks.
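Two things stand out here. First, match_phrase ignores the operator parameter (operator applies to the match query), so the "and" has no effect. Second, the symptom is exactly what stopword removal produces: if the field's analyzer drops English stopwords, "the" never reaches the query, which degenerates to just "house". The _analyze API (GET /{index}/_analyze?field=PERFORMER&text=the+house) will confirm it; the effect itself is easy to model:

```python
# Lucene's default English stopword set (StandardAnalyzer.STOP_WORDS_SET).
ENGLISH_STOPWORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}


def analyze(text):
    """Rough model of a stopword-filtering analyzer: lowercase, split,
    drop stopwords. The query only ever sees the surviving terms."""
    return [t for t in text.lower().split() if t not in ENGLISH_STOPWORDS]
```

If that is the cause, either analyze the field without stopword removal or query a differently analyzed sub-field.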



Re: using java get document api within a script field

2014-04-01 Thread mat taylor
Figured this out; however, the code below seems to start a new node on each 
request and is not cleaning up after itself.
Is there a better way to do this, for example by exposing a shared node 
object to the MVEL script plugin?

PUT /test/user/1 
{ "name":"jane"
, "partner":2
}

PUT /test/user/2
{ "name":"john"
, "partner":1
}

GET /test/user/_search
{ "_source":true
, "script_fields": 
{ "partner": 
  { 
"script": "org.elasticsearch.node.NodeBuilder.nodeBuilder().node().client().prepareGet('test', 'user', _source.partner).execute().actionGet().getSource()"
}
  }
}





On Tuesday, April 1, 2014 4:44:40 PM UTC-7, mat taylor wrote:
>
> Is it possible to query the database from a script field by instantiating 
> a java client and issuing a get request? 
> Are there any examples of this? 
>
> Thanks
> Mat
>



How about the update performance?

2014-04-01 Thread Meng Li
Hi, guys:

We are heavy users of ES for log search. One log job is now indexing more 
than 10,000 lines per minute (call it 'A').
We now have another log, 'B', which we want to join into A by _id, and we 
have decided to use the update operation.
B is relatively small, about 2 million entries a day and not real-time 
(but this will go up in the future).
Does anybody here know the performance characteristics of updates? Will 
the cluster's performance degrade under frequent updates?
Both production experience and theoretical explanations are appreciated.
Thank you.
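For context: an update in Elasticsearch is not in-place; it re-fetches and re-indexes the whole document, so frequent updates add indexing load and segment churn. Since B is not real-time, batching through the bulk API usually softens this. A sketch of assembling an NDJSON bulk body of partial-document updates (index and type names are made up):

```python
import json


def bulk_update_body(index, doc_type, updates):
    """Build a _bulk request body of partial-doc updates.
    `updates` is an iterable of (doc_id, partial_doc) pairs."""
    lines = []
    for doc_id, partial in updates:
        # action/metadata line, then the partial document to merge in
        lines.append(json.dumps({"update": {"_index": index,
                                            "_type": doc_type,
                                            "_id": doc_id}}))
        lines.append(json.dumps({"doc": partial}))
    return "\n".join(lines) + "\n"  # bulk bodies must end with a newline


body = bulk_update_body("logs-a", "line", [("1", {"b_field": "x"})])
```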



Re: aggregation with conditions

2014-04-01 Thread lvalbuena


On Tuesday, April 1, 2014 6:17:30 PM UTC+8, lval...@egg.ph wrote:
>
> Hi,
>
> I have 2 cases.
>
> Given the structure
> {
>email:value,
>points:value
> }
>
> Case 1:
> I have 1000  rows, where multiple rows can have the same value for the 
> email field.
> {"email":"s...@email.com","points":5}
> {"email":"s...@email.com","points":2}
> ...
>
> How do I tell elasticsearch to search for all emails that have only 
> appeared *once* in the data set.
>
> Case 2:
> Also using aggregation, how can I tell elasticsearch to get all possible 
> occurrence counts of the emails in the data set?
> ex.
> emails = 5, occurrences >= 5 // There are 5 emails that appeared 5 times 
> or greater in the dataset
> emails = 6, occurrences = 4
> emails = 23, occurrences = 3
> emails = 2, occurrences = 2
> emails = 12, occurrences = 1
>
> Or is it even possible?
>
> Thanks
>
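Case 1 has no direct 1.x aggregation (a terms aggregation offers min_doc_count but no maximum), so both cases usually end up as a terms aggregation over email with a large size, post-processed client-side. What that post-processing computes is just a frequency histogram, sketched here over a toy data set:

```python
from collections import Counter


def appeared_once(emails):
    """Case 1: emails that occur exactly once in the data set."""
    return [e for e, n in Counter(emails).items() if n == 1]


def frequency_histogram(emails):
    """Case 2: map each occurrence count to how many distinct emails
    had that count."""
    per_email = Counter(emails)           # email -> occurrences
    return Counter(per_email.values())    # occurrences -> no. of emails


data = ["a@x.com", "a@x.com", "b@x.com", "c@x.com"]
```

On large indices, pulling every bucket to the client is the limiting factor, so the terms-agg size has to cover the full cardinality.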



using java get document api within a script field

2014-04-01 Thread mat taylor
Is it possible to query the database from a script field by instantiating a 
java client and issuing a get request? 
Are there any examples of this? 

Thanks
Mat



Rolling restart of a cluster?

2014-04-01 Thread Mike Deeks
What is the proper way of performing a rolling restart of a cluster? I 
currently have my stop script check that cluster health is green before 
stopping the node. Unfortunately, this doesn't appear to be working.

My setup:
ES 1.0.0
3 node cluster w/ 1 replica.

When I perform the rolling restart I see the cluster still reporting a 
green state when a node is down. In theory that should be a yellow state 
since some shards will be unallocated. My script output during a rolling 
restart:
1396388310 21:38:30 dev_cluster green 3 3 1202 601 2 0 0
1396388310 21:38:30 dev_cluster green 3 3 1202 601 2 0 0
1396388310 21:38:30 dev_cluster green 3 3 1202 601 2 0 0

1396388312 21:38:32 dev_cluster green 3 3 1202 601 2 0 0
1396388312 21:38:32 dev_cluster green 3 3 1202 601 2 0 0
1396388312 21:38:32 dev_cluster green 3 3 1202 601 2 0 0

curl: (52) Empty reply from server
1396388313 21:38:33 dev_cluster green 3 3 1202 601 2 0 0
1396388313 21:38:33 dev_cluster green 3 3 1202 601 2 0 0

curl: (52) Empty reply from server
1396388314 21:38:34 dev_cluster green 3 3 1202 601 2 0 0
1396388314 21:38:34 dev_cluster green 3 3 1202 601 2 0 0
... continues as green for many more seconds...

Since it is reporting as green, the second node thinks it can stop and ends 
up putting the cluster into a broken red state:
curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388339 21:38:59 dev_cluster green 2 2 1202 601 2 0 0

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388341 21:39:01 dev_cluster yellow 2 2 664 601 2 8 530

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388342 21:39:02 dev_cluster yellow 2 2 664 601 2 8 530

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388343 21:39:03 dev_cluster yellow 2 2 664 601 2 8 530

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388345 21:39:05 dev_cluster yellow 1 1 664 601 2 8 530

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388346 21:39:06 dev_cluster yellow 1 1 664 601 2 8 530

curl: (52) Empty reply from server
curl: (52) Empty reply from server
1396388347 21:39:07 dev_cluster red 1 1 156 156 0 0 1046

My stop script issues a call 
to http://localhost:9200/_cluster/nodes/_local/_shutdown to kill the node. 
Is it possible the other nodes wait for the down node to time out before 
moving into the yellow state? I would assume the shutdown API call informs 
the other nodes that it is going down.

Appreciate any help on how to do this properly.
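The output above looks like _cat/health, which can read stale during the window described. Two things usually make the procedure more robust: disabling shard allocation around the restart (cluster.routing.allocation.enable, or disable_allocation on older releases), and gating each stop on GET /_cluster/health?wait_for_status=green&timeout=60s while also requiring the expected node count. The per-node decision is small enough to sketch (the HTTP call is left abstract):

```python
def safe_to_stop(health, expected_nodes):
    """Allow this node to stop only if the cluster is green AND every
    expected node is present; this guards against the stale-green window
    right after another node goes down."""
    return (health.get("status") == "green"
            and health.get("number_of_nodes") == expected_nodes)
```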



Re: Weird Behavior of Elastic Search

2014-04-01 Thread rajeev reddy
I am indexing it the same way you suggested.

I am able to index into Elasticsearch properly, but search is not working.
With the same code on another machine I am able to both index and search.




On Tue, Apr 1, 2014 at 4:29 PM, Binh Ly  wrote:

> Attachments require a specific mapping. It is likely that your mapping is
> not correct. Here is a simple example of how to index a PDF with the proper
> attachment type:
>
> class T1
> {
> }
>
> private static void IndexPdf()
> {
> var settings = new ConnectionSettings(new Uri("http://localhost:9200"), "_all");
> var client = new ElasticClient(settings);
>
> client.CreateIndex("foo", c => c
> .AddMapping(m => m
> .Properties(props => props
> .Attachment(s => s
> .Name("file")
> .FileField(fs => fs.Store())
> )
> )
> )
> );
>
> var doc = new { file = new {
> content =
> Convert.ToBase64String(File.ReadAllBytes(@"C:\ESData\pdf\fn6742.pdf")),
> _indexed_chars = -1
> }};
>
> client.Index(doc, i => i.Index("foo").Type("t1"));
> }
>
> Then after that you can run a search like this:
>
> POST localhost:9200/foo/t1/_search
> {
>   "fields": "file",
>   "query": {
> "match": {
>   "file": "blah bah"
> }
>   }
> }
>



-- 
R R R



Re: transport.tcp.port doesn't work for localhost?

2014-04-01 Thread InquiringMind
This usually means that there is no local server bound to that port. If 
you're performing an integration test, it could be that you aren't giving 
ES time to completely initialize, bind to the port, and be ready to 
accept connections. Or that you aren't configuring your local node to be a 
server, so it isn't binding to the port to which you wish to connect.

I don't know of a deterministic way to wait for ES to be listening on its 
ports, so one of my production servers (which uses a TransportClient and 
contains our business logic) waits 4 seconds for ES to start up, and then 
waits for at least Yellow status before it starts. That has never failed 
to start up properly, probably because the TransportClient retries the 
initial connection if it's not yet available during the wait for yellow 
status. Perhaps?

Your colleague's case is successful when connecting to another host because 
more than likely ES is already up and running on that other host.

Brian
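The fixed 4-second sleep can be replaced with a probe-and-retry loop, which is as close to deterministic as this gets. A generic sketch, where the probe would be your client's health or ping call:

```python
import time


def wait_until(probe, attempts=10, delay=0.5, backoff=2.0):
    """Retry probe() until it returns True, sleeping with exponential
    backoff between attempts. Returns False if it never succeeds."""
    for i in range(attempts):
        if probe():
            return True
        if i < attempts - 1:
            time.sleep(delay)
            delay *= backoff
    return False


# Simulate a server that becomes ready on the third probe:
state = {"calls": 0}


def fake_probe():
    state["calls"] += 1
    return state["calls"] >= 3


ready = wait_until(fake_probe, attempts=5, delay=0.001)
```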



Re: Weird Behavior of Elastic Search

2014-04-01 Thread Binh Ly
Attachments require a specific mapping. It is likely that your mapping is 
not correct. Here is a simple example of how to index a PDF with the proper 
attachment type:

class T1
{
}

private static void IndexPdf()
{
    var settings = new ConnectionSettings(new Uri("http://localhost:9200"), "_all");
var client = new ElasticClient(settings);

client.CreateIndex("foo", c => c
.AddMapping(m => m
.Properties(props => props
.Attachment(s => s
.Name("file")
.FileField(fs => fs.Store())
)
)
)
);

var doc = new { file = new {
content = 
Convert.ToBase64String(File.ReadAllBytes(@"C:\ESData\pdf\fn6742.pdf")),
_indexed_chars = -1 
}};

client.Index(doc, i => i.Index("foo").Type("t1"));
}

Then after that you can run a search like this:

POST localhost:9200/foo/t1/_search
{
  "fields": "file",
  "query": {
"match": {
  "file": "blah bah"
}
  }
}



merging output of one facet in another facet input

2014-04-01 Thread vikrant mahajan
I have a facet that returns the proper number of terms, but a lower count 
than expected, because many documents are missing that field.


Those documents have another field that contains the term, but also other 
unrelated junk terms.


I want to run the first facet to get the correct faceted terms, then run a 
second facet on the other field but keep only the terms that were also 
returned by the first facet, and then aggregate the results.

This way I can ignore the junk and get proper counts.

Can it be done?





Python and Elasticsearch Autocompletion issue.

2014-04-01 Thread redrubia
Hi all,

So, to begin: I am trying to add around 7.2k documents. No problem there. 
The issue is that afterwards I am not able to get any suggestions returned. 
This is how the information is added:


 def addVariantToElasticSearch(self, docId, companyId, companyName, parent,
                               companyIndustry, variants, count, conn):
     body = {"company": {
         "company_name": companyName,
         "parent": parent,
         "suggest": {
             "input": variants,
             "output": companyName,
             "weight": count,
             "payload": {
                 "industry_id": companyIndustry,
                 "no_of_jobseekers": count,
                 "company_id": companyId
             }
         }
     }}
     res = conn.index(body=body, index="companies", doc_type="company",
                      id=docId)


The mapping and settings is defined as:

def setting():
 return { "settings" : {
"index": {
   "number_of_replicas" : 0,
   "number_of_shards": 1
},
"analysis" : {
"analyzer" : {
"my_edge_ngram_analyzer" : {
"tokenizer" : "my_edge_ngram_tokenizer",
"filter":["standard", "lowercase"]
}
},
"tokenizer" : {
"my_edge_ngram_tokenizer" : {
"type" : "edgeNGram",
"min_gram" : "1",
"max_gram" : "5",
"token_chars": [ "letter", "digit" ]
}
}
}
},
"mappings": {
"company" : {
  "properties" : {
"name" : { "type" : "string" },
"industy": {"type": "integer"},
"count" : {"type": "long" },
"parent": {"type": "string"},
"suggest" : {
  "type" : "completion",
  "index_analyzer": "my_edge_ngram_analyzer",
  "search_analyzer": "my_edge_ngram_analyzer",
  "payloads": True
}
  }
}
  }
}

Index creation:

def createMapping(es):
  settings = setting()
  es.indices.create(index="companies", body=settings)


I call `createMapping`, which uses `setting()`, then add each variant 
(wrapped in a try/except, which raises no errors). I can see all my 
documents in the browser, as well as the status, settings, and mappings.

But when I use a curl request as below, I get no results (see the curl and 
output beneath):

curl -X POST localhost:9200/companies/_suggest -d '
{
  "company-suggest" : {
    "text" : "1800",
    "completion" : {
      "field" : "suggest"
    }
  }
}'

{
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "suggest" : [ {
    "text" : "ruby",
    "offset" : 0,
    "length" : 4,
    "options" : [ ]
  } ]
}

I am currently using ES 1.1.0. I have tried both Python API versions 0.4 and 
1.1.0 with no luck (I tried 0.4 because 1.1.0 wasn't working, although I know 
that isn't ideal due to compatibility issues with the ES version). I have 
also been able to add the same settings and mappings via curl, and to add a 
company which I could then retrieve with the curl call above.

I'm not sure exactly where the issue lies. I have looked at the data folder 
in ES, as well as the browser, to ensure the index has been created. I have 
also made sure that only a single ES instance is running.
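One thing that may be worth checking (an observation, not a confirmed diagnosis): the index body above wraps all fields in an extra "company" object, while the mapping defines "suggest" at the root of the "company" type, so the completion field may never actually be populated. A minimal sketch of an unwrapped body (field names follow the mapping; the sample values are made up):

```python
def build_company_doc(company_id, company_name, parent, industry, variants, count):
    # Fields sit at the document root, matching the "company" mapping
    # (no extra {"company": ...} wrapper around them).
    return {
        "name": company_name,
        "parent": parent,
        "suggest": {
            "input": variants,
            "output": company_name,
            "weight": count,
            "payload": {
                "industry_id": industry,
                "no_of_jobseekers": count,
                "company_id": company_id,
            },
        },
    }

doc = build_company_doc(1, "1800 Flowers", None, 7, ["1800", "flowers"], 42)
# hypothetical client call: conn.index(body=doc, index="companies", doc_type="company", id=1)
```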


Any help greatly appreciated,


Ruby



-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/067ac9ac-deca-48bc-aa4c-9260aa4c7575%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.




Re: Regexp query flags documentation bug?

2014-04-01 Thread Binh Ly
Thanks, that is an error in the docs, and I have already submitted a 
correction. Your second syntax is the correct one.



is it possible to write to ES from a json file in HDFS where JSON file has inconsistent or different keys in different records

2014-04-01 Thread siva mannem
Hi,

My JSON file looks like this:
+++
{"k1":"v1", "k2":"v2", "k3":"v3", "k4":"v4", "k5":"v5"}
{"k12":"v11", "k23":"v22", "k34":"v33", "k45":"v44", "k56":"v55"}
{"k1":"v111", "k2":"v222", "k3":"v333", "k4":"v444", "k5":"v555"}
{"k123":"v12", "k234":"v23", "k345":"v34", "k456":"v45", "k567":"v56"}
+


My Pig script looks like this:
+++
REGISTER /usr/lib/gphd/pig/elasticsearch-hadoop-1.3.0.M2-yarn.jar;

DEFINE ESTOR org.elasticsearch.hadoop.pig.EsStorage('es.nodes=gateway1 , es.resource=ca/sf');

A = LOAD '/elastic_search/in_dir/' using 
JsonLoader('k1:chararray,k2:chararray,k3:chararray,k4:chararray,k5:chararray');

B = FOREACH A GENERATE k1, k3, k5;
+

I am expecting output like this:
+++
(v1,v3,v5)
(v111,v333,v555)
++

but I am getting output like this:

(v1,v3,v5)
(v11,v33,v55)
(v111,v333,v555)
++

Is there any way to ignore the second record, since it contains none of the 
keys k1, k3 and k5?
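If pre-filtering the input outside Pig is acceptable, a small Python sketch (assuming newline-delimited JSON, one record per line) that keeps only the records containing every required key:

```python
import json

REQUIRED = {"k1", "k3", "k5"}

def filter_records(lines, required=REQUIRED):
    # Yield only the JSON objects that contain every required key.
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if required.issubset(record):
            yield record

sample = [
    '{"k1":"v1", "k2":"v2", "k3":"v3", "k4":"v4", "k5":"v5"}',
    '{"k12":"v11", "k23":"v22", "k34":"v33", "k45":"v44", "k56":"v55"}',
]
kept = list(filter_records(sample))  # only the first record survives
```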



transport.tcp.port doesn't work for localhost?

2014-04-01 Thread Dario Rossi
I have the following problem: for an integration test, I set up an embedded 
node and then create a TransportClient to connect to it.

The setup of the embedded node is (among other things):


port = 11547; // User ports range 1024 - 49151
tcpport = 9300;
settings.put("http.port", port);
settings.put("transport.tcp.port", tcpport);

Settings esSettings = settings.build();

node = NodeBuilder.nodeBuilder().local(true).settings(esSettings).node(); // I tried setting local to false too
node.start();


and the TransportClient is as simple as:

TransportClient client = new TransportClient();
client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

client.prepareIndex("test", "type").setSource("field", "value").execute().actionGet();




(I tried both localhost and 127.0.0.1). 

Anyway I get a connection refused when running the above code:


Caused by: java.net.ConnectException: Connection refused: localhost/127.0.
0.1:9300
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
 at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(
NioClientBoss.java:150)
 at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.
processSelectedKeys(NioClientBoss.java:105)
 at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(
NioClientBoss.java:79)
 at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.
run(AbstractNioSelector.java:318)
 at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(
NioClientBoss.java:42)
 at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(
ThreadRenamingRunnable.java:108)
 at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(
DeadLockProofWorker.java:42)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
java:615)
 at java.lang.Thread.run(Thread.java:724)
[2014-04-01 17:48:10,836][TRACE][org.elasticsearch.transport.netty] [Cap 'N 
Hawk] connect exception caught on transport layer [[id: 0x9526b405]]
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9300
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
 at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150)
 at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
 at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
 at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
 at 
org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
 at 
org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
 at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)



My colleague was successful when he tried to connect to another host, but he 
fails with localhost.



After setting transport.tcp.port on an embedded node, I get connection refused on that port

2014-04-01 Thread Dario Rossi
I have a test that uses the embedded node. It usually works well with a 
data-less node and the HTTP API. But when I run a test that uses a 
TransportClient to connect to the other node (on the same JVM), I get:

org.elasticsearch.transport.ConnectTransportException: 
[][inet[/127.0.0.1:9300]] 
connect_timeout[30s]
 at 
org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:683)
 at 
org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:643)
 at 
org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:610)
 at 
org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:133)
 at 
org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:355)
 at 
org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:301)
 at 
org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:169)
 at 
org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:237)
 at 
com.netaporter.cms.estests.test.TransportClientTest.testTransportClient(TransportClientTest.java:18)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
 at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:202)
 at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:65)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9300



which is strange because I set up the port with (on the embedded node):

port = 11547; // User ports range 1024 - 49151
tcpport = 9300;
settings.put("http.port", port);
settings.put("transport.tcp.port", tcpport);

the embedded node *is also local*.

The transport client is as simple as:

TransportClient client = new TransportClient();
client.addTransportAddress(new InetSocketTransportAddress("127.0.0.1", getTcpPort()));

client.prepareIndex("test", "type").setSource("field", "value").execute().actionGet();


I tried both 127.0.0.1 and localhost.

Any ideas? A colleague has been able to get the TransportClient working with 
a remote machine, but not with localhost.
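Two generic things that may help narrow this down (observations, not a confirmed diagnosis): a node built with local(true) uses the in-JVM local transport and may never bind a TCP port at all, which would produce exactly this connection refused; and it is worth confirming independently that something is listening on the transport port before creating the TransportClient. A small, language-agnostic port probe, sketched here in Python (host and port are whatever your node is configured with):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    # Return True if a plain TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. probe before wiring up the client:
# is_port_open("127.0.0.1", 9300)
```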



Re: Using script fields in Kibana

2014-04-01 Thread Gal Zolkover
Ok, thank you. I'm up for the challenge 😀



Re: path facets

2014-04-01 Thread Volker
Do you remember your talk about ecommerce and ES in Hamburg? :-)

Viewing and drilling down in a catalogue is also a valid use case where you 
need to drill down on hierarchical faceted data.

Is there an easy way to do that in ES, or do I have to aggregate the facets 
myself?

kind regards

Am Montag, 31. März 2014 10:11:31 UTC+2 schrieb Alexander Reelsen:
>
> Hey,
>
> havent spent a lot of thought, if or how you can solve this directly, but 
> wouldnt it be much easier if you used different fields like 'continent', 
> 'country', 'state', 'city' and would simply aggregate on those and use it 
> for drill down? Or does it have to be generic and this is just an example?
>
>
> --Alex
>
>
> On Sun, Mar 30, 2014 at 6:05 PM, Volker >wrote:
>
>> dear reader
>>
>> I have a question about facets.
>>
>> I have documents with a path as part of a document.
>>
>> e.g.:
>>
>> /america/
>> /america/usa
>> /america/usa/california
>> /america/usa/new-york
>> /america/mexico
>> /europe/spain
>> /europe/germany
>>
>> I would like to drill down on an area and get a facet count for the next 
>> level.
>>
>> If I filter in the facet e.g. for america I would like to get counts for 
>> - /america/usa
>> - /america/mexico
>>
>> I started with using a prefix filter in the facet to filter e.g. for 
>> america but then I get 
>>
>> - /america/usa
>> - /america/usa/california
>> - /america/usa/new-york
>> - /america/mexico
>>
>> but I would like to get the facet counts for only the next level, so I 
>> can drill down on the area.
>>
>> I tried a search for this topic, but I did not find a solution.
>>
>> Hope that somebody can help.
>>
>> Kind regards
>>
>>
>>
>>
>>
>>
>
>
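Failing direct server-side support, one workaround is to keep the prefix-filtered facet and collapse its terms to the next path level on the client; a hedged Python sketch (assuming each document contributes one path term to the facet, so per-level counts can simply be summed):

```python
def next_level_counts(terms, prefix):
    # Collapse facet terms like "/america/usa/california" to their
    # next path segment under `prefix`, summing the counts.
    counts = {}
    prefix = prefix.rstrip("/") + "/"
    for term, count in terms:
        if not term.startswith(prefix):
            continue
        child = prefix + term[len(prefix):].split("/", 1)[0]
        counts[child] = counts.get(child, 0) + count
    return counts

terms = [("/america/usa", 3), ("/america/usa/california", 1),
         ("/america/usa/new-york", 1), ("/america/mexico", 2)]
# next_level_counts(terms, "/america") -> {"/america/usa": 5, "/america/mexico": 2}
```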



Re: Sense on github abandoned?

2014-04-01 Thread Shay Banon
Sense started as a weekend project, and Boaz did not place a license on it. As 
you mentioned, this license effectively applies: 
http://choosealicense.com/no-license/. We consulted our lawyers, who specialize 
in open source, and changing the license to an open source one is complex, 
expensive, and requires a lot of resources. The reason is that it is not only 
about getting the committers' agreement, but also about reaching all possible 
users and having them agree to it (or at least showing a big investment in 
trying to do so, plus a rather large time window to allow people to object).

When Boaz created Sense, he was not employed by Elasticsearch. Obviously any 
project started by our employees has a clear license (as you can notice with 
the many projects we created).

Regarding Marvel:

- You are only required to pay for it when it is used in production.
- You don't have to be a support customer of Elasticsearch the company; you can 
buy a license for Marvel easily on the web. We made it super cheap since we 
think it's something that a lot of people will find benefit from.

On Apr 1, 2014, at 17:00, Ivan Brusic  wrote:

> I personally do not require an open source license for Marvel/Sense, but I 
> would like to see an explicit clarification about the use of Marvel in this 
> scenario. Marvel does require a license to use and that would apply to any of 
> its subsystems. Then again, Sense does not have a license, which means its 
> use is also somewhat restricted.
> 
> Sense is an excellent tool and users dependency on the tool is quite apparent 
> from this thread. :)
> 
> I haven't packaged a Chrome plugin in about 3 years. Not only has my memory 
> faded, but I would assume the mechanism has changed in our fast changing 
> world of development. It would be a fun exercise to attempt to do it again.
> 
> Cheers,
> Ivan
> 
> 
> On Tue, Apr 1, 2014 at 5:48 AM, Tim S  wrote:
> @kimchy the whole reason for me asking these questions is that sometimes a 
> customer is using elasticsearch but they don't (yet) have a support contract, 
> but don't consider themselves "in development" either, and thus wouldn't 
> allow me to use Marvel. Yes, there are other tools for poking around, but 
> sense is invaluable for constructing complicated queries etc quickly. In this 
> situation they wouldn't let me install a chrome plugin either, but sense 
> works nicely as an elasticsearch plugin too.
> 
> So, if sense (the abandoned version on github) had some kind of permissive 
> licence, I could turn up on customer site and use sense to poke around.
> Ideally, it would have a licence like AL2 which would allow me to modify it 
> if necessary.
> 
> I realise that you don't want updates pushed back to the version of sense on 
> github because those changes are helping you to make money from Marvel, I 
> understand that. But if the abandoned version of sense did have an 
> appropriate licence, it would allow us to use the current version - it's 
> still useful even if it's not kept up to date. I might even be tempted to try 
> and keep it up to date in my spare time. But clearly I can't do this unless 
> it has a licence that allows me to do it.
> 
> Glad to see I'm not the only person thinking along these lines.
> 
> 
> 
> On Tuesday, April 1, 2014 11:15:07 AM UTC+1, Jörg Prante wrote:
> +1 for Sense standalone packaging
> +1 for Sense in Chrome Web Store
> 
> Sense is used here all the time, it's essential.
> 
> I have also forked the code in case Sense goes away, hoping for a FOSS 
> license.
> 
> Not that I'm fluid in writing browser plugins, but if I find time, I am not 
> afraid of the learning curve.
> 
> Jörg
> 
> 
> 


Re: Relevancy sorting of result returned

2014-04-01 Thread Binh Ly
If you specify explain=true in your query, it will tell you in detail how 
the score is computed:

{
  "explain": true,
  "query": {}
}

Some useful info:

http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html

http://jontai.me/blog/2012/10/lucene-scoring-and-elasticsearch-_all-field/



Re: Backups of Nodes?

2014-04-01 Thread Binh Ly
These might be useful:

http://www.elasticsearch.org/blog/introducing-snapshot-restore/

http://www.elasticsearch.org/guide/en/elasticsearch/reference/master/modules-snapshots.html



Re: Need help in copy data to ElasticSearch

2014-04-01 Thread Binh Ly
You'd probably need to either restructure the file into _bulk format, or 
just read the files in code and build/send bulk requests dynamically.
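For the second option, a minimal Python sketch of assembling the newline-delimited _bulk body (the index name, type, and field names here are placeholders):

```python
import json

def build_bulk_body(docs, index, doc_type):
    # The _bulk format is one action line followed by the document
    # source line, per document, ending with a trailing newline.
    lines = []
    for doc_id, source in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type, "_id": doc_id}}))
        lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"

body = build_bulk_body([(1, {"field": "value"})], "test", "type")
# POST body to http://localhost:9200/_bulk
```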



Re: Backups of Nodes?

2014-04-01 Thread IronMan2014
I would like to understand how it works. Is there any explanation of how this 
works? The link seems to cover just the setup.

On Tuesday, April 1, 2014 11:38:45 AM UTC-4, Binh Ly wrote:
>
> If you are using ES 1.x, you can use the snapshot restore API. Quick 
> instructions:
>
> https://gist.github.com/bly2k/9652596
>



Re: Backups of Nodes?

2014-04-01 Thread Binh Ly
If you are using ES 1.x, you can use the snapshot restore API. Quick 
instructions:

https://gist.github.com/bly2k/9652596
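Roughly, snapshot/restore works in two steps: you register a repository (a shared filesystem path, or S3 via the AWS cloud plugin) and then create named snapshots into it; restore reads a snapshot back. A hedged Python sketch of the two REST calls involved (repository, snapshot, and bucket names are placeholders):

```python
import json

def snapshot_requests(repo, snapshot, bucket, region):
    # (method, path, body) for registering an S3 repository and then
    # creating a snapshot in it; restore is a POST to .../_restore.
    register = ("PUT", f"/_snapshot/{repo}",
                {"type": "s3", "settings": {"bucket": bucket, "region": region}})
    create = ("PUT", f"/_snapshot/{repo}/{snapshot}?wait_for_completion=true", {})
    return [register, create]

reqs = snapshot_requests("my_backup", "snap_1", "my-bucket", "us-east-1")
for method, path, body in reqs:
    print(method, path, json.dumps(body))
```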



Re: Not able to create index in elasticsearch-0.90.5 with lengthy names

2014-04-01 Thread Binh Ly
Can you show how you create the index? I just tried this on 0.90.5 and it 
seemed to work for me:

$ curl -XPUT localhost:9200/testindex_16161_1396381784242
{"ok":true,"acknowledged":true}



Re: Elasticsearch Transportclient throws a NoNodeAvailableException in an integration test

2014-04-01 Thread IronMan2014
Not sure. From my experience there are two things to check: the client and 
server versions have to match, and the ports have to be open. Like I 
mentioned, check with 9300 to make sure it's working there at least.

On Tuesday, April 1, 2014 11:19:22 AM UTC-4, Dario Rossi wrote:
>
> The embedded data node uses port 11547 (this setting is working with a 
> data less node without any problem).
>
>
> Il giorno martedì 1 aprile 2014 16:15:57 UTC+1, IronMan2014 ha scritto:
>>
>> Is the transportclient working on port 9300 to begin with?
>>
>> On Tuesday, April 1, 2014 11:04:43 AM UTC-4, Dario Rossi wrote:
>>>
>>> So I've a test where I try to connect to a local embedded Elasticsearch 
>>> node that usually works with other tests. The embedded node is also local = 
>>> true. It works in two other tests, one uses the HTTP Rest API and the other 
>>> the data less node client. They both work good.
>>>
>>> But when I try to use the TransportClient like this:
>>>
>>>  
>>>  Settings settings = ImmutableSettings.settingsBuilder()
>>> .put("client.transport.ignore_cluster_name", false)
>>> .put("cluster.name", clusterName)
>>> .put("client.transport.sniff", false)
>>> .put("client.transport.ping_timeout", pingTimeout)
>>> 
>>> .put("client.transport.nodes_sampler_interval",pingSamplerInterval
>>> ).build();
>>>
>>>
>>> client = new TransportClient(settings);
>>>
>>>
>>> String hostPieces[] = hosts.split(",");
>>> for (String piece : hostPieces){
>>> String hostAndPort[] = piece.trim().split(":");
>>> if (hostAndPort.length != 2){
>>> throw new IllegalArgumentException("Error in hosts 
>>> string: "+ piece);
>>> }
>>> String host = hostAndPort[0].trim();
>>> String portStr = hostAndPort[1].trim();
>>>
>>>
>>> InetSocketTransportAddress address = new 
>>> InetSocketTransportAddress(host, Integer.valueOf(portStr));
>>>
>>>
>>> client.addTransportAddress(address);
>>> }
>>>
>>> although at the begging it seems to connect to the local node, then it 
>>> dies:
>>>
>>>
>>> [2014-04-01 15:31:21,504][DEBUG][org.elasticsearch.client.transport] [
>>> Googam] adding address [[
>>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>>> [2014-04-01 15:31:21,517][DEBUG][org.elasticsearch.transport.netty] [
>>> Googam] connected to node [[
>>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>>> [2014-04-01 15:31:31,316][DEBUG][org.elasticsearch.cluster.service] [
>>> Sepulchre] processing [routing-table-updater]: execute
>>> [2014-04-01 15:31:31,317][DEBUG][org.elasticsearch.cluster.service] [
>>> Sepulchre] processing [routing-table-updater]: no change incluster_state
>>> [2014-04-01 15:32:16,521][INFO ][org.elasticsearch.client.transport] [
>>> Googam] failed to get node info for 
>>> [#transport#-1][d][inet[localhost/127.0.0.1:11547]], 
>>> disconnecting...
>>> org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[
>>> localhost/127.0.0.1:11547]][cluster/nodes/info] request_id [0] timed 
>>> outafter 
>>> [55001ms]
>>> at org.elasticsearch.transport.TransportService$TimeoutHandler.run(
>>> TransportService.java:356)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>> ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>> ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:724)
>>> [2014-04-01 15:32:16,525][DEBUG][org.elasticsearch.transport.netty] [
>>> Googam] disconnected from [[
>>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>>> [2014-04-01 15:32:16,527][DEBUG][org.elasticsearch.transport.netty] [
>>> Googam] connected to node [[
>>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>>> [2014-04-01 15:32:16,530][INFO ][org.elasticsearch.node   ] 
>>> [Sepulchre]stopping 
>>> ...
>>> [2014-04-01 15:32:16,530][DEBUG][org.elasticsearch.transport.netty] [
>>> Googam] disconnected from 
>>> [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]], 
>>> channel closed event
>>> [>> ...
>>
>>



Re: Using script fields in Kibana

2014-04-01 Thread Binh Ly
The basic idea is as follows:

1) In the Kibana src folder, there is a config.js file. At the bottom of 
that file is a list of panel names available to Kibana. You will add your 
new panel name there.

2) Then, under src/app/panels, create a folder that corresponds to your 
panel name and copy a bunch of files into it from an existing panel - I'd 
probably use the text panel as the basis for testing and experimentation.

3) Then, in your new panel folder, edit the files you copied so that the 
panel name and the references in the code match your new panel name.

4) Then study the more complex panels, like histogram or table, and you 
should be able to duplicate them and adapt them to your requirements. You 
just need to inject your script fields where the query is constructed, then 
extract the script field results and inject them into the panel's model 
data structure.

It will be time consuming but not impossible. :)



Backups of Nodes?

2014-04-01 Thread IronMan2014
If I have a cluster of 3 nodes, with primaries and replicas distributed 
across them, and I would like to back up the data to S3 for instance, do I 
have to back up all 3 nodes? Can someone elaborate?
And what if all my nodes are in the same geographical zone, and the whole 
zone goes down?
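For context on what "backing up" means here: Elasticsearch 1.0 introduced the snapshot/restore API, which takes a snapshot at the cluster level through a single registered repository, so you do not copy each node's data directory by hand. A minimal sketch of what an S3 repository registration body might look like, expressed as a Python dict for illustration (the "s3" type comes from the AWS cloud plugin, and the bucket name and repository name are placeholders):

```python
import json

# Hypothetical body for PUT /_snapshot/my_s3_backup; snapshots are taken
# at the cluster level, not per node. The bucket name is a placeholder,
# and the "s3" type requires the elasticsearch-cloud-aws plugin.
repo = {
    "type": "s3",
    "settings": {
        "bucket": "my-backup-bucket",   # placeholder
        "region": "us-east-1",
    },
}
print(json.dumps(repo, indent=2))
```

This is a sketch of the request body only, not a full backup procedure.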



Re: Elasticsearch Transportclient throws a NoNodeAvailableException in an integration test

2014-04-01 Thread Dario Rossi
The embedded data node uses port 11547 (this setting works without any 
problem with a data-less node).


On Tuesday, April 1, 2014 at 16:15:57 UTC+1, IronMan2014 wrote:
>
> Is the transportclient working on port 9300 to begin with?
>
> On Tuesday, April 1, 2014 11:04:43 AM UTC-4, Dario Rossi wrote:
>>
>> So I've a test where I try to connect to a local embedded Elasticsearch 
>> node that usually works with other tests. The embedded node is also local = 
>> true. It works in two other tests: one uses the HTTP REST API and the other 
>> the data-less node client. They both work well.
>>
>> But when I try to use the TransportClient like this:
>>
>>  
>>  Settings settings = ImmutableSettings.settingsBuilder()
>> .put("client.transport.ignore_cluster_name", false)
>> .put("cluster.name", clusterName)
>> .put("client.transport.sniff", false)
>> .put("client.transport.ping_timeout", pingTimeout)
>> 
>> .put("client.transport.nodes_sampler_interval",pingSamplerInterval
>> ).build();
>>
>>
>> client = new TransportClient(settings);
>>
>>
>> String hostPieces[] = hosts.split(",");
>> for (String piece : hostPieces){
>> String hostAndPort[] = piece.trim().split(":");
>> if (hostAndPort.length != 2){
>> throw new IllegalArgumentException("Error in hosts 
>> string: "+ piece);
>> }
>> String host = hostAndPort[0].trim();
>> String portStr = hostAndPort[1].trim();
>>
>>
>> InetSocketTransportAddress address = new 
>> InetSocketTransportAddress(host, Integer.valueOf(portStr));
>>
>>
>> client.addTransportAddress(address);
>> }
>>
>> although at the beginning it seems to connect to the local node, it then 
>> dies:
>>
>>
>> [2014-04-01 15:31:21,504][DEBUG][org.elasticsearch.client.transport] [
>> Googam] adding address [[
>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>> [2014-04-01 15:31:21,517][DEBUG][org.elasticsearch.transport.netty] [
>> Googam] connected to node [[
>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>> [2014-04-01 15:31:31,316][DEBUG][org.elasticsearch.cluster.service] [
>> Sepulchre] processing [routing-table-updater]: execute
>> [2014-04-01 15:31:31,317][DEBUG][org.elasticsearch.cluster.service] [
>> Sepulchre] processing [routing-table-updater]: no change in cluster_state
>> [2014-04-01 15:32:16,521][INFO ][org.elasticsearch.client.transport] [
>> Googam] failed to get node info for 
>> [#transport#-1][d][inet[localhost/127.0.0.1:11547]], 
>> disconnecting...
>> org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[
>> localhost/127.0.0.1:11547]][cluster/nodes/info] request_id [0] timed 
>> out after [55001ms]
>> at org.elasticsearch.transport.TransportService$TimeoutHandler.run(
>> TransportService.java:356)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:724)
>> [2014-04-01 15:32:16,525][DEBUG][org.elasticsearch.transport.netty] [
>> Googam] disconnected from [[
>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>> [2014-04-01 15:32:16,527][DEBUG][org.elasticsearch.transport.netty] [
>> Googam] connected to node [[
>> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
>> [2014-04-01 15:32:16,530][INFO ][org.elasticsearch.node   ] 
>> [Sepulchre] stopping ...
>> [2014-04-01 15:32:16,530][DEBUG][org.elasticsearch.transport.netty] [
>> Googam] disconnected from 
>> [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]], 
>> channel closed event
>> [> ...
>
>



Re: Elasticsearch Transportclient throws a NoNodeAvailableException in an integration test

2014-04-01 Thread IronMan2014
Is the transportclient working on port 9300 to begin with?

On Tuesday, April 1, 2014 11:04:43 AM UTC-4, Dario Rossi wrote:
>
> So I've a test where I try to connect to a local embedded Elasticsearch 
> node that usually works with other tests. The embedded node is also local = 
> true. It works in two other tests: one uses the HTTP REST API and the other 
> the data-less node client. They both work well.
>
> But when I try to use the TransportClient like this:
>
>  
>  Settings settings = ImmutableSettings.settingsBuilder()
> .put("client.transport.ignore_cluster_name", false)
> .put("cluster.name", clusterName)
> .put("client.transport.sniff", false)
> .put("client.transport.ping_timeout", pingTimeout)
> 
> .put("client.transport.nodes_sampler_interval",pingSamplerInterval
> ).build();
>
>
> client = new TransportClient(settings);
>
>
> String hostPieces[] = hosts.split(",");
> for (String piece : hostPieces){
> String hostAndPort[] = piece.trim().split(":");
> if (hostAndPort.length != 2){
> throw new IllegalArgumentException("Error in hosts 
> string: "+ piece);
> }
> String host = hostAndPort[0].trim();
> String portStr = hostAndPort[1].trim();
>
>
> InetSocketTransportAddress address = new 
> InetSocketTransportAddress(host, Integer.valueOf(portStr));
>
>
> client.addTransportAddress(address);
> }
>
> although at the beginning it seems to connect to the local node, it then 
> dies:
>
>
> [2014-04-01 15:31:21,504][DEBUG][org.elasticsearch.client.transport] [
> Googam] adding address [[
> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
> [2014-04-01 15:31:21,517][DEBUG][org.elasticsearch.transport.netty] [
> Googam] connected to node [[
> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
> [2014-04-01 15:31:31,316][DEBUG][org.elasticsearch.cluster.service] [
> Sepulchre] processing [routing-table-updater]: execute
> [2014-04-01 15:31:31,317][DEBUG][org.elasticsearch.cluster.service] [
> Sepulchre] processing [routing-table-updater]: no change in cluster_state
> [2014-04-01 15:32:16,521][INFO ][org.elasticsearch.client.transport] [
> Googam] failed to get node info for 
> [#transport#-1][d][inet[localhost/127.0.0.1:11547]], 
> disconnecting...
> org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[
> localhost/127.0.0.1:11547]][cluster/nodes/info] request_id [0] timed 
> out after [55001ms]
> at org.elasticsearch.transport.TransportService$TimeoutHandler.run(
> TransportService.java:356)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> [2014-04-01 15:32:16,525][DEBUG][org.elasticsearch.transport.netty] [
> Googam] disconnected from [[
> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
> [2014-04-01 15:32:16,527][DEBUG][org.elasticsearch.transport.netty] [
> Googam] connected to node [[
> #transport#-1][d][inet[localhost/127.0.0.1:11547]]]
> [2014-04-01 15:32:16,530][INFO ][org.elasticsearch.node   ] 
> [Sepulchre] stopping ...
> [2014-04-01 15:32:16,530][DEBUG][org.elasticsearch.transport.netty] [
> Googam] disconnected from 
> [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]], 
> channel closed event
> [ ...



Re: really bad post_filter performance

2014-04-01 Thread Binh Ly
I'd probably just collapse everything into a filtered query. Something like 
this:

{
  "query": {
"filtered": {
  "filter": {
"bool": {
  "must": [
{
  "terms": {
"index_ids": ["2134616789944"]
  }
}
  ],
  "should": [
{
  "terms": {
                "trashed_at": ["0"]
  }
},
{
  "not": {
"exists": {
  "field": "trashed_at"
}
  }
}
  ]
}
  }
}
  }
}
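To make the semantics of the bool filter concrete: a document passes when it matches the index_ids term and either has trashed_at equal to "0" or lacks the trashed_at field entirely. A small Python sketch of that predicate (illustrative logic only, not Elasticsearch code):

```python
def matches(doc):
    """Mimic the bool filter above: must match the index_ids term, and
    either trashed_at == "0" or the trashed_at field is absent."""
    must_ok = "2134616789944" in doc.get("index_ids", [])
    trashed = doc.get("trashed_at")
    should_ok = trashed == "0" or trashed is None
    return must_ok and should_ok

docs = [
    {"index_ids": ["2134616789944"], "trashed_at": "0"},   # kept
    {"index_ids": ["2134616789944"]},                      # kept: field missing
    {"index_ids": ["2134616789944"], "trashed_at": "99"},  # dropped
    {"index_ids": ["other"]},                              # dropped
]
print([matches(d) for d in docs])  # [True, True, False, False]
```

Pushing this into a filtered query lets Elasticsearch apply (and cache) the filter before scoring, rather than after, which is why it tends to beat post_filter here.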



Elasticsearch Transportclient throws a NoNodeAvailableException in an integration test

2014-04-01 Thread Dario Rossi
So I have a test where I try to connect to a local embedded Elasticsearch 
node that usually works with other tests. The embedded node also has local = 
true. It works in two other tests: one uses the HTTP REST API and the other 
the data-less node client. They both work well.

But when I try to use the TransportClient like this:

 
 Settings settings = ImmutableSettings.settingsBuilder()
.put("client.transport.ignore_cluster_name", false)
.put("cluster.name", clusterName)
.put("client.transport.sniff", false)
.put("client.transport.ping_timeout", pingTimeout)

.put("client.transport.nodes_sampler_interval",pingSamplerInterval
).build();


client = new TransportClient(settings);


String hostPieces[] = hosts.split(",");
for (String piece : hostPieces){
String hostAndPort[] = piece.trim().split(":");
if (hostAndPort.length != 2){
throw new IllegalArgumentException("Error in hosts string: "
+ piece);
}
String host = hostAndPort[0].trim();
String portStr = hostAndPort[1].trim();


InetSocketTransportAddress address = new 
InetSocketTransportAddress(host, Integer.valueOf(portStr));


client.addTransportAddress(address);
}

although at the beginning it seems to connect to the local node, it then dies:


[2014-04-01 15:31:21,504][DEBUG][org.elasticsearch.client.transport] [Googam] adding address [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]]
[2014-04-01 15:31:21,517][DEBUG][org.elasticsearch.transport.netty] [Googam] connected to node [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]]
[2014-04-01 15:31:31,316][DEBUG][org.elasticsearch.cluster.service] [Sepulchre] processing [routing-table-updater]: execute
[2014-04-01 15:31:31,317][DEBUG][org.elasticsearch.cluster.service] [Sepulchre] processing [routing-table-updater]: no change in cluster_state
[2014-04-01 15:32:16,521][INFO ][org.elasticsearch.client.transport] [Googam] failed to get node info for [#transport#-1][d][inet[localhost/127.0.0.1:11547]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:11547]][cluster/nodes/info] request_id [0] timed out after [55001ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
[2014-04-01 15:32:16,525][DEBUG][org.elasticsearch.transport.netty] [Googam] disconnected from [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]]
[2014-04-01 15:32:16,527][DEBUG][org.elasticsearch.transport.netty] [Googam] connected to node [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]]
[2014-04-01 15:32:16,530][INFO ][org.elasticsearch.node   ] [Sepulchre] stopping ...
[2014-04-01 15:32:16,530][DEBUG][org.elasticsearch.transport.netty] [Googam] disconnected from [[#transport#-1][d][inet[localhost/127.0.0.1:11547]]], channel closed event
[2014-04-01 15:32:16,532][INFO ][org.elasticsearch.client.transport] [Googam] failed to get node info for [#transport#-1][d][inet[localhost/127.0.0.1:11547]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:11547]][cluster/nodes/info] disconnected
[2014-04-01 15:32:16,534][INFO ][org.elasticsearch.node   ] [Sepulchre] stopped
Exception in thread "Thread-1" org.elasticsearch.client.transport.NoNodeAvailableException: No node available

In the log, Googam is the TransportClient and Sepulchre is the embedded 
node.



Need help in copy data to ElasticSearch

2014-04-01 Thread prachi sharma
I have a small dataset of tweets; the structure of the JSON is below, and 
the file contains multiple JSON objects.

{"created_at":"Fri Dec 06 13:00:00 + 
2013","id":408943910199627778,"id_str":"408943910199627778","text":"\u9ad8\u6728\u3055\u3093\u306e\u3053\u3048\u5143\u592a\u3058\u3083\u306d\n\n\u3061\u3052\u30fc\u304b","source":"\u003ca
 
href=\"https:\/\/about.twitter.com\/products\/tweetdeck\" 
rel=\"nofollow\"\u003eTweetDeck\u003c\/a\u003e","truncated":false,"in_reply_to_status_id":null,"in_reply_to_status_id_str":null,"in_reply_to_user_id":null,"in_reply_to_user_id_str":null,"in_reply_to_screen_name":null,"user":{"id":263559267,"id_str":"263559267","name":"\u3078\u3063\u3049\u306f\u5927\u4e08\u592b\u3058\u3083\u306a\u3044\u3051\u3069\u5927\u4e08\u592b\u3067\u3059","screen_name":"hello4you","location":"\u8eca\u8f2a\u306e\u56fd","url":null,"description":"\u30a2\u30cb\u30e1\u3057\u304b\u898b\u3066\u306a\u3044\u672a\u6765\u3082\u898b\u3066\u306a\u3044\u3002\r\n\u5f10\u5bfaSP\u521d\u6bb5\u306e\u5e95\u8fba\u3001\u30dd\u30c3\u30d7\u30f3\u306f\u30ec\u30d9\u30eb30\u304c\u9650\u754c\u30de\u30f3\u3002\u30ed\u30fc\u30c9\u30e9\u3001\u30d1\u30ba\u30c9\u30e9\u307c\u3061\u307c\u3061\u30003DS\u30d5\u30ec\u30b3\u3259-1312-2538\r\n\u7d20\u6575\u306a\u30a2\u30a4\u30b3\u30f3\u306f@hello4you\u3055\u3093\u306b\u66f8\u3044\u3066\u3044\u305f\u3060\u304d\u307e\u3057\u305f\uff01\uff01\u3042\u308a\u304c\u3068\u3046\u3054\u3056\u3044\u307e\u3059\uff01","protected":false,"followers_count":302,"friends_count":386,"listed_count":10,"created_at":"Thu
 
Mar 10 08:16:42 + 
2011","favourites_count":778,"utc_offset":32400,"time_zone":"Tokyo","geo_enabled":true,"verified":false,"statuses_count":63363,"lang":"ja","contributors_enabled":false,"is_translator":false,"profile_background_color":"C0DEED","profile_background_image_url":"http:\/\/a0.twimg.com\/profile_background_images\/686949334\/e345db29067bd59b463c9be16aef09e8.jpeg","profile_background_image_url_https":"https:\/\/si0.twimg.com\/profile_background_images\/686949334\/e345db29067bd59b463c9be16aef09e8.jpeg","profile_background_tile":true,"profile_image_url":"http:\/\/pbs.twimg.com\/profile_images\/37880829025490\/40c2ba6b6aa5ea89ddb609e4f0b4b624_normal.png","profile_image_url_https":"https:\/\/pbs.twimg.com\/profile_images\/37880829025490\/40c2ba6b6aa5ea89ddb609e4f0b4b624_normal.png","profile_banner_url":"https:\/\/pbs.twimg.com\/profile_banners\/263559267\/1384989886","profile_link_color":"0084B4","profile_sidebar_border_color":"FF","profile_sidebar_fill_color":"DDEEF6","profile_text_color":"33","profile_use_background_image":true,"default_profile":false,"default_profile_image":false,"following":null,"follow_request_sent":null,"notifications":null},"geo":null,"coordinates":null,"place":null,"contributors":null,"retweet_count":0,"favorite_count":0,"entities":{"hashtags":[],"symbols":[],"urls":[],"user_mentions":[]},"favorited":false,"retweeted":false,"filter_level":"medium","lang":"ja"}

How do I import this data into Elasticsearch, given that there is no index 
metadata in the data (so I can't use the bulk API directly)?
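Since each tweet is a standalone JSON object, one common approach is to generate the bulk API's action/metadata lines yourself: precede every document with an index action carrying the target index, type, and (optionally) the tweet's own id. A minimal sketch under those assumptions (the index name "tweets" and type "tweet" are placeholders, and the input is treated as newline-delimited JSON):

```python
import json

def to_bulk(lines, index="tweets", doc_type="tweet"):
    """Turn newline-delimited tweet JSON into a bulk-API payload:
    an action line followed by the document source, per tweet."""
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        doc = json.loads(line)
        action = {"index": {"_index": index, "_type": doc_type,
                            "_id": doc.get("id_str")}}
        out.append(json.dumps(action))
        out.append(json.dumps(doc))
    return "\n".join(out) + "\n"  # the bulk body must end with a newline

sample = '{"id_str": "408943910199627778", "text": "hello"}'
print(to_bulk([sample]).splitlines()[0])
```

The resulting string can then be POSTed to the _bulk endpoint; using the tweet's id_str as _id makes re-imports idempotent.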



Re: Sense on github abandoned?

2014-04-01 Thread Ivan Brusic
I personally do not require an open source license for Marvel/Sense, but I
would like to see an explicit clarification about the use of Marvel in this
scenario. Marvel does require a license to use and that would apply to any
of its subsystems. Then again, Sense does not have a license, which means
its use is also somewhat restricted.

Sense is an excellent tool, and users' dependence on it is quite
apparent from this thread. :)

I haven't packaged a Chrome plugin in about 3 years. Not only has my memory
faded, but I would assume the mechanism has changed in our fast-changing
world of development. It would be a fun exercise to attempt to do it again.

Cheers,
Ivan


On Tue, Apr 1, 2014 at 5:48 AM, Tim S  wrote:

> @kimchy the whole reason for me asking these questions is that sometimes a
> customer is using elasticsearch but they don't (yet) have a support
> contract and don't consider themselves "in development" either, so they
> wouldn't allow me to use Marvel.
> around, but sense is invaluable for constructing complicated queries etc
> quickly. In this situation they wouldn't let me install a chrome plugin
> either, but sense works nicely as an elasticsearch plugin too.
>
> So, if sense (the abandoned version on github) had some kind of permissive
> licence, I could turn up on customer site and use sense to poke around.
> Ideally, it would have a licence like AL2 which would allow me to modify
> it if necessary.
>
> I realise that you don't want updates pushed back to the version of sense
> on github because those changes are helping you to make money from Marvel,
> I understand that. But if the abandoned version of sense did have an
> appropriate licence, it would allow us to use the current version - it's
> still useful even if it's not kept up to date. I might even be tempted to
> try and keep it up to date in my spare time. But clearly I can't do this
> unless it has a licence that allows me to do it.
>
> Glad to see I'm not the only person thinking along these lines.
>
>
>
> On Tuesday, April 1, 2014 11:15:07 AM UTC+1, Jörg Prante wrote:
>>
>> +1 for Sense standalone packaging
>> +1 for Sense in Chrome Web Store
>>
>> Sense is used here all the time, it's essential.
>>
>> I have also forked the code in case Sense goes away, hoping for a FOSS
>> license.
>>
>> Not that I'm fluid in writing browser plugins, but if I find time, I am
>> not afraid of the learning curve.
>>
>> Jörg
>>
>>
>



Re: Need some help for creating my model

2014-04-01 Thread Binh Ly
This might help:

http://www.elasticsearch.org/blog/managing-relations-inside-elasticsearch/

Out of the box, you can't model many-to-many in ES (unless you do it 
yourself in code). One-to-many is supported using either nested documents 
or parent-child.
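For the many-to-many case, the "do it yourself in code" option usually means two queries plus an in-application join: fetch one side, collect the link ids, then fetch the other side by those ids and stitch the results together. A rough sketch of the join step only (the two input lists stand in for the results of two separate, hypothetical ES queries):

```python
def join_many_to_many(left_docs, right_docs, link_field="right_ids"):
    """Application-side join: attach the matching right-side documents
    to each left-side document, joining on ids stored in link_field."""
    by_id = {d["id"]: d for d in right_docs}
    joined = []
    for doc in left_docs:
        related = [by_id[i] for i in doc.get(link_field, []) if i in by_id]
        joined.append(dict(doc, related=related))
    return joined

# Stand-ins for the results of two separate queries.
left = [{"id": "a", "right_ids": ["1", "2"]}]
right = [{"id": "1", "name": "x"}, {"id": "2", "name": "y"}]
print(join_many_to_many(left, right)[0]["related"])
```

The second query would typically be an ids or terms lookup on the collected link ids; the field names here are illustrative only.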



Re: Performance problems with has parent filter

2014-04-01 Thread Lauri Fjällström
Hi,

Thank you for your replies!

I was afraid that the answer would be something like that. I was just
amazed at how slow the has_parent filter is, as any other queries take just
a few milliseconds to execute. I guess I have to find out how I could
denormalize my data. The problem is that the parents may update frequently,
and they can potentially have thousands of children.

Best,
Lauri


On Tue, Apr 1, 2014 at 11:34 AM, Karol Gwaj  wrote:

> there is not that much you can really do here
> parent/child queries tend to be very slow & eat a lot of heap space
>
> i had similar performance problem
> in my case I had a 3-level relationship (parent/child/grandchild) and query
> time was on average 10x slower for each level
>
> so my suggestion will be to switch to using nested documents + update api
> if your query time is more important than update time, that will be the
> way to go
> (in my case query performance improvement was x100 times)
>
>
> http://www.elasticsearch.org/blog/managing-relations-inside-elasticsearch/
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html
>
> Regards,
> Karol Gwaj
>
> On Sunday, March 30, 2014 8:28:33 AM UTC+1, Lauri wrote:
>>
>> Hi,
>>
>> I'm having performance problems with has parent filter.
>>
>> The mapping for the child document is:
>> {
>>   "program": {
>> "_parent": { "type": "series" },
>> ...
>>   }
>> }
>>
>> And for the parent document:
>> {
>>   "series": {
>> ...
>> "properties": {
>>   ...
>>   "subject":{
>> "type": "object",
>> "properties": {
>>   ...
>>   "_path": {
>> "type": "object",
>> "properties": {
>>   "id": { "type": "string", "analyzer": "path_analyzer" }
>>   ...
>> }
>>   }
>> }
>>   },
>>   ...
>> }
>>   }
>> }
>>
>> If I search documents of type program (the child) like this:
>> {
>>   "from": 0,
>>   "size": 25,
>>   "query": {
>> "filtered": {
>>   "query": { "match_all": {} },
>>   "filter": {
>> "has_parent": {
>>   "filter": {
>> "terms" : {
>>   "subject._path.id" : [ "5-162" ]
>> }
>>   },
>>   "parent_type" : "series"
>> }
>>   }
>> }
>>   }
>> }
>>
>> It consistently takes around 160 milliseconds to run and finds
>> about 60k documents.
>>
>> If I search documents of type series (the parent) like this:
>> {
>>   "from" : 0,
>>   "size" : 25,
>>   "query" : {
>> "filtered": {
>>   "query": { "match_all": {} },
>>   "filter": {
>> "terms": {
>>   "subject._path.id": [ "5-162" ]
>> }
>>   }
>> }
>>   }
>> }
>>
>> It takes around 5 milliseconds and returns about 400 documents.
>>
>> The total count of program objects is about 1,7M and series objects 11k.
>> The index is fully optimized and the cluster is not doing anything else.
>> The index has 3 shards and 1 replica of each shard. There are three nodes
>> in the cluster. The nodes have twice as much RAM as the index size. Half
>> of the ram is assigned to Elasticsearch. Elasticsearch version is 1.0. If I
>> use bigdesk plugin, it looks like there is more than enough ram. I'm not
>> seeing cache evictions or something like that.
>>
>> So for me it looks like there is something weird going on as the has
>> parent filter runs more than 30 times slower than the actual parent query.
>> Is there anything I can do to make it faster?
>>
>> Thanks,
>> Lauri
>>
>
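The trade-off Lauri describes (parents that update frequently, each with thousands of children) is exactly the cost of denormalizing: every parent change fans out into a reindex of all of that parent's children. A rough sketch of the denormalization itself, using stand-ins for the series/program documents from this thread (field names are illustrative):

```python
def denormalize(parents, children, fields=("subject",)):
    """Copy selected parent fields onto each child document so that
    child queries no longer need a has_parent filter."""
    by_id = {p["id"]: p for p in parents}
    out = []
    for child in children:
        parent = by_id[child["parent_id"]]
        out.append(dict(child, **{f: parent[f] for f in fields}))
    return out

parents = [{"id": "s1", "subject": {"_path": {"id": "5-162"}}}]
children = [{"id": "p1", "parent_id": "s1"}, {"id": "p2", "parent_id": "s1"}]
flat = denormalize(parents, children)
# Filtering children on the parent's field is now a plain terms filter,
# but a parent update means re-running this for every child of that
# parent, which is the fan-out cost described above.
print([c["id"] for c in flat if c["subject"]["_path"]["id"] == "5-162"])
```

Whether this wins depends on the ratio of queries to parent updates; with frequent parent updates the reindex fan-out can dominate.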



Not able to create index in elasticsearch-0.90.5 with lengthy names

2014-04-01 Thread Hanish Bansal
Hi All,

We are trying to create indices with the name format below:

*testindex_16161_1396381784242*

Up to elasticsearch-0.90.2, we were able to create indices with this
naming convention.

When I try to create an index with the same name as above in
elasticsearch-0.90.5, I get the exception below:

[2014-04-01 18:17:41,683][DEBUG][action.admin.indices.create] [node0]
[testindex_16161_1396375678203] failed to create
java.lang.NumberFormatException: For input string: "1396375678203"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:461)
at java.lang.Integer.valueOf(Integer.java:554)
at 
org.elasticsearch.cluster.routing.RoutingIndexSorter$ShardComparator.compare(RoutingIndexSorter.java:89)
at 
org.elasticsearch.cluster.routing.RoutingIndexSorter$ShardComparator.compare(RoutingIndexSorter.java:82)
at 
org.apache.lucene.util.CollectionUtil$ListIntroSorter.compare(CollectionUtil.java:64)
at org.apache.lucene.util.Sorter.insertionSort(Sorter.java:169)
at org.apache.lucene.util.IntroSorter.quicksort(IntroSorter.java:46)
at org.apache.lucene.util.IntroSorter.sort(IntroSorter.java:41)
at 
org.apache.lucene.util.CollectionUtil.introSort(CollectionUtil.java:137)
at 
org.elasticsearch.cluster.routing.RoutingIndexSorter.getSortedList(RoutingIndexSorter.java:57)
at 
org.elasticsearch.cluster.routing.RoutingNodes.unassigned(RoutingNodes.java:232)
at 
org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:154)
at 
org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$2.execute(MetaDataCreateIndexService.java:359)
at 
org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:299)
at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)

If I remove some digits to shorten the index name, for example to

*testindex_16161_1396381*, and then try again, I am able to create the
index successfully.

Is there a limit on index names in elasticsearch-0.90.5? If not, how
can I resolve this issue?
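The stack trace suggests the cause is not a length limit on index names as such: the shard comparator in 0.90.5 appears to parse the numeric suffix of the index name with Integer.parseInt, and 1396375678203 does not fit in a signed 32-bit int, which is exactly the condition under which parseInt throws NumberFormatException. A quick check of the boundary (plain arithmetic, not Elasticsearch code):

```python
# Java's Integer.parseInt only accepts values in [-2**31, 2**31 - 1];
# anything outside that range raises NumberFormatException.
INT_MAX = 2**31 - 1           # 2147483647

suffix_long = 1396375678203   # from testindex_16161_1396375678203 (fails)
suffix_short = 1396381        # from testindex_16161_1396381 (works)

print(suffix_long > INT_MAX)    # True: overflows a Java int
print(suffix_short <= INT_MAX)  # True: fits, so the index can be created
```

So it is the magnitude of the trailing digits, not the overall name length, that trips the parser; a suffix at or below 2147483647 (or a non-numeric suffix) would sidestep it.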

-- 
*Thanks & Regards*
*Hanish Bansal*



Re: how to modify term frequency formula?

2014-04-01 Thread geantbrun
Sure.

{
 "settings" : {
  "index" : {
   "similarity" : {
    "my_similarity" : {
     "type" : "tfCappedSimilarity"
    }
   }
  }
 },
 "mappings" : {
  "post" : {
   "properties" : {
    "id" : { "type" : "long", "store" : "yes", "precision_step" : "0" },
    "name" : { "type" : "string", "store" : "yes", "index" : "analyzed" },
    "contents" : { "type" : "string", "store" : "no", "index" : "analyzed",
     "similarity" : "my_similarity" }
   }
  }
 }
}

If I use tfCapped instead of tfCappedSimilarity in the mapping, the error 
is the same, except that the provider is referred to as 
tfCappedSimilarityProvider and not as tfCappedSimilaritySimilarityProvider.
Cheers,
Patrick
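The doubled "Similarity" in the error is consistent with the provider class name being derived by appending "SimilarityProvider" to the configured type string, so a type that already ends in "Similarity" yields ...SimilaritySimilarityProvider. A sketch of that inferred derivation (an assumption based on the two error messages above, not taken from Elasticsearch source):

```python
def provider_class_name(type_setting):
    """Append "SimilarityProvider" to the configured similarity `type`
    string (inferred from the two errors in this thread, not from
    Elasticsearch source)."""
    return type_setting + "SimilarityProvider"

print(provider_class_name("tfCappedSimilarity"))  # tfCappedSimilaritySimilarityProvider
print(provider_class_name("tfCapped"))            # tfCappedSimilarityProvider
```

If that derivation is right, the lookup expects a class whose name matches the type setting plus that suffix, which explains why both spellings fail with a ClassNotFoundException for the derived name.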


On Monday, March 31, 2014 at 17:13:24 UTC-4, Ivan Brusic wrote:
>
> Can you also post your mapping where you defined the similarity?
>
> -- 
> Ivan
>
>
> On Mon, Mar 31, 2014 at 10:36 AM, geantbrun 
> > wrote:
>
>> I realize that I probably have to define the similarity property of my 
>> field as "my_similarity" (and not as "tfCappedSimilarity") and define in 
>> the settings my_similarity as being of type tfCappedSimilarity.
>> When I do that, I get the following error at the index/mapping creation:
>>
>> {"error":"IndexCreationException[[exbd] failed to create index]; nested: 
>> NoClassSettingsException[Failed to load class setting [type] with value 
>> [tfCappedSimilarity]]; nested: 
>> ClassNotFoundException[org.elasticsearch.index.similarity.tfcappedsimilarity.tfCappedSimilaritySimilarityProvider];
>>  
>> ","status":500}]
>>
>> Note that the provider is referred to in the error as 
>> tfCappedSimilaritySimilarityProvider 
>> (Similarity repeated twice). Is that normal?
>> Patrick
>>
On Monday, March 31, 2014 at 13:06:00 UTC-4, geantbrun wrote:
>>
>>> Hi Ivan,
>>> I followed your instructions but it does not seem to work, I must be 
>>> wrong somewhere. I created the jar file from the following two java files, 
>>> could you tell me if they are ok?
>>>
>>> tfCappedSimilarity.java
>>> ***
>>> package org.elasticsearch.index.similarity;
>>>
>>> import org.apache.lucene.search.similarities.DefaultSimilarity;
>>> import org.elasticsearch.common.logging.ESLogger;
>>> import org.elasticsearch.common.logging.Loggers;
>>>
>>> public class tfCappedSimilarity extends DefaultSimilarity {
>>>
>>> private ESLogger logger;
>>>
>>> public tfCappedSimilarity() {
>>> logger = Loggers.getLogger(getClass());
>>> }
>>>
>>> /**
>>>  * Capped tf value
>>>  */
>>> @Override
>>> public float tf(float freq) {
>>> return (float)Math.sqrt(Math.min(9, freq));
>>> }
>>> }
>>>
>>> tfCappedSimilarityProvider.java
>>> *
>>> package org.elasticsearch.index.similarity;
>>>
>>> import org.elasticsearch.common.inject.Inject;
>>> import org.elasticsearch.common.inject.assistedinject.Assisted;
>>> import org.elasticsearch.common.settings.Settings;
>>>
>>> public class tfCappedSimilarityProvider extends 
>>> AbstractSimilarityProvider {
>>>
>>> private tfCappedSimilarity similarity;
>>>
>>> @Inject
>>> public tfCappedSimilarityProvider(@Assisted String name, 
>>> @Assisted Settings settings) {
>>> super(name);
>>> this.similarity = new tfCappedSimilarity();
>>> }
>>>
>>> /**
>>>  * {@inheritDoc}
>>>  */
>>> @Override
>>> public tfCappedSimilarity get() {
>>> return similarity;
>>> }
>>> }
>>>
>>>
>>> In my mapping, I define the similarity property of my field as 
>>> tfCappedSimilarity, is it ok?
>>>
>>> What makes me say that it does not work: I insert a doc with a word 
>>> repeated 16 times in my field. When I do a search with that word, the 
>>> result shows a tf of 4 (square root of 16) and not 3 as I was expecting. Is 
>>> there a way to know if the similarity was loaded or not (maybe in a log 
>>> file?).
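The expected values above are easy to verify outside Elasticsearch. A standalone Python sketch of the same arithmetic as the tf() method (just the formula, not ES code):

```python
import math

def capped_tf(freq: float) -> float:
    # Same arithmetic as tfCappedSimilarity.tf(): square root of the
    # term frequency, with the frequency capped at 9.
    return math.sqrt(min(9.0, freq))

print(capped_tf(16))  # 3.0 -> what a loaded custom similarity should yield
print(capped_tf(4))   # 2.0 -> below the cap, plain square root
```

A tf of 4 for a 16-fold repetition is exactly the uncapped sqrt(16), which suggests the stock DefaultSimilarity was used and the custom similarity was not picked up.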
>>>
>>> Cheers,
>>> Patrick
>>>
>>> On Wednesday, March 26, 2014 at 17:16:36 UTC-4, Ivan Brusic wrote:

 I updated my gist to illustrate the SimilarityProvider that goes along 
 with it. Similarities are easier to add to Elasticsearch than most 
 plugins. 
 You just need to compile the two files into a jar and then add that jar 
 into Elasticsearch's classpath ($ES_HOME/lib most likely). The code will 
 scan for every SimilarityProvider defined and load it.

 You then map the similarity to a field:
 http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#_configuring_similarity_per_field

 Note that you cannot change the similarity of a field dynamically.

 Ivan




 On Wed, Mar 26, 2014 at 12:49 PM, geantbrun wrote:

> 

Re: Using script fields in Kibana

2014-04-01 Thread Gal Zolkover
OK, I'm native to Java so no issue there. Any references I can use or 
examples I can follow?

On Tuesday, April 1, 2014 3:48:19 PM UTC+3, Gal Zolkover wrote:
>
> Hi All
>
> I'm new to ES and Kibana and i have a simple question:
>
> I have ES collecting counters and i'd like to present in Kibana KPI based 
> on the collected counters so i have a script that works in Chrome Sense 
> extension,
> my question is how do i pass this script and execute it via Kibana to 
> present the calculated counters AKA KPI new calculated fields in the 
> Dashboard as a histogram and as table?
>
> script example
>
> GET kpi1-/_search
>   {
>   
> "script_fields" : {
> "reg_succ_rate" : {
> "script" : "doc['UEREGISTRATIONSUCCESS'].value / 
> doc['UEREGISTRATIONATTEMPTS'].value"
> },
> "test2" : {
> "script" : "doc['UEREGISTRATIONATTEMPTS'].value * factor",
> "params" : {
> "factor"  : 1.0
> }
> }
> }
> }
>
> Regards
>
> Gal
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/48560ce2-c39a-4f83-bdea-fd557c21324c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Using script fields in Kibana

2014-04-01 Thread Binh Ly
Unfortunately not out of the box. If you're up to it, you can probably 
create a panel that runs this kind of query in Kibana and wire it in. You'd 
need to learn a little Angular and JavaScript, but it's doable. :)

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/bfeb09a5-8efb-4900-8781-f6b0d062e9bd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: aggregation with conditions

2014-04-01 Thread Binh Ly
The first one is not directly available; however, a terms aggregation sorted 
by _count ascending will bubble up the least frequent terms (emails), and you 
can then filter out the ones you want yourself. The second one sounds like a 
simple terms aggregation on the email field (just make sure the email field 
is not_analyzed):

{
  "aggs": {
"group_by_email": {
  "terms": {
"field": "email"
  }
}
  }
}
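Both cases can then be post-processed from the bucket counts on the client side. A toy Python sketch of that logic (hypothetical data, not an ES call):

```python
from collections import Counter

docs = [  # hypothetical stand-ins for the indexed documents
    {"email": "a@example.com", "points": 5},
    {"email": "a@example.com", "points": 2},
    {"email": "b@example.com", "points": 1},
    {"email": "c@example.com", "points": 3},
]

# Equivalent of the terms-aggregation buckets: email -> doc count.
counts = Counter(d["email"] for d in docs)

# Case 1: emails that appear exactly once (least frequent first).
singles = [e for e, n in sorted(counts.items(), key=lambda kv: kv[1]) if n == 1]
print(singles)  # ['b@example.com', 'c@example.com']

# Case 2: how many emails appeared N times (a histogram of the counts).
occurrence_histogram = Counter(counts.values())
print(occurrence_histogram)  # Counter({1: 2, 2: 1})
```

In practice the same filtering would run over the returned aggregation buckets instead of raw documents.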

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/3c2f9f63-9a05-4bd5-beda-093f162b48e2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Using script fields in Kibana

2014-04-01 Thread Gal Zolkover
Hi All

I'm new to ES and Kibana and I have a simple question:

I have ES collecting counters, and I'd like to present KPIs in Kibana based 
on the collected counters. I have a script that works in the Chrome Sense 
extension; my question is: how do I pass this script to Kibana and execute 
it there, so the calculated counters (the new KPI fields) appear in the 
dashboard as a histogram and as a table?

script example

GET kpi1-/_search
  {
  
"script_fields" : {
"reg_succ_rate" : {
"script" : "doc['UEREGISTRATIONSUCCESS'].value / 
doc['UEREGISTRATIONATTEMPTS'].value"
},
"test2" : {
"script" : "doc['UEREGISTRATIONATTEMPTS'].value * factor",
"params" : {
"factor"  : 1.0
}
}
}
}
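For readers new to script fields: each script is evaluated once per matching document, against that document's field values. The same arithmetic, sketched in Python with made-up counter values:

```python
# Toy illustration of what the two script_fields compute per document
# (hypothetical counter values; in ES these come from doc values).
doc = {"UEREGISTRATIONSUCCESS": 90.0, "UEREGISTRATIONATTEMPTS": 100.0}

factor = 1.0  # the "params" value passed alongside the script
reg_succ_rate = doc["UEREGISTRATIONSUCCESS"] / doc["UEREGISTRATIONATTEMPTS"]
test2 = doc["UEREGISTRATIONATTEMPTS"] * factor

print(reg_succ_rate)  # 0.9
print(test2)          # 100.0
```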

Regards

Gal

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/7dd9a0be-fc34-4c30-8111-ebe07417528a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Sense on github abandoned?

2014-04-01 Thread Tim S
@kimchy the whole reason for me asking these questions is that sometimes a 
customer is using Elasticsearch but doesn't (yet) have a support contract, 
and doesn't consider themselves "in development" either, and thus wouldn't 
allow me to use Marvel. Yes, there are other tools for poking around, but 
Sense is invaluable for constructing complicated queries etc. quickly. In 
this situation they wouldn't let me install a Chrome plugin either, but 
Sense works nicely as an elasticsearch plugin too.

So, if sense (the abandoned version on github) had some kind of permissive 
licence, I could turn up on customer site and use sense to poke around.
Ideally, it would have a licence like AL2 which would allow me to modify it 
if necessary.

I realise that you don't want updates pushed back to the version of sense 
on github because those changes are helping you to make money from Marvel, 
I understand that. But if the abandoned version of sense did have an 
appropriate licence, it would allow us to use the current version - it's 
still useful even if it's not kept up to date. I might even be tempted to 
try and keep it up to date in my spare time. But clearly I can't do this 
unless it has a licence that allows me to do it.

Glad to see I'm not the only person thinking along these lines.


On Tuesday, April 1, 2014 11:15:07 AM UTC+1, Jörg Prante wrote:
>
> +1 for Sense standalone packaging
> +1 for Sense in Chrome Web Store
>
> Sense is used here all the time, it's essential.
>
> I have also forked the code in case Sense goes away, hoping for a FOSS 
> license.
>
> Not that I'm fluid in writing browser plugins, but if I find time, I am 
> not afraid of the learning curve.
>
> Jörg
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/837794c8-1a0a-411f-a29c-852133d6fbc2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: JSON validation by Sense

2014-04-01 Thread Boaz Leskes
If the JSON isn't valid, Sense will display a little red icon on the line 
number where things break (including a message saying why). 

Cheers,
Boaz

On Friday, March 28, 2014 1:51:28 PM UTC+1, Nikolas Everett wrote:
>
> The one in chrome will refuse to auto indent invalid json, I believe. 
>
> Sent from my iPhone
>
> On Mar 28, 2014, at 8:41 AM, aristechnologypartn...@gmail.com wrote:
>
> perhaps fairly naive question, but does Sense let you validate JSON 
> documents before they are sent to ES? Thx.
>
> I'm using one that comes with "latest" Marvel and only see "Request" and 
> "Copy as cURL" and "Auto Indent" functions. 
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/84b0c616-4f62-4219-9efa-0d293eebd761%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/9d844c1a-1f58-4c76-b5a6-3599de3207e8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Marvel - monitoring multiple ES clusters by one Monitoring ES cluster

2014-04-01 Thread Boaz Leskes
Hi Me,

At the moment marvel doesn't support monitoring multiple production 
clusters with a single monitoring one. Working on it!

Cheers,
Boaz

On Friday, March 28, 2014 2:05:56 PM UTC+1, me wrote:
>
> Hi there,
>
> I have installed a 3 node ES 
> "Monitoring" 
> cluster and configured two other ES clusters (a 10-node one and a 3-node one) 
> to send data to it. The data gathering / storage seems to be working fine, 
> but the display using "latest" Marvel UI a bit confusing. For example, on 
> "Cluster Overview" page, how does one should pick a specific ES Cluster? At 
> the moment, the "Cluster Summary" is jumping between two ES cluster I have 
> seems to me randomly. I did try to add query criteria on top to make it 
> choose to no avail... 
>
> That said, the "Nodes" panel on the same page works as expected - i.e. by 
> default it shows all reporting nodes (from both clusters), but if I specify 
> "Filter" there, I do get information about Nodes of a specific cluster. 
>
> I'm new to Marvel and the above may seem a bit naive, but in its current 
> form, was "latest" Marvel intended to support my use case? Thx.
>
> Thanks for your help. 
>
> Regards, Me.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1d741ed0-5072-469b-993c-2e68cde8e573%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Relevancy sorting of result returned

2014-04-01 Thread chee hoo lum
Hi Binh,

Thanks. Excellent info that you shared. In addition, I would like to know
how the scores are actually calculated, as the two queries below yield different
results:


1)

{
  "from" : 0,
  "size" : 100,
  "query" : {
"filtered" : {
  "query" : {
 "multi_match": {
   "query": "love",
   "fields": [ "DISPLAY_NAME^6", "LONG_DESCRIPTION",
"SHORT_DESCRIPTION", "PERFORMER" ]
}
  },
  "filter" : {
"query" : {
  "bool" : {
  "must" : {
"term" : {
  "CHANNEL_ID" : "1"
}
  }
}
}
  }
}
  }
}


*Result *:
  "_score": *1.372128*,
"_source": {
"DISPLAY_NAME": "Listen To My Song",
"PRICE": 5,


2)

{
  "from" : 0,
  "size" : 100,
  "query" : {
"filtered" : {
  "query" : {
"bool" : {
"should" : [ {
  "wildcard" : {
"DISPLAY_NAME" : {"value": "love", "boost": 6}
  }
}, {
  "wildcard" : {
"LONG_DESCRIPTION" : "love"
  }
}, {
  "wildcard" : {
"SHORT_DESCRIPTION" : "love"
  }
}, {
  "wildcard" : {
"PERFORMER" : "love"
  }
} ]
  }
  },
  "filter" : {
"query" : {
  "bool" : {
  "must" : {
"term" : {
  "CHANNEL_ID" : "1"
}
  }
}
}
  }
}
  }
}


*Result *:

"_score":* 0.040032037*,
"_source": {
"DISPLAY_NAME": "Listen To My Song",


On Tue, Apr 1, 2014 at 5:59 AM, Binh Ly  wrote:

> ^ is a boost - so it makes the match score higher. About your other
> question, that's default behavior for Lucene scoring - i.e., fields that
> are shorter will have higher relevancy against your query terms. You can
> disable norms if you don't want this behavior:
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#norms
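A simplified sketch of the length norm behind that behavior, assuming the classic Lucene DefaultSimilarity (the real value is also boost-multiplied and lossily encoded into a single byte, so exact scores differ):

```python
import math

# Classic Lucene length norm: 1 / sqrt(number of terms in the field).
# Shorter fields get a larger multiplier, so matches in them score higher.
def length_norm(num_terms: int) -> float:
    return 1.0 / math.sqrt(num_terms)

print(length_norm(4))    # 0.5  e.g. a short DISPLAY_NAME
print(length_norm(100))  # 0.1  e.g. a long LONG_DESCRIPTION
```

Disabling norms removes this multiplier, so field length no longer affects the score.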
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "elasticsearch" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/elasticsearch/RXuuSlkDSyA/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/099604d4-b5f2-492b-b8cd-185d66293921%40googlegroups.com
> .
>
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Regards,

Chee Hoo

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAGS0%2Bg8L_a5TNMyZa7Q9k%3D%2BR012W_tXckkp%2B_MiRVcacyktEmg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: near real time alerts for syslogs

2014-04-01 Thread Christophe Dontaine
Hi David,

I have the same request but, as a new user of ES, I'm interested to know why 
the alerting process should be moved to the Logstash layer.

I'm thinking (on a whiteboard for now) about a Logstash layer (Log->ES) 
followed by an ES layer (index + alerting).
I thought of building queries via the percolate API so as to centralize the 
alerting process, instead of duplicating the same "rules" across every 
Logstash instance.

Is it so much heavier in terms of CPU/IO/... on the ES layer side that you 
prefer to move this to the Logstash layer? Or are there other reasons?
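For readers new to the percolate API mentioned above: percolation inverts the usual flow, in that queries are registered up front and each incoming document is matched against them. A conceptual Python sketch (toy rules, not the ES API):

```python
# Hypothetical alert rules; in ES these would be registered queries
# that the percolate API matches incoming documents against.
alert_queries = {
    "disk-full": lambda doc: "disk full" in doc["message"],
    "auth-fail": lambda doc: doc.get("facility") == "auth"
                             and "failed" in doc["message"],
}

def percolate(doc):
    # Return the names of all registered queries the document matches.
    return [name for name, matches in alert_queries.items() if matches(doc)]

event = {"facility": "auth", "message": "login failed for root"}
print(percolate(event))  # ['auth-fail']
```

Centralizing the rules this way means each alert is defined once on the ES side, rather than replicated into every Logstash configuration.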

Thanks in advance.

Christophe

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5e073c5a-7681-45db-a86d-cabefe2f4411%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


aggregation with conditions

2014-04-01 Thread lvalbuena
Hi,

I have 2 cases.

Given the structure
{
   email:value,
   points:value
}

Case 1:
I have 1000  rows, where multiple rows can have the same value for the 
email field.
{"email":"s...@email.com","points":5}
{"email":"s...@email.com","points":2}
...

How do I tell Elasticsearch to search for all emails that have appeared 
only *once* in the data set?

Case 2:
Also using aggregation: how can I tell Elasticsearch to count how many 
emails appeared each possible number of times in the data set?
e.g.
emails = 5, occurrences >= 5 // there are 5 emails that appeared 5 or more 
times in the dataset
emails = 6, occurrences = 4
emails = 23, occurrences = 3
emails = 2, occurrences = 2
emails = 12, occurrences = 1

Or is it even possible?

Thanks

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/240d054f-131c-4904-81d6-95b7982f2f6c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Sense on github abandoned?

2014-04-01 Thread joergpra...@gmail.com
+1 for Sense standalone packaging
+1 for Sense in Chrome Web Store

Sense is used here all the time, it's essential.

I have also forked the code in case Sense goes away, hoping for a FOSS
license.

Not that I'm fluid in writing browser plugins, but if I find time, I am not
afraid of the learning curve.

Jörg

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoEsuWZBZ_XHA7XrUKq16J5TKvSuq2bR5f0s2F1bJGecrw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: Filter first then search

2014-04-01 Thread Max Ivanov
Production data is ~ 1 documents; 230 of them match the filter.
I need my text queries to be analyzed, and the prefix query doesn't support 
that, so it won't help.

On Thursday, March 27, 2014 1:20:59 PM UTC, Binh Ly wrote:
>
> Not sure if your example data is representative of production data, but if 
> you have single not_analyzed term values in the field title, you can 
> probably use the prefix query instead.
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/e74f146f-10db-4759-8abc-08b89b389e63%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Sense on github abandoned?

2014-04-01 Thread Itamar Syn-Hershko
The Chrome plugin integration (or other x-platform executable of any kind)
is a quick road to success which you really want to have with Sense.
Requiring a Marvel download kinda beats this purpose.

And doing wget && tar && open isn't x-platform, nor easy.

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Author of RavenDB in Action 


On Tue, Apr 1, 2014 at 12:19 PM, Shay Banon  wrote:

> Heya, you can easily download Marvel and run Sense as a standalone app
> from Marvel. In the next version of Marvel, we will make it even simpler,
> in which case, even users that don't run Chrome (like myself ;) ) can more
> easily enjoy it.
>
> On Apr 1, 2014, at 11:05, Mark Walkom  wrote:
>
> +1 from me to this sentiment too, Sense as a standalone app is awesome.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 1 April 2014 19:59, Nicholas Summerlin wrote:
>
>> I agree with Itamar. I'm a recent Elasticsearch user and Sense made it
>> easier for me to learn the query syntax. I recommend it to all ES
>> beginners. I only learned of Marvel later.
>>
>> +1 for keeping the Chrome plugin on the Web Store, perhaps with a
>> disclaimer that it's incomplete and no longer maintained.
>>
>> --
>> Nick
>>
>>
>>
>>  On Tue, Apr 1, 2014 at 10:51 AM, Itamar Syn-Hershko 
>> wrote:
>>
>>> Boaz,
>>>
>>> Sense is a must-have tool for any developer playing with Elasticsearch,
>>> at least in my opinion. I use it daily to quickly sketch and test modeling
>>> ideas, or just to prove a point. This is also the first thing I ask my
>>> customers to install whenever I give on-site consults.
>>>
>>> I'd ask you and Elasticsearch to continue releasing Sense as a chrome
>>> plugin that we can install and use. Even if it gets updated less
>>> frequently, it's still an awesome tool and 80% is better than nothing.
>>>
>>> You can't rely on having Marvel installed for using Sense - it's an
>>> additional install step, and many remote staging/prod clusters don't have
>>> it installed. Sense is really great as a client-side independent thing.
>>>
>>> You can quote me on that: working with Elasticsearch is not the same
>>> without Sense available as a Chrome plugin.
>>>
>>> For your consideration,
>>>
>>> --
>>>
>>> Itamar Syn-Hershko
>>> http://code972.com | @synhershko 
>>> Freelance Developer & Consultant
>>> Author of RavenDB in Action 
>>>
>>>
>>> On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:
>>>
 Hi Tim, Ivan,

 Sense, the chrome extension, was started as a hobby project of mine. I did
 it in my spare time  - I actually started it to prove it can be done in a
 weekend ;). As you know Sense is part of Marvel now where it will be
 professionally developed and get way more time and love. The github repo is
 not maintained anymore and the code as published on GitHub will not be
 developed further. I've updated the readme to make that more clear.

 As Tim noted, the Chrome extension has known bugs and does not support
 all of 1.0's features (like Aggregations). Since bug fixes are all going
 forward in Marvel, I've decided to remove the plugin from the Chrome Web
 Store so new users will not have a bad initial experience.

 I (and many others) am always happy to answer any questions/issues
 regarding Marvel on the mailing list. If you find any issue, please post it
 and it will be picked up. I know you both know this as we interacted
 before, but I wanted to say again for anyone else that might read this in
 the future.
 Cheers,
 Boaz

 On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
>
> I would have suggested opening an issue on Github to clarify the
> license, but issues are disabled on the repo. I agree that Sense is a
> fantastic tool. I deploy apps using the Java API, but I formulate my
> thoughts and queries beforehand with Sense. Beforehand I was just using
> straight curl commands.
>
> Boaz is a regular on the mailing list, he probably will chime in soon.
>
> --
> Ivan
>
>
> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
>
>> Yeah, I cloned it locally in case it disappears and it seems to work
>> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't
>> really need to be a chrome extension. But I guess its usefulness is 
>> limited
>> if dev has stopped and the auto-complete isn't kept up-to-date with the
>> elasticsearch api.
>>
>> Also if I recommended my clients used it (which I would, because it's
>> one of the better rest front ends for ES I've come across because of the
>> auto-complete) then they'd want to know whether they were actually 
>> allowed

Re: Sense on github abandoned?

2014-04-01 Thread Shay Banon
Here is how you do it now:

wget 
https://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.tar.gz
tar -xzf marvel-latest.tar.gz
open _site/sense/index.html


On Apr 1, 2014, at 11:19, Shay Banon  wrote:

> Heya, you can easily download Marvel and run Sense as a standalone app from 
> Marvel. In the next version of Marvel, we will make it even simpler, in which 
> case, even users that don't run Chrome (like myself ;) ) can more easily 
> enjoy it.
> 
> On Apr 1, 2014, at 11:05, Mark Walkom  wrote:
> 
>> +1 from me to this sentiment too, Sense as a standalone app is awesome.
>> 
>> Regards,
>> Mark Walkom
>> 
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>> 
>> 
>> On 1 April 2014 19:59, Nicholas Summerlin  
>> wrote:
>> I agree with Itamar. I'm a recent Elasticsearch user and Sense made it 
>> easier for me to learn the query syntax. I recommend it to all ES beginners. 
>> I only learned of Marvel later. 
>> 
>> +1 for keeping the Chrome plugin on the Web Store, perhaps with a disclaimer 
>> that it's incomplete and no longer maintained. 
>> 
>> --
>> Nick
>> 
>> 
>> 
>> On Tue, Apr 1, 2014 at 10:51 AM, Itamar Syn-Hershko  
>> wrote:
>> Boaz,
>> 
>> Sense is a must-have tool for any developer playing with Elasticsearch, at 
>> least in my opinion. I use it daily to quickly sketch and test modeling 
>> ideas, or just to prove a point. This is also the first thing I ask my 
>> customers to install whenever I give on-site consults.
>> 
>> I'd ask you and Elasticsearch to continue releasing Sense as a chrome plugin 
>> that we can install and use. Even if it gets updated less frequently, it's 
>> still an awesome tool and 80% is better than nothing.
>> 
>> You can't rely on having Marvel installed for using Sense - it's an 
>> additional install step, and many remote staging/prod clusters don't have it 
>> installed. Sense is really great as a client-side independent thing.
>> 
>> You can quote me on that: working with Elasticsearch is not the same without 
>> Sense available as a Chrome plugin.
>> 
>> For your consideration,
>> 
>> --
>> 
>> Itamar Syn-Hershko
>> http://code972.com | @synhershko
>> Freelance Developer & Consultant
>> Author of RavenDB in Action
>> 
>> 
>> On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:
>> Hi Tim, Ivan,
>> 
>> Sense, the chrome extension, was started a hobby project of mine. I did it 
>> in my spare time  - I actually started it to prove it can be done in a 
>> weekend ;). As you know Sense is part of Marvel now where it will be 
>> professionally developed and get way more time and love. The github repo is 
>> not maintained anymore and the code as published on GitHub will not be 
>> developed further. I've updated the readme to make that more clear.
>> 
>> As Tim noted, the Chrome extension has known bugs and does not support all 
>> of 1.0's features (like Aggregations). Since bug fixes are all going forward 
>> in Marvel, I've decided to remove the plugin from the Chrome Web Store so 
>> new users will not have a bad initial experience.
>> 
>> I (and many others) am always happy to answer any questions/issue regard 
>> Marvel on the mailing list. If you find any issue, please post it and it 
>> will be picked up. I know you both know this as we interacted before, but I 
>> wanted to say again for anyone else that might read this in the future.
>> Cheers,
>> Boaz
>> 
>> On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
>> I would have suggested opening an issue on Github to clarify the license, 
>> but issues are disabled on the repo. I agree that Sense is a fantastic tool. 
>> I deploy apps using the Java API, but I formulate my thoughts and queries 
>> beforehand with Sense. Beforehand I was just using straight curl commands.
>> 
>> Boaz is a regular on the mailing list, he probably will chime in soon.
>> 
>> -- 
>> Ivan
>> 
>> 
>> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
>> Yeah, I cloned it locally in case it disappears and it seems to work 
>> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't 
>> really need to be a chrome extension. But I guess its usefulness if limited 
>> if dev has stopped and the auto-complete isn't kept up-to-date with the 
>> elasticsearch api.
>> 
>> Also if I recommended my clients used it (which I would, because it's one of 
>> the better rest front ends for ES I've come across because of the 
>> auto-complete) then they'd want to know whether they were actually allowed 
>> to use it i.e. what licence it is.
>> 
>> 
>> 
>> On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:
>> I just forked it in case it goes away. I haven't built a Chrome extension in 
>> years. Let me re-figure out how to do it and update my fork with build/local 
>> installation instructions.
>> 
>> The lack of a license might be problematic since default copyright 
>> provisions apply: http://choosealicense.com/no-license/
>> 
>> -- 
>> 

Re: Sense on github abandoned?

2014-04-01 Thread Shay Banon
Heya, you can easily download Marvel and run Sense as a standalone app from 
Marvel. In the next version of Marvel, we will make it even simpler, in which 
case, even users that don't run Chrome (like myself ;) ) can more easily enjoy 
it.

On Apr 1, 2014, at 11:05, Mark Walkom  wrote:

> +1 from me to this sentiment too, Sense as a standalone app is awesome.
> 
> Regards,
> Mark Walkom
> 
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
> 
> 
> On 1 April 2014 19:59, Nicholas Summerlin  
> wrote:
> I agree with Itamar. I'm a recent Elasticsearch user and Sense made it easier 
> for me to learn the query syntax. I recommend it to all ES beginners. I only 
> learned of Marvel later. 
> 
> +1 for keeping the Chrome plugin on the Web Store, perhaps with a disclaimer 
> that it's incomplete and no longer maintained. 
> 
> --
> Nick
> 
> 
> 
> On Tue, Apr 1, 2014 at 10:51 AM, Itamar Syn-Hershko  
> wrote:
> Boaz,
> 
> Sense is a must-have tool for any developer playing with Elasticsearch, at 
> least in my opinion. I use it daily to quickly sketch and test modeling 
> ideas, or just to prove a point. This is also the first thing I ask my 
> customers to install whenever I give on-site consults.
> 
> I'd ask you and Elasticsearch to continue releasing Sense as a chrome plugin 
> that we can install and use. Even if it gets updated less frequently, it's 
> still an awesome tool and 80% is better than nothing.
> 
> You can't rely on having Marvel installed for using Sense - it's an 
> additional install step, and many remote staging/prod clusters don't have it 
> installed. Sense is really great as a client-side independent thing.
> 
> You can quote me on that: working with Elasticsearch is not the same without 
> Sense available as a Chrome plugin.
> 
> For your consideration,
> 
> --
> 
> Itamar Syn-Hershko
> http://code972.com | @synhershko
> Freelance Developer & Consultant
> Author of RavenDB in Action
> 
> 
> On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:
> Hi Tim, Ivan,
> 
> Sense, the chrome extension, was started a hobby project of mine. I did it in 
> my spare time  - I actually started it to prove it can be done in a weekend 
> ;). As you know Sense is part of Marvel now where it will be professionally 
> developed and get way more time and love. The github repo is not maintained 
> anymore and the code as published on GitHub will not be developed further. 
> I've updated the readme to make that more clear.
> 
> As Tim noted, the Chrome extension has known bugs and does not support all of 
> 1.0's features (like Aggregations). Since bug fixes are all going forward in 
> Marvel, I've decided to remove the plugin from the Chrome Web Store so new 
> users will not have a bad initial experience.
> 
> I (and many others) am always happy to answer any questions/issue regard 
> Marvel on the mailing list. If you find any issue, please post it and it will 
> be picked up. I know you both know this as we interacted before, but I wanted 
> to say again for anyone else that might read this in the future.
> Cheers,
> Boaz
> 
> On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
> I would have suggested opening an issue on Github to clarify the license, but 
> issues are disabled on the repo. I agree that Sense is a fantastic tool. I 
> deploy apps using the Java API, but I formulate my thoughts and queries 
> beforehand with Sense. Beforehand I was just using straight curl commands.
> 
> Boaz is a regular on the mailing list, he probably will chime in soon.
> 
> -- 
> Ivan
> 
> 
> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
> Yeah, I cloned it locally in case it disappears and it seems to work 
> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't 
> really need to be a chrome extension. But I guess its usefulness is limited 
> if dev has stopped and the auto-complete isn't kept up-to-date with the 
> elasticsearch api.
> 
> Also if I recommended my clients use it (which I would, because it's one of 
> the better rest front ends for ES I've come across because of the 
> auto-complete) then they'd want to know whether they were actually allowed to 
> use it i.e. what licence it is.
> 
> 
> 
> On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:
> I just forked it in case it goes away. I haven't built a Chrome extension in 
> years. Let me re-figure out how to do it and update my fork with build/local 
> installation instructions.
> 
> The lack of a license might be problematic since default copyright provisions 
> apply: http://choosealicense.com/no-license/
> 
> -- 
> Ivan
> 
> 
> On Fri, Mar 28, 2014 at 1:37 AM, Tim S  wrote:
> I notice that https://github.com/bleskes/sense has a message saying "The 
> development of Sense has moved into Elasticsearch Marvel".
> 
> Does this mean that no further development will happen on github? I.e. if the 
> Marvel team find bugs in sense will the fixes be pushed to th

custom score with distance levenstein

2014-04-01 Thread maryline x
hi,

First of all, I am sorry about my English.

I want to score my documents using only the Levenshtein distance.
I tried to customize the Similarity, but it doesn't work (I can't manage to 
change the value of queryNorm).
I don't know how I can use function_score.
So I tried extending AbstractFloatSearchScript to do something like this.
How can I get the value of the query?


public float runAsFloat() {
    // Compare the indexed "nom" field against the query text.
    // Open question: where does nomQuery come from inside the script?
    LuceneLevenshteinDistance distance = new LuceneLevenshteinDistance();

    String nomDoc = docFieldStrings("nom").getValue();
    float score = distance.getDistance(nomDoc, nomQuery);

    return score;
}



And I think there is a better way to do this.
I look forward to your suggestions.
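For what it's worth, one way to wire a script like this into function_score — a hedged sketch I have not run, which assumes the runAsFloat script above is registered as a native script plugin under the (invented) name levenshtein_score and reads the query text from its params — would be:

```json
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "script_score": {
        "lang": "native",
        "script": "levenshtein_score",
        "params": { "nomQuery": "dupont" }
      },
      "boost_mode": "replace"
    }
  }
}
```

With boost_mode set to replace, the script's return value becomes the document score outright, which matches the goal of scoring by edit distance only.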


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/2fcdae1c-79b3-4056-9f6c-aa6bfc30271c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Sense on github abandoned?

2014-04-01 Thread Mark Walkom
+1 from me to this sentiment too, Sense as a standalone app is awesome.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 1 April 2014 19:59, Nicholas Summerlin wrote:

> I agree with Itamar. I'm a recent Elasticsearch user and Sense made it
> easier for me to learn the query syntax. I recommend it to all ES
> beginners. I only learned of Marvel later.
>
> +1 for keeping the Chrome plugin on the Web Store, perhaps with a
> disclaimer that it's incomplete and no longer maintained.
>
> --
> Nick
>
>
>
>  On Tue, Apr 1, 2014 at 10:51 AM, Itamar Syn-Hershko 
> wrote:
>
>> Boaz,
>>
>> Sense is a must-have tool for any developer playing with Elasticsearch,
>> at least in my opinion. I use it daily to quickly sketch and test modeling
>> ideas, or just to prove a point. This is also the first thing I ask my
>> customers to install whenever I give on-site consults.
>>
>> I'd ask you and Elasticsearch to continue releasing Sense as a chrome
>> plugin that we can install and use. Even if it gets updated less
>> frequently, it's still an awesome tool and 80% is better than nothing.
>>
>> You can't rely on having Marvel installed for using Sense - it's an
>> additional install step, and many remote staging/prod clusters don't have
>> it installed. Sense is really great as a client-side independent thing.
>>
>> You can quote me on that: working with Elasticsearch is not the same
>> without Sense available as a Chrome plugin.
>>
>> For your consideration,
>>
>> --
>>
>> Itamar Syn-Hershko
>> http://code972.com | @synhershko 
>> Freelance Developer & Consultant
>> Author of RavenDB in Action 
>>
>>
>> On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:
>>
>>> Hi Tim, Ivan,
>>>
>>> Sense, the chrome extension, was started as a hobby project of mine. I did
>>> it in my spare time  - I actually started it to prove it can be done in a
>>> weekend ;). As you know Sense is part of Marvel now where it will be
>>> professionally developed and get way more time and love. The github repo is
>>> not maintained anymore and the code as published on GitHub will not be
>>> developed further. I've updated the readme to make that more clear.
>>>
>>> As Tim noted, the Chrome extension has known bugs and does not support
>>> all of 1.0's features (like Aggregations). Since bug fixes are all going
>>> forward in Marvel, I've decided to remove the plugin from the Chrome Web
>>> Store so new users will not have a bad initial experience.
>>>
>>> I (and many others) am always happy to answer any questions/issues regarding
>>> Marvel on the mailing list. If you find any issue, please post it and it
>>> will be picked up. I know you both know this as we interacted before, but I
>>> wanted to say again for anyone else that might read this in the future.
>>> Cheers,
>>> Boaz
>>>
>>> On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:

 I would have suggested opening an issue on Github to clarify the
 license, but issues are disabled on the repo. I agree that Sense is a
 fantastic tool. I deploy apps using the Java API, but I formulate my
 thoughts and queries beforehand with Sense. Beforehand I was just using
 straight curl commands.

 Boaz is a regular on the mailing list, he probably will chime in soon.

 --
 Ivan


 On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:

> Yeah, I cloned it locally in case it disappears and it seems to work
> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't
> really need to be a chrome extension. But I guess its usefulness is 
> limited
> if dev has stopped and the auto-complete isn't kept up-to-date with the
> elasticsearch api.
>
> Also if I recommended my clients use it (which I would, because it's
> one of the better rest front ends for ES I've come across because of the
> auto-complete) then they'd want to know whether they were actually allowed
> to use it i.e. what licence it is.
>
>
>
> On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:
>
>> I just forked it in case it goes away. I haven't built a Chrome
>> extension in years. Let me re-figure out how to do it and update my fork
>> with build/local installation instructions.
>>
>> The lack of a license might be problematic since default copyright
>> provisions apply: http://choosealicense.com/no-license/
>>
>> --
>> Ivan
>>
>>
>> On Fri, Mar 28, 2014 at 1:37 AM, Tim S  wrote:
>>
>>> I notice that https://github.com/bleskes/sense has a message saying
>>> "The development of Sense has moved into Elasticsearch Marvel".
>>>
>>> Does this mean that no further development will happen on github?
>>> I.e. if the Marvel team find bugs in sense will the fixes be pushed to 
>>> the
>>> sense o

Re: Sense on github abandoned?

2014-04-01 Thread Nicholas Summerlin
I agree with Itamar. I'm a recent Elasticsearch user and Sense made it
easier for me to learn the query syntax. I recommend it to all ES
beginners. I only learned of Marvel later.

+1 for keeping the Chrome plugin on the Web Store, perhaps with a
disclaimer that it's incomplete and no longer maintained.

--
Nick



On Tue, Apr 1, 2014 at 10:51 AM, Itamar Syn-Hershko wrote:

> Boaz,
>
> Sense is a must-have tool for any developer playing with Elasticsearch, at
> least in my opinion. I use it daily to quickly sketch and test modeling
> ideas, or just to prove a point. This is also the first thing I ask my
> customers to install whenever I give on-site consults.
>
> I'd ask you and Elasticsearch to continue releasing Sense as a chrome
> plugin that we can install and use. Even if it gets updated less
> frequently, it's still an awesome tool and 80% is better than nothing.
>
> You can't rely on having Marvel installed for using Sense - it's an
> additional install step, and many remote staging/prod clusters don't have
> it installed. Sense is really great as a client-side independent thing.
>
> You can quote me on that: working with Elasticsearch is not the same
> without Sense available as a Chrome plugin.
>
> For your consideration,
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko 
> Freelance Developer & Consultant
> Author of RavenDB in Action 
>
>
> On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:
>
>> Hi Tim, Ivan,
>>
>> Sense, the chrome extension, was started as a hobby project of mine. I did
>> it in my spare time  - I actually started it to prove it can be done in a
>> weekend ;). As you know Sense is part of Marvel now where it will be
>> professionally developed and get way more time and love. The github repo is
>> not maintained anymore and the code as published on GitHub will not be
>> developed further. I've updated the readme to make that more clear.
>>
>> As Tim noted, the Chrome extension has known bugs and does not support
>> all of 1.0's features (like Aggregations). Since bug fixes are all going
>> forward in Marvel, I've decided to remove the plugin from the Chrome Web
>> Store so new users will not have a bad initial experience.
>>
>> I (and many others) am always happy to answer any questions/issues regarding
>> Marvel on the mailing list. If you find any issue, please post it and it
>> will be picked up. I know you both know this as we interacted before, but I
>> wanted to say again for anyone else that might read this in the future.
>> Cheers,
>> Boaz
>>
>> On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
>>>
>>> I would have suggested opening an issue on Github to clarify the
>>> license, but issues are disabled on the repo. I agree that Sense is a
>>> fantastic tool. I deploy apps using the Java API, but I formulate my
>>> thoughts and queries beforehand with Sense. Beforehand I was just using
>>> straight curl commands.
>>>
>>> Boaz is a regular on the mailing list, he probably will chime in soon.
>>>
>>> --
>>> Ivan
>>>
>>>
>>> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
>>>
 Yeah, I cloned it locally in case it disappears and it seems to work
 perfectly well standalone or as an elasticsearch plugin - IMO it doesn't
 really need to be a chrome extension. But I guess its usefulness is limited
 if dev has stopped and the auto-complete isn't kept up-to-date with the
 elasticsearch api.

 Also if I recommended my clients use it (which I would, because it's
 one of the better rest front ends for ES I've come across because of the
 auto-complete) then they'd want to know whether they were actually allowed
 to use it i.e. what licence it is.



 On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:

> I just forked it in case it goes away. I haven't built a Chrome
> extension in years. Let me re-figure out how to do it and update my fork
> with build/local installation instructions.
>
> The lack of a license might be problematic since default copyright
> provisions apply: http://choosealicense.com/no-license/
>
> --
> Ivan
>
>
> On Fri, Mar 28, 2014 at 1:37 AM, Tim S  wrote:
>
>> I notice that https://github.com/bleskes/sense has a message saying
>> "The development of Sense has moved into Elasticsearch Marvel".
>>
>> Does this mean that no further development will happen on github?
>> I.e. if the Marvel team find bugs in sense will the fixes be pushed to 
>> the
>> sense on github, and if I create a pull request on github will my fix 
>> find
>> its way into the version of sense included in Marvel?
>>
>> Regardless of the answer to the above, does the code on github have
>> any kind of licence? Even without Marvel, sense is still a useful tool.
>>
>> Thanks.
>>
>> --
>> You received this message because you are

Re: Sense on github abandoned?

2014-04-01 Thread Itamar Syn-Hershko
Boaz,

Sense is a must-have tool for any developer playing with Elasticsearch, at
least in my opinion. I use it daily to quickly sketch and test modeling
ideas, or just to prove a point. This is also the first thing I ask my
customers to install whenever I give on-site consults.

I'd ask you and Elasticsearch to continue releasing Sense as a chrome
plugin that we can install and use. Even if it gets updated less
frequently, it's still an awesome tool and 80% is better than nothing.

You can't rely on having Marvel installed for using Sense - it's an
additional install step, and many remote staging/prod clusters don't have
it installed. Sense is really great as a client-side independent thing.

You can quote me on that: working with Elasticsearch is not the same
without Sense available as a Chrome plugin.

For your consideration,

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Author of RavenDB in Action 


On Tue, Apr 1, 2014 at 11:44 AM, Boaz Leskes  wrote:

> Hi Tim, Ivan,
>
> Sense, the chrome extension, was started as a hobby project of mine. I did it
> in my spare time  - I actually started it to prove it can be done in a
> weekend ;). As you know Sense is part of Marvel now where it will be
> professionally developed and get way more time and love. The github repo is
> not maintained anymore and the code as published on GitHub will not be
> developed further. I've updated the readme to make that more clear.
>
> As Tim noted, the Chrome extension has known bugs and does not support all
> of 1.0's features (like Aggregations). Since bug fixes are all going
> forward in Marvel, I've decided to remove the plugin from the Chrome Web
> Store so new users will not have a bad initial experience.
>
> I (and many others) am always happy to answer any questions/issues regarding
> Marvel on the mailing list. If you find any issue, please post it and it
> will be picked up. I know you both know this as we interacted before, but I
> wanted to say again for anyone else that might read this in the future.
> Cheers,
> Boaz
>
> On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
>>
>> I would have suggested opening an issue on Github to clarify the license,
>> but issues are disabled on the repo. I agree that Sense is a fantastic
>> tool. I deploy apps using the Java API, but I formulate my thoughts and
>> queries beforehand with Sense. Beforehand I was just using straight curl
>> commands.
>>
>> Boaz is a regular on the mailing list, he probably will chime in soon.
>>
>> --
>> Ivan
>>
>>
>> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
>>
>>> Yeah, I cloned it locally in case it disappears and it seems to work
>>> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't
>>> really need to be a chrome extension. But I guess its usefulness is limited
>>> if dev has stopped and the auto-complete isn't kept up-to-date with the
>>> elasticsearch api.
>>>
>>> Also if I recommended my clients use it (which I would, because it's
>>> one of the better rest front ends for ES I've come across because of the
>>> auto-complete) then they'd want to know whether they were actually allowed
>>> to use it i.e. what licence it is.
>>>
>>>
>>>
>>> On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:
>>>
 I just forked it in case it goes away. I haven't built a Chrome
 extension in years. Let me re-figure out how to do it and update my fork
 with build/local installation instructions.

 The lack of a license might be problematic since default copyright
 provisions apply: http://choosealicense.com/no-license/

 --
 Ivan


 On Fri, Mar 28, 2014 at 1:37 AM, Tim S  wrote:

> I notice that https://github.com/bleskes/sense has a message saying
> "The development of Sense has moved into Elasticsearch Marvel".
>
> Does this mean that no further development will happen on github? I.e.
> if the Marvel team find bugs in sense will the fixes be pushed to the 
> sense
> on github, and if I create a pull request on github will my fix find its
> way into the version of sense included in Marvel?
>
> Regardless of the answer to the above, does the code on github have
> any kind of licence? Even without Marvel, sense is still a useful tool.
>
> Thanks.
>
> --
> You received this message because you are subscribed to the Google
> Groups "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to elasticsearc...@googlegroups.com.
>
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/elasticsearch/381ca707-f29c-4b81-8e5b-957c1c02fe0a%40goo
> glegroups.com
> .
> 

Re: Sense on github abandoned?

2014-04-01 Thread Boaz Leskes
Hi Tim, Ivan,

Sense, the chrome extension, was started as a hobby project of mine. I did it 
in my spare time  - I actually started it to prove it can be done in a 
weekend ;). As you know Sense is part of Marvel now where it will be 
professionally developed and get way more time and love. The github repo is 
not maintained anymore and the code as published on GitHub will not be 
developed further. I've updated the readme to make that more clear.

As Tim noted, the Chrome extension has known bugs and does not support all 
of 1.0's features (like Aggregations). Since bug fixes are all going 
forward in Marvel, I've decided to remove the plugin from the Chrome Web 
Store so new users will not have a bad initial experience.

I (and many others) am always happy to answer any questions/issues regarding 
Marvel on the mailing list. If you find any issue, please post it and it 
will be picked up. I know you both know this as we interacted before, but I 
wanted to say it again for anyone else who might read this in the future.
Cheers,
Boaz

On Friday, March 28, 2014 5:19:38 PM UTC+1, Ivan Brusic wrote:
>
> I would have suggested opening an issue on Github to clarify the license, 
> but issues are disabled on the repo. I agree that Sense is a fantastic 
> tool. I deploy apps using the Java API, but I formulate my thoughts and 
> queries beforehand with Sense. Beforehand I was just using straight curl 
> commands.
>
> Boaz is a regular on the mailing list, he probably will chime in soon.
>
> -- 
> Ivan
>
>
> On Fri, Mar 28, 2014 at 8:40 AM, Tim S  wrote:
>
>> Yeah, I cloned it locally in case it disappears and it seems to work 
>> perfectly well standalone or as an elasticsearch plugin - IMO it doesn't 
>> really need to be a chrome extension. But I guess its usefulness is limited 
>> if dev has stopped and the auto-complete isn't kept up-to-date with the 
>> elasticsearch api.
>>
>> Also if I recommended my clients use it (which I would, because it's one 
>> of the better rest front ends for ES I've come across because of the 
>> auto-complete) then they'd want to know whether they were actually allowed 
>> to use it i.e. what licence it is.
>>
>>
>>
>> On Friday, March 28, 2014 2:50:04 PM UTC, Ivan Brusic wrote:
>>
>>> I just forked it in case it goes away. I haven't built a Chrome extension 
>>> in years. Let me re-figure out how to do it and update my fork with 
>>> build/local installation instructions.
>>>
>>> The lack of a license might be problematic since default copyright 
>>> provisions apply: http://choosealicense.com/no-license/
>>>
>>> -- 
>>> Ivan
>>>
>>>
>>> On Fri, Mar 28, 2014 at 1:37 AM, Tim S  wrote:
>>>
 I notice that https://github.com/bleskes/sense has a message saying 
 "The development of Sense has moved into Elasticsearch Marvel".

 Does this mean that no further development will happen on github? I.e. 
 if the Marvel team find bugs in sense will the fixes be pushed to the 
 sense 
 on github, and if I create a pull request on github will my fix find its 
 way into the version of sense included in Marvel?

 Regardless of the answer to the above, does the code on github have any 
 kind of licence? Even without Marvel, sense is still a useful tool.

 Thanks.
  
 -- 
 You received this message because you are subscribed to the Google 
 Groups "elasticsearch" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to elasticsearc...@googlegroups.com.

 To view this discussion on the web visit https://groups.google.com/d/
 msgid/elasticsearch/381ca707-f29c-4b81-8e5b-957c1c02fe0a%
 40googlegroups.com
 .
 For more options, visit https://groups.google.com/d/optout.

>>>
>>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/elasticsearch/a8c17fc2-0f74-4161-992f-0a3e77e7153b%40googlegroups.com
>> .
>>
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: Performance problems with has parent filter

2014-04-01 Thread Karol Gwaj
There is not that much you can really do here:
parent/child queries tend to be very slow and eat a lot of heap space.

I had a similar performance problem.
In my case I had a 3-level relationship (parent/child/grandchild) and query 
time was on average 10x slower for every level.

So my suggestion would be to switch to using nested documents + the update API.
If your query time is more important than your update time, that is the way 
to go (in my case the query performance improvement was 100x).


http://www.elasticsearch.org/blog/managing-relations-inside-elasticsearch/
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html

Regards,
Karol Gwaj
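To sketch what Karol's suggestion could look like (illustrative only — the field names here are invented, not taken from anyone's actual mapping), the children become a nested field on the parent:

```json
{
  "series": {
    "properties": {
      "programs": {
        "type": "nested",
        "properties": {
          "title": { "type": "string" }
        }
      }
    }
  }
}
```

Queries then use the nested query/filter with "path": "programs", and child changes go through the update API on the parent document, so reads stay within a single document at the cost of reindexing the parent on every child change.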

On Sunday, March 30, 2014 8:28:33 AM UTC+1, Lauri wrote:
>
> Hi,
>
> I'm having performance problems with has parent filter.
>
> The mapping for the child document is:
> {
>   "program": {
> "_parent": { "type": "series" },
> ...
>   }
> }
>
> And for the parent document:
> {
>   "series": {
> ...
> "properties": {
>   ...
>   "subject":{
> "type": "object",
> "properties": {
>   ...
>   "_path": {
> "type": "object",
> "properties": {
>   "id": { "type": "string", "analyzer": "path_analyzer" }
>   ...
> }
>   }
> }
>   },
>   ...
> }
>   }
> }
>
> If I search documents of type program (the child) like this:
> {
>   "from": 0,
>   "size": 25,
>   "query": {
> "filtered": {
>   "query": { "match_all": {} },
>   "filter": {
> "has_parent": {
>   "filter": {
> "terms" : {
>   "subject._path.id" : [ "5-162" ]
> }
>   },
>   "parent_type" : "series"
> }
>   }
> }
>   }
> }
>
> It consistently takes around 160 milliseconds to run and finds 
> about 60k documents.
>
> If I search documents of type series (the parent) like this:
> {
>   "from" : 0,
>   "size" : 25,
>   "query" : {
> "filtered": {
>   "query": { "match_all": {} },
>   "filter": {
> "terms": {
>   "subject._path.id": [ "5-162" ]
> }
>   }
> }
>   }
> }
>
> It takes around 5 milliseconds and returns about 400 documents.
>
> The total count of program objects is about 1.7M and series objects 11k. 
> The index is fully optimized and the cluster is not doing anything else. 
> The index has 3 shards and 1 replica of each shard. There are three nodes 
> in the cluster. The nodes have twice as much RAM as the index size. Half 
> of the RAM is assigned to Elasticsearch. Elasticsearch version is 1.0. If I 
> use the bigdesk plugin, it looks like there is more than enough RAM. I'm not 
> seeing cache evictions or anything like that.
>
> So for me it looks like there is something weird going on as the has 
> parent filter runs more than 30 times slower than the actual parent query. 
> Is there anything I can do to make it faster?
>
> Thanks,
> Lauri
>
>



Re: Homogeneous distribution of primary shards

2014-04-01 Thread Mark Walkom
Sounds like you just need to give ES some time to rebalance.

Otherwise look at this, maybe you can adapt it to suit -
http://blog.sematext.com/2012/05/29/elasticsearch-shard-placement-control/

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
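As a concrete illustration of manual placement (hedged — the index and node names below are placeholders, and note that relocating a primary keeps it primary on the target node), the cluster reroute API can move individual shards:

```json
POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "myindex",
        "shard": 0,
        "from_node": "node1",
        "to_node": "node2"
      }
    }
  ]
}
```

There is no reroute command that flips an existing replica into a primary directly; promotion only happens when the current primary fails or its node leaves the cluster.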


On 1 April 2014 19:10, Pedro Plaza García  wrote:

> Our problem is not that other nodes are not promoted to primary when a
> node goes down. Our problem is that after the node goes down and comes
> back up, it sometimes ends up with no primary shards. Because of this,
> the load is not spread across that node. We need all nodes to hold the
> same number of primary shards so that the load is balanced across all
> nodes.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/9d675854-340b-4787-854a-958af5e98d73%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>



Re: Create Automated Cluster

2014-04-01 Thread Pedro Plaza García
Thanks vineeth mohan !! ;)



Re: Homogeneous distribution of primary shards

2014-04-01 Thread Pedro Plaza García
Our problem is not that other nodes are not promoted to primary when a 
node goes down. Our problem is that after the node goes down and comes back 
up, it sometimes ends up with no primary shards. Because of this, the load 
is not spread across that node. We need all nodes to hold the same number 
of primary shards so that the load is balanced across all nodes.



Re: Exact difference between "Query_string" query and "multi_match" query with cross_fields

2014-04-01 Thread Mark Harwood
The first difference is that the match/multi-match queries aim to avoid the 
sorts of syntax errors that can occur when using query_string, which 
supports several special characters that act as operators and must be used 
correctly or the query will fail to parse.

Given the simpler nature of the match/multi-match queries it is possible to 
apply some smarter logic to ranking how the terms match in a multi-field 
context. 
Lucene has a natural tendency to rank the rarer elements in a query more 
highly and you can probably see this effect in multi-field query_string 
queries in the way it can favour bizarre interpretations of your search. As 
an example, if searching firstName and lastName fields the search term 
"John" will rank higher if found in the lastName field rather than the more 
commonplace firstName field due to Lucene's "IDF" (Inverse Document 
Frequency) ranking favouring the scarcity of lastName:John.
The new multi-match algorithm will instead attempt to favour the most 
likely interpretation of each search term and so firstName:John should rank 
higher than lastName:John. 

Cheers
Mark
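A rough way to picture the difference (my own pseudo-notation borrowing the "blended term" description, not actual Lucene output): for the query "Will Smith" with operator and over first_name and last_name, the two strategies group clauses differently:

```text
per-field grouping (query_string / most_fields style):
  (+first_name:will +first_name:smith) (+last_name:will +last_name:smith)

per-term grouping (cross_fields):
  +blended(will,  fields: [first_name, last_name])
  +blended(smith, fields: [first_name, last_name])
```

In the blended form each term's document frequency is taken across both fields, which is what keeps a rare lastName:John from dominating the score.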



On Friday, March 28, 2014 5:50:21 AM UTC, Prashy wrote:
>
> Hi ES users, 
>
> While exploring the new release of ES 1.1.0 I came across the new 
> cross_fields type in the multi_match query. 
> So is there any difference between the "query_string" query and the "multi_match" 
> query with cross_fields? 
>
> for ex: 
> if my query_string query is: 
> "query_string": { 
>   "fields": [ 
> "title", 
> "content" 
>   ], 
>   "query": "mobile phones" 
>
> So it will search for mobile or phones in either of title or content. like 
> the result will come as 
> *{mobile in title | mobile in content | phones in title | phones in 
> content}* 
>
> Similarly for multi_match query with cross_fields: 
> { 
>   "multi_match" : { 
> "query":  "mobile phones", 
> "type":   "cross_fields", 
> "fields": [ "title", "content" ] 
>   } 
> } 
>
> So in this also the search result will be like: 
> *{mobile in title | mobile in content | phones in title | phones in 
> content}* 
>
>
> So just let me know if my interpretation is wrong in any of these scenarios. 
> If not, then what exactly is the difference between these two? 
>
>
>
>
>
>
>
>
> -- 
> View this message in context: 
> http://elasticsearch-users.115913.n3.nabble.com/Exact-difference-between-Query-string-query-and-multi-match-query-with-cross-fields-tp4052982.html
>  
> Sent from the ElasticSearch Users mailing list archive at Nabble.com. 
>



Re: _update API in elastic search

2014-04-01 Thread David Pilato
You are looking for this: 
https://github.com/elasticsearch/elasticsearch/issues/1607


May be this plugin could help: 
https://github.com/yakaz/elasticsearch-action-updatebyquery in the meantime.
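Failing that, a manual workaround is to scan/scroll over the matching ids and replay the scripted update per document — sketched below as Sense-style requests (untested; the index and type names are copied from the question, and {id} is a placeholder for each hit id):

```json
POST /prashant_cata/session/_search?search_type=scan&scroll=1m
{ "query": { "match_all": {} }, "size": 500 }

POST /_search/scroll?scroll=1m
# body: the scroll_id returned by the previous call

POST /prashant_cata/session/{id}/_update
# one request per hit id from each scroll page
{ "script": "ctx._source.Time = \"2014-03-25T14:31:12\"" }
```

This trades one round trip for many, so it is only a stopgap until update-by-query is available.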


-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr


On 1 April 2014 at 09:06:43, Prashant Agrawal (prashant.agra...@paladion.net) 
wrote:

Hi ES users,  

Can we apply the _update API to multiple ids?  

For example, if I want to update the Time field of my index/type/id as below:  
curl -XPOST http://192.168.0.164:9200/prashant_cata/session/123/_update -d  
'{  

"script": "ctx._source.Time = \"2014-03-25T14:31:12\""  
}'  

So is there any way I can update the Time field of all docs, like:  
curl -XPOST http://192.168.0.164:9200/prashant_cata/session/*/_update -d  
'{  

"script": "ctx._source.Time = \"2014-03-25T14:31:12\""  
}'  

Or is there another way to handle this scenario?  









Re: Accuracy on cardinality aggregate

2014-04-01 Thread Adrien Grand
Hi Henrik,

Indeed, there is no way to compute exact unique counts. The reason why we
don't expose such a feature is that it would be very costly. In your case,
the cardinality is not too large so the terms aggregation helped compute
the number of unique values, but if the actual cardinality had been very
large (e.g. 100M), it is very likely that trying to use the terms agg to do
so would have required a lot of memory (maybe triggering out-of-memory
errors on your nodes), been very slow and caused a lot of network traffic.
We will try to clarify this through documentation or a blog post soon.

Thanks for trying out this new aggregation!
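One related knob worth mentioning: the cardinality aggregation accepts a precision_threshold setting (40000 is the documented maximum). Counts below the threshold are close to exact; above it they become approximate, so raising it trades memory for accuracy. A sketch reusing the screen_name field from the example:

```json
{
  "aggs": {
    "unique_users": {
      "cardinality": {
        "field": "screen_name",
        "precision_threshold": 40000
      }
    }
  }
}
```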



On Mon, Mar 31, 2014 at 11:09 PM, Henrik Nordvik  wrote:

> Ah, so there is currently no easy way of getting exact unique counts out
> of Elasticsearch?
>
> I found a manual way of doing it:
>
> curl -s 'http://localhost:9200/twitter-2014.03.26/_search' -d '{
> "facets": { "a": {  "terms": { "field": "screen_name", "size":
> 20},"facet_filter": {"query": {"term": {"lang": "en"},"size": 0}' |
> ./jq '.facets.a.terms | length'
> 145474 (vs 145541)
> curl -s 'http://localhost:9200/twitter-2014.03.26/_search' -d '{
> "facets": { "a": {  "terms": { "field": "screen_name", "size":
> 20},"facet_filter": {"query": {"term": {"lang": "ja"},"size": 0}' |
> ./jq '.facets.a.terms | length'
> 50949 (vs 50824)
>
> So the count is quite close! Thank you.
>
>
>
> On Friday, March 28, 2014 10:32:55 PM UTC+1, Binh Ly wrote:
>>
>> value_count is the total number of values extracted per bucket. This
>> example might help:
>>
>> https://gist.github.com/bly2k/9843335
>>
>



-- 
Adrien Grand



Re: "Locale" parameter in query_string query

2014-04-01 Thread Robert Muir
This controls the behavior of the string conversions triggered by
lowercase_expanded_terms.

For example Turkish/Azeri have different casing characteristics:
http://en.wikipedia.org/wiki/Dotted_and_dotless_I
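The mismatch is easy to see even with plain, locale-independent Unicode case mapping (which is what applies when no locale is given). A Python sketch using the Turkish dotted İ and dotless ı:

```python
dotted_capital = "\u0130"   # 'İ', dotted capital I used in Turkish
dotless_small  = "\u0131"   # 'ı', dotless small i

# Locale-independent lowercasing keeps the dot as a combining mark (U+0307),
# so the result is two code points, not the single 'i' a Turkish user expects.
lowered = dotted_capital.lower()
print([hex(ord(c)) for c in lowered])  # ['0x69', '0x307']

# Uppercasing the dotless 'ı' folds it onto plain ASCII 'I'.
print(dotless_small.upper() == "I")  # True
```

Under a Turkish locale, 'İ' should lowercase to plain 'i' and 'I' to 'ı', which is exactly the behavior the locale parameter controls.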

On Tue, Apr 1, 2014 at 2:43 AM, Prashant Agrawal
 wrote:
> Any updates on the above query?
>
>
>
>



Re: Removing unused fields (more Lucene than ES but..)

2014-04-01 Thread Robert Muir
On Tue, Apr 1, 2014 at 2:41 AM, Paul Smith  wrote:
>
> Thanks Robert for the reply; all of that sounds fairly hairy. I did try a
> full optimize of the shard index using Luke, but the residual über-segment
> still has the field definitions in it. Are you saying in (1) that creating
> a new shard index through a custom call to IndexWriter.addIndexes(..)
> would produce a _fully_ optimized index without the fields, and that this is
> different from what an optimize operation through ES would do? This is more a
> technical question now on what the difference is between the optimize call
> and a manual create-new-index-from-multiple-readers. (I actually thought
> that's what optimize does in practical terms, but there's obviously more
> going on under the hood on these different code paths.)
>
> We're going the reindex route for now; I was just hoping there was some
> special trick a little easier than the above. :)
>

Optimize and normal merging don't "garbage collect" unused fields from
fieldinfos:

https://issues.apache.org/jira/browse/LUCENE-1761

The addIndexes trick is also a forced merge, but it decorates the
readers to be merged: lying about and hiding the fields as if they
don't exist.



_update API in elastic search

2014-04-01 Thread Prashant Agrawal
Hi ES users,

Can we apply the _update API to multiple ids?

For example, if I want to update the Time field of my index/type/id as below:
curl -XPOST http://192.168.0.164:9200/prashant_cata/session/123/_update -d
'{

  "script": "ctx._source.Time = \"2014-03-25T14:31:12\""
}'

So is there any way I can update the Time field of all docs, like:
curl -XPOST http://192.168.0.164:9200/prashant_cata/session/*/_update -d
'{

  "script": "ctx._source.Time = \"2014-03-25T14:31:12\""
}'

Or is there another way to handle this scenario?








Need some help for creating my model

2014-04-01 Thread Stefan Kruse
Hello, I am new to Elasticsearch and I need some help/hints for creating my 
model. In my relational database (MySQL) I have the following situation:

Table: Subcategories
Columns: Name, Company

Every subcategory has a name and a 1:n relation to companies.

The next is:

Table: Company
Columns: Name, Subcategory

Every company has a name and a 1:n relation to subcategories.

How can I model this for Elasticsearch? I have a search form with the 
following behavior: the user can search for subcategories, but only those 
that have a company assigned should be shown. The user then selects a 
subcategory and submits the subcategory id, and only the companies assigned 
to that subcategory should be shown.

I would be very thankful for some hints.

Thanks

Stefan
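One common approach is to denormalize: index each company as its own document with a multi-valued field holding the ids/names of its subcategories. A terms facet on that field then lists only subcategories that actually have companies, and once the user picks one, a filtered query returns its companies. A sketch (field names such as subcategory_id are made up for illustration):

```json
{
  "query": {
    "filtered": {
      "query":  { "match_all": {} },
      "filter": { "term": { "subcategory_id": 42 } }
    }
  }
}
```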
