Hi!
I'm trying to use the percentiles aggregation in ES 1.0, as described
here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.x/search-aggregations-metrics-percentile-aggregation.html
But I only get this error, reporting that the aggregator does not exist:
"Could not find agg
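For reference, a minimal percentiles request body looks like this (index and field names here are invented). The aggregation only exists from ES 1.0.0 onward, so a "could not find aggregator" error often means the node answering is actually an older 0.90.x one:

```python
import json

# Sketch of a percentiles aggregation request body (ES >= 1.0).
# "load_time" is a made-up numeric field name.
body = {
    "size": 0,
    "aggs": {
        "load_time_percentiles": {
            "percentiles": {"field": "load_time"}
        }
    }
}

# You would POST this to http://host:9200/<index>/_search
print(json.dumps(body, indent=2))
```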
Hello
Where can I find more documentation on script fields?
I saw values.length and values.size,
but I couldn't find many of the available functions documented on elasticsearch.org.
Thanks
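For what it's worth, helpers like .values and .size() come from MVEL, the default script language in 0.90/1.x, so the MVEL docs are the place to look beyond the ES reference. A sketch of a script_fields request (the index and the "tags" field are invented):

```python
import json

# Sketch: a search request using script_fields (MVEL syntax).
# doc['tags'].values is the list of values for a multi-valued
# field, and .size() is its length.
body = {
    "query": {"match_all": {}},
    "script_fields": {
        "tag_count": {"script": "doc['tags'].values.size()"}
    }
}
print(json.dumps(body))
```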
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group
Hey,
you can also use geo_shape instead of geo_point, as there is a specific
geo_shape of type point. Then your query should work.
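A minimal sketch of that, with invented index/type/field names ("doc", "location"):

```python
import json

# Mapping sketch: a geo_shape field. A point is then indexed as
# {"type": "point", "coordinates": [lon, lat]} (GeoJSON order).
mapping = {
    "doc": {
        "properties": {
            "location": {"type": "geo_shape"}
        }
    }
}
document = {
    "location": {"type": "point", "coordinates": [-71.34, 41.12]}
}
print(json.dumps(mapping))
```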
--Alex
On Thu, Mar 6, 2014 at 5:48 PM, Alain Perry wrote:
> I'm quite the newbie when it comes to elasticsearch or GIS matters, so
> please excuse any dumb q
Hey,
can you tell us which timeout you are speaking about? Which one did you
configure, and where did you configure it?
--Alex
On Thu, Mar 6, 2014 at 1:11 PM, ssr wrote:
> We have a 20-node ES 0.90.2 cluster with about 8 million documents. We use
> pyes for talking to elasticsearch. We are using 4 primary
And Java 7 as well!
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 7 March 2014 18:35, Jakub Muszynski wrote:
> On Fri, Mar 7, 2014 at 4:08 AM, David Pilato wrote:
>
>> Mixing JVM versions won't work for sure.
>>
On Fri, Mar 7, 2014 at 4:08 AM, David Pilato wrote:
> Mixing JVM versions won't work for sure.
>
Just to clarify for future debugging, some differences between my test
virtual machine A and the current machine B are:
* A: Debian 7 vs B: Debian 6
* A: Java 7 vs B: Java 6
I have Java 7 and debia
If I understand your question: when you get the answer at step 2, your document
is on all nodes which require it.
But it is not immediately available for search, though a GET will work.
At most one second later, it will be available for search (on all nodes, that is).
For demo purposes, you can force the refresh u
Step 2.
But as I said, you don't cluster a document, you might want to recheck your
terminology :)
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 7 March 2014 17:55, prashy wrote:
> There are two scenario.
> 1) I
I sometimes make the same mistake with JVM memory parameters.
-Xmx512 -Xms512 (that is 512 bytes, not megabytes!) definitely does not give
elasticsearch enough memory. :-)
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 March 2014 at 07:05, Swaroop CH wrote:
For posterity note, the problem was solve
No :-)
It means that I don't recall anyone having done it already.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 March 2014 at 06:29, Nitesh Earkara wrote:
Does anyone have any idea on this?
> On Thursday, March 6, 2014 4:57:55 PM UTC+5:30, Nitesh Earkara wrote:
> Hi,
>
> We hav
There are two scenarios.
1) I am submitting the documents to ES for indexing.
2) I am executing a search query using the cluster plugin.
To elaborate with an example:
Step 1:
curl -XPOST 'http://192.168.0.179:9200/prashant' -d
{
"mappings": {
"emp": {
"properties": {
If you still have your data source, use it as you used it the first time you
created your index.
If not, if you did not disable _source, you could read all your data from the
old cluster using scan&scroll with a matchAll query and push them into new
cluster using Bulk.
Have a look at https://gi
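The scan&scroll plus Bulk approach can be sketched roughly as below. The transport calls are left out; what is shown (with invented sample data) is the reshaping of scroll hits into a _bulk body, which is the part people usually trip over:

```python
import json

def hits_to_bulk_body(hits, target_index):
    """Turn scan&scroll hits into a _bulk request body (NDJSON).

    Each hit becomes one action line plus its _source line.
    """
    lines = []
    for hit in hits:
        lines.append(json.dumps({
            "index": {
                "_index": target_index,
                "_type": hit["_type"],
                "_id": hit["_id"],
            }
        }))
        lines.append(json.dumps(hit["_source"]))
    return "\n".join(lines) + "\n"

# Invented sample hits, shaped like a scroll response's hits.hits
hits = [
    {"_index": "old", "_type": "doc", "_id": "1", "_source": {"user": "a"}},
    {"_index": "old", "_type": "doc", "_id": "2", "_source": {"user": "b"}},
]
body = hits_to_bulk_body(hits, "new")
# You would POST this body to http://host:9200/_bulk
print(body)
```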
Lowercase the value in your filter, as a term filter is not analyzed.
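In other words, a term filter does no analysis at all, so the value must exactly match the token stored at index time (which the standard analyzer lowercases). A sketch with names taken from the example below:

```python
import json

# A term filter is not analyzed: if the field went through the
# standard analyzer at index time, the indexed token is lowercased,
# so the filter value must be lowercased by hand.
user_input = "HwmXygzK"
body = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {"term": {"occupations": user_input.lower()}},
        }
    }
}
print(json.dumps(body))
```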
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 March 2014 at 06:18, Emilio García-Pumarino Álvarez wrote:
Hi!
I want to filter my search by a value in an array. I have this in ES:
{
"_id" : "BeyKH2Cx",
"occupations" :
Clustering doesn't happen at a document level, it happens at a node level, i.e.
you have a cluster of N nodes.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 7 March 2014 17:42, prashy wrote:
> >>The indexing happens a
>>The indexing happens as soon as you post a document to ES, not as you query
it.
ES will automatically replicate the document (depending on your setup) at
the same time.
This is fine for indexing.
But what happens exactly with respect to clustering? By clustering I mean: I
wanted to know, say I submitted one doc
I don't think that applies to ES.
The indexing happens as soon as you post a document to ES, not as you query
it.
ES will automatically replicate the document (depending on your setup) at
the same time.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.co
Hi Mark,
I read somewhere that there is clustering like real-time and on-demand, so I
was keen to know about that.
I have one more concern that might be a silly one, but I still wanted to know
"whether clustering happens at the time when we are indexing the
data to ES or it happens whil
For posterity note, the problem was solved by specifying
{"index.refresh_interval": "5s"} - note the "s" : by specifying just "5", ES
assumes 5 milliseconds!
Regards,
Swaroop
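The fix above can be sketched as a settings-update body; the unit suffix is the whole point, since a bare number is read as milliseconds:

```python
import json

# "5s" means five seconds; a bare "5" would be taken as 5 milliseconds.
settings = {"index": {"refresh_interval": "5s"}}

# You would PUT this to http://host:9200/<index>/_settings
print(json.dumps(settings))
```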
07.03.2014, 11:28, "Swaroop CH" :
> Hello,
>
> We have a brand-new ES 1.0.1 cluster of 3 m2.xlarge machines, we set
>
Hello,
We have a brand-new ES 1.0.1 cluster of 3 m2.xlarge machines, we set
`index.refresh_interval` to -1, `index.number_of_replicas` to 0,
`index.number_of_shards` to 10 and indexed about half a million documents in
about 2000 indexes, this completed successfully in about 10 hours.
However,
Hi,
We use our own SPM for Elasticsearch. It has classic threshold-based
alerts as well as alerts based on automatic anomaly detection -
http://blog.sematext.com/2013/10/15/introducing-algolerts-anomaly-detection-alerts/
. It's a SaaS, not a plugin, but maybe it would work for you.
Otis
--
Does anyone have any idea on this?
On Thursday, March 6, 2014 4:57:55 PM UTC+5:30, Nitesh Earkara wrote:
>
> Hi,
>
> We have plugin fsriver to search within files in filesystem and jdbc-river
> for SQL database.
>
> Please let me know if plugin is available for ElasticSearch to search data
> within S
Hi!
I want to filter my search by a value in an array. I have this in ES:
{
"_id" : "BeyKH2Cx",
"occupations" : ["d5ot2bwu","HwmXygzK"],
"name" : "Matthew McConaughey"
},
{
"_id" : "BeyKH2Cx",
"occupations" : ["HwmXygzK"],
"name" : "Matthew Nilan"
}
And I execute a search as below:
GET mongoindex/pr
ES indexes data as soon as it receives it, it is then available right after
that. It's as close to real time as it can get.
There is no concept of on demand, unless you are thinking of something
else.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
I just wanted to know whether there is any difference between real-time vs
on-demand clustering with respect to the number of documents to be indexed.
--
View this message in context:
http://elasticsearch-users.115913.n3.nabble.com/Real-time-vs-On-demand-cluster-tp4051151p4051246.html
Sent from the ElasticSearch Users ma
Thanks David.
I am new to this. How do you reindex? Is there an API that I can call to
reindex all indexes?
Thanks in advance.
Michael
On Tuesday, March 4, 2014 11:46:49 PM UTC-8, David Pilato wrote:
>
> I think you have to reindex.
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr /
This is exactly what I tried to describe. Something like:
{
  "index" : "index_name",
  "alias" : "user_{user_id}",
  "filter" : {
    "term" : {
      "user" : "{user_id}"
    }
  },
  "routing" : "r{user_id}"
}
It does not exist yet (or I missed it) but I think it cou
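Per-user filtered aliases with routing can already be created one at a time through the _aliases endpoint; what does not exist, as far as I know, is the templated form described above. A sketch with invented names and an example user id:

```python
import json

user_id = "12"  # invented example user
actions = {
    "actions": [
        {
            "add": {
                "index": "index_name",
                "alias": "user_%s" % user_id,
                "filter": {"term": {"user": user_id}},
                "routing": "r%s" % user_id,
            }
        }
    ]
}
# You would POST this to http://host:9200/_aliases
print(json.dumps(actions))
```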
I think there is a small performance hit with using the http output, but I
haven't tested that so don't take it as definitive.
And the removal of dependencies between ES and LS is worth it to me anyway.
Java 6 isn't recommended, and if I recall correctly isn't supported with
LS, you want 7.
Regar
Mixing JVM versions won't work for sure.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 March 2014 at 00:22, sirkubax wrote:
On Thursday, 6 March 2014 at 23:54:26 UTC+1, sirkubax wrote:
>
> Hi David,
>
> This is what I've been wondering about
> I hav
Not sure I fully understand what you are trying to do, but maybe this could
answer your question: "Change (disable) Text Analysis at Index Time", which
means to me that you don't know in advance whether you want to analyze a field,
and that you want to make that decision at index time.
http:
Hi there! I have a question regarding the regexp query. Is there any way to
enable a case-insensitive flag when doing a regexp query? I didn't find any way
to specify that, neither in the regexp filter nor in the regex functionality of
the query_string query. However, it is available in terms facet regex patterns
I'm not sure what is wrong with this dynamic template definition; when I
try to load data, the curl command does not return, and there are no errors
in the ES log either.
Here is the sample GIST to re-create.
https://gist.github.com/hariinfo/9403577
ES Version 1.0.1
Hi,
I'm trying to make the elasticsearch-view-plugin (
https://github.com/tlrx/elasticsearch-view-plugin) work with the 0.90.*
version of elasticsearch (
https://github.com/tlrx/elasticsearch-view-plugin/pull/1), and after a few
changes to make it compile, it started to work on localhost.
Unfortunatel
How does one change (or disable) the text analyzer for certain fields at
index time?
I am injecting reasonably structured data and want to exclude most of my
data from text analysis.
In particular, the "_", "-" and "." characters in my object names wreak
havoc when the analysis of this dat
After some aggressive limiting I think I have managed to get what I wanted.
I think it was one of the caches, probably field cache that was eating up
all the memory and never letting it go.
If someone has a similar problem and a similar use case (daily-weekly
reindexing, not that many searches),
The hit from the command line has the same wrong integer, but I cannot
repeat this problem using a new index. Could it be corruption? Some of my
other indices were corrupted because of node thrashing: new nodes kept
being started after updating to 1.0.1 because I didn't know the command
line op
Your idea sounds reasonable. The only thing to be aware of is client nodes
generally need to run the same version of ES and also Java version as the
server/data nodes. Also you'll need to maintain an IP list in each client
node for it to see the server/data nodes in the case of unicast discovery
There's nothing else in the logs.
On Thu, Mar 6, 2014 at 11:18 PM, Binh Ly wrote:
> I would check the log files and see if there is anything else there...
>
I passed 1.
Rob
On Thu, Mar 6, 2014 at 6:13 PM, Binh Ly wrote:
> Curious, when you ran optimize, what did you pass to the max_num_segments
> parameter?
>
On Thursday, 6 March 2014 at 23:54:26 UTC+1, sirkubax wrote:
>
> Hi David,
>
> This is what I've been wondering about
> I have copied elasticsearch 0.90.9 from my virtual machine I've been
> testing with a month ago, but no luck (it was installed with
> https://github.com/valenti
I would check the log files and see if there is anything else there...
FWIW, I created your mapping, indexed the document you specified at the
top, ran the query and it returned the document to me. So maybe something
else is going on; for example, are you searching the right index?
If you're using Logstash, remember it creates 1 index per day by default,
so it is po
Curious, when you ran optimize, what did you pass to the max_num_segments
parameter?
Currently this is run on 4 nodes, with the possibility of adding new nodes.
I get nothing in the logs.
I don't completely understand this for my use case:
https://github.com/elasticsearch/elasticsearch/pull/5180 .
Ideally what I would like, would be not to create so many aliases, but to
have a t
Hi,
I've performed a delete-by-query, and the number of deleted_docs and the index
size_in_bytes do not decrease. I've also looked through the ES mailing
group and tried the recommended optimize, but I'm still not seeing the
expected results. Here is my test case:
*curl http://localhost:9200/apache
I'm sorry, I restarted ES and it stopped happening. I will try and
reproduce it and email again if successful.
On 6 Mar 2014 22:32, "Binh Ly" wrote:
> This is interesting. Can you try adding a match_all filter or query to the
> percolate call and see if that makes any difference?
>
> --
> You rec
I am using 1.0.0.Beta2, with an explicit mapping, and ran this in the Sense
plugin. I will try from command line.
I'm curious about the exact environment you are running these commands
from. I tested it quickly (ES 1.0.1) and it works fine for me from the curl
command line (i.e. I indexed the value, it was mapped automatically to long,
and querying it back gave the same value).
I've heard cases where some peop
Hi David,
This is what I've been wondering about
I have copied elasticsearch 0.90.9 from my virtual machine I've been
testing with a month ago, but no luck (it was installed
with the https://github.com/valentinogagliardi/ansible-logstash ansible play).
There is logstash 1.3.2, I will copy it too, a
You'd probably need to first index that field as not analyzed, or something
that does not strip the quotes. And then use the match query, like for
example
{
"query": {
"match": {
"text": "\"hello world\""
}
}
}
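The "not analyzed" part can be sketched as a mapping (type and field names invented; the multi-field "fields" shorthand assumes ES 1.x). With this, the quotes survive indexing on the raw sub-field while the main field stays analyzed:

```python
import json

# Mapping sketch: keep the raw text (quotes included) in an
# unanalyzed sub-field while still analyzing the main field.
mapping = {
    "post": {
        "properties": {
            "text": {
                "type": "string",
                "fields": {
                    "raw": {"type": "string", "index": "not_analyzed"}
                },
            }
        }
    }
}
print(json.dumps(mapping))
```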
Unfortunately, paging is not available for the terms facet. You certainly
can ask for all the terms in the terms facet using all_terms: true, but
you'd still have to page it out yourself in your application.
This is interesting. Can you try adding a match_all filter or query to the
percolate call and see if that makes any difference?
If you are not concerned with relevance scores, you might want to use
filters exclusively. This article describes what filters are and how to use
them effectively:
http://www.elasticsearch.org/blog/all-about-elasticsearch-filter-bitsets/
Or use the elasticsearch_http output and not worry about version
compatibility :)
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 7 March 2014 08:51, David Pilato wrote:
> I think this logstash version is not compa
If you're referring to Carrot2 clustering, you might find that information
here:
http://project.carrot2.org/faq.html#scalability
I upgraded to ES 1.0.1 and I have it running on our Test server, but it
keeps stopping after being up for a few hours. It is just a single node
with 5 shards. Any ideas on what might be causing this?
[2014-03-06 11:30:40,782][INFO ][node ] [Killraven]
started
[2014-03-06 14:
So ES is storing the exact integer and matching against it, but does not
show the exact integer in the returned document. How can I get that?
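One common culprit here (an assumption about this case, not a confirmed diagnosis): ids beyond 2^53 cannot be represented exactly as an IEEE-754 double, so any JSON layer that decodes numbers as doubles (a browser-based tool like Sense, for instance) rounds them on display, while the stored value and term matching stay exact. A quick demonstration:

```python
# Doubles carry a 53-bit mantissa, so integers beyond 2**53 lose
# precision when a client decodes JSON numbers as floats.
doc_id = -6001495857799773000

as_double = float(doc_id)          # what a double-based JSON parser sees
assert int(as_double) != doc_id    # the round-trip is lossy
assert abs(doc_id) > 2 ** 53       # ...because the id exceeds 2**53

# Workaround: map/send such ids as strings so no client rounds them.
print(int(as_double))
```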
I think this logstash version is not compatible with elasticsearch 1.0.1.
You should try with another elasticsearch version I think.
My 2 cents
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 6 March 2014 at 22:48, sirkubax wrote:
Hi,
I'm trying to run a server with el
I have never seen that number of aliases. That means you have 5 million users?
Nice project ;-)
I guess here that the cluster state is getting so big that it takes more and
more time to update it and copy it to all nodes.
BTW, how many nodes do you have for those 200 shards?
Do you see anything in
Hi,
I'm trying to run a server with elasticsearch and logstash.
I configured minimalistic settings, and still I cannot get it running.
In the ES log I see:
[transport.netty ] [lekNo1] Message not fully read (request) for
[30] and action [], resetting
The elasticsearch.yml contains:
cl
Can you also confirm that the Java 1.7 update versions are all the same,
i.e. the same Java 1.7.0_XX?
They are actually two different things. parent is to identify parents
for parent-child relationships. routing is to tell ES to shortcut the
search/indexing only to a specific shard.
It's interesting that the second call works (it must have been implemented
for convenience) but in general if
For the purposes of the example, "metric.batchId" should be just "id".
I've got a document with a long integer identifier:
{
"id": -6001495857799773000
}
Trying to filter based on this identifier returns nothing:
{
"query": {
"filtered": {
"filter": {
"term": {"id": -6001495857799773000}
}
}
}
}
I sw
First of all, thank you for building ElasticSearch. It is a truly awesome
product.
I am trying to use the "User data flow". For this, I create a single index
and multiple aliases inside of it. In my use case, I have about 5 million
aliases to add.
The alias structure roughly looks like this:
{
'in
I'm running Elasticsearch 1.0.1 using the following
$ES_HOME/config/default-mapping.json:
{
"_default": {
"properties": {
"foo": {
"type": "nested",
"include_in_all": false,
"properties": {
"bar": {
"type": "string",
"index": "n
Binh! Thanks so much, it worked. But how do I limit it from the top
based on max score?
I have an additional complication in that the parent has multiple children
types. I figured out how to assign score of 0 to the children that should
not contribute to the final score, but now I would l
Is there a plugin or API support for monitoring key ES metrics and alerting
the devops about situations when some node in a cluster fails or there is
a spike in latency due to whatever reason?
What are the best practices here, and what do people usually do?
thanks
Thanks David.
I've read the differences between query and filter a few times, and I do not
fully grasp the difference.
Here is my understanding: a filter is post-processing logic on the data set
resulting from a query? Is this right? Why would that be faster?
It is pointed out in the docs that it is fas
I've tried to configure logrotate to just zip up the daily files that
elasticsearch creates, but I just can't figure out how...
So, I thought I'd configure elasticsearch to just use one file and not
create daily files. That way logrotate can deal with rotation.
In logging.yml I see:
appender:
Thank you David. That cleared a few things
Hello everyone. First of all, my apologies for following up on this so
soon.
I'm wondering if anyone has got things working with script scoring and can
share their script, or help me with what might be wrong with the script
below.
"script_score": {
"script": "doc['boostValue'].empty || (doc
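A null-safe script along those lines might look like this. This is a sketch, not the poster's actual script: the "boostValue" field name is from the original, but the fallback value of 1 is an assumption, and MVEL is assumed as the script language:

```python
import json

# Sketch: a script_score that falls back to 1 when the field is
# missing/empty, otherwise uses the stored boost value (MVEL syntax).
script = "doc['boostValue'].empty ? 1 : doc['boostValue'].value"
body = {
    "query": {
        "function_score": {
            "query": {"match_all": {}},
            "script_score": {"script": script},
        }
    }
}
print(json.dumps(body))
```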
No, that doesn't seem to work.
On Thu, Mar 6, 2014 at 5:29 PM, Amit Soni wrote:
> I think you can do that by specifying a different id for each of the
> different suggestion types you want to include in the request.
>
> "suggest" : {
> "id_1" : {
> ...//put your suggestor for on
Yes!
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 6 March 2014 at 18:32:02, Hari Prasad (iamhari1...@gmail.com) wrote:
Thank you, David, for the response.
So until this new setting that you mention is available, will I have to mount
the shared location on all the
Hi,
At the moment, the delimiter in a path_hierarchy tokenizer must be a single
char.
For example, something like this is not allowed:
"tokenizer": {
"arrow_path_tokenizer": {
"type": "path_hierarchy",
"delimiter": "-->"
}
}
Please, can s
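A common workaround (an assumption on my part, not a documented fix) is to map the multi-character delimiter to a single reserved character with a mapping char_filter, so path_hierarchy only ever sees a single-char delimiter:

```python
import json

# Analysis sketch: "-->" is first rewritten to "/" by a mapping
# char_filter; path_hierarchy then splits on the single char "/".
# All analyzer/filter names here are invented.
settings = {
    "analysis": {
        "char_filter": {
            "arrow_to_slash": {"type": "mapping", "mappings": ["-->=>/"]}
        },
        "tokenizer": {
            "arrow_path_tokenizer": {
                "type": "path_hierarchy",
                "delimiter": "/",
            }
        },
        "analyzer": {
            "arrow_path": {
                "type": "custom",
                "char_filter": ["arrow_to_slash"],
                "tokenizer": "arrow_path_tokenizer",
            }
        },
    }
}
print(json.dumps(settings))
```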
Despite my name, I do not speak Russian. :) Please excuse my ignorance of
the Russian language while I attempt to debug.
Currently, the synonym token filter is being applied after the other three
token filters: "snowball_text", "lowercase", and "russian_morphology". In
this case, the synonym mappi
Thank you, David, for the response.
So until this new setting that you mention is available, will I have to mount
the shared location on all the nodes?
I think you can do that by specifying a different id for each of the
different suggestion types you want to include in the request.
"suggest" : {
"id_1" : {
...//put your suggestor for one field here
},
"id2" : {
...//put your suggestor for the second field here
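Filled out, the shape of one search request body carrying two suggesters would be roughly this (field names invented):

```python
import json

# Sketch: one request with two completion suggesters, each under its
# own id and pointing at a different (invented) field.
body = {
    "suggest": {
        "title_suggest": {
            "text": "foo",
            "completion": {"field": "title_suggest_field"},
        },
        "tag_suggest": {
            "text": "foo",
            "completion": {"field": "tag_suggest_field"},
        },
    }
}
print(json.dumps(body))
```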
We just talked about it.
Right now, the master node needs to save some metadata information in the
filesystem, and each node which has a primary shard will copy data directly to
the filesystem.
That's the reason each master node and data node needs to have access to the
file system.
One option could be in the f
Hello, I would like to know what is the best way to search for those hits
which have some text wrapped in double quotes.
As an example, this is one of the items I have
"_index": "posts",
"_type": "post",
"_id": "2915129",
"_score": 1,
"fields": {
Thanks David, please do share how your discussion with Igor goes.
I agree that a "preference=local" flag will be useful since having a shared
file system is just additional setup to be done and if there is any way to
avoid it, things would get simpler.
-Amit.
On Wed, Mar 5, 2014 at 11:57 PM, David
Thanks Amit. Is there any way to execute the autocomplete suggest on
multiple fields within the same request?
On Thu, Mar 6, 2014 at 5:03 PM, Amit Soni wrote:
> This is another class for fuzzy match: CompletionSuggestionFuzzyBuilder
>
> -Amit.
>
>
> On Thu, Mar 6, 2014 at 8:56 AM, Dan wrote:
>
Thanks Lukáš, Leslie, this is the ticket for Elasticsearch in Fedora/RHEL,
it has been open for a while
https://bugzilla.redhat.com/show_bug.cgi?id=902086
I hope I can contribute something useful in the next few weeks.
Jörg
On Thu, Mar 6, 2014 at 4:10 PM, Leslie Hawthorn <
leslie.hawth...@ela
This is another class for fuzzy match: CompletionSuggestionFuzzyBuilder
-Amit.
On Thu, Mar 6, 2014 at 8:56 AM, Dan wrote:
> I found an example in the tests, if anyone else is looking for it and
> comes across this post:
>
> SuggestResponse suggestResponse =
> client().prepareSuggest(INDEX).ad
Hi,
I am indexing using an nGram filter and it seems to be working - I am able
to find substrings of words. However, I have noticed that documents that
contain only a substring of the term I am looking for are ranked above
documents that have an exact match. For instance, if I search for "rain"
I found an example in the tests, if anyone else is looking for it and comes
across this post:
SuggestResponse suggestResponse =
client().prepareSuggest(INDEX).addSuggestion(
new
CompletionSuggestionBuilder("testSuggestions").field(FIELD).text("foo").size(10)
On Thursday, March
I'm quite the newbie when it comes to elasticsearch or GIS matters, so
please excuse any dumb questions or misunderstandings.
I have two indices with one type each: one in which I store districts of a
certain kind with a geo_shape (multipolygons, mostly), another in which I
store some sort of P
I'm changing our current autocomplete implementation to use the new
autocomplete suggester as below. Is there a Java API available to send the
suggest search request to ElasticSearch?
http://www.elasticsearch.org/blog/you-complete-me/
Heya,
We are pleased to announce the release of the Elasticsearch Groovy language
plugin, version 2.0.0.
The Groovy language plugin allows you to use Groovy as the language of scripts
to execute.
https://github.com/elasticsearch/elasticsearch-lang-groovy/
Release Notes - elasticsearch-lang-gr
Thanks!
I still can't seem to find these settings.
Apologies in advance if I am just missing them...
indices.memory.index_buffer_size
indices.memory.min_shard_index_buffer_size
indices.memory.min_index_buffer_size
And when I run the _cluster/settings all I get is:
{
"persistent": {},
"tran
Hello,
I am writing an app that makes heavy use of percolators (the es 1.0
variety). I've found that sometimes I get hits back from percolate() that
don't actually exist anymore. In the paste below I search the logs index
for any percolators, but find none, then run an existing document through
Hi all,
I'm sorry if this is a totally beginner question with a obvious answer, but
I just haven't found anything that explains this clearly, most tutorials
don't cover this sort of thing. We're investigating the use of Elastic
Search and was wondering what the best topology would be. My readi
I heard at FOSDEM that Peter Robinson was looking at packaging
Elasticsearch for Fedora. You may want to check in with him to see if
he's moved down that path.
Cheers,
LH
--
Leslie Hawthorn
Community Manager
http://elasticsearch.com
Other Places to Find Me:
Freenode: lh
Twitter: @lhawthorn
Skyp
I posted the code sample that works for me on stack overflow to your
question. It might be a problem with the type used while inserting the
document you are searching for.
On Thursday, 6 March 2014 at 12:39:37 UTC+1, Isaac Hazan wrote:
>
> Thx David.
>
> Sorry, but I misunderstood (I'm a beginner in ES).
Jörg
IMO the best approach would be to do this in the context of Fedora packaging.
This means opening a ticket in the appropriate system where Fedora packaging
issues are tracked, and doing the work in the context of that ticket. This way
it could get attention from the people that have a lot of experience in th
Dear all,
could you please give me a hint on how one could implement pagination
for facets?
Say we had a terms facet and we get the N most frequent terms; how could one
get the N most frequent terms, but also be able to have "next" functionality,
or the l
Thanks for your reply Alex. I have replies inline.
bq. but you are also putting meta information in the same document -
Correct. My elasticsearch implementation is part of a larger framework.
Similar to Pig, Hive, Avro and other data model-agnostic frameworks, I pass
along a small piece of metad
I meant field values.
But sorry I did not notice your mapping at the end.
The query sounds good.
Could you check your current mapping?
curl -XGET 'http://localhost:9200/unique_app_install/_mapping?pretty
To answer the other question: the fastest way would be to use post filters and
not queries a
Did you resolve this issue? We seem to have the same issue, also with
0.90.9.
At the moment, we have a whole index size of less than 100MB (less than
200MB with backed-up data) and the estimated_size is 1.4GB... How are we
supposed to deal with that kind of trouble?
On Tuesday, 4 March 2014 at 06:50:56 UTC+1, Dunaeth wrote:
>
> Isn't it a bit weird that we reached a 800MB limit
We have a 20-node ES 0.90.2 cluster with about 8 million documents. We use
pyes for talking to elasticsearch. We are using a 4 primary / 1 replica shard
setting for each index creation.
We have a timeout value of 2 seconds (obviously low); every time we try to
create an index we are getting an IndexAlreadyEx