Are you actually using a trailing comma after the first_name value? It is
invalid JSON in both cases
2015-04-28 14:15 GMT-03:00 Daniel Nill danielln...@gmail.com:
curl -XPUT 'http://0.0.0.0:9200/users' -d '{
first_name: daniel,
last_name: nill
}'
curl -XGET 'http://0.0.0.0:9200/users/_search' -d
Are you sure that calling the same scroll_id won't return the next results?
AFAIK, the scroll_id can be the same and still return new records
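For reference, a scroll-scan round trip on a 1.x cluster looks roughly like this (host, index name, and sizes are made up for illustration):

```shell
# Open a scan-type scroll; the response contains a _scroll_id and no hits yet
curl -XGET 'http://localhost:9200/users/_search?search_type=scan&scroll=5m' -d '{
  "query": { "match_all": {} },
  "size": 100
}'

# Fetch the next batch by posting the _scroll_id from the previous response;
# each call returns new records, whether or not the _scroll_id changed
curl -XGET 'http://localhost:9200/_search/scroll?scroll=5m' -d '<scroll_id from previous response>'
```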
2015-04-14 14:26 GMT-03:00 Todd Nine tn...@apigee.com:
Hey guys,
I have 2 indexes. I have a read alias on both of the indexes (A and B),
and a
I have never used percolator, but AFAIK you have to call the percolate API
after you have the document indexed:
http://www.elastic.co/guide/en/elasticsearch/reference/current/search-percolate.html#_percolating_an_existing_document
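If I read that page right, percolating an already-indexed document is a single GET against the _percolate endpoint (index, type, and id below are placeholders):

```shell
# Run all registered percolator queries against the stored document with id 1
curl -XGET 'http://localhost:9200/my-index/my-type/1/_percolate'
```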
2015-04-02 15:25 GMT-03:00 Lincoln Xiong
Elastic won't edit your source. The long type is used internally
2015-03-20 14:16 GMT-03:00 Erik Iverson erikriver...@gmail.com:
Hello everyone,
I have a question on how Elasticsearch returns JSON representations of
fields with the date type. My confusion comes from the fact that the page
-20 16:16 GMT-03:00 Mark Walkom markwal...@gmail.com:
It's Elasticsearch, Elastic is the company :)
We convert dates to unix epoch, which is why you should insert them as UTC.
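As an illustration (index and field names are made up), a date field is declared in the mapping with a format, stored internally as epoch millis, and values without an explicit offset are treated as UTC:

```shell
# Map a date field with an explicit format
curl -XPUT 'http://localhost:9200/my-index' -d '{
  "mappings": {
    "my-type": {
      "properties": {
        "created_at": { "type": "date", "format": "dateOptionalTime" }
      }
    }
  }
}'

# This value carries no timezone offset, so it is interpreted as UTC
curl -XPUT 'http://localhost:9200/my-index/my-type/1' -d '{
  "created_at": "2015-03-20T14:16:00"
}'
```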
On 20 March 2015 at 10:22, Roger de Cordova Farias
roger.far...@fontec.inf.br wrote:
Elastic won't edit your
Look at this example on how to use multiple filters:
http://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-filtered-query.html#_multiple_filters
You should wrap them on a bool filter
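Something along these lines (field names invented) is what that page describes: both filters wrapped in a bool filter inside a filtered query:

```shell
curl -XGET 'http://localhost:9200/my-index/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": [
            { "term": { "status": "active" } },
            { "range": { "age": { "gte": 21 } } }
          ]
        }
      }
    }
  }
}'
```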
2015-03-17 15:32 GMT-03:00 jrkroeg jrkr...@gmail.com:
I'm trying to get the top 100
We are running ElasticSearch in a cluster with 1 node, 1 index, 6 shards,
55 million docs. We run queries with terms aggregation in 15 fields and it
works well, taking about 10 seconds to return.
We reindexed the docs in another cluster with 1 node, 1 index, 4 shards and
the same 55 million docs
I'm searching on an array of objects
The problem is when I search using query string
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-dsl-query-string-query,
it matches the text split in different objects (different array positions).
Is
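The usual way to keep a match from spanning different array positions is the nested type; a sketch, with invented field names:

```shell
# Map the array of objects as nested so each element is indexed as its own
# hidden document instead of being flattened into the root object
curl -XPUT 'http://localhost:9200/my-index' -d '{
  "mappings": {
    "my-type": {
      "properties": {
        "comments": {
          "type": "nested",
          "properties": {
            "author": { "type": "string" },
            "text":   { "type": "string" }
          }
        }
      }
    }
  }
}'

# A nested query then matches only within a single array element
curl -XGET 'http://localhost:9200/my-index/_search' -d '{
  "query": {
    "nested": {
      "path": "comments",
      "query": { "match": { "comments.text": "elasticsearch" } }
    }
  }
}'
```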
AM, Roger de Cordova Farias
roger.far...@fontec.inf.br wrote:
Thank you for your explanation
Do you know if it is a bug or intended behavior?
I don't think deleted (marked as deleted) docs should be used at all
2015-01-07 1:53 GMT-02:00 Masaru Hasegawa haniomas...@gmail.com:
Hi
of the document after
running the update? Note that the update creates a new field if it was not
found (== null), but this field is not used in the query
2015-01-05 13:35 GMT-02:00 Roger de Cordova Farias
roger.far...@fontec.inf.br:
The added field is an array of Integers, but we are not using
of the query. Is this field free
text?
Ivan
On Dec 23, 2014 9:12 PM, Roger de Cordova Farias
roger.far...@fontec.inf.br wrote:
Hello
Our documents have metadata indexed with them, but we don't want the
metadata to interfere in the scoring
After a user searches for documents, they can bookmark them (which means we
add more metadata to the document), then in the next search with the same
query the bookmarked document
Hello
I'm trying to update a document whose root object contains a list of nested
objects. I need to send an object of the nested type as a script parameter
to append to the list
How can I append the JSON (a string type) to the nested objects list of the
root object using Groovy? Or should I use
String script = ctx._source.objectsList += newObject;
2014-12-16 13:04 GMT-02:00 Roger de Cordova Farias
roger.far...@fontec.inf.br:
Hello
I'm trying to update a document whose root object contains a list of
nested objects. I need to send an object of the nested type as a script
("newObject", "{\"status\":\"aasdsd\"}");
prepareUpdate.get();
Is there a way to reproduce the working REST API behavior with the Java
API?
2014-12-16 15:17 GMT-02:00 Roger de Cordova Farias
roger.far...@fontec.inf.br:
Ok, I found out that I can send a JSON as a script parameter and just
append
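In REST form, sending the JSON object as a script parameter and appending it looks roughly like this (index, type, and id are placeholders; the script and parameter mirror the thread):

```shell
curl -XPOST 'http://localhost:9200/my-index/my-type/1/_update' -d '{
  "script": "ctx._source.objectsList += newObject",
  "params": {
    "newObject": { "status": "aasdsd" }
  }
}'
```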
Hello
I have a query with a from/size, and I need to get the unique values of a
specific field of the returned docs only. I could do it in the client side,
but it would help if ElasticSearch could do it for me
The Terms Aggregation helps with getting the unique values, but it ignores the
from/size of
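For what it's worth, a plain terms aggregation alongside from/size (sketched below, field names invented) runs over every document matching the query, not just the returned page, which is exactly the limitation described:

```shell
curl -XGET 'http://localhost:9200/my-index/_search' -d '{
  "from": 0,
  "size": 10,
  "query": { "match": { "title": "elasticsearch" } },
  "aggs": {
    "unique_values": {
      "terms": { "field": "category" }
    }
  }
}'
```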
Thank you for the advice
2014-12-04 9:30 GMT-02:00 Elvar Böðvarsson elv...@gmail.com:
I upgraded our logging cluster to 1.4 without any problems.
When I looked into upgrading a separate dev/test instance used for a
different purpose I ran into problems with the plugins. If you are using
We have a lot of docs like this:
{
"_type": "doc",
"_id": "123",
"_source": {
"parent_name": "abc"
}
}
Each doc has only one parent_name but multiple docs can have the same
parent. It is like a many-to-one relationship, but the parent has no other
info apart from its name, so we didn't create a
execution on a new segment might cause latency spikes since there are lots
of postings lists that need to be merged. Would it be possible to change it
to a simpler term filter, e.g. by adding more metadata to your documents?
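A term filter of the kind suggested might look like this (the doc_type field is an invented example of the extra metadata):

```shell
curl -XGET 'http://localhost:9200/my-index/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "term": { "doc_type": "invoice" } }
    }
  }
}'
```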
On Mon, Dec 1, 2014 at 9:23 PM, Roger de Cordova Farias
roger.far
You can use the toString() method of the SearchRequestBuilder to see the
generated query. With your example it was:
{
"size" : 10,
"query" : {
"multi_match" : {
"query" : "searchterm",
"fields" : [ "FIELD1.not_analyzed", "FIELD2.partial" ]
}
},
"sort" : [ {
"SCORE" : {
"order" :
Hello
We currently have a cluster with 50 million docs using ElasticSearch
version 1.3.2
We were looking for something like a persisted filter, and filtered
aliases, added in version 1.4.0.Beta1, seem perfect for it
Our infrastructure team is not happy about upgrading it in production
You have to index it as a single token.
You can have the same string indexed twice using multi fields:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_multi_fields.html#_multi_fields
Then you can index the string not analyzed (as in the multi fields page's
example) or
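A multi-field mapping in the spirit of that page (field names invented): the same string is analyzed under the main field and kept as a single token under a raw sub-field:

```shell
curl -XPUT 'http://localhost:9200/my-index' -d '{
  "mappings": {
    "my-type": {
      "properties": {
        "city": {
          "type": "string",
          "fields": {
            "raw": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}'
```

Queries can then hit "city" for analyzed matching or "city.raw" for the exact single-token value.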
Hello
I have to create a mapping to a type that will have a text field with
values:
- that are huge (more than 32KB),
- that are very badly structured, and will have snippets like "elas tic
search", and I need to find it when the user searches for "elasticsearch" or
"elastic search"
I can't modify
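One approach worth trying (a sketch, not tested against this data) is a shingle filter with an empty token_separator, so that "elas tic search" also emits joined tokens like "elastic" and "elasticsearch" at index time:

```shell
curl -XPUT 'http://localhost:9200/my-index' -d '{
  "settings": {
    "analysis": {
      "filter": {
        "join_shingles": {
          "type": "shingle",
          "token_separator": "",
          "output_unigrams": true
        }
      },
      "analyzer": {
        "joined_text": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "join_shingles" ]
        }
      }
    }
  }
}'
```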
I'm reindexing an ElasticSearch base with 50m docs using the scroll-scan
request to retrieve all docs, but my reindexer program stopped at 30m
Is there a way to redo the query to retrieve the remaining docs? Like using an
offset?
Would the internal order of the scan query be the same with a second
on a timeout value you give it.
Every time you scroll you restart the countdown.
You could track the last scroll id you used and try it again from there?
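Concretely, the keep-alive is the scroll parameter passed on every call, so retrying from the last id within that window would look like this (the id is a placeholder):

```shell
# Each scroll call restarts the countdown given by the scroll parameter
curl -XGET 'http://localhost:9200/_search/scroll?scroll=10m' -d '<last scroll_id>'
```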
On Thursday, 23 October 2014 12:47:02 UTC-4, Roger de Cordova Farias wrote:
I'm reindexing an ElasticSearch base with 50m docs using
is forward only. So not sure once
you got one scroll id you can go back to it. I guess one way to find out :)
On Thursday, 23 October 2014 15:44:04 UTC-4, Roger de Cordova Farias wrote:
Hmm, I was using a small ttl, just enough to process each scroll call,
but I could try using a longer time