I can see a `setExecution` method in RangeFilterBuilder; I think this is
what you are looking for?
On Thu, Feb 20, 2014 at 2:15 AM, Matthew Kehrt mke...@gmail.com wrote:
Hi,
I'm investigating upgrading an ES installation to 1.0 and I see that
numericRangeFilter is deprecated, with the
I have an index with the following documents:
{ "name": "Device1", "type": "start", "eventTime": "2013-02-19 12:00:00" }
{ "name": "Device2", "type": "start", "eventTime": "2013-02-19 12:02:00" }
{ "name": "Device1", "type": "stop", "eventTime": "2013-02-19 12:45:00" }
{ "name": "Device2", "type": "stop", "eventTime":
Hey
maybe you should reduce your parallel bulk indexing. Having 500 parallel
bulk requests means you need to reserve a lot of memory for parsing those
requests and keeping some state in memory until all of the requests are
processed and returned. Because a single bulk request needs to send lots
Hey,
the main question is, what does proximity mean for you? Could be several
things
* Words should be similar to the ones specified (including typos)
* Words should be exact, but the occurrences of the words should be near
each other
* The whole phrase should be similar
* etc...
You might
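To make the second meaning concrete (this example is mine, not from the thread; the index and the `text` field are hypothetical), a match_phrase query with slop finds the exact words when they occur near each other:

```json
POST /my_index/_search
{
  "query": {
    "match_phrase": {
      "text": {
        "query": "quick brown fox",
        "slop": 2
      }
    }
  }
}
```

For the first meaning (typos), a match query with a fuzziness setting is the usual starting point.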
Hey,
not too sure what you are after with your classification, but you could
execute an aggregation with a has_child filter, which returns the product
category of a certain product.
However, I am not sure if this is the right approach, as in most systems a
product can have more than one category
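As a rough sketch of that idea (my own example, not from the thread; the `product` child type and the `product_id` and `category` fields are hypothetical), filter the parents with has_child and aggregate on category:

```json
POST /my_index/parent_type/_search
{
  "query": {
    "filtered": {
      "filter": {
        "has_child": {
          "type": "product",
          "filter": { "term": { "product_id": "12345" } }
        }
      }
    }
  },
  "aggs": {
    "categories": { "terms": { "field": "category" } }
  }
}
```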
Hi,
I have installed the latest version of elasticsearch on RedHat 5 with the
official RPM; everything looks fine (clean install).
After installing the Marvel plugin, my elasticsearch instance doesn't
start:
[2014-02-20 10:27:05,012][INFO ][node ] [sacchi]
version[1.0.0],
On the 19th it left the replica of shard 4 unassigned:
node01 - (4)
node02 - 0 (1)
node03 - (0) 1
node04 - 2 (3)
node05 - (2) 3
unassigned 4
This morning however, it's allocated all the shards:
node01 - (0) 1
node02 - (2) 3
node03 - (1) 2
node04 - 0 (4)
node05 - (3) 4
--
You received this
Any idea why the query below takes around 5s to execute:
POST INDEX/PARENT/_search
{
  "query": {
    "has_child": {
      "type": "CHILD",
      "query": {
        "has_child": {
          "type": "CHILD_SCORE",
          "query": {
            "range": {
yep, as Isabel mentioned, you should use the *dfs_query_then_fetch* search type
(it is slower though)
On Monday, February 17, 2014 3:23:27 PM UTC, Vallabh Bothre wrote:
Thanks Karol for replying,
As per your suggestion, I used the search type which executes the query on all
relevant shards and returns
Hi Tony,
The problem is the hardware; that is why we need to move it to the new
cluster.
Yann, how can I restore a snapshot on a different machine?
Cheers,
Attila
On 02/19/2014 07:00 PM, Tony Su wrote:
I was wondering which backup/restore method you intended to use.
Snapshot/restore
Hi,
I want to use the Elasticsearch MLT API to find a set of documents similar to
a document in my index. Since I do not have like_text and do not want to
make an extra request to retrieve the like text, I am using the MLT API with
a document ID. The problem is that in our schema some fields weight
Hi Will,
Do you have any Marvel-specific settings in your elasticsearch.yml?
To enable debug logs in es (assuming you run it from the command line):
./bin/elasticsearch --logger.level=DEBUG
Let me know what you see.
Cheers,
Boaz
On Thursday, February 20, 2014 10:39:24 AM UTC+1, Xwilly Azel
Hi David
I have successfully created the river; unfortunately, the documents
are not stored in the index I specify. I can see via the debugger that the
messages are taken from the queue, but the bulk request is failing.
Here is my configuration:
public void createRiver()
{
What do the logs say?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 Feb 2014, at 13:00, Doru Sular doru.su...@gmail.com wrote:
Hi David
I have successfully created the river; unfortunately, the documents are
not stored in the index I specify. I can see via
Hi,
I've read this one, but it's not clear to me how to do it on a different
machine.
If I do a backup on oldcluster with
curl -XPUT oldcluster:9200/_snapshot/my_backup/snapshot_1 (assuming I've
already created the my_backup repo), there will be metadata-snapshot_1,
snapshot-snapshot_1 and
Hi Boaz,
here is the result:
[2014-02-20 13:40:20,827][INFO ][node ] [sacchi] version
[1.0.0], pid[18874], build[a46900e/2014-02-12T16:18:34Z]
[2014-02-20 13:40:20,828][INFO ][node ] [sacchi] initializing
...
[2014-02-20 13:40:20,828][DEBUG][node
and here is my elasticsearch lib:
elasticsearch-1.0.0.jar
jna-3.3.0.jar
jts-1.12.jar
log4j-1.2.17.jar
lucene-analyzers-common-4.6.1.jar
lucene-codecs-4.6.1.jar
lucene-core-4.6.1.jar
lucene-grouping-4.6.1.jar
lucene-highlighter-4.6.1.jar
lucene-join-4.6.1.jar
lucene-memory-4.6.1.jar
I would like to disable some information via the fielddata format, but I don't
know if I can disable it for the attachment type used by the mapper-attachments
plugin without problems.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/fielddata-formats.html#_disabling_field_data_loading
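Per the linked page, field data loading can be disabled in the mapping; a minimal sketch (the type and field names here are hypothetical, and whether this plays well with mapper-attachments is exactly the open question):

```json
{
  "my_type": {
    "properties": {
      "my_field": {
        "type": "string",
        "fielddata": {
          "format": "disabled"
        }
      }
    }
  }
}
```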
--
Sounds like you set path.plugins, right?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 February 2014 at 13:44:08, Xwilly Azel (xwi...@gmail.com) wrote:
and here my elasticsearch lib
elasticsearch-1.0.0.jar
jna-3.3.0.jar
jts-1.12.jar
BTW, I will fix this NPE. Thanks for reporting
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 February 2014 at 13:55:03, David Pilato (da...@pilato.fr) wrote:
Sounds like you set path.plugins, right?
--
David Pilato | Technical Advocate |
After some search, I think you are hitting this issue:
https://github.com/elasticsearch/elasticsearch/pull/4187
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 February 2014 at 13:56:46, David Pilato (da...@pilato.fr) wrote:
BTW, I will fix this
Same with the path for the plugin:
[2014-02-20 13:59:23,388][INFO ][node ] [sacchi]
version[1.0.0], pid[21718], build[a46900e/2014-02-12T16:18:34Z]
[2014-02-20 13:59:23,389][INFO ][node ] [sacchi]
initializing ...
[2014-02-20 13:59:23,389][DEBUG][node
Really, no clue?
And thanks for your help, David :).
On Thursday, February 20, 2014 2:21:15 PM UTC+1, Xwilly Azel wrote:
done.
./bin/plugin
should create the plugins directory with the same permissions as the lib
directory, and we should have an explicit error log in this case.
I've done the installation with the official RPM :).
On Thursday, February 20, 2014 2:12:54 PM UTC+1, David Pilato wrote:
Issue happens when
I sent a PR for the log issue:
https://github.com/elasticsearch/elasticsearch/pull/5196
Interesting. How do you run bin/plugin? Do you run it as root?
Something like sudo bin/plugin …. ?
If so, I guess it comes from here.
You should run it as the elasticsearch user, but I'm wondering if it will
You can use source filtering for this:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-source-filtering.html
Example:
_search
{
  "_source": [ "a.b" ]
}
Maybe try:
/index/testLogs/_search
{
  "query": {
    "filtered": {
      "filter": {
        "has_parent": {
          "parent_type": "testProx",
          "filter": {
            "terms": {
              "category": [
                "category1",
                "category2"
              ]
            }
          }
        }
      }
    }
  }
}
The exception is:
org.elasticsearch.action.ActionRequestValidationException: Validation
Failed: 1: no requests added;
I have the feeling that the messages in the queue should be alternating:
{ "index" : { "_index" : "twitter", "_type" : "tweet", "_id" : "1" } }
{ "tweet" : { "text" : "this is first tweet" } }
{ "index"
No, you cannot have the latter format.
Check Bulk API docs:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
Don't forget the last \n char BTW.
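As an illustration of that format (a sketch of mine, not from the thread), the bulk body is newline-delimited JSON: one action line followed by one source line per document, with a trailing newline:

```python
import json

def build_bulk_body(actions):
    """Build an Elasticsearch bulk request body: one action line followed
    by one source line per document, terminated by a final newline."""
    lines = []
    for meta, source in actions:
        lines.append(json.dumps(meta))   # e.g. {"index": {...}}
        lines.append(json.dumps(source)) # the document itself
    # The Bulk API requires the body to end with a newline character.
    return "\n".join(lines) + "\n"

body = build_bulk_body([
    ({"index": {"_index": "twitter", "_type": "tweet", "_id": "1"}},
     {"tweet": {"text": "this is first tweet"}}),
])
print(body)
```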
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 February 2014 at
I get this error sometimes when I try to create an index.
My version of Java in elasticsearch is the same as on the client server.
This error does not always occur, which is different from what I've seen in other posts.
The log:
Exception in thread main
Yes, I am still running 0.90. Sorry, but I find the interaction via curl
with complex JSON to be awkward; I lost a lot of time experimenting
with possible syntax by cutting and pasting trials, only to find that
linefeeds and tabs in my formatted text were causing it to fail.
I will upgrade to
What are the exact versions of your JVM and of ES in both node and client?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 Feb 2014, at 15:06, Tiago Rodrigues tiago_t...@hotmail.com wrote:
I get this error sometimes when I try to create an index.
My version of
My node runs on a Linux (Debian) PC, and my server on Windows 7 (for
development only).
1) Do facet filters produce (or are they supposed to produce) the same facet
counts as query strings with facets? They don't seem to.
2) Do DSL query facets produce the same counts as query string queries with
facets? They don't seem to.
3) Do queries with filters produce the same facet counts as any of the above?
Hi,
since 1.0.0 it's no longer possible to delete documents by a filtered
query.
As an example, the query from
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-filtered-query.html
returns HTTP/1.1 400 Bad Request and prints to the logfile
Sorry, forgot to post the full query that fails:
curl -vv -XDELETE 'http://localhost:9200/test/_query' -d '{
  "filtered" : {
    "query" : {
      "term" : { "tag" : "wow" }
    },
    "filter" : {
      "range" : {
        "age" : { "from" : 10, "to" : 20 }
      }
    }
  }
}'
Very odd indeed. I can duplicate your problem but I'm not sure what the
cause is. I did try to run ES 0.90.9 externally and it works fine in that
case, so for now I would suggest not running embedded if you need this
working.
Thank you very much, I will try again.
Doru
On Thursday, February 20, 2014 3:00:58 PM UTC+1, David Pilato wrote:
No, you cannot have the latter format.
Check Bulk API docs:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
Don't forget the last \n char
Your best bet is probably a custom parser with your own grammar. On the ES
side, if you use something like a simple_query_string or a match query, it
would also help a little bit (instead of using the query_string query).
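For example, a simple_query_string query (my sketch; the index and the `body` field are hypothetical) tolerates odd user input instead of failing the whole request:

```json
POST /my_index/_search
{
  "query": {
    "simple_query_string": {
      "query": "\"fried eggs\" +(eggplant | potato) -frittata",
      "fields": ["body"]
    }
  }
}
```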
Neil, I have been trying to reproduce but I can't seem to (0.90.11 and
1.0.0). Perhaps it is possible for you to look at your logs on all nodes
and see if there is anything there that might pinpoint or relate to this?
Could you first upgrade to ES 0.90.11 and downgrade the JVM to 1.7.0_25?
On both sides: client and node
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 Feb 2014, at 15:26, Tiago Rodrigues tiago_t...@hotmail.com wrote:
Elasticsearch is 0.90.6 on both.
I have only 1
Hi all,
Just to let you all know, I released the first-ever Elasticsearch plugin for
Hebrew analysis. The plugin and the dictionary it relies on (hspell) are
both released under the AGPL3 license; it is an open-source license
compatible with the ASL, but less permissive.
I do not believe you can boost individual fields/terms separately in an MLT
query. Your best bet is probably to run a bool query of multiple MLT
queries, each with a different field and boost, but you'll need to first
extract the MLT text before you can do this.
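That workaround might look like the following (a sketch of mine; the index, field names, boosts, and extracted text are placeholders):

```json
POST /my_index/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "more_like_this": {
            "fields": ["title"],
            "like_text": "text extracted from the source document",
            "boost": 3
          }
        },
        {
          "more_like_this": {
            "fields": ["body"],
            "like_text": "text extracted from the source document",
            "boost": 1
          }
        }
      ]
    }
  }
}
```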
Are there any more tutorials, say on appending to a list?
On Wednesday, February 19, 2014 12:54:15 PM UTC-5, Costin Leau wrote:
Hi,
We tried to make the docs friendly in this regard - each section (from
Map/Reduce to Pig) has several examples. There's
also a short video which guides you through the
You should also check if you have plugins installed and whether these plugins
cause errors. For example, analyzer plugins may throw exceptions that are
local to a plugin.
If you create an index with TransportClient, you must also install these
analyzer plugins in the client, just for being able to
Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9 and take a sample of
raw logs from those 3 days and test them through to see if those 3 days
work in Kibana? The reason I ask is because LS 1.3.2 (specifically the
Hi Terry,
Just a comment on your linefeeds and tabs problem...
My guess is that you're working on Windows. Plenty of text editors on
Windows insert invisible characters that can screw things up.
But I've found that the opposite is true in the *NIX world: few text
editors do that sort of
The best way to help you is if you can show a complete example of your data
and queries (it doesn't have to be much or complex), and then we can take a
look and analyze your queries one by one.
The syntax has changed slightly; you'll now need to wrap your query inside
a query block.
curl -vv -XDELETE 'http://localhost:9200/test/_query' -d '{
  "query": {
    "filtered" : {
      "query" : {
        "term" : { "tag" : "wow" }
      },
      "filter" : {
        "range" : {
          "age" : { "from" : 10, "to" : 20 }
        }
      }
    }
  }
}'
I ran into the same problem; the version was correct and plugins were
installed. In my case port 9300 was not opened for the TransportClient;
once I opened it, it worked fine.
On Thursday, February 20, 2014 9:06:42 AM UTC-5, Tiago Rodrigues wrote:
I get this error sometimes when I try to create an index.
My
Great, it worked.
Yesterday I tried adding the option -Xms=4g to ES_JAVA_OPTS in this same
file, but it was not working.
Guess I just had to update ES_HEAP_SIZE, as it was overwriting my
previous option.
Thanks a lot,
Pablo Musa
On Wednesday, February 19, 2014 9:40:24 PM UTC, Mark Walkom
To clearly differentiate Kibana from what it may be monitoring,
maybe deploy Kibana on a completely separate machine?
That way, anything related to deploying Kibana can be isolated and studied.
Local sensors (eg Marvel) and other processes not strictly part of Kibana
but that may be invoked by
I am a bit confused about this topic. I would like to index images
(png, jpeg, gif...); my understanding is that I need to extract and index
text portions from images. I don't really care for the metadata. So, I
looked online and decided to use Apache Tika, which I also use to extract
text
Hi, great idea.
But, you seem to need to add/modify the following...
- The default page only describes generic info about Azure apps, nothing
about your app.
- You need to describe minimal client machine requirements. I don't know if
anything more is required besides Visual Studio and Azure
- Running 0.90.11
- Marvel monitor machine and the cluster it monitors are separated
- Installed Marvel the day after it was released and it was originally
running fine (all panels filled with data)
Checked the dashboard again today (nothing has changed since it was
installed) and some panels
I updated only elasticsearch to version 0.90.11 on the node and on the server.
Apparently all is OK.
Thanks!
On Thursday, February 20, 2014 at 11:58:25 AM UTC-3, David Pilato
wrote:
Could you first upgrade to es 0.90.11 and downgrade jvm to 1.7.0_25 ?
On both sides: client and node
Thanks David. I agree that OCR and maybe any kind of text extraction should
be done before Elasticsearch indexing. But I am just wondering if Apache
Tika supports this, or if anyone has experience with using a certain tool.
I do plan to extract before indexing.
On Thursday, February 20, 2014
Thanks Tony.
I was using gedit on a Linux box. Since I got my delete to work, I'm
hoping to upgrade to version 1.0 and then avoid using curl once my
indices are created. I have to keep some proof-of-concept apps running,
so I'm trying to only break one thing at a time.
I just started feeding
Hi Steven,
Although I'm not part of the ES team, because I have an old ES 0.90.10
node on CentOS 6.5 that I wanted to upgrade to 1.0 anyway, I went ahead and
more or less replicated what you did...
The original ES 0.90.10 was installed from the downloaded RPM (because no
repos existed then).
Added
We want to use the 0.90.11 Java client because it contains a bug fix, but
we're on ES 0.90.9. Do we need to update ES too or can we get away with
just updating the client in this case?
Unfortunately yes, you will need to upgrade and match your ES Java client
with the ES server version.
Found the issue. The monitoring machine log was filled with these:
[2014-02-20 14:05:07,137][DEBUG][action.search.type ]
[elasticsearch.marvel.1.traackr.com] [.marvel-2014.02.20][0],
node[SwW74zsHRSGZd7OIpukWVQ], [P], s[STARTED]: Failed to execute
When using routing, is GET by doc ID still efficient? I am assuming that
when routing is used, the hash algorithm uses the routing value instead of
the doc ID, so given that, is GET by doc ID still efficient?
You should be ok. Give it a try. :)
Hi,
we had a network glitch, and then we bounced all four nodes. After
restarting, the cluster status is red and we are seeing some unassigned
shards.
Could you please help with what we need to do with those unassigned
shards to get the data back?
I had tried cluster
Steve,
There was a bug in 0.90.1 that caused this. The Elasticsearch user would
be removed and re-added with a new ID during an upgrade. That was fixed
for 0.90.2. Sorry about the bug.
Kevin
On Wed, Feb 19, 2014 at 9:05 AM, Steven Williamson steven43...@gmail.com wrote:
Hi,
Unless I
The only difference is in one case the doc ID is used to hash, and in the
routing case the routing ID is used to hash. Other than that, execution is
the same between the two.
I understand that, but my question was about the execution path during a GET
using a doc ID that was indexed with a routing ID. Does it search all the
shards?
On Thu, Feb 20, 2014 at 3:49 PM, Binh Ly b...@hibalo.com wrote:
The only difference is in one case the doc ID is used to hash, and in the
As an update,
This is now supported in master (the upcoming elasticsearch hadoop 1.3.0.M3).
From the console:
01:13:50,630 INFO main mapred.JobClient - Elasticsearch Hadoop Counters
01:13:50,630 INFO main mapred.JobClient - Bytes Written=173923
01:13:50,630 INFO main mapred.JobClient
Elasticsearch is an amazing product that I'm trying to master, but I have
some questions:
I just switched to version 1.0.0, so the code below is run against that
version.
*1.* I start with a fresh, clean installation and try to add a mapping:
PUT infopoint/DIVER/_mapping
{
  "mapping" : {
Here is my problem:
I query some data from MySQL and then index that data into Elasticsearch.
The data in MySQL is updated (updates and inserts) all the time, so I
have to update the Elasticsearch index accordingly.
I cannot afford to do a full index (the data is huge), and I should not do that
Not sure I follow, but if routing is supplied, the routing value will be
used to hash to a single shard on which the GET is performed. If routing
is not supplied, the doc ID will be used to hash to a single shard on which
the GET is performed. In either case, 1 shard is used to GET the
1/2) There are several ways, but one way to create an index + mapping in
one shot is:
PUT http://localhost:9200/infopoint
{
  "mappings" : {
    "DIVER" : {
      "properties" : {
        "location" : {
          "type" : "geo_point",
          "lat_lon": true,
          "geohash":
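A complete version of that request might look like the following (the trailing `"geohash": true` is my guess at the truncated value):

```json
PUT http://localhost:9200/infopoint
{
  "mappings" : {
    "DIVER" : {
      "properties" : {
        "location" : {
          "type" : "geo_point",
          "lat_lon" : true,
          "geohash" : true
        }
      }
    }
  }
}
```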
How does the hash algorithm work on 2 variables at the same time? For example:
1) insert a doc with routing value 1
2) ES creates doc id A
3) Send a GET for doc id A with no routing value. In this case, how is ES
able to find just one shard, since it doesn't have the routing value that it
can use to find the
Hi all,
I am new to Elasticsearch, and I have a use case which needs to
create a mapping dynamically on addition of an index.
I heard about dynamic templates and read some about them; all the examples
given are about mapping dynamically added fields.
Is the same possible for an index?
ES will route your doc to the shard corresponding to routing value 1.
If you then GET doc A without a routing value, there are 2 options:
You are lucky: the hash of id A corresponds to the same shard as routing
value 1: you get the doc.
You are not: you won't find the doc.
So, when using a routing value at index time,
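The two outcomes above can be sketched in a few lines (my illustration; zlib.crc32 stands in for Elasticsearch's internal hash function, which is different in practice):

```python
import zlib

NUM_SHARDS = 5

def shard_for(key: str) -> int:
    # Stand-in for Elasticsearch's hash of the routing value (or doc id).
    return zlib.crc32(key.encode("utf-8")) % NUM_SHARDS

# Index time: the doc is routed by routing value "1".
index_shard = shard_for("1")
# A GET by doc id "A" without supplying routing hashes the id instead.
get_shard = shard_for("A")
# The GET only finds the doc if both hashes happen to land on the same shard.
print(index_shard, get_shard)
```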
Elasticsearch will index or update any document you send to it.
So compute the delta on your side and send the documents you want to update
to elasticsearch.
Did I misunderstand the question?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 21 Feb 2014, at 03:04, Daniel Guo
We have customers creating documents with arbitrary fields, and we need to
both analyze and sort them on request. I wonder if there is a way to
configure things so that for every field of this document type a multi-field
mapping is created, with an untouched version and an analyzed version? Like
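One way this is often done (my sketch; the type name and template name are hypothetical) is a dynamic template that maps every new string field as a multi_field with an analyzed version and a not_analyzed "raw" version:

```json
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        {
          "strings_as_multi_field": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "multi_field",
              "fields": {
                "{name}": { "type": "string", "index": "analyzed" },
                "raw": { "type": "string", "index": "not_analyzed" }
              }
            }
          }
        }
      ]
    }
  }
}
```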