Hi,
I have a very large index with _source disabled.
I need to change the number of shards for it (either increase or decrease).
I know I can't do a reindex operation since _source is disabled.
But is it still possible to migrate the index, given the presence of doc values?
Kindly let me know how to do this.
Responses inline.
On Wed, Mar 19, 2014 at 7:25 PM, Zachary Tong zacharyjt...@gmail.com wrote:
Yeah, in case anyone reads this thread in the future, this log output is a
good indicator of multicast problems. You can see that the nodes are
pinging and talking to each other on this log line:
Have a look at aggregations.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 March 2014 at 03:55, Nguyen Manh Tien tien.nguyenm...@gmail.com wrote:
Hello,
In SQL I can use GROUP BY to limit the number of rows in each group, like below.
Is there any way to do that in
Hi Adrien,
thanks for the response.
Curator is a very helpful tool, thanks for the link!
I tried it and saved about 1 GB of disk space (24 GB down to 23 GB); I think
the effect increases with more indices.
On Wednesday, 19 March 2014 17:42:56 UTC+1, Adrien Grand wrote:
Hi,
This setting doesn't exist
Yes, the problem is that when the histogram encounters buckets with no data,
it assumes zero values instead of joining the points between the two intervals.
I solved it by using a different Kibana version. I actually found two
patches: one with a zero-fill checkbox and another one with an option
No. I have no idea.
I think something is wrong with your installation.
Could you just:
1. Download elasticsearch 1.0.1 (zip or tar.gz file)
2. Unzip it
3. Run bin/plugin -install x (your plugin)
4. Run bin/elasticsearch
Only those commands, please.
If it works, then something went wrong when you
Best: when you push in your application to MySQL, push as well to elasticsearch.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 March 2014 at 08:29, Komal Parekh komaldpar...@gmail.com wrote:
Hello,
We are running one application which has a very large amount of data
Hi Georgi Ivanov,
Yes, I am able to understand the exception, i.e.
UnresolvedAddressException, but you are telling me to make sure host1 and
host2 are resolved by adding entries to /etc/hosts (or wherever the file is
on Windows). Can you give me the steps for how to approach this? Sorry, I
Hi all,
Assume my schema is
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "ik_analyzer": {
            "tokenizer": "ik",
            "filter": ["engram"]
          }
        },
        "filter": {
          "engram": {
            "type": "edgeNGram",
            "min_gram": 2,
            "max_gram": 10
          }
        }
      }
    }
  },
  "mappings": {
    "main": {
      "_analyzer": {
        "path": "analyzer_name"
      },
      "properties": {
Hi Preeti Jain,
I am Venu. I am new to this technology, i.e. Elasticsearch. I am trying to
communicate between Java and Elasticsearch, like the communication between
Java and Oracle. I am searching for examples but unable to find the right
way. Can you help me achieve the
Thanks David for your prompt response. Actually we are using MSSQL not
MySQL. So this solution will not work for us.
On Thursday, March 20, 2014 1:04:28 PM UTC+5:30, David Pilato wrote:
Best: when you push in your application to MySQL, push as well to
elasticsearch.
--
David ;-)
Twitter
Thanks David for your prompt response. But we want some automatic Push or
poll mechanism for this.
On Thursday, March 20, 2014 1:04:28 PM UTC+5:30, David Pilato wrote:
Best: when you push in your application to MySQL, push as well to
elasticsearch.
--
David ;-)
Twitter : @dadoonet /
It's still unclear. I've decoded my whole text, and instead I'm getting this
kind of text.
Where should I see my actual text?
I also tried using a different charset, but it's still unclear.
/Filter/FlateDecode/Length 1549
stream
xœXKoÛF ¾ ð Б â –\.Ék€8MÑ^
÷ $=Ð %
–-—”ìôßwfvgw–‘ (
title and ti have the same duplicate dataset; I used ti for testing to
reduce the array elements to just two, but no luck.
I'll get some test data and post it in a bit.
/M
On Thursday, 20 March 2014 03:12:47 UTC+1, Zachary Tong wrote:
Could you provide a small recreation of the problem in a
Thanks David,
If you check my sample above, i don't want any aggregate info (min, max,
count, ...) for each group, just want to get top N result from each group,
is this possible?
On Thu, Mar 20, 2014 at 1:53 PM, David Pilato da...@pilato.fr wrote:
Have a look at aggregations.
--
David ;-)
Terms aggregation should be what you are looking for.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 March 2014 at 10:24:02, Nguyen Manh Tien (tien.nguyenm...@gmail.com) wrote:
Thanks David,
If you check my sample above, i don't want any
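For future readers, a sketch of what a top-N-per-group request body could look like. The field names are made up, and note that the `top_hits` sub-aggregation, which returns whole documents per bucket rather than aggregate values, was only added in Elasticsearch 1.3, after this thread took place:

```python
# Hypothetical request body: a terms aggregation bucketing on "group_field",
# with a top_hits sub-aggregation returning the top 3 documents per bucket.
# (top_hits requires Elasticsearch >= 1.3; field names are illustrative.)
body = {
    "size": 0,  # we only want the aggregation buckets, not top-level hits
    "aggs": {
        "by_group": {
            "terms": {"field": "group_field", "size": 10},
            "aggs": {
                "top_docs": {
                    "top_hits": {"size": 3}  # top 3 docs per group
                }
            }
        }
    }
}
```

On versions before 1.3, the terms aggregation alone can only return field values and counts per bucket, not whole documents.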
Hi All.
I am a newbie to Elasticsearch and I am configuring Kibana with Logstash,
Redis, and Elasticsearch on CentOS 32-bit. When I try to start the
Elasticsearch service I get the below error:
WrapperSimpleApp Error: Unable
to locate the class
Have you tried the JDBC river for a poll mechanism?
https://github.com/jprante/elasticsearch-river-jdbc/
Jörg
On Thu, Mar 20, 2014 at 9:50 AM, Komal Parekh komaldpar...@gmail.com wrote:
Thanks David for your prompt response. But we want some automatic Push or
poll mechanism for this.
--
I think I'm starting to understand what you are trying to get…
You don't want original content but only extracted content, right?
I think that if you store content it should work.
Something like this (in mapping):
{
  "person": {
    "properties": {
      "file": {
        "type"
Hello,
Like we have cascade on update and cascade on delete in SQL, do we have any
such functionality in Elasticsearch?
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an
Hi,
Sorry, I am relatively new to Elasticsearch, so please don't be too harsh.
I can't seem to understand the behaviour of any of the fuzzy queries in ES.
*1) match with fuzziness enabled*
{
  "query": {
    "fuzzy_like_this_field": {
      "field_name": {
Yes, we have done some implementation with the river, but it does not give
real-time data, and it is also not very mature. So for real-time data we
need to use some push mechanism which can help us have real-time data.
On Thursday, March 20, 2014 3:13:13 PM UTC+5:30, Jörg Prante wrote:
Have you
I checked the terms aggregation; it allows returning the specified top field
values in a group. Can I return the whole doc there?
On Thu, Mar 20, 2014 at 4:36 PM, David Pilato da...@pilato.fr wrote:
Terms aggregation should be what you are looking for.
--
*David Pilato* | *Technical Advocate* |
Not exactly, but I think you can use
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query
to remove all children corresponding to a given parent, and then remove the
parent.
I suppose you are talking about the parent/child feature.
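The combination described here - a delete-by-query whose body is a has_parent query matching one parent - might look like the sketch below. The type name `blog` and the parent id are made up for illustration:

```python
# Hypothetical delete-by-query body: remove every child document whose
# parent (of type "blog") has id "parent_1". Names are illustrative only.
delete_children = {
    "query": {
        "has_parent": {
            "parent_type": "blog",
            "query": {"ids": {"values": ["parent_1"]}}
        }
    }
}
```

After the children are gone, the parent document itself can be deleted with an ordinary delete request.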
Hi,
The aggregation doesn't work because today, when you enter the context of a
nested field in an aggregation, it is not possible to escape it. I don't
think there is an easy way to modify your data model in order to work
around this issue, however this is an issue that we plan to fix in the
Hi
I am struggling to get this working too. I'm just trying locally for now,
running Shark 0.8.1, Hive 0.9.0 and ES 1.0.1 with ES-Hadoop 1.3.0.M2.
I managed to get a basic example working with WRITING into an index, but I'm
really after READING an index.
I believe I have set everything up
Can anybody help me? It is a bit urgent.
On Thursday, March 20, 2014 3:09:56 PM UTC+5:30, Anikessh Jain wrote:
Hi All.
I am a newbie to Elastic search and I am configuring Kibana with Logstash
and Redis and Elasticsearch in Centos 32 Bit and when i am trying to
start the service of
Got it! It seems that clearing the cache is a workaround :)
client.admin().indices().prepareClearCache(INDEX_NAME).execute().actionGet();
On Thursday, 20 March 2014 11:17:02 UTC+1, Tomasz Romanczuk wrote:
I have indexed queries in the percolator. Next I want to update some of
them and
As David said, for push, you must modify your middleware that performs the
insert/update/delete - there is nothing ES can do for you. You must add an
ES client that can execute the respective operations on your data against
an ES cluster.
Polling does not scale; push does.
I do not
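The push approach described above - adding an ES client next to the database write in your middleware - can be sketched like this. The two "clients" below are plain dicts standing in for MSSQL and Elasticsearch; in a real application they would be your SQL driver and an Elasticsearch client:

```python
# Dual-write sketch: every write goes to the primary store and to the
# search index in the same code path. The stores here are in-memory
# stand-ins, not real database/ES clients.
sql_store = {}
es_index = {}

def save_document(doc_id, doc):
    sql_store[doc_id] = doc   # primary write (your database)
    es_index[doc_id] = doc    # index write (your ES client)
    # A production version would also handle partial failure here,
    # e.g. queue the ES write for retry if the index call fails.

save_document(1, {"name": "widget"})
```

The key point is that both writes happen in the application layer, so the index stays current without any polling.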
Awesome demo! Very well done!
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 20 March 2014 at 11:41:24, Kevin Wang (kevin807...@gmail.com) wrote:
Hi All,
I've released version 1.2.0 of the Elasticsearch Image Plugin.
The Image Plugin is a Content-Based
This is a bug and has been fixed. Can you try using the latest 0.90.x
release or maybe upgrade to the latest 1.0.x release?
On 20 March 2014 00:02, Riyaz mohamed.ri...@gmail.com wrote:
Hi,
I am using elasticsearch v0.90.5 and trying to set QueryName (_name) for a
has_child query as
Hello all,
I'm looking into getting rid of the S3 gateway. I'm snapshotting to S3 and
use extensive replication (each shard replicated to 4 machines). In the
unlikely case that I lose a shard completely, I can recover from backup, and
I even have the possibility to repopulate the data
I indexed 10 queries in the percolator index. Then 9 were deleted.
Sometimes it looks like the index didn't refresh (I repeated the steps many
times); deleted queries are still matched and returned in the response. I
tried to clear the cache and refresh the index, but sometimes it doesn't
work. My code:
Hi,
I've been assigned to upgrade a single-node production Elasticsearch
server.
The ideal would be to have an upgrade process description which avoids
losing data. So my question, taking this requirement into account: is it
possible to upgrade such a server, knowing from Elasticsearch
I recommend using master - there are several improvements done in this area. Also using the latest Shark (0.9.0) and
Hive (0.12) will help.
On 3/20/14 12:00 PM, Nick Pentreath wrote:
Hi
I am struggling to get this working too. I'm just trying locally for now,
running Shark 0.8.1, Hive 0.9.0
Nice post-mortem, thanks for the writeup. Hopefully someone will stumble
on this in the future and avoid the same headache you had :)
How would you force IPv4? I tried using preferIPv4Stack and setting
network.host to _eth0:ipv4_, but it still did not work. I even switched off
iptables at a
Hi, I have a document indexed in this format.
"hits": [
  {
    "_index": "temp",
    "_type": "test",
    "_source": {
      "brand": [
        "A",
        "B",
        "C",
        "D",
        "E",
        "F",
I have the same issue with your version, and I don't see where in Kibana I
can disable the zero-fill checkbox.
On Thursday, March 20, 2014 2:29:24 PM UTC+1, Xwilly Azel wrote:
On Thursday, March 20, 2014 1:15:44 PM UTC+1, Isaac Hazan wrote:
The attached as well.
It’s another way to
Never mind, I'm an idiot, it clearly mentions it needs 0.90.x in the README
:(
On Wednesday, March 19, 2014 12:49:46 PM UTC-4, Alex at Ikanow wrote:
I downloaded the latest Kibana3, popped it on a tomcat instance sharing
space with my elasticsearch (0.19.11) instance and tried to connect
Thank you for the reply. I am not getting any errors, but I am not able to
connect to my Elasticsearch using Java. Here is my code.
import java.net.InetSocketAddress;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import
Hi Elasticsearch, still waiting to see if this is a known issue, possibly
that's resolved in a future release, or if this is something I did? I'd
appreciate knowing, at least, if anyone can help. Thanks much.
On Friday, March 14, 2014 5:29:10 PM UTC-4, Jon-Paul Lussier wrote:
Hey
Use port 9300
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 March 2014 at 14:34, Venu Krishna yvgk2...@gmail.com wrote:
Thank you for the reply. I am not getting any errors, but I am not able to
connect to my Elasticsearch using Java. Here is my code.
import
Actually this is my Elasticsearch index, http://localhost:9200/. As you
suggested, I replaced 9200 with 9300 in the above code; when I executed the
application I got the following exceptions.
Mar 20, 2014 7:17:45 PM org.elasticsearch.client.transport
WARNING: [Bailey, Gailyn] failed to get
Is nobody there to help me? Please help, I am in need.
On Thursday, March 20, 2014 3:31:42 PM UTC+5:30, Anikessh Jain wrote:
Can anybody help me it is a bit urgent .
On Thursday, March 20, 2014 3:09:56 PM UTC+5:30, Anikessh Jain wrote:
Hi All.
I am a newbie to Elastic search and I am
Thanks Martijn
Tried in v0.90.12 and it works. Thank You!
On Thursday, March 20, 2014 7:21:57 AM UTC-4, Martijn v Groningen wrote:
This is a bug and has been fixed. Can you try using the latest 0.90.x
release or maybe upgrade to the latest 1.0.x release?
On 20 March 2014 00:02, Riyaz
You are correct in your analysis of the fuzzy scoring. Fuzzy variants are
scored (relatively) the same as the exact match, because they are treated
the same when executed internally.
If you want to score exact matches higher, I would use a boolean
combination of an exact match and a fuzzy
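The boolean combination suggested above might look roughly like this. The field name `title`, the query text, and the boost value are all illustrative, not from the thread:

```python
# Bool "should" combining an exact match clause (boosted) with a fuzzy
# match clause: exact hits satisfy both clauses and outscore fuzzy-only
# variants. Field name and boost are made up for illustration.
query = {
    "query": {
        "bool": {
            "should": [
                {"match": {"title": {"query": "elasticsearch",
                                     "boost": 2.0}}},
                {"match": {"title": {"query": "elasticsearch",
                                     "fuzziness": 1}}}
            ]
        }
    }
}
```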
On Thursday, March 20, 2014 3:18:35 PM UTC+1, Xwilly Azel wrote:
On Thursday, March 20, 2014 2:29:24 PM UTC+1, Xwilly Azel wrote:
On Thursday, March 20, 2014 1:15:44 PM UTC+1, Isaac Hazan wrote:
The attached as well.
It’s another way to circumvent the problem
Hi,
also I am getting the warning "Message not fully read" from one of my ES
nodes.
I certainly googled a lot and found out that different versions of the JVM,
as well as different versions of the ES server and clients, can cause this.
Well, I'm pretty sure I have the same versions everywhere.
My
What did you download?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 20 March 2014 at 14:54, Anikessh Jain anikesshjai...@gmail.com wrote:
nobody there to help me , help me please i am in need
On Thursday, March 20, 2014 3:31:42 PM UTC+5:30, Anikessh Jain wrote:
Can
Hi,
All I'm doing is building a map and passing that to Gson for serialization.
A snippet from my map method:
logEntryMap.put(cs("User-Agent"), values[9]);
context.write(NullWritable.get(), new Text(gson.toJson(logEntryMap)));
values[] is a String array. Everything that goes into the map that
What does Elasticsearch use to serve up responses?
Thanks!
Hi,
If I understand well, the formula used for the term frequency part in the
default similarity module is the square root of the actual frequency. Is it
possible to modify that formula to include something like a
min(my_max_value,sqrt(frequency))? I would like to avoid huge tf's for
documents
There is something wrong with your setup.
How many ES nodes do you have?
On which IP addresses are the ES hosts listening?
I understood you have 2 hosts, but it seems you have only one on your
local machine.
This is the code (a bit modified) I am using at the moment
public
My guess is that Gson adds the said field in its result. The base64 suggests
that there's some binary data in the mix.
By the way, can you show more of your code? Any reason why you create the
JSON yourself rather than just passing logEntryMap to ES-Hadoop?
It can create the JSON for you -
wget
https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.2.tar.gz
tar xvf elasticsearch-0.20.2.tar.gz
mv elasticsearch-0.20.2 elasticsearch
wget
http://github.com/elasticsearch/elasticsearch-servicewrapper/archive/master.zip
unzip master
mv
On Thursday, March 20, 2014 3:09:56 PM UTC+5:30, Anikessh Jain wrote:
Hi All.
I am a newbie to Elastic search and I am configuring Kibana with Logstash
and Redis and Elasticsearch in Centos 32 Bit and when i am trying to
start the service of elastic search i am getting the below error
I have unit tests for this MR job, and they show that the JSON output is a
string as I'd expect, so Gson is most likely not the cause.
I'm hesitant to show more code (owned by the work-place), but I can
describe it a little bit further:
- The mapper gets a W3C log entry
- The log entry is
Please help with the above error.
On Thursday, March 20, 2014 9:03:46 PM UTC+5:30, Anikessh Jain wrote:
wget
https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.2.tar.gz
tar xvf elasticsearch-0.20.2.tar.gz
mv elasticsearch-0.20.2 elasticsearch
wget
Don't bother digging deeper, since I suspect the network.
I tried many different configurations while trying to pinpoint the problem,
so I did not write down the various states, just the successes/failures.
Using the described methods, IPv4 was indeed working, but multicast was
still not
It depends on what you mean by serve, but transport is handled by Netty and
storage by Lucene. On top of them and a few more libraries, Elasticsearch
adds distribution management, search, percolation, aggregations, etc.
On Thu, Mar 20, 2014 at 3:51 PM, Joshua P jpetersen...@gmail.com wrote:
You might be wondering if it is using Tomcat or Glassfish or something
too. The answer is not usually. There is a plugin that will let you
install it in a servlet container but most folks just run it as a
standalone service. It has an init script and stuff.
Nik
On Thu, Mar 20, 2014 at 1:02
Thanks for the info.
How do I run the service on IPv4? When I start it, it starts on IPv6 instead
of IPv4. How do I change that?
On Thursday, March 20, 2014 10:30:35 PM UTC+5:30, David Pilato wrote:
I don't know.
So let me help you for elasticsearch:
wget
Thank you both! This is what I wanted to know.
On Thursday, March 20, 2014 1:05:00 PM UTC-4, Nikolas Everett wrote:
You might be wondering if it is using Tomcat or Glassfish or something
too. The answer is not usually. There is a plugin that will let you
install it in a servlet container
Sure - take a look at dynamic_templates - you define one under your
index/type and specify the match for your field.
You can either define the mapping for the fields that you want and leave the
so-called catch-all (*) directive last or, if you have some kind of naming
convention, use that:
I can't seem to make my EC2 cluster of 2 nodes work.
- If I comment out the section below, I can connect to each instance
individually.
- With the sections included as below, I can connect to node 1, but node 2
gives me Request failed to get to the server (status code: 0):
cluster.name:
Hi Everyone,
I have put together some auto complete functionality based around the blog
posted on ES page (http://www.elasticsearch.org/blog/you-complete-me/). I
use the following to create the index.
PUT test_index
{
  "mappings": {
    "query": {
      "properties": {
        "product": { "type"
Hi,
I am following the elasticsearch chef cookbook tutorial here:
http://www.elasticsearch.org/tutorials/deploying-elasticsearch-with-chef-solo/
I am getting stuck on this step:
time ssh -t $SSH_OPTIONS $HOST sudo chef-solo --node-name elasticsearch-test-1
-j /tmp/node.json
Here is the error
On the same page I gave you, there is a chapter on configuration.
I think you should at least read it.
Also in the guide there is a search bar - very useful to search for
information. Try it with "network".
I'm sure you will find the information you are looking for.
--
David ;-)
Twitter : @dadoonet
I can't seem to make my EC2 cluster of 2 nodes work.
- If I comment out the section below, I can connect to each instance
individually and query it.
- With the sections included as below, I cannot query neither node, I get
Request failed to get to the server (status code: 0):
cluster.name:
/etc/elasticsearch$ more /var/log/elasticsearch/mycluster.log
[2014-03-20 19:00:37,635][INFO ][node ] [node_1] version
[0.90.10], pid[3520], build[0a5781f/2014-01-10T10:18:37Z]
[2014-03-20 19:00:37,635][INFO ][node ]
[node_1]initializing
...
OK, I tried all the hints, but now I can't solve my original problem.
I need to update a value of type custom_foo.
In my previous approach I would do ctx._source.custom_foo.value += 1.
But now there is a vector and I don't know at which index custom_foo is.
Is there any fast method to
Thanks for the very commendable description.
Do you also have a full stack trace of the "Message not fully read"
exception? Is it happening in the client?
Do you use any plugins?
Jörg
On Thu, Mar 20, 2014 at 3:37 PM, tufi tufan.oezdu...@yoc.com wrote:
Hi,
also I am getting the warning
Hello,
I would appreciate it if someone could suggest an optimal number of shards
per ES node for optimal performance, or any recommended way to arrive at the
number of shards given the number of cores and the memory footprint.
Thanks in advance.
Regards,
Rajan
Unfortunately, there is no way that we can tell you an optimal number. But
there is a way that you can perform some capacity tests, and arrive at
usable numbers that you can extrapolate from. The process is very simple:
- Create a single index, with a single shard, on a single
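The capacity-testing loop being described might be sketched like this. The `index_batch` function below is a stand-in for your real bulk-indexing call against the single-shard test index, and the thresholds are placeholders:

```python
import time

def index_batch(docs):
    # Stand-in for a real bulk-index request against your test index;
    # here it just simulates some per-document indexing cost.
    time.sleep(0.001 * len(docs))

def find_capacity(batch_size=100, max_latency_s=0.5, max_batches=50):
    """Keep indexing batches until one breaches the latency threshold;
    return how many documents were accepted before that point."""
    total = 0
    for _ in range(max_batches):
        start = time.time()
        index_batch(["doc"] * batch_size)
        if time.time() - start > max_latency_s:
            break  # the shard has hit its practical limit
        total += batch_size
    return total
```

The number this yields for one shard on one node is what you then extrapolate from when sizing the real index.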
Assuming that this is my mapping:
{
  "event": {
    "properties": {
      "name": {
        "type": "string",
        "index": "analyzed",
        "fields": {
          "raw": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      },
You can provide your own similarity to be used at the field level, but
recent version of elasticsearch allows you to access the tf-idf values in
order to do custom scoring [1]. Also look at Britta's recent talk on the
subject [2].
That said, either your custom similarity or custom scoring would
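As a client-side illustration of the capped formula the question asks about (a real deployment would implement this server-side in a custom similarity or a scoring script, as described above):

```python
import math

# Capped term-frequency contribution: min(my_max_value, sqrt(tf)),
# i.e. the default sqrt(tf) factor with a ceiling so huge term
# frequencies cannot dominate the score.
def capped_tf_score(tf, my_max_value):
    return min(my_max_value, math.sqrt(tf))
```

With a cap of 5, a document repeating the term 100 times scores the same on this factor as one repeating it 25 times.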
Any clues to what I am missing? I turned discovery trace on but don't see
any useful info.
ES is creating the log files upon startup, but they are empty. I switched
every log level to DEBUG and it started pouring more log output into
elasticsearch.log; still, no query or indexing is logged.
-rw-r--r-- 1 elasticsearch elasticsearch 0 Mar 21 00:54
elasticsearch_index_indexing_slowlog.log
Are both machines in the same security group?
The logging configuration specifies how and what to log, but it does not
specify when or what actually constitutes a slow query/index. Not all
queries/index requests are logged, just the slow ones. You need to define
the threshold in the main elasticsearch.yml config file.
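For reference, the thresholds go into elasticsearch.yml along these lines (the values are illustrative examples, not recommendations):

```yaml
# Operations slower than these thresholds are written to the slow logs.
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.fetch.warn: 1s
index.indexing.slowlog.threshold.index.warn: 10s
```

With no thresholds set, nothing is considered "slow", which is why the slow-log files stay empty.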
Hi Brad
I agree with what Mark and Zachary have said and will expand on these.
Firstly, shard- and index-level operations in Elasticsearch are
peer-to-peer. Single-shard operations will affect at most two nodes: the node
receiving the request and the node hosting an instance of the shard
Include -Djava.net.preferIPv4Stack=true to your JAVA_OPTS environment
variable before starting Elasticsearch. Also read up on the network page
that David referred to:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-network.html
If you want a control wrapper around
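Concretely, that could look like the following before launching (assuming your startup script honors the JAVA_OPTS environment variable, as the stock bin/elasticsearch script does):

```shell
# Append the flag that makes the JVM prefer the IPv4 stack when binding
# sockets, preserving any JAVA_OPTS already set.
export JAVA_OPTS="${JAVA_OPTS:-} -Djava.net.preferIPv4Stack=true"
# then start Elasticsearch as usual, e.g.: bin/elasticsearch
```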
I've been trying to troubleshoot an issue with my single ES node.
When I went to look at it, it was at 100% disk usage. A lot of this was due
to ES logs taking up space on the volume.
When I cleared out the logs, recovered a lot of the space, and tried to
restart, then I saw
I've got the issue and the solution. In the mapping specified above, _all
should have enabled set to true in order to enable _timestamp and store it.
Hence, closing the post.
On Wednesday, 19 March 2014 19:31:35 UTC+5:30, nishidha randad wrote:
Hi,
I've been facing an issue retrieving _timestamp for
I am planning to use Elasticsearch (ES) for storing event logs. Per day,
the application should store nearly 3000+ events, and the size will be
around 30-50K.
I need to take some statistics monthly, half-yearly, and yearly; also,
year-old data can sometimes be ignored, but data should be retained for
Hi,
Is there any way to get the timestamp of an ES index creation through ES
query?
Thanks in advance,
Nishidha