No, it can be more; it depends on what sort of queries you are running and
what data structures/types you are indexing.
Your best bet is to keep throwing data at the index until the server can't
take it; then you know the limit.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email:
For people who face the same problem: it happens because the highlighter
uses the analyzers of all the fields in the query unless you turn
require_field_match to true.
FYI.
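A minimal sketch of what that looks like in a search request (index and field names here are made up), with require_field_match turned on so each field is highlighted only against its own query terms:

```json
GET /my_index/_search
{
  "query": {
    "multi_match": {
      "query": "some text",
      "fields": ["title", "body"]
    }
  },
  "highlight": {
    "require_field_match": true,
    "fields": {
      "title": {},
      "body": {}
    }
  }
}
```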
Ivan Ji wrote on Tuesday, 5 August 2014 at 20:08:31 UTC+8:
The query command I used is as
{'multi_match': {'fields':
Elasticsearch uses LZF on stored fields (including _source). The storage
requirements will depend on your implementation and the complexity of your
data; however, planning for a 1:1 ratio plus 10% with compression enabled ought
to put you on the right path. Otherwise, you'll have to experiment to
Hi,
I am using ES 0.90.13 with 10 clusters. Java Sun 1.7.53.
We find out that the # of replicas on our ES clusters was zero.
Hence, we increased the # of replica last week on ES2 to 1.
Since then the cluster has been flipping between yellow and green. It runs
fine for several hours and then goes yellow (with X
Elasticsearch uses LZ4, see
http://blog.jpountz.net/post/35667727458/stored-fields-compression-in-lucene-4-1
For storage requirements, you need around twice the disk space if you
incrementally grow your index, because of additional segment merge space
overhead.
Jörg
On Tue, Aug 12, 2014 at
The HTTP module is not disabled, so the three master-eligible nodes can also
serve as result servers. Result servers collect shard responses from
other nodes and send the merged result back to clients, without having to
search or index themselves. This is similar to a bridge or a proxy server.
So you are wrestling with aliases. You can not delete aliases by file
system operations. Have you checked
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-aliases.html#deleting
for deleting aliases?
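In case that link moves: removing an alias goes through the indices aliases API with a remove action. A sketch with placeholder names:

```json
POST /_aliases
{
  "actions": [
    { "remove": { "index": "my_index", "alias": "my_alias" } }
  ]
}
```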
Jörg
On Tue, Aug 12, 2014 at 4:10 AM, Sam2014
I'm very new to Elasticsearch and have a question about hierarchical
tokenizing of a path. Here is my code example:
My mapping code:
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "path-analyzer": {
          "type": "custom",
          "tokenizer": "path-tokenizer"
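For comparison, a complete version of this kind of setup, assuming the truncated snippet was heading toward the path_hierarchy tokenizer, might look like:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "path-analyzer": {
          "type": "custom",
          "tokenizer": "path-tokenizer"
        }
      },
      "tokenizer": {
        "path-tokenizer": {
          "type": "path_hierarchy",
          "delimiter": "/"
        }
      }
    }
  }
}
```

With this, a path like /a/b/c is tokenized into /a, /a/b, and /a/b/c.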
Thor Loki Odin etc...
:p
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web
Hi,
I have looked at TermsLookupFilter, and it is a good approach to caching
frequently used filters. However, even if I write a custom filter plugin, I
cannot use a BitSet to hold any sort of document identifier; even the _uid
field is converted into a TermFilter.
Assume a scenario where I
On Monday, 11 August 2014 15:31:28 UTC+2, bitsof...@gmail.com wrote:
I have 8 data nodes and 6 coordinator nodes in an active cluster running
1.2.1
I want to upgrade to 1.3.1
When reading
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html
the
http://www.elasticsearch.org/ can't be visited
Hi~
I'm new to both Solr and Elasticsearch. I have read that both support
creating indices on HDFS.
So what's the difference between Solr and Elasticsearch in the HDFS case?
I would not store my indices on HDFS. :)
Too slow IMHO.
Use local storage and let elasticsearch distribute your data over multiple
machines. Basically, with elasticsearch you don't need HDFS to scale out.
I don't know about Solr.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet
Does anyone have an idea how to escape and in a query???
On Thursday, 7 August 2014 at 18:27:08 UTC+3, Tihomir Lichev wrote:
Hello,
I recently discovered a very interesting problem.
The analyzer is *whitespace:*
{
  "mappings": {
    "test": {
      "properties": {
        "title": {
Hey,
The default analyzer used in the completion suggester is the simple one,
which strips out numbers.
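If the numbers need to survive, one option (index, type, and field names here are illustrative) is to declare the completion field with an analyzer that keeps digits, e.g. standard:

```json
PUT /songs
{
  "mappings": {
    "song": {
      "properties": {
        "suggest": {
          "type": "completion",
          "index_analyzer": "standard",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
```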
--Alex
On Thu, Aug 7, 2014 at 1:43 PM, Hemant hemant19...@gmail.com wrote:
Hello,
I was trying the following use cases using completion suggester -
1. Suggest Song on search song by id
This is not an escaping problem.
Always use a match query. Never use query_string.
Jörg
On Tue, Aug 12, 2014 at 10:50 AM, Tihomir Lichev shot...@gmail.com wrote:
Does anyone have an idea how to escape and in a query???
On Thursday, 7 August 2014 at 18:27:08 UTC+3, Tihomir Lichev wrote:
I think I don't agree ...
I'm using query_string because I want to give the users the ability to use
AND, OR, +, -, etc. out of the box.
I'm able to escape all the other symbols except those, and I can use them as
part of the field content as well as part of the query, like any regular
letter.
I dont
Once again, it is not related to escaping.
Why don't you use simple_query_string?
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html
It wraps the crappy query_string syntax into match queries, correctly
analyzed and parsed for Elasticsearch.
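A sketch of what that looks like (field names invented); simple_query_string keeps the user-facing operators like +, -, | and quoted phrases, and never throws parse errors on bad input:

```json
GET /my_index/_search
{
  "query": {
    "simple_query_string": {
      "query": "\"exact phrase\" +required -excluded",
      "fields": ["title", "body"],
      "default_operator": "and"
    }
  }
}
```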
Thanks Alexander,
It worked with the standard analyzer. I will look at different analyzers and
choose the one that best fits the use case.
Thanks,
Hemant
On Tuesday, August 12, 2014 2:21:48 PM UTC+5:30, Alexander Reelsen wrote:
Hey,
The default analyzer used in the completion suggester is the
That makes much more sense :)
Thanks, it works now!
And somewhere in the query_string docs it should be mentioned that
simple_query_string is to be preferred where applicable :-P
On Tuesday, 12 August 2014 at 12:07:58 UTC+3, Jörg Prante wrote:
Once again, it is not related to escaping.
Why
Hi,
We are using Elasticsearch for our search application. Currently we have 3
master nodes and 5 data nodes in our cluster, with 160 indices of 5 shards
each and a replica count of 2.
Currently we have 20k users. We create an alias for each user, with the user
id as the alias name. As the number of aliases
On Tue, Aug 12, 2014 at 1:13 AM, Jeff Steinmetz jeffrey.steinm...@gmail.com
wrote:
Although I was specifically talking about documentation for the Java
search API.
For example, there is this
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/java-facets.html
But
this is my DSL
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "range": {
      "date_range": {
        "field": "@timestamp",
        "format": "yyy.MM.dd.HH.mm.ss",
        "ranges": [{ "from": "2014.08.12.09.18.45", "to":
Heya,
I'm pleased to announce the release of the Elasticsearch File System River
Plugin, version 1.2.0.
FS River Plugin offers a simple way to index local files into Elasticsearch.
https://github.com/dadoonet/fsriver/
Release Notes - fsriver - Version 1.2.0
Fix:
* [84] - Empty fs river
Replace
.setTransportAddress(new InetSocketTransportAddress("localhost", 9300));
with
.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
And I guess if you don't give a cluster name, it automatically joins the
default cluster.
I tried the code that you provided and changed
hello guys,
I tried to install Elasticsearch 1.3.1 today on a dev machine and got a
problem related to mlockall that I have never seen before and can't wrap my
head around.
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f09617e040a, pid=3143,
Hi,
Does the TermsLookupFilter cache results in a bitmap/bitset? Or does it
cache the results of the filter completely without using bits for document
identifiers?
Thanks,
Sandeep
Can you try with Java 7u60?
Jörg
On Tue, Aug 12, 2014 at 11:58 AM, Markus Burger m4rkus.bur...@gmail.com
wrote:
hello guys,
i tryed to install elasticsearch 1.3.1 today on a dev machine and got a
problem related to mlockall that i have never seen before and cant wrap my
head around.
# A
sadly exactly the same error occurs :/
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7fae3d43440a, pid=21440, tid=140387243357952
#
# JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build
1.7.0_60-b19)
# Java VM: Java HotSpot(TM)
Hi David,
Thanks for your reply.
I'm not talking about scaling out.
Out team has a project providing web services based on Lucene. The current
architecture indexes the documents to lucene through some kafka-like queues.
But we have a problem: when massive documents comes for indexing(eg:
Hi David,
Thanks for your reply.
I'm not talking about scaling out.
Our team has a project providing web services based on Lucene. The current
architecture indexes the documents to lucene through some kafka-like queues.
But we have a problem: when massive documents comes for indexing(eg:
OK. Can you check whether the memlock setting for the ES user allows it to
reserve locked memory?
ulimit -l
On RHEL this defaults to 64 KB; it should be changed to unlimited.
Jörg
On Tue, Aug 12, 2014 at 1:30 PM, Markus Burger m4rkus.bur...@gmail.com
wrote:
sadly exactly the same error occurs
S User -- ES user
Hello.
There is a field with a large amount of text. How can I get part of the
value, such as the first 200KB? Something analogous to from()/size(), like
"give me the first 200KB of the object with this id".
[root@dev limits.d]# runuser -s /bin/bash elasticsearch -c 'ulimit -l'
unlimited
[root@dev limits.d]# sysctl vm.max_map_count
vm.max_map_count = 262144
markus
On Tuesday, 12 August 2014 at 13:58:30 UTC+2, Jörg Prante wrote:
S User -- ES user
Heya,
I'm pleased to announce the release of the Elasticsearch File System River
Plugin, version 1.3.0.
FS River Plugin offers a simple way to index local files into Elasticsearch.
https://github.com/dadoonet/fsriver/
Release Notes - fsriver - Version 1.3.0
Update:
* [74] - Update to
And did you try it with Elasticsearch? I mean indexing while still using the
service?
If you really hit an issue, you can think of allocating a new index on
dedicated nodes and then moving it to the live nodes.
Using aliases would be even better, so you'll be able to switch from the old
index to the
And
rpm -q libffi
gives
libffi-3.0.5-3.2.el6.x86_64
?
Jörg
On Tue, Aug 12, 2014 at 2:19 PM, Markus Burger m4rkus.bur...@gmail.com
wrote:
[root@dev limits.d]# runuser -s /bin/bash elasticsearch -c 'ulimit -l'
unlimited
[root@dev limits.d]# sysctl vm.max_map_count
vm.max_map_count =
exactly...
[root@dev ~]# rpm -q libffi
libffi-3.0.5-3.2.el6.x86_64
markus
On Tuesday, 12 August 2014 at 14:51:45 UTC+2, Jörg Prante wrote:
And
rpm -q libffi
gives
libffi-3.0.5-3.2.el6.x86_64
?
Jörg
On Tue, Aug 12, 2014 at 2:19 PM, Markus Burger m4rkus...@gmail.com
javascript:
Hi all,
I have a 3-node cluster. One is the master and the others are eligible to
become master if the master node fails. I installed the CSV River plugin on
the master node. The CSV files that need to be processed are also on the
master node. When I run the CSV river plugin from the master, it tries to
I think you could set this in elasticsearch.yml on the nodes where you don't
want any river allocated:
node.river: _none_
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 12 August 2014 at 15:01:27, Sree (srssreej...@gmail.com) wrote:
Hi all,
I have a 3 node
Same configuration runs here, on bare metal. Maybe you run a VM?
Just a shot in the dark: it looks like a bug in libffi, and there are more
recent versions than 3.0.5, so if you feel like chasing this bug, I would
try whether libffi 3.1 from https://github.com/atgreen/libffi/ gives a different
I am not sure I understand: is your question about aggregations or
indexing speed?
On Tue, Aug 12, 2014 at 11:24 AM, 陈浩 hum...@gmail.com wrote:
this is my DSL
{
query:{
match_all:{}
},
aggs: {
range: {
date_range: {
Yeah, it's a VMware VM, but I have a couple of ES 1.1 nodes sitting in
another cluster without issues. Are there known issues with running ES in a VM?
Thanks Jörg! I'll update libffi in the next couple of days.
markus
On Tuesday, 12 August 2014 at 15:12:16 UTC+2, Jörg Prante wrote:
Same configuration
The link doesn't work.
I am not sure about the difference; I have always deleted them the same way:
curl -XPUT http... // to create, and curl -XDELETE http... // to delete.
Did the log tell you anything? There are some weird errors in there.
On Tuesday, August 12, 2014 3:04:39 AM UTC-4, Jörg Prante wrote:
So you are
Hi Ashish,
How many documents do your queries typically retrieve? (the value of the
`size` parameter)
On Tue, Aug 12, 2014 at 12:48 AM, Ashish Mishra laughingbud...@gmail.com
wrote:
I recently added a binary type field to all documents with mapping
store: true. The field contents are large
_field_names tracks the field names of the current index document, so you
need to be in the context of your nested documents to aggregate on their
field names. That would give something like:
GET test/_search
{
  "aggs": {
    "nested_docs": {
      "nested": {
        "path": "a"
      },
      "aggs": {
There are so many VMs, but together with native code paths like JNA +
mlockall + libffi etc., the chance that something odd breaks down the road
is always there...
With a VMware VM and mlockall, not everything can work like on bare metal.
Memory management strongly depends on the host OS settings.
Try using a filter aggregation:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filter-aggregation.html
The idea is that the filter is the outermost aggregation, and the
aggregation you actually want to filter is the sub-aggregation.
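Sketching that shape with made-up field names, the filter sits as the outer aggregation and the one you actually want is nested under it:

```json
GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "only_errors": {
      "filter": { "term": { "level": "error" } },
      "aggs": {
        "per_day": {
          "date_histogram": { "field": "@timestamp", "interval": "day" }
        }
      }
    }
  }
}
```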
Cheers,
Hi,
Unfortunately this is not possible.
On Mon, Aug 11, 2014 at 12:48 PM, arthur.miro...@progforce.com wrote:
Hello, I'm new in elasticsearch and I have a question.
How can I do paging for nested objects, and is it possible? (I need to get
comments with paging.)
*My mapping:*
{
test: {
The link doesn't work.
I am not sure about the difference; I have always deleted the indices the
same way: curl -XPUT http... // to create, and curl -XDelete /http:...
I did not create any aliases; I am just using GET _aliases to see what's
still on the node.
Did the log tell you anything? There
Hi Jin,
It is possible to write a custom aggregation, you can for example look at
this plugin: https://github.com/algolia/elasticsearch-cardinality-plugin
that implements an aggregation as a plugin.
If that would work for you, another option would be to contribute this
aggregation to Elasticsearch
Hi all,
I know that JRE 1.7.0_55 is the 'recommended' JRE for ES 1.2+:
http://www.elasticsearch.org/blog/java-1-7u55-safe-use-elasticsearch-lucene/
There seem to have been some posts regarding Java 8/MVEL issues
(e.g. http://jira.codehaus.org/browse/MVEL-299), but is the current JRE 8
(11) ok
You can use Java 8u5
Do not use Java 8u11 with Groovy or Guava, there is a bug
https://bugs.openjdk.java.net/browse/JDK-8051012
Jörg
On Tue, Aug 12, 2014 at 4:00 PM, Derry O' Sullivan derr...@gmail.com
wrote:
Hi all,
I know that JRE 1.7.0_55 is the 'recommended' jre for ES 1.2+:
Hi Jorg,
Thanks for the fast response. Looks like it affects JRE 7u65, so I guess the
safe options are:
1.7.0 (55/60) or 1.8.0 (5)
Derry
On 12 August 2014 15:09, joergpra...@gmail.com joergpra...@gmail.com
wrote:
You can use Java 8u5
Do not use Java 8u11 with Groovy or Guava, there is a bug
@Jorg,
Thanks for the advice; I will make sure that I do so during the actual
implementation, but this is purely for testing the connection. Also, I see
a client.close() and a client.threadPool().shutdown(), but I do not see a
client.threadPool().close(). I am using ES v1.3.1.
@ Vivek,
I am not
Mark - isn't the shard allocation all/none a cluster-wide setting? Hence
why it does that on all nodes?
Clinton - what you said makes sense; however, if that procedure is
incorrect, then the official upgrade page on the elasticsearch site should be
changed, as it states
When the process is
Also, Clinton, the upgrade page states the below. So what you are saying is
that when re-enabling allocation after each node is restarted (going from
1.2.1 to 1.3.1), the below *will not* apply (incompatibility), because shards
would be going from 1.2.1 to 1.3.1 rather than the reverse. Correct?
I have a five node Elasticsearch cluster set up so that two nodes are in
one zone and the other three nodes are in a different zone (let's call them
SideA and SideB) via use of the forced awareness attributes. I also have a
sixth node that has Logstash on it. Logstash is outputting to one of
Just wondering: will remove/re-install get rid of the stagnant index, since
I plan on upgrading from 1.2.1 to 1.2.2?
On Tuesday, August 12, 2014 9:42:10 AM UTC-4, Sam2014 wrote:
The link doesn't work.
I am not sure about the difference, I have always deleted the indices the
same way, curl
Actually I am using Groovy... so 'localhost' and localhost are the same for
me... Are you getting the transport client object in your code?
On Tuesday, August 12, 2014, Kfeenz kfeeney5...@gmail.com wrote:
@Jorg,
Thanks for the advice, I will make sure that I do so during actual
implementation,
Yes, I receive a TransportClient back from the call client = new
TransportClient().
In debug I see that nodeService.clusterName.value = mycluster, as expected.
But it still fails on the execute() call.
But it still fails on the execute() call
On Tuesday, August 12, 2014 10:29:30 AM UTC-4, Vivek Sachdeva wrote:
Actually I am
There is one node with zen unicast that cannot connect. I do not know how
to find out more about that; it seems to be an EC2 issue.
Unless you are being attacked via the scripting vulnerability, there are
only syntax errors in the configuration and queries... you should be able
to fix those.
Jörg
On
That's OK, this is how Elasticsearch works. There is no need to randomize
or shuffle primaries; they have exactly the same work to do as replicas.
Replicas are promoted to primaries automatically on demand.
Jörg
On Tue, Aug 12, 2014 at 4:24 PM, Andrew Ruslander
andrew.ruslan...@gmail.com
I created a gist for the mappings I am sending through node-es, with a
returned error:
MapperParsingException[Root type mapping not empty after parsing!
The gist is at
https://gist.github.com/KnowledgeGarden/b965b7e78f19f9be9025
Note that if I remove the upper portion of the JSON:
topics: {
Yes, it's client.threadPool().shutdown().
Jörg
On Tue, Aug 12, 2014 at 4:17 PM, Kfeenz kfeeney5...@gmail.com wrote:
@Jorg,
Thanks for the advice, I will make sure that I do so during actual
implementation, but this is purely for testing the connection.. Also, I see
a client.close() and a
Hi,
Can someone please give me a hint? I'm having trouble finding a solution
for this.
Hi,
Is there any plugin point, like a river or analyzer, where I can add my own
custom-made Lucene query?
Thanks
Vineeth
Thanks for the response, Jörg. My only concern is: what if the two sides
are located some non-trivial geographic distance from each other? If all
the primaries live on SideB and there is a large quantity of updates
coming into a node on SideA, doesn't it have to forward all of that to the
Latency is an issue, you are right. But that is not related to
primary/replica distribution.
If the cluster state changes, e.g. new field names arrive, the master must be
reached very quickly for the update, and the master pushes the new state out
to all nodes. It is crucial that state propagation must
hi,
you should not remove core.
please try to add it back.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-put-mapping.html
On Tuesday, August 12, 2014, Jack Park jackp...@topicquests.org wrote:
I created a gist for the mappings I am sending through node-es, with a
returned
https://lh6.googleusercontent.com/-adNxVo3OpSc/U-o2sO7TcMI/BJ4/woxOBumFC80/s1600/head.png
As you can see from the image snapped from 'head', I have unallocated shards
while a couple of nodes are empty. This happened when I recycled some
nodes. The last two indexes are freshly created and
Your code works if you don't add the cluster name to it. Tried it with Java
this time.. :)
On Tue, Aug 12, 2014 at 7:47 PM, Kfeenz kfeeney5...@gmail.com wrote:
@Jorg,
Thanks for the advice, I will make sure that I do so during actual
implementation, but this is purely for testing the
The default cluster name is elasticsearch. Changing it in your code
works
On Tue, Aug 12, 2014 at 9:33 PM, Vivek Sachdeva
vivek.sachd...@intelligrape.com wrote:
Your code works if you dont add cluster name to it. Tried with Java
this time.. :)
On Tue, Aug 12, 2014 at 7:47 PM,
I upgraded to 1.3.1, and one of the indexes seems to back up OK now, though
the other one is still giving the error.
On Monday, August 11, 2014 8:40:44 PM UTC+3, Aleh Aleshka wrote:
Hello
I have a 1.2.2 cluster of 6 nodes with several indexes configured with 2
to 4 replicas.
I'm trying to
This one is for the devs, and especially Rashid: is there any new version
of Kibana in the works?
I'm asking because I'm about to start a project in my company for log
management, and there are some requirements for it (user separation, event
correlation, a histogram to compare two values, and
Just curious:
on all browsers here in Silicon Valley, I cannot reach elasticsearch.org.
Is it just me (or Comcast)?
Other websites appear fine.
Working fine for me (in Brazil).
On Tuesday, August 12, 2014 1:57:48 PM UTC-3, Jack Park wrote:
Just curious:
on all browsers here in silicon valley, I cannot raise any
elasticsearch.org
Is it just me (or comcast?)
Other websites appear fine.
It appears that liquidweb (their host) has some problems. A friend in
Canada can open it, and I can on my cell phone, but nobody I know around
here can reach it on other networks.
On Tue, Aug 12, 2014 at 9:58 AM, Antonio Augusto Santos
mkha...@gmail.com wrote:
Working fine for me (in Brazil).
I just heard second-hand that the outage is pretty large.
On Tue, Aug 12, 2014 at 10:14 AM, Jack Park jackp...@topicquests.org wrote:
It appears that liquidweb (their host) has some problems. I and a
friend in Canada can open it on my cell phone, but nobody I know
around here can raise it on
Hello -
Github has a pretty slick search interface for issues, complete with a set
of qualifiers that users can stick onto their free-form text queries.
They're using ES for their code search, and I'm guessing for their issues
as well:
https://help.github.com/articles/searching-issues
Is there
Look into the Lucene query parser, which is the syntax that the query
string query uses. After that, look into the various Lucene contrib modules
that extend the query syntax (span near is one).
I do not think that anyone has implemented a new query parser as an
elasticsearch plugin yet, but I
I'd implement the query parser in your application, then build the queries
and send them to Elasticsearch. The advantage of that is that you don't
have to bounce all the Elasticsearch nodes when you upgrade your query
language. It's what we did. Our code isn't elegant or pretty or
anything
My recommendation is to use simple_query_string, which is a mini language
in itself.
Besides this, I am about to complete a plugin for Contextual Query Language
(CQL) and Search/Retrieve via URL (SRU) for bibliographic searches, which
is very close to your question.
Thanks!
The top level of my mappings.json has changed from the gist to look like this:
{
  "core": {
    "properties": {
      "lox": {
And that appears to work.
On Tue, Aug 12, 2014 at 8:43 AM, Jun Ohtani joht...@gmail.com wrote:
hi,
you should not remove core.
please try to add that.
The query size parameter is 200.
Actual hit totals vary widely, generally around 1000-1. A minority are
much lower. About 10% of queries end up with just 1 or 0 hits.
On Tuesday, August 12, 2014 6:31:29 AM UTC-7, Adrien Grand wrote:
Hi Ashish,
How many documents do your queries
To add a little more information: the six nodes are broken up into three
groups. The first two have node.zone: first, the next two node.zone: second,
and the last two node.zone: third.
I also have cluster.routing.allocation.awareness.attributes: zone in my
config.
So as you can see, the
If the 200kb number is fixed, then the simplest solution would be to store
that content separately in a new field. It does not need to be analyzed,
just stored.
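A sketch of such a mapping (index, type, and field names are placeholders): the extra field is stored but neither indexed nor analyzed, so it can be fetched on its own via the fields option of a search:

```json
PUT /my_index/_mapping/doc
{
  "doc": {
    "properties": {
      "body_head": {
        "type": "string",
        "index": "no",
        "store": true
      }
    }
  }
}
```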
Perhaps highlighters might work. Never used them, so it is just a guess.
Cheers,
Ivan
On Aug 12, 2014 8:17 AM, Dmitriy Bashkalin
I imagine script field can do this.
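A hedged sketch of that idea (the field name body is an assumption, and a real script should guard against values shorter than the cut-off):

```json
GET /my_index/_search
{
  "query": { "ids": { "values": ["1"] } },
  "script_fields": {
    "first_chunk": {
      "script": "_source.body.substring(0, 200000)"
    }
  }
}
```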
On Aug 12, 2014 6:38 PM, Ivan Brusic i...@brusic.com wrote:
If the 200kb number is fixed, then the simplest solution would be to store
that content separately in a new field. It does not need to be analyzed,
just stored.
Perhaps highlighters might work.
If you haven't already, see:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html#rolling-upgrades
In general, I would advise against automated upgrades unless you have a
significantly large cluster, mostly because nothing is guaranteed version to
Great, thanks for the help. I did a few things and I am not sure which one
ended up wiping the index: I killed the cluster, upgraded to 1.2.2, rebooted
the 2 AWS instances, indexed on each node separately, then joined the
cluster. Things look fine now.
On Tuesday, August 12, 2014 10:46:33 AM
Hi David,
I'm afraid it would be a little expensive to move to ES.
I came across Twitter's elephant-bird-lucene yesterday, and I will try it.
Thanks for your reply.
Why would it be expensive?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 13 August 2014 at 03:43, Jianyi phoenix.w.2...@gmail.com wrote:
Hi David,
I'm afraid it would be a little expensive to move to ES.
I came across elephant-bird-lucene of twitter yesterday, and I