The index analyser uses the keyword tokenizer with edge-ngram (and other) filters,
so it only matches from the start of the string, for autocomplete.
The search analyser is also keyword-based, with various filters.
The pattern-replace filter for apostrophes is applied to both.
On Tuesday, October 7,
name_synonyms:
  type: synonym
  synonyms:
    - 1,one
    # - ,and,+=and
    - ' = and'
How can I use YAML to correctly configure a synonym for ampersands and the
'plus' symbol and the word 'and'?
The above synonym for 1/one seems to work.
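One thing worth checking (a sketch, not tested against your setup): since the keyword tokenizer emits the whole input as a single token, a token-level synonym for '&' or '+' only matches when the entire string is that symbol. A pattern-replace char filter, like the one you already apply for apostrophes, may be a better fit; the filter name `amp_plus_to_and` is made up:

```yaml
char_filter:
  amp_plus_to_and:
    type: pattern_replace
    pattern: '\s*(&|\+)\s*'
    replacement: ' and '
```

Attaching it to both the index and search analysers would normalise the symbols on both sides before tokenization.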
With ES, you can go up to the bandwidth limit the OS allows for write I/O
(if you disable throttling, etc.).
This means that, in aggregate, writing to one shard can be as fast as writing to
thousands of shards in parallel. There is an OS limit on file
system buffers, so the more shards, the more
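On 1.x, store throttling can be relaxed cluster-wide; a sketch (apply with care on spinning disks):

```json
PUT /_cluster/settings
{
  "transient": {
    "indices.store.throttle.type": "none"
  }
}
```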
I have read the nested docs; I create the mapping and get this mapping back:
"fields": {
  "type": "nested",
  "properties": {
    "_id": { "type": "string" },
    "name": { "type": "string" }
  }
}
Ciprian,
Thanks for your input - I had indeed missed that disk space failure and it
turns out I was hitting an intermittent disk space issue.
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving
Hi,
we are using version 1.3.2 of ES; each index has 4 shards.
I will try adjusting the store throttling to an acceptable number without
crushing our disk I/O.
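On 1.x, that number is set roughly like this (20 MB/s was the old default; pick a value your disks tolerate):

```json
PUT /_cluster/settings
{
  "persistent": {
    "indices.store.throttle.max_bytes_per_sec": "40mb"
  }
}
```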
On Tuesday, October 7, 2014 at 7:35:35 PM UTC+2, Ivan Brusic wrote:
Currently shard allocation is throttled by default with a target
Hi all,
I'm using elasticsearch 1.2.1 with java 1.7.0_60. I have one index with 52
shards and 1 replica across three servers. Until yesterday everything was
working like a charm, but then we started facing a strange issue.
After every bulk insert/update, one of the primary shards (always
I have come upon an interesting problem with pagination that I was
wondering if anyone else solved elegantly. The problem can best be
described by twitter's dev docs:
https://dev.twitter.com/rest/public/timelines.
Essentially, using the from and size parameters
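The cursor-style approach the Twitter docs describe can be sketched in ES 1.x DSL (the field name `status_id` is made up): instead of increasing `from`, filter on the last id seen in the previous page:

```json
POST /tweets/_search
{
  "size": 20,
  "sort": [ { "status_id": "desc" } ],
  "query": {
    "filtered": {
      "filter": {
        "range": { "status_id": { "lt": 123456789 } }
      }
    }
  }
}
```

Each page then costs the same regardless of depth, unlike deep `from`/`size` paging.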
If you create item-centric documents (in your case, venues) and maintain an
exhaustive list of users who like that item then this can be a problem for
very popular items e.g. movies etc that can be liked by large numbers of
people. It would need constantly updating.
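An item-centric venue document of the kind described might look like this (index and field names are made up):

```json
PUT /venues/venue/42
{
  "name": "Some Venue",
  "liked_by": ["user_17", "user_94", "user_203"]
}
```

For a very popular item that array grows huge, and every new like forces a reindex of the whole document, which is exactly the problem described above.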
By contrast, a user-centric
Hi,
My ElasticSearch cluster is running on Windows platform.
Last weekend, one of the nodes in the cluster went down due to a timeout
(network issue).
When the network issue was resolved, I had to manually restart the node to
rejoin the cluster.
Question: Is there any mechanism/script which can
Hey,
I figured it out: I am able to update, but I am not able to remove an object
that satisfies the condition.
PUT twitter/twit/1
{
  "list": [
    { "tweet_id": 1, "a": "b" },
    { "tweet_id": 123, "a": "f" }
  ]
}
POST /twitter/twit/1/_update
{
  "script": "foreach (item :
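A working version of that removal might look like this (a sketch; exact syntax depends on your configured script language, MVEL-style here, and iterating backwards avoids index shifting while removing):

```json
POST /twitter/twit/1/_update
{
  "script": "for (int i = ctx._source.list.size() - 1; i >= 0; i--) { if (ctx._source.list[i].tweet_id == tid) { ctx._source.list.remove(i) } }",
  "params": { "tid": 123 }
}
```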
one significant shortcoming is that you cannot do any highlighting.
Not necessarily true - see this feature which is primarily for the use
case of searching on an all type field but highlighting results using
detailed fields:
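The feature being referred to appears to be `matched_fields` highlighting with the fast vector highlighter; a sketch, with index and field names made up:

```json
POST /docs/_search
{
  "query": { "match": { "_all": "quick brown fox" } },
  "highlight": {
    "fields": {
      "content": {
        "type": "fvh",
        "matched_fields": [ "content", "content.plain" ]
      }
    }
  }
}
```

Note that the highlighted fields would need `term_vector: with_positions_offsets` in the mapping for the fast vector highlighter to work.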
I have an ES cluster of 3 servers where indexes are configured with 5
shards and 2 replicas (so, every index has 5 primary shards and 10 replica
shards, with 5 shards allocated to each server). I have just upgraded from
1.0.0 to 1.3.4 by stopping one server at a time, updating ES then
I am able to create the snapshot; however, when I try to restore the
snapshot in a different cluster, the indexes get created
but there is no data in them.
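For reference, a minimal snapshot-and-restore round trip looks roughly like this (repository name and path are made up; the same repository location must be reachable from both clusters):

```json
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
POST /_snapshot/my_backup/snapshot_1/_restore
```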
On Monday, September 22, 2014 5:16:02 PM UTC+5:30, David Pilato wrote:
Exactly.
Note that if you can simply «
Hi everyone,
we can't get Elasticsearch (1.3.4 at the moment) to start up on an MS
Server 2008 R2 machine that is virtualized by VMware. It was installed using
the service.bat file.
In elasticsearch.log in debug mode we don't see the message where it
prints which ports it has bound to.
see
I understand this may depend on a lot of factors, but I am curious about what
an efficient number of indexes is for a large data set.
I would like to break up indexes by user and by date (I think), mostly
because it will make data management easier on my end.
I am wondering when Elasticsearch will
Hi everyone,
Elasticsearch Hadoop 2.0.2 and 2.1 Beta2, featuring Apache Storm integration
and Apache Spark SQL, have been released.
You can read all about them here [1].
Feedback is welcome!
Cheers,
http://www.elasticsearch.org/blog/elasticsearch-hadoop-2-0-2-and-2-1-beta2/
--
Costin
To be able to customize the default word boundary properties in Lucene's
StandardTokenizer, I created an ElasticSearch plugin to be able to do this
- https://github.com/bbguitar77/elasticsearch-analysis-standardext
As mentioned before, there are other tokenizers / filters that can be used,
but
Hi,
first of all, I am a noob with Elastic, so bear with me.
I am interested to know if this scenario is possible:
User A posts some text; my app detects the language and indexes the text
with, for example, the Spanish analyzer.
User B posts some text in English; my app detects it and indexes the
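One common way to set that up (a sketch; index, type, and field names are made up) is one field per detected language, each with its own analyzer, and the app writes to whichever field matches the detected language:

```json
PUT /posts
{
  "mappings": {
    "post": {
      "properties": {
        "text_es": { "type": "string", "analyzer": "spanish" },
        "text_en": { "type": "string", "analyzer": "english" }
      }
    }
  }
}
```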
Hi,
Thanks for your answer. I think it will be the solution.
On Tuesday, October 7, 2014 at 11:53:18 AM UTC+2, Adrien Grand wrote:
Hi,
Aggregations can only count documents. So if you want to count values, you
need to model your data in such a way that each value is going to be a
document, for
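That modeling can be sketched with a nested field, where each value becomes its own hidden document that a nested aggregation counts once (index and field names are made up):

```json
PUT /metrics
{
  "mappings": {
    "doc": {
      "properties": {
        "values": {
          "type": "nested",
          "properties": {
            "v": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
POST /metrics/_search?search_type=count
{
  "aggs": {
    "vals": {
      "nested": { "path": "values" },
      "aggs": { "by_v": { "terms": { "field": "values.v" } } }
    }
  }
}
```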
Hi,
It's ok, I think you are right, nested will be the solution.
Regards,
Rémi.
On Tuesday, October 7, 2014 at 11:38:44 AM UTC+2, David Pilato wrote:
I have no idea. It's hard to understand what you exactly did without a
full reproduction.
A SENSE script posted as a GIST would help a lot I think.
Thanks. It's difficult to replicate w/o the data but I will try to ask on
github.
On Wednesday, October 8, 2014 6:04:52 AM UTC-4, Thibaut wrote:
Hi,
I would open up an issue on github. Even if it's just one node,
elasticsearch should restart.
Thanks,
Thibaut
On Tue, Oct 7, 2014 at
Hi Jorg,
I believe there is a difference between a normal string mapping and a
dynamic array mapping using bracket notation in Elasticsearch. If we use the
normal string type for ListofDescriptionIds in Elasticsearch, where can I
find the exact difference in searching data in a dynamic array and
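For what it's worth, Elasticsearch has no dedicated array mapping: any field may hold a single value or a list, and both are indexed and searched identically. A sketch (the index and type names are made up, the field name is from the post):

```json
PUT /test/doc/1
{ "ListofDescriptionIds": "id-1" }
PUT /test/doc/2
{ "ListofDescriptionIds": ["id-1", "id-2"] }
```

A match query for `id-1` on that field would find both documents; the mapping for the field is the same string mapping either way.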
Sorry, I don't understand what you mean.
So the snapshot works?
Then you copied the files to another place.
Then you restored?
And what happened?
--
David Pilato | Technical Advocate | elasticsearch.com
david.pil...@elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs
Le 8 octobre 2014 à
Thank you for the detailed response.
On Tuesday, October 7, 2014 10:13:04 PM UTC-7, Karel Minařík wrote:
Hi,
what you're looking for is a proxy which can communicate with an OAuth
provider (such as Google+ Sign-In, Sign in with
Twitter, etc.), verify the cookies,
Heya,
We are pleased to announce the release of the Elasticsearch MVEL language
plugin, version 1.4.0.
The MVEL language plugin allows you to use mvel as the language for scripts to
execute.
https://github.com/elasticsearch/elasticsearch-lang-mvel/
Release Notes - elasticsearch-lang-mvel -
Hi Matt
Plugin released today:
https://groups.google.com/d/msgid/elasticsearch/54356617.c24db40a.1b61.92b3SMTPIN_ADDED_MISSING%40gmr-mx.google.com
Feedback warmly welcomed.
Best
--
David Pilato | Technical Advocate | elasticsearch.com
david.pil...@elasticsearch.com
@dadoonet |
Heya,
We are pleased to announce the release of the Elasticsearch Mapper Attachment
plugin, version 2.4.0.
The mapper attachments plugin adds the attachment type to Elasticsearch using
Apache Tika.
https://github.com/elasticsearch/elasticsearch-mapper-attachments/
Release Notes -
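With the plugin installed, the attachment type is used in a mapping roughly like this (index and field names are made up; the file content is sent base64-encoded):

```json
PUT /docs
{
  "mappings": {
    "doc": {
      "properties": {
        "file": { "type": "attachment" }
      }
    }
  }
}
```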
Doh, it is on the very first page, right where the Connect button is:
Optional:
Quick Connect with 'URL' Parameter:
http://domain/?url=http://localhost:9200
I'm embarrassed.
Konstantin
On Tuesday, October 7, 2014 10:19:12 AM UTC-7, Konstantin Erman wrote:
I would like to use ElasticHQ,
Hi there, I've just downloaded the latest kibana and tried to start it.
Following the docs: unzip, and run bin/kibana
I get this error:
The Kibana Backend is starting up... be patient
LoadError: no such file to load -- puma/puma_http11
require at org/jruby/RubyKernel.java:1065
require at
Howdy All,
Just looking to get some advice on how to get the following dynamic mapping
working correctly. I'm fairly new to the mapping world in ES and would like
some help if possible. Can you let me know if this syntax is correct? Thanks
{
  "template": "logstash-*",
  "settings": {
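If the goal is the common logstash-style setup (dynamically mapped strings indexed not_analyzed), a template sketch might look like this; the settings and template names are illustrative:

```json
PUT /_template/logstash
{
  "template": "logstash-*",
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ]
    }
  }
}
```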
Heya,
We are pleased to announce the release of the Elasticsearch Memcached
transport plugin, version 2.4.0.
The memcached transport plugin allows the REST interface to be used over
memcached.
https://github.com/elasticsearch/elasticsearch-transport-memcached/
Release Notes -
I just upgraded my elasticsearch from 1.2.1. I have a 5-node cluster with 3
master and 3 data nodes (one master also serves data).
During the upgrade, I restarted my whole cluster. (Until then, my
cluster was very stable.) The cluster status was red for almost 12 hours.
Since unassigned_shards was
Thanks, Mark. This seems to solve my problems. I followed the instructions
in
www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-service.html
to set up use of the service script. While I was at it I increased heap to
4G. So far, so good.
On Tuesday, October 7, 2014 2:16:19
Hi all,
I'm seeking to move our indices on disk from one volume to another, on the
same machine. We are using version 0.90.10, and we can't change that at the
moment.
I tried copying the data to the new volume, changing the path.data setting
in elasticsearch.yml to point to the new directory, and
Copying live index files is very unlikely to work. Your best bet is to shut down
the node, then copy the files, then restart the node with the new data path.
-- Andrew
From: Tim Swetonic sweto...@gmail.com
Reply: elasticsearch@googlegroups.com elasticsearch@googlegroups.com
Date: October 8,
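As a config-level sketch of that procedure (the paths are made up): stop the node, copy the data directory to the new volume, point elasticsearch.yml at it, then restart:

```yaml
# elasticsearch.yml, after copying the old data directory to the new volume
path.data: /mnt/newvolume/elasticsearch/data
```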
This entirely depends on your data structure, volume and cluster sizing.
Hundreds work; thousands should be OK if you have a lot of nodes; tens of
thousands needs even more nodes.
Aliases will also affect your requirements.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email:
Today I have upgraded a 3-machine development ES cluster from 1.3.2 to 1.4.0
Beta1. As far as I can tell it went successfully: cluster state is green,
it responds to commands, and all the data seems intact. New data keeps coming
in and being indexed.
BUT somehow I completely lost Kibana cooperation!
I have a cluster with 3 physical servers and elasticsearch 1.2.3 installed.
I updated some data yesterday, and today I got a strange issue.
I query the data as follows:
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "applicationDateRouting.patentDocumentId":
I have a filtered query with 3 filters - query: iphone, color:5,
category:10, brand:25.
How can I get the number of products in each brand which have color:5 and
category:10?
In Solr I did it like: facet.field={!ex=tag_facet_brand}facet_brand
How can I exclude one filter from the aggregation context? (I
In Solr I did it like:
fq={!tag=tag_facet_brand}facet_brand:564&facet.field={!ex=tag_facet_brand}facet_brand
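The Elasticsearch equivalent of Solr's tag/ex trick (a sketch; field names are taken from the post, the index name is made up) is to keep the brand filter out of the main query and apply it as a post_filter, so aggregations are computed before it is applied:

```json
POST /products/_search
{
  "query": {
    "filtered": {
      "query": { "match": { "name": "iphone" } },
      "filter": {
        "bool": {
          "must": [
            { "term": { "color": 5 } },
            { "term": { "category": 10 } }
          ]
        }
      }
    }
  },
  "aggs": {
    "brands": { "terms": { "field": "brand" } }
  },
  "post_filter": { "term": { "brand": 25 } }
}
```

The hits honour all three filters, while the `brands` aggregation counts products per brand filtered only by query, color, and category.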
Hi All,
I've been using ES for a while and I'm really enjoying it, but we have a
few slow calls in our code. I have a few hunches around the code that's
using the client inefficiently, but I would like some definitive proof.
I've attempted to profile our application using YourKit when we're
Hello,
I've been facing a problem on one of my ES nodes for a few days that I can't
explain. The machine was recently rebooted and I seem to have lost
something.
Symptoms: the amount of used memory grows until the kernel triggers the OOM
killer, and the garbage collector is never triggered