On 12/11/2014 17:43, Alessandro Bonfanti wrote:
On 12/11/2014 17:20, Nikolas Everett wrote:
On Wed, Nov 12, 2014 at 11:13 AM,
Alessandro Bonfanti bnf@gmail.com
Hi Mark. I am running into the same issue with custom aggregations. After
reading your reply, I found a fault in my readFrom() method too and I fixed
it but it still did not solve the problem. Did you need to fix anything
else?
Hi,
We are trying to move data from a Kafka topic to Elasticsearch. We are
getting the data in JSON format in the Kafka topic. We are planning to move
this data to Elasticsearch and then finally visualize it using Kibana. We are
right now using the Flume Elasticsearch sink with the default serializer for
Hi Robert,
The filters aggregation was added in version 1.4. As you are running 1.3.4,
you will need to upgrade your Elasticsearch cluster if you want to make use
of it.
Colin
On Monday, 1 December 2014 12:49:09 UTC, Robert Gardam wrote:
I have a query that I'm trying to run against ES 1.3.4
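For anyone landing on this thread with 1.4+, a minimal sketch of a filters-aggregation request body (just building the JSON in Python; the index-agnostic "status" field and bucket names are made up for illustration):

```python
import json

def filters_agg_body():
    # "filters" aggregation: one bucket per named filter (added in ES 1.4).
    # Field name "status" and the bucket names are hypothetical.
    return {
        "size": 0,
        "aggs": {
            "by_status": {
                "filters": {
                    "filters": {
                        "errors": {"term": {"status": "error"}},
                        "warnings": {"term": {"status": "warning"}},
                    }
                }
            }
        }
    }

print(json.dumps(filters_agg_body(), indent=2))
```

Each named filter produces its own bucket in the response, which is what makes this more convenient than running several separate filtered queries.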
Hello,
I am new to Elasticsearch and Kibana, but in the last months I developed
two visualizations for chord diagrams and hive plots in Kibana 3.
Now we are thinking about changing to Kibana 4.0.
I created two panels for the visualizations under the folders
...\kibana\src\app\panels
in Kibana 3
Hi all,
I wonder if anyone knows how to solve this one: I'm trying to mimic the Linux
grep command's behavior in Elasticsearch.
*The problem:*
Sending a search request for "hello world" should first return all matches
where these two words appear together in a sentence. However, internal
Elasticsearch
The first post should be approved .
On Wednesday, July 2, 2014 2:36:44 AM UTC+8, Clinton Gormley wrote:
Hi all
Recently we've had a few spam emails that have made it through Google's
filters, and there have been calls for us to change to a
moderate-first-post policy. I am reluctant to
Hi Boaz,
I finally restarted the master node and it worked, indeed. It's anyway a
quite confusing error message / situation :)
Cheers,
On Sunday, November 30, 2014 11:45:15 PM UTC+1, Boaz Leskes wrote:
Hi Teo, Max,
From the query, I can see that cluster_state is not shipped. The
Agreed. Happy things work now.
On Tue, Dec 2, 2014 at 11:07 AM, Teo Ruiz teor...@gmail.com wrote:
Hi Boaz,
I finally restarted the master node and it worked, indeed. It's anyway a
quite confusing error message / situation :)
Cheers,
On Sunday, November 30, 2014 11:45:15 PM UTC+1, Boaz
I'm trying to write a mapreduce job where I can query elasticsearch so it
can return to me specific fields. Is there any way to do that?
My mapping contains about 30 fields and I will need just 4 of them
(_id,title,description,category)
The way I was doing it is to process each answer to get
Hi,
I am trying to download a copy of Elasticsearch (the 1.3.4 .deb, although I
am having problems with all versions) and I get a timeout (or a zero-byte
download). This is happening on both my work network and on the Amazon
server I am trying to install on. It worked yesterday.
Here's the
link
Jeremy,
Our download service is not functioning at the moment. We are working hard
to fix it.
In the meantime you can download the 1.3.4 debian package from the
following maven repository URL:
Simply specify the fields that you are interested in, in the query and you are
good to go.
[1]
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-fields.html
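A minimal sketch of such a request body (in Python, just building the JSON; the field list comes from the question above, while the "title" match and the keyword are illustrative):

```python
import json

def fields_query(keyword):
    # Return only the listed fields rather than the whole 30-field _source.
    # The query itself is a stand-in; only the "fields" part matters here.
    return {
        "query": {"match": {"title": keyword}},
        "fields": ["_id", "title", "description", "category"],
    }

print(json.dumps(fields_query("elasticsearch")))
```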
On 12/2/14 12:52 PM, Elias Abou Haydar wrote:
I'm trying to write a mapreduce job where I can query
Hi,
Full disclosure: I'm a newbie at this.
I have been trying to create an index using the below code:
private boolean createIndices( ) {
String indexName = "279";
String typeName = "badge_type";
String mapping = "{\"badge_type\": {\"_type\":{\"store\":\"true\"},"
+ "\"_index\":{\"enabled\":\"true\"},"
+
Hi,
I'm trying to use the Java API to pass an array of parameters to a search
template. Since SearchRequestBuilder.setTemplateParams only accepts a
Map<String, String> as a parameter, I'm stuck. Any ideas are welcome. I'm
using Elasticsearch 1.4.0
Search Template created via sense:
GET
I have used Tsung to load test my clusters. It's very easy to install and
configure.
It will give you more insight into memory usage, disk I/O, network I/O, etc.
I suggest that you read this blog entry
https://engineering.helpshift.com/2014/tsung/ about it.
You'll find here
Currently I query ES for a single key value pair like this:
String query_key = "abc";
String query_val = "def";
searchRequestBuilder.setQuery(QueryBuilders.matchQuery(query_key,
query_val)).execute().actionGet();
Now, instead of single key-value pair, I have the following key-value pair
map:
Maybe this could help:
QueryBuilder qb = multiMatchQuery(
    "joe smith england",
    "name.first",
    "name.last",
    "address.country");
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
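For the original question (a whole map of key-value pairs rather than one pair), another option is a bool query with one match clause per entry. A sketch in Python building the request body (field names are illustrative):

```python
import json

def bool_from_map(kv):
    # One "match" clause per map entry, combined with AND semantics via
    # "must". Replace "must" with "should" for OR semantics.
    return {
        "query": {
            "bool": {
                "must": [{"match": {k: v}} for k, v in kv.items()]
            }
        }
    }

body = bool_from_map({"name.first": "joe", "address.country": "england"})
print(json.dumps(body, indent=2))
```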
This has been fixed in elasticsearch 1.5 (branch 1.x):
https://github.com/elasticsearch/elasticsearch/pull/8255
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
On Mon, Dec 1, 2014 at 10:42 PM, N Bijalwan ahcir...@gmail.com wrote:
We are using ManifoldCF to crawl web pages and then index them through
Elasticsearch.
Is there a way to get only the few lines that contain the searched keyword
in the response of an Elasticsearch query, instead of the whole content? Like
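One common approach is highlighting, which returns short fragments around each match instead of the whole document. A hedged sketch of the request body (the "content" field name and fragment sizes are assumptions):

```python
import json

def highlight_query(keyword):
    # Ask for up to 3 fragments of ~150 chars around each match in "content",
    # rather than returning the full crawled page body.
    return {
        "query": {"match": {"content": keyword}},
        "highlight": {
            "fields": {
                "content": {"fragment_size": 150, "number_of_fragments": 3}
            }
        },
    }

print(json.dumps(highlight_query("elasticsearch")))
```

The fragments come back in each hit's `highlight` section, separate from `_source`.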
Hi everyone,
I'm trying to figure out how to do a histogram facet:
{
  "query" : {
    "match_all" : {}
  },
  "facets" : {
    "histo1" : {
      "histogram" : {
        "field" : "field_name",
        "interval" : 100
      }
    }
  }
}
where the field
"cluster_name" : "elasticsearch",
"nodes" : {
  "ExaKHrlBREaM41s68j5zTQ" : {
    "timestamp" : 1417529361647,
    "name" : "Sleek",
    "transport_address" : "inet[/192.168.2.117:9300]",
    "host" : "localhost",
    "ip" : [ "inet[/192.168.2.117:9300]", "NONE" ],
    "indices" : {
      "docs" : {
I found my error; it was in the JSON. This is the format it should have:
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "autocomplete_filter": {
            "type": "edge_ngram",
            "min_gram": 2,
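For reference, a fuller sketch of what those settings might look like once complete. The `max_gram` value and the custom-analyzer wiring are assumptions beyond the snippet above; the edge_ngram filter itself follows it:

```python
import json

# Sketch: an edge_ngram filter wired into a custom "autocomplete" analyzer.
# min_gram=2 comes from the thread; max_gram=20 and the analyzer name are
# illustrative assumptions.
autocomplete_settings = {
    "settings": {
        "index": {
            "analysis": {
                "filter": {
                    "autocomplete_filter": {
                        "type": "edge_ngram",
                        "min_gram": 2,
                        "max_gram": 20,
                    }
                },
                "analyzer": {
                    "autocomplete": {
                        "type": "custom",
                        "tokenizer": "standard",
                        "filter": ["lowercase", "autocomplete_filter"],
                    }
                },
            }
        }
    }
}

print(json.dumps(autocomplete_settings, indent=2))
```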
Thanks Nik for the very descriptive solution. I had also made some mapping
mistakes, which is why I was not able to get highlighted text in the
response for my sample data.
I fixed it by using the following mapping:
http://localhost:9200/cnn/test/_mapping
{
  "test": {
    "properties": {
      "file": {
        "type":
Setting store to yes isn't actually required. It might increase
performance in some cases at the cost of extra disk space. I leave it
false everywhere and have no trouble.
Nik
On Tue, Dec 2, 2014 at 10:00 AM, N Bijalwan ahcir...@gmail.com wrote:
Thanks Nik for very descriptive solution. I
I don't really get what you are looking for.
But maybe you'd simply like to set node.name in elasticsearch.yml?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
Hi everybody,
I have encountered very strange behaviour of the search when the search
query is wrapped in angle-bracket characters:
for example, the query text <products> matches no documents, while products
matches all the documents in the index.
I won't post the index creation definitions (i.e.
As specified in the node filter cache documentation
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-cache.html#node-filter
:
The cache implements an LRU eviction policy: when a cache becomes full, the
least recently used data is evicted to make way for new data.
It's a range query on all terms less than "products". You'll want to use
match instead of query_string and you won't see weird stuff like that.
On Tue, Dec 2, 2014 at 11:02 AM, Anthony Andrushchenko amrma...@gmail.com
wrote:
Hi everybody,
I have encountered very strange behaviour of the search
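To make the difference concrete, a sketch building both request bodies (the "title" field is an assumption): match treats the text as literal terms to analyze, while query_string runs it through the Lucene query parser, where characters like < and > are operators:

```python
import json

def query_string_body(text):
    # query_string parses the text: operators, ranges, wildcards all apply.
    return {"query": {"query_string": {"query": text}}}

def match_body(text):
    # match analyzes the text as plain terms; no operator parsing.
    # Field name "title" is a placeholder.
    return {"query": {"match": {"title": text}}}

print(json.dumps(query_string_body("<products>")))
print(json.dumps(match_body("<products>")))
```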
Hi Nikolas,
thank you very much for your fast and accurate response!
Best,
Tony
On Tuesday, 2 December 2014 at 18:06:49 UTC+2, Nikolas Everett wrote:
It's a range query on all terms less than "products". You'll want to use
match instead of query_string and you won't see weird
I've done some poking around with hot_threads during spikes as you've
suggested, put some of the output here:
https://gist.github.com/mmcguinn/bb9de3f5f534d2581f62
As noted in the gist - normally the hottest threads are from the management
pool (at around 4-8%) and they are normally in a lock
OK, that's a good suggestion. I'll set store to no, since yes is not
essential.
Naveen
On Tuesday, 2 December 2014 20:33:03 UTC+5:30, Nikolas Everett wrote:
Setting store to yes isn't actually required. It might increase
performance in some cases at the cost of extra disk space. I leave it
Can anyone help me on this?
Thanks,
Min
On Monday, December 1, 2014 at 17:12:32 UTC-8, Min Zhou wrote:
Hi all,
From the source code, seems that one parent type can only have one child
type. Can one parent type have multiple children types in one index?
Min
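As far as I can tell, nothing stops several child types from pointing at the same parent type via _parent in the mapping. A sketch of such an index mapping (all type names are illustrative):

```python
import json

# Sketch: two child types ("answer" and "comment") each declaring the same
# parent type ("question") via _parent.
mappings = {
    "mappings": {
        "question": {},
        "answer": {"_parent": {"type": "question"}},
        "comment": {"_parent": {"type": "question"}},
    }
}

print(json.dumps(mappings, indent=2))
```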
--
You received this message because you are
Hey All,
I know we can use a plugin to get started in ES: every once in a while,
load values for a key into our ES index, and then use these values when
running the scoring of a query against the documents.
Not really sure what plugin base we need to start from.
To give the basics,
Need to store some fields with a kind of mask and perform searches with the
mask and without it.
For example, store a field with the value 123456-45.
I want this value to be returned when fetching both 12345 and 123.4 or 6-45.
I always know which characters are to be ignored when mapping, but not in
the
As noted here --
https://groups.google.com/forum/#!searchin/elasticsearch/snapshot$20duration/elasticsearch/bCKenCVFf2o/TFK-Es0wxSwJ
-- the time it takes to perform a snapshot increases the more snapshots you
take. This eventually can become untenable. So far, the only solution
seems to be
They're here:
https://github.com/spinn3r/kibana-4-deb
if anyone wants them.
If you have any issues feel free to issue a pull request.
Thanks!
Has anyone seen this problem? My mapping says that the field is of type
geo_point, but when I read documents using the Java API and get the source
map, the type of the field is String and I can't cast it to a GeoPoint.
...
"lat_lng" : {
  "type" : "geo_point",
  "lat_lon" : true
I assume you have ported old data from 0.90 into 1.3 and continued to use
the same index in 1.3?
If so, the reason you see extra load is probably due to the feature that ES
1.3 tries to merge older 0.90 Lucene segments with new segments. By doing
that, the segments are upgraded in the background
If you mean allocation filtering (like
cluster.routing.allocation.exclude._ip) then you just need to specify all
three ip addresses with commas between them.
Nik
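A sketch of the settings body you would PUT to _cluster/settings for that (the IPs are placeholders):

```python
import json

# Transient cluster setting excluding three nodes by IP, comma-separated
# as described above. The addresses are placeholders.
exclude_settings = {
    "transient": {
        "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2,10.0.0.3"
    }
}

print(json.dumps(exclude_settings))
```

Once applied, shards drain off the excluded nodes so they can be shut down safely.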
On Tue, Dec 2, 2014 at 4:00 PM, Mark Walkom markwal...@gmail.com wrote:
What do you mean by decommission, there is no API call for
You are correct, I left indices in place. Unfortunately, I have been able
to replicate the behavior on one of the machines after deleting all indices
from it and starting fresh.
I'm going to try and replicate on a fresh VM now that I have resources
available (it's a VMware + Chef setup).
On
Hi,
We are using Elasticsearch 1.3.2 and having issues running queries with
aggregations on unmapped fields. Some documents in the index will have this
nested aggregation field but some will not. Did some initial research but
did not find a way to tell ES to ignore unmapped fields in
I index exception messages, and among other fields there are a couple (Code,
Status) which used to be numbers, so the default dynamic mapping happily
mapped them as integers.
Then after some time it turned out that those fields are not necessarily
integers; they can be strings. When this happens
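One way to sidestep this kind of conflict, assuming you can reindex or set the mapping before the first document arrives, is to map such fields explicitly as strings instead of relying on dynamic mapping. A sketch (field names from the message above, everything else an assumption):

```python
import json

# Explicitly map Code and Status as not_analyzed strings so a later
# string value cannot clash with a dynamically-guessed integer type.
explicit_mapping = {
    "properties": {
        "Code": {"type": "string", "index": "not_analyzed"},
        "Status": {"type": "string", "index": "not_analyzed"},
    }
}

print(json.dumps(explicit_mapping))
```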
I have created a pull request which creates this feature
https://github.com/logstash-plugins/logstash-output-elasticsearch/pull/25
On Wednesday, November 5, 2014 at 17:41:07 UTC+13, Alejandro Alves wrote:
Hello,
I want to have different indexes depending on the type, i.e. if the type is
Some defaults in the segment merge settings have changed since 0.90, so I
suggest checking whether you want to modify them. What you see is not much
of a concern; it shows that receiving docs is active and the merge runs are
trying to keep up with indexing the documents.
Jörg
On Tue, Dec 2, 2014 at 10:19
What specifically do you want to know?
There is this in the docs -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html#jvm-version
On 3 December 2014 at 09:34, Sitka sitkaw...@gmail.com wrote:
I've been looking thru the docs for a compatibility matrix for ELK and
Jonathan,
Did you find a solution to this? I've been facing pretty much the same
issue since I've added nested documents to my index - delete percentage
goes really high and an explicit optimize leads to an OOM.
Thanks.
On Saturday, August 23, 2014 8:08:32 AM UTC-7, Jonathan Foy wrote:
Hello
Hello,
I have a very large data set spread over multiple indexes that I want to
basically grab each record/transform into another index. Reading the docs
points me towards scan scroll and then some bulk indexing. What concerns
me is failure during this copy it seems there is no way to
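The bulk side of that scan/scroll loop can be sketched independently of the cluster. A Python helper (the client calls are omitted; the hit shape follows what the scroll API returns) that turns one scroll page into bulk action lines for the target index:

```python
def to_bulk_actions(hits, target_index):
    """Turn one scroll page of hits into alternating bulk
    action/source entries, keeping the original _type and _id so a
    retried page simply overwrites the same documents (idempotent)."""
    lines = []
    for hit in hits:
        lines.append({"index": {"_index": target_index,
                                "_type": hit["_type"],
                                "_id": hit["_id"]}})
        lines.append(hit["_source"])
    return lines
```

Because the copy reuses each document's _id, re-running a failed batch is safe: it re-indexes the same documents rather than duplicating them, which addresses the failure-mid-copy concern.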
There is another approach, which is listed below:
kafka -> logstash consumer (which is logstash-kafka) -> es
You can check logstash-kafka at https://github.com/joekiller/logstash-kafka
And I tried the approach which you mentioned above.
However, for me, the es sink of Flume is kind of unstable.
Anyway,
Hey folks...
I'm upgrading to the newest version of Elasticsearch, and noticed that
ElasticSearchException was renamed to ElasticsearchException.
Was this done across the board for lots of classes? Just this class? A
mistake? Just curious what drove this.
- Tim
Hello
This is something I still struggle with, though not to the degree that I
once did. I've been in production for several months now with limited
issues, though I still don't consider it to be a solved problem for myself,
as it requires regular manual maintenance.
First, I threw more
Hi:
Is there a configuration variable that I could set on an ES instance
that can control the maximum number of client connections? What is the
default limit on the maximum connections, and how can I change it?
If I have a Spark application that is retrieving results from an ES
instance, I
+1
I wrote some of my own panels too. I really want to know how to quickly port
them into Kibana 4.
On Tuesday, December 2, 2014 at 17:06:59 UTC+8, Georg Seibt wrote:
Hello,
I am new to Elasticsearch and Kibana, but in the last months I developed
two visualizations for chord diagrams and hive plots in Kibana 3.
Now we are
I've had some issues with high IO exacerbated by lots of deleted docs as
well. I'd get deleted docs in the 30%-40% range on some indexes. We
attacked the problem in two ways:
1. Hardware. More ram and better SSDs really really helped. No consumer
grade SSDs for me.
2. Tweak some merge
Copy pasting from the relevant GitHub issue for future reference:
(https://github.com/elasticsearch/elasticsearch/issues/8735)
In the next Marvel release we will have automatic support for this. I wonder
if things would work for you if you enter https://mydomain/ in the server box
of Sense. It
In the type_table mentioned above, assigning the character as DIGIT instead
of ALPHA yields the indexing I expected.
On Monday, 24 November 2014 23:18:58 UTC+5:30, Anand kumar wrote:
Hi all,
I've been using elasticsearch-1.2.1 and I've been indexing .xml and .jsp
file content.
And this is
Thanks Mark.
So there is also no way to use the libraries already written?
On Wednesday, 3 December 2014 at 04:58:27 UTC+1, Mark Walkom wrote:
KB3 and 4 are not compatible, any dashboards will need to be redesigned.
On 3 December 2014 at 14:26, chenlin rao rao.c...@gmail.com wrote:
I can't comment on that directly sorry as it's outside my area, you might
be better off raising an issue on github -
https://github.com/elasticsearch/kibana/issues
On 3 December 2014 at 17:57, Georg Seibt seibtge...@gmail.com wrote:
Thanks Mark.
So, there is also no possibility to use the