Sorry for the delay.
Looks like you were right; after downgrading ES to 0.90.9 I couldn't
reproduce the issue in that manner.
Unfortunately, I found some other problems, and one looks like a blocker.
After a whole-ES-cluster powerdown, ES just started replaying 'no mapping for
... name of
Hi,
I want to know the strategy which Marvel follows to store its data:
for how long it stores the data, how it flushes the data, how much data can
be stored, and whether there are any limitations.
Also, how does it manage the data stored for each cluster?
Please let me know.
Thanks
It stores data forever, so you basically need to remove old data after some
days. Curator could help here.
In the future, Elasticsearch will have a built-in feature which will do
that. But for now, you need to take care of it yourself.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr /
How can we delete the data from Marvel?
Some instructions or commands would be helpful.
Thanks
On Monday, March 10, 2014 12:13:22 PM UTC+5:30, Shilpi Agrawal wrote:
Hi,
I want to know the strategy which marvel follows to store the data.
Like for how long it store the data and how it flushes
You cannot delete data from within Marvel yet.
But you can run
curl -XDELETE http://localhost:9200/.marvel-
or use curator:
https://github.com/elasticsearch/curator
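Curator aside, the retention logic is simple to sketch, assuming Marvel's daily `.marvel-YYYY.MM.DD` index naming (Python, for illustration; the 7-day cutoff is an arbitrary choice):

```python
from datetime import date, timedelta

def old_marvel_indices(index_names, today, keep_days=7):
    """Return the daily .marvel-YYYY.MM.DD indices older than keep_days."""
    cutoff = today - timedelta(days=keep_days)
    prefix = ".marvel-"
    stale = []
    for name in index_names:
        if not name.startswith(prefix):
            continue
        try:
            y, m, d = (int(p) for p in name[len(prefix):].split("."))
        except ValueError:
            continue  # skip indices that don't follow the daily pattern
        if date(y, m, d) < cutoff:
            stale.append(name)
    return stale

# Each stale index could then be removed with:
#   curl -XDELETE http://localhost:9200/<index-name>
```

This is essentially what a daily curator cron job automates for you.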
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 10 March 2014 at 09:27:25, Shilpi
Hi,
I started a snapshot request on a freshly-indexed ES 1.0.1 cluster with
the cloud plugin installed, but unfortunately the EC2 access keys configured
did not have S3 permissions, leaving ES in a weird state. I then sent a DELETE
snapshot request, and it has been stuck for more than a couple of hours,
Hi,
I’ve set the max and min heap to the same value with *ES_HEAP_SIZE=1200m* in
elasticsearch, and I am using *bootstrap.mlockall: true* as suggested by
elasticsearch so that process memory won’t get swapped.
But when I start Elasticsearch, it’s taking more memory than the max heap
mentioned, like 1.4g, and
Hi gkwelding,
I have checked explicitly on my box and the values for MAX_OPEN_FILES and
MAX_MAP_COUNT have been set to 65535.
Hi,
As you can see, on the other hand, aggregations give counts for 4, 5,
6, long and name, while facets don't. This is due to term selection:
by default, aggregations only return the top 10 terms (configurable through
the `size` parameter), and those top terms are sorted by count descending,
then by term.
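The selection rule just described can be emulated in a few lines (a sketch of the default behaviour, not the actual ES implementation):

```python
from collections import Counter

def top_terms(values, size=10):
    """Pick the top `size` terms the way a default terms aggregation does:
    by document count descending, ties broken by the term itself."""
    counts = Counter(values)
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:size]

# e.g. top_terms(["b", "a", "a", "c", "b"], size=2) keeps the two most
# frequent terms, with the tie between "a" and "b" broken by term order.
```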
Hey there,
I have one index containing various types. It does happen that a field with
the same name is mapped as string in one type and as integer in another type.
The following query on a type t1, where sample_field is mapped as string:
{
  "query": {
    "match_all": {}
  },
  "facets": {
Heya,
I am pleased to announce the release of the Elasticsearch File System River
Plugin, version 0.5.0.
FS River Plugin offers a simple way to index local files into Elasticsearch.
https://github.com/dadoonet/fsriver/
Release Notes - fsriver - Version 0.5.0
Fix:
* [53] - file.filename
Heya,
I am pleased to announce the release of the Elasticsearch File System River
Plugin, version 1.0.0.
FS River Plugin offers a simple way to index local files into Elasticsearch.
https://github.com/dadoonet/fsriver/
Release Notes - fsriver - Version 1.0.0
Update:
* [48] - Update to
Hi,
I'm trying to keep some scripts within config/scripts, but Elasticsearch
cannot seem to locate them. What could be a possible reason for this?
When I invoke one, ES fails with the following:
No such property: scriptname for class: Script1
Any ideas?
Thanks
Hello,
I'm developing a Tomcat webserver application that uses Elasticsearch 1.0
(Java API). There is a client-facing desktop application that communicates
with the server, so all the code for Elasticsearch is on that one instance
and it is used by all our clients. With that being said, I am
Hi,
I'm using the Elasticsearch Perl module and need guidance on setting up
mappings.
I'm using the bulk() method to index data. Here is an example of the
structure of the data :
$response = $e->bulk(
    index => 'idx-2014.03.10',
    type  => 'my_type',
    body  => [
        {
I also tried it with
{
  "query": {
    "match_all": {}
  },
  "facets": {
    "t2.stats": {
      "statistical": {
        "field": "t2.sample_field"
      }
    }
  }
}
but that results in the same error message.
Hello,
Here is how the reference describes indexing data for the completion suggester:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-suggesters-completion.html#search-suggesters-completion
But it assumes that there is only one value for the suggest field.
What happens if there
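The question is cut off above, but for the multi-value case the completion field accepts an array of inputs; a sketch with made-up values ("output" and "weight" are the optional fields from the same reference page):

```python
import json

# One suggestion document with several trigger inputs: any of the inputs
# should surface the same suggestion text.
doc = {
    "name": "Elasticsearch",
    "suggest": {
        "input": ["es", "elastic", "elasticsearch"],  # multiple values
        "output": "Elasticsearch",  # what the suggester returns
        "weight": 10,               # optional ranking boost
    },
}
body = json.dumps(doc)
# Index it with e.g.:
#   curl -XPUT 'localhost:9200/myindex/item/1' -d "$body"
```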
Kibana cannot do a histogram of the cumulative value of a field, as
described at https://github.com/elasticsearch/kibana/issues/740.
To overcome that, I created a separate index where I calculate the
total myself and save it to Elasticsearch.
The mapping looks as follows:
curl -XPOST
Hi Dom
First, make sure you're using the new Search::Elasticsearch client
https://metacpan.org/pod/Search::Elasticsearch - we've just renamed it to
avoid namespace clashes with older clients.
Then: to configure the mapping yourself, you need to do it before you index
any data (using the bulk
I have a job that makes heavy use of ES, to the point that it affects the
cluster. Is it possible to:
- add a replica
- force the extra replica to a specific node
- isolate some of the queries to that particular node?
Thanks.
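All three steps can be sketched with stock settings; a hedged illustration where "myindex", "node1", "node2", and "xyz" are placeholder names:

```python
# 1) Add a replica by raising the replica count:
add_replica = {"index": {"number_of_replicas": 2}}

# 2) Constrain allocation with shard-allocation filtering. Note this
#    filters all copies of the index (primaries included), so list every
#    node the index may live on:
pin_nodes = {"index.routing.allocation.include._name": "node1,node2"}

# Both bodies are PUT to the index settings endpoint:
#   curl -XPUT 'localhost:9200/myindex/_settings' -d '...'

# 3) Route the heavy queries to one node with the search preference parameter:
search_url = "http://localhost:9200/myindex/_search?preference=_only_node:xyz"
```

The preference parameter isolates reads per request, so the heavy job can target the extra replica's node while normal traffic stays balanced.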
When I shut down a node that holds a replica while updates are happening to
the rest of the cluster, then restart this node, it seems that the entire
replica is being copied again to that node.
Is there a way to make ES just update that node with the updates that
happened while it was down?
Thanks for the tips. Yes, I am reusing the request builder as stated in the
example in the docs, so this can be the case.
I will try to re-instantiate the request builder and will let you know.
Btw, is there a way to simply bulk index a JSON/XML file, like in Solr?
This is an extremely useful feature isolating
Here's a basic example you can try:
https://gist.github.com/bly2k/9468905
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
Perhaps you can give this a try:
https://github.com/sonian/elasticsearch-jetty
Thank you for your answer.
Renaming would create other problems for me, so I went with putting the
types in different indexes.
--
View this message in context:
http://elasticsearch-users.115913.n3.nabble.com/Fields-with-same-name-mapped-differently-in-different-types-tp4051423p4051447.html
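For the record, "putting the types in different indexes" can be sketched like this (a hedged illustration; index and field names are placeholders):

```python
import json

# The same field name can be a string in one index and an integer in the
# other; unlike two types inside one index, separate indices keep the
# underlying Lucene field definitions apart.
mappings = {
    "idx_t1": {"t1": {"properties": {"sample_field": {"type": "string"}}}},
    "idx_t2": {"t2": {"properties": {"sample_field": {"type": "integer"}}}},
}
bodies = {name: json.dumps({"mappings": m}) for name, m in mappings.items()}
# Create each index with its own mapping:
#   curl -XPUT 'localhost:9200/idx_t1' -d "$body"
```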
Righto - I will try to add some.
-Nick
On Wednesday, 5 March 2014 13:48:58 UTC, Jörg Prante wrote:
Yes, there are no tests yet.
Jörg
On Wed, Mar 5, 2014 at 2:24 PM, mooky nick.mi...@gmail.com wrote:
I am ready to create a pull request - it's actually quite a simple change.
Hi Clinton,
I really appreciate the fast reply. We're now using Search::Elasticsearch.
I'm still having a problem getting mappings set and hope you can help.
What I'm trying to do is turn off the analyzer for certain fields. Here is
what I have :
In Perl :
%mappings = (
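The Perl hash above is cut off, but the shape of a mapping that turns analysis off for a field looks roughly like this (rendered as a Python dict for illustration; the index, type, and field names are placeholders):

```python
import json

# In ES 1.x, a string field with "index": "not_analyzed" is indexed as a
# single exact token, so it is never run through an analyzer.
create_index_body = {
    "mappings": {
        "my_type": {
            "properties": {
                "my_field": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}
request = json.dumps(create_index_body)
# PUT this when creating the index, before any documents are indexed:
#   curl -XPUT 'localhost:9200/my_index' -d "$request"
```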
I stand corrected. Clint is right. ES will try to apply only diffs as much
as possible at the segment level. But if your underlying segments have
diverged significantly since the replica node went down, it is likely that
you'll end up copying a lot more than the diffs (document-wise).
Unfortunately, subqueries are not supported. What you can do is dump the
results of your first query into an index, and then run a
terms_stats facet on that dump to get your final results:
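A sketch of the two steps (index, type, and field names are placeholders; the dump uses the `_bulk` API's newline-delimited action/source format):

```python
import json

def hits_to_bulk(hits, index="query-dump", doc_type="hit"):
    """Turn the _source docs of the first query's hits into the
    newline-delimited action/source line pairs the _bulk API expects."""
    lines = []
    for i, source in enumerate(hits):
        lines.append(json.dumps(
            {"index": {"_index": index, "_type": doc_type, "_id": str(i)}}))
        lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"

# A terms_stats facet over the dumped docs (field names are placeholders):
facet_body = {
    "facets": {
        "by_key": {
            "terms_stats": {"key_field": "category", "value_field": "amount"}
        }
    }
}
# POST the bulk string to /_bulk, then the facet body to /query-dump/_search.
```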
I updated the gist to reflect the answer.
https://gist.github.com/dmregister/9467385
It turns out the index_analyzer and search_analyzer simply do not work
well with numbers. Changing them to keyword did the trick.
On Friday, March 7, 2014 7:09:39 PM UTC-5, David Register wrote:
The query
Thanks for the response, Binh. Is there a way to pipe the response from my
first query directly into a new index without round-tripping the data back
to the server that requested the query?
On Mon, Mar 10, 2014 at 1:09 PM, Binh Ly binhly...@yahoo.com wrote:
Unfortunately, subqueries are not
If you do something like this, you should get the epoch value in
milliseconds. Then you can use that value to initialize whatever object you
want:
ScriptDocValues v = (ScriptDocValues) doc().get(dateField);
if (v != null && !v.isEmpty()) {
    long epoch_ms
Hi all,
As part of our performance exercise, we have been trying to characterize
the insertion performance of ElasticSearch (0.90.7). Here is our setup:
*Nodes:* 3 AWS m1.xlarge (16G)
*Memory:* 8G Heap on each node.
*Indices:* 5 aliases, 3 indexes per alias, 2 shards per index. (30 shards),
1
There is a special ES indexing data model, as you surely already have
noted. You can only index a subset of valid JSON into ES. For example, each
ES JSON doc must be an object. Arrays must be single-valued, unnested. So,
arbitrary source JSON must be transformed, and due to the field/value
Thank you very much for your answer.
I think the elasticsearch-jetty plugin handles only HTTP requests (REST API
requests), am I right?
I have already disabled HTTP with the *http.enabled: false* option, and I
want to secure communication among the nodes.
For access to the cluster I use the Java API. For me it's
I have a question up on SO here:
For some reason, I can't sort or search in a multi_field in one particular
index:
http://stackoverflow.com/questions/22305891/elasticsearch-multi-field-type-search-and-sort-issue
Accidentally hit submit early.
I have a very similar mapping where multi_field is working fine. All the
mapping information is on the SO thread as well as explains and other
results.
Any help that any of you can provide, I would greatly appreciate.
On Monday, March 10, 2014 5:07:03 PM
Check out the new multilingual search plugin for Elasticsearch:
http://www.basistech.com/elasticsearch/
(tokenization, lemmatization, decompounding, Noun Phrase Extraction, POS
tagging, along with entity extraction and entity resolution in Asian,
European and Middle Eastern languages.)
Is anyone using memcached with ES as a plugin? I am wondering if it
provides any benefits over the in-heap caching.
I'm running Logstash with Redis, Kibana, and Elasticsearch.
All is working well, but when I look in the configured location for data,
as set in the elasticsearch.yml file, I see nothing. The directory is empty.
Yet I can see my data in Kibana and view it in Redis.
So where is Elasticsearch
I am using the Elasticsearch for Hadoop library described in
http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/pig.html
to read data in Pig for Hadoop.
There is a field household_ages defined as short in my index type. But
it is actually an array, since this field has multiple
Hi Everyone,
We're having another user group meeting next week on Monday in Sydney,
Australia. If you're interested in coming along then head to our Meetup
page and RSVP -
http://www.meetup.com/Elasticsearch-Sydney-Meetup/events/165807792/
This time around we have two talks from Elasticsearch
I'm new to Elasticsearch and recently started playing around with version
1.0.1. Just saw your question and noticed that the data files are at
INSTALL PATH/data/ES CLUSTER NAME/nodes/0/indices/INDEX
NAME/0/index/segment_1
Note that Lucene data is stored in segments.
On Mon, Mar 10, 2014 at 4:47
Hi Igor,
It seems that the S3 bucket had "PUT only" permissions.
Regards,
Swaroop
10.03.2014, 17:40, "Igor Motov" imo...@gmail.com:
That's strange. Wrong S3 permissions should have caused it to fail
immediately. Could you provide any more details about the permissions, so I
can reproduce it?