Hi All,
I'd like to know if Elasticsearch has any undo-delete operation,
so that a document deleted using the delete API can be recovered until it is
physically removed from disk by the optimize API flag only_expunge_deletes.
If it is not there, I'd like to know what the challenges would be in developing it,
as we already have
You are on the right track, and you have already found the answer to your
question: examine your queries. They seem to be cached and are eating your heap.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/filter-caching.html
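If cached filters are indeed consuming the heap, a minimal sketch of how one might clear or cap the filter cache on the Elasticsearch 1.x releases this thread dates to (host, port, and the 5% figure are assumptions):

```shell
# Clear only the filter cache across all indices (1.x clear-cache API):
curl -XPOST 'localhost:9200/_cache/clear?filter=true'

# Alternatively, cap the filter cache in config/elasticsearch.yml
# (the default is 10% of heap, as the linked guide page describes):
# indices.cache.filter.size: 5%
```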
Jörg
On Fri, Jul 18, 2014 at 5:09 PM, Ned Campion
Yes, I think this is somehow related to Matt's Join Filter
https://github.com/elasticsearch/elasticsearch/pull/3278
Jörg
On Sat, Jul 19, 2014 at 4:24 AM, Don Clore cloredo...@gmail.com wrote:
I am pretty sure this is not supported, but it'd be great to get explicit
confirmation/denial.
I'm not sure how you could find out whether Linux and Windows are certified, or if
there are any certifications at all.
The platform Elasticsearch runs on is server-side Java, so that is probably
what your question comes down to.
For my part, I run Elasticsearch on Java 7 and Java 8, on Red Hat Enterprise
Linux 6, Mac OS X
Hello Pulkit,
The best option I can suggest would be to take a snapshot of the index before
the delete operation,
and later retrieve the document that you need.
Otherwise, you might need to look at the Lucene level.
I believe Lucene has features to tag a version and restore to it later.
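A rough sketch of that snapshot-before-delete approach using the Elasticsearch 1.x snapshot/restore API (the repository name, filesystem path, and index name are all hypothetical):

```shell
# Register a filesystem snapshot repository:
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}'

# Snapshot the index before running any deletes:
curl -XPUT 'localhost:9200/_snapshot/my_backup/before_delete?wait_for_completion=true' -d '{
  "indices": "my_index"
}'

# Later, restore the snapshot to recover the deleted document:
curl -XPOST 'localhost:9200/_snapshot/my_backup/before_delete/_restore'
```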
Thanks
http://stackoverflow.com/questions/24841835/possibly-to-use-current-date-as-synonim#
Is it possible to use the current date as a synonym? For example, for the query
"latest/breaking news" I want to get the most recent news in search results. How can I do that?
--
You received this message because you are subscribed to the Google
I'll try it as soon as I can!
thanks,
Alfredo
:-)
On Friday, July 18, 2014 at 10:08:14 AM UTC+2, Jörg Prante wrote:
Hi,
I released a Log4j2 Elasticsearch appender
https://github.com/jprante/log4j2-elasticsearch
in the hope it is useful.
Best,
Jörg
Your filter cache is only taking up 3GB of the heap, which fits with the
default limit of 10% of heap space. So the filter cache is not at fault
here.
I would look at the two usual suspects:
* field data - how much space is this consuming? Try:
curl
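The truncated command above is presumably a node-stats call; one way to inspect fielddata heap usage on the 1.x node stats API (host and port are assumptions):

```shell
# Report per-node fielddata memory usage, broken down by field:
curl 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'
```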
Hi,
here is a new release of the JDBC river/feeder plugin for Elasticsearch
https://github.com/jprante/elasticsearch-river-jdbc/releases/tag/1.2.2.0
Highlights:
- update to Elasticsearch 1.2.2
- more reliable bulk indexing
- properly handle SQL insert/update/select statement types
- dropping empty
Is there any easy way to get just the relevant error messages out of an
exception?
I know there are multiple shards, and each one can potentially return a
different error message, but 99% of the time the error is because I've made
a mistake.
This means I get the same message 50+ times in a