Alright, we'll try upgrading. Thanks :)
Meanwhile, any advice on how to fix an inconsistency once it is found? Is
there an API to forcefully sync nodes, or at least reindex from a
specific node?
On Tuesday, November 25, 2014 8:44:44 PM UTC+2, Itamar Syn-Hershko wrote:
I suggest you upgrade
If this is replicas only, you should be able to set replica count to 0 and
then after a while back to 2 again
If this is sharded, then no, you'll have to reindex from scratch.
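For reference, the replica toggle described above can be sketched against the index settings API; the index name my_index is a placeholder:

```
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'
# wait for the cluster to settle, then restore:
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
  "index": { "number_of_replicas": 2 }
}'
```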
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
see if this helps:
https://github.com/elasticsearch/cookbook-elasticsearch/pull/213
ryanvanderpol (https://github.com/ryanvanderpol) commented on 17 Jul:
https://github.com/elasticsearch/cookbook-elasticsearch/pull/213#issuecomment-49336142
Just as a follow up, the issue I was running in to
I am increasing the number of open file descriptors for the root user by
adding the following entry in /etc/sysctl.conf:
fs.file-max = 751864
However, when I restart Elasticsearch as the root user, it starts with a
max_file_descriptors value of 4096.
Why the above
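A likely piece of the puzzle, for reference: fs.file-max sets the system-wide ceiling, while the per-process limit a JVM sees usually comes from the user limits instead. A hedged example of the corresponding /etc/security/limits.conf entries (values are illustrative):

```
# /etc/security/limits.conf - per-user open file limits (illustrative values)
root  soft  nofile  65535
root  hard  nofile  65535
```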
Can I just say this project looks really excellent. Thank you for doing it
and sharing it Henrik!
Best,
Emrul
On Monday, June 23, 2014 7:46:08 PM UTC+1, Hendrik Dev wrote:
The current master branch does not work with 1.2.1 out of the box but it
should be easy to fix this.
Just clone the
Hello Folks.
How are you?
I installed a Syslog agent on a Windows server, and some of the directories on
the Windows server are in Persian. On the Linux box I installed Logstash, Syslog-ng and
Kibana, but when my Linux box receives logs from the Windows server the names show as
. How can I solve it?
Cheers.
Hi, I am trying to highlight the comment field, but it is highlighting the
content of both my comment field and the appstore_name field. This is my
query:
curl -XGET 'localhost:9200/gold.reviews/_search?pretty' -d '{
  "query": {
    "bool": {
      "must": [{ "match": { "appstore_name": { "query": "apple app
Thanks! I'll check out the link you sent and see if it'll help...
We're running version 1.0.0:
"version": {
  "number": "1.0.0",
  "build_hash": "a46900e9c72c0a623d71b54016357d5f94c8ea32",
  "build_timestamp": "2014-02-12T16:18:34Z",
  "build_snapshot": false,
  "lucene_version": "4.6"
}
On
Thanks! I'll check out the link and let you know if it helped.
We are running version 1.0.0
On Tuesday, November 25, 2014 6:41:36 PM UTC+2, Itamar Syn-Hershko wrote:
minimum_master_nodes still doesn't protect you from all possible failure
scenarios, see
So far as I understood it, you need both columns in your database table.
Otherwise the river won't be able to do the checks. The river compares the
updated_at date with its own date. If that date is equal to or later than the
river's own, the river tries to update/insert your record to
When the elasticsearch instance restarted, how did you check? Did you check
using this command: cat /proc/<elasticsearch instance pid>/limits ?
Jason
On Wed, Nov 26, 2014 at 6:09 PM, Vijay Tiwary vijaykr.tiw...@gmail.com
wrote:
I am increasing the number of open files descriptors for the root
Hi all,
I was wondering if it is possible to save a scripted metric aggregation in
Elasticsearch and to use it in Kibana 4 as a metric in a Data Table.
Thanks in advance!
Anna
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe
Highlighting extracts terms from the query, and your query contains the
words "apple app store". You can fix this by providing a highlight_query or
by setting another setting whose name I've forgotten. I believe it is
require_field_match or something.
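If require_field_match is indeed the right setting (an assumption here), a sketch of how it would sit in the highlight section, reusing the comment field from the question:

```json
{
  "query": { "match": { "comment": "apple app store" } },
  "highlight": {
    "require_field_match": true,
    "fields": { "comment": {} }
  }
}
```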
On Nov 26, 2014 6:27 AM, pavan.530530
Hi,
I was using Marvel in my test environment. As we have an authentication
plugin on the REST interface, I noticed an error like:
marvel.agent.exporter ... failed to upload index template, stopping export
I'm pretty sure it's blocked by the auth plugin, but the question is why
would marvel
Please help me with the bool filter in my query. I need the products which
contain markdowned SKUs that have stock greater than 0. I wrote the query,
but it returns only the markdowned products (including those whose SKUs
have stock = 0). Here's the query.
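One common shape for "both conditions must hold on the same SKU" is a nested filter, assuming skus is mapped as a nested type; the field names below (skus.markdown, skus.stock) are guesses based on the description, not taken from the actual mapping:

```json
{
  "query": {
    "filtered": {
      "filter": {
        "nested": {
          "path": "skus",
          "filter": {
            "bool": {
              "must": [
                { "term": { "skus.markdown": true } },
                { "range": { "skus.stock": { "gt": 0 } } }
              ]
            }
          }
        }
      }
    }
  }
}
```

Without a nested mapping, a plain bool filter matches documents where one SKU is markdowned and a different SKU has stock, which sounds like exactly the symptom described.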
Hi Anna,
can you please elaborate a bit ?
Hendrik
On Wednesday, November 26, 2014 2:19:14 PM UTC+1, Anna wrote:
Hi all,
I was wondering if it is possible to save a scripted metric aggregation in
Elasticsearch and to use it in Kibana 4 as a metric in a Data Table.
Thanks in advance!
Everything looks correct to me, though it's not a complete script so I cannot
reproduce it.
It might be an issue with the tool you are using to send the request.
Because you are doing a GET _search (which is correct), some browsers/tools
don’t support GET with a body.
So you end up running a
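For what it's worth, _search also accepts POST with the same body, which sidesteps clients that silently drop GET bodies; a sketch (index name is a placeholder):

```
curl -XPOST 'localhost:9200/my_index/_search' -d '{
  "query": { "match_all": {} }
}'
```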
Hi guys,
We're seeing exactly the same error. This is the error we see:
org.elasticsearch.search.SearchParseException: [.marvel-2014.11.26][0]:
from[-1],size[1]: Parse Failure [Failed to parse source [{size:1,sort:{
@timestamp:{order:desc}}}]]
Full trace
here:
hi all,
Which is the most common main DB to work with ES, so that it is easy to
sync between the main DB and ES?
Preferably it should accept data in the same structure.
thanks!
I am using Elasticsearch 1.3.5.
I have fields like filename, foldername and content. In the query I am trying
to highlight matched content.
If the keyword only matches in content, then the keyword is highlighted and
becomes bold. However, if the keyword matches in (filename or foldername)
and
Can you post an example document? Can you post your mapping? Mapping is
important because depending on how your mapping is set up you'll get a
totally different highlighter implementation.
Nik
On Wed, Nov 26, 2014 at 11:26 AM, Deepak Mehta hopeligh...@gmail.com
wrote:
I am using Elastic
Hi,
Are there other Java clients that talk to the ES HTTP API that people like
to use, other than Jest?
Thanks,
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
There isn't such a thing. There are rivers that try to sync other sources
with Elasticsearch but I'm not a big fan. I'd let your application keep
the index up to date.
Nik
On Wed, Nov 26, 2014 at 11:26 AM, Lior Goldemberg lio...@gmail.com wrote:
hi all,
which is the most common main db,
Hi Hendrik,
thanks for your interest.
I would like to approach the following use case:
In Kibana 4, I would like to create data table containing a column with a
metric. Instead of a predefined metric (e.g., min, max, or average), I
would like to use a custom metric (i.e, a scripted metric)
Yes. Using the same command, the max open files count is 4096.
On Wednesday, November 26, 2014 6:30:02 PM UTC+5:30, Jason Wee wrote:
When the elasticsearch instance restarted, how did you check? Did you check
using this command: cat /proc/<elasticsearch instance pid>/limits ?
Jason
On Wed,
I'm trying to upgrade from ES 1.1.1 to ES 1.4.0. I need to update my .MVEL
scripts to Groovy, so in my Java code I did this:
updateRequestBuilder.setScript(scriptValue,
ScriptService.ScriptType.INLINE);
updateRequestBuilder.setScriptLang("groovy");
My unit tests are failing, which makes me think
I'm storing documents as shown below:
"keywords": [
  {
    "name": "#nfl",
    "type": "general hashtag word",
    "postag": "nnp"
  },
  {
    "name": "#bill",
    "type": "general
I've read the recommendations for ES_HEAP_SIZE
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html
which
basically state to set -Xms and -Xmx to 50% physical RAM.
It says the rest should be left for Lucene to use (OS filesystem caching).
But I'm confused on how
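By way of illustration only: on a machine with, say, 16 GB of RAM, the 50% guideline would mean a heap setting like the following (the amount and the file location are assumptions; /etc/default/elasticsearch is one common spot for Debian-style installs):

```
# give the JVM half the box, leave the rest to the OS page cache
ES_HEAP_SIZE=8g
```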
Lucene runs in the same JVM as Elasticsearch but (by default) it mmaps
files and then iterates over their content intelligently. That means most
of its actual storage is "off heap" (it's a Java buzz-phrase). Anyway,
Linux will serve reads from mmapped files from its page cache. That is why
you want
I see, but I'm running on Windows. Is the behavior similar, or does this
not exist on Windows?
On Wednesday, November 26, 2014 1:01:02 PM UTC-6, Nikolas Everett wrote:
Lucene runs in the same JVM as Elasticsearch but (by default) it mmaps
files and then iterates over their content
I imagine all operating systems have some kind of disk caching. I just
happen to be used to linux.
On Wed, Nov 26, 2014 at 2:42 PM, BradVido bradyvido...@gmail.com wrote:
I see, but I'm running on Windows. Is the behavior similar, or does this
not exist on Windows?
On Wednesday, November
Is there any notion of triggering a re-election of the master node?
I'm currently running 1.2.4, and I have an instance that is scheduled for
retirement (my favorite!) and it just so happens that it's my master node.
What can I do to avoid the dreaded RED state? Is there some mechanism
that
Indeed the behaviour is the same on Windows and Linux: memory that is not
used by processes is used by the operating system in order to cache the
hottest parts of the file system. The reason why the docs say that the rest
should be left to Lucene is that most disk accesses that elasticsearch
On Wed, Nov 26, 2014 at 3:47 PM, Erik theRed j.e.redd...@gmail.com wrote:
Is there any notion of triggering a re-election of the master node?
I'm currently running 1.2.4, and I have an instance that is scheduled for
retirement (my favorite!) and it just so happens that it's my master node.
What
Actually I'm looking for a way to back up my ES data,
since it can't be used as a primary DB at this stage.
So I thought to insert my data into another DB, just for backup.
Thanks, Nik -
There's no data on the node so it sounds like master reelection should fail
over fairly quickly.
On Wednesday, November 26, 2014 2:58:43 PM UTC-6, Nikolas Everett wrote:
On Wed, Nov 26, 2014 at 3:47 PM, Erik theRed j.e.r...@gmail.com
javascript: wrote:
Is there any
How can I search documents in one index by filtering on two fields?
E.g. I want to search on the username and password fields of the
index; if they match, I need to fetch all the documents of that match in a
specific index and type.
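A minimal sketch of that kind of two-field match using a bool query ("alice" and "secret" are placeholder values):

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "username": "alice" } },
        { "match": { "password": "secret" } }
      ]
    }
  }
}
```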
Where does the original content come from?
I mean that if you already have a source of truth, you can use it in case of a
major failure.
You can have a look at Couchbase. They offer a plugin for elasticsearch that
emulates a couchbase cluster, and then they replicate data from the primary
cluster to
I had a similar issue.
I managed to get the parameters from the POST by doing:
@Override
public void handleRequest(final RestRequest request, final RestChannel
channel) {
    Map<String, String> params = new HashMap<String, String>();
I have a folder that holds all the logs that I've set up for logstash-forwarder
to ship, as below:
"paths": [
  "/var/log/**/*.log"
]
All my logs are organized in nested folders within the /var/log folder, such as:
/var/log/2014/01/01/some.log
/var/log/2014/01/02/other.log
Seems like
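If the forwarder's glob turns out not to support **, one workaround is to spell out the directory depth explicitly; a sketch (the pattern is an assumption, matching the year/month/day layout above):

```
"paths": [
  "/var/log/*/*/*/*.log"
]
```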
Thanks Kelsey, that could be useful. I managed to get my UI framework
(ExtJS) to play better with POST, so I am not dependent on having to use GET
any more.
On Wed Nov 26 2014 at 5:54:43 PM Kelsey Hamer kelsey.ha...@gmail.com
wrote:
I had a similar issue.
I managed to get the parameters from the
Hi Folks,
Two new versions in time for the weekend (a long one for the US!) - 1.4.1
and 1.3.6
More details are available at
http://www.elasticsearch.org/blog/elasticsearch-1-4-1-released/
David,
Is there a way to reduce the dependency on a running Elasticsearch server when
we deploy the code to other environments?
Thanks,
Vijaya
On Thu, Nov 20, 2014 at 1:19 PM, David Pilato da...@pilato.fr wrote:
Sorry. I don't understand the first question. May be you could post links
to the
Document looks like this:
{
  "_index": "mdb-pod1-1",
  "_type": "file",
  "_id": "81f87ac0-8362-43f0-bdf2-485d109bed18",
  "_score": 1,
  "_source": {
    "fileName": "documents.txt",
    "fileExtension": "txt",
    "pid": 1,
    "folderName": "Documents",
    "trash": false,
    "content": "This
Hi there,
I've got some data sources being parsed and written into EL from
logstash, and it would be great to report on additional metadata
related to the record stored in EL. e.g. for a network flow record,
reporting information like the BGP AS (Autonomous System) name related
to the source and
That'd be a custom job as KB just pulls and displays data from ES, assuming
it has everything in the doc.
On 27 November 2014 at 17:04, Chris Bennett ch...@ceegeebee.com wrote:
Hi there,
I've got some data sources being parsed and written into EL from
logstash, and it would be great to
What if you just change the Elasticsearch version in your setup
(attributes, role overrides, etc)? There is no patch/commit needed to
install 1.x versions with the cookbook. (There are just special cases where
overriding attributes doesn't work as expected.)
Is there anything printed to the
Actually I want to aggregate by the name of the keywords field. However, I only
want those keywords which are of type "general hashtag word". I am using
this query:
{
  "from": 0,
  "size": 0,
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "bool": {
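A hedged sketch of a complete request along those lines: filter on keywords.type, then a terms aggregation on keywords.name. One caveat: if keywords is a plain object field rather than nested, the filter cannot pair a name with the type of the same array element; exact pairing needs a nested mapping and a nested aggregation. Field names below are taken from the document shown earlier in the thread; the rest is an assumption.

```json
{
  "from": 0,
  "size": 0,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "term": { "keywords.type": "general hashtag word" }
      }
    }
  },
  "aggs": {
    "keyword_names": {
      "terms": { "field": "keywords.name" }
    }
  }
}
```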
That'd be a custom job as KB just pulls and displays data from ES,
assuming it has everything in the doc.
Thanks Mark - was suspecting that was the case.
The only problem I've found with my playing with ELK thus far is the
sheer explosion of ideas about what I can do to visualise and report on
I have set up my ELK stack on a single server and tested it on a very small
setup to get hands-on with ELK.
I want to use ELK for my system logs analysis.
Now, I have been reading about ES that it has no security. Also read
something like this:
DO NOT have ES publicly accessible. That's the
Hello,
I am storing strings containing special characters in a not_analyzed field.
In order to search with query_string on this field, I am escaping the
special characters. I tried two approaches, but both seem to work only
partially:
QueryParser.escape(value)
and
String regex =
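For comparison, hand-rolling the escape is also possible; a small Python sketch (the reserved-character list follows Lucene's query syntax, and escape_query is a hypothetical helper, not an Elasticsearch or Lucene API):

```python
# Characters the Lucene query syntax treats as special
RESERVED = set('+-!(){}[]^"~*?:\\/&|')

def escape_query(value):
    """Backslash-escape every reserved character in value."""
    return ''.join('\\' + ch if ch in RESERVED else ch for ch in value)

print(escape_query('C++ (2014)'))  # C\+\+ \(2014\)
```

Note that escaping alone does not change how a not_analyzed field is matched; the query terms must still equal the stored value exactly.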
If you put any data on a system that is easily accessible to the internet
then there is a probability that people can get to it.
If you are using AWS or similar, then make sure you use something like
iptables and/or a reverse proxy with authentication.
On 27 November 2014 at 16:39, Siddharth
Hi All,
I am using Hive 0.13.1 and trying to create an external table so data can
be loaded from Hive to Elasticsearch. However, I keep getting the following
error. I have tried the following jars, but get the same error. I will really
appreciate any pointers.
Thanks
- Atul
property
So if internet access is off, then there is no security concern?
If it's on, then what measures should be taken?
On Thursday, 27 November 2014 11:37:32 UTC+5:30, Mark Walkom wrote:
If you put any data on a system that is easily accessible to the internet
then there is a probability that people can