Sorry for the cross post, but it applies to both lists. Given the
increasing use of ELK, I am hoping we can kickstart a community-driven repo
for Kibana dashboards.
To that end I've set up a repository on GitHub at
https://github.com/markwalkom/kibana-dashboards and I'm asking anyone with
interest in
Hi,
Here is the school entity mapping:
school:
    mappings:
        name: ~
        rankings:
            type: "nested"
            properties:
                id: ~
I have a problem with a query for finding all schools that contain a rankings element with
length > 0. I tried using a filter script
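One common way to express "has at least one nested rankings entry" in the 1.x query DSL is a nested filter wrapping an exists filter. This is only a sketch: the index name "school", type name, and endpoint are assumptions, so adjust them to your setup.

```shell
# Sketch: match documents with at least one nested "rankings" element.
# The nested filter runs against the nested docs; exists on rankings.id
# means "there is at least one ranking", i.e. length > 0.
curl -s -XPOST "localhost:9200/school/_search" -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "nested": {
          "path": "rankings",
          "filter": {
            "exists": { "field": "rankings.id" }
          }
        }
      }
    }
  }
}'
```

This avoids a script filter entirely, which is usually faster than scripting over the nested array.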
Thanks so much for the feedback, Ivan.
One more question: We have two different forms of rotated files (on *IX
systems; no Windows servers):
1. Standard log4j rotation: The XXX.log file is renamed to XXX-.log
and a new XXX.log file is created. The name doesn't change, but the inode
changes.
2.
It sounds like you are running into GC problems, which is inevitable when
your cluster is at capacity. A few things:
You're running Java with a >32GB heap, which means your pointers are no
longer compressed, and this can/will adversely impact GC.
What ES version are you on, what Java version an
ES has compressed by default since 0.90.6; it doesn't de-dupe, though.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 23 June 2014 04:22, Oleksandr Kunytsia wrote:
> Hi!
>
> I am starting using logstash + elasticsearch
Not that I know of. But there is a known but very rare bug (fixed in
0.90.8) which can cause data loss upon a node restart:
https://github.com/elasticsearch/elasticsearch/issues/4502
Maybe you ran into that?
On Sun, Jun 22, 2014 at 10:18 PM, Rohit Jaiswal
wrote:
> Yes, it did when we restarted
Yes, it did when we restarted the node while trying to reproduce this
problem. We were also able to access the data using the scan search API
after restarting the node.
However, we have seen quite a few of the bulk update errors in our 20-node
production cluster and have suffered data loss on othe
If you restart the node it's on, it doesn't come back?
On Sun, Jun 22, 2014 at 10:01 PM, Rohit Jaiswal
wrote:
> Hi Boaz,
>Thanks for replying. After we get this error, the cluster
> health changes to Yellow with a replica shard in Unassigned state. Is there
> a specific way to r
Hi Boaz,
Thanks for replying. After we get this error, the cluster
health changes to yellow, with a replica shard in the Unassigned state. Is there
a specific way to recover that shard? We don't want to lose other data on
that shard.
Thanks,
Rohit
On Sun, Jun 22, 2014 at 12:50 PM, Boaz
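For an unassigned replica, one option on these ES versions is to assign it by hand with the cluster reroute API. This is a sketch only: the index name, shard number, and node name below are placeholders, so look them up first (e.g. via /_cat/shards) before issuing the command.

```shell
# Find the unassigned shard first:
curl -s "localhost:9200/_cat/shards" | grep UNASSIGNED

# Then ask ES to allocate it on a specific node. allow_primary stays false:
# forcing a primary allocation can discard existing data, a replica cannot.
curl -XPOST "localhost:9200/_cluster/reroute" -d '{
  "commands": [
    {
      "allocate": {
        "index": "my_index",
        "shard": 0,
        "node": "node-2",
        "allow_primary": false
      }
    }
  ]
}'
```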
Hmm, yeah, I can now see that in the code. Another option is to use the
allocation filtering API to move the shard off the node and then remove the
rule once it's done:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#index-modules-allocation
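Concretely, that approach looks like the following. The index and node names are illustrative; the `index.routing.allocation.exclude._name` setting is the documented allocation-filtering knob.

```shell
# Exclude the node by name; ES relocates that index's shards off it.
curl -XPUT "localhost:9200/my_index/_settings" -d '{
  "index.routing.allocation.exclude._name": "node-1"
}'

# Wait for relocation to finish (e.g. watch /_cat/shards), then clear the rule:
curl -XPUT "localhost:9200/my_index/_settings" -d '{
  "index.routing.allocation.exclude._name": ""
}'
```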
Hello,
I'm not very familiar with the way Elasticsearch and Lucene work, but I
know that there isn't really such a thing as updating a document. "Update"
requests are carried out as a delete plus an insert. So I assume it is "not as
efficient", say, as an SQL update. So I've been wondering if it
Hi Rohit,
This error means the update fails either way, but it breaks the entire request. You
should indeed set the retry_on_conflict option to make the update request
succeed. PS - you should really upgrade; a lot has happened and been fixed
since 0.90.2 ...
Cheers,
Boaz
On Monday, June 16, 2014 10
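For reference, retry_on_conflict is a query-string parameter on the update API: on a version conflict, ES re-fetches the document and re-applies the change up to that many times instead of failing. The index, type, and script below are illustrative.

```shell
# Sketch: a scripted counter update that retries up to 3 times if another
# writer bumps the document's version between our read and our write.
curl -XPOST "localhost:9200/my_index/my_type/1/_update?retry_on_conflict=3" -d '{
  "script": "ctx._source.counter += 1"
}'
```

This is safe for commutative changes like increments; if the update must be based on an exact prior state, handle the conflict explicitly instead.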
Hi guys,
I've got an ES cluster of two data nodes and one non-data node (serving the
Kibana website). It receives approx. 40 million log lines a day and normally
has no issue with this.
If I stop reading in for a short time and start again, the queue is
emptied about 50x faster than it is filled.
Hi!
I am starting to use Logstash + Elasticsearch. How can I verify that the data
in ES really is compressed?
I am confused because the index files have quite a lot of lines that are
repeated in the logs, and thus could be compressed.
/ Oleksandr
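One rough way to check is to compare the on-disk store size reported by ES against the raw volume of logs you fed in. The index name below is a placeholder for whatever Logstash created (typically logstash-YYYY.MM.DD).

```shell
# Store size for one index (look at "store": {"size_in_bytes": ...}):
curl -s "localhost:9200/logstash-2014.06.22/_stats/store?pretty"

# Or all indices at a glance, with human-readable sizes:
curl -s "localhost:9200/_cat/indices?v"
```

Note the reported size includes Lucene's index structures (postings, stored fields, etc.), not just the compressed source, so it will not match a plain gzip of the log file.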
I am indexing the following document as follows:
{
  "_index": "transactions",
  "_type": "transaction",
  "_id": "1",
  "_score": 1,
  "_source": {
    "title": "another backup",
    "action":
Two things to add to make the Elasticsearch/Solr comparison fairer.
In the ES mapping, you did not disable the _all field.
If the _all field is enabled, all tokens will be indexed twice: once for
the field and once for _all.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/map
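Disabling _all is a one-line mapping change at index-creation time. The index and type names here are placeholders for whatever the benchmark uses.

```shell
# Sketch: create the index with _all switched off so each token is
# indexed only once, for its own field.
curl -XPUT "localhost:9200/benchmark" -d '{
  "mappings": {
    "doc": {
      "_all": { "enabled": false }
    }
  }
}'
```

With _all disabled, queries that relied on the default field must name explicit fields (or use query-string `fields`).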
I have the following ES mapping defined:
curl "localhost:9200/codeindexroute/filecontracts/_mapping"
{
  "codeindexroute": {
    "mappings": {
      "filecontracts": {
        "_routing": {
          "required": true
        },
        "_source": {
          "compress": false,
          "includes": ["filePath", "parserType"],
          "excludes": ["content"]
        },
        "properties": {
          "conten
Hi,
I'm trying to index documents in Elasticsearch. I'm using Elasticsearch
1.2.1, from the Java API.
My cluster is remote: 3 nodes on 3 servers (one node per server),
optimised for indexing (one shard per node, no replication).
For this, I read a CSV file, from which I generate mapping fi
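A setup like that (one shard per node, no replicas) is expressed in the index settings at creation time. This is a sketch; the index name is a placeholder, and replicas can be re-enabled after the bulk load finishes.

```shell
# Sketch: 3 shards across 3 nodes, no replicas while bulk indexing.
curl -XPUT "localhost:9200/my_index" -d '{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0
  }
}'

# After the load, turn replication back on:
curl -XPUT "localhost:9200/my_index/_settings" -d '{
  "number_of_replicas": 1
}'
```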