I have been getting the error 'xyz' is not a function, got undefined, where xyz
is the name of my panel. I have changed the names of the controllers and modules
to xyz.
Is it possible to write a panel from scratch? How is the rendering done?
Where are the controllers defined?
On Friday, February 14,
I've mapped several fields as byte and float, but when I retrieve the
search results (using the
PHP library), those fields are returned as strings in the JSON. Is that
correct?
If you have the _source field enabled, you should just get back the original
JSON you put in, regardless of mapping.
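As an illustration (a hedged sketch; the index and field names are invented): if you index a numeric value as a JSON string, GET returns the _source byte-for-byte as you sent it, even though the field is mapped as byte:

```
curl -XPUT 'localhost:9200/test/doc/1' -d '{"weight": "42"}'

# The stored _source comes back verbatim -- "42" stays a JSON string here,
# regardless of the byte mapping on the field:
curl -XGET 'localhost:9200/test/doc/1'
```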
It seems that some of the systems are going to be generating more logs than
we previously thought (approx 250GB a day) so 14TB won't last too long. So
we are looking into new nodes.
I think we currently have enough compute power (won't know until this goes
live) so the current nodes can be
The default analyzer is standard. If I change it to keyword I can get regex
working. But I want both to work simultaneously.
For example, let's say I push this event to Elasticsearch via Logstash: this is
my new string.
In the Kibana search,
if I look for message:string, it should return me this is my
No it's not possible.
On 6 January 2015 at 18:45, phani.nadimi...@goktree.com wrote:
Hi All,
Can we maintain a common data repository (data folder) for all the data
nodes in a cluster?
Can we maintain a common data folder for dedicated data nodes? Will
this be possible (common
Hi,
I am querying Elasticsearch with multiple parallel requests using a single
TransportClient instance in my application.
I got the exception below during parallel execution. How can I overcome the
issue?
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException:
rejected execution
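As a hedged sketch (pool names and defaults vary across ES versions), that rejection means a thread pool's queue filled up. In 1.x the queues can be enlarged in elasticsearch.yml, though throttling the client side is usually the sounder fix:

```yaml
# elasticsearch.yml -- illustrative values, not recommendations
threadpool.search.queue_size: 2000
threadpool.bulk.queue_size: 500
```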
It might be related then to the underlying JDBC driver. Try to change the driver or see if there are any parameters that
you could set.
Some drivers are buggy in that they prevent the JVM from shutting down correctly as they keep hanging on to the
underlying connections.
You could try using a
Hi,
We are using Google Compute Engine to deploy our applications (running on
top of CentOS Linux).
Is there any written reference on the best practices to secure our
ElasticSearch instance running on the cloud?
Best regards
Hi Utkarsh
I want to backup old data from ElasticSearch, options are Cassandra and
Hadoop. I want to know which one is better in terms of integration,
scalability and performance.
For Cassandra, do we only need to install a plugin, or are there other pieces
of code that we may need to write?
On
I'm new to Elasticsearch. I installed elasticsearch-1.4.2 and the
mapper-attachments plugin 2.4.1.
I'm referring to this site
https://github.com/elasticsearch/elasticsearch-mapper-attachments but there
is an error like 'MapperParsingException - No handler for type [attachment]
Hard to say but if you are running this in a unit test for example, it’s most
likely a refresh issue.
You need to refresh your index before running the first search.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
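For reference, an explicit refresh before the first search looks like this (index name is hypothetical):

```
curl -XPOST 'localhost:9200/myindex/_refresh'
```

In Java test setups the equivalent is usually client.admin().indices().prepareRefresh().execute().actionGet(), if I remember the 1.x client API correctly.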
Hello,
I am new to ElasticSearch and I was building a simple service that would
store some documents. I used a local node like :
ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
settings.put("node.name", "ligatus-dsp-aggregation-embedded");
settings.put("path.data",
There are two ways to perform regex matching with Elasticsearch and both
require multi-fields
http://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/mapping-multi-field-type.html
.
The first way is to create a not_analyzed subfield like on the link above
and query it like
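A minimal sketch of such a multi-field mapping in the 0.90-style syntax (field names are invented):

```
{
  "message": {
    "type": "multi_field",
    "fields": {
      "message": { "type": "string" },
      "raw":     { "type": "string", "index": "not_analyzed" }
    }
  }
}
```

The analyzed message field keeps normal full-text search working, while queries that need exact or regex-style matching go against message.raw.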
did you restart elasticsearch?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
https://twitter.com/scrutmydocs
Le 6 janv. 2015 à 14:15, Shashi
Well ES is bound to 0.0.0.0. Each node *gets* traffic from other nodes, it
just can't communicate with its own public IP.
My guess is that on normal systems where
NetworkInterface.getNetworkInterfaces returns 10.255.207.123, Elasticsearch
would not even try to connect to itself when going
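If that is what's happening, explicitly setting the publish address in elasticsearch.yml might help (a sketch; the IP is taken from the example above):

```yaml
network.bind_host: 0.0.0.0
network.publish_host: 10.255.207.123
```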
Hey Waldo,
I had a look over some of the code we have for Pelias
https://github.com/pelias/pelias and I managed to get it to work, the
trick being that you need to index the business location as a 'geo-shape'
'point' type:
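A minimal sketch of that mapping and a matching document (index and field names are my assumptions, not Pelias's actual schema):

```
# Mapping: the field is a geo_shape
{ "properties": { "location": { "type": "geo_shape" } } }

# Document: the shape itself is a GeoJSON point, [lon, lat]
{ "location": { "type": "point", "coordinates": [ -73.99, 40.73 ] } }
```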
As per the documentation, the cat API will display output similar to
curl 192.168.56.10:9200/_cat/nodes
SP4H 4727 192.168.56.30 9300 1.4.2 1.8.0_25 72.1gb 35.4 93.9mb 79 239.1mb 0.45 3.4h d m Boneyard
_uhJ 5134 192.168.56.10 9300 1.4.2 1.8.0_25 72.1gb 33.3 93.9mb 85 239.1mb 0.06 3.4h d * Athena
What do you expect to see?
The node is started.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
https://twitter.com/scrutmydocs
Le 6 janv. 2015 à 17:42, Ian Bates
Can you share the query and example results please?
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Author of RavenDB in Action http://manning.com/synhershko/
On Tue, Jan 6, 2015 at 10:11 PM, Michael Irani
The best way is to add more nodes.
There isn't much you can do with that amount of data!
On 7 January 2015 at 06:09, David Mavashev crypti...@gmail.com wrote:
Hi,
I have a cluster of 20 nodes, 1 TB/day of data indexed, right now we only
keep the last 3 days opened but the customer wants us
Nice spot, I've raised an issue for it
https://github.com/elasticsearch/elasticsearch/issues/9170
On 7 January 2015 at 02:24, ajay.bh...@gmail.com wrote:
As per the documentation, the cat API will display output similar to
curl 192.168.56.10:9200/_cat/nodes
SP4H 4727 192.168.56.30 9300 1.4.2
Are both running the same ES and java versions?
Can you telnet between the data and master nodes on 9300?
On 7 January 2015 at 09:56, sh...@gethashed.com wrote:
Howdy - I cannot get two ec2 servers to connect to one another as a
cluster. The servers are successfully discovering themselves
Hi Mark
Thank you for your response.
Yes both have the same ES version [ same AMI with ES pre-installed]. I can
telnet between the two servers on 9300, 9200, 9400. Running netstat shows
the two machines are actively connected on port 9300.
The original machine which becomes the master machine
Sure. I simplified the query to keep things focused.
This query takes about 3 seconds to run:
{
  "size": 0,
  "aggs": {
    "top-fingerprints": {
      "terms": {
        "field": "fingerprint",
        "size": 50
      },
      "aggs": {
That is a ton of data to keep open. Can you squish it somehow?
On Tue, Jan 6, 2015 at 3:24 PM, Mark Walkom markwal...@gmail.com wrote:
The best way is to add more nodes.
There isn't much you can do with that amount of data!
On 7 January 2015 at 06:09, David Mavashev crypti...@gmail.com
Hi,
I have a cluster of 20 nodes, 1 TB/day of data indexed, right now we only
keep the last 3 days opened but the customer wants us to open 6 months of
indexes.
We don't care about query execution times but only that the indexing
throughput wouldn't get hurt.
Is there anything we can do in
Howdy - I cannot get two EC2 servers to connect to one another as a
cluster. The servers are successfully discovering each other via the
supplied AWS credentials with proper permissions; however, the non-master
server continually connects to the master and then fails when joining it. I have
OK, so I got the 1.4.0.8.
When that happened I got an error saying that Integrated Security was not
supported with this driver.
I had to put sqljdbc_auth.dll into the system PATH.
After that it started to work. Windows Auth only!!
Thanks for the help.
On Friday, January 2,
Hello,
I am attempting to upgrade a 10-node cluster from 1.3.2 to 1.4.2. I
upgraded the first node and removed and reinstalled the latest versions of the
plugins - the two non-site plugins are river-twitter (version 2.4.1) and jdbc
(version 1.4.0.8, and I also tried 1.4.0.7) - and when starting the node I see
Shane,
What JVM version is ES running under, as Mark asked? That exception
usually indicates that you're running two different JVMs in the cluster,
which unfortunately is not supported due to how Java serializes exceptions.
Ross
On Wednesday, 7 January 2015 10:22:46 UTC+11, shane adams
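One quick cross-node check of the running Java versions (a sketch; I believe the 1.x cat column is named jdk, as in the sample output earlier in the digest):

```
curl 'localhost:9200/_cat/nodes?v&h=name,version,jdk'
```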
I tried increasing the logging but it didn't give me any more info.
I'm sure it is the JDBC driver, as I have had to uninstall/reinstall
everything several times, and without that JDBC river driver it's fine.
Since search is working fine, is there any issue with me having to kill
it to
Hello,
We have 10 data nodes and 1 master node, and we have 20,000 lines
of logs/sec (peak).
Right now we send our logs from our program by bulk inserting (BI) to the 1
master node.
I'm wondering if I could bulk insert to the other nodes (data or master),
because we will add more logs.
Or should I add more master
Hi,
Update is delete and add. I mean, instead of updating existing document, it
deletes it and adds it as new document.
And those deleted documents are just marked as deleted and aren’t actually
removed from index until the segment merge.
IDF doesn’t take those deleted-but-not-removed document
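Those deleted-but-unmerged documents can be observed per segment with the cat API (a sketch; the index name is hypothetical):

```
# docs.deleted shows documents still on disk until the segments merge
curl 'localhost:9200/_cat/segments?v'

# In 1.x, a merge of only the deleted docs can be forced if needed:
curl -XPOST 'localhost:9200/myindex/_optimize?only_expunge_deletes=true'
```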
specific either:
root@ip-10-0-0-45:bddevw07[1005]:~/elasticsearch curl -XGET '
http://localhost:9200/derbysoft-20150106/_settings'
{"test-20150106":{"settings":{"index":{"creation_date":"1420502319287","uuid":"yuHSFauVTL-SwKVAwaRdCg","number_of_replicas":"1","number_of_shards":"3","version":{"created":"1040199"}
root@ip
It's not recommended to run an Elasticsearch cluster across geographically
dispersed locations.
You cannot assign both a primary and its replica to a single node; that defeats
the purpose! So it's by design.
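Relatedly, if you do want green health on a single node, the usual workaround is to drop the replicas entirely (index name is hypothetical):

```
curl -XPUT 'localhost:9200/myindex/_settings' -d '
{ "index": { "number_of_replicas": 0 } }'
```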
On 7 January 2015 at 14:08, Mathew D mathew.degerh...@gmail.com wrote:
Hi all,
I've
Hi,
Your mapping isn't correct. If you remove properties around metrics, it
should work.
-
curl -XPUT "http://localhost:9200/testagg/testagg/_mapping" -d '
{
  "testagg": {
    "properties": {
      "timeStamp": {
        "format": "dateOptionalTime",
        "type": "date"
      },
      "metrics": {
Is there anyone who can help me? I submitted this problem on GitHub, and the
author told me to submit it here. I hope someone can help me.
On Tuesday, January 6, 2015 at 11:13:18 AM UTC+8, haoc...@gmail.com wrote:
Hello all,
Recently I encountered a very strange problem. As the title says, in the
search, when I set the index
Did you create the repository on cluster B?
How?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 7 janv. 2015 à 08:19, Thomas Ardal thomasar...@gmail.com a écrit :
I'm using the snapshot/restore feature of Elasticsearch, together with the
Azure plugin to backup
On Tuesday, January 6, 2015 7:15:55 PM UTC+5:30, David Pilato wrote:
did you restart elasticsearch?
--
*David Pilato* | *Technical Advocate* | *Elasticsearch.com
http://Elasticsearch.com*
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr |
Hi Michael,
In general the more buckets being returned by the parent aggregator the
top_hits is nested in, the more work the top_hits agg needs to do, but I
didn't come across performance issues with `size` on terms agg being set to
50 and the time it takes to execute increasing 30 times when
I'm using the snapshot/restore feature of Elasticsearch, together with the
Azure plugin to backup snapshots to Azure blob storage. Everything works
when doing snapshots from a cluster and restoring to the same cluster. Now
I'm in a situation where I want to restore an entirely new cluster
Do you have only one node? Are you using REST or Java client?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 7 janv. 2015 à 08:18, Shashi shashikant.ba...@isanasystems.com a écrit :
On Tuesday, January 6, 2015 7:15:55 PM UTC+5:30, David Pilato wrote:
did you
That was exactly what I was missing. I didn't create the repository named
elasticsearch_logs on cluster B. After I created it, the backup runs
smoothly.
Thanks, David!
On Wednesday, January 7, 2015 8:31:08 AM UTC+1, David Pilato wrote:
Did you create the repository on cluster B?
How?
--
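For reference, the repository also has to be registered on the restoring cluster; with the Azure plugin that registration looks roughly like this (the repository name follows the thread, the container name is an assumption):

```
curl -XPUT 'localhost:9200/_snapshot/elasticsearch_logs' -d '
{
  "type": "azure",
  "settings": {
    "container": "backups"
  }
}'
```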
I have code that uses an aggregation and a child aggregation on a query. It
worked fine in 1.0.0. Now if I switch to 1.4.2, it does not work. I added
the Groovy dependency in my pom file. Do I have to make any specific
changes for aggregations to work?
Thanks!
Hi All,
We are encountering an ArrayIndexOutOfBoundsException whenever I use
scrolling. It happens randomly, when I scroll for the next result set (after
the first), passing the scroll ID from the previous request. I get the
ArrayIndexOutOfBoundsException from ES as a 500 error.
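For comparison, a typical 1.x scan/scroll sequence looks roughly like this (index name and sizes are examples); note that each response returns a fresh scroll ID, and the latest one has to be passed on every subsequent request:

```
# First request: start the scan and obtain a scroll id
curl 'localhost:9200/myindex/_search?search_type=scan&scroll=1m' -d '
{ "query": { "match_all": {} }, "size": 100 }'

# Each following page: send the scroll id from the *previous* response
curl 'localhost:9200/_search/scroll?scroll=1m' -d '<latest scroll_id>'
```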
Hello,
I am having issues installing Elasticsearch. When I first started to run
elasticsearch.bat I received warnings about insufficient disk space, so
I stopped the process and deleted music off my machine. Unfortunately, I
did not copy the error messages.
But on the second attempt