Sorry, it is "alias", not "alice".
On Friday, March 21, 2014 10:11:53 AM UTC+5:30, Chetana wrote:
I am planning to use elasticsearch (ES) for storing event logs. Per day,
the application should store nearly 3000+ events, and each will be around
30-50K in size.
I need to take some statistics monthly,
Hi
I am also facing the same issue of no discovery.ec2 logs getting generated,
for elasticsearch 1.0.1 and cloud-aws plugin 2.0.0.RC
I tried the corrected version of the settings given:
cloud:
  aws:
    access_key: XX
    secret_key: XX
    region: ap-southeast-1
discovery:
  type: ec2
Are they sharing the same security group name, and are they deployed in the same region?
If unicast does not work using private IPs, the aws plugin won't work either.
Can you, from one node, run:
curl http://secondnodeip:9200/
Same from the second node.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr /
Hello,
Thanks for your reply. What do you mean by not possible to escape it?
Could you provide a sample code in Java, that would work if the necessary
changes would be implemented?
Jean-Noel
On Thursday, March 20, 2014 1:51:54 PM UTC+4, Adrien Grand wrote:
Hi,
The aggregation doesn't
So, what is the Clear Cache API for?
On Wed, Mar 19, 2014 at 9:22 PM, joergpra...@gmail.com
joergpra...@gmail.com wrote:
JVM heap objects are reclaimed by garbage collection, not by clear cache
command.
Jörg
On Wed, Mar 19, 2014 at 4:17 AM, Gary Gao garygaow...@gmail.com wrote:
Hi,
Hi,
I am facing this weird issue where I have created an ES cluster with 3 data
nodes and 1 master node.
All data nodes are dual octa-core CPUs with 32 GB RAM, and the master is a
quad-core 16 GB machine.
I am trying to insert around 3000 records per second with replicas set to 1.
Each record is
For example, if you have a resident filter cache configured and there is
heap congestion, clear cache may temporarily help to release objects back
to the JVM. After a while they are GCed.
Usually, in most other cases, cached objects are GCed when memory becomes
low, so there is not much to worry
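For reference, the Clear Cache API mentioned above is invoked like this (the index name is a placeholder; omit it to clear caches on all indices):

```
curl -XPOST 'http://localhost:9200/twitter/_cache/clear'
```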
While browsing the lingo3g manual I came across
http://download.carrotsearch.com/lingo3g/1.9.0/manual/#chapter.lexical-resources
which states that we can customize the name of the label as per a pre-defined
Word/Label dictionary.
So I have some doubts on basis of that:
1) Where exactly these
On Fri, Mar 21, 2014 at 8:15 AM, Jean-Noël Rivasseau elva...@gmail.com wrote:
Thanks for your reply. What do you mean by not possible to escape it ?
Could you provide a sample code in Java, that would work if the necessary
changes would be implemented?
The nested field mapper stores data as
Hey folks,
I'd like to set up logstash to keep track of all search queries made against
my elasticsearch cluster, along with the amount of time it takes to return
the results and the number of results returned. Ideally the log file should
contain something like:
{ query: foobar, time_taken: 100ms,
Thanks David for such a quick reply.
I tried your command from one node to the second node and got the following result.
{
  "status" : 503,
  "name" : "cleandata-DataNode-1",
  "version" : {
    "number" : "1.0.1",
    "build_hash" : "5c03844e1978e5cc924dab2a423dc63ce881c42b",
    "build_timestamp" :
Have you checked http://download.carrotsearch.com/lingo3g/manual/#section.es and
https://github.com/carrot2/elasticsearch-carrot2 and
https://github.com/carrot2/elasticsearch-carrot2/tree/master/src/main/resources?
Jörg
On Fri, Mar 21, 2014 at 9:08 AM, Prashant Agrawal
@mauri, thank you for such an interesting analysis.
On 21/03/2014 1:01 PM, Mauri ma...@proactive-edge.com.au wrote:
Hi Brad
I agree with what Mark and Zachary have said and will expand on these.
Firstly, shard and index level operations in ElasticSearch are
peer-to-peer. Single-shard
Hi,
I'm using the below aggs for getting max and avg values from 2 data fields.
But the result is coming in Unix time format, I guess. Can I get the result
in a normal time format?
query ==
aggs : {
  max_time : {
    max : {
      script : doc['gi_xdr_info_end_time'].value -
Don't try ec2 discovery until you have tested that:
- you can connect from one machine to another on port 9300 (nc as client
and server; basic networking/firewalling)
- run a simple aws ec2 describe-instances call with the API key you plan to
use, and check that you can see the machines you need there.
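A rough sketch of those two checks (hostnames, region, and nc flags are assumptions; some nc variants need -p for the listen port):

```
# on node A: listen on the transport port
nc -l 9300
# from node B: try to connect
nc <nodeA-private-ip> 9300
# verify the API key can enumerate instances
aws ec2 describe-instances --region ap-southeast-1
```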
The status is incorrect, but I guess it's because your data node is not a
master node and can't find a master.
How did you set unicast?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
Le 21 mars 2014 à 09:33:35, Geet Gangwar (geetgang...@gmail.com) a
Hi Jörg,
While exploring more I got the answers to the first 3 points; I just wanted
clarification on point 4:
4) How can I check the built-in word databases in ES for clustering? Is
word-dictionary.en.xml the built-in database file for ES, and if yes, where
can I find it in ES after configuring ES?
On Fri, Mar 21, 2014 at 5:41 AM, Chetana ambha.car...@gmail.com wrote:
1. Is it a good idea to create shards based on size/period, or to create one
shard with multiple aliases based on filter conditions?
I would recommend using time-based indices; you can hear about the
rationale at
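The idea behind time-based indices can be sketched as follows; the index prefix and monthly granularity here are assumptions for illustration, not part of the original recommendation:

```python
from datetime import date

def monthly_index(prefix: str, day: date) -> str:
    """Name of the monthly index a document written on `day` belongs to."""
    # One index per month (e.g. logs-2014.03); a whole month of events can
    # later be dropped by deleting its index instead of deleting documents.
    return "%s-%04d.%02d" % (prefix, day.year, day.month)

print(monthly_index("logs", date(2014, 3, 21)))  # logs-2014.03
```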
I have set private ip of my data node.
discovery.zen.ping.unicast.hosts: ["10.142.181.16"]
and have disabled the multicast
discovery.zen.ping.multicast.enabled: false
Regards
Geet
On Fri, Mar 21, 2014 at 2:17 PM, David Pilato da...@pilato.fr wrote:
status is incorrect but I guess it's due
I'm not sure I understand your 4th question... The Lingo3G manual
(pointed to by Jörg) has an explicit location where lexical resources
should be placed:
If you have any custom lexical resources then the override folder
is ${es.home}/config/ by default.
So, for example, placing
By the 4th point I mean to ask where exactly I can look for the default
word-dictionary in ES (as per my setup I installed ES + carrot2 + copied the
Java API for lingo3g), though I have not copied the word dictionary manually
to my config.
As I have not copied the default dictionary from
Hi David,
I tried unicast after removing node.master=false. Now any of my nodes can
become master.
In this case unicast is working, but I have to manually do the unicast
setting on both nodes.
I have also installed the cloud-aws plugin on both nodes; still no logs are
getting generated for
You can find my Elasticsearch overview window and browse window, and my Java
code below.
From these windows I think you can see what I am doing.
I just create new indexes with a type and their id.
Before this I didn't create any cluster or node, etc. From my
Trying to submit a pull request. Getting a 403
-Nick
On Monday, 10 March 2014 17:28:24 UTC, mooky wrote:
Righto - I will try to add some.
-Nick
On Wednesday, 5 March 2014 13:48:58 UTC, Jörg Prante wrote:
Yes, there are no tests yet.
Jörg
On Wed, Mar 5, 2014 at 2:24 PM, mooky
Hi Dawid,
so if you specify your own resources these will have a priority.
Will the custom resources (if specified) take priority, or will they
override the default ones?
How will clustering happen in the below scenario:
1) Default resources are enabled, custom resources (having empty tags
Hi all,
I am currently trying to set up a complete ElasticSearch + LogStash +
Kibana stack on Amazon Web Services OpsWorks using the following tutorial:
http://devblog.springest.com/complete-logstash-stack-on-aws-opsworks-in-15-minutes/
Most of the things run fine except for ElasticSearch.
It looks like you are using elasticsearch-http-basic plugin and that
plugin doesn't support ES 1.0
https://github.com/Asquera/elasticsearch-http-basic/issues/9
On Friday, March 21, 2014 9:50:02 PM UTC+11, cha...@pocketplaylab.com wrote:
Hi all,
I am currently trying to set up a complete
Oh, ok thanks...
I had to update ES because the version of Kibana I am using didn't support
the previous one. I guess I'll have to downgrade everything or wait.
Thanks a lot!
On Friday, March 21, 2014 6:01:51 PM UTC+7, Kevin Wang wrote:
It looks like you are using
Hi,
I am setting up a system consisting of elasticsearch-logstash-kibana for
log analysis. I am using one machine (2 GB RAM, 2 CPUs) running logstash,
kibana and two instances of elasticsearch. Two other machines, each
running logstash-forwarder are pumping logs into the ELK system.
The
Hi All:
I have been trying to write a unitTest that tests some indexing and
search functionality against an embedded instance of elastic search so
that I do not have to rely on a test instance of elastic search being
available. Towards this goal, I create a local node instance like so :
What is the default log level in logstash? How do I run with the log level
set to error (meaning only errors shall be logged)?
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an
Hi everyone,
I'm brand new to ES and am trying to use it to create a basic analytics app.
I'm running into a problem that I can't seem to get sorted out myself:
I have documents like this:
{
  "_index": "hpstats",
  "_type": "articles",
  "_id": "http://www.standaard.be/cnt/dmf20140321_01034888-2014-03-21-11-07"
The get response is like so:
GetResponse gr = client.prepareGet(testindex, testDocument,
    testDocument.getDocumentId()).execute().actionGet();
The search response is like so:
SearchResponse sr =
    client.prepareSearch(testindex).setTypes(testDocument)
Hello All,
I'm trying to figure out a little problem in ES 1.0.
When using 0.90 I had a filter that pushes words together by shingling
with an empty token separator; however, since upgrading to 1.0 it seems to
switch to a space if the token_separator field is empty (i.e. token_filter:
).
And of course then you come up with the answer and you feel like a complete
idiot. Just in case anyone else runs into the issue:
You need to specify in the mapping that the URL field should be
not_analyzed (index: not_analyzed)
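A minimal mapping sketch along these lines; the index, type, and field names are assumptions taken from this thread:

```json
{
  "mappings": {
    "articles": {
      "properties": {
        "url": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```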
On Friday, March 21, 2014 1:33:27 PM UTC+1, Jan van Vlimmeren
I'm trying to use fvh with span_near queries but it appears to be totally
broken. Other query types work, even its query_string equivalent. Is
there anything I am doing incorrectly here? Or is there a work around that
I can employ in the meantime? Below is a recreation:
# Set up index
FYI this is ES 1.0.1
On Friday, March 21, 2014 1:00:33 PM UTC, Harry Waye wrote:
I'm trying to use fvh with span_near queries but it appears to be totally
broken. Other query types work, even its query_string equivalent. Is
there anything I am doing incorrectly here? Or is there a work
I am not sure if I missed something, but I believe I already tried what you
mentioned, as shown in my original post.
I can connect to each machine individually, and I am able to index and query
it fine with the default configuration without any zen or ec2 settings. But
when I turned them on like I
Hi,
I would like to index documents of type A and B that have an m:n relation
between them (indexing 1:n is pretty straightforward using parent/child
relationships or nested documents).
For example, I would like to find all documents of type A that have a
related document B which itself has a
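For reference, the 1:n parent/child baseline mentioned above is declared in the mapping like this (the type names `a` and `b` are placeholders):

```json
{
  "mappings": {
    "a": {},
    "b": { "_parent": { "type": "a" } }
  }
}
```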
Hi Julie,
Facetflow (https://facetflow.com/) is a hosted search solution (based on
Elasticsearch) which supports Azure.
Generally you would have your website send the data over the Elasticsearch
API to have it indexed by the search engine. We do however support:
- Synonyms
- Multilingual
I am not sure if I missed something, but I believe I already tried what you
mentioned, as shown in my original post.
I can connect from one instance to another.
I can connect to each machine individually and I am able to index and query
it fine with default configuration without any zen or ec2
Hi- I'm using Kibana 3 to dig through ES logs that are of several types:
email logs and several types of IDS logs which have been parsed and inserted
in ES. My problem is that fields selected for one type are always visible -
even when displaying other types where the fields are not present. The
Greetings,
Let me say up-front, I am a huge fan and proponent of Elasticsearch. It is
a beautiful tool.
So, that said, it surprises me that Elasticsearch has such a pedestrian
flaw, and serializes its Exceptions using Java Serialization.
In a big shop it is quite difficult (i.e. next to
I would be very interested in that pull request, too.
Changing every exception transport to a textual JSON error seems a proper
alternative. I haven't tried Jackson ObjectMapper on the exception classes
that are present in ES but it should be possible.
Jörg
On Fri, Mar 21, 2014 at 5:18 PM,
Thanks Zack.
So on a single node this test will tell us how much a single node with a
single shard can get us. Now if we want to deploy more shards per node, we
need to take into consideration that more shards per node would consume more
resources (file descriptors, memory, etc.) and performance
One of the main uses of a data-less node is that it acts as a coordinator
between the other nodes. It will gather all the responses from
the other nodes/shards and reduce them into one.
In your case, the data-less node is gathering all the data from just one
node. In other words, it
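Such a data-less (client) node is configured by disabling both roles in elasticsearch.yml; a minimal sketch:

```
# this node holds no data and is not master-eligible;
# it only routes requests and reduces the per-shard responses
node.master: false
node.data: false
```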
I wonder why you are asking for this feature? If it's because Java broke
backward compatibility on serialization of InetAddress, which we use in our
exceptions, then it's a bug in Java serialization, and hard for us to do
something about.
You will lose a lot by trying to serialize exceptions using JSON,
Thank you very much.
I figured the name slowlog meant it was verbose, rather than a directive to
log actions that surpass a given threshold. I lowered the config to 1ms and now
I can see the logs.
On Thursday, 20 March 2014 22:56:36 UTC-3, Ivan Brusic wrote:
The logging configuration specifies
Ok, we are seeing this error in the log, any clues?
Error injecting constructor, java.lang.IllegalStateException: This is a
proxy used to support circular references involving constructors. The
object we're proxying is not constructed yet. Please wait until after
injection has completed to use
If it happened once, then by definition it will happen again. History repeats
itself. ;-)
What exactly would you lose?
You are simply trading one rigid serialization scheme for another more lenient
one.
Yes, you would have to introduce something like Jackson's Object Mapper, but
that seems to
I do not think there is a default log level in logstash. You can either
exclude events on the client or on the server side.
On the server side, you can simply apply a drop filter to your input:
http://logstash.net/docs/1.4.0/filters/drop
On the client side, it all depends on your application.
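As a sketch, a server-side drop filter might look like this; the `loglevel` field name is an assumption and depends on how your events are parsed:

```
filter {
  if [loglevel] != "ERROR" {
    drop { }
  }
}
```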
You have a typo - it should be default, not defualt.
On 3/21/14 8:37 PM, Brian Stempin wrote:
Hi Costin,
Thanks for the response -- that sounds like what I need. I re-ran my job with
a mapping that included
dynamic_templates, and I'm still having issues. Here's my mapping...am I using
Thanks a lot Ivan, great answer.
Suppose I use in my script my own formula for tf (with
_index[field][term].tf()) and set the boost_mode to replace, does
elasticsearch calculate the tf twice or only once? In other words, is
it computationally efficient to calculate my own tf? Should I turn
Not trivializing the bug at all, god knows I spent close to a week tracing
it down to a JVM backward incompatibility change, but this happened once
over the almost 5 years Elasticsearch existed. To introduce a workaround to
something that happened once, compared to potential bugs in the
I have a six node cluster: 2 master nodes and 4 client/data nodes. I
have two indices: one with data and one that is set aside for future
use. I'm having trouble with the index that is in use.
After making some limits.conf configuration changes and restarting the
impacted nodes, one of
If this is a concern, why not have your clients use the REST API so they
don't need to worry about matching their Java version with the Java version
of the search cluster?
Thanks,
Matt Weber
On Fri, Mar 21, 2014 at 12:07 PM, kimchy kim...@gmail.com wrote:
Not trivializing the bug at all,
What I'm interested in would be a perspective that ES nodes could
communicate with other ES nodes by transparent (readable) data streams
specified by an ES node protocol, independent of Java serialization. So,
ES nodes in the long run could be implemented in even another language on
the JVM that
We've built our own Elasticsearch Client that has niceties like OAuth, the
ability to swap Clusters for maintenance, Zookeeper integration, ease of
configuration, retries, etc.
Otherwise we'd have a lot of wheel reinventing going on.
Plus, the Java Client is pretty nice, after all. ;-)
Thanks
Hello. I'm trying to make a 2D contingency table in Kibana (e.g. domain by
_type). The below in Chrome/Sense returns reasonable results, but how do I
get this displayed in kibana? I'm trying to use a table panel but can't
figure out where to put the query - maybe a different panel or future
Building on the foundation of Elasticsearch and the Elasticsearch-Ruby
client, Chewy is a Ruby gem that extends (and simplifies) the
Elasticsearch application search architecture, while also providing tighter
integration with Rails.
This post provided an introduction to Chewy (with code
I am trying to run a simple search and for some reason it's not working. I
might just be making a simple mistake here. I indexed 2 documents:
[ec2-user@ip-10-80-140-13 ~]$ curl -XPUT 'http://localhost:9200/users/payments/2' -d '{
  "namespace": "ns1",
  "id": 1,
  "fastId": "f1",
  "version":
Hi all,
I am working on an autocomplete implementation, and am trying to do the
following:
Given a user query: minneapolis hotel ivy
I want to be able to query one or more fields, and boost the relevancy for
each term that matches. Many are already jumping up and saying the match
query
Yes, that was it! Thanks!
Dennis
On Friday, March 21, 2014 11:29:13 AM UTC-7, Boaz Leskes wrote:
Hi Dennis,
Do you have action.auto_create_index: false in your elasticsearch.yml? if
so see
http://www.elasticsearch.org/guide/en/marvel/current/#relevant-settings
Cheers,
Boaz
On
Please ignore I inadvertently updated one of the documents.
On Fri, Mar 21, 2014 at 1:29 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am trying to run a simple search and for some reason it's not working. I
might just be making a simple mistake here. I indexed 2 documents:
The below code doesn't seem to match the document for some reason. Same
query when run directly using REST API works. Am I doing something wrong in
the code?
queryString = "fields.field1:value1";
searchResponse = client.prepareSearch()
    .setQuery(QueryBuilders.queryString(queryString)).execute()
Are you just searching in your code or indexing as well?
Could it be caused because you did not refresh before searching?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 21 mars 2014 à 22:13, Mohit Anchlia mohitanch...@gmail.com a écrit :
The below code doesn't seem to
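For reference, an index can also be refreshed explicitly before searching, which is handy in tests; the index name here is a placeholder:

```
curl -XPOST 'http://localhost:9200/testindex/_refresh'
```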
I posted earlier with an issue where I spent quite a bit of time trying to
figure out how to set up an EC2 cluster. I followed the documentation but
still couldn't get it to work. I have 2 i.4xlarge instances.
So, I decided to scale down the problem and work with one instance:
I installed the latest
oh, I forgot refresh. Let me try that. By default it's 1 sec right?
On Fri, Mar 21, 2014 at 2:18 PM, David Pilato da...@pilato.fr wrote:
Are you just searching in your code or indexing as well?
Could it be caused because you did not refresh before searching?
--
David ;-)
Twitter :
I'm trying to build a basic understanding of how indexing and searching
works, hopefully someone can either point me to good resources or explain!
I'm trying to figure out what having multiple coordinator nodes as
defined in the elasticsearch.yml would do, and what having multiple search
load
Hi,
We have been seeing sporadic NodeDisconnectedException and
NoNodeAvailableException in our ES cluster (0.90.7).
Our cluster is made up of 2 data nodes. One data node has a single primary
shard and one data node has a single replica shard. We connect to using the
Java TransportClient
Thanks! refresh was the issue. Is there an easy way to find how many
documents are in an index using the Java API?
On Fri, Mar 21, 2014 at 2:24 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
oh, I forgot refresh. Let me try that. By default it's 1 sec right?
On Fri, Mar 21, 2014 at 2:18 PM, David
Term frequencies are stored within Lucene, so there is no calculating of
the value, just a lookup in the data structure. You can disable term
frequencies and then create your own in the script, but it would be easier
to calculate that value at index time so that you can access it within your
A couple of things:
1. You should have n/2+1 masters in your cluster, where n = number of
nodes. This helps prevent split-brain situations and is best practice.
2. Your master nodes can store data; this way you don't need to add more
nodes to fulfil the above.
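The n/2+1 rule above maps to the minimum_master_nodes setting; for example, with 3 master-eligible nodes:

```
# quorum = 3/2 + 1 = 2 (integer division)
discovery.zen.minimum_master_nodes: 2
```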
Your indexing scenario
Awesome, ok, thank you.
Is the logic behind not allowing storage on master nodes to both:
Take advantage of a system with limited storage resources
and
Have a dedicated results aggregator/search handler?
I can imagine if I had a particularly badly written gnarly search, trying
to deal with the
Yes you can leverage a master to be a search node in that way.
We have a 15 node cluster with 3 masters, I'm thinking I'll add another 2
when we add a few more data nodes in the next few weeks. Essentially you
want an uneven number of masters to ensure a quorum is reached. But when
you start
Fixed. Pull request
here: https://github.com/elasticsearch/elasticsearch/pull/5491
ElasticHQ, Marvel, bigdesk and kopf are some of the better monitoring
plugins.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 22 March 2014 03:56, Rajan Bhatt rajanbh...@sbcglobal.net wrote:
Thanks Zack.
So on
What version are you running?
It's odd this would happen if, when you set replicas to zero, the cluster
state is green and your index is OK.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 22 March 2014 06:15,
Dashboards can be exported to JSON, and are obviously savable to the
'kibana-int' index as JSON. Question is, how do you upload them?
We're doing some work with dynamic fields in the dashboards and it seems
that it's impossible to create them from within the UI ... so you have to do
it locally
client.prepareCount()
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 21 mars 2014 à 22:34, Mohit Anchlia mohitanch...@gmail.com a écrit :
Thanks! refresh was the issue. Is there an easy way to find how many documents
are in an index using the Java API?
On Fri, Mar 21, 2014
You should update to the latest 0.90.x version or to 1.0.1, although it
probably won't solve your network issue.
I suppose you don't have anything in nodes logs?
How much HEAP did you give to your nodes?
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 21 mars 2014 à
How can we help you?
You did not actually show what you are doing.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 22 mars 2014 à 00:22, Deepikaa Subramaniam deeps.subraman...@gmail.com a
écrit :
Hi guys,
I am new to Elastic Search. Have setup my env use C# +Nest to
IIRC it does exist. Have a look at the dashboard settings (I don't remember
the exact tab name though).
By default, gist and file import/export are disabled.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 22 mars 2014 à 03:00, Matt m...@nextdoor.com a écrit :
Dashboards can