That doesn't make a lot of sense then. Why would you be able to set the
publish host to something different if they both have to bind to the same
interface? I don't understand what the purpose of these configuration
bits is. I was also told in the IRC room that this was how you separated
Your cluster is in a red state, which means you have unassigned primary
shards.
Install an ES plugin like elastichq or kopf to give you an idea on what is
happening.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On
Some time back I did a basic proof of concept of distributed tracing, based
on Google Dapper (http://research.google.com/pubs/pub36356.html), using
Elasticsearch.
Here
Hello,
My cluster consists of 20 nodes (3 masters + 17 data nodes) and handles
approx. 100 indices (8 shards + 1 replica each). At the moment there are no
client nodes; all bulk requests are balanced across the 3 master nodes, and
all search requests are balanced across all 20 nodes. The application can
close cold indices
Hi Dawid,
Is there any attribute in Lingo3G to suppress the label names returned by
ES when they contain multiple keywords separated by commas?
For ex.
If my cluster query returns label name as :
1) India development , india , hello india
2) mobile samsung , motorola g 205, micromax canvas
Will the custom resources (if specified) take priority, or will they
override the default ones?
Every resource file is read once, from the first location it is found
at. From there an internal dictionary is built and used for the
algorithm.
2) Default resources are disabled by setting
Due to limitations of TCP/IP server socket binding, you can do the following:
- just use network.host to configure a single IP address. This is the most
common case.
- set network.bind_host to a single internal IP address and
network.publish_host to a single external IP address on the same(!) network
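For illustration, a minimal elasticsearch.yml sketch of the second option; the IP addresses below are placeholders:

```yaml
# Bind the socket to an internal interface, but advertise an external
# address (on the same network) to other nodes and clients.
network.bind_host: 10.0.0.5        # internal IP (placeholder)
network.publish_host: 203.0.113.5  # external IP (placeholder)
```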
In the linked example I have only one record, and it contains data for all
fields.
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
I understand, so this package will replace the original Kibana, is that
right?
My problem is that I've tried to run the command lines provided, but my
server runs RHEL 6, so some of the commands don't work, like npm install.
I've tried to use the RHEL equivalents, but still
Hello, I've already published a topic about this plugin, but I didn't
mention that I'm using a RHEL server.
Have any of you tried to install it under RHEL? Some of the command lines
do not work, like npm install.
Could you please help me install this plugin, because I'm pulling my
Today the ES cluster still works as expected.
I still don't know why it failed in the first place or what I did to fix it.
Maybe a slow cluster restart helped: stopping all nodes and then starting
only one node so it can become master, instead of restarting them all at
once and letting them
I am new to Elasticsearch.
I would like to count the number of times a particular phrase appears in a
document. I am using the match_phrase query.
How can I do that for:
1. a single document.
2. all the documents under root index/indextype - if possible, how can I
do that using a
Hi,
Some feedback on this subject, the latest ES patches made my day. Using ES
1.0.3 solved the issue. Thanks :)
On Monday, April 7, 2014 at 09:35:00 UTC+2, Dunaeth wrote:
Hi Lee,
This issue could exactly match what we're experiencing, we'll wait for the
next revision then and see if it
Thanks Ankush, that worked!
On Wednesday, April 16, 2014 at 18:01:28 UTC-3, Ankush Jhalani
wrote:
You could do something like
*+"Gisele Bundchen"^5 "legal settlement"^1 jail^2 lawsuit^2*, which
would mean results must have "Gisele Bundchen" while the others are optional
and help in
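For reference, the same Lucene syntax wrapped in a query_string request body might look like the following sketch (the exact quoting of the phrases is an assumption):

```json
{
  "query": {
    "query_string": {
      "query": "+\"Gisele Bundchen\"^5 \"legal settlement\"^1 jail^2 lawsuit^2"
    }
  }
}
```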
Does it work after changing to {"type": "date"} and setting a date format?
{
  "messages": {
    "_timestamp": {
      "enabled": true
    },
    "properties": {
      "app_event_time": {
        "type": "date",
        "format": "/MM/dd HH:mm:ss"
      },
Apologies, the #create document section of the gist was incorrectly
repeating the index mapping.
I have updated it:
https://gist.github.com/alexeiemam/04eda6fff5915f9cef66#file-es_geo_distance-sh-L23
On Wednesday, April 16, 2014 6:57:45 PM UTC+1, Binh Ly wrote:
Strange. The only thing I can
Thanks!
On Wednesday, April 16, 2014 6:39:00 PM UTC+3, vineeth mohan wrote:
Hello Aleh,
Both should be good for your purpose.
But if you then want to match against "abc def" (that is, with the space)
tomorrow, only the array type will help.
You can disable the analyzer for the field
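In an ES 1.x mapping, disabling analysis for a field looks roughly like this sketch (the type and field names are placeholders):

```json
{
  "mytype": {
    "properties": {
      "my_field": {
        "type": "string",
        "index": "not_analyzed"
      }
    }
  }
}
```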
Hi,
Thanks for the reply. Helped a lot. I also have a query about index rate
total. Is it:
1. no. of documents indexed per sec
2. no. of indexing requests coming per sec
3. no. of indexing requests completed per sec
Can you also provide any links to documentation about what each metric
exactly represents?
Hi Mark,
Thank you for your comments.
Regarding the monitoring. We use the Diamond ES collector which saves
metrics every 30 seconds in Graphite. ElasticHQ is nice, but does
diagnostics calculations for the whole runtime of the cluster instead of
last X minutes. It does have nice diagnostics
Hi,
Thanks for the reply. Helped a lot. I also have a query about index rate
total. Is it:
1. no. of documents indexed per sec
2. no. of indexing requests coming per sec
3. no. of indexing requests completed per sec
In my case I gave 42 files for indexing:
{
  "_shards": {
    "total": 10,
Hi Binh Ly,
Any chance you could have a look at this?
There is a post about scripting parent/child relationships that might be
useful:
https://groups.google.com/d/topic/elasticsearch/cZaK0R-UmHw/discussion but
as far as I know that is not possible at the moment
Thnx!
regards,
Sven
On
17 new indices every day - whew. Why don't you use shard overallocating?
https://groups.google.com/forum/#!msg/elasticsearch/49q-_AgQCp8/MRol0t9asEcJ
Jörg
Mark, thanks for your response. We hit the same problem last night (before
making any of your suggested changes). Thankfully this time we did a whole
load of analysis that may be of use.
We had one data node that got a Java heap size error and left the pool of
our Elasticsearch cluster. We
On 4/17/14, 4:32 AM, Dunaeth wrote:
Hi,
Some feedback on this subject, the latest ES patches made my day. Using
ES 1.0.3 solved the issue. Thanks :)
Great! Glad to hear it worked for you!
;; Lee
Also, here is what we are seeing in our logs. If it's an issue I'm more than
happy to raise a ticket in GitHub:
From the master elasticsearch log:
[2014-04-16 18:57:17,298][WARN ][*transport* ] [Byrrah] Received response
for a request that has timed out, sent [31116ms] ago, timed out
Hello everybody,
I am using ElasticSearch in order to switch our main storage system from a
RDBMS system to a NoSQL system. For a particular functionality, I need to
use some nested elements. I know how to query them but I would like to know
if it is possible to count how many nested elements
The error below seems to indicate that my credentials aren't correct, which
I know they are. Could the error below indicate something else?
// YML //
cloud:
  aws:
    access_key: ##
    secret_key: ##
discovery:
  type: ec2
What do you generally do to evaluate your search system's performance? Do
you use a metrics-based approach where you can compare how changes to
scoring, analysis, or similarities affect hits in a quantitative way? Or
something more manual?
Going through Intro to Information
Maybe MLR (machine-learned ranking) is of some interest, if you
do not know much about your document relevancy.
I use BM25 Okapi. For library catalogs, I have document zones like
subject headings, title, author, identifiers and other supplemental texts
like abstracts. All searches are
Hi,
I have a bunch of documents that need to be deleted using the
DeleteByQuery API. I would like these to be deleted using the bulk API so
that I don't have to call deleteByQuery multiple times. I see that the Bulk
API only takes in a DeleteRequest. Is there any way I can group together a
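As far as I know, the bulk API only accepts delete actions by document id, not by query, so a bulk body of deletes would look like the following sketch (index, type, and ids are placeholders):

```json
{ "delete": { "_index": "myindex", "_type": "mytype", "_id": "1" } }
{ "delete": { "_index": "myindex", "_type": "mytype", "_id": "2" } }
```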
- I am on ES version 1.0.1
- Installed the 'head' plugin and it works. But marvel does not. Did a
reinstall and it did not help
- Also went through the discussion of a previous similar post ...
Screenshot:
[@mil-ora2 elasticsearch-1.0.1]$ ls -l plugins/
total 4
drwxr-xr-x 4
Hi folks,
In my document there is a field which contains only a URL as its value,
for example:
{URL : http://www.mohit-kumar-yadav.com\123124343\login_user.html}
{URL : http://www.mohit-kumar-yadav.com\home_user.html}
How can I search these documents?
I am using the following query:
1. curl -XGET '
Hi,
In my setup, the Marvel node is separate from the production cluster. The
production nodes send data to the Marvel node. The Marvel node had an OOM
exception. This brings me to the question: how much heap does it need? I
ran with the default config.
In my prod cluster, I have a load balancer which is a no-data
Hi all,
Currently I am working with the elasticsearch-hadoop library, with
EsOutputFormat writing to Elasticsearch.
But it looks to me like the writing is slow (elasticsearch-hadoop works
with HTTP bulks on port 9200).
So my question is: is it worth trying to write something of my own that will
I am also facing the same issue.
Right now I am just doing the filtering myself, but I would assume this is
a common use case, and ES must have a way to deal with it?
On Tuesday, April 15, 2014 6:24:52 PM UTC-7, Joris Bolsens wrote:
I am using the javascript API and want to do a search and have it
Filter (range filter on the date/time field) is exactly the way to do this.
Another possibility is using rolling indexes (e.g. an index per day, like
the logstash indexes are defined) but that obviously depends on a lot of
other business concerns and isn't really viable for most applications
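A sketch of such a range filter in ES 1.x syntax (the field name is a placeholder):

```json
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "range": {
          "created_at": { "gte": "now-30d" }
        }
      }
    }
  }
}
```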
--
Thanks Itamar.
So are you saying it's not possible to ask ES for the most recent X objects
that match the given query? Only to say give me the last 30 days of
objects?
On Thursday, April 17, 2014 2:39:43 PM UTC-7, Itamar Syn-Hershko wrote:
Filter (range filter on the date/time field) is
For recent X just sort on the _timestamp field and specify X as the page
size
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-timestamp-field.html
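Assuming _timestamp is enabled in the mapping, a sketch of that request for the most recent 30 documents:

```json
{
  "size": 30,
  "sort": [ { "_timestamp": { "order": "desc" } } ],
  "query": { "match_all": {} }
}
```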
--
Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer Consultant
Hi,
Some of the reasons behind using REST/HTTP are:
- no extra dependencies required (the transport client adds 8MB+)
- fairly good performance. This is a hot topic; however, due to Map/Reduce's
parallel nature, it's very likely that one will overload ES before having to
switch to the transport
Running Ubuntu 12.04 64-bit, the logstash init script does not work.
Here's the script that came with the logstash deb.
In particular, I don't understand how the script is trying to parse
something from the logstash pid before it even starts the program:
log_daemon_msg "Starting $DESC"
Just tried to upgrade elasticsearch 1.1.0 to 1.1.1 (with the cloud-aws
plugin 2.1.0), and am no longer able to start any nodes:
2014-04-18 01:19:42,754 [INFO] node - [Skywalker] version[1.1.1],
pid[22901], build[f1585f0/2014-04-16T14:27:12Z]
2014-04-18 01:19:42,767 [INFO] node - [Skywalker]
Hi,
We are using Elasticsearch version 1.01.1. I did the following test for an
analyzer:
PUT /test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "whitespace": {
          "type": "pattern",
          "pattern": "\\s+"
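One way to check what such an analyzer produces is the _analyze API; a sketch (host, index name, and sample text are placeholders):

```shell
curl 'localhost:9200/test/_analyze?analyzer=whitespace&text=foo+bar+baz'
```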
The S3 gateway from the cloud-aws 2.1.0 plugin works fine up to
elasticsearch 1.1.0, but appears to be broken with 1.1.1, see my other post.
On Friday, April 11, 2014 1:01:34 AM UTC-7, David Pilato wrote:
What is the cloud-aws plugin version please?
--
*David Pilato* | *Technical
The upstart job also doesn't seem to work; it just keeps dying over and
over again, never logging anything to the logfile.
If I manually start logstash, everything works normally.
On Thursday, April 17, 2014 6:12:38 PM UTC-7, OJ LaBoeuf wrote:
Running Ubuntu 12.04 64-bit, the logstash init
Hi ES users,
Is there any way we can search text present in images or PDF files through
Elasticsearch?
I mean to say: suppose I have a PDF/image file (stored in ES in base64
format) indexed in ES. If that image file contains "prashant" as text in
it, is there a way I