Of the various logstash groups, the following is the one that I have found
to be the most active and helpful:
https://groups.google.com/forum/#!forum/logstash-users
Brian
have used Query and Update and Delete and they all
work similarly in this regard. Just a guess.
Brian
<http://ci.openstack.org/logstash.html>.
Brian
On Monday, June 23, 2014 6:51:56 AM UTC-4, Mark Walkom wrote:
>
> I'm definitely open to expanding this.
>
> I am thinking it might even grow to include LS configs (e.g. custom grok
> patterns), as they are an important part of the vis
ert either, but it's been a lot of fun to
rediscover Elasticsearch from the ELK perspective (auto-mapping,
auto-creation of indices, and so on).
Brian
On Saturday, June 21, 2014 10:42:37 AM UTC-4, Ivan Brusic wrote:
>
> The path shows a Windows file name, so I am not sure if usin
rum/logstash-users> group is also
rather active and is a good place for logstash-specific help.
Brian
running on a
dual-core machine), I still didn't lose any data. So I am not sure about the
high data loss scenarios he describes in his missive; I have seen no
evidence of any data loss due to false insert positives at all.
Brian
On Friday, June 20, 2014 6:30:27 PM UTC-4, Mark Walkom wrote:
>
" : true,
*"_all" : { "enabled" : false },*
"properties" : {
  "message" : { "type" : "string" },
  "host" : { "type" : "string" },
  "UUID" : { "type"
So GNU tail -F piped into logstash with the stdin input
works perfectly in my evaluation setup and will likely form the first stage
of any log forwarder we end up deploying.
Brian
On Thursday, June 19, 2014 8:48:34 AM UTC-4, Thomas Paulsen wrote:
>
> We had a 2,2TB/d installation
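A minimal sketch of the tail-into-logstash pipeline described above (the log
path and output settings are hypothetical, and assume a Logstash 1.x era
config):
$ tail -F /var/log/app/app.log | logstash -f tail.conf
where tail.conf contains:
input { stdin { } }
output { elasticsearch { host => "localhost" } }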
t is doing!). However, this provides no means of authentication.
Brian
d to get several people
up and running with a one-button (as it were) build, install, load, and
test. Awesome job, Elasticsearch.com! You make me look good!
Brian
pend on logstash (or anything else) doing that
for me. It's already done by the base ES install package.
Brian
On Monday, June 16, 2014 8:03:33 AM UTC-4, Alexander Reelsen wrote:
>
> Hey,
>
> which ES version are you using? Seems to work with the latest version. You
> can als
behalf
of the ELK stack, I add the following option to the command line:
-Des.index.query.default_field=message
And now, my default field for a Kibana (Lucene) query is message and not
_all.
And _all is well (pun intended!).
Brian
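For reference, the same default can presumably be set in elasticsearch.yml
instead of on the command line; a minimal sketch, assuming the stock config
file:
index.query.default_field: message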
For example, I keep my Elasticsearch configurations for use with the ELK
stack within this directory:
*/opt/config/elk/current*
So my start-up script calls the elasticsearch command as follows:
$ES_HOME/elasticsearch -d ... -Des.path.conf=*/opt/config/elk/current* ...
Hope this helps!
Brian
ly discovered).
Telling people to query for message:work instead of just work does not
endear me to them.
Is there some way to configure Kibana 3 to change the default field in its
Lucene query to message instead of _all?
Thank you in advance!
Brian
:
transport.tcp.port: 9302
Hope this helps.
Brian
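For a node that must avoid the default ports entirely, the matching HTTP
setting can be pinned the same way; a minimal elasticsearch.yml sketch (the
port values are only examples):
transport.tcp.port: 9302
http.port: 9202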
ually wish to query, and
also disable the _all field.
Hope this helps!
Brian
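A minimal sketch of such a mapping, with hypothetical index and type names
(the _all setting matches the snippet shown earlier in this digest):
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "mappings" : {
    "my_type" : {
      "_all" : { "enabled" : false },
      "properties" : {
        "message" : { "type" : "string" }
      }
    }
  }
}'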
/kibana3
$ ln -s /opt/elk/current/kibana-3.1.0 /opt/elk/current/plugins/kibana3/_site
Verifying the link:
$ ls -l /opt/elk/current/plugins/kibana3/_site
lrwxr-xr-x 1 brian admin 33 Jun 11 14:16
/opt/elk/current/plugins/kibana3/_site -> /opt/elk/current/kibana-3.1.0
Now it's just as if K
As an aside, I am also wondering why this link
<http://www.elasticsearch.org/downloads/1-2-0/> is still active and
available when it was supposed to be pulled.
Brian
<https://github.com/revington/node-kibana> but I am not
sure of the version of Kibana 3 that is used there, and it would be nice to
go as directly as possible to the Elasticsearch sites, whether the
Downloads page or the official github repo, so that I can more easily get
the most recent
ana3/_site*/kibana-3.1.0/...
where kibana-3.1.0 is the root directory in the .gz archive?
Thanks!
Brian
Thanks so much, Ivan. That's a very important distinction.
Brian
On Friday, June 6, 2014 12:28:56 PM UTC-4, Ivan Brusic wrote:
>
> Plugins are essential to ES's success and are not going away any time
> soon. The river plugins, aka cluster singletons, are the ones which are
I should also point out that I had to edit a file in the metadata snapshot
to change the S3 keys and bucket name to match what development
was expecting.
On Friday, June 6, 2014 1:11:57 PM UTC-4, Brian Lamb wrote:
>
> Hi all,
>
> I want to do a one time copy of th
"snapshots" : [ ]
}
This leads me to believe that I am not connecting the snapshot correctly
but I'm not sure what I am doing incorrectly. Regenerating the index on
development is not really a possibility as it took a few months to generate
the index the first time around. If there
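One way to check that the repository itself is registered and visible from
the development cluster (the repository name here is hypothetical):
curl -XGET 'http://localhost:9200/_snapshot/my_repo?pretty'
curl -XGET 'http://localhost:9200/_snapshot/my_repo/_all?pretty'
The first call should echo back the repository settings; the second lists
every snapshot the cluster can see in it, so an empty "snapshots" : [ ]
there points at a repository or bucket mismatch rather than a restore
problem.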
still the case.
And of course, your own plug-in has a much better chance to be updated to
match exactly each new ES version to which you migrate. That's one of the
downsides of third-party plug-ins: They lock you into older ES versions
until the author gets a chance to update the plug-in
s a strong suggestion but rather as a case study
of what worked well for me.
Brian
Hello.
Is it possible to not index a nested document? I know I could add each
field of the nested document to the mapping file and set index = no, but it
would be nice if I could specify this at the nested document level. There
are many fields in my nested doc I would have to type out.
Also, I
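One possible approach, if losing searchability of the whole sub-document is
acceptable: map the field as an object with "enabled" set to false, so its
contents stay in _source but are never parsed or indexed. A minimal sketch
with hypothetical index, type, and field names:
curl -XPUT 'http://localhost:9200/my_index/my_type/_mapping' -d '{
  "my_type" : {
    "properties" : {
      "attachment" : { "type" : "object", "enabled" : false }
    }
  }
}'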
use all of the settings
are being controlled within elasticsearch.yml and not the template (e.g.
number of shards, number of replicas, and so on), eliminating the settings
from the template is desired, even if I have to leave it in but set its
value to the empty JSON object.
If this is the w
he source document does not have the
geo.coordinates field.
On Sunday, June 1, 2014 4:28:24 PM UTC-4, Alexander Reelsen wrote:
>
> Hey,
>
> you could index this as a geo shape (as this is valid GeoJSON). If you
> really need the functionality for a geo_point, you need to chang
I am new to Elasticsearch and I am trying to index a JSON document with a
nonstandard lat/long format.
I know the standard format for a geo_point array is [lon, lat], but the
documents I am indexing have the format [lat, lon].
This is what the JSON element looks like:
"geo": {
"type": "Poin
Hi Clinton,
Thank you for your suggestion. What will that do for the existing data?
Will I still be able to store copyrightYear as either a number or a string?
Thanks,
Brian Lamb
On Friday, May 23, 2014 1:09:46 PM UTC-4, David Pilato wrote:
>
> w00t! Sounds like I missed that
get an error:
Error in one or more bulk request actions:
index: /myIndex/myType/73148865 caused MapperParsingException[failed to
parse [copyright.copyrightYear]]; nested: NumberFormatException[For input
string: "2013/2014"];
Note that the adding of documents uses the Elastica client
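If values like "2013/2014" simply need to stop failing the bulk request,
one option (in a freshly created index, since an existing numeric mapping
generally cannot be changed in place) is ignore_malformed, which drops the
unparsable value but keeps the document. A minimal sketch using the field
names from the error:
curl -XPUT 'http://localhost:9200/myIndex' -d '{
  "mappings" : {
    "myType" : {
      "properties" : {
        "copyright" : {
          "properties" : {
            "copyrightYear" : { "type" : "integer", "ignore_malformed" : true }
          }
        }
      }
    }
  }
}'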
I removed all the extra allocation stuff. When I did that, the shards were
all propagated. Health is green again.
On Thursday, May 22, 2014 6:56:24 PM UTC-4, Brian Wilkins wrote:
>
> Went back and read the page again. So I made one master, workhorse, and
> balancer with rackid of rac
Went back and read the page again. So I made one master, workhorse, and
balancer with rackid of rack_two for testing. One master shows rackid of
rack_one. All nodes were restarted. The shards are still unassigned. Also, the
indices in ElasticHQ are empty.
Thanks for your reply. I set the node.rack to rack_one on all the nodes as
a test. In ElasticHQ, on the right it shows no indices. It is empty. In my
master, I see that the nodes are identifying with rack_one (all of them).
Any other clues?
Thanks
Brian
On Thursday, May 22, 2014 5:10:25 PM
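For reference, rack awareness needs the attribute wired up on both sides:
each node declares the attribute, and the cluster is told to use it for
allocation. A minimal elasticsearch.yml sketch (the attribute name rack_id
is only an example; it must match whatever the nodes declare):
node.rack_id: rack_one
cluster.routing.allocation.awareness.attributes: rack_id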
" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 10
}
Is there an incorrect setting? I also installed ElasticHQ. It tells me the
same information.
Brian
aramedic)
which was cause for a few minutes of 'whoa'.
Restarted node-1, waited for it to init, then restarted node-2 and all was
well.
Question is: is restarting ES in a cluster mode using the built-in service
facility the 'right' way? Or would it be better to halt / start us
/etc/puppet/modules/elasticsearch-master
/etc/puppet/modules/elasticsearch-loadbalancer
/etc/puppet/modules/elasticsearch-workhorse
In there, include elasticsearch and set any necessary settings in the
class definitions for each module?
Brian
Hello ES Users.
Does anyone have some documentation as to what ES considers valid
geometries? Polygons that I am trying to load into ES are created in an
ESRI/SQL stack, and we use the validation methods in this stack to validate
our geometries. But when I try to load some of the geometries in
M UTC-6, Honza Král wrote:
>
> Hi Brian,
>
> that message you are seeing is not an error - it's a warning from the
> python logging system that you don't have any logging configured. So
> when elasticsearch tries to log something it cannot.
>
> I'd suggest
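A minimal sketch of that kind of logging setup, assuming the standard
library logging module:
import logging
# give the 'elasticsearch' logger (and everything else) a handler,
# so warnings and errors are printed instead of swallowed
logging.basicConfig(level=logging.INFO)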
Hello.
I'm trying to bulk load about 550k records with spatial data into
Elasticsearch. After about 20 mins, an error occurs ("No handlers could be
found for logger 'elasticsearch'"), then the connection times out and the
Python script stops.
The Python loading script was working fine before adding
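If the failure turns out to be a plain timeout on large bulk bodies, one
common mitigation (an assumption here, not something this thread confirms)
is smaller chunks and a longer per-request timeout via the elasticsearch-py
bulk helper:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch()
docs = [{"name": "example", "id": i} for i in range(1000)]  # stand-in data
actions = ({"_index": "my_index", "_type": "my_type", "_source": d} for d in docs)
# smaller chunks and a longer timeout make slow spatial indexing
# less likely to drop the connection mid-bulk
bulk(es, actions, chunk_size=500, request_timeout=120)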
ve yourself
> better insight into your cluster.
> On 9 May 2014 23:08, Brian Wilkins wrote:
>>
>> I am on RH
Are there any gotchas I should be aware of when creating a document that
could contain thousands of pages of text (a Company and thousands of
nested Files), in addition to dozens or hundreds of fields?
On Friday, May 9, 2014 9:54:40 AM UTC-7, Brian Jones wrote:
>
> It seems like nesting the
if
need be.
On Friday, May 9, 2014 9:10:19 AM UTC-7, Brian Jones wrote:
>
> I have an index with parent documents ( Companies ), that have children (
> Files ). Each Company can have hundreds of Files. Companies and Files
> both have many fields.
>
> The search I'm t
I have an index with parent documents (Companies) that have children
(Files). Each Company can have hundreds of Files. Companies and Files
both have many fields.
The search I'm trying to perform is the Company that best matches, based on
its own fields and the fields of its children ( t
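One way to express that (the field and type names below are made up, and
this assumes the 1.x has_child query with score_mode) is a bool query that
mixes the parent's own fields with a child clause whose scores roll up into
the parent:
curl -XPOST 'http://localhost:9200/my_index/company/_search' -d '{
  "query" : {
    "bool" : {
      "should" : [
        { "match" : { "name" : "acme widgets" } },
        { "has_child" : {
            "type" : "file",
            "score_mode" : "sum",
            "query" : { "match" : { "content" : "acme widgets" } }
        } }
      ]
    }
  }
}'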
I am on RHEL 6. I can send messages from my Logstash shipper to Redis to
Elasticsearch. I installed logstash via RPM on all my servers and I
installed elasticsearch 1.0.3 via RPM. When I issue the command via curl to
check my node status, I get two different versions. In Kibana 3, it tells
me t
ilter" : {
"range" : {
"date" : {
"gte": "1998-10-30 00:00:00",
"lte": "1998-11-30 00:00:00"
}
}
}
}'
The issue I'm running into is that if my document contains the word
"sentence"
curl -X PUT http://hostname:9200/_cluster/settings -d
'{"transient":{"cluster.routing.allocation.disable_allocation": true}}'
When you're done:
curl -X PUT http://hostname:9200/_cluster/settings -d
'{"transient":{"cluster.routing.allocation.disable_allocation": false}}'
We use this to keep Elasticsearch up to date without downtime across 10s of
c
Is there a way to make an ES host _tell_ me what its data directory value
is? I'm thinking that, just as one can curl localhost to obtain version or
snapshot information, one might be able to curl localhost for values
like 'data directory'.
Thanks,
brian
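There is, as far as I know: the nodes info and stats APIs expose this. A
minimal sketch:
# per-node settings, including path.data if it was set explicitly:
curl 'http://localhost:9200/_nodes/settings?pretty'
# the filesystem stats show the data paths actually in use:
curl 'http://localhost:9200/_nodes/stats/fs?pretty'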
2014-03-28 19:00:00
2014-03-29 19:00:00
2014-03-30 19:00:00
```
So that's not exactly what we want; the times should be at midnight.
That throws our reports off by a day. Normally we could just correct for
this by adding `post_zone`, and we've done that in the past, but the DST
b
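A sketch of the aggregation-side workaround, assuming a 1.x date_histogram
where a named zone id (unlike a fixed offset) should track DST:
{
  "aggs" : {
    "per_day" : {
      "date_histogram" : {
        "field" : "date",
        "interval" : "day",
        "pre_zone" : "America/New_York",
        "pre_zone_adjust_large_interval" : true
      }
    }
  }
}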
will having the prod hosts bound to IP:9200 screw up the
works?
~brian
nts me from
spotting possible abuse or user-error. Is there any way for me to disable
ES's type-guessing or to provide a default guess? I'd rather have ES
default to a string than to fail a M/R job because its type-guess was wrong.
Brian
On Thu, Mar 20, 2014 at 12:26 PM, Costin Leau wr
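Since the guess in question was a date guess, one option may be to turn
detection off in the default mapping, so unmapped fields fall back to plain
strings. A minimal sketch with a hypothetical index name:
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "mappings" : {
    "_default_" : {
      "date_detection" : false,
      "numeric_detection" : false
    }
  }
}'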
ext type, which just seems
silly. Using the built-in JSON serializer is just not convenient in this
case.
Brian
On Thu, Mar 20, 2014 at 11:18 AM, Costin Leau wrote:
> My guess is that GSON adds the said field in its result. The base64
> suggests that there's some binary data in the mi
I've not touched the default mapping. I'm not sure why
it would try to parse it as anything other than a string.
I'll turn on TRACE logging and see what happens.
Brian
On Wed, Mar 19, 2014 at 5:35 PM, Costin Leau wrote:
> Hi,
>
> How do you pass the json to es-hadoop?
bulk(RestClient.java:120)
at
org.elasticsearch.hadoop.rest.RestRepository.sendBatch(RestRepository.java:147)
csUriParams.d does not appear in my mapping, so I never explicitly asked
for it to be treated as a date.
Any idea why ES is trying to treat it as a date?
Thanks,
Brian
It does, thanks.
Brian
On Fri, Mar 14, 2014 at 11:29 AM, Costin Leau wrote:
> Hey,
>
> There is but in the big picture it doesn't make any difference. If the
> data is already in JSON format then es-hadoop can stream the data directly
> without having to do any conversi
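The switch that enables that streaming path is, as far as I know, the
es.input.json setting in the job configuration; a minimal sketch of the
relevant properties (the resource name is hypothetical):
es.resource = my_index/my_type
es.input.json = yes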
Hi,
I'm currently using the elasticsearch-hadoop component to load data into my
ES cluster. Currently, the ESOutputFormat will accept a Map or a Text that is already in JSON format. My question: Is there
a performance advantage to using one over the other?
Thanks,
Brian
So when done this way I can put it on the master and not all the slave
nodes. THANK YOU! I knew there had to be something better than what I was
doing. Plus this looks much more flexible. I saw this page before, but I
didn't realize it applied to the entire cluster, whereas the default-mapping is
:49:16 AM UTC-5, Brian wrote:
>
> I am currently running 4 nodes, 3 data nodes with master=false, and 1
> master node, with data=false. When making changes to my
> default-mapping.json file, I am uncertain if I have just not done something
> right, or if I need to go put this sa
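A minimal sketch of the template approach being described, with
hypothetical names; because templates live in the cluster state, a single
PUT to any node covers the whole cluster:
curl -XPUT 'http://localhost:9200/_template/default_mappings' -d '{
  "template" : "*",
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : false }
    }
  }
}'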
I am currently running 4 nodes, 3 data nodes with master=false, and 1
master node, with data=false. When making changes to my
default-mapping.json file, I am uncertain if I have just not done something
right, or if I need to go put this same file on all four hosts. My setup
currently does hav
So did anyone ever have an answer as to why the TTL wasn't actually being
disabled? I have a default TTL, but I have 1 index that I would like for
new documents to not be given a TTL. This index already exists; I just
want to disable the TTL so that, as new documents arrive, they aren't given a TTL.
I ha
Histogram#Number_of_bins_and_width
I'm pretty happy with the results.
Brian
On Friday, February 14, 2014 9:31:58 AM UTC-5, Georges@Bibtol wrote:
>
> Hi everyone,
>
> I have multiple facets on text, integer, date.
>
> I use range filter on some integer facets but I have to "ma
"missing" aggregation,
but I don't see any aggregation equivalent for the "other" count.
Does anyone have any suggestions on how to replicate this functionality
using aggregations?
Brian
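One workable substitute, sketched here on the assumption that computing it
client-side is acceptable: run a terms aggregation next to a missing
aggregation, then derive "other" as total hits minus missing minus the sum
of the returned bucket counts.
curl -XPOST 'http://localhost:9200/my_index/_search?search_type=count' -d '{
  "aggs" : {
    "top_terms" : { "terms" : { "field" : "my_field", "size" : 10 } },
    "no_value"  : { "missing" : { "field" : "my_field" } }
  }
}'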
Thank you Jörg,
You've been a great help! I have everything working now.
On Saturday, February 1, 2014 11:24:49 AM UTC-6, Jörg Prante wrote:
>
> Use 0.90.10 or 1.0.0.RC1
>
> Jörg
>
> On Sat, Feb 1, 2014 at 7:03 AM, Brian Easley wrote:
>
>> I sorted
I sorted that out, but am still unable to view metrics inside Marvel:
"No results. There were no results because no indices were found that match
your selected time span."
I do see quite a number of elasticsearch metrics inside kibana however,
including: index_stats 1784
Does anyone have any idea on
I'm a newcomer to Elasticsearch. Where should I comment out the sysctl
lines?
On Friday, January 31, 2014 2:43:25 PM UTC-6, Jörg Prante wrote:
>
> It's OpenVZ that does not allow running sysctl. It's not a permissions
> issue; it's a consequence of the OpenVZ guest architecture. Such a guest does not have access t
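If this is the Debian package, the sysctl call is, as far as I know, driven
by the MAX_MAP_COUNT variable, so commenting it out in the defaults file
should stop the init script from attempting it:
# in /etc/default/elasticsearch
# MAX_MAP_COUNT=262144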
Hello,
I am unable to properly start elasticsearch 0.90.9 and higher inside Debian
wheezy openvz containers with the following error:
Starting ElasticSearch Server:sysctl: permission denied on key
'vm.max_map_count'
Outside of running this on bare metal, is there any other way to get around
t
I'm not using the Service Wrapper for Elasticsearch.
I specify the ES_HEAP_SIZE when I start Elasticsearch like this:
> ES_HEAP_SIZE=4g /usr/local/elasticsearch/bin/elasticsearch
Is there a place I can set this so it does not need to be specified on
launch?
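One place to persist it, assuming a plain tarball install started from a
shell: export it from the startup user's profile or from a small wrapper
script, e.g.:
# in ~/.profile of the user that launches Elasticsearch, or in a wrapper script
export ES_HEAP_SIZE=4g
/usr/local/elasticsearch/bin/elasticsearch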
arch/reference/current/setup-configuration.html
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html#field-data
>
>
> --Alex
>
>
> On Wed, Dec 18, 2013 at 10:51 PM, Brian Jones wrote:
>
>> I'm using the Terms Facet wit
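The second link above is the relevant one here; per those docs, field data
usage can be inspected per node and per field, which shows whether a terms
facet is simply exhausting the field data cache. A sketch:
curl 'http://localhost:9200/_nodes/stats/indices/fielddata?pretty&fields=*'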
Hi everyone, we're considering moving from Sphinx to Elasticsearch, but I
want to make sure it is a good fit before rewriting our infrastructure.
Currently we have 20 dual octo-core 2690 machines with 32GB of RAM. We
handle about 2,000 queries per second with the existing setup, but we have
s
I'm using the Terms Facet with Elasticsearch V0.20.2. The server has 8 x
Intel Xeon E5-2680 v2 processors and 15GB of memory.
My Terms Facet queries work great as long as the number of documents in the
index is small (e.g., less than 20,000). When the system hits more,
pushing into the hundre