Hi all,
Pleased to announce release 1.0.0 of elastic4s.
https://github.com/sksamuel/elastic4s
Available on maven central.
Thanks to everyone who contributed to this release.
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from
Not sure about your architecture. Maybe you have good reasons for it, but
running more than one node per machine is not what I'd recommend.
But here, maybe they are client-only nodes?
Specifying a port range is OK. When your node starts, it tries to ping b001:9300,
then b001:9301, …
One node
How did you install Marvel?
You need to add Marvel plugin on every node and each node must be restarted.
Also take care of the setting marvel.agent.exporter.es.hosts if you use
a host/port other than localhost:9200.
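For example, a sketch of what that setting could look like in elasticsearch.yml (the host and port here are assumptions for illustration):

```yaml
marvel.agent.exporter.es.hosts: ["monitoring-host:9200"]
```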
Jörg
Hi,
As I am using Docker, Elasticsearch is not running during the build process.
I have posted my Dockerfile below.
https://github.com/damm/dockerfiles/blob/master/elasticsearch/Dockerfile#L7
FWIW my /_nodes/stats is correct and fully populated.
Additionally I have looked over
Please, I'd like to refer you to the nice Elasticsearch company to ask
them if they can provide the service you request, for example something
like a native Mac OS X dmg package of the ELK stack, with an OOTB experience.
I'm quite sure they have something like this in the pipeline, because
This is typically something that you can do using a terms aggregation[1].
It would look something like:
{
  aggs : {
    top_ips : {
      terms : {
        field : "ip_address", // change the field name accordingly
        min_doc_count: 100
      }
    }
  }
}
[1]
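To try it against a running cluster, the body above can be posted to the _search endpoint; a sketch (index name and host are assumptions), using search_type=count so no hits are returned, only the aggregation:

```sh
curl -XGET 'http://localhost:9200/logs/_search?search_type=count' -d '{
  "aggs": {
    "top_ips": {
      "terms": {
        "field": "ip_address",
        "min_doc_count": 100
      }
    }
  }
}'
```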
Hey,
you might want to read:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/testing-framework.html
should help you to create tests which have a whole elasticsearch cluster
running in the background.
--Alex
On Thu, Feb 13, 2014 at 7:37 PM, joergpra...@gmail.com
Thanks for the prompt response. Aliases will surely help me in resolving
the filter issue across indexes, but I didn't get how I would exclude certain
fields globally from highlighting. Our requirement is to highlight all
fields except a few.
On Thursday, 13 February 2014 19:49:10
Hey,
the standard thai analyzer supports a stopwords_path in the mapping, no
need to reference that ThaiWordFilterFactory...
Should help you.
--Alex
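For reference, a sketch of index settings configuring a thai analyzer with a stopwords file (the analyzer name and file path are assumptions; the path is resolved relative to the Elasticsearch config directory):

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_thai": {
          "type": "thai",
          "stopwords_path": "thai_stopwords.txt"
        }
      }
    }
  }
}
```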
On Fri, Feb 14, 2014 at 3:06 AM, Min Cha minslo...@gmail.com wrote:
Hello Nik.
Thanks for your advice.
I just tried as you advised.
Wow. I have to upgrade ES from 0.90 to 1.x.
Thank you.
On Friday, February 14, 2014 1:08:30 PM UTC+4, Adrien Grand wrote:
This is typically something that you can do using a terms aggregation[1].
It would look something like:
{
aggs : {
top_ips : {
terms : {
Thanks.
If you don't mind, can you give me a specific example or explain in more
detail?
I can't understand your advice.
On Friday, February 14, 2014 at 6:55:44 PM UTC+9, Alexander Reelsen wrote:
Hey,
the standard thai analyzer supports a stopwords_path in the mapping, no
need to reference that
Hi,
Yesterday I moved our cluster, which has a single node, from 0.90.11 to 1.0,
and I am getting the following message in the elasticsearch log:
[2014-02-14 11:14:40,528][WARN ][transport.netty ]
[mipLoggingCenter.mi-pay.com] Message not fully read (request) for [0] and
action [], resetting
Did you upgrade all nodes and restart them all?
It sounds like node mipLoggingCenter.mi-pay.com/10.3.57.34 is still running a
0.90.x version.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 14 February 2014 at 12:22:11, Aamir Khan
May I ask which OS you are running ES on?
There is no right (or wrong) answer. The best way is to test. Start with
the default 5 shards and load real data into it at the rate that you expect
in production. And then query it at the rate that you expect in production
- check throughput and response times. Then run your facets, sorts,
I see, I didn't understand your original question. Unfortunately, you
cannot say something like highlight all fields except field A, B, C. You
can only list the fields you want highlighting on, and in addition you can
use wildcards when specifying field names, for example:
{
_source:
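A sketch of a full search body that uses a wildcard to request highlighting on a family of fields (the query and field names are assumptions for illustration):

```json
{
  "query": { "match": { "content": "elasticsearch" } },
  "highlight": {
    "fields": {
      "title": {},
      "comment_*": {}
    }
  }
}
```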
Heya,
Just released elasticsearch spring factories project 1.0.0 for elasticsearch
1.0.
I guess I can call this release the Valentine's day edition :-)
https://github.com/dadoonet/spring-elasticsearch
Older versions are in branch 0.x:
https://github.com/dadoonet/spring-elasticsearch/tree/0.x
I am looking for the same answer. Did you ever find out how?
On Thursday, January 2, 2014 3:50:59 AM UTC-5, spezam . wrote:
Hello,
in Kibana 3 it is possible to set the index settings from the dashboard
settings.
For this I'm using day-based timestamping, with an index pattern such as
It's actually not that difficult. You just need a little patience learning
AngularJS. The easiest way to start is to look at:
1) src/app/panels is where all the panels live - copy one out of here (I'd
start with the text panel), create a new folder with a new name based on your
panel name, and edit and
We're almost there!
This is the result of the query that I have posted:
hits: {
  total: 3,
  max_score: 4.724929,
  hits: [
    {
      _index: website,
      _type: structure,
      _id: 7,
      _score: 4.724929,
      fields:
Sorry, typo:
This is the result of the query *you posted*:
On Friday, 14 February 2014 at 14:29:38 UTC+1, Luca Pau wrote:
We're almost there!
This is the result of the query that I have posted:
hits: {
  total: 3,
  max_score: 4.724929,
  hits: [
Not yet, I'm still using Kibana 2 because of this issue.
On Friday, February 14, 2014 2:21:38 PM UTC+1, Pascal Larivee wrote:
I am looking for the same answer. Did you ever find out how?
On Thursday, January 2, 2014 3:50:59 AM UTC-5, spezam . wrote:
Hello,
in Kibana 3 it is possible to set
I have the following setup:
indexes:
    website:
        client: default
        finder: ~
        settings:
            index:
                analysis:
                    analyzer:
                        my_analyzer:
                            type:
It should be the fuzziness property:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-flt-query.html
More details about how you can customize it here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/common-options.html#fuzziness
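For illustration, a sketch of a fuzzy_like_this query using that property (the field name and values are assumptions):

```json
{
  "fuzzy_like_this": {
    "fields": ["name"],
    "like_text": "text like this one",
    "fuzziness": 1
  }
}
```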
Hello,
I've updated the info on my page, which contained fairly old info (approx. ES
0.90.3):
https://en.opensuse.org/User:Tsu2/Install_and_Intro_Logstash-Elasticsearch-Kibana
to
Thanks both for your input.
@Jörg:
I understand ES uses all available process memory. I meant JVM memory
usage, which it tries to reclaim when it exceeds 75% (due to
the -XX:CMSInitiatingOccupancyFraction=75 option).
I don't know what kind of queries use Lucene FST, could you be kind enough
to
Can you try something like this:
[logstash-]YYYY.MM.DD,[dc1_logstash-]YYYY.MM.DD,[dc2_logstash-]YYYY.MM.DD
I ran '_cache/clear', which cleaned up fielddata and id_cache, and JVM memory
usage dropped from ~10.5 GB to ~5 GB.
Shouldn't ES itself clear these caches when JVM memory usage becomes
really high? I see the GC count kept increasing, but not much memory was
reclaimed until I ran _cache/clear.
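For anyone else looking, the call is simply (host and port assumed to be the defaults):

```sh
curl -XPOST 'http://localhost:9200/_cache/clear'
```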
On
Hello,
I want to know when and if I should manually call optimize on
elasticsearch. This blog seems to say it's a bad idea:
http://gibrown.wordpress.com/2013/01/24/elasticsearch-five-things-i-was-doing-wrong/
However, there must be a reason for optimize to be exposed in the rest api.
[]'s
Yes - I downloaded the master from GitHub and was still seeing the issue
when running it.
On Friday, February 14, 2014 10:38:49 AM UTC-5, Binh Ly wrote:
I'm curious if you're running the latest Kibana, or an older one.
The field query has been deprecated (and removed in ES 1.0) which is the
cause
I upgraded elasticsearch to 0.90.11 and installed Marvel. Congratulations
on a really nice tool!
Now I have a small issue: since Marvel is generating quite a lot of data
(for our development system), I would like to configure automatic deletion of
old data. Is there such an option? I didn't find
Note that I'm using the past tense only because I reverted back to ES
0.90.9, not because I figured out how to solve the issue :)
On Friday, February 14, 2014 4:06:06 AM UTC-8, Binh Ly wrote:
May I ask which OS you are running ES on?
IIRC Docker is a management tool for LXC.
So, it does raise the question...
Where should Marvel be pulling stats from in an LXC deployment? It's not fully
isolated like other virtualization
Would the histogram facet work for you?
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-facets-histogram-facet.html
BTW, if you've upgraded to 1.0, you might want to look at aggregations
which are more powerful than facets.
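A sketch of a histogram facet request body (the field name and interval are assumptions for illustration):

```json
{
  "query": { "match_all": {} },
  "facets": {
    "histo1": {
      "histogram": {
        "field": "price",
        "interval": 100
      }
    }
  }
}
```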
Strange. Just want to confirm your behavior again. I just downloaded Kibana
3, and when I hover on the top left it says Kibana 3 milestone pre 5. I cleared
all my browser cache to ensure there is no old Kibana code lurking around.
Then I created a new logstash dashboard. Then I go to the filtering
There is no automatic bucketing in Elasticsearch. I mimic the behavior with
an expensive process that uses many smaller fixed ranges which are reduced
into the number of buckets needed on the client side. Easily the slowest
part of my query. My goal was to wait for the facet refactor (which has
I am a new user of Elasticsearch and Logstash.
I have downloaded new versions of these tools (directly from the Elasticsearch
download page; exactly: Elasticsearch 1.0.0 and Logstash 1.3.3).
After running both, I noticed that Elasticsearch throws exceptions like the
ones below. Is it possible to run
This can *absolutely* be fixed in ElasticSearch. It's not a problem
with Lucene, but with how ES data is mapped onto the Lucene data model.
The problem is that types and fields use local names instead of
fully-qualified names. As far as Lucene is concerned, there would be a
field named user.id
Strange, I just downloaded the latest Kibana, and I created 2 simple
logstash indexes, logstash-2014.01.29 and a_logstash-2014.01.29. Then I
went into Kibana with a new dashboard and set the index timestamping to
day and the pattern to [a_logstash-]YYYY.MM.DD,[logstash-]YYYY.MM.DD
My histogram
For now, you can use the elasticsearch_http output (instead of
elasticsearch) and you should be able to get LS 1.3.3 going with ES 1.0.
For example:
output {
  elasticsearch_http {
    host => "localhost"
  }
}
Hi,
I have a small elasticsearch 0.90.11 cluster in Windows Azure that uses the
Azure Cloud Plugin for node discovery.
When trying to upgrade to elasticsearch 1.0.0 today, I noticed that when I
activate the Azure Cloud Plugin, elasticsearch won't start:
[2014-02-14
Ah! You're right. I need to release it soon.
For now, you can try 2.0.0.RC1-SNAPSHOT as the version name. It should work.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 14 February 2014 at 18:19:00, Hugo Leclerc (hugo.lecl...@radio-canada.ca)
wrote:
Here?
https://oss.sonatype.org/index.html#nexus-search;gav~org.elasticsearch~elasticsearch~1.0.0~~
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 14 February 2014 at 18:08:07, Robin Verlangen (ro...@us2.nl) wrote:
Hi there,
Where can I find the
Works like a charm. Thanks!
On Friday, February 14, 2014 12:21:19 PM UTC-5, David Pilato wrote:
Ah! You're right. I need to release it soon.
For now, you can try 2.0.0.RC1-SNAPSHOT as the version name. It should work.
--
David Pilato | Technical Advocate | Elasticsearch.com
Heya,
Just released elasticsearch-cloud-azure 2.0.0 for elasticsearch 1.0.0.
https://github.com/elasticsearch/elasticsearch-cloud-azure
Next version (2.1.0) will add Azure Cloud Storage repository for Snapshot and
Restore.
Contributions/PR/Issues/Doc warmly welcomed!
:-)
--
David Pilato |
My previous installation was around 3 weeks old; after upgrading to the
latest Kibana it started working just great.
Thanks a million!
On Friday, February 14, 2014 5:55:03 PM UTC+1, Binh Ly wrote:
Strange, I just downloaded the latest Kibana, and I created 2 simple
logstash
Shards should distribute over the 2 nodes assuming they are part of a
single cluster. Theoretically, yes more shards *distributed across multiple
nodes* will increase indexing speed. But you can still be limited by other
resources such as network, CPU, or memory, so it's hard to say how much
Binh,
First let me thank you for helping me track down what’s going on here.
So I can confirm that everything I see is the same as what you saw (with the
exception that at the end of the Kibana version mine says [master]). If I
enter a query for:
querystring must
query : action:connect
I haven't thought of automating it, but it seems to me that it should be easy
to address manually.
I haven't looked to see if this can be done in Marvel, but
elasticsearch-head and elasticsearch-hq both display the indices.
Since data is indexed by date, you can select the indexes you wish and
Chris,
I tried your suggestion, in the table panel, I opened 1 row, and then
filtered (magnifying glass) on 1 field. It indeed added a field filter -
must, field, and value. However, it re-executed all the queries properly.
The new filter translated to this part in the query (which looks valid
I managed to split the shards by restarting ES on the master, then
retested. Throughput is the same.
4500/sec seems a bit low; each doc is just 8k. The network doesn't seem to be
the bottleneck. I checked the I/O on disk, and it's between 0 (probably when
it's buffering before flushing) and 50/70.
Hi Boaz,
fs: {
  data: [
    {
      path: /data/elasticsearch/shared/docker/nodes/0
    }
  ],
It does not appear to be. This field is populated when I run it without
Docker; so is it expecting any particular file to exist, like /etc/fstab or
/etc/mtab?
I provisioned an IO 300 disk, no improvement at all.
Logstash is running on the same instance as the master node.
On Friday, February 14, 2014 2:10:47 PM UTC-5, Bastien Chong wrote:
I managed to split the shards by restarting ES on the master, then
retested. Throughput is the same.
I'm a little clueless when it comes to Java options in ES, and was
wondering where I define things like GC settings. I did try setting them in
ES_JAVA_OPTS under /etc/default/elasticsearch, but when I did, ES
wouldn't start, so either my syntax is wrong or it's something else.
Can anyone provide a few
What is the importance of an index template?
Could someone explain with an example?
Hi Thomas,
Marvel itself doesn't have a setting for this, but you can have a look at
this tool, built by the logstash team to help manage indices with time-based
data: https://github.com/elasticsearch/curator
Cheers,
Boaz
On Friday, February 14, 2014 4:41:36 PM UTC+1, Thomas Andres
It's hard to diagnose things offline, but is it possible for you to run
another logstash somewhere else (like maybe on the second box), run both of
them in parallel, and see what your combined ES throughput is? They would
both be writing to the same single ES cluster.
I've got indexes storing the same kind of data split into weekly chunks -
there has been some fairly substantial variation in data volume.
I've got a mapping change I need to make across all the back data, and I'm
thinking it might make sense to try to rebalance the documents per shard so
that
Here is a previous discussion on Rice/Sturges:
https://groups.google.com/forum/#!msg/elasticsearch/CAZhIHtB1UI/Exzd2_DanbAJ
Never did sit down and finally understand the paper Jörg linked. :) I
really should find the time to revisit the issue since my implementation is
costly.
Ivan
On Fri, Feb
It does indeed sound like some metrics are not available from your
environment. Marvel/ES uses Sigar to collect these metrics
(https://support.hyperic.com/display/SIGAR/Home). Each OS has different
ways to provide (or not provide) these metrics. If you absolutely cannot
get these metrics, you
If you find yourself repeatedly creating indexes with some similar
characteristics, you can create an index template. Whenever you create a
new index, if a template exists that matches it, the template will be
applied along with whatever you have predefined inside it. An
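For example, a sketch of registering a template that would apply to any new index whose name matches logs-* (the template name, pattern, and settings are assumptions for illustration):

```sh
curl -XPUT 'http://localhost:9200/_template/my_logs_template' -d '{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "event": {
      "properties": {
        "message": { "type": "string" }
      }
    }
  }
}'
```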
FWIW I hack bin/elasticsearch.in.sh to modify the JAVA_OPTS there.
Jörg
On Fri, Feb 14, 2014 at 8:33 PM, Mark Walkom ma...@campaignmonitor.com wrote:
I'm a little clueless when it comes to java options in ES, and was
wondering where I define things like GC? I did try setting it in
Tony,
We appear to be getting the information back from the cgroup currently, so
what's provided is fairly good. It would obviously be better to grab the
metrics from the cgroup and push them in, but that would be external to the
container (unless you mounted the cgroups in the container).
So for my case
^^ Same. For the package install, it's under
/usr/share/elasticsearch/bin/elasticsearch.in.sh.
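A sketch of the kind of line you could append in elasticsearch.in.sh (the GC flags here are just an example of the syntax, not a tuning recommendation):

```sh
# appended to bin/elasticsearch.in.sh, after the existing JAVA_OPTS lines
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75"
```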
Dan, no problem, I can build a version against 0.19.8
Jörg
Argh, I just wrote a reply but Google apparently ate it.
So, are you suggesting that scrapping parent/child and simply storing all
of the retailer data in the product document is a safer bet? I imagine we
could rate-limit our product indexing. However, this now gives me two
concerns: 1 - the