Hi Jörg,
thanks for the advice, it seems to be the solution.
Are there any API javadocs for ES?
It takes me 3 to 4 times longer writing something for ES than for Solr
because of searching through the sources
and no useful javadocs.
Bernd
On Friday, August 1, 2014 16:07:10 UTC+2, Jörg wrote:
Hi again,
a quick report regarding compression:
we are now using a 3 TB btrfs volume with 32k block size, which reduced the
amount of data from 3.2 TB to 1.1 TB without any significant performance
losses (we are using an 8-CPU, 20 GB memory machine with an iSCSI link to
the volume).
So for us
Hi
After network connectivity failure we have corrupted index recovery loop in
cluster.
Any help will be greatly appreciated!
Cluster state:
elasticsearch version 1.3.0
"cluster_name" : "segments",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 6,
Hi Colin,
I could now solve the problem thanks to your advice. I set the shard_size
to 0 (max) and then it works. I still don't understand it 100%.
Thanks for your patience.
Valentin
On Thursday, July 31, 2014 9:21:12 AM UTC+2, Colin Goodheart-Smithe wrote:
The Elasticsearch log files can
Hey,
are you using HTTP keep-alive connections? If not, consider switching to
them, as opening a new TCP connection each time not only results in high
latencies but also consumes file handle resources of the elasticsearch
process (the number of open files). If your client/language does not support this,
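With curl, for instance, keep-alive comes for free when several requests share one invocation; a minimal sketch, assuming a local single-node setup on the default port:

```shell
# curl keeps the TCP connection alive across these two requests,
# paying the handshake cost (and a file handle) only once.
# Host and port are assumptions for a local node.
curl -s 'http://localhost:9200/_cluster/health?pretty' \
     'http://localhost:9200/_nodes/stats/process?pretty'
```

Running the same with `-v` shows curl reusing the existing connection, and the second response includes `process.open_file_descriptors`, which is handy for watching the very resource discussed above.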
Hey,
this is exactly what logstash is for, so you may want to give it a try, as
it is already there. :-)
Also, you can use the geoip filter to extract the IP address from the header
as well, provided you log it.
--Alex
On Sat, Jul 19, 2014 at 6:26 AM, Otis Gospodnetic
Hey,
there are several possibilities to increase performance. First, you can have
your own index for your percolation queries, so it scales independently
from your data (there are use cases where people do not have increasing
data, but an ever increasing number of percolators). Second, you can filter
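The first suggestion, a dedicated index for the percolator queries, can be sketched like this (index, id, and field names are made up; this assumes the 1.x percolator API, where queries are indexed into the reserved .percolator type):

```shell
# Register a percolator query in its own index, separate from the data,
# so the query set scales independently:
curl -XPUT 'localhost:9200/my-queries/.percolator/alert-1' -d '{
  "query" : { "match" : { "message" : "error" } }
}'

# Percolate a candidate document against that dedicated index:
curl -XGET 'localhost:9200/my-queries/doc/_percolate' -d '{
  "doc" : { "message" : "an error occurred" }
}'
```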
What API are you using to communicate with ES?
On Sunday, August 3, 2014, 11:14:59 UTC+3, Ayush wrote:
I am new to Elasticsearch. I have created an index cmn with a type
mention. I am trying to import data from my existing Solr to
Elasticsearch, so I want to map an existing field to the _id
Hey,
maybe, exactly like that. However, you should provide some more information
and sample queries, so other people can help you.
--Alex
On Tue, Jul 22, 2014 at 7:55 PM, IronMike sabdall...@gmail.com wrote:
How can I combine phrases like "flight attendant" AND/OR/NOT "Boeing
airlines"?
--
Hey,
Just a remote guess without knowing more: On your client side, the
exception is wrapped, so you need to unwrap it first.
--Alex
On Wed, Jul 23, 2014 at 9:47 AM, Cosmin-Radu Vasii
cosminradu.va...@gmail.com wrote:
I am using the dataless NodeClient to connect to my cluster (version is
Hi all,
Hope you can give me some pointers on this topic. I'm trying to figure out
what is going wrong in my setup/config but I cannot figure it out.
I have two servers. Server A hosts a public website with the elasticsearch
index.
Server B retrieves XML product feeds, parses the feeds, and
Hey,
can you add some more information here? What are you doing when this happens?
Heavy indexing? Did you check the logfiles before that? Are there
exceptions? What elasticsearch version are you using? What JVM version are
you using? A bit of context would be great!
--Alex
On Sun, Jul 27, 2014
Your question is about the relation of RAM amount to storage capacity. There is
no yes-or-no answer, because
1. You can combine even tiny RAM with terabytes of storage, or tiny storage
with huge RAM. ES will work - but what is it good for? The question is what
query types you use and what is the
Hey
min/max_children is part of the query to define how many children a
matching parent document may have, but it is not exposed. A possible
solution would be to execute a count query with a has_parent on the matched
parent.
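A sketch of that workaround (index and type names are illustrative), using the _count API with a has_parent query to count the children of one matched parent:

```shell
# Count the child documents of the parent with id "1".
# "parent-type" and "child-type" are assumptions for this sketch.
curl -XGET 'localhost:9200/my-index/child-type/_count' -d '{
  "query" : {
    "has_parent" : {
      "parent_type" : "parent-type",
      "query" : { "ids" : { "values" : ["1"] } }
    }
  }
}'
```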
--Alex
On Tue, Jul 29, 2014 at 4:20 AM, Maxime Nay
Hey,
in order for kibana to work, elasticsearch needs to load all @timestamp
values into memory. This exception tries to tell the user that loading
this into memory would result in an out-of-memory exception, so
elasticsearch aborts the request. You could start elasticsearch with more
memory
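One way to do that, assuming the standard tarball scripts, is the ES_HEAP_SIZE environment variable:

```shell
# Give the JVM 4 GB of heap instead of the default 1 GB.
# The value is an example; stay well below the machine's total RAM
# so the OS keeps enough page cache.
export ES_HEAP_SIZE=4g
./bin/elasticsearch
```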
Hey
this means that the logstash forwarder could not connect to its configured
endpoint, because that endpoint does not seem to be running. Check if that
service is up and running, or maybe just misconfigured.
--Alex
On Thu, Jul 31, 2014 at 2:56 PM, Indirajith V indiraji...@gmail.com wrote:
Hi
Hey,
off the top of my head, joda (the underlying library used in Elasticsearch
for date handling) only does milliseconds, and so does Elasticsearch. Maybe
you can break the microseconds down to milliseconds and try again? Also, you
might need to create your own date format, see
hey,
you can configure the number of returned elements using the size
parameter, see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-suggesters.html#search-suggesters
or the examples
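For example (index, suggestion, and field names are assumptions), a completion suggest request with an explicit size:

```shell
# Ask for up to 20 suggestions instead of the default:
curl -XPOST 'localhost:9200/my-index/_suggest' -d '{
  "my-suggest" : {
    "text" : "ein",
    "completion" : {
      "field" : "name_suggest",
      "size" : 20
    }
  }
}'
```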
Hey,
you could simply index an 'all' color field value for every document, and
then use that for your suggestions?
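A sketch of that idea with the context suggester (all names are made up, and a completion field with a "color" category context is assumed in the mapping): every document lists its real color plus a catch-all value, and querying with the catch-all context returns suggestions across all colors:

```shell
# Index the suggestion under its real color and under "all":
curl -XPUT 'localhost:9200/products/product/1' -d '{
  "title_suggest" : {
    "input" : ["red shirt"],
    "context" : { "color" : ["red", "all"] }
  }
}'

# Query with the catch-all context to span every category:
curl -XPOST 'localhost:9200/products/_suggest' -d '{
  "s" : {
    "text" : "re",
    "completion" : {
      "field" : "title_suggest",
      "context" : { "color" : "all" }
    }
  }
}'
```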
--Alex
On Mon, Aug 4, 2014 at 10:50 AM, Tihomir Lichev shot...@gmail.com wrote:
Hi, Is there a way to tell the context suggester to display results for
all the categories ?
I
Now this is strange. I have the same mapping/settings for both ES and Solr, but
ES is boosting wrong!
If I add a boost to Solr, all boosted hits are listed first.
If I add the same boost to ES, only some of the boosted hits are listed
first.
Bernd
On Monday, August 4, 2014 08:13:10 UTC+2,
Hi,
on second thought, you may have run into this one as well:
https://github.com/elasticsearch/elasticsearch/issues/7086
On Mon, Aug 4, 2014 at 10:27 AM, Alexander Reelsen a...@spinscale.de wrote:
Hey,
Just a remote guess without knowing more: On your client side, the
exception is
Sure, along with some additional info.
- I use the Java API within a Grails application
- I use Elasticsearch version 0.90.5
### code to create a Transportclient.
Code is executed on server B, pointing to ES instance on Server A
I tried to increase the timeout to check if this would help
We are indexing all sort of events (Windows, Linux, Apache, Netflow and so
on...) and impact is defined in speed of the Kibana GUI / how long it takes
to load 7 or 14 days of data. That's what is important for my colleagues.
On Monday, August 4, 2014 10:52:25 UTC+2, Mark Walkom wrote:
What
When scroll-scanning
http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.x/search-request-scroll.html#scroll-scan,
I keep scrolling until I get a result that returns 0 docs, which is what
the docs seem to suggest that I should do.
But the final request (the one that returns 0
Hi,
I'm planning an upgrade from ES 0.90.5 to 1.3.1, and our current setup uses a
shared gateway.
What is the recommended procedure to migrate the data from the shared gateway
to local?
Thanks
Asher
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To
This is correct. On the last request, no hits are returned because all
shards have already been drained of results. If you look at shards.total
and shards.failed, you'll see they are also 0
clint
On 4 August 2014 12:54, Tim S timsti...@gmail.com wrote:
When scroll-scanning
Yes, it makes sense in this case. It's just confusing because it happens
differently in other situations: when doing a normal scroll (not
scanning), shards.successful is non-zero even when you've reached the point
where there are no more results (and even if you keep going). And if you do a
Maybe a typo?
For this example:
PUT /test/test/1
{
  "name" : "einzelhandel",
  "type" : "oa"
}
PUT /test/test/2
{
  "name" : "grosshandel",
  "type" : "oa"
}
PUT /test/test/3
{
  "name" : "grosshandel",
  "type" : "closed"
}
POST /test/test/_search
{
  "explain": true,
  "query": {
    "bool": {
      "must": {
Yes, the missing Javadoc online is a pity, so I have prepared Javadocs for
Elasticsearch 1.3.1 here:
http://xbib.org/elasticsearch/1.3.1/apidocs/
Jörg
On Mon, Aug 4, 2014 at 8:13 AM, Bernd Fehling bernd.fehl...@gmail.com
wrote:
Hi Jörg,
thanks for the advice, it seems to be the solution.
Are
You are aware that the kind of search performance you mean depends on the
RAM and virtual memory organization of the cluster, not on storage, so no
significant performance losses are to be expected?
Jörg
On 04.08.14 12:41, horst knete wrote:
We are indexing all sort of events
Thanks a lot.
+1
This MUST be on the ES Web page.
On Monday, August 4, 2014 14:00:59 UTC+2, Jörg Prante wrote:
Yes, the missing Javadoc online is a pity, so I have prepared Javadocs for
Elasticsearch 1.3.1 here:
http://xbib.org/elasticsearch/1.3.1/apidocs/
Jörg
On Mon, Aug 4, 2014 at
Hi,
This error arrived today after I rebooted my Mac.
It might have been something that OS X has updated since last time, I don't
know that for sure.
I'm running elasticsearch: stable 1.3.1, HEAD. Installed with Homebrew.
Any suggestions to how I can resolve this problem?
$ ps aux|grep java
Actually I can't follow you.
My query should be:
must_match(fieldname text, value einzelhandel, boost 200) AND
should_match(fieldname oa, value 1, boost 400)
Where is my typo?
Ah, you mean the value of oa, 1, is a string whereas the mapping is
integer?
On Monday, August 4, 2014,
On Sun, Aug 3, 2014 at 7:43 PM, Mark Walkom ma...@campaignmonitor.com
wrote:
Shard size will depend entirely on how many shards you've set and how big
the index is.
Allocation of data to shards happens in a round-robin manner, so balancing
isn't needed.
What do you mean by shards changing
Thanks Alexander. Not sure why we have a setting in the .yml file for
Access-Control-Allow-Methods: OPTIONS, POST, GET, PUT. This raises a
question about the configuration usage.
On Mon, Aug 4, 2014 at 3:31 AM, Alexander Reelsen a...@spinscale.de wrote:
Hi,
Elasticsearch currently does not allow
I migrated to ES 1.3.1.
I tried to do the same trick as before, but it fails to PUT the original,
just-dumped settings.
Any ideas?
curl -XGET localhost:9200/_template?pretty > template_all
curl -XPUT localhost:9200/_template/*?pretty -d @template_all
{
"error" :
Hi,
I'm trying LookupScript example here:
https://github.com/imotov/elasticsearch-native-script-example/blob/master/src/main/java/org/elasticsearch/examples/nativescript/script/LookupScript.java
The idea of my script is to pre-cache all child documents in LookupScript
instance, but I want to
You have only 1g heap configured for Elasticsearch. You should increase it.
Or try disabling bloom filter loading; this might help when recovering
from large indices with unique terms. It will be disabled by default in ES
1.4
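Disabling bloom filter loading can be sketched as a dynamic index setting (the index name is illustrative; this is the 1.x setting that 1.4 turns off by default):

```shell
# Stop loading bloom filters for this index's segments,
# trading a little term-lookup speed for heap:
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index.codec.bloom.load" : false
}'
```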
Heya,
We are pleased to announce the release of the Elasticsearch Azure cloud plugin,
version 2.4.0.
The Azure Cloud plugin allows using the Azure API for the unicast discovery
mechanism and adds Azure storage repositories.
https://github.com/elasticsearch/elasticsearch-cloud-azure/
Release
So because of problems with boosted numeric field values in Elasticsearch, I
should change my mapping
and change all my 60 million documents from integer (or short) to string?
Wouldn't it be better to fix this in Elasticsearch?
At least it works with Solr, so Lucene is not the problem.
I will
Update to this. I installed the latest version and this fixed my issue
for a while, until I ran a search with a highlighting query that should
not have hit on the nested documents, but they appeared in the results. At
this point I started to see the nested documents hitting on every
Ok, got it:
I did it template by template.
When you capture a template (for example: logstash)
curl -XGET localhost:9200/_template/logstash?pretty > template_logstash
you get:
cat template_logstash
{
  "logstash" : {
    "order" : 0,
    "template" : "logstash-*",
    "settings" : {
Hi,
I am trying to perform filtered aggregations across multiple indices and
types, and am coming across a problem when filtering on fields that exist in
one type and not another. The query fails to parse with the error "failed
to find geo_point field [place]" when trying to use the geo
Are you refreshing the index after inserting the test documents? It could
simply be a matter of timing.
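If it is timing, an explicit refresh between indexing and searching makes the test deterministic (the index name is an assumption):

```shell
# Force a refresh so just-indexed documents become searchable
# immediately, instead of after the periodic refresh interval:
curl -XPOST 'localhost:9200/my-index/_refresh'
```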
--
Ivan
On Sun, Aug 3, 2014 at 8:22 AM, John D. Ament john.d.am...@gmail.com
wrote:
Hi
So after running a few rounds of local automated tests, I've noticed that
sometimes I get the wrong
Hi,
We're looking at this at the moment.
We've found a couple: searchly, found, bonsai.
Is there one you can recommend?
Our app is in AWS Europe, so having it there would be a big bonus point.
Thanks,
Ricardo
On Wednesday, July 25, 2012 11:30:12 AM UTC+1, Alex Brasetvik wrote:
On Jul
The only way to achieve the result you are seeking is to use parent/child
documents:
http://www.elasticsearch.org/blog/managing-relations-inside-elasticsearch/
http://www.spacevatican.org/2012/6/3/fun-with-elasticsearch-s-children-and-nested-documents/
Cheers,
Ivan
On Mon, Aug 4, 2014 at 9:10
Hi,
Is it possible to verify that the value of a field in elasticsearch is in an
external list?
I have a blacklist of IP addresses and I want to check if a field value is
in this blacklist.
Thanks a lot.
--
You received this message because you are subscribed to the Google Groups
Yes. This issue was opened at my request. It seems there are bugs in ES.
And this is a pretty big one in my opinion.
On Aug 4, 2014 12:38 PM, Alexander Reelsen a...@spinscale.de wrote:
Hi,
on a second thought, you may have ran into this one as well:
Hello,
Is it possible to limit the amount of history indices marvel keeps around
via configuration? If so, how?
Thanks!
Daniel
Thanks everyone for replying. As it turns out, all our problems stemmed
from our index schema.
Since our app was heavily modeled after social networks, we had to store
our users' followers, and their IDs. To do that, each of our users had an
array called follower_ids -- the IDs of the people
Hello,
My ES node's CPU is running very high, and this is the output from running
hot_threads. Can someone help me understand why the CPU is so high?
This is a 1 node cluster running a linux machine with 16 cores and 72GB of
RAM.
::: [c9q9o027][VTde-tWsTcm9oe7vBZuadA][inet[/10.1.6.21:9300]]
48.3%
Hi Ricardo,
I'm a founder of Bonsai, and have been hosting search as a service since
2009, so am happy to answer whatever questions you might have. Most
providers on AWS will support the EU region (ourselves included). Happy to
follow up with more, either here or off-list.
On Mon, Aug 4, 2014
Because integer fields have no norms, it is quite uncommon to use them for
boosting. More common is interpreting integer values as input for a scoring
algorithm with function_score.
Which Solr version is this? Solr did not follow the Lucene default in
previous versions regarding integer
From a list of terms, I'm trying to find all the records that have those
terms in a specific field and aggregate on that field. As an example, I
have a list of records that contain a city name. I'd like the user to free
form type words (some of which would be the city name). Then I'd like to
The options are for cross origin resource sharing (CORS) only
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html#modules-http
Jörg
On Mon, Aug 4, 2014 at 3:27 PM, Vijay Dodla vijay.rem...@gmail.com wrote:
Thanks Alexander . Not sure why we have setting
Do you have many segments in your shards? If so, you should consider running
optimize.
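A sketch of both steps (the index name is an assumption): check the segment count first, then merge down if it is high:

```shell
# Inspect how many segments each shard currently has:
curl 'localhost:9200/my-index/_segments?pretty'

# Merge each shard down to a single segment. This is expensive and
# is best done on indices that are no longer being written to:
curl -XPOST 'localhost:9200/my-index/_optimize?max_num_segments=1'
```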
Jörg
On Mon, Aug 4, 2014 at 6:46 PM, Melanie Zamora mzam...@springcm.com wrote:
Hello,
My ES node's CPU is running very high, and this is the output from running
hot_threads. Can someone help me understand why
Our cluster has started to spit out error messages related to purge
failures. It also appears to have adversely impacted performance. The error
message is below, any ideas on where to look to fix this error?
[2014-08-04 13:37:22,949][WARN ][indices.ttl ] [prod2] failed
to execute
You should switch to using bulk indexing instead of indexing individual
documents. Also, consider switching off the refresh interval (set it to
-1) for the duration of your bulk indexing.
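Both suggestions sketched together (index and type names are made up; note the bulk body is newline-delimited and must end with a newline):

```shell
# 1. Switch off periodic refreshes for the duration of the load:
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index" : { "refresh_interval" : "-1" }
}'

# 2. Send documents in batches through the bulk API
#    (action line, then source line, repeated):
curl -XPOST 'localhost:9200/my-index/doc/_bulk' --data-binary \
  $'{"index":{"_id":"1"}}\n{"field":"value one"}\n{"index":{"_id":"2"}}\n{"field":"value two"}\n'

# 3. Restore the default refresh interval afterwards:
curl -XPUT 'localhost:9200/my-index/_settings' -d '{
  "index" : { "refresh_interval" : "1s" }
}'
```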
Cheers,
Ivan
On Mon, Aug 4, 2014 at 3:08 AM, Dennis de Boer datdeb...@gmail.com wrote:
Sure,
Javadocs also available at
http://jenkins.elasticsearch.org/job/Elasticsearch%20Master%20Branch%20Javadoc/Elasticsearch_API_Documentation/
http://javadoc.kyubu.de/elasticsearch/ (unofficial)
--
Ivan
On Mon, Aug 4, 2014 at 5:28 AM, Bernd Fehling bernd.fehl...@gmail.com
wrote:
Thanks a lot.
I would consider timing an issue if I got a shorter list of results. I'm
getting too many results in this case.
On Monday, August 4, 2014 12:03:35 PM UTC-4, Ivan Brusic wrote:
Are you refreshing the index after inserting the test documents? It could
simply be a matter of timing.
--
Ivan
An update: I have installed curator 1.2.2 by downloading the zip archive,
unpacking it, and then installing it directly:
$ cd curator-1.2.2
$ sudo python setup.py install
Not sure if it's the fix since the previous version of curator, or the
pip-less install, but either way, it's working
What version of Elasticsearch? Of Java?
How is TTL being used? For example, one extreme is to constantly add log
data and then delete old data. This case is, of course, best handled with
time-based indices and a tool such as curator to delete old data by index
and not by individual document
Is it possible to provide a minimal test case with docs to reproduce this?
Jörg
On Mon, Aug 4, 2014 at 8:05 PM, John D. Ament john.d.am...@gmail.com
wrote:
I would consider timing an issue if I got a shorter list of results. I'm
getting too many results in this case.
On Monday, August 4,
Alex,
By the way, is this bug seen with the TransportClient also, or just the
NodeClient?
Thanks!
Brian
On Monday, August 4, 2014 4:27:35 AM UTC-4, Alexander Reelsen wrote:
Hey,
Just a remote guess without knowing more: On your client side, the
exception is wrapped, so you need to
Hello All,
I have an ELK setup 'out of the box'. My goal is to parse apache logs via
logstash and display them in kibana.
I would like to know if it is mandatory to create an index on elasticsearch
in order to store the results from the apache logs (I have logstash.conf
output=elasticsearch).
Just the node client, and only for bulk requests.
On Aug 4, 2014 10:36 PM, Brian brian.from...@gmail.com wrote:
Alex,
By the way, is this bug seen with the TransportClient also, or just the
NodeClient?
Thanks!
Brian
On Monday, August 4, 2014 4:27:35 AM UTC-4, Alexander Reelsen wrote:
Hey,
Hello,
Like many others, I have the ELK stack. With very little data in
Elasticsearch, kibana 3 is super fast, but in my production environment,
kibana sometimes even fails to show any data.
Here are my hardware specs.
kibana + ES + nginx = m2.2xlarge + 20GB JVM Heap + 1TB ssd EBS volume
5
By default, Elasticsearch automatically creates an index if a document is
being added and the index doesn't already exist.
Logstash automatically specifies a time-based index with day precision for
each log entry. In other words:
logstash-2014.07.28
logstash-2014.07.29
logstash-2014.07.30
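So a plain index request is enough; the index is created on the fly (the index name and document are illustrative):

```shell
# No explicit index creation needed: PUT-ing the first document
# creates logstash-2014.08.04 with default settings and mappings.
curl -XPUT 'localhost:9200/logstash-2014.08.04/logs/1' -d '{
  "message" : "first apache log line"
}'
```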
I have started to implement stored procedure calls. They are not complete
in the JDBC plugin. At the moment it is an undocumented (incomplete) feature
that can register field names to callable statement result parameters. You
hit the nail on the head: how to map result set output to field names is not done yet.
Well, I can surely help test it out as it becomes ready for consumption,
given a little guidance on usage (being undocumented and all :-)). But
yeah, mapping will be key. Specifically, I have a column coming out of the
SP (the first column, called domain) that will need to be mapped to the
_id
You could check the slow log or hot threads to see if there is anything.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 5 August 2014 07:42, Tony Chong tonyjch...@gmail.com wrote:
Hello,
Like many others, I
No, you need something like curator -
https://github.com/elasticsearch/curator - to handle it.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 5 August 2014 02:43, Daniel Schonfeld downwindab...@gmail.com wrote:
Hi,
I'm new to elasticsearch, so please bear with me.
I am using logstash to ship sendmail logs into elasticsearch.
For any particular mail, sendmail logs the to and from addresses in
different log entries,
resulting in (at least) two different elasticsearch documents per mail
(they do