It looks like a networking issue, at least based on the "No route to host" in
the error.
Can you ping the master when this is happening? What about doing a telnet
test?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 13
Hi Mark,
Thanks for replying.
The master (10.1.4.197) and other nodes can be reached while the problem
node (10.1.4.196) is not reachable.
So, we can see the cluster status at that moment:
"status" : "yellow",
"timed_out" : false,
"unassigned_shards" : 0,
On Thursday, March 13, 2014 2:03:44 PM
I never tested that kind of doc (unnamed arrays) and I think it might be your
issue.
Could you test indexing a single doc without couchbase and see if the issue
comes from there?
Also, you didn't mention which versions you are using (ES, couchbase plugin).
--
David ;-)
Twitter : @dadoonet /
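A minimal test along those lines might look like this (a sketch; the index, type, and document shape are made up to mimic the unnamed-arrays case):

```sh
curl -XPUT 'localhost:9200/testindex/doc/1' -d '{
  "points": [ [1.0, 2.0], [3.0, 4.0] ]
}'
```

If this indexes fine, the problem is more likely on the couchbase plugin side.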
Thanks David.
Is there any command to check what data is present in each shard?
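For reference, on 1.0+ the cat API lists which shards exist and how many docs each holds (a sketch):

```sh
curl 'localhost:9200/_cat/shards?v'
```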
On 13 March 2014 11:27, David Pilato da...@pilato.fr wrote:
Answered inline.
--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 13 March 2014 at 06:50, Nitesh Earkara enit...@gmail.com wrote:
Hi folks, I've been trying to figure out the default analyzer for
'_all'. At first, I was simply thinking that it would be the standard
analyzer. But as my testing shows, that's not the case at all (stop
words are kept?!). After some testing, it would appear to be using the
standard tokenizer, with
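A quick way to inspect what a given analyzer emits is the Analyze API (a sketch; the sample text is made up):

```sh
curl 'localhost:9200/_analyze?analyzer=standard' -d 'The QUICK brown foxes'
```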
Use Linux Disk encryption (LUKS) and trusted computers for encrypting files.
Block device encryption is outside the scope of Lucene.
Jörg
On Thu, Mar 13, 2014 at 6:03 AM, Ivan Brusic i...@brusic.com wrote:
If you stick to non-analyzed term queries, you can always encrypt your
data before it
You can search using _routing and give a document id as the routing key.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 13 March 2014 at 07:21:33, Nitesh Earkara (enit...@gmail.com) wrote:
Thanks David.
Is there any command to check what data is
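A sketch of the routed search David describes (the index name and routing value are hypothetical):

```sh
curl 'localhost:9200/myindex/_search?routing=mydocid' -d '{
  "query": { "match_all": {} }
}'
```

Only the shard the routing value hashes to is queried.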
It depends on your elasticsearch version I guess as in 1.0, standard analyzer
does not remove stop words anymore.
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 13 March 2014 at 08:06:55, Jeffrey 'jf' Lim (jfs.wo...@gmail.com) wrote:
Hi folks, I've
Hi,
I had similar situation and I created the following issue.
https://github.com/elasticsearch/elasticsearch/issues/5245
And this issue is already closed.
Unfortunately, this behavior is currently a limitation of nested objects.
I hope this will answer your needs.
Regards,
Jun
2014-02-20
The repeat filter only applies to terms that actually get stemmed, i.e. if
you have "goes" it will be stemmed to "go", but with the repeat filter it
will also emit "goes" in addition to "go".
Makes sense?
simon
On Thursday, March 13, 2014 12:38:00 AM UTC+1, Nikita Tovstoles wrote:
Could someone please
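A minimal analyzer chain showing this behavior might look like the following (a sketch; the index and analyzer names are made up):

```sh
curl -XPUT 'localhost:9200/test' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "stem_keep_original": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "keyword_repeat", "porter_stem" ]
        }
      }
    }
  }
}'
```

With this chain, "goes" is emitted twice at the same position: once unchanged (protected by keyword_repeat) and once stemmed to "go".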
It must be that the service is not open.
On Thursday, March 13, 2014 at 2:10:22 PM UTC+8, Hui wrote:
Hi Mark,
Thanks for replying.
The master (10.1.4.197) and other nodes can be reached while the problem
node (10.1.4.196) is not reachable.
So, we can see the cluster status at that moment:
"status" : "yellow",
I rethought this problem last night. The solutions I've presented already
are a lot less efficient than they could be, as they increase the work per
doc by a factor of the number of buckets (ie 24h * 28d = 672).
It'd be much more efficient to calculate this rolling average client side
in a single
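Once the bucket counts come back from a single date_histogram request, the rolling window itself is trivial client-side; for example, a width-3 moving average over made-up bucket counts (a sketch):

```shell
# moving average (window of 3) over one count per line
printf '1\n2\n3\n4\n5\n' | awk '{a[NR]=$1} NR>=3 {print (a[NR]+a[NR-1]+a[NR-2])/3}'
```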
perfect ! thanks ! :)
On Wednesday, March 12, 2014 17:52:56 UTC+1, Romain NIO wrote:
Hi,
I'm facing some issues with the bettermap plugin in Kibana. Kibana is
not able to load the background of the map. It seems that the API cannot
(logs from chrome):
1. Request URL:
On 12 March 2014 21:55, Ben Hirsch benhir...@gmail.com wrote:
I will know the 5-10 id's needed to be fetched at run-time. With
script_fields how would I access the children with those specific id's?
With script fields, you have access to the whole _source field, so you
would need to write a
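A sketch of what such a script field might look like (field names are hypothetical; note the script has to walk _source itself):

```sh
curl 'localhost:9200/myindex/_search' -d '{
  "query": { "match_all": {} },
  "script_fields": {
    "children_raw": { "script": "_source.children" }
  }
}'
```

Filtering that array down to the 5-10 wanted ids would then happen inside the script (or client-side on the returned field).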
Did you set the same cluster name on both nodes?
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 13 March 2014 at 09:57:35, Guillaume Loetscher (sterfi...@gmail.com) wrote:
Hi,
First, thanks for the answers and remarks.
You are both right, the issue
Hi Dome,
Do you mean the service on 10.1.4.196 is not open? Yes, the service should
have stopped when it was rebooted.
But the master node 10.1.4.197 already removed the problem node 10.1.4.196 when
it could not ping the machine 10.1.4.196.
The cluster should be fine after this operation. Do I
Appreciate that Clint. But I was asking whether I could do this without having
to modify mappings - see the ref to another post seemingly alluding to that
That post refers to using the keyword_repeat token filter to index stemmed
and unstemmed tokens in the same positions. It won't work for your use
On 12 March 2014 23:32, Michael Schlenzka mich...@schlenzka.com wrote:
I do not want the sum of all the values of the key-value-pairs. I want to
boost each document (with a specific key) only with the value for the
matching key/color (e.g. if searching for documents with blue as color each
On 13 March 2014 05:15, Ivan Brusic i...@brusic.com wrote:
That said, your Elasticsearch server is still accessible to anyone over
the internet. I
Or somebody on your network is infected with a bot.
--
You received this message because you are subscribed to the Google Groups
elasticsearch
Can you telnet from each box to port 9300 on the other box?
Does your bridge support multicast? If not, you could use unicast instead.
clint
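The unicast setup Clint mentions would look roughly like this in elasticsearch.yml (the hosts are made up):

```yaml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```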
On 13 March 2014 10:31, Guillaume Loetscher sterfi...@gmail.com wrote:
Sure
Node # 1:
root@es_node1:~# grep -E '^[^#]'
Hi,
I am trying to use the terms lookup filter not with a user provided list or
a single document but with a list of ids from a lot of documents already
indexed by elasticsearch. Any ideas how one could save a roundtrip similar
to the single document example?
Cheers
Valentin
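For reference, the single-document form of the terms lookup filter looks like this (a sketch with made-up names); the question is how to source the id list from many documents instead:

```sh
curl 'localhost:9200/tweets/_search' -d '{
  "query": {
    "filtered": {
      "filter": {
        "terms": {
          "user": {
            "index": "users",
            "type": "user",
            "id": "1",
            "path": "follows"
          }
        }
      }
    }
  }
}'
```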
Hi
I am new to elasticsearch and am trying out the attachment plugin. I'm a
bit confused on how to handle the meta-data from the attachments.
I have created a simple mapping as example. I explicitly store the 'title'
field, other fields are by default not stored.
PUT /test/file/_mapping
{
*Jun Ohtani*
Thanks. This helped. Now my conscience is clear =)
Hi David,
I have done the following steps you suggested. The exact string search is working
now.
But when I'm trying the below query for string matching, it's giving a null
result.
Maybe this is very basic and I'm doing something wrong. I'm a week old on
elasticsearch and trying to understand the
Hi All,
I've released version 1.1.0 of Elasticsearch Image Plugin.
The Image Plugin is a Content-Based Image Retrieval Plugin for
Elasticsearch using LIRE (Lucene Image Retrieval). It allows users to index
images and search for similar images.
Changes in 1.1.0:
- Added limit in image
Dear all,
In order to overwrite some index settings I created a custom template:
{
  "logstash2" : {
    "order" : 1,
    "template" : "logstash-*",
    "settings" : { "index.number_of_replicas" : 0 }
  }
}
Template is placed in /etc/elasticsearch/config/templates/logstash2.json
[root@logstash elasticsearch]# ls
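As an alternative to dropping a file into config/templates, the same template can be registered through the API (a sketch):

```sh
curl -XPUT 'localhost:9200/_template/logstash2' -d '{
  "order": 1,
  "template": "logstash-*",
  "settings": { "index.number_of_replicas": 0 }
}'
```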
Hello,
I have been using elasticsearch on an Ubuntu server for a year now, and
everything was going great. I had an index of 150,000,000 entries of domain
names, running small queries on it, just filtering by 1 term, no sorting, no
wildcard, nothing. Now we moved servers; I have now a CentOS 6
You wrote, the OOM killer killed the ES process. With 32g (and the swap
size), the process must be very big, much more than you configured. Can you
give more info about the live size of the process, after ~2 hours? Are
there more application processes on the box?
Jörg
On Thu, Mar 13, 2014 at
Hello Jörg
Thanks for the reply, our swap size is 2g. I don't know at what % the
process is being killed, as the first time it happened I wasn't around, and
then I never let it happen again as the website is online. After 2 hours
of running, the memory usage is going up to 60%; I am restarting
Now the process went back down to 25% usage, from now on it will go back
up, and won't stop going up.
Sorry for spamming
- - - - - - - - - -
Sincerely:
Hicham Mallah
Software Developer
mallah.hic...@gmail.com
00961 700 49 600
On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah
If you plan to do this frequently then go with the raw field. It'll be
faster.
If you want to fool around without changing any mappings then use a script
filter to get the field from the _source. It isn't efficient at all. I'd
suggest guarding it with a more efficient filter. Like so:
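The elided example presumably looked something like this (a sketch; the field names and values are made up): an and filter where a cheap term filter runs before the expensive script filter:

```sh
curl 'localhost:9200/myindex/_search' -d '{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          { "term": { "status": "active" } },
          { "script": { "script": "_source.raw_field == wanted",
                        "params": { "wanted": "some value" } } }
        ]
      }
    }
  }
}'
```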
After restarting nodes I'm also getting a bunch of errors for calls to the
index stats API *after* the node has come back up. Seems like there's some
issue here where stats API calls fail, do not time out, and cause a
backup of other calls until a thread pool is full?
Mar 13 12:22:31
I originally thought that using multi-fields would require manually mapping
the entire data model, and thought that keyword_repeat offers an
alternative not requiring mapping changes. After your comments and peeking at
the KeywordRepeatFilter src I see I was wrong on both. Thanks for your help!
On Mar 13,
Missed your last email. Ignore my suggestion and use the raw field :)
On Thu, Mar 13, 2014 at 8:50 AM, Nikolas Everett nik9...@gmail.com wrote:
If you plan to do this frequently then go with the raw field. It'll be
faster.
If you want to fool around without changing any mappings then use a
Total shot in the dark here but try taking the hashmark out of the node
names and see if that helps?
On Thursday, March 13, 2014 5:31:30 AM UTC-4, Guillaume Loetscher wrote:
Sure
Node # 1:
root@es_node1:~# grep -E '^[^#]' /etc/elasticsearch/elasticsearch.yml
cluster.name: logstash
Hi David,
I have done the following steps you suggested. The string search is working now.
But for the filter I have to always pass strings in lowercase, whereas for
the query text search I can give the proper string sequence inserted in the doc.
The query is shown below.
Maybe this is very basic and I'm doing
Hello Zachary,
Thank you for your quick and working responses!
I've previously tried with the double array method and it didn't work; I must
have missed something at that time.
And thanks also for the object method, didn't know :)
Have a good day all,
Erdal.
On Wednesday, March 12, 2014 20:05:11
Hi
I am having an elasticsearch cluster. I am using unicast to discover
nodes. Can I add nodes to the list dynamically without restarting the cluster?
I tried to do this with prepareUpdateSettings but I got "ignoring
transient setting [discovery.zen.ping.unicast.hosts], not dynamically
No problem, glad to help! The syntax is definitely kinda gross, I'll try
to write some docs on it soon to help others.
Let me know if you run into any more problems, and feel free to open a
ticket at the Elasticsearch-PHP github repo, I keep a closer eye on tickets
than the mailing list :)
Yes. Just launch the new node and set its unicast values to the other running nodes.
It will connect to the cluster and the cluster will add it as a new node.
You don't have to modify existing settings, although you should, to have
updated settings in case of a restart.
--
David ;-)
Twitter :
The message field has been analyzed using the standard analyzer. It means that
your message content has been indexed in lowercase.
A term filter does not analyze your query, so
DEBUG is not the same as debug.
If you want to find your term in the inverted index, you have to either analyze
your query (matchQuery
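The difference, roughly (a sketch; index name and value are made up):

```sh
# match analyzes the query text, so DEBUG is lowercased and matches:
curl 'localhost:9200/logs/_search' -d '{
  "query": { "match": { "message": "DEBUG" } }
}'

# a term filter does not analyze, so the value must already be lowercase:
curl 'localhost:9200/logs/_search' -d '{
  "query": { "filtered": { "filter": { "term": { "message": "debug" } } } }
}'
```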
Can you gist up the output of these two commands?
curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"
Those are my first-stop APIs for determining where memory is being
allocated.
By the way, these settings don't do anything anymore (they were deprecated
yes
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 13 March 2014 at 15:00:10, Hari Prasad (iamhari1...@gmail.com) wrote:
Is this the case even if discovery.zen.ping.multicast.enabled is false?
On Thursday, 13 March 2014 19:17:19 UTC+5:30, David
Is this the case even if discovery.zen.ping.multicast.enabled is false?
On Thursday, 13 March 2014 19:17:19 UTC+5:30, David Pilato wrote:
Yes. Just launch the new node and set its unicast values to other running
nodes.
It will connect to the cluster and the cluster will add him as a new
Hi,
Do you know if it is possible to use Select2 autocomplete library with
elasticsearch indexes? It looks great and it would be nice to use
elasticsearch instead of real system objects for this autocomplete.
Take a look at Select2 library home page:
http://ivaynberg.github.io/select2/
Regards,
Hello Zachary,
Thanks for your reply and the pointer to the settings.
Here is the output of the commands you requested:
curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"
https://gist.github.com/codebird/9529114
- - - - - - - - - -
Sincerely:
Hicham
Ok Thank you :)
On Thursday, 13 March 2014 19:35:52 UTC+5:30, David Pilato wrote:
yes
--
*David Pilato* | *Technical Advocate* | *Elasticsearch.com*
@dadoonet https://twitter.com/dadoonet |
@elasticsearchfr https://twitter.com/elasticsearchfr
On 13 March 2014 at 15:00:10, Hari Prasad
@Xiao Yu: nope, it's not working either.
@Clinton Gormley: Yes, just after the "no matching id" error, a telnet
from Node 1 to Node 2 is possible, and I got a valid connection.
All, please remember that after such an issue, if I manually stop the service on
Node 2, then restart it, it will manage to
On Monday, July 15, 2013 12:23:37 PM UTC+2, Jörg Prante wrote:
I'm not sure how this error type listing can help you. You must catch
every exception on the bulk response. If you encounter one, you should
stop indexing no matter what happened.
I can see how a list of all the error types can
Jörg, the issue is that after the JVM gives back memory to the OS, usage
starts going up again, and it never gives back memory till it's killed;
currently memory usage is up to 66% and still going up. HEAP size is currently
set to 8gb, which is 1/4 the amount of memory I have. I tried it at 16, 12, now
at 8
Hi.
that would be my assumption as well. By the way, I am getting this same
warning that you are getting. Very similar scenario (2 nodes in a cluster
- all works fine when everything is running. Warning appears on client if
one of the nodes is taken down).
I am using v. 0.90 - not sure if
Enter
ip addr show
or
ifconfig
and check if MULTICAST is configured on the interface.
Jörg
On Thu, Mar 13, 2014 at 4:29 PM, Guillaume Loetscher sterfi...@gmail.comwrote:
Definitely a multicast problem.
I've decided to switch to unicast, and I managed to shut down any node
(elected
One more thing - I notice that functionally, the client is still able to
communicate with the remaining active node, so I guess this warning is just
a warning. There must be some background thread that periodically looks for
the missing node, while the main Client instance can still communicate with
I am having trouble finding how to install the above plugin. I installed
Elasticsearch with Debian.
Typically on my local Linux machine I run bin/plugin; I am not sure
where bin/plugin goes with the Debian installation?
Thanks
Great, I am interested in trying this.
On Thursday, March 13, 2014 7:09:38 AM UTC-4, Kevin Wang wrote:
Hi All,
I've released version 1.1.0 of Elasticsearch Image Plugin.
The Image Plugin is a Content-Based Image Retrieval Plugin for
Elasticsearch using LIRE (Lucene Image Retrieval). It
Hi Martin,
For your situation, I suggest:
1. Disable HTTP and run your ES node only on the internal network.
2. Build a middleware layer to provide a RESTful service that others can
communicate with.
3. Your middleware will use the client API (e.g. the Java client) to work
with the ES cluster and return results back.
Hi all,
Does Marvel work at all on 0.90.1 - even in a degraded fashion?
I know that its minimum is 0.90.9. I have a possible performance-related
failure to chase down where Marvel might be very useful in finding it.
However, I don't want to change the conditions of the problem until we
On my local machine, I do this: bin/plugin -install ...
With the Debian installation, I am not sure where the bin/plugin script is?
Anyone know?
I have a function in which I call docFieldLongs. In unit testing I need to
create and override this function and also return ScriptDocValues.Longs.
@Test
public void testRunAsLongs() throws Exception
{
script = new MaxiScoreScript(params) {
@Override
Hi Edward,
Sadly, Marvel is incompatible with 0.90.1 and will disable itself upon
startup.
Cheers,
Boaz
On Thursday, March 13, 2014 5:18:32 PM UTC+1, Edward Sargisson wrote:
Hi all,
Does Marvel work at all on 0.90.1 - even in a degraded fashion?
I know that its minimum is 0.90.9. I have
Hello again,
setting bootstrap.mlockall to true seems to have made memory usage grow more
slowly, so instead of elasticsearch being killed after ~2 hours it will
be killed after ~3 hours. What I find weird is: why did the process release
memory back to the OS once but never does it again? And why
Hi, Kevin:
The Create Index reference
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html#mappings)
says mappings can be included in index settings JSON. Are you saying that's
not supported by the Java client? (Fwiw, I am seeing the same - just wanted
to
If I start elasticsearch from the bin folder not using the wrapper, I get
these exceptions after about 2 mins:
Exception in thread elasticsearch[Adam X][generic][T#5]
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.init(BytesStore.java:62)
at
There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try start ES nodes with
index:
codec:
bloom:
load: false
Bloom cache does not seem to fit perfectly into the diagnostics as you
described, that is just from the exception you sent.
Jörg
My instincts say this is not the proper use of aggregation but I want to
check w/ people who have actually used it. We want to bucket on a very high
cardinality field and return **ALL** buckets (no size limit). For example,
imagine documents representing people and their parents:
person -
It should be in /usr/share/elasticsearch/bin/
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
On 13 March 2014 at 17:19:49, ZenMaster80 (sabdall...@gmail.com) wrote:
On my local machine, I do this: bin/plugin -install ...
With debian installation, I am
Thanks - I figured it out as soon as I posted.
I found this explained the directory structure well.
https://gist.github.com/mystix/5460660
On Thursday, March 13, 2014 1:48:07 PM UTC-4, David Pilato wrote:
It should be in /usr/share/elasticsearch/bin/
--
*David Pilato* | *Technical
For sorting, elasticsearch lets me specify how I want to deal with fields
that contain multiple numeric values, so I can have elasticsearch use e.g.
the max value in each document.
Is there a similar option I can use when aggregating documents? For
example, I might want to get the average of
Thanks guys.
I've made some changes to my bulk indexing. I'm now kicking off java bulk
loaders with 8 threads apiece on 3 of our 11 servers. This initially did
not help, so I went in and checked the hot_threads in ElasticHQ.
Virtually all CPU was being allocated to building
It worked fine after manually setting all the environment variables.
I would say this though.
Server a : ES out of box works from home dir.
Server b : ES out of box neither works from /usr/lib/ nor does it work from
home dir. Only way is to manually set env parameters.
Both servers are
Yes you'd need to store the content_type to get it back. The _source field
in your case is actually nothing more than the base64 of your raw input
document at the time you indexed it.
You won't see your template in the list API, but if you create a new index
named logstash-whatever, it should take effect properly unless it is
overridden by another template with a higher order.
Can you do a hot_threads while this is happening?
curl localhost:9200/_nodes/hot_threads
prepareIndex() is to index a document. What you want is prepareCreate(). I
have an example here (check the method createIndexFullMapping()):
https://github.com/bly2k/es-java-examples/blob/master/admin/IndexAdminExample.java
As you are new to all this, I'm wondering what you would like to achieve here,
or why you think this is important for your use case.
I meant that, by default, elasticsearch does all that reroute thing for you
if a node is added or removed, so you don't need to take care of that.
To answer,
Yep, that's what I used (see my prior post)
On Thu, Mar 13, 2014 at 1:08 PM, Binh Ly binhly...@yahoo.com wrote:
prepareIndex() is to index a document. What you want is prepareCreate(). I
have an example here (check the method createIndexFullMapping()):
:) I was confused. I meant to reply to OP. Thanks!
You have 2 timestamp fields: @timestamp, and timestamp. Looks like the
timestamp field is the one that cannot be parsed. I see this value in the
first doc: "timestamp":"Mar 13 12:15:39". You either need to format this
properly from the LS side, or use the right date format on the ES side.
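On the ES side, that would mean giving the field an explicit date format in the mapping (a sketch; index and type names are made up, and a syslog-style Joda pattern is assumed):

```sh
curl -XPUT 'localhost:9200/logstash-2014.03.13/logs/_mapping' -d '{
  "logs": {
    "properties": {
      "timestamp": { "type": "date", "format": "MMM dd HH:mm:ss" }
    }
  }
}'
```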
I have a 4-box ES cluster installed; the version I am using is 0.90.10, but it
fails once a week. What I am getting is a 50X error in kibana. When I
check the logs, one of the nodes is stuck. It is fine after a restart. The
memories are fine for them. What else can I check?
On usual days I am seeing this log all the time in only one box:
@4000532216d90c452aac [2014-03-13
16:36:31,205][DEBUG][action.admin.cluster.stats] [Lloigoroth] failed to
execute on node [Mo2-u0RSQT6qqbMjW1CWag]
@4000532216d90c45327c
Hi, I have a few weblogs in a hive table that I'd like to visualize in kibana.
ES is on the same node as the hive server.
Followed directions from this page:
http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/hive.html
I can create a table using the ES storage handler, but when I tried
Added index.codec.bloom.load: false to the elasticsearch.yml, doesn't seem
to have changed anything.
It is at 63% after 2 hours and a half up time.
Watching stuff on Bigdesk everything seems to be normal:
Memory:
Committed: 7.8gb
Used: 4.5gb
The used is going up and down normally, so heap is
would one of you wise sages offer up the magical incantations for working
with ES source in intellij? specifically, what are the configuration steps
from cloning the github repo through to debugging a running ES instance? i
have had no luck with either following the README, random mailing list
fwiw, I fixed my issue below by using prepareCreate().setSource() - rather
than .setSettings() with idx config in the following format:
{ "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "analysis" : {
        "analyzer" : {
          "lowercase_keyword" : {
From the fsriver doc:
curl -XPUT 'localhost:9200/_river/mydocs/_meta' -d '{
  "type": "fs",
  "fs": {
    "url": "/tmp",
    "update_rate": 90,
    "includes": "*.doc,*.pdf",
    "excludes": "resume"
  }
}'
How does this translate to the Python API?
Thanks,
Kent
Hi,
I am currently having trouble with fairly slow and intensive queries
causing excessive load on my elasticsearch cluster and I would like to know
people's opinions on ways to mitigate or prevent that excessive load.
We attempt about 50 of these slow queries per second, and they take an
Hello Kent,
you can always access the raw transport and send any request you wish
for the unsupported APIs:
from elasticsearch import Elasticsearch
es = Elasticsearch()
data, status = es.transport.perform_request('PUT', '/_river/',
body={'type': 'fs',})
Hope this helps,
Honza Kral
On Thu,
It doesn't look like the elasticsearch-py API covers the river use case.
When I've run into things like this I've always just run a manual curl
request, or if I need to do it from within a script I just do a basic
command with requests, à la
Adding a mutate on these messages on the LS side to drop the timestamp
field did the trick. This is sort of puzzling though since that field is a
stock LS field and worked in a similar case.
Eg.
Mar 12 16:54:14 worked
Mar 13 12:59:39 failed
Thanks,
-Chris
On Thu, Mar 13, 2014 at 1:33 PM,
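The mutate Chris describes would look roughly like this in the Logstash config (a sketch; remove_field assumes Logstash 1.4-era syntax):

```
filter {
  mutate {
    remove_field => [ "timestamp" ]
  }
}
```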
Hi,
I was struggling to load a json document into ES from Hive. Later realised
that there was a mistake in the documentation
(http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/hive.html#CO17-1):
CREATE EXTERNAL TABLE json (data STRING)
STORED BY
I have a simple query
insert into table eslogs select * from eslogs_ext;
Hive and elasticsearch are running on the same host.
To execute the script I'm following the directions from the link.
http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/hive.html
There are two
How much heap, what java version, how big are your indexes?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 14 March 2014 11:11, Jos Kraaijeveld m...@kaidence.org wrote:
I forgot to mention, I'm running
I believe you are just witnessing the OS caching files in memory. Lucene
(and therefore by extension Elasticsearch) uses a large number of files to
represent segments. TTL + updates will cause even higher file turnover
than usual.
The OS manages all of this caching and will reclaim it for
@Mark:
The heap is set to 2GB, using mlockall. The problem occurs with both
OpenJDK7 and OracleJDK7, both the latest versions. I have one index, which
is very small:
"index": {
  "primary_size_in_bytes": 37710681,
  "size_in_bytes": 37710681
}
@Zachary Our systems are set up to alert when memory is about
Cool, curious to see what happens. As an aside, I would recommend
downgrading to Java 1.7.0_u25. There are known bugs in the most recent
Oracle JVM versions which have not been resolved yet. u25 is the most
recent safe version. I don't think that's your problem, but it's a good
general
Also, are there other processes running which may be causing the problem?
Does the behavior only happen when ES is running?
On Thursday, March 13, 2014 8:31:18 PM UTC-4, Zachary Tong wrote:
Cool, curious to see what happens. As an aside, I would recommend
downgrading to Java 1.7.0_u25.
Hey,
I have a parent/child relationship between Item and Player.
{
  "item": {
    "_parent": {
      "type": "player"
    },
    "_routing": {
      "required": true,
      "path": "account"
    },
    "properties": {
Something to add:
When I index an item, I reference its parent with its id, not its account
name. Is that part of the problem? Can I use the account to set the item's
parent when indexing it? And if so, how would elasticsearch know that I'm
using this field?
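For reference, the parent is normally supplied at index time through the parent URL parameter, and routing then follows the parent automatically (a sketch; index name and ids are made up):

```sh
curl -XPUT 'localhost:9200/game/item/item1?parent=player1' -d '{
  "name": "sword"
}'
```

Whether the _routing path on account can stand in for that is exactly the question here.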
On Thursday, March 13, 2014
On 3/13/14, 1:37 AM, Dunaeth wrote:
I tried to clear all caches to see if it could help but the fielddata
breaker estimated size is still skyrocketing... If it's not cache
issue, and it's linked with our data inserts, I can only think about
insert process or percolation queries. Any idea ?
Hi Echin,
Since the problem node's IP is defined in the client ES connection via the Java
API, I guess the client will still try to connect to this node. So, there
are such warnings.
It should be fine for the client to keep working with the cluster. However, in
my case, the java client is not reachable