Hieu: The stack trace you shared is from a search request with an mvel
script in a function_score query. I don't think this is causing
threads to get stuck; it just pops up here because, in general, a query
with scripts takes more CPU time. You can easily verify this if you stop
search
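For reference, the kind of request under discussion looks roughly like this (a minimal sketch; the index name and the popularity field are made up for illustration):

curl -XPOST 'localhost:9200/myindex/_search' -d '{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "script_score": {
        "script": "_score * doc[\"popularity\"].value",
        "lang": "mvel"
      }
    }
  }
}'

Every document matched by the inner query runs the script, which is why script-heavy queries tend to dominate CPU samples.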
Thanks for the response, Martijn! We'll consider upgrading, but it'd be
great to root cause the issue.
On Tuesday, September 9, 2014 12:03:57 AM UTC-7, Martijn v Groningen wrote:
Patrick: I have never seen this, but this means the OpenJDK on FreeBSD
doesn't support CPU sampling of threads.
We have seen a similar issue in our cluster (CPU usage and search time
crept up for the master node over a period of one day, until
we restarted). Is there an easy way to confirm that it's indeed the same
issue mentioned here?
Below is the output of the hot threads API on this node:
Hi,
Epic fail:
[2014-09-05 22:14:00,043][DEBUG][action.admin.cluster.node.hotthreads]
[Bloodsport] failed to execute on node [fLwUGA_eSfmLApJ7SyIncA]
org.elasticsearch.ElasticsearchException: failed to detect hot threads
at
FYI, this turned out to be a real bug. A fix has been committed and will be
included in the next release.
On Wednesday, August 27, 2014 11:36:03 AM UTC+2, Martin Forssen wrote:
I did report it https://github.com/elasticsearch/elasticsearch/issues/7478
Thank you!
On 29 August 2014, at 08:49, Martin Forssen m...@recordedfuture.com wrote:
FYI, this turned out to be a real bug. A fix has been committed and will be
included in the next release.
On Wednesday, August 27, 2014 11:36:03 AM UTC+2, Martin Forssen wrote:
I did report it
Hi Patrick,
Did you see the same stuck thread via jstack or the hot threads API that
Martin reported? This can only happen if scan search was enabled (by
setting search_type=scan in a search request).
If that isn't the case then something else may be stuck.
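For context, a scan search is started roughly like this (sketch; the index name is hypothetical):

curl 'localhost:9200/myindex/_search?search_type=scan&scroll=1m' -d '{ "query": { "match_all": {} } }'

Subsequent batches are then fetched by posting the returned _scroll_id to /_search/scroll?scroll=1m; if no request was ever made with search_type=scan, that code path is never exercised.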
Martijn
On 29 August 2014 09:58,
Hi Patrick,
If this problem happens again, then you should execute the hot threads API:
curl localhost:9200/_nodes/hot_threads
Documentation:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html#cluster-nodes-hot-threads
Just pick a node in your
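The endpoint also accepts a few useful parameters, e.g. (the values here are only examples; <node_id> is a placeholder):

curl 'localhost:9200/_nodes/<node_id>/hot_threads?threads=3&interval=750ms&type=cpu'

threads controls how many of the busiest threads are reported, interval sets the sampling window, and type selects whether cpu, wait, or block time is sampled.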
I see the same problem. We are running 1.1.1 on a 13-node cluster (3 master
and 5+5 data). I see stuck threads on most of the data nodes; I had a look
around on one of them. Running top in thread mode shows:
top - 08:08:20 up 62 days, 18:49, 1 user, load average: 9.18, 13.21, 12.67
Threads: 528 total,
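To map one of those busy threads to a Java stack, the usual recipe is something like this (assuming Linux, a single ES process on the box, and a JDK with jstack on the PATH; 12345 is a placeholder thread id):

top -H -p $(pgrep -f org.elasticsearch)   # per-thread view of the ES process
printf '%x\n' 12345                       # convert the busy thread id to hex -> 3039
jstack $(pgrep -f org.elasticsearch) | grep -A 20 'nid=0x3039'

top -H shows per-thread PIDs; converting the busy thread's PID to hex and grepping the jstack output for the matching nid gives its stack.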
It'd be worth raising this as an issue on GitHub if you are concerned; at
least then the ES devs will see it :)
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 27 August 2014 18:34, Martin Forssen
I did report it https://github.com/elasticsearch/elasticsearch/issues/7478
Hello,
I've been running an ELK install for a few months now, and a few weeks ago I
noticed a strange behavior: ES had some kind of stuck thread consuming 20-70%
of a CPU core. It remained unnoticed for days. Then I restarted ES, and
everything came back to normal, until it started again two weeks later.
(You should really set Xms and Xmx to be the same.)
But it's not faulty; it's probably just GC, which should be visible in the
logs. How much data do you have in your cluster?
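For what it's worth, in the 1.x packages the easiest way to pin both values is the ES_HEAP_SIZE variable read by bin/elasticsearch.in.sh (4g below is only an example), and long GC pauses are logged by the JVM monitor:

ES_HEAP_SIZE=4g                                   # expands to -Xms4g -Xmx4g
grep 'monitor.jvm' /var/log/elasticsearch/*.log   # GC warnings, if any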
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web:
On 25 August 2014, at 13:51, Mark Walkom ma...@campaignmonitor.com wrote:
(You should really set Xms and Xmx to be the same.)
Ok, I'll do this next time I restart.
But it's not faulty; it's probably just GC, which should be visible in the
logs. How much data do you have in your cluster?