long GC pauses but only on one host in the cluster

2014-02-25 Thread T Vinod Gupta
I'm seeing this consistently happen on only one host in my cluster; the other
hosts don't have this problem. What could be the reason, and what's the
remedy?

I'm running ES on an EC2 m1.xlarge host with 16GB of RAM on the machine, and I
allocate 8GB to ES.

e.g.
[2014-02-25 09:14:38,726][WARN ][monitor.jvm  ] [Lunatica]
[gc][ParNew][1188745][942327] duration [48.3s], collections [1]/[1.1m],
total [48.3s]/[1d], memory [7.9gb]->[6.9gb]/[7.9gb], all_pools {[Code
Cache] [14.5mb]->[14.5mb]/[48mb]}{[Par Eden Space]
[15.7mb]->[14.7mb]/[66.5mb]}{[Par Survivor Space]
[8.3mb]->[0b]/[8.3mb]}{[CMS Old Gen] [7.8gb]->[6.9gb]/[7.9gb]}{[CMS Perm
Gen] [46.8mb]->[46.8mb]/[168mb]}
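
For comparison, a rough sketch like the following (assuming ES 1.x and the node stats API on localhost:9200, so purely illustrative) pulls each node's heap usage and cumulative GC totals; collector names vary by ES/JVM version, so it simply prints whatever the node reports:

# Sketch: compare per-node JVM heap usage and cumulative GC totals via the node
# stats API. Assumes ES 1.x reachable at http://localhost:9200; adjust as needed.
import json
import urllib.request

resp = urllib.request.urlopen("http://localhost:9200/_nodes/stats/jvm")
stats = json.loads(resp.read().decode("utf-8"))

for node_id, node in stats["nodes"].items():
    jvm = node["jvm"]
    heap_used_mb = jvm["mem"]["heap_used_in_bytes"] // (1024 * 1024)
    heap_max_mb = jvm["mem"].get("heap_max_in_bytes", 0) // (1024 * 1024)
    print(f"{node.get('name', node_id)}: heap {heap_used_mb}mb / {heap_max_mb}mb")
    # Print whatever collectors this node reports (e.g. ParNew/CMS or young/old)
    for coll_name, coll in jvm["gc"]["collectors"].items():
        print(f"  {coll_name}: {coll['collection_count']} collections, "
              f"{coll['collection_time_in_millis']} ms total")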


thanks



Re: long GC pauses but only on one host in the cluster

2014-02-25 Thread Mark Walkom
It depends on a lot of things: Java version, ES version, document size and count,
index size and count, number of nodes.
Also, what are you monitoring the cluster with?
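
If it helps to compare those basics across nodes, a rough sketch along these lines (assuming the ES 1.x nodes info API on localhost:9200, so an illustration rather than anything from your setup) lists each node's ES and JVM version:

# Sketch: list the ES version and JVM version reported by every node, to spot a
# mismatch on the problematic host. Assumes ES 1.x reachable at http://localhost:9200.
import json
import urllib.request

resp = urllib.request.urlopen("http://localhost:9200/_nodes")
info = json.loads(resp.read().decode("utf-8"))

for node_id, node in info["nodes"].items():
    es_version = node.get("version", "unknown")
    jvm_version = node.get("jvm", {}).get("version", "unknown")
    print(f"{node.get('name', node_id)}: ES {es_version}, JVM {jvm_version}")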

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com




Re: long GC pauses but only on one host in the cluster

2014-02-25 Thread joergpra...@gmail.com
Is this node showing more activity than the others? What kind of workload is
this, indexing or search? Are caches used, for filters/facets?

Full GC runs caused by the CMS Old Gen filling up may be a sign that you are
close to the memory limit and need to add nodes, but it could also mean a number
of other things.
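
To see whether field data or the filter cache is what is filling the old generation on that node, a rough sketch along these lines (again assuming ES 1.x node stats on localhost:9200, so the field names are an assumption) could help:

# Sketch: per-node fielddata and filter cache memory, to see what occupies the heap.
# Assumes ES 1.x node stats at http://localhost:9200; field names may differ on 0.90.
import json
import urllib.request

resp = urllib.request.urlopen("http://localhost:9200/_nodes/stats/indices")
stats = json.loads(resp.read().decode("utf-8"))

for node_id, node in stats["nodes"].items():
    indices = node.get("indices", {})
    fielddata_mb = indices.get("fielddata", {}).get("memory_size_in_bytes", 0) // (1024 * 1024)
    filter_mb = indices.get("filter_cache", {}).get("memory_size_in_bytes", 0) // (1024 * 1024)
    print(f"{node.get('name', node_id)}: fielddata {fielddata_mb}mb, filter cache {filter_mb}mb")

If fielddata dominates, capping it with the indices.fielddata.cache.size setting in elasticsearch.yml (available in recent 0.90.x/1.x releases) is one option besides adding nodes.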

Jörg
