Re: Growing old-gen size

2015-03-23 Thread Mark Walkom
There is a whole bunch more stats you can get from the various APIs; see them
listed here: https://www.elastic.co/search?q=stats
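
For example (assuming a node reachable on localhost:9200):

curl 'localhost:9200/_nodes/stats?pretty'
curl 'localhost:9200/_cluster/stats?pretty'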

On 24 March 2015 at 02:52, mjdude5 wrote:

> Thanks, do you know if there's more memory metric reporting that I'm
> missing? I'd like to figure out what's growing the fastest/largest.
> Fielddata, I think, should show up in the 'fm' column of the node stats. I'm
> mostly curious about what I'm missing when adding up the memory requirements.
> From the node stats I have so far, fielddata and segment memory are the two
> dominant components, but they don't add up to more than 50% of the max heap.
>
> I know there are some things missing from the stats like metadata memory
> usage, but I'm assuming those are smaller components.
>
> On Sunday, March 22, 2015 at 6:42:29 PM UTC-4, Mark Walkom wrote:
>>
>> Sounds like you're just utilising your cluster to its capacity.
>>
>> If you are seeing GC causing nodes to drop out, you probably want to
>> consider either moving to doc values, reducing your dataset, or adding more
>> nodes/heap.
>>
>> On 21 March 2015 at 07:25, mjdude5 wrote:
>>
>>> Hello, I'm trying to better understand our ES memory usage in relation
>>> to our workload.  Right now 2 of our nodes have above-average heap usage.
>>> Looking at _stats, their old gen is large, ~6gb.  Here's the _cat output
>>> I've been trying to make sense of:
>>>
>>> _cat/nodes?v&h=host,v,j,hm,fm,fcm,sm,siwm,svmm,sc,pm,im,fce,fe,hp
>>>
>>> host   v     j         hm     fm       fcm     sm     siwm     svmm   sc     pm   im  fce  fe  hp
>>> host1  1.3.4 1.7.0_71  7.9gb  1.2gb    34.3mb  2.8gb  1.3mb    7.4kb  13144  -1b  0b  0    0   82
>>> host2  1.3.4 1.7.0_71  7.9gb  888.2mb  20.3mb  1.9gb  0b       0b     8962   -1b  0b  0    0   67
>>> host3  1.3.4 1.7.0_71  7.9gb  1.1gb    29mb    2.5gb  0b       0b     11070  -1b  0b  0    0   70
>>> host4  1.3.4 1.7.0_71  7.9gb  845.2mb  21.6mb  1.8gb  179.8kb  448b   8024   -1b  0b  0    0   55
>>> host5  1.3.4 1.7.0_71  7.9gb  1.3gb    40.7mb  2.8gb  0b       0b     12615  -1b  0b  0    0   83
>>>
>>> When host1 and host5 do a GC, it looks like they only drop ~5-10%, so they
>>> bump against the 75% mark again very soon afterwards.  Their old gen stays
>>> relatively big; host5 currently has ~6gb of old gen.
>>>
>>> Last week we had an incident where a node started having long GC times
>>> and then eventually dropped out of the cluster, so that's the fear.  It
>>> didn't seem like the GC was making any progress; it wasn't actually
>>> reducing heap usage.
>>>
>>> There must be something using heap that isn't reflected in this _cat
>>> output.  The collection_count for the old gen is increasing, but
>>> used_in_bytes isn't decreasing significantly.  Is that expected?
>>>
>>> thanks for any tips!
>>>


Re: Growing old-gen size

2015-03-23 Thread mjdude5
Thanks, do you know if there's more memory metric reporting that I'm
missing? I'd like to figure out what's growing the fastest/largest.
Fielddata, I think, should show up in the 'fm' column of the node stats. I'm
mostly curious about what I'm missing when adding up the memory requirements.
From the node stats I have so far, fielddata and segment memory are the two
dominant components, but they don't add up to more than 50% of the max heap.

I know there are some things missing from the stats like metadata memory 
usage, but I'm assuming those are smaller components.
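
One way to cross-check the components (endpoint names per the 1.x docs;
localhost:9200 assumed):

# per-field fielddata usage on every data node
curl 'localhost:9200/_cat/fielddata?v'

# fielddata, filter cache, id cache, segments and percolate memory per node
curl 'localhost:9200/_nodes/stats/indices?pretty'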

On Sunday, March 22, 2015 at 6:42:29 PM UTC-4, Mark Walkom wrote:
>
> Sounds like you're just utilising your cluster to its capacity.
>
> If you are seeing GC causing nodes to drop out, you probably want to
> consider either moving to doc values, reducing your dataset, or adding more
> nodes/heap.
>
> On 21 March 2015 at 07:25, mjdude5 wrote:
>
>> Hello, I'm trying to better understand our ES memory usage in relation
>> to our workload.  Right now 2 of our nodes have above-average heap usage.
>> Looking at _stats, their old gen is large, ~6gb.  Here's the _cat output
>> I've been trying to make sense of:
>>
>> _cat/nodes?v&h=host,v,j,hm,fm,fcm,sm,siwm,svmm,sc,pm,im,fce,fe,hp
>>
>> host   v     j         hm     fm       fcm     sm     siwm     svmm   sc     pm   im  fce  fe  hp
>> host1  1.3.4 1.7.0_71  7.9gb  1.2gb    34.3mb  2.8gb  1.3mb    7.4kb  13144  -1b  0b  0    0   82
>> host2  1.3.4 1.7.0_71  7.9gb  888.2mb  20.3mb  1.9gb  0b       0b     8962   -1b  0b  0    0   67
>> host3  1.3.4 1.7.0_71  7.9gb  1.1gb    29mb    2.5gb  0b       0b     11070  -1b  0b  0    0   70
>> host4  1.3.4 1.7.0_71  7.9gb  845.2mb  21.6mb  1.8gb  179.8kb  448b   8024   -1b  0b  0    0   55
>> host5  1.3.4 1.7.0_71  7.9gb  1.3gb    40.7mb  2.8gb  0b       0b     12615  -1b  0b  0    0   83
>>
>> When host1 and host5 do a GC, it looks like they only drop ~5-10%, so they
>> bump against the 75% mark again very soon afterwards.  Their old gen stays
>> relatively big; host5 currently has ~6gb of old gen.
>>
>> Last week we had an incident where a node started having long GC times
>> and then eventually dropped out of the cluster, so that's the fear.  It
>> didn't seem like the GC was making any progress; it wasn't actually
>> reducing heap usage.
>>
>> There must be something using heap that isn't reflected in this _cat
>> output.  The collection_count for the old gen is increasing, but
>> used_in_bytes isn't decreasing significantly.  Is that expected?
>>
>> thanks for any tips!
>>


Re: Growing old-gen size

2015-03-22 Thread Mark Walkom
Sounds like you're just utilising your cluster to its capacity.

If you are seeing GC causing nodes to drop out, you probably want to
consider either moving to doc values, reducing your dataset, or adding more
nodes/heap.
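
A minimal sketch of enabling doc values on a 1.x mapping (index, type and
field names here are hypothetical; the fielddata format has to be set when
the field is first created, so existing indices need a reindex):

curl -XPUT 'localhost:9200/myindex' -d '{
  "mappings": {
    "logs": {
      "properties": {
        "user": {
          "type": "string",
          "index": "not_analyzed",
          "fielddata": { "format": "doc_values" }
        }
      }
    }
  }
}'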

On 21 March 2015 at 07:25, mjdude5 wrote:

> Hello, I'm trying to better understand our ES memory usage in relation
> to our workload.  Right now 2 of our nodes have above-average heap usage.
> Looking at _stats, their old gen is large, ~6gb.  Here's the _cat output
> I've been trying to make sense of:
>
> _cat/nodes?v&h=host,v,j,hm,fm,fcm,sm,siwm,svmm,sc,pm,im,fce,fe,hp
>
> host   v     j         hm     fm       fcm     sm     siwm     svmm   sc     pm   im  fce  fe  hp
> host1  1.3.4 1.7.0_71  7.9gb  1.2gb    34.3mb  2.8gb  1.3mb    7.4kb  13144  -1b  0b  0    0   82
> host2  1.3.4 1.7.0_71  7.9gb  888.2mb  20.3mb  1.9gb  0b       0b     8962   -1b  0b  0    0   67
> host3  1.3.4 1.7.0_71  7.9gb  1.1gb    29mb    2.5gb  0b       0b     11070  -1b  0b  0    0   70
> host4  1.3.4 1.7.0_71  7.9gb  845.2mb  21.6mb  1.8gb  179.8kb  448b   8024   -1b  0b  0    0   55
> host5  1.3.4 1.7.0_71  7.9gb  1.3gb    40.7mb  2.8gb  0b       0b     12615  -1b  0b  0    0   83
>
> When host1 and host5 do a GC, it looks like they only drop ~5-10%, so they
> bump against the 75% mark again very soon afterwards.  Their old gen stays
> relatively big; host5 currently has ~6gb of old gen.
>
> Last week we had an incident where a node started having long GC times and
> then eventually dropped out of the cluster, so that's the fear.  It didn't
> seem like the GC was making any progress; it wasn't actually reducing heap
> usage.
>
> There must be something using heap that isn't reflected in this _cat
> output.  The collection_count for the old gen is increasing, but
> used_in_bytes isn't decreasing significantly.  Is that expected?
>
> thanks for any tips!
>


Growing old-gen size

2015-03-20 Thread mjdude5
Hello, I'm trying to better understand our ES memory usage in relation to
our workload.  Right now 2 of our nodes have above-average heap usage.
Looking at _stats, their old gen is large, ~6gb.  Here's the _cat output
I've been trying to make sense of:

_cat/nodes?v&h=host,v,j,hm,fm,fcm,sm,siwm,svmm,sc,pm,im,fce,fe,hp

host   v     j         hm     fm       fcm     sm     siwm     svmm   sc     pm   im  fce  fe  hp
host1  1.3.4 1.7.0_71  7.9gb  1.2gb    34.3mb  2.8gb  1.3mb    7.4kb  13144  -1b  0b  0    0   82
host2  1.3.4 1.7.0_71  7.9gb  888.2mb  20.3mb  1.9gb  0b       0b     8962   -1b  0b  0    0   67
host3  1.3.4 1.7.0_71  7.9gb  1.1gb    29mb    2.5gb  0b       0b     11070  -1b  0b  0    0   70
host4  1.3.4 1.7.0_71  7.9gb  845.2mb  21.6mb  1.8gb  179.8kb  448b   8024   -1b  0b  0    0   55
host5  1.3.4 1.7.0_71  7.9gb  1.3gb    40.7mb  2.8gb  0b       0b     12615  -1b  0b  0    0   83
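
(For anyone decoding the columns: the cat APIs accept ?help to list the full
names, e.g. curl 'localhost:9200/_cat/nodes?help'; hm is heap.max, fm
fielddata.memory_size, fcm filter_cache.memory_size, sm segments.memory,
hp heap.percent.)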

When host1 and host5 do a GC, it looks like they only drop ~5-10%, so they
bump against the 75% mark again very soon afterwards.  Their old gen stays
relatively big; host5 currently has ~6gb of old gen.

Last week we had an incident where a node started having long GC times and
then eventually dropped out of the cluster, so that's the fear.  It didn't
seem like the GC was making any progress; it wasn't actually reducing heap
usage.

There must be something using heap that isn't reflected in this _cat
output.  The collection_count for the old gen is increasing, but
used_in_bytes isn't decreasing significantly.  Is that expected?
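
The raw old-gen numbers come from the node stats JVM section (host and port
assumed):

curl 'localhost:9200/_nodes/stats/jvm?pretty'

which reports jvm.mem.pools.old.used_in_bytes alongside
jvm.gc.collectors.old.collection_count for each node.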

thanks for any tips!
