Re: Measuring JVM memory for containers

2017-08-08 Thread 'Allen Reese' via mechanical-sympathy
I've found that setting -Xms equal to -Xmx is best, as it avoids heap resizes.
I suggest running your app with NMT enabled and looking at the output:
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html#enable_nmt
-XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics
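
If you prefer not to wait for the exit-time dump from -XX:+PrintNMTStatistics, the
same summary can also be pulled from a running JVM with jcmd (the pid is a
placeholder):

jcmd <pid> VM.native_memory summary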


There are several things you can trip over when running a JVM under cgroups or
Docker (I'm missing a couple I can't think of off the top of my head):

1. Heap size -Xmx
2. DirectMemory allocations (DirectByteBuffer) -XX:MaxDirectMemorySize
3. Code cache
4. Metaspace -XX:MaxMetaspaceSize
5. Unsafe allocations
6. JNI allocations
7. Thread stack size -Xss

Most of these you can manage via flags, but there are a few you can't that I
don't recall off hand. And as [2] shows, once you add all the flags it gets tedious.

Luckily, 8u121b34 (8u131 for those of you without an Oracle support contract)
has a useful option [1] that can help with this:
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

8u121b34 and later will also automatically pick up the number of CPUs the
container specifies without any flags, but this doesn't take CPU sharing into
account.
This configures the max memory the JVM sees to be what Docker has specified.
(Unfortunately it only supports Docker-style cgroup memory limits, and not
other cgroup setups or cgroups v2.)
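
As a quick sanity check (my own sketch, not from the original post; the 512m
limit, image name, and grep pattern are just illustrative), you can confirm the
derived default heap inside a memory-limited container with -XX:+PrintFlagsFinal:

docker run -m 512m --rm openjdk:8 java -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -version | grep MaxHeapSize
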
See http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-July/027464.html  
for what looks like a very promising JEP on this front.
The one downside I didn't point out is that the UseCGroupMemoryLimitForHeap 
flag doesn't control Unsafe allocations or JNI allocations, so you will need to 
handle approximations for those yourself.  :)
Hope that helps a little.
--Allen Reese


[2] is the output of a trivial (empty) example program run with many of the flags above.

[1]:
http://www.oracle.com/technetwork/java/javaseproducts/documentation/8u121-revision-builds-relnotes-3450732.html
8170888 hotspot runtime [linux] Experimental support for cgroup memory limits in container (i.e. Docker) environments
6515172 hotspot runtime Runtime.availableProcessors() ignores Linux taskset command
8161993 hotspot gc G1 crashes if active_processor_count changes during startup

[2]:
[areese@refusesbruises ]$ java -XX:MaxDirectMemorySize=1m -Xms256m -Xmx256m -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics -cp . test

Native Memory Tracking:

Total: reserved=1600200KB, committed=301232KB
-                 Java Heap (reserved=262144KB, committed=262144KB)
                            (mmap: reserved=262144KB, committed=262144KB)

-                     Class (reserved=1059947KB, committed=8043KB)
                            (classes #391)
                            (malloc=3179KB #129)
                            (mmap: reserved=1056768KB, committed=4864KB)

-                    Thread (reserved=10323KB, committed=10323KB)
                            (thread #10)
                            (stack: reserved=10280KB, committed=10280KB)
                            (malloc=32KB #54)
                            (arena=12KB #20)

-                      Code (reserved=249631KB, committed=2567KB)
                            (malloc=31KB #296)
                            (mmap: reserved=249600KB, committed=2536KB)

-                        GC (reserved=13049KB, committed=13049KB)
                            (malloc=3465KB #111)
                            (mmap: reserved=9584KB, committed=9584KB)

-                  Compiler (reserved=132KB, committed=132KB)
                            (malloc=1KB #21)
                            (arena=131KB #3)

-                  Internal (reserved=3277KB, committed=3277KB)
                            (malloc=3245KB #1278)
                            (mmap: reserved=32KB, committed=32KB)

-                    Symbol (reserved=1356KB, committed=1356KB)
                            (malloc=900KB #64)
                            (arena=456KB #1)

-    Native Memory Tracking (reserved=34KB, committed=34KB)
                            (malloc=3KB #32)
                            (tracking overhead=32KB)

-               Arena Chunk (reserved=305KB, committed=305KB)
                            (malloc=305KB)

[areese@refusesbruises ]$

  From: Sebastian Łaskawiec 
 To: mechanical-sympathy  
 Sent: Friday, August 4, 2017 6:38 AM
 Subject: Re: Measuring JVM memory for containers
   
I think you're right Tom. Here is a good snippet from the "Java Performance" 
book [1].

I'll experiment with this a little bit further but it looks promising. 
Thanks for the hint and link!
[1] 
https://books.google.pl/books?id=aIhUAwAAQBAJ&printsec=frontcover&dq=Java+Performance:+The+Definitive+Guide:+Getting+the+Most+Out+of+Your+Code&hl=en&sa=X&redir_esc=y#v=onepage&q='-XX%3AMaxRam'%20java&f=false

On Friday, 4 August 2017 09:44:32 UTC+2, Tom Lee wrote:
Neat, didn't know about MaxRAM or native memory tracking.
RE:

The downside is that with MaxRAM parameter I lose control over Xms.

Re: Measuring JVM memory for containers

2017-08-04 Thread areese via mechanical-sympathy


On Friday, August 4, 2017 at 10:08:15 AM UTC-7, Gil Tene wrote:
>
> Note that -XX:+UnlockExperimentalVMOptions 
> -XX:+UseCGroupMemoryLimitForHeap is simply an alternative to setting Xmx: 
> "...When these two JVM command line options are used, and -Xmx is not 
> specified, the JVM will look at the Linux cgroup configuration, which is 
> what Docker containers use for setting memory limits, to transparently size 
> a maximum Java heap size.". 
>
> However, the flags have no meaning when you set Xmx explicitly, and 
> virtually all applications that do any sizing actually set their Xmx.
>
> Also, the flag has no effect on non-Heap memory consumption.
>
Right, I thought it was setting OS::available_memory, but 
nope: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/5f1d1df0ea49
 

> The main benefit of -XX:+UseCGroupMemoryLimitForHeap in the context of 
> containers tends to be not the "main" application JVMs, but all the (often 
> short lived) little utility things people run without specifying -Xmx 
> (start/stop/admin command things that are java-based, javac, monitoring 
> things, etc.). Since HotSpot will default to an Xmx that is 1/4 of the 
> (host) system memory size, this can create some surprises in container 
> environments. The effect is somewhat dampened by the fact that most of 
> these things start with an initial heap that is much smaller (1/64th of host 
> system memory), so those surprises happen less often, as the JVMs for short 
> running things often don't expand to use the full Xmx. Still, even 1/64th 
> of system memory can become a problem as container environments run many 
> 10s and maybe 100s of JVMs on commodity machines with 100s of GB of memory. 
> In addition to setting a default Xmx that depends on container limits, 
> -XX:+UseCGroupMemoryLimitForHeap will [i think/hope] also set a default 
> -Xms (at 1/64 of the container limit) which can help. 
>
> My view is that -XX:+UseCGroupMemoryLimitForHeap is *currently* a 
> "somewhat meaningless" flag, and will remain so until it is on by default 
> (at which point it will have a true beneficial effect, so let's hope it gets 
> out of the experimental phase soon). The logic for this claim is simple: any 
> application for which someone would have the forethought to explicitly 
> add -XX:+UseCGroupMemoryLimitForHeap would likely already have an -Xmx 
> setting. Or stated differently: chances are that any java command NOT 
> specifying -Xmx will NOT have -XX:+UseCGroupMemoryLimitForHeap flag set, 
> unless it is the default.
>
>
The way I tripped over this was that we do builds in Docker containers, 
and convincing every execution of the JVM within the build system to set 
-Xmx is seemingly futile.
That's why, for our build system, we force the flag on via a shell script.

That's also why I point out the JEP: this flag is an OK stopgap measure 
for when you don't have control over the java flags being passed.



 

> On Friday, August 4, 2017 at 9:17:09 AM UTC-7, are...@yahoo-inc.com wrote:
>>
>> In addition to what Gil says, which is excellent, here's what I've found. 
>>  (I tried to reply via email, but that got eaten, so apologies if you get 
>> double posts. :()
>>
>> I've found that -Xms=-Xmx  is best as it avoids resizes.
>>
>> I suggest running your app with NMT enabled, and looking at the output:
>>
>> https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html#enable_nmt
>>  -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions 
>> -XX:+PrintNMTStatistics
>>
>>
>> There are several things you can trip over when running a jvm under 
>> cgroups or docker: 
>> (I'm missing a couple I can't think of off the top of my head).
>>
>> 1. Heapsize -Xmx
>> 2. DirectMemory allocations (DirectByteBuffer)  -XX:MaxDirectMemorySize
>> 3. Code cache
>> 4. Metaspace -XX:MaxMetaspaceSize
>> 5. Unsafe allocations.
>> 6. JNI allocations
>> 7. Thread stack size -Xss
>>
>> Most of these you can manage via flags, but there are a few you can't 
>> that I don't recall directly off hand.
>> And see [2], for all the flags, it gets tedious.
>>
>>
>> Luckily, 8u121b34  (8u131 for those of you without an Oracle support 
>> contract)  has a useful option [1] that can help with this.
>> -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
>>
>> 8u121b34 and later will automatically set the # of cpus specified without 
>> any flags, but this doesn't take sharing into account.
>>
>> This configures the max memory the jvm sees to be what docker has 
>> specified.  (Unfortunately only supports docker, and not other cgroups 
>> styles or cgroups v2).
>> See 
>> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-July/027464.html  
>> for what looks like a very promising JEP on this front.
>>
>> The one downside I didn't point out is that the 
>> UseCGroupMemoryLimitForHeap flag doesn't control Unsafe Allocations or JNI 
>> allocations, so you will need to handle approximations for that yourself. 
>>  :)
>>
>> Hope that helps a little.
>>
>> --Allen Reese

Re: Measuring JVM memory for containers

2017-08-04 Thread Gil Tene
Note that -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap 
is simply an alternative to setting Xmx: "...When these two JVM command 
line options are used, and -Xmx is not specified, the JVM will look at the 
Linux cgroup configuration, which is what Docker containers use for setting 
memory limits, to transparently size a maximum Java heap size.". 

However, the flags have no meaning when you set Xmx explicitly, and 
virtually all applications that do any sizing actually set their Xmx.

Also, the flag has no effect on non-Heap memory consumption.

The main benefit of -XX:+UseCGroupMemoryLimitForHeap in the context of 
containers tends to be not the "main" application JVMs, but all the (often 
short lived) little utility things people run without specifying -Xmx 
(start/stop/admin command things that are java-based, javac, monitoring 
things, etc.). Since HotSpot will default to an Xmx that is 1/4 of the 
(host) system memory size, this can create some surprises in container 
environments. The effect is somewhat dampened by the fact that most of 
these things start with an initial heap that is much smaller (1/64th of host 
system memory), so those surprises happen less often, as the JVMs for short 
running things often don't expand to use the full Xmx. Still, even 1/64th 
of system memory can become a problem as container environments run many 
10s and maybe 100s of JVMs on commodity machines with 100s of GB of memory. 
In addition to setting a default Xmx that depends on container limits, 
-XX:+UseCGroupMemoryLimitForHeap will [i think/hope] also set a default 
-Xms (at 1/64 of the container limit) which can help. 

My view is that -XX:+UseCGroupMemoryLimitForHeap is *currently* a "somewhat 
meaningless" flag, and will remain so until it is on by default (at which 
point it will have a true beneficial effect, so let's hope it gets out of the 
experimental phase soon). The logic for this claim is simple: any 
application for which someone would have the forethought to explicitly 
add -XX:+UseCGroupMemoryLimitForHeap would likely already have an -Xmx 
setting. Or stated differently: chances are that any java command NOT 
specifying -Xmx will NOT have -XX:+UseCGroupMemoryLimitForHeap flag set, 
unless it is the default.

On Friday, August 4, 2017 at 9:17:09 AM UTC-7, are...@yahoo-inc.com wrote:
>
> In addition to what Gil says, which is excellent, here's what I've found. 
>  (I tried to reply via email, but that got eaten, so apologies if you get 
> double posts. :()
>
> I've found that -Xms=-Xmx  is best as it avoids resizes.
>
> I suggest running your app with NMT enabled, and looking at the output:
>
> https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html#enable_nmt
>  -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions 
> -XX:+PrintNMTStatistics
>
>
> There are several things you can trip over when running a jvm under 
> cgroups or docker: 
> (I'm missing a couple I can't think of off the top of my head).
>
> 1. Heapsize -Xmx
> 2. DirectMemory allocations (DirectByteBuffer)  -XX:MaxDirectMemorySize
> 3. Code cache
> 4. Metaspace -XX:MaxMetaspaceSize
> 5. Unsafe allocations.
> 6. JNI allocations
> 7. Thread stack size -Xss
>
> Most of these you can manage via flags, but there are a few you can't that 
> I don't recall directly off hand.
> And see [2], for all the flags, it gets tedious.
>
>
> Luckily, 8u121b34  (8u131 for those of you without an Oracle support 
> contract)  has a useful option [1] that can help with this.
> -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
>
> 8u121b34 and later will automatically set the # of cpus specified without 
> any flags, but this doesn't take sharing into account.
>
> This configures the max memory the jvm sees to be what docker has 
> specified.  (Unfortunately only supports docker, and not other cgroups 
> styles or cgroups v2).
> See 
> http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-July/027464.html  
> for what looks like a very promising JEP on this front.
>
> The one downside I didn't point out is that the 
> UseCGroupMemoryLimitForHeap flag doesn't control Unsafe Allocations or JNI 
> allocations, so you will need to handle approximations for that yourself. 
>  :)
>
> Hope that helps a little.
>
> --Allen Reese
>
>
>
> [2] is an empty example program pointing out many of the flags and output.
>
>
> [1]:  
> http://www.oracle.com/technetwork/java/javaseproducts/documentation/8u121-revision-builds-relnotes-3450732.html
> 8170888 hotspot runtime [linux] Experimental support for cgroup memory 
> limits in container (ie Docker) environments
> 6515172 hotspot runtime Runtime.availableProcessors() ignores Linux 
> taskset command
> 8161993 hotspot gc G1 crashes if active_processor_count changes during 
> startup
>
> [2]:
> [areese@refusesbruises ]$ java -XX:MaxDirectMemorySize=1m -Xms256m 
>  -Xmx256m -XX:NativeMemoryTracking=summary  -XX:+UnlockDiagnosticVMOptions 
> -XX:+PrintNMTStatistics -cp . test
>
> Native

Re: Measuring JVM memory for containers

2017-08-04 Thread areese via mechanical-sympathy
In addition to what Gil says, which is excellent, here's what I've found. 
 (I tried to reply via email, but that got eaten, so apologies if you get 
double posts. :()

I've found that -Xms=-Xmx  is best as it avoids resizes.

I suggest running your app with NMT enabled, and looking at the output:
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html#enable_nmt
 -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions 
-XX:+PrintNMTStatistics


There are several things you can trip over when running a jvm under cgroups 
or docker: 
(I'm missing a couple I can't think of off the top of my head).

1. Heapsize -Xmx
2. DirectMemory allocations (DirectByteBuffer)  -XX:MaxDirectMemorySize
3. Code cache
4. Metaspace -XX:MaxMetaspaceSize
5. Unsafe allocations.
6. JNI allocations
7. Thread stack size -Xss

Most of these you can manage via flags, but there are a few you can't that 
I don't recall off hand.
And as [2] shows, once you add all the flags it gets tedious.


Luckily, 8u121b34  (8u131 for those of you without an Oracle support 
contract)  has a useful option [1] that can help with this.
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

8u121b34 and later will also automatically pick up the number of CPUs the 
container specifies without any flags, but this doesn't take CPU sharing into account.

This configures the max memory the JVM sees to be what Docker has 
specified. (Unfortunately it only supports Docker-style cgroup memory limits, 
and not other cgroup setups or cgroups v2.)
See http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-July/027464.html  
for what looks like a very promising JEP on this front.

The one downside I didn't point out is that the UseCGroupMemoryLimitForHeap 
flag doesn't control Unsafe allocations or JNI allocations, so you will 
need to handle approximations for those yourself.  :)

Hope that helps a little.

--Allen Reese



[2] is the output of a trivial (empty) example program run with many of the flags above.


[1]:  
http://www.oracle.com/technetwork/java/javaseproducts/documentation/8u121-revision-builds-relnotes-3450732.html
8170888 hotspot runtime [linux] Experimental support for cgroup memory 
limits in container (ie Docker) environments
6515172 hotspot runtime Runtime.availableProcessors() ignores Linux taskset 
command
8161993 hotspot gc G1 crashes if active_processor_count changes during 
startup

[2]:
[areese@refusesbruises ]$ java -XX:MaxDirectMemorySize=1m -Xms256m 
 -Xmx256m -XX:NativeMemoryTracking=summary  -XX:+UnlockDiagnosticVMOptions 
-XX:+PrintNMTStatistics -cp . test

Native Memory Tracking:

Total: reserved=1600200KB, committed=301232KB
- Java Heap (reserved=262144KB, committed=262144KB)
(mmap: reserved=262144KB, committed=262144KB) 
 
- Class (reserved=1059947KB, committed=8043KB)
(classes #391)
(malloc=3179KB #129) 
(mmap: reserved=1056768KB, committed=4864KB) 
 
-Thread (reserved=10323KB, committed=10323KB)
(thread #10)
(stack: reserved=10280KB, committed=10280KB)
(malloc=32KB #54) 
(arena=12KB #20)
 
-  Code (reserved=249631KB, committed=2567KB)
(malloc=31KB #296) 
(mmap: reserved=249600KB, committed=2536KB) 
 
-GC (reserved=13049KB, committed=13049KB)
(malloc=3465KB #111) 
(mmap: reserved=9584KB, committed=9584KB) 
 
-  Compiler (reserved=132KB, committed=132KB)
(malloc=1KB #21) 
(arena=131KB #3)
 
-  Internal (reserved=3277KB, committed=3277KB)
(malloc=3245KB #1278) 
(mmap: reserved=32KB, committed=32KB) 
 
-Symbol (reserved=1356KB, committed=1356KB)
(malloc=900KB #64) 
(arena=456KB #1)
 
-Native Memory Tracking (reserved=34KB, committed=34KB)
(malloc=3KB #32) 
(tracking overhead=32KB)
 
-   Arena Chunk (reserved=305KB, committed=305KB)
(malloc=305KB) 
 
[areese@refusesbruise ]$ 


On Friday, August 4, 2017 at 8:58:43 AM UTC-7, Gil Tene wrote:
>
> Yes, you can do a lot of estimation, but measuring the actual memory usage 
> against the actual limit you expect to be enforced is the main thing you 
> have to do... You'll want to make sure the things you measure have actually 
> been exercised well before you take the measurement, and you *always* want 
> to pad that measured memory use since you may easily miss some use that can 
> happen in the future.
>
> In doing memory use measurement, it's important to understand that 
> reported memory use usually

Re: Measuring JVM memory for containers

2017-08-04 Thread Gil Tene
Yes, you can do a lot of estimation, but measuring the actual memory usage 
against the actual limit you expect to be enforced is the main thing you 
have to do... You'll want to make sure the things you measure have actually 
been exercised well before you take the measurement, and you *always* want 
to pad that measured memory use since you may easily miss some use that can 
happen in the future.

In doing memory use measurement, it's important to understand that reported 
memory use usually only accounts for physical memory that was actually 
touched by the program, because Linux does on-demand physical page 
allocation at page modification time, NOT at mapping or allocation time. 
And on-demand physical page allocation only occurs when the contents of the 
page have been modified for the first time. Even though it seems like most of 
the memory is "allocated" when you start, actual memory use will grow over 
time as you exercise behaviors that actually make use of the memory you 
allocated. The parts of the heap, code cache, metaspace, etc. that were 
allocated but not yet used or exercised by your program will NOT show up 
in memory.usage_in_bytes. The same goes for any off-heap memory your 
process may be using (e.g. using DirectByteBuffers): allocation does not 
show up as usage until the first (modifying) touch.
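
To see this first-touch behavior directly, here is a minimal sketch of my own
(not from Gil's post; Linux-only, Java 8-style access to sun.misc.Unsafe). The
off-heap region barely shows up in RSS until each page is written:

import java.lang.reflect.Field;
import java.nio.file.Files;
import java.nio.file.Paths;
import sun.misc.Unsafe;

public class FirstTouchDemo {
    // Prints the VmRSS line from /proc/self/status (Linux only).
    static void printRss(String label) throws Exception {
        Files.lines(Paths.get("/proc/self/status"))
             .filter(l -> l.startsWith("VmRSS"))
             .forEach(l -> System.out.println(label + ": " + l));
    }

    public static void main(String[] args) throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 256L * 1024 * 1024;               // 256 MB off-heap
        long addr = unsafe.allocateMemory(size);      // mapped, but untouched
        printRss("after allocateMemory");             // RSS barely moves

        for (long i = 0; i < size; i += 4096) {
            unsafe.putByte(addr + i, (byte) 1);       // first touch dirties each page
        }
        printRss("after touching pages");             // RSS is now ~256 MB higher

        unsafe.freeMemory(addr);
    }
}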

Here are some things that can help make your observed memory use reflect 
eventual memory use:

Making sure the heap is "real" is simple:
- Set -Xms to be equal to -Xmx (to avoid starting lower and expanding later)
- Use -XX:+AlwaysPreTouch to make sure all heap pages were actually touched 
(which would force physical memory to actually be allocated, which will 
make them show up in the used balance).
With these two settings, you will ensure that all heap pages (for all of 
Xmx) will actually be physically allocated.
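
For example (the heap size here is just a placeholder, not a recommendation),
the heap portion of the footprint becomes fully resident at startup with:

java -Xms2g -Xmx2g -XX:+AlwaysPreTouch ...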

For the non-heap parts of memory it gets more complicated. And HotSpot has 
TONS of that sort of memory (eventual RSS that goes far above Xmx), so 
ignoring it or going for some "10-20% slop" rule will usually come back to 
bite you hours into a run. Significant memory HotSpot manages outside of 
the Xmx-sized heap includes various GC things that only get populated as a 
result of actually being exercised (card tables, G1 remembered set support 
structures), Metaspace that only gets touched when actually filled with 
classes, code cache memory that only gets touched when actually filled with 
JIT'ed code, etc. As a result, you really want to make sure your application 
gets some "exercise" before you measure the process memory footprint. 

- Run load for a while (make sure your code has gone through its paces, 
JITs have done their job, etc.). My rule of thumb for "a while" means at 
least 10 minutes of actual load and at least 100K operations (of whatever 
operations you actually do). Don't settle for or try to draw conclusions 
from silly 10-20 seconds or 1000 op micro-tests.

- Make sure to have experienced several oldgen collections with whatever 
collector you are using. This can be more challenging than you think, 
because most GC tuning for HotSpot is focused on delaying this stuff as 
much as possible, making some of the cannot-be-avoided behaviors only occur 
hours into normal runs under load. To combat this, you can use tools that 
can be added as agents and intentionally exercise the collector with your 
actual application running. E.g. my HeapFragger 
(https://github.com/giltene/HeapFragger) has this exact purpose: to aid 
application testers in inducing inevitable-but-rare garbage collection 
events without having to wait days for them to happen. It can be capped to 
use a tiny fraction of the heap and a configurable amount of CPU (set by 
controlling its allocation rate), and can generally put the collector 
through its paces in a matter of minutes, including forcing oldgen 
compaction in CMS, and making G1 deal with remembered sets. For G1 at 
least, this exercise can make a HUGE difference in the observed process 
memory footprint, folks have seen remembered set footprint grow to as big 
as 50% of Xmx after being exercised (and I think it can go beyond that in 
theory).

And remember that even after you've convinced yourself that all has been 
exercised, you should pad the observed result by some safety factor of slop.

On Friday, August 4, 2017 at 12:18:47 AM UTC-7, Sebastian Łaskawiec wrote:
>
> Thanks a lot for all the hints! They helped me a lot.
>
> I think I'm moving forward. The key thing was to calculate the amount of 
> occupied memory seen by CGroups. It can be easily done using:
>
>- /sys/fs/cgroup/memory/memory.usage_in_bytes
>- /sys/fs/cgroup/memory/memory.limit_in_bytes
>
> Calculated ratio along with Native Memory Tracking [1] helped me to find a 
> good balance. I also found a shortcut which makes setting initial 
> parameters much easier: -XX:MaxRAM [2] (and set it based on CGroups limit). 
> The downside is that with MaxRAM parameter I lose control over Xms.

Re: Measuring JVM memory for containers

2017-08-04 Thread Sebastian Łaskawiec
I think you're right Tom. Here is a good snippet from the "Java 
Performance" book [1].

I'll experiment with this a little bit further but it looks promising. 

Thanks for the hint and link!

[1] 
https://books.google.pl/books?id=aIhUAwAAQBAJ&printsec=frontcover&dq=Java+Performance:+The+Definitive+Guide:+Getting+the+Most+Out+of+Your+Code&hl=en&sa=X&redir_esc=y#v=onepage&q='-XX%3AMaxRam'%20java&f=false

On Friday, 4 August 2017 09:44:32 UTC+2, Tom Lee wrote:
>
> Neat, didn't know about MaxRAM or native memory tracking.
>
> RE:
>
> The downside is that with MaxRAM parameter I lose control over Xms.
>
>
> Oh, it doesn't work? Can't track down definitive info from a quick Google 
> around, but this seems to imply it should: 
> https://stackoverflow.com/questions/19712446/how-does-java-7-decide-on-the-max-value-of-heap-memory-allocated-xmx-on-osx
>  
> ... It's a few years old, but this comment sticks out from the OpenJDK 
> copy/paste in the StackOverflow answer:
>
>   // If the initial_heap_size has not been set with InitialHeapSize
>>   // or -Xms, then set it as fraction of the size of physical memory,
>>   // respecting the maximum and minimum sizes of the heap.
>
>
> Seems to imply InitialHeapSize/Xms gets precedence. Perhaps that 
> information is out of date / incorrect ... a look at more recent OpenJDK 
> source code might offer some hints.
>
> If Xms isn't an option for some reason, is 
> InitialRAMFraction/MaxRAMFraction available? Maybe something else to look 
> at. In any case, thanks for the info!
>
> On Fri, Aug 4, 2017 at 12:18 AM, Sebastian Łaskawiec <
> sebastian...@gmail.com > wrote:
>
>> Thanks a lot for all the hints! They helped me a lot.
>>
>> I think I'm moving forward. The key thing was to calculate the amount of 
>> occupied memory seen by CGroups. It can be easily done using:
>>
>>- /sys/fs/cgroup/memory/memory.usage_in_bytes
>>- /sys/fs/cgroup/memory/memory.limit_in_bytes
>>
>> Calculated ratio along with Native Memory Tracking [1] helped me to find 
>> a good balance. I also found a shortcut which makes setting initial 
>> parameters much easier: -XX:MaxRAM [2] (and set it based on CGroups limit). 
>> The downside is that with MaxRAM parameter I lose control over Xms.
>>
>> [1] 
>> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
>> [2] https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/
>>
>> On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:
>>>
>>> Hey Sebastian,
>>>
>>> Dealt with a similar issues on Docker a few years back -- safest way to 
>>> do it is to use some sort of heuristic for your maximum JVM process size. 
>>> Working from a very poor memory and perhaps somebody here will tell me this 
>>> is a bad idea for perfectly good reasons, but iirc the ham-fisted heuristic 
>>> we used at the time for max total JVM process size was something like:
>>>
>>> [Xmx] + [MaxDirectMemorySize] + slop
>>>
>>> Easy enough to see these values via -XX:+PrintFlagsFinal if they're not 
>>> explicitly defined by your apps. We typically had Xmx somewhere between 
>>> 8-12GB, but MaxDirectMemorySize varied greatly from app to app. Sometimes a 
>>> few hundred MB, in some weird cases it was multiples of the JVM heap size.
>>>
>>> The "slop" was for things we hadn't accounted for, but we really should 
>>> have included things like the code cache size etc. as Meg's estimate above 
>>> does. I think we used ~10% of the JVM heap size, which was probably 
>>> slightly wasteful, but worked well enough for us. Suggest you take the 
>>> above heuristic and mix it up with Meg's idea to include code cache size 
>>> etc. & feel your way from there. I'd personally always leave at least a few 
>>> hundred megs additional overhead on top of my "hard" numbers because I 
>>> don't trust myself with such things. :)
>>>
>>> Let's see, what else. At the time our JVM -- think this was an Oracle 
>>> Java 8 JDK -- set MaxDirectMemorySize to the value of Xmx by default, 
>>> implying the JVM process *could* (but not necessarily *would*) grow up 
>>> to roughly double its configured size to accommodate heap + direct buffers 
>>> if you had an application that made heavy use of direct buffers and put 
>>> enough pressure on the heap to grow it to the configured Xmx value (or as 
>>> we typically did, set Xmx == Xms).
>>>
>>> Where possible we would constrain MaxDirectMemorySize to something 
>>> "real" rather than leaving it to this default, preferring to have the JVM 
>>> throw up an OOME if we were allocating more direct memory than we expected 
>>> so we could get more info about the failure rather than worrying about the 
>>> OOM killer hard kill the entire process & not being able to understand why. 
>>> YMMV.
>>>
>>> One caveat: I can't quite remember 
>>> if Unsafe.allocateMemory()/Unsafe.freeMemory() count toward your 
>>> MaxDirectMemorySize ... perhaps somebody else here more familiar with the 
>>> JVM internals could weigh in on that. Perhaps another thing to watch out 
>>> for if you're doing "interesting" things with the JVM.

Re: Measuring JVM memory for containers

2017-08-04 Thread Tom Lee
Neat, didn't know about MaxRAM or native memory tracking.

RE:

The downside is that with MaxRAM parameter I lose control over Xms.


Oh, it doesn't work? Can't track down definitive info from a quick Google
around, but this seems to imply it should:
https://stackoverflow.com/questions/19712446/how-does-java-7-decide-on-the-max-value-of-heap-memory-allocated-xmx-on-osx
... It's a few years old, but this comment sticks out from the OpenJDK
copy/paste in the StackOverflow answer:

  // If the initial_heap_size has not been set with InitialHeapSize
>   // or -Xms, then set it as fraction of the size of physical memory,
>   // respecting the maximum and minimum sizes of the heap.


Seems to imply InitialHeapSize/Xms gets precedence. Perhaps that
information is out of date / incorrect ... a look at more recent OpenJDK
source code might offer some hints.

If Xms isn't an option for some reason, is
InitialRAMFraction/MaxRAMFraction available? Maybe something else to look
at. In any case, thanks for the info!
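
For what it's worth, one way to see how MaxRAM and the *RAMFraction flags
resolve into the actual heap bounds (the 1g value is just an illustration):

java -XX:MaxRAM=1g -XX:+PrintFlagsFinal -version | \
    grep -E 'MaxRAM|RAMFraction|InitialHeapSize|MaxHeapSize'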

On Fri, Aug 4, 2017 at 12:18 AM, Sebastian Łaskawiec <
sebastian.laskaw...@gmail.com> wrote:

> Thanks a lot for all the hints! They helped me a lot.
>
> I think I'm moving forward. The key thing was to calculate the amount of
> occupied memory seen by CGroups. It can be easily done using:
>
>- /sys/fs/cgroup/memory/memory.usage_in_bytes
>- /sys/fs/cgroup/memory/memory.limit_in_bytes
>
> Calculated ratio along with Native Memory Tracking [1] helped me to find a
> good balance. I also found a shortcut which makes setting initial
> parameters much easier: -XX:MaxRAM [2] (and set it based on CGroups limit).
> The downside is that with MaxRAM parameter I lose control over Xms.
>
> [1] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/
> tooldescr007.html
> [2] https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/
>
> On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:
>>
>> Hey Sebastian,
>>
>> Dealt with a similar issues on Docker a few years back -- safest way to
>> do it is to use some sort of heuristic for your maximum JVM process size.
>> Working from a very poor memory and perhaps somebody here will tell me this
>> is a bad idea for perfectly good reasons, but iirc the ham-fisted heuristic
>> we used at the time for max total JVM process size was something like:
>>
>> [Xmx] + [MaxDirectMemorySize] + slop
>>
>> Easy enough to see these values via -XX:+PrintFlagsFinal if they're not
>> explicitly defined by your apps. We typically had Xmx somewhere between
>> 8-12GB, but MaxDirectMemorySize varied greatly from app to app. Sometimes a
>> few hundred MB, in some weird cases it was multiples of the JVM heap size.
>>
>> The "slop" was for things we hadn't accounted for, but we really should
>> have included things like the code cache size etc. as Meg's estimate above
>> does. I think we used ~10% of the JVM heap size, which was probably
>> slightly wasteful, but worked well enough for us. Suggest you take the
>> above heuristic and mix it up with Meg's idea to include code cache size
>> etc. & feel your way from there. I'd personally always leave at least a few
>> hundred megs additional overhead on top of my "hard" numbers because I
>> don't trust myself with such things. :)
>>
>> Let's see, what else. At the time our JVM -- think this was an Oracle
>> Java 8 JDK -- set MaxDirectMemorySize to the value of Xmx by default,
>> implying the JVM process *could* (but not necessarily *would*) grow up
>> to roughly double its configured size to accommodate heap + direct buffers
>> if you had an application that made heavy use of direct buffers and put
>> enough pressure on the heap to grow it to the configured Xmx value (or as
>> we typically did, set Xmx == Xms).
>>
>> Where possible we would constrain MaxDirectMemorySize to something "real"
>> rather than leaving it to this default, preferring to have the JVM throw up
>> an OOME if we were allocating more direct memory than we expected so we
>> could get more info about the failure rather than worrying about the OOM
>> killer hard kill the entire process & not being able to understand why.
>> YMMV.
>>
>> One caveat: I can't quite remember if 
>> Unsafe.allocateMemory()/Unsafe.freeMemory()
>> count toward your MaxDirectMemorySize ... perhaps somebody else here more
>> familiar with the JVM internals could weigh in on that. Perhaps another
>> thing to watch out for if you're doing "interesting" things with the JVM.
>>
>> I found this sort of "informed guess" to be much more reliable than
>> trying to figure things out empirically by monitoring processes over time
>> etc. ... anyway, hope that helps, curious to know what you ultimately end
>> up with.
>>
>> Cheers,
>> Tom
>>
>> On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura  wrote:
>>
>>> Hi Sebastian,
>>>
>>> Our product runs within the JVM, within a (Hadoop) YARN container.
>>> Similar to your situation, YARN will kill the container if it goes over the
>>> amount of memory reserved for the container.

Re: Measuring JVM memory for containers

2017-08-04 Thread Sebastian Łaskawiec
Thanks a lot for all the hints! They helped me a lot.

I think I'm moving forward. The key thing was to calculate the amount of 
occupied memory seen by CGroups. It can be easily done using:

   - /sys/fs/cgroup/memory/memory.usage_in_bytes
   - /sys/fs/cgroup/memory/memory.limit_in_bytes
   
The calculated ratio, along with Native Memory Tracking [1], helped me to find a 
good balance. I also found a shortcut which makes setting the initial 
parameters much easier: -XX:MaxRAM [2] (set based on the CGroups limit). 
The downside is that with the MaxRAM parameter I lose control over Xms.

[1] 
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
[2] https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/
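
A minimal sketch of that calculation (my own, assuming the cgroups v1 paths
listed above; an unconstrained container reports an enormous limit_in_bytes,
which the code below treats as "unlimited"):

import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupMemory {
    private static long readLong(String path) throws Exception {
        return Long.parseLong(Files.readAllLines(Paths.get(path)).get(0).trim());
    }

    public static void main(String[] args) throws Exception {
        long usage = readLong("/sys/fs/cgroup/memory/memory.usage_in_bytes");
        long limit = readLong("/sys/fs/cgroup/memory/memory.limit_in_bytes");

        boolean limited = limit < (1L << 60);  // "no limit" shows up as a huge number
        System.out.printf("usage = %d MB, limit = %s%n",
                usage >> 20, limited ? (limit >> 20) + " MB" : "unlimited");
        if (limited) {
            System.out.printf("used %.1f%% of the container limit%n",
                    100.0 * usage / limit);
        }
    }
}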

On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:
>
> Hey Sebastian,
>
> Dealt with a similar issues on Docker a few years back -- safest way to do 
> it is to use some sort of heuristic for your maximum JVM process size. 
> Working from a very poor memory and perhaps somebody here will tell me this 
> is a bad idea for perfectly good reasons, but iirc the ham-fisted heuristic 
> we used at the time for max total JVM process size was something like:
>
> [Xmx] + [MaxDirectMemorySize] + slop
>
> Easy enough to see these values via -XX:+PrintFlagsFinal if they're not 
> explicitly defined by your apps. We typically had Xmx somewhere between 
> 8-12GB, but MaxDirectMemorySize varied greatly from app to app. Sometimes a 
> few hundred MB, in some weird cases it was multiples of the JVM heap size.
>
> The "slop" was for things we hadn't accounted for, but we really should 
> have included things like the code cache size etc. as Meg's estimate above 
> does. I think we used ~10% of the JVM heap size, which was probably 
> slightly wasteful, but worked well enough for us. Suggest you take the 
> above heuristic and mix it up with Meg's idea to include code cache size 
> etc. & feel your way from there. I'd personally always leave at least a few 
> hundred megs additional overhead on top of my "hard" numbers because I 
> don't trust myself with such things. :)
>
> Let's see, what else. At the time our JVM -- think this was an Oracle Java 
> 8 JDK -- set MaxDirectMemorySize to the value of Xmx by default, implying 
> the JVM process *could* (but not necessarily *would*) grow up to roughly 
> double its configured size to accommodate heap + direct buffers if you had 
> an application that made heavy use of direct buffers and put enough 
> pressure on the heap to grow it to the configured Xmx value (or as we 
> typically did, set Xmx == Xms).
>
> Where possible we would constrain MaxDirectMemorySize to something "real" 
> rather than leaving it to this default, preferring to have the JVM throw up 
> an OOME if we were allocating more direct memory than we expected so we 
> could get more info about the failure rather than worrying about the OOM 
> killer hard kill the entire process & not being able to understand why. 
> YMMV.
>
> One caveat: I can't quite remember 
> if Unsafe.allocateMemory()/Unsafe.freeMemory() count toward your 
> MaxDirectMemorySize ... perhaps somebody else here more familiar with the 
> JVM internals could weigh in on that. Perhaps another thing to watch out 
> for if you're doing "interesting" things with the JVM.
>
> I found this sort of "informed guess" to be much more reliable than trying 
> to figure things out empirically by monitoring processes over time etc. ... 
> anyway, hope that helps, curious to know what you ultimately end up with.
>
> Cheers,
> Tom
>
> On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura  > wrote:
>
>> Hi Sebastian,
>>
>> Our product runs within the JVM, within a (Hadoop) YARN container. 
>> Similar to your situation, YARN will kill the container if it goes over the 
>> amount of memory reserved for the container. Java heap sizes (-Xmx) for the 
>> apps we run within containers vary from about 6GB to about 31GB, so this 
>> may be completely inappropriate if you use much smaller heaps, but here is 
>> the heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to 
>> the JVM and adjustJvmMemoryForYarn() gives the size of the container we 
>> request.
>>
>> private static int getReservedCodeCacheSize(int jvmMemory)
>> {
>> return 100;
>> }
>>
>> private static int getMaxMetaspaceSize(int jvmMemory)
>> {
>> return 256;
>> }
>>
>> private static int getCompressedClassSpaceSize(int jvmMemory)
>> {
>> return 256;
>> }
>>
>> private static int getExtraJvmOverhead(int jvmMemory)
>> {
>> if (jvmMemory <= 2048)
>> {
>> return 1024;
>> }
>> else if(jvmMemory <= (1024 * 16))
>> {
>> return 2048;
>> }
>> else if(jvmMemory <= (1024 * 31))
>> {
>> return 5120;
>> }
>> else
>> {
>> return 8192;
>> }
>> }
>>
>> public static int adjustJvmMemoryForYarn(int jvmMemory)
>> {
>> if (jvmMemory == 0)
>> {
>> return 0;
>> }
>> 
>> return jvmMemory +
>

Re: Measuring JVM memory for containers

2017-08-03 Thread Tom Lee
Hey Sebastian,

Dealt with a similar issue on Docker a few years back -- safest way to do
it is to use some sort of heuristic for your maximum JVM process size.
Working from a very poor memory and perhaps somebody here will tell me this
is a bad idea for perfectly good reasons, but iirc the ham-fisted heuristic
we used at the time for max total JVM process size was something like:

[Xmx] + [MaxDirectMemorySize] + slop

Easy enough to see these values via -XX:+PrintFlagsFinal if they're not
explicitly defined by your apps. We typically had Xmx somewhere between
8-12GB, but MaxDirectMemorySize varied greatly from app to app. Sometimes a
few hundred MB, in some weird cases it was multiples of the JVM heap size.
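
For instance, the relevant defaults can be dumped like so (the grep pattern is
just illustrative):

java -XX:+PrintFlagsFinal -version | \
    grep -E 'MaxHeapSize|MaxDirectMemorySize|ReservedCodeCacheSize|MaxMetaspaceSize|ThreadStackSize'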

The "slop" was for things we hadn't accounted for, but we really should
have included things like the code cache size etc. as Meg's estimate above
does. I think we used ~10% of the JVM heap size, which was probably
slightly wasteful, but worked well enough for us. Suggest you take the
above heuristic and mix it up with Meg's idea to include code cache size
etc. & feel your way from there. I'd personally always leave at least a few
hundred megs additional overhead on top of my "hard" numbers because I
don't trust myself with such things. :)

Let's see, what else. At the time our JVM -- think this was an Oracle Java
8 JDK -- set MaxDirectMemorySize to the value of Xmx by default, implying
the JVM process *could* (but not necessarily *would*) grow up to roughly
double its configured size to accommodate heap + direct buffers if you had
an application that made heavy use of direct buffers and put enough
pressure on the heap to grow it to the configured Xmx value (or as we
typically did, set Xmx == Xms).

Where possible we would constrain MaxDirectMemorySize to something "real"
rather than leaving it to this default, preferring to have the JVM throw up
an OOME if we were allocating more direct memory than we expected so we
could get more info about the failure rather than worrying about the OOM
killer hard-killing the entire process & not being able to understand why.
YMMV.
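
A tiny sketch of that fail-fast behavior (my own example; run with something
like -XX:MaxDirectMemorySize=64m): once the cap is exhausted, further direct
allocations fail with an OutOfMemoryError inside the JVM instead of the whole
process being hard-killed from outside.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectLimitDemo {
    public static void main(String[] args) {
        List<ByteBuffer> pinned = new ArrayList<>();  // keep buffers reachable
        try {
            while (true) {
                pinned.add(ByteBuffer.allocateDirect(16 * 1024 * 1024)); // 16 MB each
            }
        } catch (OutOfMemoryError e) {
            // e.g. "Direct buffer memory" once MaxDirectMemorySize is exhausted
            System.out.println("Hit the direct memory cap after "
                    + pinned.size() + " buffers: " + e.getMessage());
        }
    }
}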

One caveat: I can't quite remember
if Unsafe.allocateMemory()/Unsafe.freeMemory() count toward your
MaxDirectMemorySize ... perhaps somebody else here more familiar with the
JVM internals could weigh in on that. Perhaps another thing to watch out
for if you're doing "interesting" things with the JVM.

I found this sort of "informed guess" to be much more reliable than trying
to figure things out empirically by monitoring processes over time etc. ...
anyway, hope that helps, curious to know what you ultimately end up with.

Cheers,
Tom

On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura  wrote:

> Hi Sebastian,
>
> Our product runs within the JVM, within a (Hadoop) YARN container. Similar
> to your situation, YARN will kill the container if it goes over the amount
> of memory reserved for the container. Java heap sizes (-Xmx) for the apps
> we run within containers vary from about 6GB to about 31GB, so this may be
> completely inappropriate if you use much smaller heaps, but here is the
> heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to the
> JVM and adjustJvmMemoryForYarn() gives the size of the container we request.
>
> private static int getReservedCodeCacheSize(int jvmMemory)
> {
> return 100;
> }
>
> private static int getMaxMetaspaceSize(int jvmMemory)
> {
> return 256;
> }
>
> private static int getCompressedClassSpaceSize(int jvmMemory)
> {
> return 256;
> }
>
> private static int getExtraJvmOverhead(int jvmMemory)
> {
> if (jvmMemory <= 2048)
> {
> return 1024;
> }
> else if(jvmMemory <= (1024 * 16))
> {
> return 2048;
> }
> else if(jvmMemory <= (1024 * 31))
> {
> return 5120;
> }
> else
> {
> return 8192;
> }
> }
>
> public static int adjustJvmMemoryForYarn(int jvmMemory)
> {
> if (jvmMemory == 0)
> {
> return 0;
> }
>
> return jvmMemory +
>getReservedCodeCacheSize(jvmMemory) +
>getMaxMetaspaceSize(jvmMemory) +
>getCompressedClassSpaceSize(jvmMemory) +
>getExtraJvmOverhead(jvmMemory);
> }
>
>
>
> If the app uses any significant off-heap memory, we just add this to the
> container size.
>
> Obviously, this isn't optimal, but it does prevent the "OOM killer" from
> kicking in. I'm interested to see if anyone has a better solution!
>
> -Meg
>
>
>
> On Thursday, August 3, 2017 at 5:17:11 AM UTC-4, Sebastian Łaskawiec wrote:
>>
>> Hey,
>>
>> Before digging into the problem, let me say that I'm very happy to meet
>> you! My name is Sebastian Łaskawiec and I've been working for Red Hat
>> focusing mostly on in memory store solutions. A while ago I attended JVM
>> performance and profiling workshop lead by Martin, which was an incredible
>> experience to me.
>>
>> Over the last couple of days I've been working on tuning and sizing our 
>> app for Docker Containers. I'm especially interested in running JVM without 
>> swap and constraining memory.

Re: Measuring JVM memory for containers

2017-08-03 Thread Meg Figura
Hi Sebastian,

Our product runs within the JVM, within a (Hadoop) YARN container. Similar 
to your situation, YARN will kill the container if it goes over the amount 
of memory reserved for the container. Java heap sizes (-Xmx) for the apps 
we run within containers vary from about 6GB to about 31GB, so this may be 
completely inappropriate if you use much smaller heaps, but here is the 
heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting (in MB) given to the 
JVM, and adjustJvmMemoryForYarn() gives the size (in MB) of the container we request.

private static int getReservedCodeCacheSize(int jvmMemory)
{
    return 100;
}

private static int getMaxMetaspaceSize(int jvmMemory)
{
    return 256;
}

private static int getCompressedClassSpaceSize(int jvmMemory)
{
    return 256;
}

private static int getExtraJvmOverhead(int jvmMemory)
{
    if (jvmMemory <= 2048)
    {
        return 1024;
    }
    else if (jvmMemory <= (1024 * 16))
    {
        return 2048;
    }
    else if (jvmMemory <= (1024 * 31))
    {
        return 5120;
    }
    else
    {
        return 8192;
    }
}

public static int adjustJvmMemoryForYarn(int jvmMemory)
{
    if (jvmMemory == 0)
    {
        return 0;
    }

    return jvmMemory +
           getReservedCodeCacheSize(jvmMemory) +
           getMaxMetaspaceSize(jvmMemory) +
           getCompressedClassSpaceSize(jvmMemory) +
           getExtraJvmOverhead(jvmMemory);
}
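
As a worked example (my own arithmetic, using the heuristic above): a 12 GB heap
comes out to 12288 + 100 + 256 + 256 + 2048 = 14948 MB requested for the
container, i.e. roughly 14.6 GB.

    int containerMb = adjustJvmMemoryForYarn(12 * 1024);  // 14948 MB for a 12 GB heap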



If the app uses any significant off-heap memory, we just add this to the 
container size.

Obviously, this isn't optimal, but it does prevent the "OOM killer" from 
kicking in. I'm interested to see if anyone has a better solution!

-Meg



On Thursday, August 3, 2017 at 5:17:11 AM UTC-4, Sebastian Łaskawiec wrote:
>
> Hey,
>
> Before digging into the problem, let me say that I'm very happy to meet 
> you! My name is Sebastian Łaskawiec and I've been working for Red Hat 
> focusing mostly on in memory store solutions. A while ago I attended JVM 
> performance and profiling workshop lead by Martin, which was an incredible 
> experience to me. 
>
> Over the last a couple of days I've been working on tuning and sizing our 
> app for Docker Containers. I'm especially interested in running JVM without 
> swap and constraining memory. Once you hit the memory limit, the OOM Killer 
> kicks and takes your application down. Rafael wrote pretty good pragmatic 
> description here [1].
>
> I'm currently looking for some good practices for measuring and tuning JVM 
> memory size. I'm currently using:
>
>- The JVM native memory tracker [2]
>- pmap -x, which gives me RSS
>- jstat -gccause, which gives me an idea how GC is behaving
>- dstat which is not CGroups aware but gives me an overall idea about 
>paging, CPU and memory
>
> Here's an example of a log that I'm analyzing [3]. Currently I'm trying to 
> adjust Xmx and Xms correctly so that my application fills the constrained 
> container but doesn't spill out (which would result in OOM Kill done by the 
> kernel). The biggest problem that I have is how to measure the remaining 
> amount of memory inside the container? Also I'm not sure why the amount of 
> committed JVM memory is different from RSS reported by pmap -x? Could you 
> please give me a hand with this?
>
> Thanks,
> Sebastian
>
> [1] https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
> [2] 
> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
> [3] https://gist.github.com/slaskawi/a6ddb32e1396384d805528884f25ce4b
>



Measuring JVM memory for containers

2017-08-03 Thread Sebastian Łaskawiec
Hey,

Before digging into the problem, let me say that I'm very happy to meet 
you! My name is Sebastian Łaskawiec and I've been working for Red Hat 
focusing mostly on in-memory store solutions. A while ago I attended a JVM 
performance and profiling workshop led by Martin, which was an incredible 
experience for me. 

Over the last couple of days I've been working on tuning and sizing our 
app for Docker containers. I'm especially interested in running the JVM without 
swap and with constrained memory. Once you hit the memory limit, the OOM Killer 
kicks in and takes your application down. Rafael wrote a pretty good pragmatic 
description here [1].

I'm looking for some good practices for measuring and tuning JVM 
memory size. Currently I'm using the following (rough example invocations 
follow the list):

   - The JVM native memory tracker [2]
   - pmap -x, which gives me RSS
   - jstat -gccause, which gives me an idea how GC is behaving
   - dstat which is not CGroups aware but gives me an overall idea about 
   paging, CPU and memory
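
For reference, rough example invocations (the pid is a placeholder):

pmap -x <pid> | tail -1        # total RSS of the java process
jstat -gccause <pid> 1000      # GC stats and last GC cause, sampled every second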

Here's an example of a log that I'm analyzing [3]. Currently I'm trying to 
adjust Xmx and Xms so that my application fills the constrained 
container but doesn't spill out (which would result in an OOM kill by the 
kernel). The biggest problem I have is how to measure the remaining 
amount of memory inside the container. Also, I'm not sure why the amount of 
committed JVM memory is different from the RSS reported by pmap -x. Could you 
please give me a hand with this?

Thanks,
Sebastian

[1] https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
[2] 
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
[3] https://gist.github.com/slaskawi/a6ddb32e1396384d805528884f25ce4b

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to mechanical-sympathy+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.