Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-23 Thread Robert Engels



See https://github.com/golang/go/issues/9849

Go has no limit; you use ulimit to control it.



-Original Message-
>From: Kevin Chadwick 
>Sent: Jan 23, 2020 10:26 AM
>To: golang-nuts@googlegroups.com
>Subject: Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?
>
>On 2020-01-23 14:18, robert engels wrote:
>> There is nothing “special” about it - generally the Go process calls 
>> “malloc()” and fails with OOM (unable to expand the process memory size), 
>> but the OS can kill processes in a low system memory condition without them 
>> calling malloc (OOM killer kills the hogs). If your process is dying due to 
>> the OOM killer, you have configuration problems.
>> 
>
>Because Go has its own limit, right? Whereas firefox/chrome didn't bother,
>likely because who are firefox/chrome to limit your system's memory use when
>that is the OS's job. I guess Go limits it for panics, or because Linux has a
>questionable default of unlimited, or maybe because it supports so many
>platforms; otherwise it does not make any sense to me. The point is that if
>polling has race issues, then I would investigate whether an OOM return/panic
>might be a timely indicator to the application/routine to clean house.
>
>> When malloc() fails in a GC system, it could be because the free space is 
>> fragmented. In a compacting and moving GC, it will shift objects around to 
>> make room (Go does not do this; most Java collectors do). Additionally, 
>> what I was primarily pointing out, rather than failing the GC will free 
>> “soft refs” to make room.
>
>Right, I believe TinyGo does some compacting, but you are better off without
>dynamic memory at all on a micro without an MMU. This compacting doesn't really
>solve the issue in a dynamic world with an MMU either; it just makes the system
>more predictable, doesn't it?
>
>-- 
>You received this message because you are subscribed to the Google Groups 
>"golang-nuts" group.
>To unsubscribe from this group and stop receiving emails from it, send an 
>email to golang-nuts+unsubscr...@googlegroups.com.
>To view this discussion on the web visit 
>https://groups.google.com/d/msgid/golang-nuts/7fea4901-16df-93e7-8a7c-9b95beb9a33e%40gmail.com.



Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-23 Thread Kevin Chadwick
On 2020-01-23 14:18, robert engels wrote:
> There is nothing “special” about it - generally the Go process calls 
> “malloc()” and fails with OOM (unable to expand the process memory size), but 
> the OS can kill processes in a low system memory condition without them 
> calling malloc (OOM killer kills the hogs). If your process is dying due to 
> the OOM killer, you have configuration problems.
> 

Because Go has its own limit, right? Whereas firefox/chrome didn't bother,
likely because who are firefox/chrome to limit your system's memory use when
that is the OS's job. I guess Go limits it for panics, or because Linux has a
questionable default of unlimited, or maybe because it supports so many
platforms; otherwise it does not make any sense to me. The point is that if
polling has race issues, then I would investigate whether an OOM return/panic
might be a timely indicator to the application/routine to clean house.

> When malloc() fails in a GC system, it could be because the free space is 
> fragmented. In a compacting and moving GC, it will shift objects around to 
> make room (Go does not do this; most Java collectors do). Additionally, 
> what I was primarily pointing out, rather than failing the GC will free “soft 
> refs” to make room.

Right, I believe TinyGo does some compacting, but you are better off without
dynamic memory at all on a micro without an MMU. This compacting doesn't really
solve the issue in a dynamic world with an MMU either; it just makes the system
more predictable, doesn't it?



Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-23 Thread robert engels
There is nothing “special” about it - generally the Go process calls “malloc()” 
and fails with OOM (unable to expand the process memory size), but the OS can 
kill processes in a low system memory condition without them calling malloc 
(OOM killer kills the hogs). If your process is dying due to the OOM killer, 
you have configuration problems.

When malloc() fails in a GC system, it could be because the free space is 
fragmented. In a compacting and moving GC, it will shift objects around to make 
room (Go does not do this; most Java collectors do). Additionally, what I was 
primarily pointing out, rather than failing the GC will free “soft refs” to 
make room.



> On Jan 23, 2020, at 6:13 AM, Kevin Chadwick  wrote:
> 
> On 2020-01-20 18:57, Robert Engels wrote:
>> This is solved pretty easily in Java using soft references and a hard memory 
>> cap. 
>> 
>> Similar techniques may work here.
> 
> One of the only things I dislike about Go compared to C is the arbitrary
> memory allocation, but it has great benefits in coding time, and I expect
> you can handle an array allocation panic etc. and keep track of your buffers.
> 
> I know that OpenBSD sets limits by default which need to be raised for
> chrome/firefox to prevent OOM death etc. which isn't the case on Linux without
> default limits (last I heard). I believe Linux kills the hogging process 
> instead!
> 
> I seem to remember OpenBSD devs saying the OS provides opportunity for Firefox
> to manage its own OOM condition with these limits in place. I took that to 
> mean
> that Linux defaults have made it difficult to handle this properly in general,
> but I may lack understanding of the general issue?
> 



Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-23 Thread Kevin Chadwick
On 2020-01-20 18:57, Robert Engels wrote:
> This is solved pretty easily in Java using soft references and a hard memory 
> cap. 
> 
> Similar techniques may work here.

One of the only things I dislike about Go compared to C is the arbitrary memory
allocation, but it has great benefits in coding time, and I expect you can
handle an array allocation panic etc. and keep track of your buffers.

I know that OpenBSD sets limits by default which need to be raised for
chrome/firefox to prevent OOM death etc. which isn't the case on Linux without
default limits (last I heard). I believe Linux kills the hogging process 
instead!

I seem to remember OpenBSD devs saying the OS provides opportunity for Firefox
to manage its own OOM condition with these limits in place. I took that to mean
that Linux defaults have made it difficult to handle this properly in general,
but I may lack understanding of the general issue?



Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-20 Thread robert engels
This is actually a SoftRef, which is like a WeakRef but is only collected 
under memory pressure.

If the WeakRef package works, I assume it could be modified to enable “soft 
ref” like functionality. It was my understanding that you need GC/runtime 
support to truly make this work, but maybe they have an Unsafe/CGO way. I 
haven’t really researched the WeakRef packages for Go.



> On Jan 20, 2020, at 4:58 PM, Eric S. Raymond  wrote:
> 
> Robert Engels :
>> This is solved pretty easily in Java using soft references and a hard memory 
>> cap. 
> 
> That'd be nice, but the only weak-references package I've found doesn't seem
> to allow more than one weakref per target. That's really annoying, because my
> use case is a target object for a many-to-one mapping that should become
> GCable when the last of its source objects is GCed.
> 
> Is there a weakrefs implementation out there that will do that?
> -- 
> Eric S. Raymond <http://www.catb.org/~esr/>
> 
> 



Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-20 Thread Eric S. Raymond
Robert Engels :
> This is solved pretty easily in Java using soft references and a hard memory 
> cap. 

That'd be nice, but the only weak-references package I've found doesn't seem
to allow more than one weakref per target. That's really annoying, because my
use case is a target object for a many-to-one mapping that should become
GCable when the last of its source objects is GCed.

Is there a weakrefs implementation out there that will do that?
-- 
Eric S. Raymond <http://www.catb.org/~esr/>




Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-20 Thread Robert Engels
This is solved pretty easily in Java using soft references and a hard memory 
cap. 

Similar techniques may work here. 

> On Jan 20, 2020, at 11:22 AM, Christian Mauduit  wrote:
> 
> Hi,
> 
> That is a generic question and I think that if you want to keep an approach 
> with a "global indicator of how much memory is used", your approach is OK. 
> You might also want to store this information of "should I throttle" in a 
> cache or something, the cache could be just a shared atomic flag.
> 
> But, from my experience, ugly OOMs can come very fast. So a ticker might just 
> not catch them. On modern hardware, you can allocate gigabytes of memory 
> within seconds, and hit the roof before your ticker based watcher notices 
> anything.
> 
> The best rules I have seen to enforce proper memory control are:
> 
> - channels are blessed as they are limited in size
> - bind everything else to a limit. More specifically, enforce limits for:
>  - maps (limit the number of entries)
>  - arrays
>  - generally speaking, anything which can *grow*
> 
> It is tedious, but on top of avoiding the "my ticker does not run often 
> enough" problem, you will also be able to take a decision (should I allocate 
> or not) based on local, actionable facts. The problem with the global 
> approach is that at some point, when the program is running out of memory, 
> you will not know *WHAT* is taking so much space. So you might end up 
> stopping allocating Foos when the problem is that there are too many Bars 
> allocated. I have seen this happening over and over.
> 
> So from my experience, just:
> 
> - observe, monitor your program to see what is taking most memory
> - enforce limitation on that part to ensure it is bounded
> - make this configurable as you will need to fine-tune it
> - write a test that would OOM without memory control
> 
> Sometimes the naive approach of those low-level limits does not work.
> 
> Example: you have a map[int64]string and you can afford ~1Gb for it, but the 
> size of string can range from 1 byte to 1Mb, with an average of 1Kb, and you 
> need to store 1M keys. With such a case, if I want to be serious about OOMs 
> and if it is a real problem, I would bite the bullet and simply track the 
> total amount of bytes stored. Maybe not the precise amount of data, summing 
> the length of all strings could be enough.
> 
> But the bottom line is -> I fear there is no "one size fits all" solution for 
> this.
> 
> It is expected that library providers do not implement this as they wish to 
> offer generic multi-purpose tools, and memory enforcement is rather an 
> application, final-product requirement.
> 
> Best luck, OOMs are hard.
> 
> Christian.
> 
>> On 20/01/2020 09:22, Urjit Singh Bhatia wrote:
>> Hi folks,
>> I am trying to figure out if someone has a decent solution for max memory 
>> usage/mem-pressure so far. I went through some of the github issues related 
>> to this (SetMaxHeap proposals and related discussions) but most of them are 
>> still under review:
>>  * https://go-review.googlesource.com/c/go/+/46751
>>  * https://github.com/golang/go/issues/16843
>>  * https://github.com/golang/go/issues/23044
>> My use case is that I have an in-memory data store and producers can 
>> potentially drive it to OOM, causing the server to drop all the data. I'd 
>> like to avoid it as much as possible by potentially informing the producers 
>> to slow down/refuse new jobs for a bit till some data is drained if I had 
>> some sort of a warning system. I think rabbitmq does something similar  
>> https://www.rabbitmq.com/memory.html
>>  * How are others in the community handling these situations? It seems
>>like most of database-ish implementations will just OOM and die.
>>  * Has anyone implemented a mem pressure warning mechanism?
>> Does something like this make sense for detecting high mem usage? (Of course 
>> other programs can randomly ask for memory from the OS so this isn't going 
>> to be leak-proof)
>> func memWatcher(highWatermark uint64, memCheckInterval time.Duration) {
>>     s := &runtime.MemStats{}
>>     // Count how many times we are consecutively beyond a mem watermark
>>     highWatermarkCounter := -1
>>     forceGCInterval := 10
>>     for range time.NewTicker(memCheckInterval).C {
>>         runtime.ReadMemStats(s)
>>         if s.NextGC >= highWatermark {
>>             log.Println("Approaching highWatermark")
>>             continue
>>         }
>>         // Crossing highWatermark
>>         if s.HeapAlloc >= highWatermark {
>>             if highWatermarkCounter < 0 {
>>                 // Transitioning beyond highWatermark
>>                 log.Println("High mem usage detected! Crossed high watermark")
>>                 // Start counters
>>                 highWatermarkCounter = 0
>>                 forceGCInterval = 10
>>                 log.Println("Reject/Throttle new work till usage reduces")
>>                 continue
>>             } else {
>>                 // Still above highWatermark
>>                 highWatermarkCounter++
>>                 if highWatermarkCounter >= forceGCInterval {
>>                     log.Println("Forcing GC...")
>>                     runtime.GC()

Re: [go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-20 Thread Christian Mauduit

Hi,

That is a generic question and I think that if you want to keep an 
approach with a "global indicator of how much memory is used", your 
approach is OK. You might also want to store this information of "should 
I throttle" in a cache or something, the cache could be just a shared 
atomic flag.


But, from my experience, ugly OOMs can come very fast. So a ticker might 
just not catch them. On modern hardware, you can allocate gigabytes of 
memory within seconds, and hit the roof before your ticker based watcher 
notices anything.


The best rules I have seen to enforce proper memory control are:

- channels are blessed as they are limited in size
- bind everything else to a limit. More specifically, enforce limits for:
  - maps (limit the number of entries)
  - arrays
  - generally speaking, anything which can *grow*

It is tedious, but on top of avoiding the "my ticker does not run often 
enough" problem, you will also be able to take a decision (should I 
allocate or not) based on local, actionable facts. The problem with the 
global approach is that at some point, when the program is running out 
of memory, you will not know *WHAT* is taking so much space. So you 
might end up stopping allocating Foos when the problem is that there are 
too many Bars allocated. I have seen this happening over and over.


So from my experience, just:

- observe, monitor your program to see what is taking most memory
- enforce limitation on that part to ensure it is bounded
- make this configurable as you will need to fine-tune it
- write a test that would OOM without memory control

Sometimes the naive approach of those low-level limits does not work.

Example: you have a map[int64]string and you can afford ~1Gb for it, but 
the size of string can range from 1 byte to 1Mb, with an average of 1Kb, 
and you need to store 1M keys. With such a case, if I want to be serious 
about OOMs and if it is a real problem, I would bite the bullet and 
simply track the total amount of bytes stored. Maybe not the precise 
amount of data, summing the length of all strings could be enough.


But the bottom line is -> I fear there is no "one size fits all" 
solution for this.


It is expected that library providers do not implement this as they wish 
to offer generic multi-purpose tools, and memory enforcement is rather 
an application, final-product requirement.


Best luck, OOMs are hard.

Christian.

On 20/01/2020 09:22, Urjit Singh Bhatia wrote:

Hi folks,

I am trying to figure out if someone has a decent solution for max 
memory usage/mem-pressure so far. I went through some of the github 
issues related to this (SetMaxHeap proposals and related discussions) 
but most of them are still under review:


  * https://go-review.googlesource.com/c/go/+/46751
  * https://github.com/golang/go/issues/16843
  * https://github.com/golang/go/issues/23044

My use case is that I have an in-memory data store and producers can 
potentially drive it to OOM, causing the server to drop all the data. 
I'd like to avoid it as much as possible by potentially informing the 
producers to slow down/refuse new jobs for a bit till some data is 
drained if I had some sort of a warning system. I think rabbitmq does 
something similar  https://www.rabbitmq.com/memory.html


  * How are others in the community handling these situations? It seems
like most of database-ish implementations will just OOM and die.
  * Has anyone implemented a mem pressure warning mechanism?

Does something like this make sense for detecting high mem usage? (Of 
course other programs can randomly ask for memory from the OS so this 
isn't going to be leak-proof)


func memWatcher(highWatermark uint64, memCheckInterval time.Duration) {
    s := &runtime.MemStats{}
    // Count how many times we are consecutively beyond a mem watermark
    highWatermarkCounter := -1
    forceGCInterval := 10
    for range time.NewTicker(memCheckInterval).C {
        runtime.ReadMemStats(s)
        if s.NextGC >= highWatermark {
            log.Println("Approaching highWatermark")
            continue
        }
        // Crossing highWatermark
        if s.HeapAlloc >= highWatermark {
            if highWatermarkCounter < 0 {
                // Transitioning beyond highWatermark
                log.Println("High mem usage detected! Crossed high watermark")
                // Start counters
                highWatermarkCounter = 0
                forceGCInterval = 10
                log.Println("Reject/Throttle new work till usage reduces")
                continue
            } else {
                // Still above highWatermark
                highWatermarkCounter++
                if highWatermarkCounter >= forceGCInterval {
                    log.Println("Forcing GC...")
                    runtime.GC()
                    forceGCInterval = forceGCInterval * 2 // Some kind of back-off
                }
            }
        } else {
            if highWatermarkCounter >= 0 {
                // reset counters - back under the highWatermark
                log.Println("Mem usage is back under highWatermark")
                highWatermarkCounter = -1
                forceGCInterval = 10
            }
        }
    }
}


[go-nuts] Is there some kind of a MaxHeapAlarm implementation?

2020-01-20 Thread Urjit Singh Bhatia
Hi folks,

I am trying to figure out if someone has a decent solution for max memory 
usage/mem-pressure so far. I went through some of the github issues related 
to this (SetMaxHeap proposals and related discussions) but most of them are 
still under review:

   - https://go-review.googlesource.com/c/go/+/46751
   - https://github.com/golang/go/issues/16843
   - https://github.com/golang/go/issues/23044

My use case is that I have an in-memory data store and producers can 
potentially drive it to OOM, causing the server to drop all the data. I'd 
like to avoid it as much as possible by potentially informing the producers 
to slow down/refuse new jobs for a bit till some data is drained if I had 
some sort of a warning system. I think rabbitmq does something similar  
https://www.rabbitmq.com/memory.html 

   - How are others in the community handling these situations? It seems 
   like most of database-ish implementations will just OOM and die.
   - Has anyone implemented a mem pressure warning mechanism?

Does something like this make sense for detecting high mem usage? (Of 
course other programs can randomly ask for memory from the OS so this isn't 
going to be leak-proof)

func memWatcher(highWatermark uint64, memCheckInterval time.Duration) {
    s := &runtime.MemStats{}
    // Count how many times we are consecutively beyond a mem watermark
    highWatermarkCounter := -1
    forceGCInterval := 10
    for range time.NewTicker(memCheckInterval).C {
        runtime.ReadMemStats(s)
        if s.NextGC >= highWatermark {
            log.Println("Approaching highWatermark")
            continue
        }
        // Crossing highWatermark
        if s.HeapAlloc >= highWatermark {
            if highWatermarkCounter < 0 {
                // Transitioning beyond highWatermark
                log.Println("High mem usage detected! Crossed high watermark")
                // Start counters
                highWatermarkCounter = 0
                forceGCInterval = 10
                log.Println("Reject/Throttle new work till usage reduces")
                continue
            } else {
                // Still above highWatermark
                highWatermarkCounter++
                if highWatermarkCounter >= forceGCInterval {
                    log.Println("Forcing GC...")
                    runtime.GC()
                    forceGCInterval = forceGCInterval * 2 // Some kind of back-off
                }
            }
        } else {
            if highWatermarkCounter >= 0 {
                // reset counters - back under the highWatermark
                log.Println("Mem usage is back under highWatermark")
                highWatermarkCounter = -1
                forceGCInterval = 10
            }
        }
    }
}
