Hi,
This is a fairly generic question, and if you want to keep an approach
based on a "global indicator of how much memory is used", yours is
reasonable. You might also want to store the "should I throttle"
decision somewhere cheap to read from the producers' side; a shared
atomic flag is enough (a small sketch follows).
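For what it's worth, a minimal sketch of such a flag (the names and
the int32 encoding are my own, nothing standard):

import "sync/atomic"

// throttleFlag is 1 when producers should back off, 0 otherwise.
// Written by the memory watcher, read by every producer.
var throttleFlag int32

// setThrottle is called by the watcher when usage crosses the limit.
func setThrottle(on bool) {
    var v int32
    if on {
        v = 1
    }
    atomic.StoreInt32(&throttleFlag, v)
}

// shouldThrottle is cheap enough to call on every produce/enqueue.
func shouldThrottle() bool {
    return atomic.LoadInt32(&throttleFlag) == 1
}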
But in my experience, ugly OOMs can come very fast, so a ticker might
simply not catch them. On modern hardware you can allocate gigabytes of
memory within seconds and hit the roof before your ticker-based watcher
notices anything.
The best rules I have seen for enforcing proper memory control are:
- channels are blessed, as they are limited in size
- bind everything else to a limit. More specifically, enforce limits for:
- maps (limit the number of entries)
- slices
- generally speaking, anything which can *grow* (see the sketch right
after this list)
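For example, a minimal sketch of a size-bounded map (the type and all
the names are mine, just for illustration; it is not goroutine-safe,
add a mutex if you share it):

import "errors"

var errFull = errors.New("bounded map is full")

type boundedMap struct {
    max  int
    data map[string][]byte
}

func newBoundedMap(max int) *boundedMap {
    return &boundedMap{max: max, data: make(map[string][]byte)}
}

// put refuses new entries once the limit is reached, giving the
// caller a local, actionable signal instead of a distant OOM.
func (m *boundedMap) put(k string, v []byte) error {
    if _, exists := m.data[k]; !exists && len(m.data) >= m.max {
        return errFull
    }
    m.data[k] = v
    return nil
}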
It is tedious, but on top of avoiding the "my ticker does not run often
enough" problem, you will also be able to make a decision (should I
allocate or not) based on local, actionable facts. The problem with the
global approach is that at some point, when the program is running out
of memory, you will not know *WHAT* is taking so much space. So you
might end up refusing to allocate Foos when the problem is that there
are too many Bars allocated. I have seen this happen over and over.
So from my experience, just:
- observe and monitor your program to see what is taking the most memory
- enforce a limit on that part to ensure it is bounded
- make this configurable, as you will need to fine-tune it
- write a test that would OOM without memory control (a sketch follows)
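As an illustration of that last point, a hypothetical test against the
boundedMap sketch above: without the limit, this loop would grow the
map without bound and, with big enough values, eventually OOM.

import (
    "fmt"
    "testing"
)

func TestBoundedMapRefusesWhenFull(t *testing.T) {
    m := newBoundedMap(1000)
    refused := 0
    // Try to insert far more 1KB values than the limit allows.
    for i := 0; i < 100000; i++ {
        if err := m.put(fmt.Sprintf("key-%d", i), make([]byte, 1024)); err != nil {
            refused++
        }
    }
    if refused != 100000-1000 {
        t.Fatalf("expected %d refused inserts, got %d", 100000-1000, refused)
    }
}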
Sometimes the naive approach of those low-level limits does not work.
Example: you have a map[int64]string and you can afford ~1GB for it, but
the size of each string can range from 1 byte to 1MB, with an average of
1KB, and you need to store 1M keys. In such a case, if I want to be
serious about OOMs and it is a real problem, I would bite the bullet and
simply track the total amount of bytes stored. Maybe not the precise
amount of data; summing the lengths of all the strings could be enough
(sketch below).
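Here is a sketch of that byte-counting variant, under the same
assumptions (the names are mine, and len() on the strings is the
approximation):

import "errors"

var errBudgetExceeded = errors.New("byte budget exceeded")

type byteBudgetMap struct {
    budget int64 // e.g. 1 << 30 for ~1GB
    used   int64
    data   map[int64]string
}

func newByteBudgetMap(budget int64) *byteBudgetMap {
    return &byteBudgetMap{budget: budget, data: make(map[int64]string)}
}

func (m *byteBudgetMap) put(k int64, v string) error {
    delta := int64(len(v))
    if old, ok := m.data[k]; ok {
        delta -= int64(len(old)) // replacing a key frees its old bytes
    }
    if m.used+delta > m.budget {
        return errBudgetExceeded
    }
    m.data[k] = v
    m.used += delta
    return nil
}

func (m *byteBudgetMap) del(k int64) {
    if old, ok := m.data[k]; ok {
        m.used -= int64(len(old))
        delete(m.data, k)
    }
}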
But the bottom line is: I fear there is no "one size fits all"
solution for this.
It is expected that library providers do not implement this: they aim
to offer generic, multi-purpose tools, and memory enforcement is more
of an application-level, final-product requirement.
Best of luck, OOMs are hard.
Christian.
On 20/01/2020 09:22, Urjit Singh Bhatia wrote:
Hi folks,
I am trying to figure out if someone has a decent solution for max
memory usage / memory pressure so far. I went through some of the
GitHub issues related to this (the SetMaxHeap proposals and related
discussions), but most of them are still under review:
* https://go-review.googlesource.com/c/go/+/46751
* https://github.com/golang/go/issues/16843
* https://github.com/golang/go/issues/23044
My use case is that I have an in-memory data store and producers can
potentially drive it to OOM, causing the server to drop all the data.
I'd like to avoid that as much as possible by informing the producers
to slow down/refuse new jobs for a bit till some data is drained, if I
had some sort of a warning system. I think rabbitmq does something
similar: https://www.rabbitmq.com/memory.html
* How are others in the community handling these situations? It seems
like most database-ish implementations will just OOM and die.
* Has anyone implemented a mem pressure warning mechanism?
Does something like this make sense for detecting high mem usage? (Of
course other programs can randomly ask for memory from the OS so this
isn't going to be leak-proof)
import (
    "log"
    "runtime"
    "time"
)

func memWatcher(highWatermark uint64, memCheckInterval time.Duration) {
    s := &runtime.MemStats{}
    // Count how many consecutive ticks we spend beyond the watermark
    highWatermarkCounter := -1
    forceGCInterval := 10
    for range time.NewTicker(memCheckInterval).C {
        runtime.ReadMemStats(s)
        // Crossing highWatermark
        if s.HeapAlloc >= highWatermark {
            if highWatermarkCounter < 0 {
                // Transitioning beyond highWatermark
                log.Println("High mem usage detected! Crossed high watermark")
                // Start counters
                highWatermarkCounter = 0
                forceGCInterval = 10
                log.Println("Reject/Throttle new work till usage reduces")
                continue
            }
            // Still above highWatermark
            highWatermarkCounter++
            if highWatermarkCounter >= forceGCInterval {
                log.Println("Forcing GC...")
                runtime.GC()
                forceGCInterval = forceGCInterval * 2 // Some kind of back-off
            }
            continue
        }
        if s.NextGC >= highWatermark {
            // The next GC target is already past the watermark, so the
            // heap may grow beyond it before the collector runs
            log.Println("Approaching highWatermark")
            continue
        }
        if highWatermarkCounter >= 0 {
            // Reset counters - back under the highWatermark
            log.Println("Mem usage is back under highWatermark")
            highWatermarkCounter = -1
            forceGCInterval = 10
        }
    }
}
--
Christian Mauduit __/\__ ___
uf...@ufoot.org \~ ~ / (`_ \ ___
https://ufoot.org /_o _\ \ \_/ _ \_
int q = (2 * b) || !(2 * b); \/ \___/ \__)