Thanks for the response, Michael.
> It looks like a Windows minidump (unsurprisingly) can't follow the
> relation. A regular goroutine stack trace dump that gets emitted on a
> forced exit or crash is able to show both sides of the system stack switch
> with a little bit of configuration.
Yeah, that makes sense.
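For the archives: I believe the "little bit of configuration" is the traceback
level, i.e. GOTRACEBACK=system (or crash) in the environment. That's an
assumption on my part, but the same level can also be set from code; a minimal
sketch:

package main

import "runtime/debug"

func main() {
	// Equivalent to running with GOTRACEBACK=system: tracebacks printed on
	// a fatal error or unrecovered panic include runtime-internal frames,
	// which should show both sides of the system-stack switch. "crash" goes
	// one step further and crashes in an OS-specific manner instead of
	// exiting (e.g. to trigger a dump).
	debug.SetTraceback("system")

	// ... rest of the application unchanged ...
}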
The example goroutine in the original post is parked waiting for some other
goroutine to finish up the GC cycle. Somewhere, a goroutine is getting
stuck trying to finish it up, which could possibly be a deadlock. (I am
especially suspicious of a deadlock bug because most threads are stopped.)
I am pretty sure the runtime is supposed to crash the process if it slows the
allocators “too much” (I believe there are some config settings to control
this).
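I'm not sure which settings that refers to; the ones I know of are the soft
memory limit and the GC target, settable via GOMEMLIMIT / GOGC or from code.
A sketch with illustrative values, in case it's useful:

package main

import "runtime/debug"

func main() {
	// Soft memory limit, same knob as the GOMEMLIMIT environment variable
	// (available since go1.19). As the live heap approaches it, the GC runs
	// more often and allocations are assisted/slowed; exceeding it does not
	// by itself kill the process.
	debug.SetMemoryLimit(4 << 30) // 4 GiB, illustrative value only

	// GC target percentage, same knob as the GOGC environment variable.
	debug.SetGCPercent(100)
}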
If you have enough goroutines it may look like they are hung - you need to
track specific goroutines by their ID. The stack certainly looks
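In case it helps with the tracking: a full goroutine dump includes each
goroutine's ID and wait state, so taking one periodically and diffing the IDs
across dumps shows whether particular goroutines ever move. A rough sketch,
assuming stderr ends up somewhere readable on the customer machine:

package main

import (
	"os"
	"runtime/pprof"
	"time"
)

func main() {
	// debug=2 prints every goroutine in the same format as a panic
	// traceback ("goroutine N [state, X minutes]: ..."), ID included.
	go func() {
		for range time.Tick(30 * time.Second) { // interval is arbitrary
			pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
		}
	}()

	select {} // stand-in for the real application
}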
That's an interesting idea; I probably wouldn't have thought of that on my
own. Is that expected behavior for memory pressure on Windows+golang? I
don't have much Windows experience, so my assumption would be that the
Windows equivalent of the OOM killer would kick in and just kill the process.
Feels like a memory leak to me. I would look for growing heap size in the gc
logs. I am guessing that the system is not completely hung - but rather the
runtime is having a hard time obtaining more memory, so it is slowing the
allocators to a rate that makes them appear hung.
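For the GC logs: GODEBUG=gctrace=1 in the environment makes the runtime print
a one-line summary to stderr after every cycle, no rebuild needed. If stderr
isn't captured on the customer box, heap growth can also be logged from inside
the process; a sketch:

package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	// Sample heap stats periodically so growth shows up in the normal
	// application logs; a steadily climbing HeapAlloc/HeapSys over days
	// would point at a leak. ReadMemStats briefly stops the world, so
	// keep the interval coarse.
	go func() {
		var m runtime.MemStats
		for range time.Tick(time.Minute) {
			runtime.ReadMemStats(&m)
			log.Printf("heap: alloc=%d MiB sys=%d MiB numGC=%d",
				m.HeapAlloc>>20, m.HeapSys>>20, m.NumGC)
		}
	}()

	select {} // stand-in for the real application
}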
It may be that
This is an odd one. For reference, this is a customer machine running Windows
Server 2016; the application is compiled with go1.20.11. It just hangs after a
number of days; a Windows minidump reveals that most threads are doing this:
Goroutine 462 - User: unicode/utf16/utf16.go:106 unicode/utf16.Decode