I second Jason's message, and +1 to off-heap memory as a last resort. Here are a few more details:
For a starting point on how to reduce memory allocations directly, see https://go.dev/doc/gc-guide#Optimization_guide. Note that this may require restructuring your program in places (e.g. passing byte slices into functions to be filled, instead of returning freshly allocated byte slices; that sort of thing).

RE: pooling memory, take a look at sync.Pool (https://pkg.go.dev/sync#Pool). A sync.Pool can be really effective at reducing the number of memory allocations made in the steady state.

On Monday, October 30, 2023 at 2:33:21 PM UTC-4 Jason E. Aten wrote:

> Similar to Robert's suggestion, you could just use non-GC-ed memory within
> the process.
>
> https://github.com/glycerine/offheap provides an example.
>
> The central idea is that the Go GC will never touch memory that you have
> requested yourself from the OS. So you can make your own Arenas.
> https://en.wikipedia.org/wiki/Region-based_memory_management
>
> But I would save these as last resorts of course. Before that:
>
> a) can you reduce the objects allocated per request?
> b) can you allocate everything else on the stack? There are flags to see
> why things are escaping to the heap; use those in your analysis.
> (This is by far the simplest and fastest thing. Since the stack is
> automatically unwound when the user request finishes, typically there is
> no GC to do.)
> c) can you allocate a pool of objects that is just reused instead of
> allocating for each new user request?
> d) Is there anything that can be effectively cached and re-used instead of
> allocated?
>
> Use the profiler pprof to figure out what is going on.