Re: [go-nuts] Any recommendation for structured logging library in Golang?

2020-11-17 Thread Mike Spreitzer
> From: ChrisLu 
> To: golang-nuts 
> Date: 11/17/2020 11:22 PM
> Subject: [EXTERNAL] [go-nuts] Any recommendation for structured 
> logging library in Golang?
> Sent by: golang-nuts@googlegroups.com
> 
> I am considering moving from glog to structured logging. I tried
> logrus, go-kit, and uber/zap, but could not find a good fit. In short,
> this is the desired format:
> [info][timestamp] [filename:line_number] message k1=v1 k2=v2 ...
> It should show the correct file name (which requires skipping a few
> frames up the call stack) and line number, in a customizable format,
> rather than as key-value pairs on every line; the extra "time=" and
> "file=" keys are just too verbose.
> Please let me know which one you actually use.
> 
> Thanks!
> Chris

How about one that has fine-grained debug leveling, like glog or klog, 
too?
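
For illustration, here is a minimal sketch using only the standard library
that produces the layout Chris describes and adds a glog-style verbosity
gate; the names logf, Infof, and verbosity are invented for this example:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"time"
)

// verbosity is a hypothetical package-level threshold, analogous to glog's -v flag.
var verbosity = 2

// logf prints: [level][timestamp] [file:line] message k1=v1 k2=v2 ...
// runtime.Caller(2) ascends past logf and Infof, so the reported
// file:line is the call site of Infof.
func logf(level string, v int, msg string, kvs ...interface{}) {
	if v > verbosity {
		return // fine-grained leveling, glog/klog style
	}
	_, file, line, ok := runtime.Caller(2)
	if !ok {
		file, line = "???", 0
	}
	out := fmt.Sprintf("[%s][%s] [%s:%d] %s",
		level, time.Now().Format(time.RFC3339), filepath.Base(file), line, msg)
	for i := 0; i+1 < len(kvs); i += 2 {
		out += fmt.Sprintf(" %v=%v", kvs[i], kvs[i+1])
	}
	fmt.Fprintln(os.Stderr, out)
}

func Infof(msg string, kvs ...interface{}) { logf("info", 0, msg, kvs...) }

func main() {
	Infof("request handled", "k1", "v1", "k2", 42)
}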

Thanks,
Mike




Re: [go-nuts] Mysterious RSS memory spike

2020-08-23 Thread Mike Spreitzer
BTW, how are you measuring RSS?  That is a trick all by itself!  See 
https://groups.google.com/g/golang-nuts/c/LsOYrYc_Occ/m/LbjLAsL6BwAJ
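
For reference, one low-tech way to sample RSS from inside the process on
Linux is to read /proc/self/statm; a minimal sketch (the helper name
rssBytes is invented for this example):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// rssBytes reads /proc/self/statm and returns this process's resident set
// size in bytes (second field, counted in pages).
func rssBytes() (int64, error) {
	data, err := ioutil.ReadFile("/proc/self/statm")
	if err != nil {
		return 0, err
	}
	var size, resident int64
	if _, err := fmt.Sscanf(string(data), "%d %d", &size, &resident); err != nil {
		return 0, err
	}
	return resident * int64(os.Getpagesize()), nil
}

func main() {
	rss, err := rssBytes()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Caveat from the linked thread: with MADV_FREE (Go 1.12+ on Linux),
	// RSS can overstate live memory until the kernel reclaims freed pages.
	fmt.Printf("RSS: %.1f MiB\n", float64(rss)/(1<<20))
}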

Regards,
Mike

golang-nuts@googlegroups.com wrote on 08/23/2020 11:04:47 AM:

> From: Manish R Jain 
> To: "golang-nuts" 
> Date: 08/23/2020 11:04 AM
> Subject: [EXTERNAL] [go-nuts] Mysterious RSS memory spike
> Sent by: golang-nuts@googlegroups.com
> 
> Hey Gophers,
> 
> I'm puzzled by a mysterious RSS memory spike in my Go program, where
> all memory allocations happen via Cgo. I assert that there are no
> memory leaks in the program, and I have written another C program
> with similar logic which does NOT show the RSS spiking. So I suspect
> this has something to do with Go memory.
> 
> Program:
> 
> https://github.com/dgraph-io/ristretto/pull/186
> 
> This PR creates a Go memtest program, which does this:
> - Uses z.Calloc and z.Free to allocate a Go struct (S) and a byte
> slice inside it. All allocations happen in Cgo and are type-cast
> into Go. No allocations happen in Go (except a 32 MB fill slice).
> - z.NumAllocBytes is tracking memory allocated and freed by these calls.
> - Increases memory usage to 16 GB (as reported by z.NumAllocBytes).
> - Decreases it back to 1 GB.
> - Repeats this cycle.
> - On Ctrl+C, it deallocates everything and asserts that Cgo memory 
> allocated is zero.
> 
> I was concerned about memory fragmentation, so I created a very
> similar C program which does the same thing (memtestc).
> 
> Please feel free to run either the Go or the C program. They should
> compile and run easily.
> 
> Behavior:
> 
> Run the program with `go build . && ./memtest`. Go pprof heap
> shows 32 MB used, accounting for the fill slice. However, the
> reported RSS keeps growing, roughly every cycle.
> 
> I'm using Go 1.14.4 and on it, RSS jumps to 22GB after a few cycles. 
> memtestc (C equivalent, compiled with gcc) does not show this 
> behavior. The RSS goes down to 1GB-ish every cycle.
> 
> Any clues why the RSS is so much higher than expected and keeps
> climbing in Go?
> 
> Manish
> Founder, https://dgraph.io



Re: [go-nuts] Re: basics of understanding golang memory usage

2020-08-23 Thread Mike Spreitzer
BTW, how are you measuring RSS?  That is a trick in and of itself!

https://groups.google.com/g/golang-nuts/c/LsOYrYc_Occ/m/LbjLAsL6BwAJ

Regards,
Mike

golang-nuts@googlegroups.com wrote on 08/18/2020 01:56:30 AM:

> From: Mike Spreitzer 
> To: golang-nuts 
> Date: 08/18/2020 01:56 AM
> Subject: [EXTERNAL] [go-nuts] Re: basics of understanding golang memory usage
> Sent by: golang-nuts@googlegroups.com
> 
> I collected data on an example program from four angles.  See 
> https://docs.google.com/document/d/
> 1KUz7IjnD93X2VTVkRAhhNa7rHScCaIH8GLrsoaDIW_g for the raw data and my
> puny attempts to correlate the four views.  There are some exact 
> equalities, some near matches, some gross differences in things that
> seem like they should be the same, and things I wasn't able to match
> in any way.  This raises lots of questions.
> 
> Why is memory.usage_in_bytes from cgroups so much smaller than the 
> other top-line measurements?  What is statm.size counting that the 
> coredump "all" is not?
> 
> I assume that the text part of the connection-agent binary is 
> shared.  Why is the readonly data part of the connection-agent 
> binary _not_ shared?
> 
> What is statm.data counting that MemStats.Sys is not?
> 
> Why is MemStats.HeapIdle so much bigger than the coredump's heap free spans?
> Why is MemStats.HeapReleased so much bigger than the coredump's heap released?
> In MemStats, (HeapIdle - HeapReleased) = 1,490,944
> In coredump, (heap free spans) - (heap released) = 1,613,824 --- not
> so very different.  What is making the roughly 52 MB difference
> between MemStats and the coredump?
> 
> Why is the bss in the coredump so huge?  Does it correspond to 
> anything in any of the other views?
> 
> What do the three big anonymous blocks in the procfs view correspond 
> to in the other views?
> 
> Thanks,
> Mike
> 



RE: [go-nuts] Debug http calls made using go http client

2020-08-23 Thread Mike Spreitzer
Krishna can run TCPMon or tcpdump inside the Kubernetes Pod. Either one 
can be added to the main container, or run in another container in the 
same Pod.

Regards,
Mike

golang-nuts@googlegroups.com wrote on 08/23/2020 01:06:20 PM:

> From: "Tamás Gulácsi" 
> To: golang-nuts 
> Date: 08/23/2020 01:06 PM
> Subject: [EXTERNAL] Re: [go-nuts] Debug http calls made using go http client
> Sent by: golang-nuts@googlegroups.com
> 
> You can print every step with net/http/httptrace's ClientTrace.
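
For concreteness, a minimal sketch of hooking a few of those ClientTrace
callbacks (ClientTrace has more hooks than shown here, and the target URL
is just a placeholder):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	// Log the major milestones of a single request: DNS, connect,
	// connection reuse, request write, and first response byte.
	trace := &httptrace.ClientTrace{
		DNSStart:             func(i httptrace.DNSStartInfo) { fmt.Println("DNS start:", i.Host) },
		DNSDone:              func(i httptrace.DNSDoneInfo) { fmt.Println("DNS done:", i.Addrs, i.Err) },
		ConnectStart:         func(network, addr string) { fmt.Println("connect start:", network, addr) },
		ConnectDone:          func(network, addr string, err error) { fmt.Println("connect done:", addr, err) },
		GotConn:              func(i httptrace.GotConnInfo) { fmt.Println("got conn, reused:", i.Reused) },
		WroteRequest:         func(i httptrace.WroteRequestInfo) { fmt.Println("wrote request:", i.Err) },
		GotFirstResponseByte: func() { fmt.Println("first response byte") },
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}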

> krishna...@gmail.com wrote the following (Sunday, August 23, 2020 at 
> 15:55:50 UTC+2):
> Hello Dimas,
> 
> Thank you for your response. My application is running in a
> Kubernetes cluster and I will not be able to run TCPMon or tcpdump
> separately, as access is restricted. I was looking for something
> that can be embedded within the Go application.
> 
> Regards,
> Krishna
> 
> On Sun, Aug 23, 2020 at 7:13 PM Dimas Prawira wrote:
> There are several tools you can use to help inspect this:
> 
> 1. TCPMon is a Java-based tool for inspecting HTTP calls between a 
> server and a client. TCPMon can also be used to simulate a slow connection. 
> 
> TCPMon works as a proxy, so the setup looks like this:
> 
> [Your apps] ---> [tcpmon] ---> [server]
> 
> 2. tcpdump is a Linux tool that dumps TCP traffic in and out. It can 
> help you inspect the HTTP requests that reach the server.
> 
> 3. traceroute
> You may want to trace the connection from your server to the 
> vendor's server using traceroute; maybe the problem is in the 
> connection.
> 
> Hope that's helpful
> 
> On Sat, Aug 22, 2020, 01:59 krishna...@gmail.com wrote:
> Hello Gophers,
> 
> I am making multiple HTTP calls from my Go application to an 
> external vendor's HTTP server using the Go standard HTTP client. 
> I've set a 10-second timeout on my context. Everything works fine. 
> 
> However, I get random timeouts in my application on these HTTP 
> calls. On further investigation, I found that the HTTP calls to the 
> vendor's server take longer than 10 seconds. 
> During these periods of timeouts, the vendor says they've not received
> any HTTP requests. How do I verify that the HTTP requests are made 
> from my app? If the requests are made from my app, how can I figure 
> out what's causing the delay?
> 
> I tried debugging using the HTTP client trace, but couldn't find any
> actionable information. Any suggestions on how to debug/fix this issue?
> 
> Thanks
> - Krishna 



[go-nuts] Re: basics of understanding golang memory usage

2020-08-17 Thread Mike Spreitzer
I collected data on an example program from four angles.  See 
https://docs.google.com/document/d/1KUz7IjnD93X2VTVkRAhhNa7rHScCaIH8GLrsoaDIW_g 
for the raw data and my puny attempts to correlate the four views.  There 
are some exact equalities, some near matches, some gross differences in 
things that seem like they should be the same, and things I wasn't able to 
match in any way.  This raises lots of questions.

Why is memory.usage_in_bytes from cgroups so much smaller than the other 
top-line measurements?  What is statm.size counting that the coredump "all" 
is not?

I assume that the text part of the connection-agent binary is shared.  Why 
is the readonly data part of the connection-agent binary _not_ shared?

What is statm.data counting that MemStats.Sys is not?

Why is MemStats.HeapIdle so much bigger than the coredump's heap free spans?
Why is MemStats.HeapReleased so much bigger than the coredump's heap 
released?
In MemStats, (HeapIdle - HeapReleased) = 1,490,944
In coredump, (heap free spans) - (heap released) = 1,613,824 --- not so 
very different.  What is making the roughly 52 MB difference between 
MemStats and the coredump?

Why is the bss in the coredump so huge?  Does it correspond to anything in 
any of the other views?

What do the three big anonymous blocks in the procfs view correspond to in 
the other views?

Thanks,
Mike



[go-nuts] Re: basics of understanding golang memory usage

2020-08-17 Thread Mike Spreitzer
FYI, after asking on Slack I got some useful pointers:

- https://golang.org/pkg/runtime/#MemStats - definitions of memory stats 
from the golang runtime
- https://github.com/prometheus/client_golang/blob/master/prometheus/go_collector.go - 
the code that publishes golang runtime memory stats as Prometheus metrics.
- https://man7.org/linux/man-pages/man2/madvise.2.html - has some clues 
about memory blocks in special in-between states.  This explains why the 
above stuff talks about some of the memory that a golang process has as 
being "released to the OS".
- https://github.com/golang/go/issues/32284 - has an interesting 
discussion.  I looked around more than a little, and could not find it 
collected in a wiki page or blog entry or article.
- https://godoc.org/golang.org/x/debug/cmd/viewcore - tool for crawling 
coredumps, mentioned in issue 32284.
- https://golang.org/pkg/runtime/#hdr-Environment_Variables - things you 
can set in GODEBUG.

So I tried viewing the memory usage of a program from four angles.  I got 
some consistency, and some inconsistencies.
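
As a concrete starting point, a short sketch that dumps the MemStats fields
this thread keeps coming back to (the field meanings in the comments are
paraphrased from the runtime documentation):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// See https://golang.org/pkg/runtime/#MemStats for the authoritative definitions.
	fmt.Printf("Sys          = %d\n", m.Sys)          // total bytes obtained from the OS
	fmt.Printf("HeapSys      = %d\n", m.HeapSys)      // heap bytes obtained from the OS
	fmt.Printf("HeapAlloc    = %d\n", m.HeapAlloc)    // bytes of allocated heap objects (reachable plus not-yet-freed)
	fmt.Printf("HeapInuse    = %d\n", m.HeapInuse)    // bytes in in-use heap spans
	fmt.Printf("HeapIdle     = %d\n", m.HeapIdle)     // bytes in idle (unused) heap spans
	fmt.Printf("HeapReleased = %d\n", m.HeapReleased) // idle bytes returned to the OS (e.g. via madvise)
	fmt.Printf("StackSys     = %d\n", m.StackSys)     // stack memory obtained from the OS
	fmt.Printf("StackInuse   = %d\n", m.StackInuse)   // stack memory in use by goroutine stacks
	fmt.Printf("HeapIdle - HeapReleased = %d\n", m.HeapIdle-m.HeapReleased)
}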



[go-nuts] basics of understanding golang memory usage

2020-08-14 Thread Mike Spreitzer
I have several basic questions.  I examined a couple of golang programs, 
and got really confused.

I started by looking at a kube-proxy from Kubernetes release 1.15 (probably 
either 1.15.9 or 1.15.12) on Linux 4.15.0 using `/proc/[pid]/statm`.  
As a reminder, here is what man procfs has to say about that:

  Provides information about memory usage, measured in pages.  The
  columns are:
      size       (1) total program size
                 (same as VmSize in /proc/[pid]/status)
      resident   (2) resident set size
                 (same as VmRSS in /proc/[pid]/status)
      shared     (3) number of resident shared pages (i.e., backed by a file)
                 (same as RssFile+RssShmem in /proc/[pid]/status)
      text       (4) text (code)
      lib        (5) library (unused since Linux 2.6; always 0)
      data       (6) data + stack
      dt         (7) dirty pages (unused since Linux 2.6; always 0)

Here is what I saw:

/proc/10368$ cat statm
35207 8684 6285 4026 0 26265 0

The page size is 4 KiB.  That’s about 16 MB of code plus 108 MB of 
data+stack, plus another 20 MB of something else.  What is that something 
else?  Why is the data+stack so huge?
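
Spelling out the arithmetic behind those figures (4096-byte pages, decimal MB):

  size  = 35207 pages x 4096 B ~ 144.2 MB   (VmSize)
  text  =  4026 pages x 4096 B ~  16.5 MB
  data  = 26265 pages x 4096 B ~ 107.6 MB   (data + stack)
  other = 35207 - 4026 - 26265 = 4916 pages ~ 20.1 MB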

Next I looked at `maps`:

/proc/10368$ sudo cat maps

0040-013ba000 r-xp  fc:01 777880 
/usr/local/bin/kube-proxy

013ba000-026e9000 r--p 00fba000 fc:01 777880 
/usr/local/bin/kube-proxy

026e9000-02749000 rw-p 022e9000 fc:01 777880 
/usr/local/bin/kube-proxy

02749000-0277 rw-p  00:00 0 

c0-c00020 rw-p  00:00 0 

c00020-c00040 rw-p  00:00 0 

c00040-c000a0 rw-p  00:00 0 

c000a0-c00400 rw-p  00:00 0 

7f4b47346000-7f4b49937000 rw-p  00:00 0 

7ffcd6169000-7ffcd618a000 rw-p  00:00 0  [stack]

7ffcd61d-7ffcd61d3000 r--p  00:00 0  [vvar]

7ffcd61d3000-7ffcd61d5000 r-xp  00:00 0  [vdso]

ff60-ff601000 r-xp  00:00 0  
[vsyscall]


The block from c0 to c00400 is about 67 MB, and the block from 
7f4b47346000 to 7f4b49937000 is about another 40 MB.  So that must be most 
of the 108 MB data+stack.  The fact that they are distinct memory ranges 
suggest they are two different kinds of usage.  Which range is which usage?


The first block for /usr/local/bin/kube-proxy is the 4026 pages that procfs 
reports for text.  Why are the other two /usr/local/bin/kube-proxy blocks 
not reported as text?  They add up to about that mysterious 20 MB.  Is the 
first block included in "number of resident shared pages (i.e., backed by a 
file)"?

Next, I tried scraping /metrics from a different golang process.  I have 
lots of questions about the memory related metrics.  Is there any more 
definition of the go_memstats_… metrics beyond their HELP lines?

For example, what is the difference between

# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in 
use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.881872e+06

and

# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 7.33184e+06

?

It clearly is not

# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.885952e+07

… and what is that, anyway?

And then there is this:

# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 5.7155584e+07

… is that “released” as in “cumulative over release operations in the past” 
or as in “currently in released-to-os state”?

Same question for “obtained” in

# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.619136e+07


Then there is

# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack 
allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 917504

does “use by the stack allocator” refer to overhead of the allocator, or 
use for stacks themselves?

There are several metrics described as being “obtained from system” for 
various purposes, but they do not add up to what appears to be the total.

# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.619136e+07

# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures 
obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384

# HELP go