Re: [go-nuts] Access to variable names on goroutine stacks and on the heap in the runtime

2019-04-10 Thread vaastav anand
Is the debug info exported in the binary in DWARF format? And if so, would 
this package work: https://golang.org/pkg/debug/dwarf/?
What about global variables, or variables allocated on the heap? Are those 
also unavailable inside the runtime?

On Wednesday, 10 April 2019 13:28:49 UTC-7, Ian Lance Taylor wrote:
>
> On Tue, Apr 9, 2019 at 7:43 AM > 
> wrote: 
> > 
> > I have been working on a research project where I have been modifying 
> the runtime such that I can control the goroutines that are scheduled as 
> well as get access to the values of program variables. 
> > I know I can access the stack through the g struct for a goroutine but I 
> was wondering if someone could tell me how to get the symbol/object table 
> so that I can figure out the names of the local variables on the stack for 
> the goroutine as well as the variables on the heap. 
> > Any help would be greatly appreciated. 
>
> The names of local variables on the stack are recorded only in the debug 
> information, which is not loaded into memory.  You would need to 
> locate the binary, open it, and look at the debug info.  Getting a 
> local variable name from the debug info is complex, but Delve and gdb 
> manage to do it. 
>
> That is, getting the names of local variables is technically possible 
> but quite hard.  I wouldn't recommend this approach. 
>
> Ian 
>
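
For reference, a minimal sketch of what a running program can see about its 
own stacks without touching DWARF at all: runtime.CallersFrames resolves 
program counters to function names and file:line positions, but not to 
variable names (the demo function below is just an arbitrary frame to print).

package main

import (
	"fmt"
	"runtime"
)

// demo exists only so the printed stack has more than one interesting frame.
func demo() {
	// Collect up to 16 return addresses from the current goroutine's stack,
	// skipping the runtime.Callers frame itself.
	pcs := make([]uintptr, 16)
	n := runtime.Callers(1, pcs)

	frames := runtime.CallersFrames(pcs[:n])
	for {
		frame, more := frames.Next()
		// Function name and file:line are available; local variable names are not.
		fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
}

func main() {
	demo()
}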



Re: [go-nuts] Access to variable names on goroutine stacks and on the heap in the runtime

2019-04-10 Thread vaastav anand
> That is the bare bones of the DWARF information.  That will let you 
> read the DWARF info, but it won't help you map PC and SP values to 
> variable names. 

I am not sure why this is the case. I thought that, along with the DWARF 
info, once I have the frame information for each goroutine's stack, I could 
end up mapping the values to the variables. This frame information is 
available from the runtime's Stack function in src/runtime/mprof.go.
Maybe I missed something?
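
For what it's worth, here is a minimal sketch (assuming a Linux ELF binary) 
of getting at the variable entries with just debug/elf and debug/dwarf. The 
hard part Ian is pointing at is still missing: turning a concrete PC/SP pair 
into an address for each variable requires interpreting the DWARF location 
expressions, which this does not attempt.

package main

import (
	"debug/dwarf"
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	// Locate and open the running binary itself (assumes a Linux ELF executable).
	path, err := os.Executable()
	if err != nil {
		log.Fatal(err)
	}
	f, err := elf.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	d, err := f.DWARF()
	if err != nil {
		log.Fatal(err)
	}

	// Walk the DWARF entries and print the names of functions, parameters and locals.
	r := d.Reader()
	for {
		e, err := r.Next()
		if err != nil {
			log.Fatal(err)
		}
		if e == nil {
			break // end of DWARF data
		}
		switch e.Tag {
		case dwarf.TagSubprogram, dwarf.TagFormalParameter, dwarf.TagVariable:
			if name, ok := e.Val(dwarf.AttrName).(string); ok {
				fmt.Println(e.Tag, name)
			}
		}
	}
}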

> Correct.  Heap variables don't have names at all in any case

OK, so I am assuming that anything on the heap is essentially referenced by 
a pointer variable on the stack?

Sorry if the following is a stupid question: do global variables have no 
name, or is that something that is present in the debugging information? 
(I used to think that the global variables would be somewhere in the code 
segment and thus must have debugging info associated with them.)

PS Thanks so much for all the help! I really do appreciate it.

On Wednesday, 10 April 2019 16:58:36 UTC-7, Ian Lance Taylor wrote:
>
> On Wed, Apr 10, 2019 at 4:34 PM vaastav anand  > wrote: 
> > 
> > Is the debug info exported in the binary in DWARF format? 
>
> Yes. 
>
> > And if so would this package work https://golang.org/pkg/debug/dwarf/? 
>
> That is the bare bones of the DWARF information.  That will let you 
> read the DWARF info, but it won't help you map PC and SP values to 
> variable names. 
>
> > What about the global variables or the ones allocated on the heap? Are 
> they also not available inside the runtime either? 
>
> Correct.  Heap variables don't have names at all in any case. 
>
> Ian 
>
> > On Wednesday, 10 April 2019 13:28:49 UTC-7, Ian Lance Taylor wrote: 
> >> 
> >> On Tue, Apr 9, 2019 at 7:43 AM  wrote: 
> >> > 
> >> > I have been working on a research project where I have been modifying 
> the runtime such that I can control the goroutines that are scheduled as 
> well as get access to the values of program variables. 
> >> > I know I can access the stack through the g struct for a goroutine 
> but I was wondering if someone could tell me how to get the symbol/object 
> table so that I can figure out the names of the local variables on the 
> stack for the goroutine as well as the variables on the heap. 
> >> > Any help would be greatly appreciated. 
> >> 
> >> The names of local variables on the stack are recorded only in the debug 
> >> information, which is not loaded into memory.  You would need to 
> >> locate the binary, open it, and look at the debug info.  Getting a 
> >> local variable name from the debug info is complex, but Delve and gdb 
> >> manage to do it. 
> >> 
> >> That is, getting the names of local variables is technically possible 
> >> but quite hard.  I wouldn't recommend this approach. 
> >> 
> >> Ian 
> > 
>



Re: [go-nuts] Access to variable names on goroutine stacks and on the heap in the runtime

2019-04-10 Thread vaastav anand
Ahh, that makes sense, thank you.

I think this file in Delve, 
https://github.com/go-delve/delve/blob/master/pkg/proc/bininfo.go, does 
exactly what I need, if I am not wrong.

On Wednesday, 10 April 2019 21:04:49 UTC-7, Ian Lance Taylor wrote:
>
> On Wed, Apr 10, 2019 at 5:27 PM vaastav anand  > wrote: 
> > 
> > > That is the bare bones of the DWARF information.  That will let you 
> > read the DWARF info, but it won't help you map PC and SP values to 
> > variable names. 
> > 
> > I am not sure why this is the case. I thought along with the dwarf info, 
> once I have the frame information for each goroutine's stack I could end up 
> mapping the values to the variables. This frame information is available 
> from runtime's Stack function in src/runtime/mprof.go 
> > Maybe I missed something? 
>
> I was unclear.  I don't mean that it can't be done.  I mean that you 
> will need a lot of code beyond what is provided by debug/dwarf. 
>
> > > Correct.  Heap variables don't have names at all in any case 
> > 
> > Ok, I am assuming anything on the heap is essentially referenced by the 
> pointer variable on the stack? 
>
> Yes, or by a global variable, and possibly indirectly via other pointers. 
>
> > Sorry, if the following is a stupid question : Do the global variables 
> have no name or is that something that is present in the debugging 
> information? (I used to think that the global variables would be somewhere 
> in the code segment and thus must have debugging info associated with it.) 
>
> Global variable names should be present in the debug info. 
>
> Ian 
>



[go-nuts] Re: Local Go module

2019-04-16 Thread vaastav anand
You could put such packages at $GOPATH/src/local/name_of_package.

This means that if you have a project that uses that package, the import 
would be as follows:

import "local/name_of_package"

Although, if the package is available online (such as on GitHub), it is 
better to "install" such packages using the 'go get' command. That way the 
package automatically ends up in your GOPATH.
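
For example, with a hypothetical package called mathutil (the package and 
function names here are made up for illustration), the layout and usage in 
GOPATH mode would look like this:

// $GOPATH/src/local/mathutil/mathutil.go
package mathutil

// Double returns twice its argument.
func Double(x int) int { return 2 * x }

// $GOPATH/src/myproject/main.go
package main

import (
	"fmt"

	"local/mathutil"
)

func main() {
	fmt.Println(mathutil.Double(21)) // prints 42
}

Running go build inside $GOPATH/src/myproject then resolves the import from 
your GOPATH without fetching anything from the network.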

On Tuesday, 16 April 2019 17:13:09 UTC-7, Joshua wrote:
>
> Is there any way to install a package locally so that other projects can 
> use it? In the gradle world, this is done using the "install" task.
>
> Joshua
>



[go-nuts] Re: Strange error after adding another sample code

2019-04-29 Thread vaastav anand
The issue is that you have multiple package names in the same folder; in 
this specific case, easygen and easygen_test.
You should have one package per folder.
The reason your test files did not raise this issue is that _test.go files 
are allowed to declare the external test package (easygen_test) and are 
excluded when the package itself is compiled. A file named 
example_execute.go does not get that special treatment.
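
Assuming the new example really is meant to live in the external test 
package, the usual fix is to give the file a _test.go suffix so the go tool 
treats it like the other example files; a sketch (ExampleHypothetical is a 
placeholder name):

// Renamed from example_execute.go to example_execute_test.go.
// _test.go files may declare the external test package (easygen_test) and
// are only compiled by `go test`, never by `go build`.
package easygen_test

import "fmt"

func ExampleHypothetical() {
	fmt.Println("ok")
	// Output: ok
}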

On Monday, 29 April 2019 16:18:29 UTC-7, sunto...@gmail.com wrote:
>
> What does the following error really means? 
>
> can't load package: package ./.: found packages easygen (config.go) and 
> easygen_test (example_execute.go) in /path/to/go-easygen/easygen
> cmd/easygen/flags.go:12:2: found packages easygen (config.go) and 
> easygen_test (example_execute.go) in /path/to/src/
> github.com/go-easygen/easygen
>
> I'm been adding example tests to my package using the "_test" as 
> package without any problem. However, the one I've just added,
>
> https://github.com/go-easygen/easygen/blob/master/example_execute.g0
>
> gives me errors now. and I have no idea why. 
>
> Here is how to duplicate the problem. 
> WIthin the `go-easygen/easygen` folder:
>
>
> $ go test ./... 
> ok  _/path/to/go-easygen/easygen (cached)
> ok  _/path/to/go-easygen/easygen/cmd/easygen (cached)
> ok  _/path/to/go-easygen/easygen/egCal (cached)
> ok  _/path/to/go-easygen/easygen/egVar (cached)
>
>
> $ mv example_execute.g0 example_execute.go
>
>
> $ go test ./... 
> can't load package: package ./.: found packages easygen (config.go) and 
> easygen_test (example_execute.go) in /path/to/go-easygen/easygen
> cmd/easygen/flags.go:12:2: found packages easygen (config.go) and 
> easygen_test (example_execute.go) in /path/to/src/
> github.com/go-easygen/easygen
>
>
> $ head -1 example_test.go > /tmp/f1
>
>
> $ head -1 example_execute.go > /tmp/f2
>
>
> $ diff /tmp/f1 /tmp/f2 && echo same 
> same
>
>
>
> Please help. Thx!
>
>
>



[go-nuts] Re: Need help to launch hello.go

2019-04-29 Thread vaastav anand
It could be due to an anti-virus scanner deleting files.
Here is the relevant issue: https://github.com/golang/go/issues/26195

On Monday, 29 April 2019 22:02:33 UTC-7, Avetis Sargsian wrote:
>
> I set GOTMPDIR to E:\temp folder
>>
> and here is the result 
>
> PS F:\GoWorckspace\src\hello> go install
> open E:\temp\go-build447177998\b001\exe\a.out.exe: The system cannot find 
> the file specified.
>
> PS F:\GoWorckspace\src\hello> go build
> open E:\temp\go-build140959642\b001\exe\a.out.exe: The system cannot find 
> the file specified. 
>
> PS F:\GoWorckspace\src\hello> go run hello.go
> open E:\temp\go-build609689226\b001\exe\hello.exe: The system cannot find 
> the file specified.
>  
>



Re: [go-nuts] Cause of SIGBUS panic in gc?

2019-04-29 Thread vaastav anand
I'd be very surprised if the anonymous goroutine is the reason behind a 
SIGBUS violation.
If I remember SIGBUS correctly, it means that you are issuing a read/write 
to a memory address which is not really addressable, or which is 
misaligned. I think the chances of the address being misaligned are very 
low, so it really has to be a non-existent address.
It can happen if you try to access memory outside a region mmapped into 
your application.
If your application does any kind of mmap or shared memory access, I would 
start there.
In any case, your best bet is to somehow reproduce the bug consistently and 
then figure out which memory access is causing the fault.
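
For example, the classic way to trigger a SIGBUS on Linux is to mmap more of 
a file than actually exists and then touch a page past end-of-file. A 
minimal, Linux-only sketch (the temp file name is arbitrary):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"syscall"
)

func main() {
	// Create a temporary file that is only one byte long.
	f, err := ioutil.TempFile("", "sigbus-demo")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.Write([]byte{1}); err != nil {
		panic(err)
	}

	// Map two pages even though the file only backs part of the first page.
	page := os.Getpagesize()
	data, err := syscall.Mmap(int(f.Fd()), 0, 2*page, syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)

	fmt.Println("first page (zero-filled past EOF):", data[0]) // fine
	fmt.Println("second page:", data[page])                    // no backing data -> SIGBUS
}

Running this should crash with a SIGBUS report from the Go runtime that is 
similar in shape to the one quoted above.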



On Monday, 29 April 2019 21:59:34 UTC-7, Justin Israel wrote:
>
>
> On Thursday, November 29, 2018 at 6:22:56 PM UTC+13, Justin Israel wrote:
>>
>>
>>
>> On Thu, Nov 29, 2018 at 6:20 PM Justin Israel > > wrote:
>>
>>> On Thu, Nov 29, 2018 at 5:32 PM Ian Lance Taylor >> > wrote:
>>>
 On Wed, Nov 28, 2018 at 7:18 PM Justin Israel >>> > wrote:
 >
 > I've got a service that I have been testing quite a lot over the last 
 few days. Only after I handed it off for some testing to a colleague, was 
 he able to produce a SIGBUS panic that I had not seen before:
 >
 > go 1.11.2 linux/amd64
 >
 > The service does set up its own SIGINT/SIGTERM handling via the 
 typical siginal.Notify approach. The nature of the program is that it 
 listens on nats.io message queues, and receives requests to run tasks 
 as sub-processes. My tests have been running between 40-200 of these 
 instances over the course of a few days. But this panic occurred on a 
 completely different machine that those I had been testing...
 >
 > goroutine 1121 [runnable (scan)]:
 > fatal error: unexpected signal during runtime execution
 > panic during panic
 > [signal SIGBUS: bus error code=0x2 addr=0xfa2adc pc=0x451637]
 >
 > runtime stack:
 > runtime.throw(0xcf7fe3, 0x2a)
 > /vol/apps/go/1.11.2/src/runtime/panic.go:608 +0x72
 > runtime.sigpanic()
 > /vol/apps/go/1.11.2/src/runtime/signal_unix.go:374 +0x2f2
 > runtime.gentraceback(0x, 0x, 0x0, 
 0xc0004baa80, 0x0, 0x0, 0x64, 0x0, 0x0, 0x0, ...)
 > /vol/apps/go/1.11.2/src/runtime/traceback.go:190 +0x377
 > runtime.traceback1(0x, 0x, 0x0, 
 0xc0004baa80, 0x0)
 > /vol/apps/go/1.11.2/src/runtime/traceback.go:728 +0xf3
 > runtime.traceback(0x, 0x, 0x0, 
 0xc0004baa80)
 > /vol/apps/go/1.11.2/src/runtime/traceback.go:682 +0x52
 > runtime.tracebackothers(0xc00012e780)
 > /vol/apps/go/1.11.2/src/runtime/traceback.go:947 +0x187
 > runtime.dopanic_m(0xc00012e780, 0x42dcc2, 0x7f83f6ffc808, 0x1)
 > /vol/apps/go/1.11.2/src/runtime/panic.go:805 +0x2aa
 > runtime.fatalthrow.func1()
 > /vol/apps/go/1.11.2/src/runtime/panic.go:663 +0x5f
 > runtime.fatalthrow()
 > /vol/apps/go/1.11.2/src/runtime/panic.go:660 +0x57
 > runtime.throw(0xcf7fe3, 0x2a)
 > /vol/apps/go/1.11.2/src/runtime/panic.go:608 +0x72
 > runtime.sigpanic()
 > /vol/apps/go/1.11.2/src/runtime/signal_unix.go:374 +0x2f2
 > runtime.gentraceback(0x, 0x, 0x0, 
 0xc0004baa80, 0x0, 0x0, 0x7fff, 0x7f83f6ffcd00, 0x0, 0x0, ...)
 > /vol/apps/go/1.11.2/src/runtime/traceback.go:190 +0x377
 > runtime.scanstack(0xc0004baa80, 0xc31270)
 > /vol/apps/go/1.11.2/src/runtime/mgcmark.go:786 +0x15a
 > runtime.scang(0xc0004baa80, 0xc31270)
 > /vol/apps/go/1.11.2/src/runtime/proc.go:947 +0x218
 > runtime.markroot.func1()
 > /vol/apps/go/1.11.2/src/runtime/mgcmark.go:264 +0x6d
 > runtime.markroot(0xc31270, 0xc00047)
 > /vol/apps/go/1.11.2/src/runtime/mgcmark.go:245 +0x309
 > runtime.gcDrain(0xc31270, 0x6)
 > /vol/apps/go/1.11.2/src/runtime/mgcmark.go:882 +0x117
 > runtime.gcBgMarkWorker.func2()
 > /vol/apps/go/1.11.2/src/runtime/mgc.go:1858 +0x13f
 > runtime.systemstack(0x7f83f7ffeb90)
 > /vol/apps/go/1.11.2/src/runtime/asm_amd64.s:351 +0x66
 > runtime.mstart()
 > /vol/apps/go/1.11.2/src/runtime/proc.go:1229
 >
 > Much appreciated for any insight.

 Is the problem repeatable?

 It looks like it crashed while tracing back the stack during garbage
 collection, but I don't know why since the panic was evidently able to
 trace back the stack just fine.

>>>
>>>
>>> Thanks for the reply. Unfortunately it was rare and never happened in my 
>>> own testing of thousands of runs of this service. The colleague that saw 
>>> this crash on one of his workstations was not able to repro it after 
>>> attempting another run of the

Re: [go-nuts] Cause of SIGBUS panic in gc?

2019-04-29 Thread vaastav anand
OK, so in the 2nd piece of code you posted, is some request being pushed 
onto an OS queue? If so, is it possible that you are maxing the queue out, 
and pushing something else into it could then cause a SIGBUS as well? This 
seems super far-fetched, though; it is hard to debug without knowing what 
the application is really doing.

On Monday, 29 April 2019 22:57:40 UTC-7, Justin Israel wrote:
>
>
>
> On Tue, Apr 30, 2019 at 5:43 PM vaastav anand  > wrote:
>
>> I'd be very surprised if the anonymous goroutine is the reason behind a 
>> SIGBUS violation.
>> So, if I remember SIGBUS correctly, it means that you are issuing a 
>> read/write to a memory address which is not really addressable or it is 
>> misaligned. I think the chances of the address being misaligned are very 
>> low.so it really has to be a non-existent address.
>> It can happen if you have try to access memory outside the region mmaped 
>> into your application.
>> If your application has any kind of mmap or shared memory access, I would 
>> start there.
>> In any case your best bet is to somehow reproduce the bug consistently 
>> and then figure out which memory access is causing the fault.
>>
>
> My application isn't doing anything with mmap or shared memory, and my 
> direct and indirect dependencies don't seem to be anything like that 
> either. Its limited to pretty much nats.io client, gnatds embedded 
> server, and a thrift rpc. 
>
> It seems so random that I doubt I could get a reproducible crash. So I can 
> really only try testing this on go 1.11 instead to see if any of the GC 
> work in 1.12 causes this.
>
>
>>
>>
>> On Monday, 29 April 2019 21:59:34 UTC-7, Justin Israel wrote:
>>>
>>>
>>> On Thursday, November 29, 2018 at 6:22:56 PM UTC+13, Justin Israel wrote:
>>>>
>>>>
>>>>
>>>> On Thu, Nov 29, 2018 at 6:20 PM Justin Israel  
>>>> wrote:
>>>>
>>>>> On Thu, Nov 29, 2018 at 5:32 PM Ian Lance Taylor  
>>>>> wrote:
>>>>>
>>>>>> On Wed, Nov 28, 2018 at 7:18 PM Justin Israel  
>>>>>> wrote:
>>>>>> >
>>>>>> > I've got a service that I have been testing quite a lot over the 
>>>>>> last few days. Only after I handed it off for some testing to a 
>>>>>> colleague, 
>>>>>> was he able to produce a SIGBUS panic that I had not seen before:
>>>>>> >
>>>>>> > go 1.11.2 linux/amd64
>>>>>> >
>>>>>> > The service does set up its own SIGINT/SIGTERM handling via the 
>>>>>> typical siginal.Notify approach. The nature of the program is that it 
>>>>>> listens on nats.io message queues, and receives requests to run 
>>>>>> tasks as sub-processes. My tests have been running between 40-200 of 
>>>>>> these 
>>>>>> instances over the course of a few days. But this panic occurred on a 
>>>>>> completely different machine that those I had been testing...
>>>>>> >
>>>>>> > goroutine 1121 [runnable (scan)]:
>>>>>> > fatal error: unexpected signal during runtime execution
>>>>>> > panic during panic
>>>>>> > [signal SIGBUS: bus error code=0x2 addr=0xfa2adc pc=0x451637]
>>>>>> >
>>>>>> > runtime stack:
>>>>>> > runtime.throw(0xcf7fe3, 0x2a)
>>>>>> > /vol/apps/go/1.11.2/src/runtime/panic.go:608 +0x72
>>>>>> > runtime.sigpanic()
>>>>>> > /vol/apps/go/1.11.2/src/runtime/signal_unix.go:374 +0x2f2
>>>>>> > runtime.gentraceback(0x, 0x, 0x0, 
>>>>>> 0xc0004baa80, 0x0, 0x0, 0x64, 0x0, 0x0, 0x0, ...)
>>>>>> > /vol/apps/go/1.11.2/src/runtime/traceback.go:190 +0x377
>>>>>> > runtime.traceback1(0x, 0x, 0x0, 
>>>>>> 0xc0004baa80, 0x0)
>>>>>> > /vol/apps/go/1.11.2/src/runtime/traceback.go:728 +0xf3
>>>>>> > runtime.traceback(0x, 0x, 0x0, 
>>>>>> 0xc0004baa80)
>>>>>> > /vol/apps/go/1.11.2/src/runtime/traceback.go:682 +0x52
>>>>>> > runtime.tracebackothers(0xc00012e780)
>>>>>> > /vol/apps/go/1.11.2/src/runtime/traceback.go:947 +0x

Re: [go-nuts] Cause of SIGBUS panic in gc?

2019-04-29 Thread vaastav anand
I have encountered a SIGBUS with Go before, but I was hacking inside the 
runtime and using shared memory with mmap.

Goroutines are assigned IDs incrementally, and each goroutine has, at bare 
minimum, 2.1KB of stack space in go1.11, down from 2.7KB in go1.10, if I 
recall correctly. So at that point you could easily have burnt through at 
least 7.5GB of memory. I am not sure what happens if you somehow exceed the 
amount of memory available. That seems like a test you could write: see 
whether launching more goroutines than fit in memory can actually cause a 
SIGBUS.
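
Rough arithmetic behind that guess, taking the goroutine id from the crash 
(3,538,668) as a count of goroutines created over the program's lifetime and 
2.1KB as the minimum stack: 3,538,668 x 2.1KB is roughly 7.4GB of minimum 
stack space alone, if none of it were ever reclaimed, before counting the 
per-goroutine g structs and scheduler bookkeeping.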

On Monday, 29 April 2019 23:25:52 UTC-7, Justin Israel wrote:
>
>
>
> On Tue, Apr 30, 2019 at 6:09 PM vaastav anand  > wrote:
>
>> Ok, so in the 2nd piece of code you posted, is some request being pushed 
>> onto some OS queue? If so, is it possible that you may be maxing the queue 
>> out and then pushing something else into it and that could cause a SIGBUS 
>> as well This seems super farfetched tho but it is hard to debug without 
>> really knowing what the application might really be doing.
>>
>
> I want to say that I really appreciate you taking the time to try and give 
> me some possible ideas, even though this is a really vague problem. I had 
> only hoped someone had encountered something similar. 
>
> So that line in the SIGBUS crash is just trying to add a subscription to a 
> message topic callback in the nats client connection:
> https://godoc.org/github.com/nats-io/go-nats#Conn.Subscribe 
> It's pretty high level logic at my application level. 
>
> One thing that stood out to me was that in the crash, the goroutine id 
> number was 3538668. I had to double check to confirm that the go runtime 
> just uses an insignificant increasing number. I guess it does indicate that 
> the application turned over > 3 mil goroutines by that point. I'm wondering 
> if this is caused by something in the gnatsd embedded server (
> https://github.com/nats-io/gnatsd/tree/master/server) since most the 
> goroutines do come from that, with all the client handling going on. If we 
> are talking about something that is managing very large queues, that would 
> be the one doing so in this application.
>  
>
>>
>> On Monday, 29 April 2019 22:57:40 UTC-7, Justin Israel wrote:
>>>
>>>
>>>
>>> On Tue, Apr 30, 2019 at 5:43 PM vaastav anand  
>>> wrote:
>>>
>>>> I'd be very surprised if the anonymous goroutine is the reason behind a 
>>>> SIGBUS violation.
>>>> So, if I remember SIGBUS correctly, it means that you are issuing a 
>>>> read/write to a memory address which is not really addressable or it is 
>>>> misaligned. I think the chances of the address being misaligned are very 
>>>> low.so it really has to be a non-existent address.
>>>> It can happen if you have try to access memory outside the region 
>>>> mmaped into your application.
>>>> If your application has any kind of mmap or shared memory access, I 
>>>> would start there.
>>>> In any case your best bet is to somehow reproduce the bug consistently 
>>>> and then figure out which memory access is causing the fault.
>>>>
>>>
>>> My application isn't doing anything with mmap or shared memory, and my 
>>> direct and indirect dependencies don't seem to be anything like that 
>>> either. Its limited to pretty much nats.io client, gnatds embedded 
>>> server, and a thrift rpc. 
>>>
>>> It seems so random that I doubt I could get a reproducible crash. So I 
>>> can really only try testing this on go 1.11 instead to see if any of the GC 
>>> work in 1.12 causes this.
>>>
>>>
>>>>
>>>>
>>>> On Monday, 29 April 2019 21:59:34 UTC-7, Justin Israel wrote:
>>>>>
>>>>>
>>>>> On Thursday, November 29, 2018 at 6:22:56 PM UTC+13, Justin Israel 
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Nov 29, 2018 at 6:20 PM Justin Israel  
>>>>>> wrote:
>>>>>>
>>>>>>> On Thu, Nov 29, 2018 at 5:32 PM Ian Lance Taylor  
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Wed, Nov 28, 2018 at 7:18 PM Justin Israel  
>>>>>>>> wrote:
>>>>>>>> >
>>>>>>>> > I've got a service that I have been testing quite a lot over the 
>>>>>>>> last few da

Re: [go-nuts] Cause of SIGBUS panic in gc?

2019-04-30 Thread vaastav anand
The stack trace only lists goroutines that are not dead, not system 
goroutines, and not the goroutine that is calling the traceback function 
(src/runtime/traceback.go).
Additionally, I don't think Go reclaims any memory from dead goroutines. 
The allgs slice in src/runtime/proc.go holds all the goroutines that have 
been created during the lifetime of the program, and it is all heap 
allocated. I don't know whether the garbage collector reclaims any of these 
dead goroutines; I suspect it doesn't, because nothing ever seems to be 
removed from allgs.

On Monday, 29 April 2019 23:54:54 UTC-7, Justin Israel wrote:
>
>
>
> On Tue, Apr 30, 2019 at 6:33 PM vaastav anand  > wrote:
>
>> I have encountered a SIGBUS with go before but I was hacking inside the 
>> runtime and using shared mem with mmap.
>>
>> goroutines are assigned IDs incrementally and each goroutine at bare 
>> minimum has 2.1KB stack space in go1.11 down from 2.7KB in go1.10 if I 
>> recall correctly. So, at the very least at that point you could have easily 
>> burnt through at least 7.5GB of memory. I am not sure what could happen if 
>> you somehow exceed the amount of memory available. Seems like that is a 
>> test you could write and see if launching more goroutines than that could 
>> fit in the size of memory could actually cause a SIGBUS.
>>
>
> The stack trace only listed 282 goroutines, which seems about right 
> considering the number of clients that are connected. Its about 3 
> goroutines per client connection, plus the other stuff in the server. I 
> think it just indicates that I have turned over a lot of client connections 
> over time. 
>  
>
>>
>> On Monday, 29 April 2019 23:25:52 UTC-7, Justin Israel wrote:
>>>
>>>
>>>
>>> On Tue, Apr 30, 2019 at 6:09 PM vaastav anand  
>>> wrote:
>>>
>>>> Ok, so in the 2nd piece of code you posted, is some request being 
>>>> pushed onto some OS queue? If so, is it possible that you may be maxing 
>>>> the 
>>>> queue out and then pushing something else into it and that could cause a 
>>>> SIGBUS as well This seems super farfetched tho but it is hard to debug 
>>>> without really knowing what the application might really be doing.
>>>>
>>>
>>> I want to say that I really appreciate you taking the time to try and 
>>> give me some possible ideas, even though this is a really vague problem. I 
>>> had only hoped someone had encountered something similar. 
>>>
>>> So that line in the SIGBUS crash is just trying to add a subscription to 
>>> a message topic callback in the nats client connection:
>>> https://godoc.org/github.com/nats-io/go-nats#Conn.Subscribe 
>>> It's pretty high level logic at my application level. 
>>>
>>> One thing that stood out to me was that in the crash, the goroutine id 
>>> number was 3538668. I had to double check to confirm that the go runtime 
>>> just uses an insignificant increasing number. I guess it does indicate that 
>>> the application turned over > 3 mil goroutines by that point. I'm wondering 
>>> if this is caused by something in the gnatsd embedded server (
>>> https://github.com/nats-io/gnatsd/tree/master/server) since most the 
>>> goroutines do come from that, with all the client handling going on. If we 
>>> are talking about something that is managing very large queues, that would 
>>> be the one doing so in this application.
>>>  
>>>
>>>>
>>>> On Monday, 29 April 2019 22:57:40 UTC-7, Justin Israel wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Apr 30, 2019 at 5:43 PM vaastav anand  
>>>>> wrote:
>>>>>
>>>>>> I'd be very surprised if the anonymous goroutine is the reason behind 
>>>>>> a SIGBUS violation.
>>>>>> So, if I remember SIGBUS correctly, it means that you are issuing a 
>>>>>> read/write to a memory address which is not really addressable or it is 
>>>>>> misaligned. I think the chances of the address being misaligned are very 
>>>>>> low.so it really has to be a non-existent address.
>>>>>> It can happen if you have try to access memory outside the region 
>>>>>> mmaped into your application.
>>>>>> If your application has any kind of mmap or shared memory access, I 
>>>>>> would start there.
>>>>>> In any case your 

Re: [go-nuts] Cause of SIGBUS panic in gc?

2019-04-30 Thread vaastav anand
I was wrong about the GC not getting memory back from the goroutines. I 
think it does get that back through the gcResetMarkState function.
So I don't think the number of goroutines is the issue. I'm sorry if I 
misled you.

On Tuesday, 30 April 2019 00:28:29 UTC-7, vaastav anand wrote:
>
> The stack trace only lists goroutines that are not dead/not system 
> goroutines/not the goroutine that is calling the traceback function. 
> (src/runtime/traceback.go)
> Additionally, I don't think go reclaims any memory from dead goroutines. 
> allgs struct in src/runtime/proc.go file in the go source code holds all 
> the goroutines that have been created during the lifetime of the program 
> and it is all heap allocated. I don't know if the garbage collector 
> reclaims any of these dead goroutines. If it doesn't, which I don't think 
> it does because nothing ever seems to be removed from allgs.
>
> On Monday, 29 April 2019 23:54:54 UTC-7, Justin Israel wrote:
>>
>>
>>
>> On Tue, Apr 30, 2019 at 6:33 PM vaastav anand  
>> wrote:
>>
>>> I have encountered a SIGBUS with go before but I was hacking inside the 
>>> runtime and using shared mem with mmap.
>>>
>>> goroutines are assigned IDs incrementally and each goroutine at bare 
>>> minimum has 2.1KB stack space in go1.11 down from 2.7KB in go1.10 if I 
>>> recall correctly. So, at the very least at that point you could have easily 
>>> burnt through at least 7.5GB of memory. I am not sure what could happen if 
>>> you somehow exceed the amount of memory available. Seems like that is a 
>>> test you could write and see if launching more goroutines than that could 
>>> fit in the size of memory could actually cause a SIGBUS.
>>>
>>
>> The stack trace only listed 282 goroutines, which seems about right 
>> considering the number of clients that are connected. Its about 3 
>> goroutines per client connection, plus the other stuff in the server. I 
>> think it just indicates that I have turned over a lot of client connections 
>> over time. 
>>  
>>
>>>
>>> On Monday, 29 April 2019 23:25:52 UTC-7, Justin Israel wrote:
>>>>
>>>>
>>>>
>>>> On Tue, Apr 30, 2019 at 6:09 PM vaastav anand  
>>>> wrote:
>>>>
>>>>> Ok, so in the 2nd piece of code you posted, is some request being 
>>>>> pushed onto some OS queue? If so, is it possible that you may be maxing 
>>>>> the 
>>>>> queue out and then pushing something else into it and that could cause a 
>>>>> SIGBUS as well This seems super farfetched tho but it is hard to 
>>>>> debug 
>>>>> without really knowing what the application might really be doing.
>>>>>
>>>>
>>>> I want to say that I really appreciate you taking the time to try and 
>>>> give me some possible ideas, even though this is a really vague problem. I 
>>>> had only hoped someone had encountered something similar. 
>>>>
>>>> So that line in the SIGBUS crash is just trying to add a subscription 
>>>> to a message topic callback in the nats client connection:
>>>> https://godoc.org/github.com/nats-io/go-nats#Conn.Subscribe 
>>>> It's pretty high level logic at my application level. 
>>>>
>>>> One thing that stood out to me was that in the crash, the goroutine id 
>>>> number was 3538668. I had to double check to confirm that the go runtime 
>>>> just uses an insignificant increasing number. I guess it does indicate 
>>>> that 
>>>> the application turned over > 3 mil goroutines by that point. I'm 
>>>> wondering 
>>>> if this is caused by something in the gnatsd embedded server (
>>>> https://github.com/nats-io/gnatsd/tree/master/server) since most the 
>>>> goroutines do come from that, with all the client handling going on. If we 
>>>> are talking about something that is managing very large queues, that would 
>>>> be the one doing so in this application.
>>>>  
>>>>
>>>>>
>>>>> On Monday, 29 April 2019 22:57:40 UTC-7, Justin Israel wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Apr 30, 2019 at 5:43 PM vaastav anand  
>>>>>> wrote:
>>>>>>
>>>>>>> I'd be very surprised if the anonymous goroutine is the reason 
>>>>>>> behind a SIGBUS violation.