Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Burak Serdar
On Thu, May 2, 2019 at 6:34 PM Tyler Compton  wrote:
>
> I took a quick look and yes, it uses unsafe to convert between byte slices 
> and strings. I don't know enough to say that it's the problem but here's an 
> example:
>
> https://github.com/valyala/fasthttp/blob/645361952477dfc16938fb2993065130ed7c02b9/bytesconv.go#L380

You could experiment by feeding that corrupt string to something
trivial, like calling strings.ToLower(r.Second_subid) before the Sprintf,
and seeing if it panics inside ToLower instead. Then you can add more
ToLower calls, backtracking up the stack, to pinpoint where that string
gets corrupted.

Or, you can just use the "slow" http, see if that works, and not look back.

>
> On Thu, May 2, 2019 at 5:16 PM Burak Serdar  wrote:
>>
>> On Thu, May 2, 2019 at 6:02 PM XXX ZZZ  wrote:
>> >
>> > No use of C via CGO at all.
>> >
>> > Afaik, there isn't any unsafe use of the string; we are basically reading 
>> > it from a GET parameter (fasthttp server) on an HTTP request and then 
>> > adding it to this structure. Most of the time it's just a 5-character 
>> > string. This panic happens once out of several million requests.
>>
>> Does this "fasthttp" have any unsafe pointers?
>>
>>
>> >
>> > I failed to find any kind of race using the Go race detector. I'm currently 
>> > doing some more debugging; hopefully I should have more info/tests soon.
>> >
>> > El jueves, 2 de mayo de 2019, 20:44:33 (UTC-3), Burak Serdar escribió:
>> >>
>> >> On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor  wrote:
>> >> >
>> >> > On Thu, May 2, 2019 at 2:50 PM Anthony Martin  wrote:
>> >> > >
>> >> > > What version of Go are you using?
>> >> > >
>> >> > > XXX ZZZ  once said:
>> >> > > > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>> >> > > > /usr/local/go/src/fmt/print.go:448 +0x132
>> >> > > > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
>> >> > > > /usr/local/go/src/fmt/print.go:684 +0x880
>> >> > > > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 
>> >> > > > 0x1)
>> >> > > > /usr/local/go/src/fmt/print.go:1112 +0x3ff
>> >> > > > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
>> >> > > > /usr/local/go/src/fmt/print.go:214 +0x66
>> >> > >
>> >> > > This shows signs of memory corruption. The last argument passed to
>> >> > > fmtString (0xc00076) should be the same as the last argument
>> >> > > passed to printArg (0x76 or 'v') but it has some high bits set. Also,
>> >> > > the pointer to the format string data changes from 0xa6e22f (which is
>> >> > > probably in the .rodata section of the binary) to 0x0.
>> >> > >
>> >> > > Something is amiss.
>> >> >
>> >> > The change from 0x76 to 0xc00076 does not necessarily indicate a
>> >> > problem.  The stack backtrace does not know the types.  The value here
>> >> > is a rune, which is 32 bits.  The compiler will only set the low order
>> >> > 32 bits on the stack, leaving the high order 32 bits unset.  So the
>> >> > 0xc0 could just be garbage left on the stack.
>> >> >
>> >> > I don't *think* the format string is changing.  I think the 0 is from
>> >> > the string being printed, not the format string.  They both happen to
>> >> > be length 5.
>> >>
>> >> There's something that doesn't make sense here. The 0 is from the
>> >> string being printed, it is not the format string. But how can that
>> >> be?
>> >>
>> >> Even if there is a race, the string header cannot have a nil data
>> >> pointer, can it? So the other option is that when Sprintf is called, the string being
>> >> printed is already corrupt. Can there be an overflow somewhere that is
>> >> somehow undetected? Any unsafe use in the program?
>> >>
>> >>
>> >> >
>> >> > Ian
>> >> >
>> >> > --
>> >> > You received this message because you are subscribed to the Google 
>> >> > Groups "golang-nuts" group.
>> >> > To unsubscribe from this group and stop receiving emails from it, send 
>> >> > an email to golan...@googlegroups.com.
>> >> > For more options, visit https://groups.google.com/d/optout.
>> >
>>



Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
Ah, yes and no: the code in the play link below only has three 
channels: ops, done, and ready. I just realised I made ready redundant by 
moving the close into the clause that processes incoming ops, so it's 
unused as well. I managed to trim it down to just one channel, the ops 
channel; the done logic uses a 'comma ok' receive and breaks when that 
yields false, otherwise it pushes the op back on the channel in case it 
was meant for the main loop: https://play.golang.org/p/zuNAJvwRlf-
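
The 'comma ok' close-detection described here reduces to a self-contained sketch (the names below are illustrative, not taken from the linked playground code): receiving with two return values reports false once the channel is closed and drained, which can replace a separate done channel.

```go
package main

import "fmt"

func main() {
	ops := make(chan func(), 8) // buffered so main can queue, then drain
	counter := 0
	ops <- func() { counter++ }
	ops <- func() { counter++ }
	close(ops) // closing ops is the shutdown signal; no done channel needed

	for {
		fn, ok := <-ops // comma-ok: ok goes false after close + drain
		if !ok {
			break
		}
		fn()
	}
	fmt.Println("ops applied:", counter) // ops applied: 2
}
```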

On Friday, 3 May 2019 02:50:42 UTC+2, Louki Sumirniy wrote:
>
> oh, I did forget one thing. The race detector does not flag a race in this 
> code: https://play.golang.org/p/M1uGq1g4vjo (play refuses to run it 
> though)
>
> As I understand it, that's because the add/subtract operations are 
> happening serially within the main handler goroutine. I suppose if I were 
> to change my 'worker count' query to just print the value right there in 
> the select statement, the race would disappear, but that's not *that* 
> convenient. Though it covers the case of debugging, really. But how is it 
> not still a race since the data is being copied to send to a TTY?
>
> It would be quite handy, though, if one could constrain the race detector 
> by telling the compiler somehow that 'this goroutine owns that variable' 
> and any reads are ignored. It isn't really exactly a race condition to 
> sample the state at any given moment to give potentially useful information 
> to the caller. 
>
> On Friday, 3 May 2019 02:39:09 UTC+2, Louki Sumirniy wrote:
>>
>> I more or less eventually figured that out since it is impossible to 
>> query the number of workers without a race anyway, and then I started 
>> toying with atomic.Value and made that one race as well (obviously the 
>> value was copied by fmt.Println). I guess keeping track of the number of 
>> workers is on the caller side not on the waitgroup side, the whole thing is 
>> a black box because of the ease with which race conditions can arise when 
>> you let things inside the box. 
>>
>> The thing that I find odd though, is it is impossible to not trip the 
>> race detector, period, when copying that value out, it sees where it goes. 
>> The thing is that in the rest of the library, no operation on the worker 
>> counter triggers the race, I figure that's because it's one goroutine and 
>> the other functions are separate. As soon as the internal value crosses 
>> outwards as caller adds and subtracts workers concurrently, it is racy, but 
>> I don't see how reading a maybe racy value itself is racy if I am not going 
>> to do anything other than tell the user how many workers are running at a 
>> given moment. It wouldn't be to make any concrete, immediate decision to 
>> act based on this. Debugging is a prime example of when you want to read 
>> racy data but have no need to write back where it is being rendered to the 
>> user.
>>
>> Ah well, newbie questions. I think that part of the reason why for many 
>> people goroutines and channels are so fascinating is about concurrency, but 
>> not just concurrency, distributed processing more so. Distributed systems 
>> need concurrent and asynchronous responses to network activity, and 
>> channels are a perfect fit for eliminating context switch overhead from 
>> operations that span many machines.
>>
>> On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>>>
>>> Channels use sync primitives under the hood so you are not saving 
>>> anything by using multiple channels instead of a single wait group. 
>>>
>>> On May 2, 2019, at 5:57 PM, Louki Sumirniy  
>>> wrote:
>>>
>>> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
>>> with channels instead of the stdlib's sync/atomic counters, and using a 
>>> special type of concurrent data type called a PN-Counter Convergent 
>>> Replicated Data Type. Well, I'm not sure if this implementation precisely implements 
>>> this type of CRDT, but it does work, and I wanted to share it. Note that 
>>> play doesn't like these long running (?) examples, so here it is verbatim 
>>> as I just finished writing it:
>>>
>>> package chanwg
>>>
>>> import "fmt"
>>>
>>> type WaitGroup struct {
>>> workers uint
>>> ops chan func()
>>> ready chan struct{}
>>> done chan struct{}
>>> }
>>>
>>> func New() *WaitGroup {
>>> wg := &WaitGroup{
>>> ops: make(chan func()),
>>> done: make(chan struct{}),
>>> ready: make(chan struct{}),
>>> }
>>> go func() {
>>> // wait loop doesn't start until something is put into the
>>> done := false
>>> for !done {
>>> select {
>>> case fn := <-wg.ops:
>>> println("received op")
>>> fn()
>>> fmt.Println("num workers:", wg.WorkerCount())
>>> // if !(wg.workers < 1) {
>>> //  println("wait counter at zero")
>>> //  done = true
>>> 

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
oh, I did forget one thing. The race detector does not flag a race in this 
code: https://play.golang.org/p/M1uGq1g4vjo (play refuses to run it though)

As I understand it, that's because the add/subtract operations are 
happening serially within the main handler goroutine. I suppose if I were 
to change my 'worker count' query to just print the value right there in 
the select statement, the race would disappear, but that's not *that* 
convenient. Though it covers the case of debugging, really. But how is it 
not still a race since the data is being copied to send to a TTY?

It would be quite handy, though, if one could constrain the race detector 
by telling the compiler somehow that 'this goroutine owns that variable' 
and any reads are ignored. It isn't really exactly a race condition to 
sample the state at any given moment to give potentially useful information 
to the caller. 

On Friday, 3 May 2019 02:39:09 UTC+2, Louki Sumirniy wrote:
>
> I more or less eventually figured that out since it is impossible to query 
> the number of workers without a race anyway, and then I started toying with 
> atomic.Value and made that one race as well (obviously the value was copied 
> by fmt.Println). I guess keeping track of the number of workers is on the 
> caller side not on the waitgroup side, the whole thing is a black box 
> because of the ease with which race conditions can arise when you let 
> things inside the box. 
>
> The thing that I find odd though, is it is impossible to not trip the race 
> detector, period, when copying that value out, it sees where it goes. The 
> thing is that in the rest of the library, no operation on the worker 
> counter triggers the race, I figure that's because it's one goroutine and 
> the other functions are separate. As soon as the internal value crosses 
> outwards as caller adds and subtracts workers concurrently, it is racy, but 
> I don't see how reading a maybe racy value itself is racy if I am not going 
> to do anything other than tell the user how many workers are running at a 
> given moment. It wouldn't be to make any concrete, immediate decision to 
> act based on this. Debugging is a prime example of when you want to read 
> racy data but have no need to write back where it is being rendered to the 
> user.
>
> Ah well, newbie questions. I think that part of the reason why for many 
> people goroutines and channels are so fascinating is about concurrency, but 
> not just concurrency, distributed processing more so. Distributed systems 
> need concurrent and asynchronous responses to network activity, and 
> channels are a perfect fit for eliminating context switch overhead from 
> operations that span many machines.
>
> On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>>
>> Channels use sync primitives under the hood so you are not saving 
>> anything by using multiple channels instead of a single wait group. 
>>
>> On May 2, 2019, at 5:57 PM, Louki Sumirniy  
>> wrote:
>>
>> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
>> with channels instead of the stdlib's sync/atomic counters, and using a 
>> special type of concurrent data type called a PN-Counter Convergent 
>> Replicated Data Type. Well, I'm not sure if this implementation precisely implements 
>> this type of CRDT, but it does work, and I wanted to share it. Note that 
>> play doesn't like these long running (?) examples, so here it is verbatim 
>> as I just finished writing it:
>>
>> package chanwg
>>
>> import "fmt"
>>
>> type WaitGroup struct {
>> workers uint
>> ops chan func()
>> ready chan struct{}
>> done chan struct{}
>> }
>>
>> func New() *WaitGroup {
>> wg := &WaitGroup{
>> ops: make(chan func()),
>> done: make(chan struct{}),
>> ready: make(chan struct{}),
>> }
>> go func() {
>> // wait loop doesn't start until something is put into the
>> done := false
>> for !done {
>> select {
>> case fn := <-wg.ops:
>> println("received op")
>> fn()
>> fmt.Println("num workers:", wg.WorkerCount())
>> // if !(wg.workers < 1) {
>> //  println("wait counter at zero")
>> //  done = true
>> //  close(wg.done)
>> // }
>> default:
>> }
>> }
>>
>> }()
>> return wg
>> }
>>
>> // Add adds a non-negative number
>> func (wg *WaitGroup) Add(delta int) {
>> if delta < 0 {
>> return
>> }
>> fmt.Println("adding", delta, "workers")
>> wg.ops <- func() {
>> wg.workers += uint(delta)
>> }
>> }
>>
>> // Done subtracts a non-negative value from the workers count
>> func (wg *WaitGroup) Done(delta int) {
>> println("worker finished")
>> if delta < 0 {
>> return
>> }
>> println("pushing op to channel")
>> wg.ops <- func() {
>> println("finishing")
>> 

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
I more or less eventually figured that out, since it is impossible to query 
the number of workers without a race anyway. Then I started toying with 
atomic.Value and made that racy as well (obviously the value was copied by 
fmt.Println). I guess keeping track of the number of workers belongs on the 
caller side, not the waitgroup side; the whole thing is a black box because 
of the ease with which race conditions arise when you reach inside the box.

The thing I find odd, though, is that it is impossible not to trip the race 
detector when copying that value out; it sees where the value goes. In the 
rest of the library, no operation on the worker counter triggers the race 
detector; I figure that's because it's all one goroutine and the other 
functions are separate. As soon as the internal value crosses outwards while 
the caller adds and subtracts workers concurrently, it is racy. But I don't 
see how merely reading a possibly racy value is itself a race if I do 
nothing with it other than tell the user how many workers are running at a 
given moment; it wouldn't be used to make any concrete, immediate decision. 
Debugging is a prime example of wanting to read racy data with no need to 
write anything back from where it is rendered to the user.

Ah well, newbie questions. I think that part of the reason why for many 
people goroutines and channels are so fascinating is about concurrency, but 
not just concurrency, distributed processing more so. Distributed systems 
need concurrent and asynchronous responses to network activity, and 
channels are a perfect fit for eliminating context switch overhead from 
operations that span many machines.

On Friday, 3 May 2019 01:33:53 UTC+2, Robert Engels wrote:
>
> Channels use sync primitives under the hood so you are not saving anything 
> by using multiple channels instead of a single wait group. 
>
> On May 2, 2019, at 5:57 PM, Louki Sumirniy  > wrote:
>
> As I mentioned earlier, I wanted to see if I could implement a waitgroup 
> with channels instead of the stdlib's sync/atomic counters, and using a 
> special type of concurrent data type called a PN-Counter Convergent 
> Replicated Data Type. Well, I'm not sure if this implementation precisely implements 
> this type of CRDT, but it does work, and I wanted to share it. Note that 
> play doesn't like these long running (?) examples, so here it is verbatim 
> as I just finished writing it:
>
> package chanwg
>
> import "fmt"
>
> type WaitGroup struct {
> 	workers uint
> 	ops     chan func()
> 	ready   chan struct{}
> 	done    chan struct{}
> }
>
> func New() *WaitGroup {
> 	wg := &WaitGroup{
> 		ops:   make(chan func()),
> 		done:  make(chan struct{}),
> 		ready: make(chan struct{}),
> 	}
> 	go func() {
> 		// wait loop doesn't start until something is put into the
> 		done := false
> 		for !done {
> 			select {
> 			case fn := <-wg.ops:
> 				println("received op")
> 				fn()
> 				fmt.Println("num workers:", wg.WorkerCount())
> 				// if !(wg.workers < 1) {
> 				//	println("wait counter at zero")
> 				//	done = true
> 				//	close(wg.done)
> 				// }
> 			default:
> 			}
> 		}
>
> 	}()
> 	return wg
> }
>
> // Add adds a non-negative number
> func (wg *WaitGroup) Add(delta int) {
> 	if delta < 0 {
> 		return
> 	}
> 	fmt.Println("adding", delta, "workers")
> 	wg.ops <- func() {
> 		wg.workers += uint(delta)
> 	}
> }
>
> // Done subtracts a non-negative value from the workers count
> func (wg *WaitGroup) Done(delta int) {
> 	println("worker finished")
> 	if delta < 0 {
> 		return
> 	}
> 	println("pushing op to channel")
> 	wg.ops <- func() {
> 		println("finishing")
> 		wg.workers -= uint(delta)
> 	}
> 	// println("op should have cleared by now")
> }
>
> // Wait blocks until the waitgroup decrements to zero
> func (wg *WaitGroup) Wait() {
> 	println("a worker is waiting")
> 	<-wg.done
> 	println("job done")
> }
>
> func (wg *WaitGroup) WorkerCount() int {
> 	return int(wg.workers)
> }
>
>
> There could be some bug lurking in there, I'm not sure, but it runs 
> exactly as I want it to, and all the debug prints show you how it works.
>
> Possibly one does not need channels carrying functions that mutate the 
> counter; maybe the counter could just be incremented/decremented directly 
> within a select statement. I've gotten really used to generator functions; 
> they are extremely easy to use and so greatly simplify and modularise my 
> code that I am now tackling far more complex work (if cyclomatic complexity 
> is a measure: over 130 paths in a menu system I wrote that uses generators 
> to parse a declaration of data types that also uses generators).
>
> I suppose the thing is it wouldn't be hard 

Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Robert Engels
Whenever I see fast* I think someone took shortcuts to make something “faster” 
without fully implementing the spec (or while ignoring external constraints, 
like safe data access).

> On May 2, 2019, at 7:16 PM, Burak Serdar  wrote:
> 
>> On Thu, May 2, 2019 at 6:02 PM XXX ZZZ  wrote:
>> 
>> No use of C via CGO at all.
>> 
>> Afaik, there isn't any unsafe use of the string; we are basically reading it 
>> from a GET parameter (fasthttp server) on an HTTP request and then adding it 
>> to this structure. Most of the time it's just a 5-character string. This 
>> panic happens once out of several million requests.
> 
> Does this "fasthttp" have any unsafe pointers?
> 
> 
>> 
>> I failed to find any kind of race using the Go race detector. I'm currently 
>> doing some more debugging; hopefully I should have more info/tests soon.
>> 
>> El jueves, 2 de mayo de 2019, 20:44:33 (UTC-3), Burak Serdar escribió:
>>> 
>>> On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor  wrote:
 
> On Thu, May 2, 2019 at 2:50 PM Anthony Martin  wrote:
> 
> What version of Go are you using?
> 
> XXX ZZZ  once said:
>> fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>>/usr/local/go/src/fmt/print.go:448 +0x132
>> fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
>>/usr/local/go/src/fmt/print.go:684 +0x880
>> fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
>>/usr/local/go/src/fmt/print.go:1112 +0x3ff
>> fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
>>/usr/local/go/src/fmt/print.go:214 +0x66
> 
> This shows signs of memory corruption. The last argument passed to
> fmtString (0xc00076) should be the same as the last argument
> passed to printArg (0x76 or 'v') but it has some high bits set. Also,
> the pointer to the format string data changes from 0xa6e22f (which is
> probably in the .rodata section of the binary) to 0x0.
> 
> Something is amiss.
 
 The change from 0x76 to 0xc00076 does not necessarily indicate a
 problem.  The stack backtrace does not know the types.  The value here
 is a rune, which is 32 bits.  The compiler will only set the low order
 32 bits on the stack, leaving the high order 32 bits unset.  So the
 0xc0 could just be garbage left on the stack.
 
 I don't *think* the format string is changing.  I think the 0 is from
 the string being printed, not the format string.  They both happen to
 be length 5.
>>> 
>>> There's something that doesn't make sense here. The 0 is from the
>>> string being printed, it is not the format string. But how can that
>>> be?
>>> 
>>> Even if there is a race, the string cannot have a 0 for the slice, can
>>> it? So the other option is when Sprintf is called, the string being
>>> printed is already corrupt. Can there be an overflow somewhere that is
>>> somehow undetected? Any unsafe use in the program?
>>> 
>>> 
 
 Ian
 
>> 
> 



Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Tyler Compton
I took a quick look and yes, it uses unsafe to convert between byte slices
and strings. I don't know enough to say that it's the problem but here's an
example:

https://github.com/valyala/fasthttp/blob/645361952477dfc16938fb2993065130ed7c02b9/bytesconv.go#L380

On Thu, May 2, 2019 at 5:16 PM Burak Serdar  wrote:

> On Thu, May 2, 2019 at 6:02 PM XXX ZZZ  wrote:
> >
> > No use of C via CGO at all.
> >
> > Afaik, there isn't any unsafe use of the string; we are basically
> reading it from a GET parameter (fasthttp server) on an HTTP request and
> then adding it to this structure. Most of the time it's just a
> 5-character string. This panic happens once out of several million requests.
>
> Does this "fasthttp" have any unsafe pointers?
>
>
> >
> > I failed to find any kind of race using the Go race detector. I'm currently
> doing some more debugging; hopefully I should have more info/tests soon.
> >
> > El jueves, 2 de mayo de 2019, 20:44:33 (UTC-3), Burak Serdar escribió:
> >>
> >> On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor 
> wrote:
> >> >
> >> > On Thu, May 2, 2019 at 2:50 PM Anthony Martin 
> wrote:
> >> > >
> >> > > What version of Go are you using?
> >> > >
> >> > > XXX ZZZ  once said:
> >> > > > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
> >> > > > /usr/local/go/src/fmt/print.go:448 +0x132
> >> > > > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> >> > > > /usr/local/go/src/fmt/print.go:684 +0x880
> >> > > > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818,
> 0x1, 0x1)
> >> > > > /usr/local/go/src/fmt/print.go:1112 +0x3ff
> >> > > > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> >> > > > /usr/local/go/src/fmt/print.go:214 +0x66
> >> > >
> >> > > This shows signs of memory corruption. The last argument passed to
> >> > > fmtString (0xc00076) should be the same as the last argument
> >> > > passed to printArg (0x76 or 'v') but it has some high bits set.
> Also,
> >> > > the pointer to the format string data changes from 0xa6e22f (which
> is
> >> > > probably in the .rodata section of the binary) to 0x0.
> >> > >
> >> > > Something is amiss.
> >> >
> >> > The change from 0x76 to 0xc00076 does not necessarily indicate a
> >> > problem.  The stack backtrace does not know the types.  The value here
> >> > is a rune, which is 32 bits.  The compiler will only set the low order
> >> > 32 bits on the stack, leaving the high order 32 bits unset.  So the
> >> > 0xc0 could just be garbage left on the stack.
> >> >
> >> > I don't *think* the format string is changing.  I think the 0 is from
> >> > the string being printed, not the format string.  They both happen to
> >> > be length 5.
> >>
> >> There's something that doesn't make sense here. The 0 is from the
> >> string being printed, it is not the format string. But how can that
> >> be?
> >>
> >> Even if there is a race, the string cannot have a 0 for the slice, can
> >> it? So the other option is when Sprintf is called, the string being
> >> printed is already corrupt. Can there be an overflow somewhere that is
> >> somehow undetected? Any unsafe use in the program?
> >>
> >>
> >> >
> >> > Ian
> >> >
> >
>
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Robert Engels
I only mentioned it because you wrote:

> The interface netmask is part of the IP part of the header 


and I’m also fairly certain it is not part of ARP - ARP maps IP addresses to 
MAC addresses on the local subnet. 

> On May 2, 2019, at 7:22 PM, Louki Sumirniy  
> wrote:
> 
> The interface netmask is part of the IP part of the header



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
I'm quite aware of that; it's part of the ARP process, and it allows the 
router to quickly determine which port to send to. If you send, say, 
192.168.1.1 to a router configured with DHCP for 192.168.0.x/24, it first 
applies the mask by ANDing the address against each port's configured 
network/gateway to find the port that matches (the mask strips off the 
arbitrary host part of the range); if none matches, it returns a no-path 
error packet to the machine that sent it.

Responding back to John Dreystadt - since netmasks are about ARP, address 
resolution, and routing, and not DHCP or BIND, they most definitely do belong 
in the "net" library. Firstly, yes, you could just change it to a set of 
constants, but why? Secondly, if you aren't going to bake the non-routeable 
address ranges' default netmasks into the network library, then where?

The use case is easy for me to see: dynamic cloud service providers need to 
generate virtual IP addresses for virtual LANs in a cluster. Sure, you could 
force the dev/admin to input the netmask with every request when generating 
new addresses for virtual interfaces, but why? I might even go at it from 
the opposite direction: what about a function that gives you an address 
range/netmask based on the number of addresses you want? Isn't that much 
the same thing, and also necessary for dynamic VLAN management?

On Thursday, 2 May 2019 14:38:04 UTC+2, Robert Engels wrote:
>
> The net mask is not part of the ip packet. It is a local config in the 
> router.
>
> On May 2, 2019, at 7:20 AM, Louki Sumirniy  > wrote:
>
> Upon review one thing occurred to me also - Netmasks are specifically a 
> fast way to decide at the router which direction a packet should go. The 
> interface netmask is part of the IP part of the header and allows the 
> router to quickly determine whether a packet should go to the external 
> rather than internal interface.
>
> When you use the expression 'should x exist in todays internet', an 
> unspoken aspect of this has to do with IPv6, which does not have a formal 
> NAT specification, and 'local address' range that is as big as the whole 
> IPv4 is now. This serves a similar purpose for routing as a netmask in 
> IPv4, but IPv6 specifically aims to solve the problem of allowing inbound 
> routing to any node. The address shortage that was resolved by CIDR and NAT 
> is not relevant to IPv6, and I believe, in general, applications are 
> written to generate valid addresses proactively and only change it in the 
> rare case it randomly selects an address already in use. This is an 
> optimistic algorithm that can save a lot of latency for a highly dynamic 
> server application running on many worker node machines.
>
> Yes, it's long past due that we abandon IPv4 and NAT, peer to peer 
> applications and dynamic cloud applications are becoming the dominant form 
> for applications and the complexity of arranging peer to peer connections 
> in this environment is quite high compared to IPv6. IPv6 does not need 
> masks as they are built into the 128 bit address coding system.
>
> On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>>
>> The function has a very specific purpose that I have encountered in 
>> several applications, that being to automatically set the netmask based on 
>> the IP being one of the several defined ones, 192, 10, and I forget which 
>> others. 
>>
>> An incorrect netmask can result in a node failing to recognise a valid 
>> LAN address. A 192.168.x.0/24 network has 254 usable host addresses. You 
>> can't just presume to make a new 192.168.X... address with a /16, as no 
>> other node correctly configured with a /24 will be able to route to it. 
>>
>> If you consider the example of an elastic cloud type network environment, 
>> it is important that all nodes agree on netmask or they will become 
>> (partially) disconnected from each other. An app can be spun up for a few 
>> seconds and grab a new address from the range, this could be done with a 
>> broker (eg dhcp), but especially with cloud, one could use a /8 address 
>> range and randomly select out of the 16 million possible, a big enough 
>> space that random generally won't cause a collision - which is a cheaper 
>> allocation procedure than a list managing broker, and would be more suited 
>> to the dynamic cloud environment.
>>
>> This function allows this type of client-side decision-making that a 
>> broker bottlenecks into a service, creating an extra startup latency cost. 
>> A randomly generated IP address takes far less time than sending a request 
>> to a centralised broker and receiving it.
>>
>> That's just one example I can think of where a pre-made list of netmasks 
>> is useful, I'm sure more experienced network programmers can rattle off a 
>> laundry list.
>>
>> On Monday, 11 March 2019 20:45:32 UTC+1, John Dreystadt wrote:
>>>
>>> Yes, I was mistaken on this point. I confused someone's discussion of 
>>> RFC 1918 with what the standard actually says. 

Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Burak Serdar
On Thu, May 2, 2019 at 6:02 PM XXX ZZZ  wrote:
>
> No use of C via CGO at all.
>
> Afaik, there isn't any unsafe use of the string; we are basically reading it 
> from a GET parameter (fasthttp server) on an HTTP request and then adding it 
> into this structure. Most of the time it is just a 5-char string. Out of 
> several million requests, this panic happens.

Does this "fasthttp" have any unsafe pointers?


>
> I failed to find any kind of race using the Go race detector; I'm currently 
> doing some more debugging, hopefully I should have more info/tests soon.
>
> El jueves, 2 de mayo de 2019, 20:44:33 (UTC-3), Burak Serdar escribió:
>>
>> On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor  wrote:
>> >
>> > On Thu, May 2, 2019 at 2:50 PM Anthony Martin  wrote:
>> > >
>> > > What version of Go are you using?
>> > >
>> > > XXX ZZZ  once said:
>> > > > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>> > > > /usr/local/go/src/fmt/print.go:448 +0x132
>> > > > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
>> > > > /usr/local/go/src/fmt/print.go:684 +0x880
>> > > > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
>> > > > /usr/local/go/src/fmt/print.go:1112 +0x3ff
>> > > > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
>> > > > /usr/local/go/src/fmt/print.go:214 +0x66
>> > >
>> > > This shows signs of memory corruption. The last argument passed to
>> > > fmtString (0xc00076) should be the same as the last argument
>> > > passed to printArg (0x76 or 'v') but it has some high bits set. Also,
>> > > the pointer to the format string data changes from 0xa6e22f (which is
>> > > probably in the .rodata section of the binary) to 0x0.
>> > >
>> > > Something is amiss.
>> >
>> > The change from 0x76 to 0xc00076 does not necessarily indicate a
>> > problem.  The stack backtrace does not know the types.  The value here
>> > is a rune, which is 32 bits.  The compiler will only set the low order
>> > 32 bits on the stack, leaving the high order 32 bits unset.  So the
>> > 0xc0 could just be garbage left on the stack.
>> >
>> > I don't *think* the format string is changing.  I think the 0 is from
>> > the string being printed, not the format string.  They both happen to
>> > be length 5.
>>
>> There's something that doesn't make sense here. The 0 is from the
>> string being printed, it is not the format string. But how can that
>> be?
>>
>> Even if there is a race, the string cannot have a 0 for the slice, can
>> it? So the other option is when Sprintf is called, the string being
>> printed is already corrupt. Can there be an overflow somewhere that is
>> somehow undetected? Any unsafe use in the program?
>>
>>
>> >
>> > Ian
>> >
>> > --
>> > You received this message because you are subscribed to the Google Groups 
>> > "golang-nuts" group.
>> > To unsubscribe from this group and stop receiving emails from it, send an 
>> > email to golan...@googlegroups.com.
>> > For more options, visit https://groups.google.com/d/optout.
>



Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread XXX ZZZ
No use of C via CGO at all.

Afaik, there isn't any unsafe use of the string; we are basically reading 
it from a GET parameter (fasthttp server) on an HTTP request and then 
adding it into this structure. Most of the time it is just a 5-char string. 
Out of several million requests, this panic happens.

I failed to find any kind of race using the Go race detector; I'm currently 
doing some more debugging, hopefully I should have more info/tests soon.

El jueves, 2 de mayo de 2019, 20:44:33 (UTC-3), Burak Serdar escribió:
>
> On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor  > wrote: 
> > 
> > On Thu, May 2, 2019 at 2:50 PM Anthony Martin  > wrote: 
> > > 
> > > What version of Go are you using? 
> > > 
> > > XXX ZZZ > once said: 
> > > > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076) 
> > > > /usr/local/go/src/fmt/print.go:448 +0x132 
> > > > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76) 
> > > > /usr/local/go/src/fmt/print.go:684 +0x880 
> > > > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 
> 0x1) 
> > > > /usr/local/go/src/fmt/print.go:1112 +0x3ff 
> > > > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200) 
> > > > /usr/local/go/src/fmt/print.go:214 +0x66 
> > > 
> > > This shows signs of memory corruption. The last argument passed to 
> > > fmtString (0xc00076) should be the same as the last argument 
> > > passed to printArg (0x76 or 'v') but it has some high bits set. Also, 
> > > the pointer to the format string data changes from 0xa6e22f (which is 
> > > probably in the .rodata section of the binary) to 0x0. 
> > > 
> > > Something is amiss. 
> > 
> > The change from 0x76 to 0xc00076 does not necessarily indicate a 
> > problem.  The stack backtrace does not know the types.  The value here 
> > is a rune, which is 32 bits.  The compiler will only set the low order 
> > 32 bits on the stack, leaving the high order 32 bits unset.  So the 
> > 0xc0 could just be garbage left on the stack. 
> > 
> > I don't *think* the format string is changing.  I think the 0 is from 
> > the string being printed, not the format string.  They both happen to 
> > be length 5. 
>
> There's something that doesn't make sense here. The 0 is from the 
> string being printed, it is not the format string. But how can that 
> be? 
>
> Even if there is a race, the string cannot have a 0 for the slice, can 
> it? So the other option is when Sprintf is called, the string being 
> printed is already corrupt. Can there be an overflow somewhere that is 
> somehow undetected? Any unsafe use in the program? 
>
>
> > 
> > Ian 
> > 
>



Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Burak Serdar
On Thu, May 2, 2019 at 3:56 PM Ian Lance Taylor  wrote:
>
> On Thu, May 2, 2019 at 2:50 PM Anthony Martin  wrote:
> >
> > What version of Go are you using?
> >
> > XXX ZZZ  once said:
> > > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
> > > /usr/local/go/src/fmt/print.go:448 +0x132
> > > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> > > /usr/local/go/src/fmt/print.go:684 +0x880
> > > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> > > /usr/local/go/src/fmt/print.go:1112 +0x3ff
> > > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> > > /usr/local/go/src/fmt/print.go:214 +0x66
> >
> > This shows signs of memory corruption. The last argument passed to
> > fmtString (0xc00076) should be the same as the last argument
> > passed to printArg (0x76 or 'v') but it has some high bits set. Also,
> > the pointer to the format string data changes from 0xa6e22f (which is
> > probably in the .rodata section of the binary) to 0x0.
> >
> > Something is amiss.
>
> The change from 0x76 to 0xc00076 does not necessarily indicate a
> problem.  The stack backtrace does not know the types.  The value here
> is a rune, which is 32 bits.  The compiler will only set the low order
> 32 bits on the stack, leaving the high order 32 bits unset.  So the
> 0xc0 could just be garbage left on the stack.
>
> I don't *think* the format string is changing.  I think the 0 is from
> the string being printed, not the format string.  They both happen to
> be length 5.

There's something that doesn't make sense here. The 0 is from the
string being printed, it is not the format string. But how can that
be?

Even if there is a race, the string header cannot end up with a nil data
pointer, can it? So the other option is that when Sprintf is called, the
string being printed is already corrupt. Can there be an overflow somewhere
that is somehow undetected? Any unsafe use in the program?


>
> Ian
>



Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Robert Engels
Channels use sync primitives under the hood so you are not saving anything by 
using multiple channels instead of a single wait group. 

> On May 2, 2019, at 5:57 PM, Louki Sumirniy  
> wrote:
> 
> As I mentioned earlier, I wanted to see if I could implement a waitgroup with 
> channels instead of the stdlib's sync/atomic counters, and using a special 
> type of concurrent data type called a PN-Counter (a Convergent Replicated Data Type, or CRDT). Well, 
> I'm not sure if this implementation precisely implements this type of CRDT, 
> but it does work, and I wanted to share it. Note that play doesn't like these 
> long running (?) examples, so here it is verbatim as I just finished writing 
> it:
> 
> package chanwg
> 
> import "fmt"
> 
> type WaitGroup struct {
> workers uint
> ops chan func()
> ready   chan struct{}
> done    chan struct{}
> }
> 
> func New() *WaitGroup {
> wg := &WaitGroup{
> ops:   make(chan func()),
> done:  make(chan struct{}),
> ready: make(chan struct{}),
> }
> go func() {
> // wait loop doesn't start until something is put into the ops channel
> done := false
> for !done {
> select {
> case fn := <-wg.ops:
> println("received op")
> fn()
> fmt.Println("num workers:", wg.WorkerCount())
> // if !(wg.workers < 1) {
> //  println("wait counter at zero")
> //  done = true
> //  close(wg.done)
> // }
> default:
> }
> }
> 
> }()
> return wg
> }
> 
> // Add adds a non-negative number
> func (wg *WaitGroup) Add(delta int) {
> if delta < 0 {
> return
> }
> fmt.Println("adding", delta, "workers")
> wg.ops <- func() {
> wg.workers += uint(delta)
> }
> }
> 
> // Done subtracts a non-negative value from the workers count
> func (wg *WaitGroup) Done(delta int) {
> println("worker finished")
> if delta < 0 {
> return
> }
> println("pushing op to channel")
> wg.ops <- func() {
> println("finishing")
> wg.workers -= uint(delta)
> }
> // println("op should have cleared by now")
> }
> 
> // Wait blocks until the waitgroup decrements to zero
> func (wg *WaitGroup) Wait() {
> println("a worker is waiting")
> <-wg.done
> println("job done")
> }
> 
> func (wg *WaitGroup) WorkerCount() int {
> return int(wg.workers)
> }
> 
> 
> There could be some bug lurking in there, I'm not sure, but it runs exactly 
> as I want it to, and all the debug prints show you how it works.
> 
> Possibly one does not need channels containing functions that mutate the 
> counter; it could instead be incremented or decremented directly within a 
> select statement. I've gotten really used to using generator functions, and 
> they so greatly simplify and modularise my code that I am now tackling far 
> more complex problems (if cyclomatic complexity is a measure: over 130 paths 
> in a menu system I wrote that uses generators to parse a declaration of data 
> types that also uses generators).
> 
> I suppose the thing is it wouldn't be hard to extend the types of operations 
> that you push to the ops  channel, I can't think off the top of my head 
> exactly any reasonable use case for some other operation though. One thing 
> that does come to mind, however, is that a more complex, conditional 
> increment operation could be written and execute based on other channel 
> signals or the state of some other data, but I can't see any real use for 
> that.
> 
> I should create a benchmark that tests the relative performance of this 
> versus sync/atomic add/subtract operations. I think also that, as I mentioned, 
> changing the ops channel to just contain deltas on the group size might be a 
> little bit faster than the conditional jumps a closure requires to enter and 
> exit.
> 
> So the jury is still out on whether this is in any way superior to 
> sync.WaitGroup, but because sync.WaitGroup does not use channels, this 
> version almost certainly has a little higher overhead than its atomic 
> increment/decrement operations.
> 
> Because all of those ops occur within the one supervisor waitgroup goroutine 
> only, they are serialised automatically by the channel buffer (or the wait 
> sync as sender and receiver both become ready), and no atomic/locking 
> operations are required to prevent a race.
> 
> I enabled race detector on a test of this code just now. The WorkerCount() 
> function is racy. I think I need to change it so there is a channel 
> underlying the retrieval implementation, it then would send the (empty) query 
> to the query channel, and listen on an answer channel (maybe make them 
> one-direction) to get the value without an explicit race.
> 
> Yes, and this is probably why sync.WaitGroup has no way to inspect the 
> current wait count also. I will see if I can make that function not racy.

Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Michael Jones
Is any of this string data touched/from C via CGO?

On Thu, May 2, 2019 at 3:09 PM Anthony Martin  wrote:

> Ian Lance Taylor  once said:
> > I don't *think* the format string is changing.  I think the 0 is from
> > the string being printed, not the format string.  They both happen to
> > be length 5.
>
> Misled by the pair of fives. Mea culpa.
>
>   Anthony
>
>
-- 

Michael T. Jones  michael.jo...@gmail.com



[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
As I mentioned earlier, I wanted to see if I could implement a waitgroup 
with channels instead of the stdlib's sync/atomic counters, and using a 
special type of concurrent data type called a PN-Counter (a Convergent 
Replicated Data Type, or CRDT). Well, I'm not sure if this implementation precisely implements 
this type of CRDT, but it does work, and I wanted to share it. Note that 
play doesn't like these long running (?) examples, so here it is verbatim 
as I just finished writing it:

package chanwg

import "fmt"

type WaitGroup struct {
    workers uint
    ops     chan func()
    ready   chan struct{}
    done    chan struct{}
}

func New() *WaitGroup {
    wg := &WaitGroup{
        ops:   make(chan func()),
        done:  make(chan struct{}),
        ready: make(chan struct{}),
    }
    go func() {
        // wait loop doesn't start until something is put into the ops channel
        done := false
        for !done {
            select {
            case fn := <-wg.ops:
                println("received op")
                fn()
                fmt.Println("num workers:", wg.WorkerCount())
                // if !(wg.workers < 1) {
                //  println("wait counter at zero")
                //  done = true
                //  close(wg.done)
                // }
            default:
            }
        }
    }()
    return wg
}

// Add adds a non-negative number
func (wg *WaitGroup) Add(delta int) {
    if delta < 0 {
        return
    }
    fmt.Println("adding", delta, "workers")
    wg.ops <- func() {
        wg.workers += uint(delta)
    }
}

// Done subtracts a non-negative value from the workers count
func (wg *WaitGroup) Done(delta int) {
    println("worker finished")
    if delta < 0 {
        return
    }
    println("pushing op to channel")
    wg.ops <- func() {
        println("finishing")
        wg.workers -= uint(delta)
    }
    // println("op should have cleared by now")
}

// Wait blocks until the waitgroup decrements to zero
func (wg *WaitGroup) Wait() {
    println("a worker is waiting")
    <-wg.done
    println("job done")
}

func (wg *WaitGroup) WorkerCount() int {
    return int(wg.workers)
}


There could be some bug lurking in there, I'm not sure, but it runs exactly 
as I want it to, and all the debug prints show you how it works.

Possibly one does not need channels containing functions that mutate the 
counter; it could instead be incremented or decremented directly within a 
select statement. I've gotten really used to using generator functions, and 
they so greatly simplify and modularise my code that I am now tackling far 
more complex problems (if cyclomatic complexity is a measure: over 130 
paths in a menu system I wrote that uses generators to parse a declaration 
of data types that also uses generators).

I suppose the thing is it wouldn't be hard to extend the types of 
operations that you push to the ops channel; I can't think off the top of 
my head of any reasonable use case for some other operation, though. One 
thing that does come to mind, however, is that a more complex, conditional 
increment operation could be written to execute based on other channel 
signals or the state of some other data, but I can't see any real use for 
that.

I should create a benchmark that tests the relative performance of this 
versus sync/atomic add/subtract operations. I think also that, as I 
mentioned, changing the ops channel to just contain deltas on the group 
size might be a little bit faster than the conditional jumps a closure 
requires to enter and exit.

So the jury is still out on whether this is in any way superior to 
sync.WaitGroup, but because sync.WaitGroup does not use channels, this 
version almost certainly has a little higher overhead than its atomic 
increment/decrement operations.

Because all of those ops occur within the one supervisor waitgroup 
goroutine only, they are serialised automatically by the channel buffer (or 
the wait sync as sender and receiver both become ready), and no 
atomic/locking operations are required to prevent a race.

I enabled the race detector on a test of this code just now. The WorkerCount() 
function is racy. I think I need to change it so there is a channel 
underlying the retrieval implementation, it then would send the (empty) 
query to the query channel, and listen on an answer channel (maybe make 
them one-direction) to get the value without an explicit race.

Yes, and this is probably why sync.WaitGroup has no way to inspect the 
current wait count also. I will see if I can make that function not racy.
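One way to de-race WorkerCount along those lines: reads are answered by the same supervisor goroutine that owns the counter, via a reply channel. This is a minimal sketch, not a drop-in for the code above:

```go
package main

import "fmt"

// Sketch of a race-free WorkerCount: all reads and writes of the counter
// happen inside one supervisor goroutine; readers get the value back on a
// reply channel instead of touching the field directly.
type WaitGroup struct {
	workers int
	ops     chan func()
	count   chan chan int
}

func New() *WaitGroup {
	wg := &WaitGroup{ops: make(chan func()), count: make(chan chan int)}
	go func() {
		for {
			select {
			case fn := <-wg.ops:
				fn() // mutate the counter inside the supervisor
			case reply := <-wg.count:
				reply <- wg.workers // answer a read request
			}
		}
	}()
	return wg
}

// Add blocks until the supervisor has applied the delta, so a following
// WorkerCount call is guaranteed to observe it.
func (wg *WaitGroup) Add(delta int) {
	applied := make(chan struct{})
	wg.ops <- func() { wg.workers += delta; close(applied) }
	<-applied
}

// WorkerCount asks the supervisor for the current value; no shared field
// is read outside the supervisor goroutine, so the race detector is happy.
func (wg *WaitGroup) WorkerCount() int {
	reply := make(chan int)
	wg.count <- reply
	return <-reply
}

func main() {
	wg := New()
	wg.Add(3)
	wg.Add(2)
	fmt.Println(wg.WorkerCount()) // 5
}
```

The select in the supervisor also avoids the busy-spinning default branch of the original loop, since it simply blocks until either an op or a query arrives.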

On Thursday, 2 May 2019 23:29:35 UTC+2, Øyvind Teig wrote:
>
> Thanks for the reference to Dave Cheney's blog note! And for this thread, 
> quite interesting to read. I am not used to explicitly closing channels at 
> all (occam (in the ninetees) and XC (now)), but I have sat through several 
> presentations on conferences seen the theme being discussed, lik

Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Anthony Martin
Ian Lance Taylor  once said:
> I don't *think* the format string is changing.  I think the 0 is from
> the string being printed, not the format string.  They both happen to
> be length 5.

Misled by the pair of fives. Mea culpa.

  Anthony



Re: [go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Ian Lance Taylor
On Thu, May 2, 2019 at 2:50 PM Anthony Martin  wrote:
>
> What version of Go are you using?
>
> XXX ZZZ  once said:
> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
> > /usr/local/go/src/fmt/print.go:448 +0x132
> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> > /usr/local/go/src/fmt/print.go:684 +0x880
> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> > /usr/local/go/src/fmt/print.go:1112 +0x3ff
> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> > /usr/local/go/src/fmt/print.go:214 +0x66
>
> This shows signs of memory corruption. The last argument passed to
> fmtString (0xc00076) should be the same as the last argument
> passed to printArg (0x76 or 'v') but it has some high bits set. Also,
> the pointer to the format string data changes from 0xa6e22f (which is
> probably in the .rodata section of the binary) to 0x0.
>
> Something is amiss.

The change from 0x76 to 0xc00076 does not necessarily indicate a
problem.  The stack backtrace does not know the types.  The value here
is a rune, which is 32 bits.  The compiler will only set the low order
32 bits on the stack, leaving the high order 32 bits unset.  So the
0xc0 could just be garbage left on the stack.

I don't *think* the format string is changing.  I think the 0 is from
the string being printed, not the format string.  They both happen to
be length 5.

Ian



[go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread Anthony Martin
What version of Go are you using?

XXX ZZZ  once said:
> fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
> /usr/local/go/src/fmt/print.go:448 +0x132
> fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> /usr/local/go/src/fmt/print.go:684 +0x880
> fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> /usr/local/go/src/fmt/print.go:1112 +0x3ff
> fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> /usr/local/go/src/fmt/print.go:214 +0x66

This shows signs of memory corruption. The last argument passed to
fmtString (0xc00076) should be the same as the last argument
passed to printArg (0x76 or 'v') but it has some high bits set. Also,
the pointer to the format string data changes from 0xa6e22f (which is
probably in the .rodata section of the binary) to 0x0.

Something is amiss.

  Anthony



[go-nuts] Re: Random panic in production with Sprintf

2019-05-02 Thread XXX ZZZ
using go version go1.12.4 linux/amd64

El jueves, 2 de mayo de 2019, 18:50:24 (UTC-3), Anthony Martin escribió:
>
> What version of Go are you using? 
>
> XXX ZZZ > once said: 
> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076) 
> > /usr/local/go/src/fmt/print.go:448 +0x132 
> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76) 
> > /usr/local/go/src/fmt/print.go:684 +0x880 
> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1) 
> > /usr/local/go/src/fmt/print.go:1112 +0x3ff 
> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200) 
> > /usr/local/go/src/fmt/print.go:214 +0x66 
>
> This shows signs of memory corruption. The last argument passed to 
> fmtString (0xc00076) should be the same as the last argument 
> passed to printArg (0x76 or 'v') but it has some high bits set. Also, 
> the pointer to the format string data changes from 0xa6e22f (which is 
> probably in the .rodata section of the binary) to 0x0. 
>
> Something is amiss. 
>
>   Anthony 
>



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread Ian Lance Taylor
On Thu, May 2, 2019 at 11:18 AM Marcin Romaszewicz  wrote:
>
> If that's the actual problem, you'd just be masking it, and producing an 
> invalid "x". Look here:
>
> func (r *Subid_info) Prepare_subid_logic() {
> r.Second_subid_8 = fmt.Sprintf("1%07v", r.Second_subid) // <- panic happens here
> }
>
> r.Second_subid is in an invalid state which normal Go code could not create. 
> This means that some other goroutine might be in the middle of changing its 
> value at the same time, and you have a race condition, so Ian Lance Taylor's 
> suggestion to run using the race detector is probably the best bet.

I agree.  Look very closely at how r.Second_subid is set.  The
backtrace shows that it is a value of type string that is invalid.

Ian


> On Thu, May 2, 2019 at 11:13 AM Burak Serdar  wrote:
>>
>> On Thu, May 2, 2019 at 11:31 AM XXX ZZZ  wrote:
>> >
>> > Hello,
>> >
>> > We are having a random panic on our go application that is happening once 
>> > every million requests or so, and so far we haven't been able to reproduce 
>> > it nor to even grasp what's going on.
>> >
>> > Basically our code goes like:
>> >
>> > type Subid_info struct {
>> > Affiliate_subid   string
>> > Second_subid      string
>> > Second_subid_8    string
>> > S2                string
>> > Internal_subid    string
>> > Internal_subid_9  string
>> > Internal_subid_12 string
>> > Result            string
>> > }
>> >
>> > func (r *Subid_info) Prepare_subid_logic() {
>> > r.Second_subid_8 = fmt.Sprintf("1%07v", r.Second_subid) // <- panic happens here
>> > }
>> >
>> > And the trace we get is:
>> >
>> > panic: runtime error: invalid memory address or nil pointer dereference
>> > [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]
>> >
>> > goroutine 17091 [running]:
>> > unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
>> > /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
>> > fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
>> > /usr/local/go/src/fmt/format.go:113 +0x134
>> > fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
>> > /usr/local/go/src/fmt/format.go:347 +0x61
>> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>>
>> Right here in fmtString, the function gets a 0x0, 0x5 arg, which is I 
>> believe a string of length 5 with a nil data pointer. So it looks like 
>> somehow r.Second_subid has nil buffer here. When a string is used as
>> an interface{}, afaik, the interface keeps the value, not the pointer
>> to the string. So I can't see how this is possible. But I wonder if
>> copying the value before sprintf could fix it:
>>
>> x:=r.Second_subid
>> r.Second_subid_8=fmt.Sprintf("1%07v", x)
>>
>>
>>
>>
>> > /usr/local/go/src/fmt/print.go:448 +0x132
>> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
>> > /usr/local/go/src/fmt/print.go:684 +0x880
>> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
>> > /usr/local/go/src/fmt/print.go:1112 +0x3ff
>> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
>> > /usr/local/go/src/fmt/print.go:214 +0x66
>> > code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
>> > 0x2)
>> >
>> > Given that we can't reproduce it, what's the logical way to debug this and 
>> > find out what's happening?
>> >
>> > Thanks!
>> >
>>
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread John Dreystadt


>> 
>> 
>>> On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>>> The function has a very specific purpose that I have encountered in several 
>>> applications: automatically setting the netmask based on the IP falling in 
>>> one of the classful private ranges - 192.168, 10, and I forget which others. 
>>> 
>>> An incorrect netmask can result in not recognising a LAN address. A single 
>>> 192.168.x.0/24 network has only 254 usable addresses. You can't just 
>>> presume to make a new 192.168.X... address with a /16, as no other 
>>> correctly configured node in the LAN will be able to route to it due to it 
>>> being a /16. 
>>> 
>>> If you consider the example of an elastic cloud type network environment, 
>>> it is important that all nodes agree on the netmask or they will become 
>>> (partially) disconnected from each other. An app can be spun up for a few 
>>> seconds and grab a new address from the range. This could be done with a 
>>> broker (e.g. DHCP), but especially with cloud, one could use a /8 address 
>>> range and randomly select from the roughly 16 million possible addresses - 
>>> a big enough space that random selection generally won't cause a collision 
>>> - which is a cheaper allocation procedure than a list-managing broker, and 
>>> would be more suited to the dynamic cloud environment.
>>> 
>>> This function allows the kind of client-side decision-making that a broker 
>>> bottlenecks into a service, adding startup latency. Generating a random IP 
>>> address takes far less time than sending a request to a centralised broker 
>>> and waiting for the reply.
>>> 
>>> That's just one example I can think of where a pre-made list of netmasks is 
>>> useful; I'm sure more experienced network programmers can rattle off a 
>>> laundry list.
>>> 

While I kind of see your point, it still seems odd that you want a function for 
this in the main net package. I would expect most, if not all, applications 
doing dynamic assignment to pick one address range and then use a fixed 
netmask. I just think that very few programmers will need such a function, so I 
don’t think Go, with its emphasis on simplicity, should have it. 



[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Øyvind Teig
Thanks for the reference to Dave Cheney's blog note! And for this thread, which 
was quite interesting to read. I am not used to explicitly closing channels at 
all (occam (in the nineties) and XC (now)), but I have sat through several 
conference presentations where the theme was discussed, like with the JCSP 
library. I am impressed by the depth of the reasoning done by the Go designers!



Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
Ah, so this is what they are for - implementing the same thing with channels 
alone would need a nasty big slice of empty-struct quit channels just so the 
goroutines can tell main they are done. wg.Done() and wg.Wait() eliminate the 
complexity that a pure channel implementation would require.

With that code I also toyed with changing the size of the 'queue' buffer. 
At 1:1 per data item, it terminates in about 40ns. Obviously it just 
abandons all of the work, since with one channel the routines are taking 
over 32ms a piece, and this produces the correct output except maybe the 
last one. 

So I do need to be using waitgroups if I want to orchestrate 
parallelisation (if available) using goroutines, as otherwise the work will 
be abandoned immediately as the signal is sent.

For some work types, dropping the whole load is correct, that would be 
io-bound stuff that has a short time to live. If the load (like my code) is 
process intensive, one needs the final result so the goroutines must not 
stop until each one finishes its job.

IO-bound jobs like the transport I am writing run continuously for the whole 
lifetime of the application's execution. They will only need this 
orchestration to cleanly shut down, but since that adds no overhead during 
the processing loop there is no sane reason to not put the wait in there if 
finishing jobs is mandatory.
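The "signal, then wait" combination can be made concrete in a short sketch (runWorkers and its counter are made-up names for illustration): closing the channel broadcasts the stop signal to every goroutine, and the WaitGroup guarantees each one has actually finished its cleanup before main moves on.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runWorkers starts n goroutines, signals them to stop by closing quit,
// then waits for all of them. It returns how many ran their cleanup.
func runWorkers(n int) int32 {
	quit := make(chan struct{})
	var wg sync.WaitGroup
	var done int32
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-quit                    // a closed channel unblocks every receiver
			atomic.AddInt32(&done, 1) // stands in for mandatory cleanup work
		}()
	}
	close(quit) // broadcast "stop" to all workers at once
	wg.Wait()   // ...and only proceed once each has finished
	return done
}

func main() {
	fmt.Println(runWorkers(100)) // always prints 100
}
```

The close gives the cheap broadcast, the Wait gives the completion guarantee; neither alone does both jobs.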

On Thursday, 2 May 2019 22:22:33 UTC+2, Steven Hartland wrote:
>
> You can see it doesn't wait by adding a counter as seen here:
> https://play.golang.org/p/-eqKggUEjhQ
>

Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Steven Hartland

You can see it doesn't wait by adding a counter as seen here:
https://play.golang.org/p/-eqKggUEjhQ
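A sketch of that experiment (an assumed reconstruction of the playground link above, not its exact code): an atomic counter stands in for cleanup work, and without a WaitGroup main reads the counter before most goroutines have run it.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// observe launches n goroutines whose "cleanup" increments a counter,
// closes the quit channel, and immediately reads the counter without
// waiting - mimicking the quoted example with no WaitGroup.
func observe(n int32) int32 {
	finish := make(chan bool)
	var cleaned int32
	for i := int32(0); i < n; i++ {
		go func() {
			select {
			case <-time.After(1 * time.Hour):
			case <-finish:
			}
			atomic.AddInt32(&cleaned, 1) // the cleanup main never waits for
		}()
	}
	close(finish)
	// No Wait: nothing orders the goroutines' cleanup before this read,
	// so the observed count is usually far below n (often 0).
	return atomic.LoadInt32(&cleaned)
}

func main() {
	fmt.Printf("cleanup ran in %d of %d goroutines\n", observe(100), 100)
}
```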


Re: [go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Steven Hartland
Without the wait group it doesn't wait, so you're not guaranteed that all 
(or any) of the goroutines complete.


 

[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
I have been spending my time today getting my knowledge of this subject 
adequate enough to use channels for a UDP transport with FEC creating 
sharded pieces of the packets, and I just found this and played with some 
of the code on it and I just wanted to mention these things:

https://dave.cheney.net/2013/04/30/curious-channels

In the code, specifically first section of this article, I found that the 
sync.WaitGroup stuff can be completely left out. The quit channel 
immediately unblocks the select when it is closed and 100 of the goroutines 
immediately stop. Obviously in a real situation you would put cleanup code 
in the finish clauses of the goroutines, but yeah, point is the waitgroup 
is literally redundant in this code:

package main

import (
"fmt"
"time"
)

func main() {
const n = 100
finish := make(chan bool)
for i := 0; i < n; i++ {
go func() {
select {
case <-time.After(1 * time.Hour):
case <-finish:
}
}()
}
t0 := time.Now()
close(finish) // receiving from a closed channel never blocks; it yields the zero value
fmt.Printf("Waited %v for %d goroutines to stop\n", time.Since(t0), n)
}

The original version uses waitgroups but you can remove them as above and 
it functions exactly the same. Presumably it has lower overhead from the 
mutex not being made and propagating to each thread when it finishes a 
cycle. 

It really seems to me like for this specific case, the use of the property 
of a closed channel to yield zero completely renders a waitgroup irrelevant.

What I'm curious about is, what reasons would I have for not wanting to use 
this feature of closed channels as a stop signal versus using a waitgroup?

On Thursday, 2 May 2019 16:20:26 UTC+2, Louki Sumirniy wrote:
>
> It's not precisely the general functionality that I will implement for my 
> transport, but here is a simple example of a classifier type processing 
> queue:
>
> https://play.golang.org/p/ytdrXgCdbQH
>
> This processes a series of sequential integers and pops them into an array 
> to find the highest factor of a given range of numbers. The code I will 
> write soon is slightly different, as, obviously, that above there is not 
> technically a queue. This code shows how to make a non-deadlocking 
> processing queue, however.
>
> Adding an actual queue like for my intended purpose of bundling packets 
> with a common uuid is not much further, instead of just dropping the 
> integers into their position in the slice, it iterates them as each item is 
> received to find a match, if it doesn't find enough, then it puts the item 
> back at the end of the search on the queue and waits for the next new item 
> to arrive. I'll be writing that shortly.
>
> For that, I think the simple example would use an RNG to generate numbers 
> within the specified range, and then for the example, it will continue to 
> accumulate numbers in the buffer until a recurrence occurs, then the 
> numbers are appended to the array and this index is ignored when another 
> one comes in later. That most closely models what I am building.
>
> On Thursday, 2 May 2019 13:26:47 UTC+2, Louki Sumirniy wrote:
>>
>> Yeah, I was able to think a bit more about it as I was falling asleep 
>> later and I realised how I meant it to run. I had to verify that indeed 
>> channels are FIFO queues, as that was the basis of this way of using them.
>>
>> The receiver channel is unbuffered, and lives in one goroutine. When it 
>> receives something it bounces it into the queue and for/range loops through 
>> the content of a fairly big-buffered working channel where items can 
>> collect while they are fresh, and upon arrival of a new item the new item 
>> is checked for a match against the contents of the queue, as well as 
>> kicking out stale data (and recording the uuid of the stale set so it can 
>> be immediately dumped if any further packets got hung up and come after way 
>> too long.
>>
>> This differs a lot from the loopy design I made in the OP. In this design 
>> there are only two threads instead of three. I think the geometry of a 
>> channel pattern is important - specifically, everything needs to be done in 
>> pairs with channels, although maybe sometimes you want it to receive but 
>> not need it to send it anywhere, just store/drop, as the algorithm requires.
>>
>> I still need to think through the design a bit more. Like, perhaps the 
>> queue channel *should* be a pair of one-direction channels so one is the 
>> main fifo and the other side each item is taken off the queue, processed, 
>> and then put back into the stream. Ordering is not important, except that 
>> it is very handy that it is a FIFO because this means if I have a buffer 
>> with some number, and get a new item, put it into the buffer queue, and 
>> then the queue unpacks the newest item last. I think I could make it like 
>> this, actually:
>>
>> one channel inbound receiver, it passes into a buffered queue channel, 
>> and triggers the passing out of buffered items from the head of the queue 
>> to watcher 1, 2,
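The bundling-by-uuid idea quoted above can be sketched without re-circulating items at all: a map keyed by bundle id holds partial bundles, and a complete bundle is emitted the moment its last shard arrives. All names here (packet, gather, bundleID) are made up for illustration, not from the transport being described.

```go
package main

import "fmt"

// packet is a hypothetical shard: bundleID groups shards into bundles.
type packet struct {
	bundleID string
	seq      int
	data     string
}

// gather collects shards by bundleID and emits each bundle on out as soon
// as its size-th shard arrives. The map replaces the "put it back on the
// queue" re-circulation, so nothing is ever re-queued or re-scanned.
func gather(in <-chan packet, out chan<- []packet, size int) {
	pending := map[string][]packet{}
	for p := range in {
		pending[p.bundleID] = append(pending[p.bundleID], p)
		if len(pending[p.bundleID]) == size {
			out <- pending[p.bundleID]
			delete(pending, p.bundleID)
		}
	}
	close(out)
}

func main() {
	in := make(chan packet)
	out := make(chan []packet)
	go gather(in, out, 2)
	go func() {
		in <- packet{"a", 0, "he"}
		in <- packet{"b", 0, "wo"}
		in <- packet{"a", 1, "llo"}
		in <- packet{"b", 1, "rld"}
		close(in)
	}()
	for bundle := range out {
		fmt.Println(bundle[0].bundleID, "complete with", len(bundle), "shards")
	}
}
```

Stale bundles could be expired by also selecting on a ticker and dropping pending entries older than a deadline, as described in the message above.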

Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread XXX ZZZ
I'm testing race conditions again as we speak; however, this object is 
created WITHIN the goroutine (the http request handler), so there is no way, 
afaik, that it is being used from another routine.

On Thursday, May 2, 2019 at 15:19:02 (UTC-3), Marcin Romaszewicz wrote:



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread Burak Serdar
On Thu, May 2, 2019 at 12:12 PM Burak Serdar  wrote:
>
> On Thu, May 2, 2019 at 11:31 AM XXX ZZZ  wrote:
> >
> > Hello,
> >
> > We are having a random panic on our go application that is happening once 
> > every million requests or so, and so far we haven't been able to reproduce 
> > it nor to even grasp what's going on.
> >
> > Basically our code goes like:
> >
> > type Subid_info struct{
> > Affiliate_subid   string
> > Second_subid      string
> > Second_subid_8    string
> > S2                string
> > Internal_subid    string
> > Internal_subid_9  string
> > Internal_subid_12 string
> > Result            string
> > }
> >
> > func (r *Subid_info) Prepare_subid_logic(){
> > r.Second_subid_8=fmt.Sprintf("1%07v", r.Second_subid) // <-- panic happens here
> > }
> >
> > And the trace we get is:
> >
> > panic: runtime error: invalid memory address or nil pointer dereference
> > [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]
> >
> > goroutine 17091 [running]:
> > unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
> > /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
> > fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
> > /usr/local/go/src/fmt/format.go:113 +0x134
> > fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
> > /usr/local/go/src/fmt/format.go:347 +0x61
> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>
> Right here in fmtString, the function gets a (0x0, 0x5) arg, which is I
> believe a string header with a nil data pointer and a length of 5. So it
> looks like r.Second_subid somehow has a nil backing array here. When a
> string is passed as an interface{}, afaik, the interface stores a copy of
> the string header, not a pointer to the original string. So I can't see
> how this is possible. But I wonder if copying the value before Sprintf
> could fix it:
>
> x:=r.Second_subid
> r.Second_subid_8=fmt.Sprintf("1%07v", x)

^^
This is a dumb idea, sorry. If you had a race, you'd still have a race.
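That "nil pointer, length 5" state can be forged deliberately with unsafe to confirm it reproduces exactly this panic. This is a sketch only: it assumes the runtime's current string-header layout, and safe Go code can never produce such a value on its own — only a torn concurrent write or unsafe code can.

```go
package main

import (
	"fmt"
	"unsafe"
)

// sprintfPanics forges a string whose header has a nil data pointer but
// length 5 - the impossible value seen in the trace - and reports whether
// formatting it panics the way the production code did.
func sprintfPanics() (panicked bool) {
	defer func() { panicked = recover() != nil }()

	type stringHeader struct { // mirrors the runtime's string layout
		data unsafe.Pointer
		len  int
	}
	var s string
	(*stringHeader)(unsafe.Pointer(&s)).len = 5 // data stays nil

	_ = fmt.Sprintf("1%07v", s) // faults while counting runes for padding
	return
}

func main() {
	fmt.Println(sprintfPanics())
}
```

The fault happens when the padding code walks the string's bytes, matching the utf8.RuneCountInString frame at the top of the posted trace.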

>
>
>
>
> > /usr/local/go/src/fmt/print.go:448 +0x132
> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> > /usr/local/go/src/fmt/print.go:684 +0x880
> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> > /usr/local/go/src/fmt/print.go:1112 +0x3ff
> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> > /usr/local/go/src/fmt/print.go:214 +0x66
> > code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
> > 0x2)
> >
> > Given that we can't reproduce it, what's the logical way to debug this and 
> > find out what's happening?
> >
> > Thanks!
> >



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread Marcin Romaszewicz
If that's the actual problem, you'd just be masking it, and producing an
invalid "x". Look here:

func (r *Subid_info) Prepare_subid_logic(){
r.Second_subid_8=fmt.Sprintf("1%07v", r.Second_subid) // <-- panic happens here
}

r.Second_subid is in an invalid state which normal Go code could not
create. This means that some other goroutine might be in the middle of
changing its value at the same time, and you have a race condition, so Ian
Lance Taylor's suggestion to run using the race detector is probably the
best bet.

-- Marcin


On Thu, May 2, 2019 at 11:13 AM Burak Serdar  wrote:

> On Thu, May 2, 2019 at 11:31 AM XXX ZZZ  wrote:
> >
> > Hello,
> >
> > We are having a random panic on our go application that is happening
> once every million requests or so, and so far we haven't been able to
> reproduce it nor to even grasp what's going on.
> >
> > Basically our code goes like:
> >
> > type Subid_info struct{
> > Affiliate_subid string
> > Second_subidstring
> > Second_subid_8string
> > S2string
> > Internal_subidstring
> > Internal_subid_9string
> > Internal_subid_12 string
> > Result string
> > }
> >
> > func (r *Subid_info) Prepare_subid_logic(){
> > r.Second_subid_8=fmt.Sprintf("1%07v", r.Second_subid) > panic
> happens here.
> > }
> >
> > And the trace we get is:
> >
> > panic: runtime error: invalid memory address or nil pointer dereference
> > [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]
> >
> > goroutine 17091 [running]:
> > unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
> > /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
> > fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
> > /usr/local/go/src/fmt/format.go:113 +0x134
> > fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
> > /usr/local/go/src/fmt/format.go:347 +0x61
> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
>
> Right here in fmtString, the function receives the arguments 0x0, 0x5,
> which I believe is a string header with a nil data pointer and length 5.
> So it looks like r.Second_subid somehow has a nil buffer here. When a
> string is stored in an interface{}, afaik, the interface keeps the value,
> not a pointer to the string, so I can't see how this is possible. But I
> wonder if copying the value before Sprintf could fix it:
>
> x := r.Second_subid
> r.Second_subid_8 = fmt.Sprintf("1%07v", x)
>
>
>
>
> > /usr/local/go/src/fmt/print.go:448 +0x132
> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> > /usr/local/go/src/fmt/print.go:684 +0x880
> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> > /usr/local/go/src/fmt/print.go:1112 +0x3ff
> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> > /usr/local/go/src/fmt/print.go:214 +0x66
> >
> code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80,
> 0x2)
> >
> > Given that we can't reproduce it, what's the logical way to debug this
> and find out what's happening?
> >
> > Thanks!
> >



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread XXX ZZZ
I did, but nothing was detected.

However, there aren't any goroutines involved (except for the http request);
other than that, this variable isn't shared among goroutines.

El jueves, 2 de mayo de 2019, 14:54:42 (UTC-3), Ian Lance Taylor escribió:
>
> On Thu, May 2, 2019 at 10:31 AM XXX ZZZ > 
> wrote: 
> > 
> > We are having a random panic on our go application that is happening 
> once every million requests or so, and so far we haven't been able to 
> reproduce it nor to even grasp what's going on. 
> > 
> > Basically our code goes like: 
> > 
> > type Subid_info struct{
> > Affiliate_subid   string
> > Second_subid      string
> > Second_subid_8    string
> > S2                string
> > Internal_subid    string
> > Internal_subid_9  string
> > Internal_subid_12 string
> > Result            string
> > }
> > 
> > func (r *Subid_info) Prepare_subid_logic(){
> > r.Second_subid_8 = fmt.Sprintf("1%07v", r.Second_subid) // panic happens here
> > }
> > 
> > And the trace we get is: 
> > 
> > panic: runtime error: invalid memory address or nil pointer dereference 
> > [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e] 
> > 
> > goroutine 17091 [running]: 
> > unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8) 
> > /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e 
> > fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5) 
> > /usr/local/go/src/fmt/format.go:113 +0x134 
> > fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5) 
> > /usr/local/go/src/fmt/format.go:347 +0x61 
> > fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076) 
> > /usr/local/go/src/fmt/print.go:448 +0x132 
> > fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76) 
> > /usr/local/go/src/fmt/print.go:684 +0x880 
> > fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1) 
> > /usr/local/go/src/fmt/print.go:1112 +0x3ff 
> > fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200) 
> > /usr/local/go/src/fmt/print.go:214 +0x66 
> > 
> code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
> 0x2) 
> > 
> > Given that we can't reproduce it, what's the logical way to debug this 
> and find out what's happening? 
>
> The first thing to try is running your program under the race detector. 
>
> Ian 
>



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread Burak Serdar
On Thu, May 2, 2019 at 11:31 AM XXX ZZZ  wrote:
>
> Hello,
>
> We are having a random panic on our go application that is happening once 
> every million requests or so, and so far we haven't been able to reproduce it 
> nor to even grasp what's going on.
>
> Basically our code goes like:
>
> type Subid_info struct{
> Affiliate_subid   string
> Second_subid      string
> Second_subid_8    string
> S2                string
> Internal_subid    string
> Internal_subid_9  string
> Internal_subid_12 string
> Result            string
> }
>
> func (r *Subid_info) Prepare_subid_logic(){
> r.Second_subid_8 = fmt.Sprintf("1%07v", r.Second_subid) // panic happens here
> }
>
> And the trace we get is:
>
> panic: runtime error: invalid memory address or nil pointer dereference
> [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]
>
> goroutine 17091 [running]:
> unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
> /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
> fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
> /usr/local/go/src/fmt/format.go:113 +0x134
> fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
> /usr/local/go/src/fmt/format.go:347 +0x61
> fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)

Right here in fmtString, the function receives the arguments 0x0, 0x5,
which I believe is a string header with a nil data pointer and length 5.
So it looks like r.Second_subid somehow has a nil buffer here. When a
string is stored in an interface{}, afaik, the interface keeps the value,
not a pointer to the string, so I can't see how this is possible. But I
wonder if copying the value before Sprintf could fix it:

x := r.Second_subid
r.Second_subid_8 = fmt.Sprintf("1%07v", x)




> /usr/local/go/src/fmt/print.go:448 +0x132
> fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> /usr/local/go/src/fmt/print.go:684 +0x880
> fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> /usr/local/go/src/fmt/print.go:1112 +0x3ff
> fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> /usr/local/go/src/fmt/print.go:214 +0x66
> code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
> 0x2)
>
> Given that we can't reproduce it, what's the logical way to debug this and 
> find out what's happening?
>
> Thanks!
>



Re: [go-nuts] Random panic in production with Sprintf

2019-05-02 Thread Ian Lance Taylor
On Thu, May 2, 2019 at 10:31 AM XXX ZZZ  wrote:
>
> We are having a random panic on our go application that is happening once 
> every million requests or so, and so far we haven't been able to reproduce it 
> nor to even grasp what's going on.
>
> Basically our code goes like:
>
> type Subid_info struct{
> Affiliate_subid   string
> Second_subid      string
> Second_subid_8    string
> S2                string
> Internal_subid    string
> Internal_subid_9  string
> Internal_subid_12 string
> Result            string
> }
>
> func (r *Subid_info) Prepare_subid_logic(){
> r.Second_subid_8 = fmt.Sprintf("1%07v", r.Second_subid) // panic happens here
> }
>
> And the trace we get is:
>
> panic: runtime error: invalid memory address or nil pointer dereference
> [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]
>
> goroutine 17091 [running]:
> unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
> /usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
> fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
> /usr/local/go/src/fmt/format.go:113 +0x134
> fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
> /usr/local/go/src/fmt/format.go:347 +0x61
> fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
> /usr/local/go/src/fmt/print.go:448 +0x132
> fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
> /usr/local/go/src/fmt/print.go:684 +0x880
> fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
> /usr/local/go/src/fmt/print.go:1112 +0x3ff
> fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
> /usr/local/go/src/fmt/print.go:214 +0x66
> code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
> 0x2)
>
> Given that we can't reproduce it, what's the logical way to debug this and 
> find out what's happening?

The first thing to try is running your program under the race detector.

Ian



[go-nuts] Re: Is it possible to simplify this snippet?

2019-05-02 Thread Sundararajan Seshadri
For your question, the answer is no: your version is already the simplest.

If the goal is to make it more meaningful or better documented, you could try
something like:

// checkDirection returns the direction for the key pressed: "up", "down",
// "left", or "right". Any other return value (the empty string) results in
// no action.
direction := checkDirection(rl)

switch direction {
case "left":
	p.Rect.X -= 1
// ...
}
===
On Wednesday, May 1, 2019 at 6:08:10 PM UTC+5:30, гусь wrote:
>
> if rl.IsKeyDown(rl.KeyA) {
> p.Rect.X -= 1
> }
> if rl.IsKeyDown(rl.KeyD) {
> p.Rect.X += 1
> }
> if rl.IsKeyDown(rl.KeyW) {
> p.Rect.Y -= 1
> }
> if rl.IsKeyDown(rl.KeyS) {
> p.Rect.Y += 1
> }
>
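For completeness, the if-chain can also be folded into a data-driven table. This is only an illustrative sketch: the `pressed` map stands in for `rl.IsKeyDown`, which is assumed from the original raylib-based snippet:

```go
package main

import "fmt"

type rect struct{ X, Y int }

func main() {
	// Stand-in for rl.IsKeyDown state: pretend A and S are held down.
	pressed := map[rune]bool{'A': true, 'S': true}

	// Map each key to its (dx, dy) movement delta.
	deltas := map[rune][2]int{
		'A': {-1, 0}, 'D': {1, 0},
		'W': {0, -1}, 'S': {0, 1},
	}

	var p rect
	for key, d := range deltas {
		if pressed[key] {
			p.X += d[0]
			p.Y += d[1]
		}
	}
	fmt.Println(p.X, p.Y) // -1 1
}
```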



Re: [go-nuts] Why does this simple snip generate an infinite loop ?

2019-05-02 Thread L Godioleskky
Ok, thanks.

On Thu, May 2, 2019 at 1:26 PM Robert Engels  wrote:

> Because when you add 1 to 0xff it wraps back to 0, since it is only 8 bits
>
> On May 2, 2019, at 12:22 PM, lgod...@gmail.com wrote:
>
> func main() {
>
> var c8 uint8
> var S [256]uint8
>
> for c8 = 0x00; c8 <= 0xff; c8 += 0x01 { S[c8] = c8 }
> }
>
>
>



[go-nuts] Random panic in production with Sprintf

2019-05-02 Thread XXX ZZZ
Hello,

We are having a random panic on our go application that is happening once 
every million requests or so, and so far we haven't been able to reproduce 
it nor to even grasp what's going on.

Basically our code goes like:

type Subid_info struct{
Affiliate_subid string
Second_subidstring
Second_subid_8string
S2string
Internal_subidstring
Internal_subid_9string
Internal_subid_12 string
Result string
}

func (r *Subid_info) Prepare_subid_logic(){
r.Second_subid_8=fmt.Sprintf("1%07v", r.Second_subid) > panic 
happens here.
}

And the trace we get is:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x466b6e]

goroutine 17091 [running]:
unicode/utf8.RuneCountInString(0x0, 0x5, 0xc048c275a8)
/usr/local/go/src/unicode/utf8/utf8.go:411 +0x2e
fmt.(*fmt).padString(0xc023c17780, 0x0, 0x5)
/usr/local/go/src/fmt/format.go:113 +0x134
fmt.(*fmt).fmtS(0xc023c17780, 0x0, 0x5)
/usr/local/go/src/fmt/format.go:347 +0x61
fmt.(*pp).fmtString(0xc023c17740, 0x0, 0x5, 0xc00076)
/usr/local/go/src/fmt/print.go:448 +0x132
fmt.(*pp).printArg(0xc023c17740, 0x9978e0, 0xc016a68a30, 0x76)
/usr/local/go/src/fmt/print.go:684 +0x880
fmt.(*pp).doPrintf(0xc023c17740, 0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1)
/usr/local/go/src/fmt/print.go:1112 +0x3ff
fmt.Sprintf(0xa6e22f, 0x5, 0xc048c27818, 0x1, 0x1, 0x80, 0xa36200)
/usr/local/go/src/fmt/print.go:214 +0x66
code/sharedobjects/sources.(*Subid_info).Prepare_subid_logic(0xc019292f80, 
0x2)

Given that we can't reproduce it, what's the logical way to debug this and 
find out what's happening?

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [go-nuts] Why does this simple snip generate an infinite loop ?

2019-05-02 Thread Robert Engels
Because when you add 1 to 0xff it wraps back to 0, since it is only 8 bits
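A sketch of one possible fix implied by that explanation: iterate with a wider type so the loop condition can eventually become false, since a uint8 counter wraps from 0xff back to 0 and the loop never terminates:

```go
package main

import "fmt"

func main() {
	var S [256]uint8
	// Use an int loop variable: unlike uint8, it can hold 0x100,
	// so i <= 0xff eventually fails and the loop terminates.
	for i := 0; i <= 0xff; i++ {
		S[i] = uint8(i)
	}
	fmt.Println(S[0], S[255]) // 0 255
}
```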

> On May 2, 2019, at 12:22 PM, lgod...@gmail.com wrote:
> 
> func main() {
>
> var c8 uint8
> var S [256]uint8
>
> for c8 = 0x00; c8 <= 0xff; c8 += 0x01 { S[c8] = c8 }
> }



[go-nuts] Why does this simple snip generate an infinite loop ?

2019-05-02 Thread lgodio2
func main() {
	var c8 uint8
	var S [256]uint8

	for c8 = 0x00; c8 <= 0xff; c8 += 0x01 {
		S[c8] = c8
	}
}



[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
It's not precisely the general functionality that I will implement for my 
transport, but here is a simple example of a classifier type processing 
queue:

https://play.golang.org/p/ytdrXgCdbQH

This processes a series of sequential integers and pops them into an array
to find the highest factor of a given range of numbers. The code I will
write soon is slightly different since, obviously, the code above is not
technically a queue. It does show how to make a non-deadlocking processing
queue, however.

Adding an actual queue, as in my intended purpose of bundling packets with a
common uuid, is not much further. Instead of just dropping the integers into
their position in the slice, it iterates over them as each item is received
to find a match; if it doesn't find enough, it puts the item back at the end
of the queue and waits for the next new item to arrive. I'll be writing that
shortly.

For that, I think the simple example would use an RNG to generate numbers
within the specified range, and then, for the example, it will continue to
accumulate numbers in the buffer until a recurrence occurs; the numbers are
then appended to the array and this index is ignored when another one comes
in later. That most closely models what I am building.
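The FIFO property being relied on can be shown in a few lines: a buffered channel hands values back in exactly the order they were sent, and an unmatched item can simply be re-queued at the tail. A minimal sketch:

```go
package main

import "fmt"

func main() {
	// A buffered channel is a FIFO queue: receives happen in send order.
	q := make(chan int, 8)
	for i := 1; i <= 3; i++ {
		q <- i
	}

	// Drain in order; an unmatched item could be put back with q <- v,
	// which appends it at the tail of the queue.
	for len(q) > 0 {
		fmt.Println(<-q)
	}
}
```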

On Thursday, 2 May 2019 13:26:47 UTC+2, Louki Sumirniy wrote:
>
> Yeah, I was able to think a bit more about it as I was falling asleep 
> later and I realised how I meant it to run. I had to verify that indeed 
> channels are FIFO queues, as that was the basis of this way of using them.
>
> The receiver channel is unbuffered, and lives in one goroutine. When it 
> receives something, it bounces it into the queue and for/range loops through 
> the contents of a fairly big-buffered working channel where items can 
> collect while they are fresh. Upon arrival of a new item, the new item 
> is checked for a match against the contents of the queue, and stale data is 
> kicked out (with the uuid of the stale set recorded so it can be immediately 
> dumped if any further packets got hung up and arrive way too long after).
>
> This differs a lot from the loopy design I made in the OP. In this design 
> there are only two threads instead of three. I think the geometry of a 
> channel pattern is important - specifically, everything needs to be done in 
> pairs with channels, although maybe sometimes you want it to receive but 
> not send anywhere, just store/drop, as the algorithm requires.
>
> I still need to think through the design a bit more. Like, perhaps the 
> queue channel *should* be a pair of one-direction channels so one is the 
> main fifo and the other side each item is taken off the queue, processed, 
> and then put back into the stream. Ordering is not important, except that 
> it is very handy that it is a FIFO because this means if I have a buffer 
> with some number, and get a new item, put it into the buffer queue, and 
> then the queue unpacks the newest item last. I think I could make it like 
> this, actually:
>
> one channel inbound receiver, it passes into a buffered queue channel, and 
> triggers the passing out of buffered items from the head of the queue to 
> watcher 1, 2, 3, each watcher process being a separate process that may 
> swallow or redirect the contents. For each new UUID item that comes in, a 
> single thread could be started that keeps reading, checking and (re) 
> directing the input as it passes out of the buffer and through the 
> watchers. Something like this:
>
> input -> buffer[64] -> (watcher 1) -> (watcher 2) -> buffer [64] 
>
> With this pattern I could have a new goroutine spawn for each new UUID 
> that marks out a batch, that springs a deadline tick and when the deadline 
> hits the watcher's buffer is cleared and the goroutine ends, implementing 
> expiry, and the UUID is attached to a simple buffered channel that keeps 
> the last 100 or so UUIDs and uses it to immediately identify stale junk 
> (presumably the main data type in the other channels is significantly 
> bigger data than the UUID integer - my intent is that the data type should 
> be a UDP packet so that means it is size restricted and contains something 
> arbitrary that watchers detect, decode and respond to.
>
> It's a work in progress, but I know from previous times writing code 
> dealing with simple batch/queue problems like this, that the Reader/Writer 
> pattern is most often used and requires a lot of slice fiddling implemented 
> using arrays/slices, but a buffered channel, being a FIFO, is a queue 
> buffer, so it can be used to store (relatively) ordered items that age as 
> they get to the head of the queue, and allow a check-pass on each item. 
>
> These checkers can 'return' to the next in line so the checker-queue, so 
> to speak, also has to be stored in some form of state. This could be done 
> by having that first receiver channel check first the list of fresh UUIDs 
> firstly, which would be a m

[go-nuts] Re: Websocket vs Template

2019-05-02 Thread ThisEndUp
Thanks. You're right.

After posting, I kept looking and got as far as starting to read about AJAX 
XMLHttpRequest, at which point I decided that templates wouldn't be a 
particularly pleasant way to accomplish my task.

On Thursday, May 2, 2019 at 2:45:33 AM UTC-4, amn...@gmail.com wrote:
>
> Use a websocket.
>
> templates would give you server side rendering which will not give you 
> live updates on the web page.
>
> Steer clear of x/net/websocket which is deprecated.
> Instead use the popular gorilla/websocket or the simpler nhooyr/websocket.
>
> On Wednesday, 1 May 2019 15:11:55 UTC+1, ThisEndUp wrote:
>>
>> I'm new to Go and am in the design phase of a project that will display 
>> live sensor data on a web page. The data are transferred via an MQTT 
>> broker. I have done such things in the past using a websocket but wonder if 
>> a template would be a more appropriate method. Any thoughts?
>> Thanks.
>>
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Robert Engels
The netmask is not part of the IP packet; it is local configuration in the router.

> On May 2, 2019, at 7:20 AM, Louki Sumirniy  
> wrote:
> 
> Upon review one thing occurred to me also - Netmasks are specifically a fast 
> way to decide at the router which direction a packet should go. The interface 
> netmask is part of the IP part of the header and allows the router to quickly 
> determine whether a packet should go to the external rather than internal 
> interface.
> 
> When you use the expression 'should x exist in todays internet', an unspoken 
> aspect of this has to do with IPv6, which does not have a formal NAT 
> specification, and 'local address' range that is as big as the whole IPv4 is 
> now. This serves a similar purpose for routing as a netmask in IPv4, but IPv6 
> specifically aims to solve the problem of allowing inbound routing to any 
> node. The address shortage that was resolved by CIDR and NAT is not relevant 
> to IPv6, and I believe, in general, applications are written to generate 
> valid addresses proactively and only change it in the rare case it randomly 
> selects an address already in use. This is an optimistic algorithm that can 
> save a lot of latency for a highly dynamic server application running on many 
> worker node machines.
> 
> Yes, it's long past due that we abandon IPv4 and NAT, peer to peer 
> applications and dynamic cloud applications are becoming the dominant form 
> for applications and the complexity of arranging peer to peer connections in 
> this environment is quite high compared to IPv6. IPv6 does not need masks as 
> they are built into the 128 bit address coding system.
> 
>> On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>> The function has a very specific purpose that I have encountered in several 
>> applications, that being to automatically set the netmask based on the IP 
>> being one of the several defined ones, 192, 10, and i forget which others. 
>> 
>> Incorrect netmask can result in not recognising a LAN address that is 
>> incorrect. A 192.168 network has 255 available addresses. You can't just 
>> presume to make a new 192.168.X... address with a /16, as no other correctly 
>> configured node in the LAN will be able to route to it due to it being a 
>> /16. 
>> 
>> If you consider the example of an elastic cloud type network environment, it 
>> is important that all nodes agree on netmask or they will become (partially) 
>> disconnected from each other. An app can be spun up for a few seconds and 
>> grab a new address from the range, this could be done with a broker (eg 
>> dhcp), but especially with cloud, one could use a /8 address range and 
>> randomly select out of the 16 million possible, a big enough space that 
>> random generally won't cause a collision - which is a cheaper allocation 
>> procedure than a list managing broker, and would be more suited to the 
>> dynamic cloud environment.
>> 
>> This function allows this type of client-side decisionmaking that a broker 
>> bottlenecks into a service, creating an extra startup latency cost. A 
>> randomly generated IP address takes far less time than sending a request to 
>> a centralised broker and receiving it.
>> 
>> That's just one example I can think of where a pre-made list of netmasks is 
>> useful, I'm sure more experienced network programmers can rattle off a 
>> laundry list.
>> 
>>> On Monday, 11 March 2019 20:45:32 UTC+1, John Dreystadt wrote:
>>> Yes, I was mistaken on this point. I got confused over someone's discussion 
>>> of RFC 1918 with what the standard actually said. I should have checked 
>>> closer before I posted that point. But I still don't see the reason for 
>>> this function. In today's networking, the actual value you should use for a 
>>> mask on an interface on the public Internet is decided by a combination of 
>>> the address range you have and how it is divided by your local networking 
>>> people. On the private networks, it is entirely up to the local networking 
>>> people. The value returned by this function is only a guess, and I think it 
>>> is more likely to mislead than to inform.
>>> 
 On Friday, March 8, 2019 at 12:51:41 PM UTC-5, Tristan Colgate wrote:
 Just on a point of clarity. DefaultMask is returning the mask associates 
 with the network class. RFC1918 specifies a bunch of class A,B and C 
 networks for private use. E.g. 192.168/16 is a set of 256 class C 
 networks. The correct netmask for one of those class Cs is 255.255.255.0 
 (/24). So the function returns the correct thing by the RFC.
   
 
> 
> 
> 


[go-nuts] Re: Map inside a struct

2019-05-02 Thread Luis Furquim
Hi,

Check if your method has a signature like this:
func (d DP) MyMethod()  { ... }


If so, change to this:
func (d *DP) MyMethod()  { ... }


I have made this error many times: a method with a value receiver changes a
copy of the object, so the change vanishes when the method returns. You must
use a pointer receiver to preserve your changes to the struct fields.
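A minimal sketch using the field names from the thread (the method name `initCat` is hypothetical): with a pointer receiver the map assignment survives the call, while with a value receiver the method would initialize a copy and the map would be lost on return:

```go
package main

import "fmt"

type PS struct{ A, B string }

// DP mirrors the struct from the thread.
type DP struct {
	PI  string
	cat map[string]PS
}

// Pointer receiver: the assignment to d.cat mutates the caller's value.
// Changing this to `func (d DP)` would mutate a copy instead.
func (d *DP) initCat() {
	d.cat = map[string]PS{"key": {A: "x", B: "y"}}
}

func main() {
	var d DP
	d.initCat()
	fmt.Println(len(d.cat)) // 1
}
```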

Cheers,
Luis Otavio de Colla Furquim


Em terça-feira, 30 de abril de 2019 11:31:14 UTC-3, mdmajid...@gmail.com 
escreveu:
>
> I have a map inside a struct like following - 
>
> type DP struct {
> PI  string
> cat map[string]PS
> }
>
> Here, PS is another struct having two string fields.
>
> There is method where I create DP struct, initialise the map and put a 
> key-value pair to it.
> I append this DP struct to an array and return from the method to its 
> caller (the array to which i append is part of the object which is used to 
> invoke the method).
>
> In the method, I can debug and see that the struct got created with map 
> and its key value pair, however when the method returns to the caller, the 
> struct is there PI and cat initialised, but the key value pairs disappear 
> from the map. Is this behavior expected? 
>



Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
Upon review one thing occurred to me also - Netmasks are specifically a 
fast way to decide at the router which direction a packet should go. The 
interface netmask is part of the IP part of the header and allows the 
router to quickly determine whether a packet should go to the external 
rather than internal interface.

When you use the expression 'should x exist in todays internet', an 
unspoken aspect of this has to do with IPv6, which does not have a formal 
NAT specification, and 'local address' range that is as big as the whole 
IPv4 is now. This serves a similar purpose for routing as a netmask in 
IPv4, but IPv6 specifically aims to solve the problem of allowing inbound 
routing to any node. The address shortage that was resolved by CIDR and NAT 
is not relevant to IPv6, and I believe, in general, applications are 
written to generate valid addresses proactively and only change it in the 
rare case it randomly selects an address already in use. This is an 
optimistic algorithm that can save a lot of latency for a highly dynamic 
server application running on many worker node machines.

Yes, it's long past due that we abandon IPv4 and NAT, peer to peer 
applications and dynamic cloud applications are becoming the dominant form 
for applications and the complexity of arranging peer to peer connections 
in this environment is quite high compared to IPv6. IPv6 does not need 
masks as they are built into the 128 bit address coding system.

On Thursday, 2 May 2019 14:09:09 UTC+2, Louki Sumirniy wrote:
>

Re: [go-nuts] Re: Should IP.DefaultMask() exist in today's Internet?

2019-05-02 Thread Louki Sumirniy
The function has a very specific purpose that I have encountered in several 
applications: automatically setting the netmask based on the IP falling into 
one of the classful private ranges (192.168, 10, and I forget which others). 

An incorrect netmask can result in a node failing to recognise a LAN 
address. A 192.168.x.0/24 network has 254 usable host addresses. You can't 
just presume to make a new 192.168.X... address with a /16, as no other 
correctly configured node in the LAN will be able to route to it due to it 
being a /16. 

If you consider the example of an elastic cloud type network environment, 
it is important that all nodes agree on the netmask or they will become 
(partially) disconnected from each other. An app can be spun up for a few 
seconds and grab a new address from the range. This could be done with a 
broker (e.g. DHCP), but especially in the cloud, one could use a /8 address 
range and randomly select from the roughly 16 million possible addresses, a 
big enough space that random selection generally won't cause a collision. 
That is a cheaper allocation procedure than a list-managing broker, and 
would be more suited to the dynamic cloud environment.

This function enables this type of client-side decision-making, which a 
broker otherwise bottlenecks into a service, adding startup latency. 
Generating an IP address locally takes far less time than sending a request 
to a centralised broker and waiting for the reply.

That's just one example I can think of where a pre-made list of netmasks is 
useful; I'm sure more experienced network programmers can rattle off a 
laundry list.

On Monday, 11 March 2019 20:45:32 UTC+1, John Dreystadt wrote:
>
> Yes, I was mistaken on this point. I got confused over someone's 
> discussion of RFC 1918 with what the standard actually said. I should have 
> checked closer before I posted that point. But I still don't see the reason 
> for this function. In today's networking, the actual value you should use 
> for a mask on an interface on the public Internet is decided by a 
> combination of the address range you have and how it is divided by your 
> local networking people. On the private networks, it is entirely up to the 
> local networking people. The value returned by this function is only a 
> guess, and I think it is more likely to mislead than to inform.
>
> On Friday, March 8, 2019 at 12:51:41 PM UTC-5, Tristan Colgate wrote:
>>
>> Just on a point of clarity: DefaultMask returns the mask associated 
>> with the network class. RFC 1918 specifies a bunch of class A, B and C 
>> networks for private use. E.g. 192.168/16 is a set of 256 class C networks. 
>> The correct netmask for one of those class Cs is 255.255.255.0 (/24). So 
>> the function returns the correct thing by the RFC.

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[go-nuts] Re: using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Louki Sumirniy
Yeah, I was able to think a bit more about it as I was falling asleep later 
and I realised how I meant it to run. I had to verify that indeed channels 
are FIFO queues, as that was the basis of this way of using them.

The receiver channel is unbuffered and lives in one goroutine. When it 
receives something, it bounces it into the queue and for/range loops through 
the contents of a fairly big-buffered working channel where items can 
collect while they are fresh. Upon arrival of a new item, the new item 
is checked for a match against the contents of the queue, and stale data is 
kicked out (recording the UUID of the stale set so it can be immediately 
dumped if any further packets got hung up and arrive way too late).

This differs a lot from the loopy design I made in the OP. In this design 
there are only two threads instead of three. I think the geometry of a 
channel pattern is important: specifically, everything needs to be done in 
pairs with channels, although maybe sometimes you want one side to receive 
but not send anywhere, just store/drop, as the algorithm requires.

I still need to think through the design a bit more. Perhaps the 
queue channel *should* be a pair of one-directional channels: one is the 
main FIFO, and on the other side each item is taken off the queue, processed, 
and then put back into the stream. Ordering is not important, except that 
it is very handy that it is a FIFO, because if I have a buffer holding 
some number of items and get a new item, I put it into the buffer queue, and 
then the queue unpacks the newest item last. I think I could make it like 
this, actually:

one inbound receiver channel passes into a buffered queue channel and 
triggers the passing out of buffered items from the head of the queue to 
watchers 1, 2, 3, each watcher being a separate process that may 
swallow or redirect the contents. For each new UUID item that comes in, a 
single thread could be started that keeps reading, checking and 
(re)directing the input as it passes out of the buffer and through the 
watchers. Something like this:

input -> buffer[64] -> (watcher 1) -> (watcher 2) -> buffer[64] 

With this pattern I could have a new goroutine spawn for each new UUID that 
marks out a batch. It starts a deadline tick, and when the deadline hits, 
the watcher's buffer is cleared and the goroutine ends, implementing 
expiry; the UUID is attached to a simple buffered channel that keeps 
the last 100 or so UUIDs and uses it to immediately identify stale junk 
(presumably the main data type in the other channels is significantly 
bigger than the UUID integer; my intent is that the data type should 
be a UDP packet, so it is size-restricted and contains something 
arbitrary that watchers detect, decode and respond to).

It's a work in progress, but I know from previous times writing code 
dealing with simple batch/queue problems like this that the Reader/Writer 
pattern is most often used and requires a lot of fiddling with 
arrays/slices. A buffered channel, being a FIFO, is a queue 
buffer, so it can be used to store (relatively) ordered items that age as 
they get to the head of the queue, and it allows a check-pass on each item. 

These checkers can 'return' to the next in line, so the checker-queue, so to 
speak, also has to be stored in some form of state. This could be done by 
having that first receiver channel check the list of fresh UUIDs 
first, which would be a map linking to the bundles that are made out of 
them; matches are pulled out of the buffer queue and attached to the 
bundles, which are processed when they have the required minimum of pieces 
or the thread times out, adds the UUID to the stale list/queue, and so on.

Attaching new watchers to the chain simply means changing the destination 
of the last watcher in the queue from the return to the buffer, to the 
input of the new watcher. When a watcher times out it signals its 
predecessor with its successor's location and the stale item is deleted 
from the watchers list.

It's going to be done... I probably need to look at some transport 
implementations in Go for some other clues. This was just my idea of how to 
build a minimum-latency batching system for receiving redundancy-coded UDP 
packets (Reed-Solomon encoded), with the main end goal being to 
have delivery of the data with a minimum of delay. The design I just 
sketched out allows for a lot of parallelisation with the 'watcher' 
processes, but managing that chain of receivers is obviously going to be 
some kind of overhead.

After a little thought I think I know how to implement the multi-watcher 
filter queue: when a watcher expires and its specific UUID is stale, it 
sends a message to its upline containing the expiring thread's 
subsequent queue member (next in queue in the direction of flow), which then 
redirects its output to bypass the stale thread.

[go-nuts] using channels to gather bundles from inputs for batch processing

2019-05-02 Thread Øyvind Teig
Hi, Louki Sumirniy

This is not really a response to your problem in particular, so it may totally 
miss your target. It's been a while since I did anything in this group. 
However, it's a response to the use of buffered channels. It's a coincidence 
that I react to your posting (and not the probably hundreds of others over the 
years where this comment may have been relevant). But I decided this morning to 
actually look into one of the group update mails, and there you were!

In a transcript from [1] Rob Pike says that 

“Now for those experts in the room who know about buffered channels in Go – 
which exist – you can create a channel with a buffer. And buffered channels 
have the property that they don’t synchronise when you send, because you can 
just drop a value in the buffer and keep going. So they have different 
properties. And they’re kind of subtle. They’re very useful for certain 
problems, but you don’t need them. And we’re not going to use them at all in 
our examples today, because I don’t want to complicate life by explaining them.”

I don't know if that statement is still valid, and I would not know whether 
your example is indeed one of the "certain problems" where you have got the 
correct usage. In that case, my comments below would be of less value in this 
concrete situation. I also don't know whether there is a more generic library 
in Go now that may help to get rid of buffered channels. Maybe even an output 
into a zero-buffered channel in a select with a timeout will do.

If you fill up a channel with data you would often need some state to know 
about what is in the channel. If it's a safety critical implementation you may 
not want to just drop the data into the channel and forget. If you need to 
restart the comms in some way you would need to flush the channel, without 
easily knowing what you are flushing. The message "fire in 1 second if not 
cancelled" comes through but you would not know that the "cancel!" message was 
what you had to flush in the channel. In any case, a full channel would be 
blocking anyhow - so you would have to take care of that. Or alternatively 
_know_ that the consumer always stays ahead of the buffered channel, which may 
be hard to know.

I guess there are several (more complex?) concurrency patterns available that 
may be used instead of the (simple?) buffered channel:

All of the patterns below would use synchronised rendezvous with zero-buffered 
channels that would let a server goroutine (task, process) never have to block 
to get rid of its data. After all, that's why one would use a buffered channel; 
so that one would not need to block. All of the below patterns move data, but I 
guess there may be patterns for moving access as well (like mobile channels). 
All would also be deadlock free.

The Overflow Buffer pattern uses a composite goroutine consisting of two inner 
goroutines. One Input that always accepts data and one Output that blocks to 
output, and in between there is a channel with Data one direction that never 
blocks and a channel with Data-sent back. If the Input has not got the 
Data-sent back then there is an overflow that may be handled by user code. See 
[2], figure 3.

Then there are the Knock-Come pattern [3] and a pattern like the XCHAN [4]. In 
the latter's appendix a Go solution is discussed.

- - - 

[1] Rob Pike: "Go concurrency patterns": 
https://www.youtube.com/watch?v=f6kdp27TYZs&sns=em at Google I/O 2012. 
Discussed in [5]

Disclaimer: there are no ads, no gifts, no incoming anything with my blog 
notes, just fun and expenses:

[2] http://www.teigfam.net/oyvind/pub/pub_details.html#NoBlocking - See Figure 3

[3] 
https://oyvteig.blogspot.com/2009/03/009-knock-come-deadlock-free-pattern.html 
Knock-come

[4] http://www.teigfam.net/oyvind/pub/pub_details.html#XCHAN - XCHANs: Notes on 
a New Channel Type

[5] 
http://www.teigfam.net/oyvind/home/technology/072-pike-sutter-concurrency-vs-concurrency/
 



Re: [go-nuts] Re: Discrepancy between htop and memstats

2019-05-02 Thread Michel Levieux
Hi Vladimir,

I'm gonna try that today, I'll keep you updated, thanks for the advice!

On Wed, May 1, 2019 at 11:07, Vladimir Varankin  wrote:

> Hi Michel,
>
> Have you tried collecting your program's heap profiles [1] (maybe once after
> each reload cycle)? Comparing pprof results should show you which objects
> are leaking memory.
>
> [1]: https://blog.golang.org/profiling-go-programs
>
> On Tuesday, April 30, 2019 at 3:36:34 PM UTC+2, Michel Levieux wrote:
>>
>> Hi all,
>>
>> I'm currently having a lot of trouble debugging the memory usage of the
>> program I'm working on. This program, everyday at a given time, reloads a
>> bunch of data (that's been updated by another program) into its memory. The
>> following function:
>>
>> // PrintMemUsage outputs the current, total and OS memory being used, as
>> // well as the number of garbage collection cycles completed.
>> func PrintMemUsage() {
>> var m runtime.MemStats
>> runtime.ReadMemStats(&m)
>> // For info on each, see: https://golang.org/pkg/runtime/#MemStats
>> // bToMb converts a byte count to MiB (b / 1024 / 1024).
>> fmt.Printf("Alloc = %v MiB\n", bToMb(m.Alloc))
>> fmt.Printf("TotalAlloc = %v MiB\n", bToMb(m.TotalAlloc))
>> fmt.Printf("Sys = %v MiB\n", bToMb(m.Sys))
>> fmt.Printf("NumGC = %v\n", m.NumGC)
>> }
>>
>> Outputs this:
>>
>> Alloc = 103861 MiB
>> TotalAlloc = 6634355 MiB
>> Sys = 232088 MiB
>> NumGC = 3822
>>
>> The program reloads its data every day, but each day the few references
>> that existed to the given data are overwritten by the new ones. And indeed
>> the "Alloc" line above seems to stay around 100 GB. What I don't understand
>> is that in the htop line for the program, the memory used
>> keeps growing as the days pass, and when it gets close
>> to 100% of the host's memory, the program eventually crashes.
>>
>> Any idea that would help me debug this memory inconsistency between
>> memstats and htop is welcome. I can also provide more information, but no
>> particular piece of code directly from the project.
>>
>> Thank you all in advance.
>>
