[go-nuts] Wmi-exporter add new collector

2019-03-17 Thread kunapaneni siddhartha
How do I create a new collector to get a metric from a class in, say, the 
root/microsoft namespace?

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[go-nuts] cgo:build trouble

2019-03-17 Thread liaoyue2019
package main

import (
)

/*
#include <stdio.h>
*/
import "C"

func main() {
	C.printf(C.CString("hello"))
}

// Building this file with go build, I got:
// ./testems.go:12:2: unexpected type: ...

Could someone help me with this trouble? Thanks!




Re: [go-nuts] Re: Go 1.12.1 and Go 1.11.6 are released

2019-03-17 Thread Ian Lance Taylor
On Sun, Mar 17, 2019 at 1:46 PM Serhat Şevki Dinçer  wrote:
>
> I see a regression on speed with sorty tests (go test -short -gcflags '-B -s' 
> -ldflags '-s -w') on my Intel Core i5-4210M laptop (also sortutil became 
> faster, zermelo float became much slower):

Please open an issue with full details.  Thanks.

Ian



[go-nuts] panic at fmt.Println

2019-03-17 Thread liaoyue2019
Hi,
I got a panic when I ran my test code.
My Go version is 'go version go1.12 linux/amd64'.

panic: reflect: call of reflect.Value.Type on zero Value

goroutine 6 [running]:
reflect.Value.Type(0x0, 0x0, 0x0, 0x4db0c0, 0xc183d0)
/usr/local/go/src/reflect/value.go:1813 +0x169
internal/fmtsort.compare(0x0, 0x0, 0x0, 0x4e34c0, 0xcb8110, 0xb4, 0x1)
/usr/local/go/src/internal/fmtsort/sort.go:76 +0x5a
internal/fmtsort.(*SortedMap).Less(0xcc2000, 0x9, 0x8, 0x100cb8100)
/usr/local/go/src/internal/fmtsort/sort.go:27 +0x87
sort.insertionSort(0x525540, 0xcc2000, 0x0, 0xa)
/usr/local/go/src/sort/sort.go:27 +0xab
sort.stable(0x525540, 0xcc2000, 0xa)
/usr/local/go/src/sort/sort.go:368 +0x86
sort.Stable(0x525540, 0xcc2000)
/usr/local/go/src/sort/sort.go:357 +0x53
internal/fmtsort.Sort(0x4e3a00, 0xc12678, 0x1b5, 0x20)
/usr/local/go/src/internal/fmtsort/sort.go:67 +0x2e3
fmt.(*pp).printValue(0xcb2000, 0x4e3a00, 0xc12678, 0x1b5, 0x76, 0x2)
/usr/local/go/src/fmt/print.go:759 +0xd1d
fmt.(*pp).printValue(0xcb2000, 0x4eb220, 0xc12670, 0x199, 0xc00076, 0x1)
/usr/local/go/src/fmt/print.go:796 +0x1b52
fmt.(*pp).printValue(0xcb2000, 0x4f89e0, 0xc12670, 0x16, 0x76, 0x0)
/usr/local/go/src/fmt/print.go:866 +0x199f
fmt.(*pp).printArg(0xcb2000, 0x4f89e0, 0xc12670, 0x76)
/usr/local/go/src/fmt/print.go:702 +0x2ba
fmt.(*pp).doPrintln(0xcb2000, 0xca9fa0, 0x2, 0x2)
/usr/local/go/src/fmt/print.go:1159 +0xb2
fmt.Fprintln(0x524b60, 0xc10018, 0xc387a0, 0x2, 0x2, 0x0, 0x0, 0x0)
/usr/local/go/src/fmt/print.go:265 +0x58
fmt.Println(...)
/usr/local/go/src/fmt/print.go:275
main.testMapLock.func1(0xc12670, 0xc18350)
/home/lbs/go/src/demo/test/testmap/testmap.go:78 +0x95
created by main.testMapLock
/home/lbs/go/src/demo/test/testmap/testmap.go:77 +0xfa


A piece of my code:

// ForEachRemove ranges over every node in the map, calls the handler,
// and deletes the node if the handler returns true.
func (em *EMSmap) ForEachRemove(h DoEachNode, userData interface{}) {
	em.m.Lock()
	defer em.m.Unlock()
	for key, value := range em.emap {
		userData := userData
		if b := h(key, value, userData); b {
			delete(em.emap, key)
		}
	}
}

// FindAndRemove stores the node in a temporary variable, deletes it from
// the map, and returns the stored node.
func (em *EMSmap) FindAndRemove(key interface{}) interface{} {
	em.m.Lock()
	defer em.m.Unlock()
	value, ok := em.emap[key]
	if !ok {
		return nil
	}
	delete(em.emap, key)
	return value
}


func testMapLock() {
	var callMap *emsutil.EMSmap = context.CallMap
	var wgt sync.WaitGroup
	for i := 0; i < 10; i++ {
		callMap.Put(i, i)
	}
	wgt.Add(2)
	go func() {
		fmt.Println("ForEachRemove", callMap)
		time.Sleep(time.Duration(10) * time.Millisecond)
		callMap.ForEachRemove(func(key interface{}, value interface{}, userData interface{}) bool {
			fmt.Println("delete by ForEachRemove", key, value)
			time.Sleep(time.Duration(2) * time.Millisecond)
			return true
		}, nil)
		wgt.Done()
	}()
	go func() {
		fmt.Println("FindAndRemove", callMap)
		for i := 0; i < 10; i++ {
			val := callMap.FindAndRemove(i)
			fmt.Println("delete by FindAndRemove", val)
			time.Sleep(time.Duration(2) * time.Millisecond)
		}
		wgt.Done()
	}()
	wgt.Wait()
	wg.Done()
}



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Devon H. O'Dell
Op zo 17 mrt. 2019 om 15:28 schreef Louki Sumirniy
:
> As my simple iterative example showed, given the same sequence of events, 
> channels are deterministic, so this is an approach that is orthogonal but in 
> the same purpose - to prevent multiple concurrent agents from desynchronising 
> shared state, without blocking everything before access. It's not a journal 
> but the idea is to have each goroutine acting on the final state of the value 
> at the time it is invoked to operate on it. So you let everyone at it but 
> everyone stops at this barrier and checks if anyone else changed it and then 
> they try again to have a conflict free access.

Channels are deterministic in the sense that if items A and B are
enqueued in that order, they will be delivered in that order. They are
not deterministic in two ways: if items A and B are enqueued
concurrently, the order could be A B or B A; if two consumers of the
channel read from it concurrently, it is not deterministic which will
read.

This doesn't seem to me to be related in any way to locking / mutual
exclusion. The purpose of mutual exclusion is to serialize some
execution history. In other words, the idea is to make an interleaved
/ concurrent execution history of some set of instructions impossible.
That's not what a channel does. While a channel will act like a FIFO,
when multiple concurrent producers and / or consumers are present, one
cannot say which concurrent producer or consumer "wins" reading or
writing. So this pattern is really only meaningful for SPSC workloads.

What you term "access contention" seems to me somewhat analogous to
"data race". Assuming "somestate" is a pointer to some shared mutable
state, you might be able to determine whether that state changed after
the channel operation, but:

a) You're still susceptible to ABA problems when the state does not
change deterministically.
b) Even with a deterministically mutated state (such as a counter
increasing by a constant factor), you still have TOCTOU (time-of-check
versus time-of-use) issues. If your test of the state after the
channel operation doesn't show that the state has changed, what
guarantees that the state hasn't changed after your check? It's
turtles all the way down from here.
c) Both the assignment of "somestate" and a read from a channel create
a copy of the thing that was yielded from the function or was sent
over the channel, respectively. Unless that thing is a pointer to
shared mutable state, that state can't be observed to change via
external influence, anyway.

The channel use you've demonstrated is much more analogous to a binary
semaphore than any sort of lock in the sense that readers block on a
binary condition (whether or not the channel contains a value). You
can implement a mutex on a binary semaphore, but you need some notion
of "lock" and "unlock". To achieve locking semantics with an
unbuffered channel, the channel must begin in a non-empty state. When
the channel is read from, the state transitions to "locked". When the
lock is meant to be released, the releasing process writes to the
channel. This is perilous:

1) if you ever accidentally write twice, you no longer have the
guarantee of mutual exclusion, and
2) "unlocking" blocks until there's a new process wishing to acquire the lock.

You can solve the second point with a buffered channel with a capacity
of 1, but you seem to be explicitly talking about unbuffered channels.
The first point can be solved, but I don't think it's super relevant.

If all you want to do is determine that some state was valid at the
time it was accessed, atomics or mutexes are the correct approach. If
you want to have multiple concurrent processes "agree" that they're
both working with the same state for the entire duration of some
execution history, you still need some sort of consensus protocol.

--dho

> On Sunday, 17 March 2019 20:52:12 UTC+1, Robert Engels wrote:
>>
>> https://g.co/kgs/2Q3a5n
>>
>> On Mar 17, 2019, at 2:36 PM, Louki Sumirniy  wrote:
>>
>> So I am incorrect that only one goroutine can access a channel at once? I 
>> don't understand, only one select or receive or send can happen at one 
>> moment per channel, so that means that if one has started others can't start.
>>
>> I was sure this was the case and this seems to confirm it:
>>
>> https://stackoverflow.com/a/19818448
>>
>> https://play.golang.org/p/NQGO5-jCVz
>>
>> In this it is using 5 competing receivers but every time the last one in 
>> line gets it, so there is scheduling and priority between two possible 
>> receivers when a channel is filled.
>>
>> This is with the range statement, of course, but I think the principle I am 
>> seeing here is that in all cases it's either one to one between send and 
>> receive, or one to many, or many from one, one side only receives the other 
>> only sends. If you consider each language element in the construction, and 
>> the 'go' to be something like a unix fork(), this means that the first 
>> 

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
This was a good link to follow:

https://en.wikipedia.org/wiki/Bulk_synchronous_parallel

led me here:

https://en.wikipedia.org/wiki/Automatic_mutual_exclusion

and then to here:

https://en.wikipedia.org/wiki/Transactional_memory

I think this is the pattern for implementing this using channels as an 
optimistic resource state lock during access:

// in outside scope

mx := make(chan bool)

// routine that needs exclusive read or write on a variable:

go func() {
	for {
		somestate := doSomething()
		<-mx
		if currentState() == somestate {
			break
		}
	}
}()

mx <- true

This is not a strict locking mechanism but a way to catch access 
contention. somestate might be a nanosecond timestamp, or a value that is 
only read and always incremented by every accessor, signifying the number 
of accesses; the synchronisation state before the channel is emptied can 
then be compared to the state after, and if no other access incremented 
that value, the goroutine knows it can continue with the state being 
correctly shared. I am deeply fascinated by distributed systems 
programming, and what suits this type of scheduling system better when 
dealing with potentially large and complex state (like a database) is to 
take note of the access sequence. If we didn't have the possibility of one 
central counter that only increments, the event could be tagged with a 
value derived from the event that called it and the result of the next 
event.

As my simple iterative example showed, given the same sequence of events, 
channels are deterministic, so this is an approach that is orthogonal but 
in the same purpose - to prevent multiple concurrent agents from 
desynchronising shared state, without blocking everything before access. 
It's not a journal but the idea is to have each goroutine acting on the 
final state of the value at the time it is invoked to operate on it. So you 
let everyone at it but everyone stops at this barrier and checks if anyone 
else changed it and then they try again to have a conflict free access.

On Sunday, 17 March 2019 20:52:12 UTC+1, Robert Engels wrote:
>
> https://g.co/kgs/2Q3a5n
>
> On Mar 17, 2019, at 2:36 PM, Louki Sumirniy  > wrote:
>
> So I am incorrect that only one goroutine can access a channel at once? I 
> don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> I was sure this was the case and this seems to confirm it:
>
> https://stackoverflow.com/a/19818448
>
> https://play.golang.org/p/NQGO5-jCVz
>
> In this it is using 5 competing receivers but every time the last one in 
> line gets it, so there is scheduling and priority between two possible 
> receivers when a channel is filled. 
>
> This is with the range statement, of course, but I think the principle I 
> am seeing here is that in all cases it's either one to one between send and 
> receive, or one to many, or many from one, one side only receives the other 
> only sends. If you consider each language element in the construction, and 
> the 'go' to be something like a unix fork(), this means that the first 
> statement inside the goroutine and the very next one in the block where the 
> goroutine is started potentially can happen at the same time, but only one 
> can happen at a time.
>
> So the sequence puts the receive at the end of the goroutine, which 
> presumably is cleared to run, whereas the parent where it is started is 
> waiting to have a receiver on the other end.
>
> If there is another thread in line to access that channel, at any given 
> time only one can be on the other side of send or receive. That code shows 
> that there is a deterministic order to this, so if I have several 
> goroutines running each one using this same channel lock to cover a small 
> number of mutable shared objects, only one can use the channel at once. 
> Since the chances are equal whether one or the other gets the channel at 
> any given time, but it is impossible that two can be running the accessor 
> code at the same time.
>
> Thus, this construction is a mutex because it prevents more than one 
> thread accessing at a time. It makes sense to me since it takes several 
> instructions to read and write variables copying to register or memory. If 
> you put two slots in the buffer, it can run in parallel, that's the point, 
> a single element in the channel means only one access at a time and thus it 
> is a bottleneck that protects from simultaneous read and write by parallel 
> threads.
>
> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>>
>> > My understanding of channels is they basically create exclusion by 
>> control of the path of execution, instead of using callbacks, or they 
>> bottleneck via the cpu thread which is the reader and writer of this shared 
>> data anyway.
>>
>> The language specification never mentions CPU threads. Reasoning about 
>> the 

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
Ah yes, probably 'loop1' and 'loop2' would be more accurate names.

Yes, the number of each routine is in that 'i' variable; those other labels 
just denote the position within the loops, before and after sending, and 
the state of the truthstate variable that is only accessed inside the 
goroutine.

Yes, this is a very artificial example, because the triggering of send 
operations is what puts all the goroutines into action. The serial nature 
of the outer part in main means that the sequentially sent and received 
messages will mostly always come out in the same order as they go in (I'd 
say pretty much always, except maybe if there was a lot of heavy 
competition for goroutines compared to the supply of CPU threads).

If, in an event-driven structure, multiple workers need to share and notify 
state through state variables, one goroutine might send and then another 
runs and receives. So maybe I need a little more code in the goroutine 
after it empties the channel that verifies, by reading, that the state 
hasn't changed, and starts again if it has.

So, ok, I guess the topic is kinda wrongly labeled. I am just looking for a 
way to queue read and write access to a couple of variables; order doesn't 
matter, just that only one read or write is happening at any given moment. 
To be a complete example it would need randomised and deliberately 
congested spawning of threads competing to push to the channel, and inside 
the loop it should have a boolean or maybe a 'last modified' stamp that, 
after it empties the channel, it checks hasn't changed, restarting if it 
has.

But yes, that could still get into a deadlock if somehow two routines get 
into a perfect rhythm with each other.

I will have to think more about this. For the code I am trying to fix, I 
suppose it would help if I understood its logic flow before I just try to 
prevent contention by changing the flow, if this is possible.

On Sunday, 17 March 2019 21:59:30 UTC+1, Devon H. O'Dell wrote:
>
> I like to think of a channel as a concurrent messaging queue. You can 
> do all sorts of things with such constructs, including implementing 
> mutual exclusion constructs, but that doesn't mean that one is the 
> other. 
>
> Your playground example is a bit weird and very prone to various kinds 
> of race conditions that indicate that it may not be doing what you 
> expect. At a high level, your program loops 10 times. Each loop 
> iteration spawns a new concurrent process that attempts to read a 
> value off of a channel three times. Each iteration of the main loop 
> writes exactly one value into the channel. 
>
> As a concurrent queue, writes to the channel can be thought of as 
> appending an element, reads can be thought of as removing an element 
> from the front. 
>
> Which goroutine will read any individual value is not deterministic. 
> Since you're only sending 11 values over the channel, but spawn 10 
> goroutines that each want to read 3 values, you have at best 6 
> goroutines still waiting for data to be sent (and at worse, all 10) at 
> the time the program exits. 
>
> I would also point out that this is not evidence of mutual exclusion. 
> Consider a case where the work performed after the channel read 
> exceeds the time it takes for the outer loop to write a new value to 
> the channel. In that case, another goroutine waiting on the channel 
> would begin executing. This is not mutual exclusion. In this regard, 
> the example you've posted is more like a condition variable or monitor 
> than it is like a mutex. 
>
> Also note that in your second playground post, you're spawning 12 
> goroutines, so I'm not sure what "goroutine1" and "goroutine2" are 
> supposed to mean. 
>
> Kind regards, 
>
> --dho 
>
> Op zo 17 mrt. 2019 om 13:07 schreef Louki Sumirniy 
> >: 
> > 
> > https://play.golang.org/p/13GNgAyEcYv 
> > 
> > I think this demonstrates how it works quite well, it appears that 
> threads stick to channels, routine 0 always sends first and 1 always 
> receives, and this makes sense as this is the order of their invocation. I 
> could make more parallel threads but clearly this works as a mutex and only 
> one thread gets access to the channel per send/receive (one per side). 
> > 
> > On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote: 
> >> 
> >> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy <
> louki.sumir...@gmail.com> wrote: 
> >> 
> >> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway. 
> >> 
> >> The language specification never mentions CPU threads. Reasoning about 
> the language semantics in terms of CPU threads is not applicable. 
> >> 
> >> Threads are mentioned twice in the Memory Model document. In both cases 
> I think it's a mistake and we should s/threads/goroutines/ without loss of 
> 

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Devon H. O'Dell
I like to think of a channel as a concurrent messaging queue. You can
do all sorts of things with such constructs, including implementing
mutual exclusion constructs, but that doesn't mean that one is the
other.

Your playground example is a bit weird and very prone to various kinds
of race conditions that indicate that it may not be doing what you
expect. At a high level, your program loops 10 times. Each loop
iteration spawns a new concurrent process that attempts to read a
value off of a channel three times. Each iteration of the main loop
writes exactly one value into the channel.

As a concurrent queue, writes to the channel can be thought of as
appending an element, reads can be thought of as removing an element
from the front.

Which goroutine will read any individual value is not deterministic.
Since you're only sending 11 values over the channel, but spawn 10
goroutines that each want to read 3 values, you have at best 6
goroutines still waiting for data to be sent (and at worst, all 10) at
the time the program exits.

I would also point out that this is not evidence of mutual exclusion.
Consider a case where the work performed after the channel read
exceeds the time it takes for the outer loop to write a new value to
the channel. In that case, another goroutine waiting on the channel
would begin executing. This is not mutual exclusion. In this regard,
the example you've posted is more like a condition variable or monitor
than it is like a mutex.

Also note that in your second playground post, you're spawning 12
goroutines, so I'm not sure what "goroutine1" and "goroutine2" are
supposed to mean.

Kind regards,

--dho

Op zo 17 mrt. 2019 om 13:07 schreef Louki Sumirniy
:
>
> https://play.golang.org/p/13GNgAyEcYv
>
> I think this demonstrates how it works quite well, it appears that threads 
> stick to channels, routine 0 always sends first and 1 always receives, and 
> this makes sense as this is the order of their invocation. I could make more 
> parallel threads but clearly this works as a mutex and only one thread gets 
> access to the channel per send/receive (one per side).
>
> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>>
>> > My understanding of channels is they basically create exclusion by control 
>> > of the path of execution, instead of using callbacks, or they bottleneck 
>> > via the cpu thread which is the reader and writer of this shared data 
>> > anyway.
>>
>> The language specification never mentions CPU threads. Reasoning about the 
>> language semantics in terms of CPU threads is not applicable.
>>
>> Threads are mentioned twice in the Memory Model document. In both cases I 
>> think it's a mistake and we should s/threads/goroutines/ without loss of 
>> correctness.
>>
>> Channel communication establish happen-before relations (see Memory Model). 
>> I see nothing equivalent directly to a critical section in that behavior, at 
>> least as far as when observed from outside. It was mentioned before that 
>> it's possible to _construct a mutex_ using a channel. I dont think that 
>> implies channel _is a mutex_ from the perspective of a program performing 
>> channel communication. The particular channel usage pattern just has the 
>> same semantics as a mutex.
>>
>> --
>>
>> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Robert Engels
Then use a mutex or an atomic spin lock (although that may have issues in 
the current Go implementation).

> On Mar 17, 2019, at 3:56 PM, Louki Sumirniy 
>  wrote:
> 
> I am pretty sure the main cause of deadlocks not having senders and receivers 
> in pairs in the execution path such that senders precede receivers. Receivers 
> wait to get something, and in another post here I showed a playground that 
> demonstrates that if there is one channel only one thread is every accessing 
> them (because the code has those variables only accessed in there). In a 
> nondeterministic input situation where a listener might trigger a send (and 
> run this protected code), it is still going to be one or another is in front 
> of the queue, in this case we are not concerned with sequence only excluding 
> simultaneous read/write operations.
> 
> I would not use a buffered channel to implement a mutex, as this implicitly 
> means two or more threads can read and write variables inside the goroutine.
> 
> That was my main question, as I want to use the lightest possible mechanism 
> to simply control that only one reader or writer is working at one moment on 
> two variables. that the race detector is flagging in my code.
> 
>> On Sunday, 17 March 2019 20:51:33 UTC+1, Jan Mercl  wrote:
>> On Sun, Mar 17, 2019 at 8:36 PM Louki Sumirniy  
>> wrote:
>> 
>> > So I am incorrect that only one goroutine can access a channel at once? I 
>> > don't understand, only one select or receive or send can happen at one 
>> > moment per channel, so that means that if one has started others can't 
>> > start. 
>> 
>> All channel operations can be safely used by multiple, concurrently 
>> executing goroutines. The black box inside the channel implementation can do 
>> whatever it likes as long as it follows the specs and memory model. But from 
>> the outside, any goroutine can safely send to, read from, close or query 
>> length of a channel at any time without any explicit synchronization 
>> whatsoever. By safely I mean "without creating a data race just by executing 
>> the channel operation". The black box takes care of that.
>> 
>> However, the preceding _does not_ mean any combination of channel operations 
>> performed by multiple goroutines is always sane and that it will, for 
>> example, never deadlock. But that's a different story.
>> 
>> -- 
>> -j
>> 
> 



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I am pretty sure the main cause of deadlocks is not having senders and 
receivers in pairs in the execution path such that senders precede 
receivers. Receivers wait to get something, and in another post here I 
showed a playground that demonstrates that if there is one channel, only 
one thread is ever accessing the variables (because the code accesses those 
variables only in there). In a nondeterministic input situation where a 
listener might trigger a send (and run this protected code), it is still 
going to be one or the other at the front of the queue; in this case we are 
not concerned with sequence, only with excluding simultaneous read/write 
operations.

I would not use a buffered channel to implement a mutex, as this implicitly 
means two or more threads can read and write variables inside the goroutine.

That was my main question, as I want to use the lightest possible mechanism 
to simply ensure that only one reader or writer is working at any moment on 
the two variables that the race detector is flagging in my code.

On Sunday, 17 March 2019 20:51:33 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 8:36 PM Louki Sumirniy  > wrote:
>
> > So I am incorrect that only one goroutine can access a channel at once? 
> I don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> All channel operations can be safely used by multiple, concurrently 
> executing goroutines. The black box inside the channel implementation can 
> do whatever it likes as long as it follows the specs and memory model. But 
> from the outside, any goroutine can safely send to, read from, close or 
> query length of a channel at any time without any explicit synchronization 
> whatsoever. By safely I mean "without creating a data race just by 
> executing the channel operation". The black box takes care of that.
>
> However, the preceding _does not_ mean any combination of channel 
> operations performed by multiple goroutines is always sane and that it 
> will, for example, never deadlock. But that's a different story.
>
> -- 
>
> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Robert Engels
Without reading too deeply, and only judging based on your statements, it seems 
you are confusing implementation with specification. The things you cite are 
subject to change. You need to reason based on the specification not the 
observed behavior. Then use the observed behavior to argue that it doesn’t meet 
the specification. 

> On Mar 17, 2019, at 3:50 PM, Louki Sumirniy 
>  wrote:
> 
> https://play.golang.org/p/Kz9SsFeb1iK
> 
> This prints something at each interstice of the execution path and it is of 
> course deterministic.
> 
> I think the reason the range loop always chooses one per channel, the last 
> one in line, is that it uses a LIFO queue, so the last in line gets filled 
> first.
> 
> The example in there shows, with 1, 2 and 3 slots in the buffer, that the 
> exclusion only occurs properly with the single slot: the first goroutine 
> always sends and the second always receives. This is because of the order 
> of execution. If the sends were externally determined and random, writes 
> would still only happen in one location. If you only read and write these 
> variables inside this loop, they can never be clobbered while you read 
> them, which is the contention that a mutex resolves by determining the 
> sequence of execution.
> 
> The main costs I see: though the overhead is lower, there is still 
> overhead, so there is some reasonable ratio between how short the code you 
> execute in a given moment is and how long other competing parallel threads 
> with external, non-deterministic inputs will hold the value locked; longer 
> than that and you add response latency. The scheduling overhead escalates 
> with the number of threads and also costs memory.
> 
> But my main point is that it functions correctly as a mutex mechanism and 
> code inside the goroutine can count on nobody else accessing the variables 
> that are only read and written inside it.
> 
>> On Sunday, 17 March 2019 20:36:40 UTC+1, Louki Sumirniy wrote:
>> So I am incorrect that only one goroutine can access a channel at once? I 
>> don't understand, only one select or receive or send can happen at one 
>> moment per channel, so that means that if one has started others can't 
>> start. 
>> 
>> I was sure this was the case and this seems to confirm it:
>> 
>> https://stackoverflow.com/a/19818448
>> 
>> https://play.golang.org/p/NQGO5-jCVz
>> 
>> In this it is using 5 competing receivers but every time the last one in 
>> line gets it, so there is scheduling and priority between two possible 
>> receivers when a channel is filled. 
>> 
>> This is with the range statement, of course, but I think the principle I am 
>> seeing here is that in all cases it's either one to one between send and 
>> receive, or one to many, or many from one, one side only receives the other 
>> only sends. If you consider each language element in the construction, and 
>> the 'go' to be something like a unix fork(), this means that the first 
>> statement inside the goroutine and the very next one in the block where the 
>> goroutine is started potentially can happen at the same time, but only one 
>> can happen at a time.
>> 
>> So the sequence puts the receive at the end of the goroutine, which 
>> presumably is cleared to run, whereas the parent where it is started is 
>> waiting to have a receiver on the other end.
>> 
>> If there is another thread in line to access that channel, at any given time 
>> only one can be on the other side of send or receive. That code shows that 
>> there is a deterministic order to this, so if I have several goroutines 
>> running each one using this same channel lock to cover a small number of 
>> mutable shared objects, only one can use the channel at once. Since the 
>> chances are equal whether one or the other gets the channel at any given 
>> time, but it is impossible that two can be running the accessor code at the 
>> same time.
>> 
>> Thus, this construction is a mutex because it prevents more than one thread 
>> accessing at a time. It makes sense to me since it takes several 
>> instructions to read and write variables copying to register or memory. If 
>> you put two slots in the buffer, it can run in parallel, that's the point, a 
>> single element in the channel means only one access at a time and thus it is 
>> a bottleneck that protects from simultaneous read and right by parallel 
>> threads.
>> 
>>> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>>> wrote:
>>> 
>>> > My understanding of channels is they basically create exclusion by 
>>> > control of the path of execution, instead of using callbacks, or they 
>>> > bottleneck via the cpu thread which is the reader and writer of this 
>>> > shared data anyway.
>>> 
>>> The language specification never mentions CPU threads. Reasoning about the 
>>> language semantics in terms of CPU threads is not applicable.
>>> 
>>> Threads are mentioned twice in the Memory Model document. In both 

Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
https://play.golang.org/p/Kz9SsFeb1iK

This prints something at each interstice of the execution path and it is of 
course deterministic.

I think the reason the range loop always chooses one receiver per channel, 
the last one in order, is that it uses a LIFO queue, so the last in line 
gets filled first.

The example in there shows that with 1, 2 and 3 slots in the buffer, the 
exclusion only occurs properly with the single slot: the first goroutine 
always sends and the second always receives. This is because of the order 
of execution. If the sends were externally determined and random, the value 
would still only get written in one location. If you only read and write 
these variables inside this loop, they can never be clobbered while you 
read them, which is the contention a mutex resolves by determining the 
sequence of execution.

The main costs I see are that, though the overhead is lower, there is still 
overhead, so there is a reasonable ratio to keep between the amount of code 
you execute while holding the channel and the latency you add: competing 
parallel threads with external, non-deterministic inputs will hold the 
value locked for longer than you want, adding response latency. The 
scheduling overhead escalates with the number of threads and also costs memory.

But my main point is that it functions correctly as a mutex mechanism and 
code inside the goroutine can count on nobody else accessing the variables 
that are only read and written inside it.

On Sunday, 17 March 2019 20:36:40 UTC+1, Louki Sumirniy wrote:
>
> So I am incorrect that only one goroutine can access a channel at once? I 
> don't understand, only one select or receive or send can happen at one 
> moment per channel, so that means that if one has started others can't 
> start. 
>
> I was sure this was the case and this seems to confirm it:
>
> https://stackoverflow.com/a/19818448
>
> https://play.golang.org/p/NQGO5-jCVz
>
> In this it is using 5 competing receivers but every time the last one in 
> line gets it, so there is scheduling and priority between two possible 
> receivers when a channel is filled. 
>
> This is with the range statement, of course, but I think the principle I 
> am seeing here is that in all cases it's either one to one between send and 
> receive, or one to many, or many from one, one side only receives the other 
> only sends. If you consider each language element in the construction, and 
> the 'go' to be something like a unix fork(), this means that the first 
> statement inside the goroutine and the very next one in the block where the 
> goroutine is started potentially can happen at the same time, but only one 
> can happen at a time.
>
> So the sequence puts the receive at the end of the goroutine, which 
> presumably is cleared to run, whereas the parent where it is started is 
> waiting to have a receiver on the other end.
>
> If there is another thread in line to access that channel, at any given 
> time only one can be on the other side of send or receive. That code shows 
> that there is a deterministic order to this, so if I have several 
> goroutines running each one using this same channel lock to cover a small 
> number of mutable shared objects, only one can use the channel at once. 
> Since the chances are equal whether one or the other gets the channel at 
> any given time, but it is impossible that two can be running the accessor 
> code at the same time.
>
> Thus, this construction is a mutex because it prevents more than one 
> thread accessing at a time. It makes sense to me since it takes several 
> instructions to read and write variables copying to register or memory. If 
> you put two slots in the buffer, it can run in parallel, that's the point, 
> a single element in the channel means only one access at a time and thus it 
> is a bottleneck that protects from simultaneous read and write by parallel 
> threads.
>
> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>>
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>>
>> > My understanding of channels is they basically create exclusion by 
>> control of the path of execution, instead of using callbacks, or they 
>> bottleneck via the cpu thread which is the reader and writer of this shared 
>> data anyway.
>>
>> The language specification never mentions CPU threads. Reasoning about 
>> the language semantics in terms of CPU threads is not applicable.
>>
>> Threads are mentioned twice in the Memory Model document. In both cases I 
>> think it's a mistake and we should s/threads/goroutines/ without loss of 
>> correctness.
>>
>> Channel communication establish happen-before relations (see Memory 
>> Model). I see nothing equivalent directly to a critical section in that 
>> behavior, at least as far as when observed from outside. It was mentioned 
>> before that it's possible to _construct a mutex_ using a channel. I dont 
>> think that implies channel _is a mutex_ from the perspective of a program 
>> performing channel communication. 

[go-nuts] Re: Go 1.12.1 and Go 1.11.6 are released

2019-03-17 Thread Serhat Şevki Dinçer
Hi,

I see a speed regression with sorty tests 
(go test -short -gcflags '-B -s' -ldflags '-s -w') on my Intel Core 
i5-4210M laptop (also, sortutil became faster and zermelo float became much 
slower):

with *go 1.12.1*

Sorting uint32
sortutil took 18.64s
zermelo took 10.92s
sorty-2 took 17.10s
sorty-3 took 14.22s
sorty-4 took 12.36s
sorty-5 took 12.10s

Sorting float32
sortutil took 18.03s
zermelo took 14.57s
sorty-2 took 19.27s
sorty-3 took 15.82s
sorty-4 took 13.93s
sorty-5 took 13.90s

with *go 1.11.6* (consistent with 1.11.5)

Sorting uint32
sortutil took 25.18s
zermelo took 10.93s
sorty-2 took 15.85s
sorty-3 took 13.05s
sorty-4 took 11.27s
sorty-5 took 11.05s

Sorting float32
sortutil took 23.69s
zermelo took 8.89s
sorty-2 took 19.25s
sorty-3 took 15.42s
sorty-4 took 13.19s
sorty-5 took 13.09s


On Friday, March 15, 2019 at 12:13:07 AM UTC+3, Katie Hockman wrote:
>
> Hello gophers,
>
> We have just released Go versions 1.12.1 and 1.11.6, minor point releases.
>
> These releases include fixes to cgo, the compiler, the go command,
>
> and the fmt, net/smtp, os, path/filepath, sync, and template packages.
>
> View the release notes for more information:
> https://golang.org/doc/devel/release.html#go1.12.minor
>
> You can download binary and source distributions from the Go web site:
> https://golang.org/dl/
>
> To compile from source using a Git clone, update to the release with
> "git checkout go1.12.1" and build as usual.
>
> Thanks to everyone who contributed to the release.
>
> Cheers,
> Katie for the Go team
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
https://play.golang.org/p/13GNgAyEcYv

I think this demonstrates how it works quite well, it appears that threads 
stick to channels, routine 0 always sends first and 1 always receives, and 
this makes sense as this is the order of their invocation. I could make 
more parallel threads but clearly this works as a mutex and only one thread 
gets access to the channel per send/receive (one per side).

On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  > wrote:
>
> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway.
>
> The language specification never mentions CPU threads. Reasoning about the 
> language semantics in terms of CPU threads is not applicable.
>
> Threads are mentioned twice in the Memory Model document. In both cases I 
> think it's a mistake and we should s/threads/goroutines/ without loss of 
> correctness.
>
> Channel communication establish happen-before relations (see Memory 
> Model). I see nothing equivalent directly to a critical section in that 
> behavior, at least as far as when observed from outside. It was mentioned 
> before that it's possible to _construct a mutex_ using a channel. I dont 
> think that implies channel _is a mutex_ from the perspective of a program 
> performing channel communication. The particular channel usage pattern just 
> has the same semantics as a mutex.
>
> -- 
>
> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Robert Engels
https://g.co/kgs/2Q3a5n

> On Mar 17, 2019, at 2:36 PM, Louki Sumirniy 
>  wrote:
> 
> So I am incorrect that only one goroutine can access a channel at once? I 
> don't understand, only one select or receive or send can happen at one moment 
> per channel, so that means that if one has started others can't start. 
> 
> I was sure this was the case and this seems to confirm it:
> 
> https://stackoverflow.com/a/19818448
> 
> https://play.golang.org/p/NQGO5-jCVz
> 
> In this it is using 5 competing receivers but every time the last one in line 
> gets it, so there is scheduling and priority between two possible receivers 
> when a channel is filled. 
> 
> This is with the range statement, of course, but I think the principle I am 
> seeing here is that in all cases it's either one to one between send and 
> receive, or one to many, or many from one, one side only receives the other 
> only sends. If you consider each language element in the construction, and 
> the 'go' to be something like a unix fork(), this means that the first 
> statement inside the goroutine and the very next one in the block where the 
> goroutine is started potentially can happen at the same time, but only one 
> can happen at a time.
> 
> So the sequence puts the receive at the end of the goroutine, which 
> presumably is cleared to run, whereas the parent where it is started is 
> waiting to have a receiver on the other end.
> 
> If there is another thread in line to access that channel, at any given time 
> only one can be on the other side of send or receive. That code shows that 
> there is a deterministic order to this, so if I have several goroutines 
> running each one using this same channel lock to cover a small number of 
> mutable shared objects, only one can use the channel at once. Since the 
> chances are equal whether one or the other gets the channel at any given 
> time, but it is impossible that two can be running the accessor code at the 
> same time.
> 
> Thus, this construction is a mutex because it prevents more than one thread 
> accessing at a time. It makes sense to me since it takes several instructions 
> to read and write variables copying to register or memory. If you put two 
> slots in the buffer, it can run in parallel, that's the point, a single 
> element in the channel means only one access at a time and thus it is a 
> bottleneck that protects from simultaneous read and write by parallel threads.
> 
>> On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl  wrote:
>> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  
>> wrote:
>> 
>> > My understanding of channels is they basically create exclusion by control 
>> > of the path of execution, instead of using callbacks, or they bottleneck 
>> > via the cpu thread which is the reader and writer of this shared data 
>> > anyway.
>> 
>> The language specification never mentions CPU threads. Reasoning about the 
>> language semantics in terms of CPU threads is not applicable.
>> 
>> Threads are mentioned twice in the Memory Model document. In both cases I 
>> think it's a mistake and we should s/threads/goroutines/ without loss of 
>> correctness.
>> 
>> Channel communication establish happen-before relations (see Memory Model). 
>> I see nothing equivalent directly to a critical section in that behavior, at 
>> least as far as when observed from outside. It was mentioned before that 
>> it's possible to _construct a mutex_ using a channel. I dont think that 
>> implies channel _is a mutex_ from the perspective of a program performing 
>> channel communication. The particular channel usage pattern just has the 
>> same semantics as a mutex.
>> 
>> -- 
>> -j
>> 
> 



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Jan Mercl
On Sun, Mar 17, 2019 at 8:36 PM Louki Sumirniy <
louki.sumirniy.stal...@gmail.com> wrote:

> So I am incorrect that only one goroutine can access a channel at once? I
don't understand, only one select or receive or send can happen at one
moment per channel, so that means that if one has started others can't
start.

All channel operations can be safely used by multiple, concurrently
executing goroutines. The black box inside the channel implementation can
do whatever it likes as long as it follows the specs and memory model. But
from the outside, any goroutine can safely send to, read from, close or
query length of a channel at any time without any explicit synchronization
whatsoever. By safely I mean "without creating a data race just by
executing the channel operation". The black box takes care of that.

However, the preceding _does not_ mean any combination of channel
operations performed by multiple goroutines is always sane and that it
will, for example, never deadlock. But that's a different story.

-- 

-j



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
So I am incorrect that only one goroutine can access a channel at once? I 
don't understand, only one select or receive or send can happen at one 
moment per channel, so that means that if one has started others can't 
start. 

I was sure this was the case and this seems to confirm it:

https://stackoverflow.com/a/19818448

https://play.golang.org/p/NQGO5-jCVz

In this it is using 5 competing receivers but every time the last one in 
line gets it, so there is scheduling and priority between two possible 
receivers when a channel is filled. 

This is with the range statement, of course, but I think the principle I am 
seeing here is that in all cases it's either one to one between send and 
receive, or one to many, or many from one, one side only receives the other 
only sends. If you consider each language element in the construction, and 
the 'go' to be something like a unix fork(), this means that the first 
statement inside the goroutine and the very next one in the block where the 
goroutine is started potentially can happen at the same time, but only one 
can happen at a time.

So the sequence puts the receive at the end of the goroutine, which 
presumably is cleared to run, whereas the parent where it is started is 
waiting to have a receiver on the other end.

If there is another thread in line to access that channel, at any given 
time only one can be on the other side of send or receive. That code shows 
that there is a deterministic order to this, so if I have several 
goroutines running each one using this same channel lock to cover a small 
number of mutable shared objects, only one can use the channel at once. 
The chances are equal whether one or the other gets the channel at any 
given time, but it is impossible for two to be running the accessor code at 
the same time.

Thus, this construction is a mutex because it prevents more than one thread 
accessing at a time. It makes sense to me, since it takes several 
instructions to read and write variables, copying to register or memory. If 
you put two slots in the buffer, it can run in parallel; that's the point: 
a single element in the channel means only one access at a time, and thus 
it is a bottleneck that protects from simultaneous read and write by 
parallel threads.

On Sunday, 17 March 2019 14:55:58 UTC+1, Jan Mercl wrote:
>
> On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy  > wrote:
>
> > My understanding of channels is they basically create exclusion by 
> control of the path of execution, instead of using callbacks, or they 
> bottleneck via the cpu thread which is the reader and writer of this shared 
> data anyway.
>
> The language specification never mentions CPU threads. Reasoning about the 
> language semantics in terms of CPU threads is not applicable.
>
> Threads are mentioned twice in the Memory Model document. In both cases I 
> think it's a mistake and we should s/threads/goroutines/ without loss of 
> correctness.
>
> Channel communication establish happen-before relations (see Memory 
> Model). I see nothing equivalent directly to a critical section in that 
> behavior, at least as far as when observed from outside. It was mentioned 
> before that it's possible to _construct a mutex_ using a channel. I dont 
> think that implies channel _is a mutex_ from the perspective of a program 
> performing channel communication. The particular channel usage pattern just 
> has the same semantics as a mutex.
>
> -- 
>
> -j
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Jan Mercl
On Sun, Mar 17, 2019 at 1:04 PM Louki Sumirniy <
louki.sumirniy.stal...@gmail.com> wrote:

> My understanding of channels is they basically create exclusion by
control of the path of execution, instead of using callbacks, or they
bottleneck via the cpu thread which is the reader and writer of this shared
data anyway.

The language specification never mentions CPU threads. Reasoning about the
language semantics in terms of CPU threads is not applicable.

Threads are mentioned twice in the Memory Model document. In both cases I
think it's a mistake and we should s/threads/goroutines/ without loss of
correctness.

Channel communication establishes happens-before relations (see the Memory
Model). I see nothing directly equivalent to a critical section in that
behavior, at least as observed from outside. It was mentioned before that
it's possible to _construct a mutex_ using a channel. I don't think that
implies a channel _is a mutex_ from the perspective of a program performing
channel communication. The particular channel usage pattern just has the
same semantics as a mutex.

-- 

-j



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I didn't even think of the idea of using buffered channels; I was trying 
not to lean too far towards that side of things, but it is good you mention 
it. It would be simple to just pre-allocate a buffer and trigger the print 
call only when that buffer fills up (say half a screenful, maybe 4kb, to 
allow for stupidly heavy logging outputs).

As you point out, there is some threads and buffering going on with the 
writer already.

I do think, though, that while on one hand you are correct that the load is 
not really reduced, only shifted, on the other hand the scheduler can more 
cleanly keep the threads separated. In my use case there is only one main 
thread processing a lot of crypto-heavy virtual machine stuff, and the most 
important thing is that it needs low-overhead (in-thread) diversions, not 
that the load is reduced.

As I mentioned, I wrote a logger that defers this processing until two hops 
downstream to the root. I am going to look at buffering it, and I think it 
would actually be good to try to pace it by time instead of line-printing 
speed. When heavy debug printing is in use, performance is not a concern at 
all; but being single-threaded, the less time that thread spends talking to 
other threads, the better. The loops are very short, under 100ms most of 
the time, and making actual calls to log. or fmt.Print functions adds more 
overhead to the main thread than loading a channel and dispatching it.

On my main workstation there are 11 other cores, and they can be busy doing 
things without slowing the central process, so long as they don't need to 
synchronise with it or put the loop on hold longer than necessary. Using a 
buffer and a fast ticker to fill and empty bulk sends to the output sounds 
like a sensible idea to me; since the strings are sliced, it is easy to 
design it to flow as a stream, and they only need one bottleneck as they 
are streamed into the buffer.

On Sunday, 17 March 2019 10:15:29 UTC+1, Christophe Meessen wrote:
>
> What you did is decouple production from consumption. You can speed up the 
> production go routine if the rate is irregular. But if, on average, 
> consumption is too slow, the list will grow until out of memory. 
>
> If you want to speed up consumption, you may group the strings in one big 
> string and print once. This will reduce the rate of system calls to print 
> each string individually.
>
> Something like this (warning: raw untested code)
>
>
> buf := make([]byte, 0, 1024)
> ticker := time.NewTicker(1 * time.Second)
> for {
> select {
> case <-ticker.C: // receive from the ticker's channel
> if len(buf) > 0 {
> fmt.Print(string(buf))
> buf = buf[:0] // reuse the buffer (plain assignment, not a new variable)
> }
> case m := <-buffChan:
> buf = append(buf, m...)
> buf = append(buf, '\n')
> }
> }
>
>
> Adjust the ticker period time and initial buffer size to what matches your 
> need. 
>
>



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Louki Sumirniy
I just wrote this a bit over a month ago:

https://git.parallelcoin.io/dev/pod/src/branch/master/pkg/util/cl

It was brutally simple before (only one 600-line source file with lots of 
type switches), but it now also has a registration system for setting up 
arbitrary subsystem log levels.

By the way, I could be wrong in my thinking about this, but it was central 
to the design to use interface{} channels and meaningful type names; the 
variables and inputs for, say, a printf-type function are this way only 
bundled into a struct literal. No function is called at the call site: the 
execution thread just forks, and log messages pass through directly.

I even have a type for closures to pass straight through, and they also 
don't execute until the last step of composing the output. So I have 6 core 
goroutines plus 6 for each subsystem I set up, and most of the work is done 
at the end of two passes through channels (subsystems drop the messages if 
so configured). 

I think it also has a somewhat crude shutdown handling system in it too. 

It could definitely do with more work, and the actual performance overhead 
in throughput and latency should be profiled and optimised, but being 
simple I think it hasn't got much room to improve in the general design.

The messages pass through two 'funnels', one per subsystem gated by its 
level setting, and the actual function calls to splice the strings and 
output the result are deferred until the final 6-channel main thread. I 
have mainly used them on a package level basis, but you can declare several 
in a package; they just have to have different string labels.

Anyway, I'm kinda proud of it, because my specific need is to avoid as much 
in-thread overhead as possible: during replay of the database log it only 
runs one thread, due to the dependencies between data elements. I think 
this does that well, though I can't say I have any measurements either way, 
except that I do know permanently allocated channels only have a startup 
cost, their scheduling is very low cost, and they also make it explicit to 
the compiler that the processes are concurrent and ideally should not 
overlap each other.

The print functions in fmt and log all have a per-call overhead cost, 
whereas channels have an initialisation cost and a lower context-switch 
cost. Yes, there is more memory use, but mainly because it funnels messages 
from another couple of hundred threads. So that's also my theory, and why 
this post caught my eye. My logger is unlicensed, so do whatever you like 
with it, except pretend it's not prior art.

On Saturday, 16 March 2019 17:02:06 UTC+1, Thomas S wrote:
>
> Hello,
>
> I have a software needing a lot of logs when processing.
>
> I need to :
> 1- Improve the processing time by doing non-blocking logs
> 2- Be safe with goroutines parallel logs
>
> fmt package doesn't match with (1) & (2)
> log package doesn't match with (1)
>
> *Here is my proposal. What do you think ?*
>
> *Design :*
>
> [Log functions] --channel--> [Buffering function (goroutine)] 
> --channel--> [Printing function (goroutine)]
>
>
> package glog
>
> import (
> "container/list"
> "fmt"
> )
>
> /*
> ** Public
>  */
>
> func Println(data ...interface{}) {
> bufferChan <- fmt.Sprintln(data...)
> }
>
> func Print(data ...interface{}) {
> bufferChan <- fmt.Sprint(data...)
> }
>
> func Printf(s string, data ...interface{}) {
> go func() {
> r := fmt.Sprintf(s, data...)
> bufferChan <- r
> }()
> }
>
> /*
> ** Private
>  */
>
> var bufferChan chan string
> var outChan chan string
>
> func init() {
> bufferChan = make(chan string)
> outChan = make(chan string)
> go centrale()
> go buffout()
> }
>
> func centrale() {
> var buff *list.List
> buff = list.New()
> for {
> if buff.Len() > 0 {
> select {
> case outChan <- buff.Front().Value.(string):
> buff.Remove(buff.Front())
> case tmp := <-bufferChan:
> buff.PushBack(tmp)
> }
>
> } else {
> tmp := <-bufferChan
> buff.PushBack(tmp)
> }
> }
> }
>
> func buffout() {
> for {
> data := <-outChan
> fmt.Print(data)
> }
> }
>
>
>
> It works well for now, I want to be sure to not miss anything as it's a 
> very important part of the code.
>
> Thank you for your review.
>
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I didn't mention actually excluding access by passing data through values 
either; this was just using a flag to confine accessor code to one thread, 
essentially, which has the same result as a mutex as far as its granularity 
goes.

On Sunday, 17 March 2019 13:04:26 UTC+1, Louki Sumirniy wrote:
>
> My understanding of channels is they basically create exclusion by control 
> of the path of execution, instead of using callbacks, or they bottleneck 
> via the cpu thread which is the reader and writer of this shared data 
> anyway.
>
> I think the way they work is that there are queues for read and write 
> access ordered by recency, so when a channel is loaded, the most 
> proximate (if possible the same) thread executes the other side of the 
> channel. If another thread of execution bumps into a patch involving the 
> channel, it is blocked if the channel is full and it wants to fill, or if 
> it wants to unload and the channel is empty; the main goroutine scheduler 
> is basically the gatekeeper and assigns execution priority based on 
> sequence and first availability.
>
> So, if that is correct, then the version with the load after the goroutine 
> and unload at the end of the goroutine functions to grab the thread of the 
> channel, and when it ends, gives it back, and if another is ready to use 
> it, it is already lined up and the transfer is made. So any code I wrap 
> every place inside the goroutine/unload-load pattern (including inside 
> itself) can only be run by one thread at once. If you ask me, that's better 
> and more logical than callbacks.
>
> On Sunday, 17 March 2019 11:05:35 UTC+1, Jan Mercl wrote:
>>
>>
>> On Sun, Mar 17, 2019 at 10:49 AM Louki Sumirniy  
>> wrote:
>>
>> > I just ran into my first race condition-related error and it made me 
>> wonder about how one takes advantage of the mutex properties of channels.
>>
>> I'd not say there are any such properties. However, it's easy to 
>> implement a semaphore with a channel. And certain semaphores can act as 
>> mutexes.
>>
>> > If I understand correctly, this is a simple example:
>>
>> That example illustrates IMO more of a condition/signal than a typical 
>> mutex usage pattern.
>>
>> -- 
>>
>> -j
>>
>



Re: [go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
My understanding of channels is they basically create exclusion by control 
of the path of execution, instead of using callbacks, or they bottleneck 
via the cpu thread which is the reader and writer of this shared data 
anyway.

I think the way they work is that there are queues for read and write access 
based on most recent, so when a channel is loaded, the most proximate (if 
possible the same) thread executes the other side of the channel, and then if 
another thread of execution bumps into a patch involving accessing the 
channel, if the channel is full and it wants to fill, it is blocked, if it 
wants to unload and it's empty, it is blocked, but the main goroutine 
scheduler basically is the gatekeeper and assigns execution priority based 
on sequence and first available.

So, if that is correct, then the version with the load after the goroutine 
and unload at the end of the goroutine functions to grab the thread of the 
channel, and when it ends, gives it back, and if another is ready to use 
it, it is already lined up and the transfer is made. So any code I wrap 
every place inside the goroutine/unload-load pattern (including inside 
itself) can only be run by one thread at once. If you ask me, that's better 
and more logical than callbacks.

On Sunday, 17 March 2019 11:05:35 UTC+1, Jan Mercl wrote:
>
>
> On Sun, Mar 17, 2019 at 10:49 AM Louki Sumirniy  > wrote:
>
> > I just ran into my first race condition-related error and it made me 
> wonder about how one takes advantage of the mutex properties of channels.
>
> I'd not say there are any such properties. However, it's easy to implement 
> a semaphore with a channel. And certain semaphores can act as mutexes.
>
> > If I understand correctly, this is a simple example:
>
> That example illustrates IMO more of a condition/signal than a typical 
> mutex usage pattern.
>
> -- 
>
> -j
>
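The semaphore Jan mentions is easy to demonstrate with a buffered channel: the channel's capacity is the number of permits, a send acquires one, and a receive releases one. The sketch below is illustrative and self-contained; the helper `maxConcurrent` and its parameters are invented for the example, not taken from the thread.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// maxConcurrent launches `jobs` goroutines but allows at most `limit`
// of them into the critical section at once, using a buffered channel
// as a counting semaphore. It returns the highest concurrency observed.
func maxConcurrent(limit, jobs int) int64 {
	sem := make(chan struct{}, limit) // capacity = number of permits
	var cur, max int64
	var wg sync.WaitGroup
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire: blocks once `limit` permits are out
			n := atomic.AddInt64(&cur, 1)
			for { // record the peak concurrency seen
				m := atomic.LoadInt64(&max)
				if n <= m || atomic.CompareAndSwapInt64(&max, m, n) {
					break
				}
			}
			atomic.AddInt64(&cur, -1)
			<-sem // release the permit
		}()
	}
	wg.Wait()
	return atomic.LoadInt64(&max)
}

func main() {
	// Never exceeds 3, however many goroutines are launched.
	fmt.Println("observed concurrency:", maxConcurrent(3, 20))
}
```

With `limit` set to 1 this degenerates into exactly the mutex case discussed above.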



[go-nuts] Re: Elastic synchronised logging : What do you think ?

2019-03-17 Thread Christophe Meessen
Please replace 'case ticker.C' with 'case <-ticker.C' in my previous code. 



Re: [go-nuts] Re: Persistence of value, or Safely close what you expected

2019-03-17 Thread Louki Sumirniy
I am currently dealing with a race-related bug. At least, I found a bug, and 
it coincided with a race when I enabled the race detector. The bug is a 
deadlock, and the shared shut-down flag clearly can be toggled on and off in 
the wrong order.

So my first strategy in fixing the bug is putting channel mutexes around 
the read/write operations. However, I'm not sure it's the most efficient 
solution. Basically it's two variables with related functions (a quit flag 
and a cursor), so I think I can cover them with one lock. The quit lock will 
be infrequently accessed, so I think it doesn't affect performance when it's 
actually meant to be working.

As regards closing channels, this is always about only doing it in one 
place. Unless closing the channel is itself the signal, there should only 
be one open and one close function, usually in the same scope, possibly 
even with a defer to be safe that it is closed on a panic and when the 
system it is part of can be reinitialised and restarted.

As I understand it, closing channels is mostly used as a quit signal, so 
there is usually only one sender and many receivers; you can't get a race 
inside a serial thread. This doubles the utility of the channel that is 
being used to pass data around between processes, in that you can configure 
this signal for quit or reload or pause or whatever toggle/pushbutton type 
switch you want to have, with the addition of a response that reopens the 
channel after it closes, to act like a 'my turn' signal. But more usually 
that would just be done by having several workers listening for work, and 
only one sender, or one broker, or other consensus.

There's more than a few layers to the interlock between this and other Go 
maxims: sharing a sender and receiver channel at the same time is a bad 
idea, in general. You are either funnelling concurrent processes into a 
serial process, or fanning a serial stream out into concurrent processes. 
The close is cognate to the nil in that it is not zero but it is not a 
number either, so in some situations you can use it as a message in itself.
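The close-as-quit-signal pattern described above fits in a few lines. Here is a self-contained, illustrative sketch (the helper name `stopAll` is invented for the example, not from the thread) showing that a single `close` is observed by every receiver:

```go
package main

import (
	"fmt"
	"sync"
)

// stopAll starts n workers that block on the quit channel, then closes
// it exactly once. A receive on a closed channel returns immediately,
// so the single close acts as a broadcast quit signal to all workers.
func stopAll(n int) int {
	quit := make(chan struct{})
	var wg sync.WaitGroup
	var mu sync.Mutex
	stopped := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-quit // unblocks in every worker once quit is closed
			mu.Lock()
			stopped++
			mu.Unlock()
		}()
	}
	close(quit) // one close, in one place
	wg.Wait()
	return stopped
}

func main() {
	fmt.Println(stopAll(5)) // prints 5: all five workers saw the signal
}
```

Note the close happens in the same scope that created the channel, which is what keeps the "only close in one place" rule easy to enforce.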

On Thursday, 14 March 2019 13:51:29 UTC+1, rog wrote:
>
> On Wed, 13 Mar 2019 at 23:01, Andrey Tcherepanov  > wrote:
>
>> There were couple of things I was implementing for our little in-house 
>> server app. 
>> One was (in)famous fan-out pattern for broadcasting messages to clients, 
>> and one for job queue(s) that could run fast (seconds) or long (hours, 
>> days) where items kind of coming it from multiple (same) clients.
>>
>
> I'd be interested to hear a little more about these tasks. There are many 
> possible design choices there. For example, if one client is slow to read 
> its messages, what strategy would you choose? You could discard messages, 
> amalgamate messages, slow messages to other clients, etc. The choice you 
> make can have significant impact on the design of your system.
>
> FWIW the usual convention is to avoid sharing the memory that contains the 
> channel. The same channel can be stored in two places - both refer to the 
> same underlying channel. So when the sender goes away, it can nil its own 
> channel without interfering with the receiver's channel.
>
>   cheers,
> rog.
>



[go-nuts] Is this a valid way of implementing a mutex?

2019-03-17 Thread Louki Sumirniy
I just ran into my first race condition-related error and it made me wonder 
about how one takes advantage of the mutex properties of channels.

If I understand correctly, this is a simple example:

mx := make(chan bool)

// in separate scope, presumably...

go func() {
    <-mx
    doSomething()
}()

mx <- true


So what happens here is the contents of the goroutine are waiting on 
something to come out of the channel, which is executed in parallel, or in 
sequential order, the content of the goroutine doesn't start until the 
channel is loaded, which happens at the same time, so it's impossible for 
two competing over the same lock to hold it at the same time.

If we have one thread, I think the goroutine runs first, it hits a block, 
and then the main resumes which blocks loading the other one, so if another 
thread also tries to wait on that channel it will be second (or later) in 
line waiting on that channel to pop something out.

I can see there is also another strategy where the shared memory is the 
variable passing through the channel (so it would also probably be a chan 
*type as well, distinguishing it), and the difference and which to choose 
would depend on how many locks you want to tie to how many sensitive items. 
If it's a bundle of things, like a map or array, then it might be better to 
pass the object around with its pointers, using channels as entrypoints. 
But more often it's one or two out of a set of other variables so it makes 
more sense to lock them separately, and with channels each lock would be 
just one extra (small) field.

Or maybe I am saying that backwards. If the state is big, use a few small 
locks to cover each part of the ephemeral shared state, if the state is 
small, pass it around directly through channels.

I'm really a beginner at concurrent programming, I apologise for my 
noobishness... and thanks to anyone who helps me and anyone understand it 
better :)
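As a concrete, runnable variant of the idea in this post — using a buffered channel of capacity 1, so the send itself is the lock and the receive is the unlock — here is an illustrative sketch (the helper name `withChanMutex` is invented for the example):

```go
package main

import (
	"fmt"
	"sync"
)

// withChanMutex increments a shared counter from n goroutines, guarding
// it with a buffered channel of capacity 1 used as a mutex: sending
// acquires the lock (blocks while another goroutine holds it), and
// receiving releases it.
func withChanMutex(n int) int {
	mu := make(chan struct{}, 1)
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu <- struct{}{} // lock
			counter++        // critical section: one goroutine at a time
			<-mu             // unlock
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	// Always prints 1000; without the lock, increments could be lost
	// and the race detector would flag the access to counter.
	fmt.Println(withChanMutex(1000))
}
```

The capacity-1 buffer matters: with an unbuffered channel the sender and receiver must rendezvous, whereas here a single goroutine can lock and unlock on its own, which is the usual mutex shape.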



[go-nuts] Elastic synchronised logging : What do you think ?

2019-03-17 Thread Christophe Meessen
What you did is decouple production from consumption. You can keep the 
producing goroutine fast even if the rate is irregular. But if, on average, 
consumption is too slow, the list will grow until it runs out of memory.

If you want to speed up consumption, you may group the strings into one big 
string and print it once. This reduces the rate of system calls compared to 
printing each string individually.

Something like this (warning: raw untested code)


buf := make([]byte, 0, 1024)
ticker := time.NewTicker(1 * time.Second)
for {
    select {
    case <-ticker.C:
        if len(buf) > 0 {
            fmt.Print(string(buf))
            buf = buf[:0] // reset in place; := here would shadow buf and never empty it
        }
    case m := <-buffChan:
        buf = append(buf, m...)
        buf = append(buf, '\n')
    }
}


Adjust the ticker period time and initial buffer size to what matches your 
need. 



[go-nuts] Elastic synchronised logging : What do you think ?

2019-03-17 Thread Tamás Gulácsi
Buffer your output using bufio.NewWriterSize.

Using a buffered channel gains nothing: if the producers are faster, then the 
buffers will always be full; the bottleneck is the output writer.

If the consumer is faster, then you have no problems :)

Buffering the output speeds things up as the main bottleneck used to be the 
write syscall.



Re: [go-nuts] Order of evaluation need more definition in spec

2019-03-17 Thread Michael Jones
Upon reflection I see that this is as you say. Not only expressed hazards,
but even such mysterious cases as file-system modification by external
scripts invoked by one argument and visible to another.

On Sat, Mar 16, 2019 at 10:03 PM T L  wrote:

>
>
> On Friday, March 15, 2019 at 2:25:44 PM UTC-4, zs...@pinterest.com wrote:
>>
>> Thanks for the response. I'm new here and still need to do more research
>> on existing topics.
>>
>> I understand the compiler can do more optimization with fewer order
>> restrictions, and I agree it's best not to code that way. The problem
>> is how to prevent people from doing it, since it looks very similar to the
>> usual return-multiple-values style. Is it possible to add a check to golint,
>> such that when any comma-separated expression (including return, assignment,
>> func parameters, etc.) contains a variable that is also referenced in a func
>> closure (or whose pointer is taken in a func parameter) in the same
>> expression, it shows a warning saying the execution order is unspecified
>> and may cause bugs?
>>
>
> It is possible for a linter to find some simple cases like these. But it is
> hard to find the complicated cases.
> I once wrote such a linter (not published). It only works for some simple
> cases.
> Due to my inexperience in using the go/* packages, the linter ran very
> slowly even for the simple cases.
> And I found that it is almost impossible to handle some complicated cases
> in theory.
>
>>
>>
>>
>> On Thursday, March 14, 2019 at 8:04:12 PM UTC-7, Ian Lance Taylor wrote:
>>
>>> On Thu, Mar 14, 2019 at 2:49 PM zshi via golang-nuts
>>>  wrote:
>>> >
>>> > I'm a little bit confused with result from below code:
>>> >
>>> > func change(v *int) bool {
>>> > *v = 3
>>> > return true
>>> > }
>>> >
>>> > func value(v int) int {
>>> > return v
>>> > }
>>> >
>>> > func main() {
>>> > b := 1
>>> > fmt.Println(b + 1, value(b + 1), change(&b))
>>> > }
>>> >
>>> > Output:
>>> >
>>> > 4 2 true
>>> >
>>> >
>>> > I expected 2 2 true. Then I checked spec, it said:
>>> > "all function calls, method calls, and communication operations
>>> are evaluated in lexical left-to-right order."
>>> > and:
>>> > "However, the order of those events compared to the evaluation and
>>> indexing of x and the evaluation of y is not specified."
>>> > So from that we can say both 4 2 and 2 2 are correct based on the spec,
>>> although the implementation chooses to evaluate the expression after all
>>> the function calls.
>>> >
>>> > I think it's better to write this into the spec rather than leaving it
>>> unspecified. Why? Because I already see some production code with the
>>> below pattern:
>>> > .
>>> > return b, change()
>>> > }
>>> > (the real production code uses a function closure which directly
>>> modifies b without a pointer, but I think the order issue is the same)
>>> >
>>> > I don't know if this pattern is used a lot elsewhere, since I'm still
>>> new to Go, but people writing in Go style probably like it this way.
>>> Since it depends on unspecified behavior, which risks changing in the
>>> future and is very hard to debug, I think it's better to write it into
>>> the spec, which would make sure it won't change in the future.
>>> >
>>> > Any thoughts?
>>>
>>> This kind of discussion has been had several times, e.g.,
>>> https://golang.org/issue/23735.  In general I don't think there has
>>> been a strong sentiment for precisely specifying evaluation order down
>>> to the level of exactly when the value of a variable is evaluated.
>>> The advantage of doing so is that code such as you describe becomes
>>> precisely specified.  The disadvantage is that the compiler has much
>>> less scope for optimization.
>>>
>>> In general, code such as you describe is confusing and hard to read.
>>> It's impossible to recommend that people write code that way.  And if
>>> we make that unclear code precisely specified, then the compiler will
>>> be forced to generate slower code even for clear code.
>>>
>>> So while there is an argument for fully specifying the behavior as you
>>> suggest, there is also an argument for leaving it unspecified.
>>>
>>> Ian
>>>
>> --


-- 

*Michael T. Jones   michael.jo...@gmail.com*
