Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-27 Thread Jesper Louis Andersen
On Sun, Nov 27, 2016 at 5:47 PM Michael Jones 
wrote:

> Wait… *“If any of these fails to produce a correct result…”* Really? I
> believe that we want the correct result under any valid schedule, including
> the most perverse, and that anything less means a failure of the design of
> the underlying system. Perhaps I misunderstand what you imply.
>
>
>
Or perhaps I'm just bad at writing :)

What I meant is what you wrote: we want to try random schedules (under the
assumption that these random schedules are possible interleavings). The
code must behave the same, observationally, under any of these schedules.

In particular, pick any schedule and deem it the "deterministic one". If
the code is correct, any schedule should behave as the "deterministic one"
observationally.

The problem is that most optimized schedulers have a "common schedule
order" which they prefer. The other ways to schedule only show up once the
system is under stress or load. And the reason we get another schedule is
often because some package in the system, totally unrelated to the code we
are scrutinizing, executes in the same process, but in another goroutine.
Thus, it pushes our code in ways which alter the schedule. Test cases,
which isolate packages, are not going to find anything amiss.

Also note that this has nothing to do with data races (which the race
detector finds). Usually the reason your code breaks down is some kind of
causal dependency you didn't account for logically in your code. The
failure is then relative or temporal in nature[0].
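
To make that concrete, here is a toy illustration (not from any real code
base): it is completely race-free, yet it bakes in an ordering assumption
which only holds under the schedules you happen to test.

    package main

    import "fmt"

    func main() {
        results := make(chan string, 2)

        // Two goroutines report back on the same channel. Nothing
        // synchronizes which of them sends first.
        go func() { results <- "from A" }()
        go func() { results <- "from B" }()

        first, second := <-results, <-results

        // Implicit causal assumption: A was scheduled, ran, and sent
        // before B. The race detector has nothing to complain about,
        // but an unusual schedule delivers "from B" first.
        fmt.Println(first, second)
        fmt.Println("assumed A came first:", first == "from A")
    }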

Luckily, there are many situations in which your system doesn't really have
the guarantees you think it has, but it will gracefully recover from the
odd events anyhow. In practice, we can often get away with weaker
consistency than linearizability. And this is why many concurrent programs
keep running, even though some schedules in them have never been tested or
are slightly dubious from a correctness perspective.

[0] The funniest example is when you have three planets, A, X, and B, where
X is in the middle between A and B. If each planet broadcasts a message at
exactly the same point in time, the arrival order is different on each
planet:

A sees A, X, and then B
B sees B, X, and then A
X sees X, and then A+B at the same time (any interleaving is possible if we
force serialization)

You may be tempted to write a protocol which assumes these orderings, but
planets move, so you can't. You are forced to obey the laws of physics and
they say information flow is relative. The same happens inside a CPU,
though the window is measured in nanoseconds, not years.



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-27 Thread Michael Jones
Wait… “If any of these fails to produce a correct result…” Really? I believe 
that we want the correct result under any valid schedule, including the most 
perverse, and that anything less means a failure of the design of the 
underlying system. Perhaps I misunderstand what you imply.

 



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-27 Thread Jesper Louis Andersen
On Sat, Nov 26, 2016 at 8:32 PM Olivier El Mekki 
wrote:

>
> This could yield interesting properties, because it means that, even when
> using channels, we might be able to write concurrent code and still have
> a chance of predictable and reproducible execution in a testing
> environment.
>
>
>
I think it is a bit more complex:

Having a deterministic, serial execution is good for establishing a
baseline of what a given routine is going to do. The key point of a
concurrent/parallel environment is that there should be no discrepancy
between this baseline and any possible execution order of concurrent
goroutines.

In short, you want to randomize the scheduler to run highly unlikely
schedules. If any of these fails to produce a correct result, you want to
know what order the scheduler executed the goroutines in. You also want the
system to automatically degrade the concurrent schedule into a sequential
one, so you can find the exact interleaving which is the culprit of the
problem.

The reason this is possible has to do with linearizability. An external
party, observing the sequential system, will see a trace of result-events
from the system under test. A correctness criterion which is necessary but
not sufficient is that the concurrent execution of the same program should
produce a valid linearizable trace. That is, given a trace of observations,
we must ask "is there any sequential execution which could give such a
trace?". If yes, the trace is valid. If no, we can report an error.
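
As a toy sketch of that check (deliberately crude, and not how Knossos or
any real checker works): treat a "sequential execution" as running each
goroutine to completion in some order, record the observed events, and ask
whether any order reproduces them.

    package main

    import "fmt"

    // op is one observable event: which worker produced it, and what value.
    type op struct {
        worker, value int
    }

    // sequentialTrace is the trace we would observe if the workers ran to
    // completion one after another, in the given order.
    func sequentialTrace(order []int, perWorker map[int][]int) []op {
        var t []op
        for _, w := range order {
            for _, v := range perWorker[w] {
                t = append(t, op{w, v})
            }
        }
        return t
    }

    // permutations enumerates every order in which the workers could run.
    func permutations(ws []int) [][]int {
        if len(ws) <= 1 {
            return [][]int{append([]int(nil), ws...)}
        }
        var out [][]int
        for i := range ws {
            rest := append(append([]int(nil), ws[:i]...), ws[i+1:]...)
            for _, tail := range permutations(rest) {
                out = append(out, append([]int{ws[i]}, tail...))
            }
        }
        return out
    }

    // explainable is the necessary-but-not-sufficient criterion above: is
    // there any sequential order of the workers that yields the observed
    // trace of events?
    func explainable(observed []op, perWorker map[int][]int) bool {
        var ws []int
        for w := range perWorker {
            ws = append(ws, w)
        }
        for _, order := range permutations(ws) {
            s := sequentialTrace(order, perWorker)
            if len(s) != len(observed) {
                continue
            }
            match := true
            for i := range s {
                if s[i] != observed[i] {
                    match = false
                    break
                }
            }
            if match {
                return true
            }
        }
        return false
    }

    func main() {
        perWorker := map[int][]int{1: {10, 11}, 2: {20}}
        observed := []op{{2, 20}, {1, 10}, {1, 11}} // worker 2 simply ran first
        fmt.Println(explainable(observed, perWorker)) // true
    }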

It isn't as far-fetched as one might think. For example, Kyle Kingsbury is
using these ideas in his Jepsen tool for testing the linearizability (and
other) properties of distributed databases. In his setup, you record a
trace of events from a distributed database. The database runs in VMs and
the network is forcefully cut via firewall rules. Afterwards, a tool,
Knossos, tries to find errors in the traces by looking for invalid
observations. It has proven to be a very efficient way to test the PR
documentation of distributed databases and verify the guarantees they
claim to have. Usually, vendors try to hand-wave their way around
impossibility results from distributed computing (such as the CAP theorem).

Another example is PULSE & Concuerror from the Erlang world. A program is
compiled with instrumentation. This instrumentation inserts a control-point
before each possible blocking request, so a separate scheduler can handle
the actual interleaving of the program. Now, the properties of concurrent
execution can be studied by forcing schedules in a certain order. The
crucial trick in these systems is a function

    func split(seed int) (seedL int, seedR int)

which can split the current PRNG seed into two seeds, left and right. It is
needed to simplify a schedule once a failing schedule is found. Imagine you
were able to draw the sequence-diagram: you want it to be as small as
possible since that is easier to understand.

Whenever you start up a possible subcomputation, you split the seed. Now,
if you want to throw away the subcomputation, you throw away either seedL
or seedR. This keeps the PRNG intact for the other half. If you had a
standard linear seed (rather than the tree which the above essentially
builds), you would have to skip a lot of random values in the PRNG for the
subcomputations you don't use. This allows you to ignore subcomputations
which are not relevant to the current error, throw them away, and continue
as if they never happened. In particular, you can throw away operations
from the trace by excising them, simplifying things in the process.
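
For illustration, a splittable seed could look roughly like this (a sketch
only: the signature above uses int for brevity, and real splittable PRNGs
put more care into the mixing; the mixing below is SplitMix64-style).

    package main

    import "fmt"

    // mix64 is a SplitMix64-style bit mixer: nearby inputs are mapped to
    // unrelated outputs, which is what keeps the two halves independent.
    func mix64(x uint64) uint64 {
        x ^= x >> 30
        x *= 0xbf58476d1ce4e5b9
        x ^= x >> 27
        x *= 0x94d049bb133111eb
        x ^= x >> 31
        return x
    }

    // split derives two child seeds from one parent seed. Each
    // subcomputation gets its own child, so throwing one branch away
    // leaves the random values of the other branch untouched.
    func split(seed uint64) (seedL, seedR uint64) {
        seedL = mix64(seed ^ 0x9e3779b97f4a7c15) // left branch
        seedR = mix64(seed ^ 0xd1b54a32d192ed03) // right branch, different odd constant
        return seedL, seedR
    }

    func main() {
        root := uint64(42)
        l, r := split(root)
        ll, lr := split(l) // the seeds form a tree, not a single stream
        fmt.Println(l, r, ll, lr)
    }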



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-27 Thread Dave Cheney
On Sun, Nov 27, 2016 at 10:25 PM, Jesper Louis Andersen
 wrote:
> On Fri, Nov 25, 2016 at 10:51 PM Dave Cheney  wrote:
>>
>>
>> Yes, goroutine switching occurs at known points; effectively where the
>> goroutine is blocked from proceeding: sending and receiving on channels (if
>> they are full/empty respectively), trying to acquire a locked mutex, and
>> performing IO. These operations happen frequently in a program that
>> interacts with others so things more or less work out.
>>
> I was of the impression Go checks on function calls nowadays. Since you have
> to check the stack anyway, you can always piggyback on that routine if you
> want the system to schedule out the goroutine.

Yes, that is correct. The runtime has its own goroutine called sysmon
which waits up to 10ms and then sets the stack guard of the longest
running goroutine to a negative number. The next time the goroutine
executes a function call it will trap into the stack growth slow path
and be preempted.

>
> Only checking on points-of-communication could lead to worse latency because
> a non-communicating goroutine is then able to tie up a processing core
> indefinitely, and delay garbage collection.

Yup, the problem with a spinning goroutine blocking the STW safe point
has been known for several releases. The mechanism above appears to
limit the worst case to 20ms in my testing.
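
A tiny demonstration of that limitation (a sketch; this reflects the
runtimes discussed in this thread, while later Go releases that gained
asynchronous preemption will print the final line promptly):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        runtime.GOMAXPROCS(1) // a single P, so preemption is the only way back in

        go func() {
            // A tight loop with no function calls: with purely cooperative
            // preemption there is no safe point here, so sysmon's request
            // is never acted upon and this goroutine hogs the P.
            for i := 0; ; i++ {
                _ = i
            }
        }()

        time.Sleep(10 * time.Millisecond) // let the spinner take over the P
        fmt.Println("with cooperative preemption only, this may never print")
    }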

> Not checking inside for-loops puts the onus on the programmer to break such
> loops via function calls, but this is a far easier thing to handle from a
> programmer perspective when needed, and when not needed it could be argued
> it needlessly slows down the program.

David Chase has added a check on the loop edge as an experiment; it
comes at about a 15% cost.


>



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-27 Thread Jesper Louis Andersen
On Fri, Nov 25, 2016 at 10:51 PM Dave Cheney  wrote:

>
> Yes, goroutine switching occurs at known points; effectively where the
> goroutine is blocked from proceeding: sending and receiving on channels (if
> they are full/empty respectively), trying to acquire a locked mutex, and
> performing IO. These operations happen frequently in a program that
> interacts with others so things more or less work out.

I was of the impression Go checks on function calls nowadays. Since you
have to check the stack anyway, you can always piggyback on that routine if
you want the system to schedule out the goroutine.

Only checking on points-of-communication could lead to worse latency
because a non-communicating goroutine is then able to tie up a processing
core indefinitely, and delay garbage collection.

Not checking inside for-loops puts the onus on the programmer to break such
loops via function calls, but this is a far easier thing to handle from a
programmer perspective when needed, and when not needed it could be argued
it needlessly slows down the program.



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-26 Thread Jan Mercl
On Sat, Nov 26, 2016 at 8:32 PM Olivier El Mekki 
wrote:

> Am I correct to assume from previous messages that if I use GOMAXPROCS=1
> and don't use channels, my functions will run from start to end in the
> order they've been piled up using the `go` keyword?

No, sorry, the language specification does not say the scheduler provides
any such guarantee.
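
If a particular order is required, it has to be encoded explicitly with
synchronization rather than assumed from the scheduler; one possible
sketch:

    package main

    import "fmt"

    // Run three goroutines in a guaranteed order by chaining channels:
    // each goroutine waits for the previous one to signal completion.
    func main() {
        prev := make(chan struct{})
        close(prev) // let the first goroutine start right away

        for i := 1; i <= 3; i++ {
            i := i
            wait, next := prev, make(chan struct{})
            prev = next
            go func() {
                <-wait // block until the previous goroutine is done
                fmt.Println("goroutine", i)
                close(next)
            }()
        }
        <-prev // wait for the last goroutine in the chain
    }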

-- 

-j



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-26 Thread Olivier El Mekki
Am I correct to assume from previous messages that if I use GOMAXPROCS=1
and don't use channels, my functions will run from start to end in the
order they've been piled up using the `go` keyword?

This could yield interesting properties, because it means that, even when
using channels, we might be able to write concurrent code and still have a
chance of predictable and reproducible execution in a testing environment.



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-25 Thread Dave Cheney


On Saturday, 26 November 2016 08:00:24 UTC+11, Roger Alsing wrote:
>
> So it works very similar to normal OS thread context switching, but more 
> explicit?
>

Yes, goroutine switching occurs at known points; effectively where the
goroutine is blocked from proceeding: sending and receiving on channels (if
they are full/empty respectively), trying to acquire a locked mutex, and
performing IO. These operations happen frequently in a program that
interacts with others so things more or less work out.
 

> The code will push all local state to the go routine stack, pop the state 
> for the next go routine and change the stack pointer?
>

The cute thing is that the Go function call convention is caller-save, so
when a goroutine enters the scheduler there is no state to be saved; it's
already saved on the stack by the normal function call convention. As Ian
said above, the scheduler just finds the SP and PC for a new goroutine and
swaps them for the current one. The new goroutine awakes at the bottom of
the scheduler and returns to its original control flow.
 

>
> And this is cheaper than context switching as threads have their own 1 or 
> 4 mb stack allocated while go routines have a linked list for its stack 
> (afaik?) ?
>

Goroutines used to have a linked list of stack segments; we dropped that in
Go 1.3 (might have been 1.2, I'd have to check). Now stacks grow and shrink
with a cheap check in the preamble of each function: if there isn't enough
stack to run the function (the requirement is known precisely at compile
time), the function traps into a slow path that doubles the stack
allocation by copying the whole stack to a new area, then restarts the
function. Stack shrinking is handled via the GC cycle when a goroutine has
a large amount of unused stack.
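
A small illustrative program (mine, not from the talk) that leans on this:
deep recursion inside a goroutine just works, because the stack is copied
to a larger area every time the preamble check trips.

    package main

    import "fmt"

    // Each frame carries a local array, so deep recursion needs far more
    // than the initial few kilobytes of stack; the runtime grows the
    // stack on demand by copying it.
    func depth(n int) int {
        var pad [128]byte
        pad[0] = byte(n)
        if n == 0 {
            return int(pad[0])
        }
        return depth(n-1) + int(pad[0])
    }

    func main() {
        done := make(chan int)
        go func() {
            done <- depth(100000) // several MB of stack, grown transparently
        }()
        fmt.Println("result:", <-done)
    }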
 

> is that the major difference?
>

I think so. It makes goroutines cheap to create (only a few kB of initial
stack), cheap to use (no trip through kernel space to save a lot of state
and trash TLBs and caches), and cheap to have many of them (in operation,
hundreds of thousands of goroutines in a busy server are the norm). This is
also pretty much the case against threads.

Here is a presentation I gave at OSCON last year about all these things:
https://dave.cheney.net/2015/08/08/performance-without-the-event-loop
 



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-25 Thread Matt Ho
Goroutines are conceptually very similar to threads, but much lighter
weight. They are far cheaper than system threads to run. It's not uncommon
for Go apps to have 50k or more goroutines running.
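
For example (a rough sketch; exact numbers vary with Go version and
platform), parking 50,000 goroutines is unremarkable:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        const n = 50000
        release := make(chan struct{})
        var wg sync.WaitGroup

        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                <-release // park cheaply until released
            }()
        }

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("%d goroutines, %d KiB of goroutine stacks in use\n",
            runtime.NumGoroutine(), m.StackInuse/1024)

        close(release)
        wg.Wait()
    }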



Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-25 Thread Roger Alsing
So it works very similar to normal OS thread context switching, but more 
explicit?
The code will push all local state to the go routine stack, pop the state 
for the next go routine and change the stack pointer?

And this is cheaper than context switching as threads have their own 1 or 4 
mb stack allocated while go routines have a linked list for its stack 
(afaik?) ?
is that the major difference?




Re: [go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-25 Thread Ian Lance Taylor
On Fri, Nov 25, 2016 at 2:18 AM,   wrote:
> I have seen a lot of confusion on how go routines actually work internally.
> Will the function execution on top of the go routine be rewritten to a
> state machine with continuations in the same way as the C# async await does?

I do not know how C# async await works.  That said, goroutines are not
rewritten to state machines.


> If I have only 1 OS thread and 1000 go routines, obviously there must be some 
> sort of "magic" that makes it possible to multiplex these on top of that 
> thread.
> How do the go routines store state so that they can continue from the last 
> execution point when they resume execution?

A goroutine is basically a small stack.  All the goroutine state is
saved on the stack.  To switch to a different goroutine, the goroutine
scheduler changes the stack pointer.  On the new goroutine, it looks
like the function call into the scheduler simply returns, and the
goroutine continues running.

This is a standard programming technique known as a "coroutine".  For
Go we used the name "goroutine" because 1) it is cute; 2) goroutines
have functionality not available in most coroutine libraries, such as
multiplexing across many OS threads and automatic use of epoll for
network connections.
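
A minimal sketch of exactly that situation, assuming GOMAXPROCS is set to
1 so all goroutines share one thread of Go execution: each time a
goroutine blocks on the channel, the scheduler swaps in another one, and
all of them make progress.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(1) // one thread running Go code

        const workers = 1000
        jobs := make(chan int)
        var (
            mu  sync.Mutex
            sum int
            wg  sync.WaitGroup
        )

        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs { // blocking here yields to the scheduler
                    mu.Lock()
                    sum += j
                    mu.Unlock()
                }
            }()
        }

        for j := 1; j <= 10000; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
        fmt.Println("sum:", sum) // 50005000, computed cooperatively
    }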


> Is it safe to say that Go gives you the exact same support as the C# (and 
> others) async await while still writing code that looks sequential?

I don't know C#. Goroutines let you write code that looks sequential
because it actually is sequential.

Ian



[go-nuts] Do Go Routines use continuations to allow yielding?

2016-11-25 Thread rogeralsing
I have seen a lot of confusion on how go routines actually work internally.
Will the function execution on top of the go routine be rewritten to a
state machine with continuations in the same way as the C# async await does?

If I have only 1 OS thread and 1000 go routines, obviously there must be some
sort of "magic" that makes it possible to multiplex these on top of that thread.
How do the go routines store state so that they can continue from the last 
execution point when they resume execution?
Is it safe to say that Go gives you the exact same support as the C# (and 
others) async await while still writing code that looks sequential?
