
> On Sep 4, 2017, at 9:10 AM, Chris Lattner via swift-evolution 
> <swift-evolution@swift.org> wrote:
> 
> 
>> On Sep 4, 2017, at 9:05 AM, Jean-Daniel <mail...@xenonium.com 
>> <mailto:mail...@xenonium.com>> wrote:
>> 
>>>> Sometimes it’d probably make sense (or even be required) to fix this to a 
>>>> certain queue (in the thread(-pool?) sense), but at other times it may just 
>>>> make sense to execute the messages in-place by the sender if they don’t 
>>>> block, so no context switch is incurred.
>>> 
>>> Do you mean kernel context switch?  With well behaved actors, the runtime 
>>> should be able to run work items from many different queues on the same 
>>> kernel thread.  The “queue switch cost” is designed to be very very low.  
>>> The key thing is that the runtime needs to know when work on a queue gets 
> blocked so the kernel thread can move on to servicing some other queue’s 
> work.
>> 
>> My understanding is that a kernel thread can’t move on to servicing a different 
>> queue while a block is executing on it. The runtime already knows when a 
>> queue is blocked, and the only way it has to mitigate the problem is to 
>> spawn another kernel thread to serve the other queues. This is what causes 
>> the kernel thread explosion.
> 
> I’m not sure what you mean by “executing on it”.  A work item that currently 
> has a kernel thread can be doing one of two things: “executing work” (like 
> number crunching) or “being blocked in the kernel on something that GCD 
> doesn’t know about”. 
> 
> However, the whole point is that work items shouldn’t do this: as you say it 
> causes thread explosions.  It is better for them to yield control back to 
> GCD, which allows GCD to use the kernel thread for other queues, even though 
> the original *queue* is blocked.
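
(To make the thread explosion being discussed above concrete, here is a minimal 
GCD-only sketch; the 64 queues and the one-second sleep are arbitrary numbers 
picked for the example:)

import Dispatch
import Foundation

// Each work item blocks its kernel thread in a way GCD cannot see through,
// so GCD has to bring up yet another worker thread to keep servicing the
// remaining queues: that is the thread explosion.
let queues = (0..<64).map { DispatchQueue(label: "queue-\($0)") }
let group = DispatchGroup()
for queue in queues {
    queue.async(group: group) {
        Thread.sleep(forTimeInterval: 1)   // blocked in the kernel, the thread is stuck
    }
}
group.wait()   // by this point GCD may have spawned dozens of worker threads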


You're forgetting two things:

First off, when the work item stops doing work and gives up control, the kernel 
thread doesn't become available instantaneously. If you want that thread to be 
reusable for executing some asynchronously awaited work that the actor is 
handling, then you have to make sure to defer scheduling that work until the 
thread is actually in a reusable state.

Second, there may be other work already enqueued in this context, in which 
case, even if the current work item yields, whatever it's waiting on will have 
to run on a new thread because the current context is still in use.
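
As a very rough GCD-only illustration of that second point (it cannot capture 
what the proposed runtime would do, and the queue names are made up), the 
awaited work below cannot reuse the context's thread because the context still 
has another item queued behind the current one:

import Dispatch
import Foundation

let context = DispatchQueue(label: "actor.context")   // stands in for the actor's execution context
let elsewhere = DispatchQueue(label: "someCompute")   // wherever the awaited work actually runs

context.async {
    print("item 1 runs on", Thread.current)
    elsewhere.async {
        // The work that item 1 is conceptually awaiting. The context still has
        // item 2 queued behind item 1, so this cannot simply reuse the
        // context's thread; it needs another one.
        print("awaited work runs on", Thread.current)
    }
}
context.async {
    print("item 2 keeps the context busy on", Thread.current)
}
Thread.sleep(forTimeInterval: 1)   // crude wait so the asynchronous work gets to run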

The first issue is something we can optimize with tons of techniques (despite 
GCD not doing it), so let's not rathole into a discussion on it.
The second one is not something we can “fix”. There will be cases when the 
correct thing to do is to linearize, and cases when it's not. And you can't 
know upfront what the right decision is.



Something else I realized is that this code is fundamentally broken in Swift:

actor func foo()
{
    let lock = NSLock()
    lock.lock()

    let compute = await someCompute()  // <--- this really breaks `foo` into two
                                       // pieces of code that can execute on two
                                       // different physical threads.
    lock.unlock()
}


The reason why it is broken is that mutexes (whether it's NSLock, 
pthread_mutex, or os_unfair_lock) have to be unlocked from the same thread that 
took them. The await right in the middle here means that we can't guarantee it.
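
One safe shape for this, still written in the proposal's hypothetical syntax 
(and with a made-up synchronous use(_:) helper), keeps the critical section 
entirely on one side of the suspension point:

actor func foo()
{
    let compute = await someCompute()   // suspend first, while no lock is held

    let lock = NSLock()
    lock.lock()
    use(compute)                        // purely synchronous work under the lock
    lock.unlock()                       // same thread that called lock(), so this is fine
}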

There are numerous primitives that can't be used across an await call in this 
way:
- things that rely on the identity of the calling context (such as locks, 
mutexes, ...)
- anything that attaches data to the context (TSDs)

The things in the first category probably have to be typed in a way that makes 
using them across an async or await call a compile-time error.
The things in the second category are actor-unsafe and need to move to other 
ways of achieving the same thing.
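
A sketch of the second category, again with the hypothetical await syntax (the 
"request-id" key is just an example):

actor func bar()
{
    // Attach some data to the current thread, TSD-style.
    Thread.current.threadDictionary["request-id"] = 42

    let result = await someCompute()    // may resume on a different physical thread

    // On the resuming thread this lookup can come back nil: the value stayed
    // behind on the original thread, which may now be servicing unrelated work.
    let id = Thread.current.threadDictionary["request-id"]
    print(String(describing: id), result)
}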



-Pierre