On Aug 31, 2017, at 7:24 PM, Pierre Habouzit <pie...@habouzit.net> wrote:
>
> I fail at finding the initial mail and am quite late to the party of
> commenters, but there are parts I don't understand or have questions about.
>
> Scalable Runtime
>
> [...]
>
> The one problem I anticipate with GCD is that it doesn't scale well enough:
> server developers in particular will want to instantiate hundreds of
> thousands of actors in their application, at least one for every incoming
> network connection. The programming model is substantially harmed when you
> have to be afraid of creating too many actors: you have to start aggregating
> logically distinct stuff together to reduce # queues, which leads to
> complexity and loses some of the advantages of data isolation.
>
> What do you mean by this?
My understanding is that GCD doesn’t currently scale to 1M concurrent
queues/tasks.

> queues are serial/exclusive execution contexts, and if you're not modeling
> actors as being serial queues, then these two concepts are just disjoint.

AFAICT, the “one queue per actor” model is the only one that makes sense. It
doesn’t have to be FIFO, but it needs to be some sort of queue. If you allow
servicing multiple requests within the actor at a time, then you lose the
advantages of “no shared mutable state”.

> Actors are the way you present the various tasks/operations/activities that
> you schedule. These contexts are a way for the developer to explain which
> things are related in a consistent system, and give them access to state
> which is local to this context (whether it's TSD for threads, or queue
> specific data, or any similar context),

Just MHO, but I don’t think you’d need or want the concept of “actor-local
data” in the sense of TLS (e.g. __thread). All actor methods already have a
‘self’, and having something like TLS strongly encourages breaking the model.
To me, the motivation for TLS is to provide an easier way to migrate
single-threaded global variables when introducing threading into legacy code.
This is not a problem we need or want to solve, given that programmers would
be rewriting their algorithms anyway to get them into the actor model.

> IMO, Swift as a runtime should define what an execution context is, and be
> relatively oblivious of which context it is exactly as long as it presents a
> few common capabilities:
> - possibility to schedule work (async)
> - have a name
> - be an exclusion context
> - is an entity the kernel can reason about (if you want to be serious about
>   any integration on a real operating system with priority inheritance and
>   complex issues like this, which it is the OS's responsibility to handle
>   and not the language's)
> - ...
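To make the “one queue per actor, state hangs off self” point concrete, here is a minimal sketch using the strawman syntax from the concurrency proposal (the `actor` and `actor func` keywords are proposed, not shipping Swift, and `ConnectionHandler` is a hypothetical type):

```swift
// Strawman-syntax sketch: each actor instance conceptually owns a serial
// exclusion context. Messages sent to it are enqueued and processed one at
// a time, which is what preserves "no shared mutable state".
actor ConnectionHandler {
    // Per-actor state lives on `self`; there is no need for TLS-style
    // "actor-local data", because every actor method can reach this.
    var bytesReceived: Int = 0

    actor func didReceive(chunk: [UInt8]) {
        // Runs with exclusive access to this actor's state.
        bytesReceived += chunk.count
    }
}
```

A server would instantiate one such actor per connection, which is exactly the “hundreds of thousands of actors” scaling scenario from the quoted text.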
> In that sense, whether your execution context is:
> - a dispatch serial queue
> - a CFRunLoop
> - a libev/libevent/... event loop
> - your own hand-rolled event loop

Generalizing the approach is completely possible, but it is also possible to
introduce a language abstraction that sits “underneath” the high-level event
loops. That’s what I’m proposing.

> Design sketch for interprocess and distributed compute
>
> [...]
>
> One of these principles is the concept of progressive disclosure of
> complexity <https://en.wikipedia.org/wiki/Progressive_disclosure>: a Swift
> developer shouldn't have to worry about IPC or distributed compute if they
> don't care about it.
>
> While I agree with the sentiment, I don't think that anything useful can be
> done without "distributed" computation. I like the loadResourceFromTheWeb
> example, as we have something like this on our platform, which is the
> NSURLSession APIs, or the CloudKit API surface, that are about fetching some
> resource from a server (URL or CloudKit database records). However, they
> don't have a single result, they have:
>
> - progress notification callbacks
> - broken-down notifications for the results (e.g. headers first and body
>   second, or per-record for CloudKit operations)
> - various levels of error reporting.

I don’t understand the concern here. If you want low-level control like this,
it is quite easy to express. However, it is also quite common to just want to
say “load the URL with this name”, which is super easy and awesome with
async/await.

> I expect most developers will have to use such a construct, and for these,
> having a single async pivot in your code that essentially fully serializes
> your state machine on getting a full result from the previous step seems
> lacking.

Agreed, the examples are not trying to show that.
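The single-result “load a URL” case looks roughly like this in the proposed syntax (a hedged sketch: nothing here compiles in Swift 4, `loadWebResource` stands in for a hypothetical async wrapper over NSURLSession, and `render`/`showError` are hypothetical helpers):

```swift
// Strawman async/await syntax from the proposal. The point is the shape
// of the call site: one suspension point, no completion-handler pyramid.
func loadWebResource(_ url: URL) async throws -> Data

func showFeed(from url: URL) async {
    do {
        // Control suspends here and resumes with the full result.
        let data = try await loadWebResource(url)
        render(data)        // hypothetical UI helper
    } catch {
        showError(error)    // hypothetical error-reporting helper
    }
}
```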
It is perfectly fine to pass additional callbacks (or delegates, etc.) to
async methods, which would be a natural way to express this… just like the
current APIs do.

> Delivering all these notifications on the context of the initiator would be
> quite inefficient, as clearly there are in my example above two very
> different contexts, and having to hop through one to reach the other would
> make this really terrible for the operating system. I also don't understand
> how such operations would be modeled in the async/await world, to be
> completely honest.

The proposal isn’t trying to address this problem, because Swift already has
ways to do it.

-Chris
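The “additional callbacks alongside async” pattern might be sketched like this (again in the strawman syntax; every name here is hypothetical, chosen to echo the CloudKit-style multi-notification APIs from the quoted text):

```swift
// Strawman sketch: the final result comes back through await, while
// intermediate notifications (progress, per-record results) are delivered
// through closure parameters, much like today's delegate/progress APIs.
func downloadRecords(from url: URL,
                     onProgress: (Double) -> Void,
                     onRecord: (Record) -> Void) async throws -> Summary

func sync(with url: URL) async throws {
    let summary = try await downloadRecords(from: url,
                                            onProgress: { updateProgressBar($0) },  // hypothetical
                                            onRecord: { store($0) })                // hypothetical
    finishSync(summary)  // hypothetical
}
```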
_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution