Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Jonathan Hull via swift-evolution
Thanks for this Pierre!

I think one of the main reasons I am proposing this is that this stuff is so 
easy to get wrong.  I know I have gotten it wrong before in subtle ways which 
were really difficult to debug.  By allowing the programmer to declare their 
intent, but not the implementation, we hopefully allow the right thing to 
happen behind the scenes (even if the programmer doesn’t know what that right 
thing is).  In some cases, that right thing™ might even be to just do things 
serially behind the scenes to avoid the overhead of synchronizing.

Thanks,
Jon


> On Sep 4, 2017, at 1:48 PM, Pierre Habouzit via swift-evolution 
>  wrote:
> 
> This doesn't work for priority tracking purposes, and is bad for locking 
> domains too.
> 
> What you really want here is:
> 
> let groupLike = Dispatch.SomethingThatLooksLikeAGroupButDoesTracking()
> 
> myNetworkingQueue().async(group: groupLike) {
> // loadWebResource
> }
> myNetworkingQueue().async(group: groupLike) {
> // loadWebResource
> }
> groupLike.notify(myImageProcessingQueue()) {
> // decodeImage
> }
> 
> The two main differences from what you explained are:
> 
> 1) `groupLike` would definitely be the underlying thing that tracks 
> dependencies which is required for a higher level Future-like construct 
> (which async/await doesn't have to solve, provided that it captures enough 
> context for the sake of such a groupLike object).
> 
> `groupLike` would likely *NOT* be an object developers would manipulate 
> directly but rather the underlying mechanism.
> 
> 2) the loadWebResource is done from the same serial context, because 
> networking (if your library is sane) already uses an asynchronous interface 
> and is very rarely a CPU-bound problem, so parallelizing it is not worth it: 
> the synchronization cost will dominate. For context, the beginning of our 
> WWDC talk pitched this measured performance win in a real-life scenario:
> 
> 1.3x faster after combining queue hierarchies
> 
> 
> This is covered here:
> https://developer.apple.com/videos/play/wwdc2017/706/?time=138
> https://developer.apple.com/videos/play/wwdc2017/706/?time=1500 and onward
> 
> As it happens, the 30%+ performance win discussed here actually came from 
> how some subsystems were using our networking stack: code that was 
> essentially doing what you wrote was recombined into what I just wrote 
> above, by using the same exclusion context for all networking.
> 
> If Swift async/await leads to people writing things equivalent to using the 
> global queue the way you suggest, we failed from a system performance 
> perspective.
> 
> -Pierre
> 
>> On Sep 4, 2017, at 12:55 PM, Wallacy via swift-evolution 
>> > wrote:
>> 
>> Yes, maybe in this way... Or using dispatch_group..
>> 
>> dispatch_group_t group = dispatch_group_create();
>> 
>> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
>> // loadWebResource
>> });
>> 
>> 
>> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
>> // loadWebResource
>> });
>> 
>> dispatch_group_notify(group,dispatch_get_global_queue(0, 0), ^ {
>> // decodeImage ... etc...
>> });
>> 
>> This can be made using different strategies; the compiler will select the 
>> best fit for every case. Different runtimes also have different "best" 
>> strategies. No need to use an intermediary type.
>> 
>> On Mon, Sep 4, 2017 at 14:53, Michel Fortin wrote:
>> 
>> > On Sep 4, 2017 at 10:01, Wallacy via swift-evolution wrote:
>> >
>> > func processImageData1a() async -> Image {
>> >   let dataResource  = async loadWebResource("dataprofile.txt")
>> >   let imageResource = async loadWebResource("imagedata.dat")
>> >
>> >   // ... other stuff can go here to cover load latency...
>> >
>> >   let imageTmp    = await decodeImage(dataResource, imageResource) // compiler error if await is not present.
>> >   let imageResult = await dewarpAndCleanupImage(imageTmp)
>> >   return imageResult
>> > }
>> >
>> >
>> > If this (or something like that) is not implemented, people will create 
>> > several versions to solve the same problem, so that later (Swift 6?) will 
>> > be solved (because people want this), and we will live with several bad 
>> > codes to maintain.
>> 
>> Just to be sure of what you are proposing, am I right to assume this would 
>> be compiled down to something like this?
>> 
>> func processImageData1a(completion: (Image) -> ()) {
>>   var dataResource: Resource? = nil
>>   var imageResource: Resource? = nil
>>   var finishedBody = false
>> 
>>   func continuation() {
>> // only continue once 

Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Michel Fortin via swift-evolution
The first implementation I proposed before Wallacy suggested using 
dispatch_group_t does not involve any dispatching. It's possible that 
loadWebResource would dispatch in the background, but dispatching is not 
necessary either.

For instance, loadWebResource could just be a wrapper for CFNetwork or 
NSURLRequest that handles its callbacks on the current queue/runloop when 
network events occur and then calls its completion block once it's done. 
loadWebResource does not have to do any dispatching itself in this case; it's 
all handled by the frameworks.

And if a resource is already cached in memory, loadWebResource could call the 
completion block immediately with the data before returning; nothing 
asynchronous needs to happen in that case.

The generated code I proposed for this would work very well for these cases 
because it does not dispatch the calls to loadWebResource itself. No 
dispatching actually occurs in the case the resource is already available in 
the cache and everything runs smoothly without a single context switch. 
loadWebResource would only resort to dispatch if/when necessary.
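As a rough sketch of the idea (the cache, names, and URL scheme here are all invented for illustration, not part of any proposal), such a cache-first loadWebResource could look like this:

```swift
import Foundation

// Hypothetical sketch only: a loadWebResource that completes synchronously
// on a cache hit (no dispatch, no context switch) and otherwise lets the
// networking framework deliver the callback on its own queue.
var resourceCache = [String: Data]()

func loadWebResource(_ name: String, completion: @escaping (Data?) -> Void) {
    if let cached = resourceCache[name] {
        // Cache hit: call the completion block immediately, before returning.
        completion(cached)
        return
    }
    // Cache miss: the framework handles the async work and the callback.
    let url = URL(string: "https://example.com/\(name)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        if let data = data { resourceCache[name] = data }
        completion(data)
    }.resume()
}
```

On the cached path, no dispatching occurs at all, which is exactly the property the generated continuation code above preserves.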

I'm just not sure how well that would work for priority tracking in this case 
since we are crossing function boundaries.


> On Sep 4, 2017 at 16:48, Pierre Habouzit wrote:
> 
> This doesn't work for priority tracking purposes, and is bad for locking 
> domains too.
> 
> What you really want here is:
> 
> let groupLike = Dispatch.SomethingThatLooksLikeAGroupButDoesTracking()
> 
> myNetworkingQueue().async(group: groupLike) {
> // loadWebResource
> }
> myNetworkingQueue().async(group: groupLike) {
> // loadWebResource
> }
> groupLike.notify(myImageProcessingQueue()) {
> // decodeImage
> }
> 
> The two main differences from what you explained are:
> 
> 1) `groupLike` would definitely be the underlying thing that tracks 
> dependencies which is required for a higher level Future-like construct 
> (which async/await doesn't have to solve, provided that it captures enough 
> context for the sake of such a groupLike object).
> 
> `groupLike` would likely *NOT* be an object developers would manipulate 
> directly but rather the underlying mechanism.
> 
> 2) the loadWebResource is done from the same serial context, because 
> networking (if your library is sane) already uses an asynchronous interface 
> and is very rarely a CPU-bound problem, so parallelizing it is not worth it: 
> the synchronization cost will dominate. For context, the beginning of our 
> WWDC talk pitched this measured performance win in a real-life scenario:
> 
> 1.3x faster after combining queue hierarchies
> 
> 
> This is covered here:
> https://developer.apple.com/videos/play/wwdc2017/706/?time=138
> https://developer.apple.com/videos/play/wwdc2017/706/?time=1500 and onward
> 
> As it happens, the 30%+ performance win discussed here actually came from 
> how some subsystems were using our networking stack: code that was 
> essentially doing what you wrote was recombined into what I just wrote 
> above, by using the same exclusion context for all networking.
> 
> If Swift async/await leads to people writing things equivalent to using the 
> global queue the way you suggest, we failed from a system performance 
> perspective.
> 
> -Pierre
> 
>> On Sep 4, 2017, at 12:55 PM, Wallacy via swift-evolution 
>> > wrote:
>> 
>> Yes, maybe in this way... Or using dispatch_group..
>> 
>> dispatch_group_t group = dispatch_group_create();
>> 
>> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
>> // loadWebResource
>> });
>> 
>> 
>> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
>> // loadWebResource
>> });
>> 
>> dispatch_group_notify(group,dispatch_get_global_queue(0, 0), ^ {
>> // decodeImage ... etc...
>> });
>> 
>> This can be made using different strategies; the compiler will select the 
>> best fit for every case. Different runtimes also have different "best" 
>> strategies. No need to use an intermediary type.
>> 
>> On Mon, Sep 4, 2017 at 14:53, Michel Fortin wrote:
>> 
>> > On Sep 4, 2017 at 10:01, Wallacy via swift-evolution wrote:
>> >
>> > func processImageData1a() async -> Image {
>> >   let dataResource  = async loadWebResource("dataprofile.txt")
>> >   let imageResource = async loadWebResource("imagedata.dat")
>> >
>> >   // ... other stuff can go here to cover load latency...
>> >
>> >   let imageTmp    = await decodeImage(dataResource, imageResource) // compiler error if await is not present.
>> >   let imageResult = await dewarpAndCleanupImage(imageTmp)
>> >   return 

Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread Christopher Kornher via swift-evolution
Apologies for rehashing this, but we seem to be going down that path… I am in 
the minority on this issue and held back my opinions because I thought they 
would have served as simply a distraction, and I was extremely busy at the 
time. That may have been a mistake on my part, because I am raising these 
issues now after the fact.

I am airing them now for two reasons:
1) To ensure that at least the agreed upon compromise is implemented.
2) To hopefully improve the evolution process, and help ensure that similar 
proposals are given the scrutiny that they deserve.

I have noticed a pattern in software and other projects over the years: the 
most catastrophic failures and most expensive rework have been due to flawed, 
or at least incomplete, basic assumptions. New postulates / basic assumptions 
should be subjected to rigorous scrutiny. I don’t think they were in this case.

I am speaking up now because there is a proposal out there to follow what I 
consider to be a flawed basic assumption to its logical conclusion. That 
conclusion seems quite reasonable if you accept the basic assumption, which 
of course I don’t. 

Please don’t take this as a personal attack on those on the other side. This is 
a philosophical disagreement with no “right” and “wrong” answer. I don’t 
believe that this proposal is terrible. In fact, the agreed-upon compromise 
does improve the construction and matching of enum values and leaves only edge 
cases that I hope to address in a future proposal — specifically matching is 
made more difficult in some cases of name overloading. 

The history of the process as I saw it:
There was a widely perceived problem with enums involving what could be 
described as “legacy destructuring”, which could lead to confusing code and 
hard-to-discover transposition errors.

A solution was proposed that was based upon an overarching premise: 
that enums should be modeled as much as possible after function calls to 
simplify the language. This led to the original proposal always requiring 
labels (as function calls do, and closures don’t, but that is a discussion for 
another time).

I believe the idea of using function calls as the primary model for enums is 
flawed at its core. The problem is that enums and function calls resemble each 
other in Swift only because some enums can have associated values. 

The purpose of enums is to be matched. Enums that are never matched in some way 
have no purpose. Function calls must always be “matched” (resolved) 
unambiguously so that proper code can be executed. No such requirement exists 
for enums. In fact the language includes rich functionality for matching 
multiple cases and values with a single “case” (predicate). This is not a flaw, 
it improves the expressive power of the language by allowing complex matching 
logic to be expressed tersely and clearly.
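For instance, the rich matching functionality referred to above lets a single `case` predicate cover several enum cases and whole ranges of values at once (the enum and cases below are invented for illustration):

```swift
enum Response {
    case success(code: Int)
    case failure(code: Int, message: String)
}

func describe(_ response: Response) -> String {
    switch response {
    case .success(code: 200):
        return "ok"
    // A `where` clause matches a whole range of associated values.
    case .success(let code) where (201...299).contains(code):
        return "ok-ish (\(code))"
    // One case pattern matching two different enum cases with one binding.
    case .success(let code), .failure(let code, _):
        return "status \(code)"
    }
}
```

No function call can be "matched" this loosely; resolution of a call must be unambiguous, which is exactly the asymmetry being argued here.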

So, since the purpose of enums is to be matched, any modification to this 
accepted proposal that makes that more difficult or cluttered should be 
rejected.


> On Sep 4, 2017, at 9:52 AM, T.J. Usiyan via swift-evolution 
>  wrote:
> 
> While re-litigating has its issues, I am for simplifying the rule and always 
> requiring the labels if they exist. This is similar to the change around 
> external labels. Yes, it is slightly less convenient, but it removes a 
> difficult-to-motivate caveat for beginners.
> 
> On Sun, Sep 3, 2017 at 4:35 PM, Xiaodi Wu via swift-evolution 
> > wrote:
> The desired behavior was the major topic of controversy during review; I’m 
> wary of revisiting this topic as we are essentially relitigating the proposal.
> 
> To start off, the premise, if I recall, going into review was that the author 
> **rejected** the notion that pattern matching should mirror creation. I 
> happen to agree with you on this point, but it was not the prevailing 
> argument. Fortunately, we do not need to settle this to arrive at some 
> clarity for the issues at hand.
> 
> From a practical standpoint, a requirement for labels in all cases would be 
> much more source-breaking, whereas the proposal as it stands would allow 
> currently omitted labels to continue being valid. Moreover, and I think this 
> is a worthy consideration, one argument for permitting the omission of labels 
> during pattern matching is to encourage API designers to use labels to 
> clarify initialization without forcing its use by API consumers during every 
> pattern matching operation.
> 
> In any case, the conclusion reached is precedented in the world of functions:
> 
> func g(a: Int, b: Int) { ... }
> let f = g
> f(1, 2)
> 
> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution 
> > wrote:
> Hello Swift Evolution,
> 
> I took up the cause of implementing SE-0155 
> 

Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Pierre Habouzit via swift-evolution
This doesn't work for priority tracking purposes, and is bad for locking 
domains too.

What you really want here is:

let groupLike = Dispatch.SomethingThatLooksLikeAGroupButDoesTracking()

myNetworkingQueue().async(group: groupLike) {
// loadWebResource
}
myNetworkingQueue().async(group: groupLike) {
// loadWebResource
}
groupLike.notify(myImageProcessingQueue()) {
// decodeImage
}

The two main differences from what you explained are:

1) `groupLike` would definitely be the underlying thing that tracks 
dependencies which is required for a higher level Future-like construct (which 
async/await doesn't have to solve, provided that it captures enough context for 
the sake of such a groupLike object).

`groupLike` would likely *NOT* be an object developers would manipulate 
directly but rather the underlying mechanism.

2) the loadWebResource is done from the same serial context, because networking 
(if your library is sane) already uses an asynchronous interface and is very 
rarely a CPU-bound problem, so parallelizing it is not worth it: the 
synchronization cost will dominate. For context, the beginning of our WWDC 
talk pitched this measured performance win in a real-life scenario:

1.3x faster after combining queue hierarchies


This is covered here:
https://developer.apple.com/videos/play/wwdc2017/706/?time=138
https://developer.apple.com/videos/play/wwdc2017/706/?time=1500 and onward

As it happens, the 30%+ performance win discussed here actually came from how 
some subsystems were using our networking stack: code that was essentially 
doing what you wrote was recombined into what I just wrote above, by using the 
same exclusion context for all networking.
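In today's Swift Dispatch API, the recombined shape described above could be sketched as follows (queue labels are invented, and loadWebResource is left as a stub, so this is illustrative only):

```swift
import Dispatch

// All networking funnels through one serial queue (one exclusion context),
// and a DispatchGroup tracks when the batch is done.
let networkingQueue = DispatchQueue(label: "com.example.networking") // serial
let imageProcessingQueue = DispatchQueue(label: "com.example.image-processing")
let group = DispatchGroup()

func loadWebResource(_ name: String) {
    // Asynchronous I/O stub; rarely CPU bound, so no parallelism needed here.
}

networkingQueue.async(group: group) { loadWebResource("dataprofile.txt") }
networkingQueue.async(group: group) { loadWebResource("imagedata.dat") }

group.notify(queue: imageProcessingQueue) {
    // decodeImage(...) runs only once both loads have drained from the queue.
}
```

Because both loads share the serial networking queue, there is no synchronization between them to pay for, which is the point of the measured win cited above.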

If Swift async/await leads to people writing things equivalent to using the 
global queue the way you suggest, we failed from a system performance 
perspective.

-Pierre

> On Sep 4, 2017, at 12:55 PM, Wallacy via swift-evolution 
>  wrote:
> 
> Yes, maybe in this way... Or using dispatch_group..
> 
> dispatch_group_t group = dispatch_group_create();
> 
> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
> // loadWebResource
> });
> 
> 
> dispatch_group_async(group,dispatch_get_global_queue(0, 0), ^ {
> // loadWebResource
> });
> 
> dispatch_group_notify(group,dispatch_get_global_queue(0, 0), ^ {
> // decodeImage ... etc...
> });
> 
> This can be made using different strategies; the compiler will select the 
> best fit for every case. Different runtimes also have different "best" 
> strategies. No need to use an intermediary type.
> 
> On Mon, Sep 4, 2017 at 14:53, Michel Fortin wrote:
> 
> > On Sep 4, 2017 at 10:01, Wallacy via swift-evolution wrote:
> >
> > func processImageData1a() async -> Image {
> >   let dataResource  = async loadWebResource("dataprofile.txt")
> >   let imageResource = async loadWebResource("imagedata.dat")
> >
> >   // ... other stuff can go here to cover load latency...
> >
> >   let imageTmp    = await decodeImage(dataResource, imageResource) // compiler error if await is not present.
> >   let imageResult = await dewarpAndCleanupImage(imageTmp)
> >   return imageResult
> > }
> >
> >
> > If this (or something like that) is not implemented, people will create 
> > several versions to solve the same problem, so that later (Swift 6?) will 
> > be solved (because people want this), and we will live with several bad 
> > codes to maintain.
> 
> Just to be sure of what you are proposing, am I right to assume this would be 
> compiled down to something like this?
> 
> func processImageData1a(completion: (Image) -> ()) {
>   var dataResource: Resource? = nil
>   var imageResource: Resource? = nil
>   var finishedBody = false
> 
>   func continuation() {
> // only continue once everything is ready
> guard finishedBody else { return }
> guard let dataResource = dataResource else { return }
> guard let imageResource = imageResource else { return }
> 
> // everything is ready now
> decodeImage(dataResource, imageResource) { imageTmp in
>   dewarpAndCleanupImage(imageTmp) { imageResult in
> completion(imageResult)
>   }
> }
>   }
> 
>   loadWebResource("dataprofile.txt") { result in
> dataResource = result
> continuation()
>   }
>   loadWebResource("imagedata.dat") { result in
> imageResource = result
> continuation()
>   }
> 
>   // ... other stuff can go here to cover load latency...
> 
>   finishedBody = true
>   continuation()
> }
> 
> 
> This seems more lightweight than a future to me. I know I've used this 
> pattern a few times. What I'm not sure about is how thrown errors would work. 
> Surely you 

Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Wallacy via swift-evolution
Yes, maybe in this way... Or using dispatch_group..

dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
// loadWebResource
});

dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
// loadWebResource
});

dispatch_group_notify(group, dispatch_get_global_queue(0, 0), ^{
// decodeImage ... etc...
});

This can be made using different strategies; the compiler will select the best
fit for every case. Different runtimes also have different "best" strategies.
No need to use an intermediary type.

On Mon, Sep 4, 2017 at 14:53, Michel Fortin wrote:

>
> > On Sep 4, 2017 at 10:01, Wallacy via swift-evolution <swift-evolution@swift.org> wrote:
> >
> > func processImageData1a() async -> Image {
> >   let dataResource  = async loadWebResource("dataprofile.txt")
> >   let imageResource = async loadWebResource("imagedata.dat")
> >
> >   // ... other stuff can go here to cover load latency...
> >
> >   let imageTmp    = await decodeImage(dataResource, imageResource) // compiler error if await is not present.
> >   let imageResult = await dewarpAndCleanupImage(imageTmp)
> >   return imageResult
> > }
> >
> >
> > If this (or something like that) is not implemented, people will create
> several versions to solve the same problem, so that later (Swift 6?) will
> be solved (because people want this), and we will live with several bad
> codes to maintain.
>
> Just to be sure of what you are proposing, am I right to assume this would
> be compiled down to something like this?
>
> func processImageData1a(completion: (Image) -> ()) {
>   var dataResource: Resource? = nil
>   var imageResource: Resource? = nil
>   var finishedBody = false
>
>   func continuation() {
> // only continue once everything is ready
> guard finishedBody else { return }
> guard let dataResource = dataResource else { return }
> guard let imageResource = imageResource else { return }
>
> // everything is ready now
> decodeImage(dataResource, imageResource) { imageTmp in
>   dewarpAndCleanupImage(imageTmp) { imageResult in
> completion(imageResult)
>   }
> }
>   }
>
>   loadWebResource("dataprofile.txt") { result in
> dataResource = result
> continuation()
>   }
>   loadWebResource("imagedata.dat") { result in
> imageResource = result
> continuation()
>   }
>
>   // ... other stuff can go here to cover load latency...
>
>   finishedBody = true
>   continuation()
> }
>
>
> This seems more lightweight than a future to me. I know I've used this
> pattern a few times. What I'm not sure about is how thrown errors would
> work. Surely you want error handling to work when loading resources from
> the web.
>
>
> --
> Michel Fortin
> https://michelf.ca
>
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution

-Pierre

> On Sep 4, 2017, at 9:10 AM, Chris Lattner via swift-evolution 
>  wrote:
> 
> 
>> On Sep 4, 2017, at 9:05 AM, Jean-Daniel > > wrote:
>> 
 Sometimes, it’d probably make sense (or even be required) to fix this to a 
 certain queue (in the thread(-pool?) sense), but at others it may just 
 make sense to execute the messages in-place by the sender if they don’t 
 block so no context switch is incurred.
>>> 
>>> Do you mean kernel context switch?  With well behaved actors, the runtime 
>>> should be able to run work items from many different queues on the same 
>>> kernel thread.  The “queue switch cost” is designed to be very very low.  
>>> The key thing is that the runtime needs to know when work on a queue gets 
>>> blocked so the kernel thread can move on to servicing some other queues 
>>> work.
>> 
>> My understanding is that a kernel thread can’t move on servicing a different 
>> queue while a block is executing on it. The runtime already know when a 
>> queue is blocked, and the only way it has to mitigate the problem is to 
>> spawn an other kernel thread to server the other queues. This is what cause 
>> the kernel thread explosion.
> 
> I’m not sure what you mean by “executing on it”.  A work item that currently 
> has a kernel thread can be doing one of two things: “executing work” (like 
> number crunching) or “being blocked in the kernel on something that GCD 
> doesn’t know about”. 
> 
> However, the whole point is that work items shouldn’t do this: as you say it 
> causes thread explosions.  It is better for them to yield control back to 
> GCD, which allows GCD to use the kernel thread for other queues, even though 
> the original *queue* is blocked.


You're forgetting two things:

First off, when the work item stops doing work and gives up control, the kernel 
thread doesn't become instantaneously available. If you want the thread to be 
reusable to execute some asynchronously waited on work that the actor is 
handling, then you have to make sure to defer scheduling this work until the 
thread is in a reusable state.

Second, there may be other work enqueued already in this context, in which 
case, even if the current work item yields, whatever it's waiting on will 
create a new thread, because the current context is in use.

The first issue is something we can optimize (despite GCD not doing it), with 
tons of techniques, so let's not rathole into a discussion on it.
The second one is not something we can "fix". There will be cases when the 
correct thing to do is to linearize, and some cases when it's not. And you 
can't know upfront what the right decision was.



Something else I realized is that this code is fundamentally broken in Swift:

actor func foo()
{
    let lock = NSLock()
    lock.lock()

    let compute = await someCompute() // <-- this really breaks `foo` into two
                                      // pieces of code that can execute on
                                      // two different physical threads.
    lock.unlock()
}


The reason why it is broken is that mutexes (whether NSLock, pthread_mutex, or 
os_unfair_lock) have to be unlocked from the same thread that took them. The 
await right in the middle here means that we can't guarantee that.

There are numerous primitives that can't be used across an await call in this 
way:
- things that use the calling context identity in some object (such as locks, 
mutexes, ...)
- anything that attaches data to the context (TSDs)

The things in the first category probably have to be typed in a way that makes 
using them across an async or await call a compile-time error.
The things in the second category are actor-unsafe and need to move to other 
ways of doing the same thing.
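A sketch of what the first category's safe usage looks like, using today's async/await syntax (the thread is discussing a then-hypothetical design; `someCompute`, the counter, and all names here are invented stand-ins):

```swift
import Foundation

// Keep the lock confined to a fully synchronous region, so lock() and
// unlock() are guaranteed to run on the same thread, and only suspend after
// the lock has been released.
let lock = NSLock()
var sharedCounter = 0

func someCompute() async -> Int { 42 }

func fooFixed() async {
    // Synchronous critical section: no suspension point while the lock is held.
    lock.lock()
    sharedCounter += 1
    lock.unlock()

    // Safe: the await happens with no lock held, so it doesn't matter which
    // physical thread resumes the function afterwards.
    _ = await someCompute()
}
```

The compile-time enforcement suggested above would amount to rejecting any `await` lexically between a `lock()` and its matching `unlock()`.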



-Pierre
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution
> On Sep 4, 2017, at 7:27 AM, Wallacy via swift-evolution 
>  wrote:
> 
> Hello,
> 
> I have a little question about the actors.
> 
> On WWDC 2012 Session 712 one of the most important tips (for me at least) 
> was: Improve Performance with Reader-Writer Access
> 
> Basically:
> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
> • Use synchronous concurrent “reads”: dispatch_sync()
> • Use asynchronous serialized “writes”: dispatch_barrier_async()
> 
> Example:
> // ...
>_someManagerQueue = dispatch_queue_create("SomeManager", 
> DISPATCH_QUEUE_CONCURRENT);
> // ...
> 
> And then:
> 
> - (id) getSomeArrayItem:(NSUInteger) index {
> __block id importantObj = nil;
> dispatch_sync(_someManagerQueue, ^{
> importantObj = [_importantArray objectAtIndex:index];
> });
> return importantObj;
> }
> - (void) removeSomeArrayItem:(id) object {
>  dispatch_barrier_async(_someManagerQueue,^{
>  [_importantArray removeObject:object];
>  });
>  }
> - (void) addSomeArrayItem:(id) object {
>  dispatch_barrier_async(_someManagerQueue,^{
>  [_importantArray addObject:object];
>  });
>  }
> 
> That way you ensure that whenever you read a piece of information (e.g. an 
> array), all the "changes" have either been made or are "waiting". And every 
> time you write some information, your program will not be blocked waiting 
> for the operation to complete.
> 
> That way, if you use several threads, none will have to wait for another to 
> get any value unless one of them is "writing", which is the right thing to do.
> 
> How will this be composed using actors? I see a lot of discussion about 
> using serial queues, and I have not seen any mechanism similar to 
> dispatch_barrier_async being discussed here or in other threads.

Actors are serial and exclusive, so this concurrent-queue approach is not 
relevant to them.
Also, in the QoS world, using reader-writer locks or private concurrent queues 
this way is not terribly great.
Lastly, for a simple write like that you want dispatch_barrier_sync(), not 
async (async will create a thread, which is terribly wasteful for so little 
work).

We covered these subtleties in this year's WWDC GCD session.
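Translated to Swift's Dispatch API with that correction applied (the type and its contents are invented for illustration), the reader-writer pattern becomes:

```swift
import Dispatch

// Reader-writer pattern on a private concurrent queue, with the correction
// above: concurrent reads via sync, and a *synchronous* barrier for small
// writes (the equivalent of dispatch_barrier_sync rather than _async),
// avoiding a wasteful thread spawn for a tiny work item.
final class SomeManager {
    private let queue = DispatchQueue(label: "SomeManager",
                                      attributes: .concurrent)
    private var items: [String] = []

    func item(at index: Int) -> String {
        queue.sync { items[index] }                        // concurrent read
    }

    func add(_ item: String) {
        queue.sync(flags: .barrier) { items.append(item) } // exclusive write
    }
}
```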

-Pierre

> 
> Em seg, 4 de set de 2017 às 08:20, Daniel Vollmer via swift-evolution 
> > escreveu:
> Hello,
> 
> first off, I’m following this discussion with great interest, even though my 
> background (simulation software on HPC) has a different focus than the 
> “usual” paradigms Swift seeks to (primarily) address.
> 
> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
> > > wrote:
> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  >> > wrote:
> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  >>> > wrote:
> >>>
> >>> Is there a specific important use case for being able to target an actor 
> >>> to an existing queue?  Are you looking for advanced patterns where 
> >>> multiple actors (each providing disjoint mutable state) share an 
> >>> underlying queue? Would this be for performance reasons, for 
> >>> compatibility with existing code, or something else?
> >>
> >> Mostly for interaction with current designs where being on a given bottom 
> >> serial queue gives you the locking context for resources naturally 
> >> attached to it.
> >
> > Ok.  I don’t understand the use-case well enough to know how we should 
> > model this.  For example, is it important for an actor to be able to change 
> > its queue dynamically as it goes (something that sounds really scary to me) 
> > or can the “queue to use” be specified at actor initialization time?
> 
> I’m confused, but that may just be me misunderstanding things again. I’d 
> assume each actor has its own (serial) queue that is used to serialize its 
> messages, so the queue above refers to the queue used to actually process the 
> messages the actor receives, correct?
> 
> Sometimes, it’d probably make sense (or even be required) to fix this to a 
> certain queue (in the thread(-pool?) sense), but at other times it may just 
> make sense to execute the messages in-place by the sender if they don’t block, 
> so no context switch is incurred.
> 
> > One plausible way to model this is to say that it is a “multithreaded 
> > actor” of some sort, where the innards of the actor allow arbitrary number 
> > of client threads to call into it concurrently.  The onus would be on the 
> > implementor of the NIC or database to implement the proper synchronization 
> > on the mutable state within the actor.
> >>
> >> I think what you said made sense.
> >
> > Ok, I captured this in yet-another speculative section:
> > https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
> >  
> > 

Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread Daniel Duan via swift-evolution

> On Sep 3, 2017, at 1:35 PM, Xiaodi Wu via swift-evolution 
>  wrote:
> 
> The desired behavior was the major topic of controversy during review; I’m 
> wary of revisiting this topic as we are essentially relitigating the proposal.
> 
> To start off, the premise, if I recall, going into review was that the author 
> **rejected** the notion that pattern matching should mirror creation.

Original author here. Here’s the review thread, for context: 
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20170306/thread.html#33626
 


While I did state that declaration and pattern should be considered separately, 
that was a defense of the pattern syntax as it stood in that specific revision 
of the proposal. In my heart of hearts, I was in favor of mandatory labels all 
along. In fact, it’s what the 1st revision of the proposal wanted:  
https://github.com/apple/swift-evolution/blob/43ca098355762014f53e1b54e02d2f6a01253385/proposals/0155-normalize-enum-case-representation.md
 


The strictness of the label requirements got progressively knocked down as the 
proposal graduated from the 1st revision to the 2nd, and then to the acceptance 
rationale.

> I happen to agree with you on this point, but it was not the prevailing 
> argument. Fortunately, we do not need to settle this to arrive at some 
> clarity for the issues at hand.
> 
> From a practical standpoint, a requirement for labels in all cases would be 
> much more source-breaking, whereas the proposal as it stands would allow 
> currently omitted labels to continue being valid. Moreover, and I think this 
> is a worthy consideration, one argument for permitting the omission of labels 
> during pattern matching is to encourage API designers to use labels to 
> clarify initialization without forcing its use by API consumers during every 
> pattern matching operation.
> 
> In any case, the conclusion reached is precedented in the world of functions:
> 
> func g(a: Int, b: Int) { ... }
> let f = g
> f(1, 2)
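To make the cited precedent concrete, here is a minimal, self-contained sketch in plain Swift (the body of `g` is an illustrative addition):

```swift
// A function reference sheds its argument labels, so the labels become
// optional (in fact, forbidden) at the indirect call site.
func g(a: Int, b: Int) -> Int { return a + b }

let f = g              // f has type (Int, Int) -> Int — no labels
let x = f(1, 2)        // labels cannot be supplied here
let y = g(a: 1, b: 2)  // a direct call still requires them
assert(x == y)
```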
> 
> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution 
> > wrote:
> Hello Swift Evolution,
> 
> I took up the cause of implementing SE-0155 
> ,
>  and am most of the way through the larger points of the proposal.  One thing 
> struck me when I got to the part about normalizing the behavior of pattern 
> matching 
> .
>   The Core Team indicated in their rationale 
> 
>  that the proposal’s suggestion that a variable binding sub in for a label 
> was a little much as in this example:
> 
> enum Foo {
>   case foo(x: Int, y: Int)
> }
> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
> if case let .foo(x, y) {} // Fine?  Missing labels, but variable names match 
> labels
> 
> They instead suggested the following behavior:
> 
> enum Foo {
>   case foo(x: Int, y: Int)
> }
> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
> if case let .foo(x, y) {} // Fine?  Missing labels, and full name of case is 
> unambiguous
> 
> Which, for example, would reject this:
> 
> enum Foo {
>   case foo(x: Int, y: Int) // Note: foo(x:y:)
>   case foo(x: Int, z: Int) // Note: foo(x:z:)
> }
> if case let .foo(x, y) {} // Bad!  Are we matching foo(x:y:) or foo(x:z:)?
> 
> With this reasoning:
> 
>>  - While an associated-value label can indeed contribute to the readability 
>> of the pattern, the programmer can also choose a meaningful name to bind to 
>> the associated value.  This binding name can convey at least as much 
>> information as a label would.
>> 
>>   - The risk of mis-labelling an associated value grows as the number of 
>> associated values grows.  However, very few cases carry a large number of 
>> associated values.  As the amount of information which the case should carry 
>> grows, it becomes more and more interesting to encapsulate that information 
>> in its own struct — among other reasons, to avoid the need to revise every 
>> matching case-pattern in the program.  Furthermore, when a case does carry a 
>> significant number of associated values, there is often a positional 
>> conventional between them that lowers the risk of re-ordering: for example, 
>> the conventional left-then-right ordering of a binary search tree.  
>> Therefore this risk is somewhat over-stated, and of course 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution
> On Sep 4, 2017, at 10:36 AM, Chris Lattner via swift-evolution 
>  wrote:
> 
> On Sep 3, 2017, at 12:44 PM, Pierre Habouzit  > wrote:
>> My currently not very well formed opinion on this subject is that GCD 
>> queues are just what you need with these possibilities:
>> - this Actor queue can be targeted to other queues by the developer when 
>> they mean for these actors to be executed in an existing execution context 
>> / locking domain,
>> - we disallow Actors to be directly targeted to GCD global concurrent 
>> queues ever
>> - for the other ones we create a new abstraction with stronger and 
>> better guarantees (typically limiting the number of possible threads 
>> servicing actors to a low number, not greater than NCPU).
> 
> Is there a specific important use case for being able to target an actor 
> to an existing queue?  Are you looking for advanced patterns where 
> multiple actors (each providing disjoint mutable state) share an 
> underlying queue? Would this be for performance reasons, for 
> compatibility with existing code, or something else?
 
 Mostly for interaction with current designs where being on a given bottom 
 serial queue gives you the locking context for resources naturally 
 attached to it.
>>> 
>>> Ok.  I don’t understand the use-case well enough to know how we should 
>>> model this.  For example, is it important for an actor to be able to change 
>>> its queue dynamically as it goes (something that sounds really scary to me) 
>>> or can the “queue to use” be specified at actor initialization time?
>> 
>> I think I need to read more on actors, because the same way you're not an OS 
>> runtime expert, I'm not (or rather no longer, I started down that path a 
>> lifetime ago) a language expert at all, and I feel like I need to understand 
>> your world better to try to explain this part better to you.
> 
> No worries.  Actually, after thinking about it a bit, I don’t think that 
> switching underlying queues at runtime is scary.
> 
> The important semantic invariant which must be maintained is that there is 
> only one thread executing within an actor context at a time.  Switching 
> around underlying queues (or even having multiple actors on the same queue) 
> shouldn’t be a problem.
> 
> OTOH, you don’t want an actor “listening” to two unrelated queues, because 
> there is nothing to synchronize between the queues, and you could have 
> multiple actor methods invoked at the same time: you lose the protection of a 
> single serial queue. 
> 
> The only concern I’d have with an actor switching queues at runtime is that 
> you don’t want a race condition where an item on QueueA goes to the actor, 
> then it switches to QueueB, then another item from QueueB runs while the 
> actor is already doing something for QueueA.
> 
> 
 I think what you said made sense.
>>> 
>>> Ok, I captured this in yet-another speculative section:
>>> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>>>  
>>> 
>> Great. BTW I agree 100% with:
>> 
>> That said, this is definitely a power-user feature, and we should 
>> understand, build, and get experience using the basic system before 
>> considering adding something like this.
>> 
>> Private concurrent queues are not a success in dispatch and cause several 
>> issues; these queues are second class citizens in GCD in terms of the features 
>> they support, and building something with concurrency *within* is hard. I 
>> would keep it as "that's where we'll go some day" but not try to attempt it 
>> until we've built the simpler (or rather less hard) purely serial case first.
> 
> Right, I agree this is not important for the short term.  To clarify though, 
> I meant to indicate that these actors would be implemented completely 
> independently of dispatch, not that they’d build on private concurrent queues.
> 
> 
 Another problem I haven't touched either is kernel-issued events (inbound 
 IPC from other processes, networking events, etc...). Dispatch for the 
 longest time used an indirection through a manager thread for all such 
 events, and that had two major issues:
 
 - the thread hops it caused, causing networking workloads to utilize up to 
 15-20% more CPU time than an equivalent manually made pthread parked in 
 kevent(), because networking pace even when busy idles back all the time 
 as far as the CPU is concerned, so dispatch queues never stay hot, and the 
 context switch is not only a scheduled context switch but also has the 
 cost of a thread bring up
 
 - if you deliver all possible events this way you also deliver events that 
 cannot possibly make progress because the execution context that will 

Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-09-04 Thread Alejandro Martinez via swift-evolution
Sorry for jumping into this late.
I raised the topic of compile-time execution pretty much at the
beginning of Swift being open sourced, and it still pops into my mind
every time I see Jonathan Blow use it in his language.

So out of curiosity, how feasible is it for Swift to do this with the
current compiler pipeline (SIL, LLVM)? By that I mean actually marking
some function/file and letting it run at compile time. Would it be
possible for the compiler to interpret arbitrary Swift code? It
probably won't be easy, but I just want to know whether it's closer to
"it can be done" or "no way". From Blow's streams I know that this
would probably have to work by compiling everything it can before
interpreting the compile-time code, and I don't know how Swift would
deal with that.
But ignoring that, I would be pretty happy if we could have things
like the ones mentioned (including being able to do what Sourcery
does, and what the compiler does for Equatable etc., in plain Swift).
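For concreteness, this is the kind of boilerplate that compiler synthesis (SE-0185) or user-level compile-time metaprogramming would generate for us; a minimal hand-written sketch in plain Swift (the `Point` type is illustrative):

```swift
// Hand-written Equatable/Hashable conformance — the boilerplate that
// synthesis or compile-time metaprogramming would write for us.
struct Point {
    let x: Int
    let y: Int
}

extension Point: Equatable {
    static func == (lhs: Point, rhs: Point) -> Bool {
        return lhs.x == rhs.x && lhs.y == rhs.y
    }
}

extension Point: Hashable {
    var hashValue: Int {
        // Simple hand-rolled combiner, in the pre-synthesis style.
        return x.hashValue ^ (y.hashValue &* 16777619)
    }
}

assert(Point(x: 1, y: 2) == Point(x: 1, y: 2))
assert(Point(x: 1, y: 2) != Point(x: 2, y: 1))
```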

Thanks


On Thu, Aug 10, 2017 at 1:10 PM, Tino Heth via swift-evolution
 wrote:
> Imho this topic was much better than that other one ;-) — and I just
> realised that the idea of metaprogramming built on top of reflection wasn't
> discussed in its own thread yet…
> I fear "constexpr" is already burned, because people associate it with
> things like calculating Fibonacci numbers at compile time (which is kind of
> cool, but doesn't have much merit for most of us).
>
> Right now, there is SE-0185, which allows the compiler to synthesise
> Equatable and Hashable conformances.
> To do so, nearly 1500 lines of C++ are needed, and even if we assume that
> two thirds are comments and whitespace, it's still a big piece of code that
> could only be written by someone with deep knowledge of C++ and the Swift
> compiler.
> Compile time metaprogramming could do the same, but in probably less than
> twenty lines of Swift that could be written rather easily by anyone who
> knows the language…
>
> So to update my list of things that might be added, there are also some
> points that are already there and whose implementation could have been
> simplified drastically:
>
> - Forwarding of protocol conformance (Kotlin, for example, has this: When a
> member conforms to a protocol, you don't have to write a bunch of methods
> that just say "let my member do this")
> - init with reduced boilerplate
> - Subtyping for non-class types, including a "newtype" option
> - Property behaviours
>
> - Equatable, Hashabable
> - Encoding/Decoding
>
> I still wonder why virtually no one else seems to be thrilled about the
> power of the idea… @gor.f.gyolchanyan would you like to join an attempt to
> raise awareness?
>
> - Tino
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>



-- 
Alejandro Martinez
http://alejandromp.com
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Michel Fortin via swift-evolution

> Le 4 sept. 2017 à 10:01, Wallacy via swift-evolution 
>  a écrit :
> 
> func processImageData1a() async -> Image {
>   let dataResource  = async loadWebResource("dataprofile.txt")
>   let imageResource = async loadWebResource("imagedata.dat")
>   
>   // ... other stuff can go here to cover load latency...
>   
>   let imageTmp    = await decodeImage(dataResource, imageResource) // 
> compiler error if await is not present.
>   let imageResult = await dewarpAndCleanupImage(imageTmp)
>   return imageResult
> }
> 
> 
> If this (or something like it) is not implemented, people will create 
> several ad-hoc versions to solve the same problem; it will eventually be 
> solved anyway (Swift 6?) because people want this, and meanwhile we will 
> live with a lot of bad code to maintain.

Just to be sure of what you are proposing, am I right to assume this would be 
compiled down to something like this?

func processImageData1a(completion: (Image) -> ()) {
  var dataResource: Resource? = nil
  var imageResource: Resource? = nil
  var finishedBody = false

  func continuation() {
// only continue once everything is ready
guard finishedBody else { return }
guard let dataResource = dataResource else { return }
guard let imageResource = imageResource else { return }

// everything is ready now
decodeImage(dataResource, imageResource) { imageTmp in
  dewarpAndCleanupImage(imageTmp) { imageResult in
completion(imageResult)
  }
}
  }

  loadWebResource("dataprofile.txt") { result in
dataResource = result
continuation()
  }
  loadWebResource("imagedata.dat") { result in
imageResource = result
continuation()
  }

  // ... other stuff can go here to cover load latency...

  finishedBody = true
  continuation()
}


This seems more lightweight than a future to me. I know I've used this pattern 
a few times. What I'm not sure about is how thrown errors would work. Surely 
you want error handling to work when loading resources from the web.
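One hedged sketch of how errors might thread through this pattern: wrap each callback payload in a Result-like enum, so a failure from either load short-circuits to the single completion handler. The `Resource`/`Image` types and the loader functions below are stand-in stubs (assumptions, not real APIs), kept synchronous so the example is self-contained:

```swift
// Stand-in stubs for the types and async calls assumed by the example above.
typealias Resource = String
typealias Image = String

enum LoadResult<T> {          // a Result-like enum; success or error
    case success(T)
    case failure(Error)
}

func loadWebResource(_ name: String, _ completion: (LoadResult<Resource>) -> ()) {
    completion(.success(name))  // pretend the network load succeeded
}
func decodeImage(_ a: Resource, _ b: Resource, _ completion: (Image) -> ()) {
    completion(a + "+" + b)
}
func dewarpAndCleanupImage(_ img: Image, _ completion: (Image) -> ()) {
    completion(img)
}

func processImageData1a(completion: @escaping (LoadResult<Image>) -> ()) {
    var dataResource: LoadResult<Resource>? = nil
    var imageResource: LoadResult<Resource>? = nil
    var finishedBody = false

    func continuation() {
        // only continue once the body finished and both callbacks fired
        guard finishedBody else { return }
        guard let dataRes = dataResource, let imageRes = imageResource else { return }
        switch (dataRes, imageRes) {
        case (.success(let data), .success(let image)):
            decodeImage(data, image) { imageTmp in
                dewarpAndCleanupImage(imageTmp) { imageResult in
                    completion(.success(imageResult))
                }
            }
        case (.failure(let error), _), (_, .failure(let error)):
            completion(.failure(error))  // first observed error wins
        }
    }

    loadWebResource("dataprofile.txt") { dataResource = $0; continuation() }
    loadWebResource("imagedata.dat") { imageResource = $0; continuation() }

    finishedBody = true
    continuation()
}

var result: Image? = nil
processImageData1a { if case .success(let img) = $0 { result = img } }
assert(result == "dataprofile.txt+imagedata.dat")
```

With truly asynchronous loaders the shape stays the same; only the error-propagation policy (first error wins, later errors dropped) is a design choice to make explicit.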


-- 
Michel Fortin
https://michelf.ca

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution
On Sep 3, 2017, at 12:44 PM, Pierre Habouzit  wrote:
> My currently not very well formed opinion on this subject is that GCD 
> queues are just what you need with these possibilities:
> - this Actor queue can be targeted to other queues by the developer when 
> they mean for these actors to be executed in an existing execution context 
> / locking domain,
> - we disallow Actors to be directly targeted to GCD global concurrent 
> queues ever
> - for the other ones we create a new abstraction with stronger and better 
> guarantees (typically limiting the number of possible threads servicing 
> actors to a low number, not greater than NCPU).
 
 Is there a specific important use case for being able to target an actor 
 to an existing queue?  Are you looking for advanced patterns where 
 multiple actors (each providing disjoint mutable state) share an 
 underlying queue? Would this be for performance reasons, for compatibility 
 with existing code, or something else?
>>> 
>>> Mostly for interaction with current designs where being on a given bottom 
>>> serial queue gives you the locking context for resources naturally attached 
>>> to it.
>> 
>> Ok.  I don’t understand the use-case well enough to know how we should model 
>> this.  For example, is it important for an actor to be able to change its 
>> queue dynamically as it goes (something that sounds really scary to me) or 
>> can the “queue to use” be specified at actor initialization time?
> 
> I think I need to read more on actors, because the same way you're not an OS 
> runtime expert, I'm not (or rather no longer, I started down that path a 
> lifetime ago) a language expert at all, and I feel like I need to understand 
> your world better to try to explain this part better to you.

No worries.  Actually, after thinking about it a bit, I don’t think that 
switching underlying queues at runtime is scary.

The important semantic invariant which must be maintained is that there is only 
one thread executing within an actor context at a time.  Switching around 
underlying queues (or even having multiple actors on the same queue) shouldn’t 
be a problem.

OTOH, you don’t want an actor “listening” to two unrelated queues, because 
there is nothing to synchronize between the queues, and you could have multiple 
actor methods invoked at the same time: you lose the protection of a single 
serial queue. 

The only concern I’d have with an actor switching queues at runtime is that you 
don’t want a race condition where an item on QueueA goes to the actor, then it 
switches to QueueB, then another item from QueueB runs while the actor is 
already doing something for QueueA.
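To make the invariant concrete in today's GCD: two "actor-like" serial queues can share one locking context by targeting a common bottom serial queue, which serializes both of them. A hedged sketch with illustrative names, not a proposed actor API:

```swift
import Dispatch

// Two actor-like serial queues sharing one locking context by targeting
// the same bottom serial queue; the target serializes everything.
let lockingContext = DispatchQueue(label: "example.locking-context")
let actorA = DispatchQueue(label: "example.actorA", target: lockingContext)
let actorB = DispatchQueue(label: "example.actorB", target: lockingContext)

var protectedState = 0  // only ever touched from the lockingContext hierarchy

let group = DispatchGroup()
for _ in 0..<1000 {
    actorA.async(group: group) { protectedState += 1 }
    actorB.async(group: group) { protectedState += 1 }
}
group.wait()
assert(protectedState == 2000)  // no lost updates: the target queue serializes
```

The race Chris describes corresponds to retargeting `actorA` while items are already enqueued, which is exactly why GCD restricts when a target queue may change.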


>>> I think what you said made sense.
>> 
>> Ok, I captured this in yet-another speculative section:
>> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>>  
>> 
> Great. BTW I agree 100% with:
> 
> That said, this is definitely a power-user feature, and we should understand, 
> build, and get experience using the basic system before considering adding 
> something like this.
> 
> Private concurrent queues are not a success in dispatch and cause several 
> issues; these queues are second class citizens in GCD in terms of the features 
> they support, and building something with concurrency *within* is hard. I 
> would keep it as "that's where we'll go some day" but not try to attempt it 
> until we've built the simpler (or rather less hard) purely serial case first.

Right, I agree this is not important for the short term.  To clarify though, I 
meant to indicate that these actors would be implemented completely 
independently of dispatch, not that they’d build on private concurrent queues.


>>> Another problem I haven't touched either is kernel-issued events (inbound 
>>> IPC from other processes, networking events, etc...). Dispatch for the 
>>> longest time used an indirection through a manager thread for all such 
>>> events, and that had two major issues:
>>> 
>>> - the thread hops it caused, causing networking workloads to utilize up to 
>>> 15-20% more CPU time than an equivalent manually made pthread parked in 
>>> kevent(), because networking pace even when busy idles back all the time as 
>>> far as the CPU is concerned, so dispatch queues never stay hot, and the 
>>> context switch is not only a scheduled context switch but also has the cost 
>>> of a thread bring up
>>> 
>>> - if you deliver all possible events this way you also deliver events that 
>>> cannot possibly make progress because the execution context that will 
>>> handle them is already "locked" (as in busy running something else).
>>> 
>>> It took us several years to get to the point we presented at WWDC this year 
>>> where we deliver events directly to the right dispatch queue. If you only 
>>> have very 

Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread Matthew Johnson via swift-evolution

> On Sep 4, 2017, at 11:47 AM, T.J. Usiyan  wrote:
> 
> I wasn't arguing for a strictly parallel syntax. I was arguing against being 
> able to omit labels. I don't view those as strictly tied together. How are 
> they?

Like Xiaodi I don’t think it would be productive to rehash the prior discussion 
so I’m going to try to be brief.  

In the discussion one idea that arose was to support two labels for associated 
values in a manner similar to parameters.  One would be used during 
construction and the other during matching.  

The idea behind this was that when creating a value, a case is analogous to a 
factory method, and it would be nice to be able to provide labels using the same 
naming guidelines we use for external argument labels.  For example, if an 
associated value was an index `at` might be used for clarity at the call site.  
Labels like this don’t necessarily make as much sense when destructuring the 
value.  The idea of the “internal” label of a case was that it would be used 
when matching and could be elided if the bound name was identical.  In the 
example, `index` might be used.
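In today's Swift that reads roughly as follows (a hedged sketch; the case and names are illustrative, and label elision behaves as SE-0155's accepted form allows):

```swift
// Construction reads like a factory method ("insert at 3"); when matching,
// the bound name carries the meaning, so the label adds little.
enum EditOperation {
    case insert(at: Int)
}

let op = EditOperation.insert(at: 3)

switch op {
case .insert(let index):  // label elided; `index` conveys the semantics
    assert(index == 3)
}
```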

When matching, `let` is interspersed between the label and the name binding.  
Any label is already at a distance from the name it labels.  Instead of 
providing a label, the important thing is that the semantics of the bound 
variable be clear at the match site.  Much of the time the label actually 
reduces clarity at a match site by adding verbosity and very often repetition.  
If the bound name clearly communicates the purpose of the associated value, a 
label cannot add any additional clarity; it can only reduce clarity.

The proposal acknowledges most of this by allowing us to elide labels when the 
bound name matches the label.  It doesn’t allow for a distinction between the 
call-site label used when creating a value from the match-site name that allows 
elision.  My recollection of the discussion leads me to believe that is 
unlikely to ever be accepted as an enhancement.  That said, nothing in the 
current design strictly prevents us from considering that in the future if 
experience demonstrates that it would be useful.

> 
> On Mon, Sep 4, 2017 at 12:38 PM, Matthew Johnson  > wrote:
> 
>> On Sep 4, 2017, at 10:52 AM, T.J. Usiyan via swift-evolution 
>> > wrote:
>> 
>> While re-litigating has its issues, I am for simplifying the rule and 
>> always requiring the labels if they exist. This is similar to the change 
>> around external labels. Yes, it is slightly less convenient, but it removes 
>> a difficult-to-motivate caveat for beginners.
> 
> I disagree.  Creating a value and destructuring it are two very different 
> operations and I believe it is a mistake to require them to have parallel 
> syntax.  
> 
> Imagine a future enhancement to the language that supports destructuring a 
> struct.  A struct might not have a strictly memberwise initializer.  It might 
> not even be possible to reconstruct initializer arguments for the sake of 
> parallel destructuring syntax.  There might even be more than one projection 
> that is reasonable to use when destructuring the value in a pattern (such as 
> cartesian and polar coordinates).
> 
> FWIW, I made this case in more detail during the discussion and review of 
> this proposal.
> 
>> 
>> On Sun, Sep 3, 2017 at 4:35 PM, Xiaodi Wu via swift-evolution 
>> > wrote:
>> The desired behavior was the major topic of controversy during review; I’m 
>> wary of revisiting this topic as we are essentially relitigating the 
>> proposal.
>> 
>> To start off, the premise, if I recall, going into review was that the 
>> author **rejected** the notion that pattern matching should mirror creation. 
>> I happen to agree with you on this point, but it was not the prevailing 
>> argument. Fortunately, we do not need to settle this to arrive at some 
>> clarity for the issues at hand.
>> 
>> From a practical standpoint, a requirement for labels in all cases would be 
>> much more source-breaking, whereas the proposal as it stands would allow 
>> currently omitted labels to continue being valid. Moreover, and I think this 
>> is a worthy consideration, one argument for permitting the omission of 
>> labels during pattern matching is to encourage API designers to use labels 
>> to clarify initialization without forcing its use by API consumers during 
>> every pattern matching operation.
>> 
>> In any case, the conclusion reached is precedented in the world of functions:
>> 
>> func g(a: Int, b: Int) { ... }
>> let f = g
>> f(1, 2)
>> 
>> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution 
>> > wrote:
>> Hello Swift Evolution,
>> 
>> I took up the cause of implementing SE-0155 
>> 

Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread T.J. Usiyan via swift-evolution
I wasn't arguing for a strictly parallel syntax. I was arguing against
being able to omit labels. I don't view those as strictly tied together.
How are they?

On Mon, Sep 4, 2017 at 12:38 PM, Matthew Johnson 
wrote:

>
> On Sep 4, 2017, at 10:52 AM, T.J. Usiyan via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> While re-litigating has its issues, I am for simplifying the rule and
> always requiring the labels if they exist. This is similar to the change
> around external labels. Yes, it is slightly less convenient, but it removes
> a difficult-to-motivate caveat for beginners.
>
>
> I disagree.  Creating a value and destructuring it are two very different
> operations and I believe it is a mistake to require them to have parallel
> syntax.
>
> Imagine a future enhancement to the language that supports destructuring a
> struct.  A struct might not have a strictly memberwise initializer.  It
> might not even be possible to reconstruct initializer arguments for the
> sake of parallel destructuring syntax.  There might even be more than one
> projection that is reasonable to use when destructuring the value in a
> pattern (such as cartesian and polar coordinates).
>
> FWIW, I made this case in more detail during the discussion and review of
> this proposal.
>
>
> On Sun, Sep 3, 2017 at 4:35 PM, Xiaodi Wu via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>> The desired behavior was the major topic of controversy during review;
>> I’m wary of revisiting this topic as we are essentially relitigating the
>> proposal.
>>
>> To start off, the premise, if I recall, going into review was that the
>> author **rejected** the notion that pattern matching should mirror
>> creation. I happen to agree with you on this point, but it was not the
>> prevailing argument. Fortunately, we do not need to settle this to arrive
>> at some clarity for the issues at hand.
>>
>> From a practical standpoint, a requirement for labels in all cases would
>> be much more source-breaking, whereas the proposal as it stands would allow
>> currently omitted labels to continue being valid. Moreover, and I think
>> this is a worthy consideration, one argument for permitting the omission of
>> labels during pattern matching is to encourage API designers to use labels
>> to clarify initialization without forcing its use by API consumers during
>> every pattern matching operation.
>>
>> In any case, the conclusion reached is precedented in the world of
>> functions:
>>
>> func g(a: Int, b: Int) { ... }
>> let f = g
>> f(1, 2)
>>
>> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>>> Hello Swift Evolution,
>>>
>>> I took up the cause of implementing SE-0155
>>> ,
>>> and am most of the way through the larger points of the proposal.  One
>>> thing struck me when I got to the part about normalizing the behavior
>>> of pattern matching
>>> .
>>> The Core Team indicated in their rationale
>>> 
>>>  that
>>> the proposal’s suggestion that a variable binding sub in for a label was a
>>> little much as in this example:
>>>
>>> enum Foo {
>>>   case foo(x: Int, y: Int)
>>> }
>>> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
>>> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
>>> if case let .foo(x, y) {} // Fine?  Missing labels, but variable names
>>> match labels
>>>
>>> They instead suggested the following behavior:
>>>
>>> enum Foo {
>>>   case foo(x: Int, y: Int)
>>> }
>>> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
>>> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
>>> if case let .foo(x, y) {} // Fine?  Missing labels, and full name of
>>> case is unambiguous
>>>
>>> Which, for example, would reject this:
>>>
>>> enum Foo {
>>>   case foo(x: Int, y: Int) // Note: foo(x:y:)
>>>   case foo(x: Int, z: Int) // Note: foo(x:z:)
>>> }
>>> if case let .foo(x, y) {} // Bad!  Are we matching foo(x:y:) or
>>> foo(x:z:)?
>>>
>>> With this reasoning:
>>>
>>>  - While an associated-value label can indeed contribute to the readability 
>>> of the pattern, the programmer can also choose a meaningful name to bind to 
>>> the associated value.  This binding name can convey at least as much 
>>> information as a label would.
>>>
>>>   - The risk of mis-labelling an associated value grows as the number of 
>>> associated values grows.  However, very few cases carry a large number of 
>>> associated values.  As the amount of information which the case should 
>>> carry grows, it becomes more and more interesting to encapsulate that 
>>> information in its own struct — among other 

Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread Matthew Johnson via swift-evolution

> On Sep 4, 2017, at 10:52 AM, T.J. Usiyan via swift-evolution 
>  wrote:
> 
> While re-litigating has its issues, I am for simplifying the rule and always 
> requiring the labels if they exist. This is similar to the change around 
> external labels. Yes, it is slightly less convenient, but it removes a 
> difficult-to-motivate caveat for beginners.

I disagree.  Creating a value and destructuring it are two very different 
operations and I believe it is a mistake to require them to have parallel 
syntax.  

Imagine a future enhancement to the language that supports destructuring a 
struct.  A struct might not have a strictly memberwise initializer.  It might 
not even be possible to reconstruct initializer arguments for the sake of 
parallel destructuring syntax.  There might even be more than one projection 
that is reasonable to use when destructuring the value in a pattern (such as 
cartesian and polar coordinates).
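The point about multiple reasonable projections can be sketched in today's Swift with computed properties (a hedged illustration; the `Vector` type and property names are mine):

```swift
import Foundation

// A value with two equally reasonable "projections": the cartesian pair it
// was initialized from, and a polar pair that no initializer ever saw.
struct Vector {
    var x: Double
    var y: Double

    var cartesian: (Double, Double) { return (x, y) }
    var polar: (r: Double, theta: Double) { return (hypot(x, y), atan2(y, x)) }
}

let v = Vector(x: 3, y: 4)
assert(v.polar.r == 5.0)  // destructuring by projection, not by initializer
```

A hypothetical struct-destructuring syntax would have to pick one of these projections, which is why mirroring the initializer is not obviously the right rule.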

FWIW, I made this case in more detail during the discussion and review of this 
proposal.

> 
> On Sun, Sep 3, 2017 at 4:35 PM, Xiaodi Wu via swift-evolution 
> > wrote:
> The desired behavior was the major topic of controversy during review; I’m 
> wary of revisiting this topic as we are essentially relitigating the proposal.
> 
> To start off, the premise, if I recall, going into review was that the author 
> **rejected** the notion that pattern matching should mirror creation. I 
> happen to agree with you on this point, but it was not the prevailing 
> argument. Fortunately, we do not need to settle this to arrive at some 
> clarity for the issues at hand.
> 
> From a practical standpoint, a requirement for labels in all cases would be 
> much more source-breaking, whereas the proposal as it stands would allow 
> currently omitted labels to continue being valid. Moreover, and I think this 
> is a worthy consideration, one argument for permitting the omission of labels 
> during pattern matching is to encourage API designers to use labels to 
> clarify initialization without forcing its use by API consumers during every 
> pattern matching operation.
> 
> In any case, the conclusion reached is precedented in the world of functions:
> 
> func g(a: Int, b: Int) { ... }
> let f = g
> f(1, 2)
> 
> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution 
> > wrote:
> Hello Swift Evolution,
> 
> I took up the cause of implementing SE-0155 
> ,
>  and am most of the way through the larger points of the proposal.  One thing 
> struck me when I got to the part about normalizing the behavior of pattern 
> matching 
> .
>   The Core Team indicated in their rationale 
> 
>  that the proposal’s suggestion that a variable binding sub in for a label 
> was a little much as in this example:
> 
> enum Foo {
>   case foo(x: Int, y: Int)
> }
> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
> if case let .foo(x, y) {} // Fine?  Missing labels, but variable names match 
> labels
> 
> They instead suggested the following behavior:
> 
> enum Foo {
>   case foo(x: Int, y: Int)
> }
> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
> if case let .foo(x, y) {} // Fine?  Missing labels, and full name of case is 
> unambiguous
> 
> Which, for example, would reject this:
> 
> enum Foo {
>   case foo(x: Int, y: Int) // Note: foo(x:y:)
>   case foo(x: Int, z: Int) // Note: foo(x:z:)
> }
> if case let .foo(x, y) {} // Bad!  Are we matching foo(x:y:) or foo(x:z:)?
> 
> With this reasoning:
> 
>>  - While an associated-value label can indeed contribute to the readability 
>> of the pattern, the programmer can also choose a meaningful name to bind to 
>> the associated value.  This binding name can convey at least as much 
>> information as a label would.
>> 
>>   - The risk of mis-labelling an associated value grows as the number of 
>> associated values grows.  However, very few cases carry a large number of 
>> associated values.  As the amount of information which the case should carry 
>> grows, it becomes more and more interesting to encapsulate that information 
>> in its own struct — among other reasons, to avoid the need to revise every 
>> matching case-pattern in the program.  Furthermore, when a case does carry a 
>> significant number of associated values, there is often a positional 
>> convention between them that lowers the risk of re-ordering: for example, 
>> the 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution

> On Sep 4, 2017, at 9:05 AM, Jean-Daniel  wrote:
> 
>>> Sometimes, it’d probably make sense (or even be required) to fix this to a 
>>> certain queue (in the thread(-pool?) sense), but at other times it may just 
>>> make sense to execute the messages in place by the sender if they don’t 
>>> block, so no context switch is incurred.
>> 
>> Do you mean kernel context switch?  With well behaved actors, the runtime 
>> should be able to run work items from many different queues on the same 
>> kernel thread.  The “queue switch cost” is designed to be very very low.  
>> The key thing is that the runtime needs to know when work on a queue gets 
>> blocked so the kernel thread can move on to servicing some other queues work.
> 
> My understanding is that a kernel thread can’t move on to servicing a different 
> queue while a block is executing on it. The runtime already knows when a queue 
> is blocked, and the only way it has to mitigate the problem is to spawn 
> another kernel thread to serve the other queues. This is what causes the 
> kernel thread explosion.

I’m not sure what you mean by “executing on it”.  A work item that currently 
has a kernel thread can be doing one of two things: “executing work” (like 
number crunching) or “being blocked in the kernel on something that GCD doesn’t 
know about”. 

However, the whole point is that work items shouldn’t do this: as you say it 
causes thread explosions.  It is better for them to yield control back to GCD, 
which allows GCD to use the kernel thread for other queues, even though the 
original *queue* is blocked.
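A minimal Dispatch sketch of the difference (my own illustration with invented names, not anything from the proposal): the first version parks the kernel thread GCD lent to the work item, while the second yields it back.

```swift
import Dispatch

// Anti-pattern: blocking the current thread until the async work finishes.
// While `done.wait()` is parked, GCD may have to spawn another kernel
// thread to keep other queues running — the thread explosion.
func fetchBlocking(_ produce: @escaping (@escaping (Int) -> Void) -> Void) -> Int {
    let done = DispatchSemaphore(value: 0)
    var result = 0
    produce { value in
        result = value
        done.signal()
    }
    done.wait() // kernel thread is held here, invisible to GCD
    return result
}

// Preferred: return control to GCD immediately and resume via a completion
// handler; no thread is held while the work is in flight, so the same
// kernel thread can service other queues even though this *queue* is blocked.
func fetchYielding(_ produce: @escaping (@escaping (Int) -> Void) -> Void,
                   completion: @escaping (Int) -> Void) {
    produce { value in
        completion(value)
    }
}
```

The async/await transformation essentially rewrites the first shape into the second automatically.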

-Chris

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Jean-Daniel via swift-evolution

> On Sep 4, 2017 at 17:54, Chris Lattner via swift-evolution wrote:
> 
> On Sep 4, 2017, at 4:19 AM, Daniel Vollmer wrote:
>> 
>> Hello,
>> 
>> first off, I’m following this discussion with great interest, even though my 
>> background (simulation software on HPC) has a different focus than the 
>> “usual” paradigms Swift seeks to (primarily) address.
>> 
>>> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>>>  wrote:
 On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
> 
> Is there a specific important use case for being able to target an actor 
> to an existing queue?  Are you looking for advanced patterns where 
> multiple actors (each providing disjoint mutable state) share an 
> underlying queue? Would this be for performance reasons, for 
> compatibility with existing code, or something else?
 
 Mostly for interaction with current designs where being on a given bottom 
 serial queue gives you the locking context for resources naturally 
 attached to it.
>>> 
>>> Ok.  I don’t understand the use-case well enough to know how we should 
>>> model this.  For example, is it important for an actor to be able to change 
>>> its queue dynamically as it goes (something that sounds really scary to me) 
>>> or can the “queue to use” be specified at actor initialization time?
>> 
>> I’m confused, but that may just be me misunderstanding things again. I’d 
>> assume each actor has its own (serial) queue that is used to serialize its 
>> messages, so the queue above refers to the queue used to actually process 
>> the messages the actor receives, correct?
> 
> Right.
> 
>> Sometimes, it’d probably make sense (or even be required) to fix this to a 
>> certain queue (in the thread(-pool?) sense), but at other times it may just 
>> make sense to execute the messages in place by the sender if they don’t 
>> block, so no context switch is incurred.
> 
> Do you mean kernel context switch?  With well behaved actors, the runtime 
> should be able to run work items from many different queues on the same 
> kernel thread.  The “queue switch cost” is designed to be very very low.  The 
> key thing is that the runtime needs to know when work on a queue gets blocked 
> so the kernel thread can move on to servicing some other queues work.

My understanding is that a kernel thread can’t move on to servicing a different 
queue while a block is executing on it. The runtime already knows when a queue 
is blocked, and the only way it has to mitigate the problem is to spawn another 
kernel thread to serve the other queues. This is what causes the kernel 
thread explosion.



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution
On Sep 4, 2017, at 4:19 AM, Daniel Vollmer  wrote:
> 
> Hello,
> 
> first off, I’m following this discussion with great interest, even though my 
> background (simulation software on HPC) has a different focus than the 
> “usual” paradigms Swift seeks to (primarily) address.
> 
>> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>>  wrote:
>>> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
 On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
 
 Is there a specific important use case for being able to target an actor 
 to an existing queue?  Are you looking for advanced patterns where 
 multiple actors (each providing disjoint mutable state) share an 
 underlying queue? Would this be for performance reasons, for compatibility 
 with existing code, or something else?
>>> 
>>> Mostly for interaction with current designs where being on a given bottom 
>>> serial queue gives you the locking context for resources naturally attached 
>>> to it.
>> 
>> Ok.  I don’t understand the use-case well enough to know how we should model 
>> this.  For example, is it important for an actor to be able to change its 
>> queue dynamically as it goes (something that sounds really scary to me) or 
>> can the “queue to use” be specified at actor initialization time?
> 
> I’m confused, but that may just be me misunderstanding things again. I’d 
> assume each actor has its own (serial) queue that is used to serialize its 
> messages, so the queue above refers to the queue used to actually process the 
> messages the actor receives, correct?

Right.

> Sometimes, it’d probably make sense (or even be required) to fix this to a 
> certain queue (in the thread(-pool?) sense), but at other times it may just 
> make sense to execute the messages in place by the sender if they don’t 
> block, so no context switch is incurred.

Do you mean kernel context switch?  With well behaved actors, the runtime 
should be able to run work items from many different queues on the same kernel 
thread.  The “queue switch cost” is designed to be very very low.  The key 
thing is that the runtime needs to know when work on a queue gets blocked so 
the kernel thread can move on to servicing some other queues work.

-Chris


Re: [swift-evolution] [SE-0155][Discuss] The role of labels in enum case patterns

2017-09-04 Thread T.J. Usiyan via swift-evolution
While re-litigating has its issues, I am for simplifying the rule and
always requiring the labels if they exist. This is similar to the change
around external labels. Yes, it is slightly less convenient, but it removes
a difficult-to-motivate caveat for beginners.

On Sun, Sep 3, 2017 at 4:35 PM, Xiaodi Wu via swift-evolution <
swift-evolution@swift.org> wrote:

> The desired behavior was the major topic of controversy during review; I’m
> wary of revisiting this topic as we are essentially relitigating the
> proposal.
>
> To start off, the premise, if I recall, going into review was that the
> author **rejected** the notion that pattern matching should mirror
> creation. I happen to agree with you on this point, but it was not the
> prevailing argument. Fortunately, we do not need to settle this to arrive
> at some clarity for the issues at hand.
>
> From a practical standpoint, a requirement for labels in all cases would
> be much more source-breaking, whereas the proposal as it stands would allow
> currently omitted labels to continue being valid. Moreover, and I think
> this is a worthy consideration, one argument for permitting the omission of
> labels during pattern matching is to encourage API designers to use labels
> to clarify initialization without forcing its use by API consumers during
> every pattern matching operation.
>
> In any case, the conclusion reached is precedented in the world of
> functions:
>
> func g(a: Int, b: Int) { ... }
> let f = g
> f(1, 2)
>
> On Sun, Sep 3, 2017 at 15:13 Robert Widmann via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>> Hello Swift Evolution,
>>
>> I took up the cause of implementing SE-0155, and am most of the way
>> through the larger points of the proposal.  One thing struck me when I got
>> to the part about normalizing the behavior of pattern matching.  The Core
>> Team indicated in their rationale that the proposal’s suggestion that a
>> variable binding sub in for a label was a little much, as in this example:
>>
>> enum Foo {
>>   case foo(x: Int, y: Int)
>> }
>> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
>> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
>> if case let .foo(x, y) {} // Fine?  Missing labels, but variable names
>> match labels
>>
>> They instead suggested the following behavior:
>>
>> enum Foo {
>>   case foo(x: Int, y: Int)
>> }
>> if case let .foo(x: x, y: y) {} // Fine!  Labels match and are in order
>> if case let .foo(x, y: y) {} // Bad!  Missing label 'x'
>> if case let .foo(x, y) {} // Fine?  Missing labels, and full name of
>> case is unambiguous
>>
>> Which, for example, would reject this:
>>
>> enum Foo {
>>   case foo(x: Int, y: Int) // Note: foo(x:y:)
>>   case foo(x: Int, z: Int) // Note: foo(x:z:)
>> }
>> if case let .foo(x, y) {} // Bad!  Are we matching foo(x:y:) or
>> foo(x:z:)?
>>
>> With this reasoning:
>>
>>  - While an associated-value label can indeed contribute to the readability 
>> of the pattern, the programmer can also choose a meaningful name to bind to 
>> the associated value.  This binding name can convey at least as much 
>> information as a label would.
>>
>>   - The risk of mis-labelling an associated value grows as the number of 
>> associated values grows.  However, very few cases carry a large number of 
>> associated values.  As the amount of information which the case should carry 
>> grows, it becomes more and more interesting to encapsulate that information 
>> in its own struct — among other reasons, to avoid the need to revise every 
>> matching case-pattern in the program.  Furthermore, when a case does carry a 
>> significant number of associated values, there is often a positional 
>> convention between them that lowers the risk of re-ordering: for example, 
>> the conventional left-then-right ordering of a binary search tree.  
>> Therefore this risk is somewhat over-stated, and of course the programmer 
>> should remain free to include labels for cases where they feel the risk is 
>> significant.
>>
>>   - It is likely that cases will continue to be predominantly distinguished 
>> by their base name alone.  Methods are often distinguished by argument 
>> labels because the base name identifies an entire class of operation with 
>> many possible variants.  In contrast, each case of an enum is a kind of 
>> data, and its name is conventionally more like the name of a property than 
>> the name of a method, and thus likely to be unique among all the cases.  
>> Even when cases are distinguished using only associated value labels, it 
>> simply means that the corresponding case-patterns must include those 

Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Chris Lattner via swift-evolution
On Sep 3, 2017, at 5:01 PM, Jonathan Hull  wrote:
>> On Sep 3, 2017, at 9:04 AM, Chris Lattner via swift-evolution wrote:
>>> On Sep 3, 2017, at 4:00 AM, David Hart wrote:
 Please don’t read too much into the beginAsync API.  It is merely a 
 strawman, and intended to be a low-level API that higher level 
 abstractions (like a decent futures API) can be built on top of.  I think 
 it is important to have some sort of primitive low-level API that is 
 independent of higher level abstractions like Futures.
 
 This is all a way of saying “yes, having something like you propose makes 
 sense” but that it should be part of the Futures API, which is outside the 
 scope of the async/await proposal.
>>> 
>>> But it would be nice for all high-level APIs that conform to a Awaitable 
>>> protocol to be used with await without having to reach for a get property 
>>> or something similar everytime.
>> 
>> The futures API that is outlined in the proposal is just an example, it 
>> isn’t a concrete pitch for a specific API.  There are a bunch of 
>> improvements that can (and should) be made to it, it is just that a futures 
>> API should be the subject of a follow-on proposal to the basic async/await 
>> mechanics.
> 
> Would it be possible to have the manifesto be a series of proposals then?  I 
> really think it is important for us to look at how all of these things fit 
> together.  I agree that async/await should come first, but looking at how 
> concrete things like Futures would work may help to inform the design of 
> async/await.  We should do the back-propagation in our design before anything 
> is locked in…

Sure, that would be great.  I don’t have time to write up a futures proposal 
myself, but I’d be happy to contribute editorial advice to someone (or some 
group) who wants to do so.

> The thing I would most like to see as a quick follow-on to async/await is the 
> ability to use the ‘async’ keyword to defer ‘await’. 

This keeps coming up, and is certainly possible (within the scope of the Swift 
grammar) but a decision like this should be driven by a specific cost/benefit 
tradeoff.  That tradeoff decision can only be made when the futures API has a 
specific proposal that people generally agree to.  

Because of this, I think that the Futures proposal should be a *library only* 
proposal on top of the async/await language stuff, and then mention something 
like “async foo()” as a possible future extension - the subject of its own 
proposal.

-Chris



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Gwendal Roué via swift-evolution

> On Sep 4, 2017 at 16:28, Wallacy via swift-evolution wrote:
> 
> Hello,
> 
> I have a little question about the actors.
> 
> On WWDC 2012 Session 712 one of the most important tips (for me at least) 
> was: Improve Performance with Reader-Writer Access
> 
> Basically:
> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
> • Use synchronous concurrent “reads”: dispatch_sync()
> • Use asynchronous serialized “writes”: dispatch_barrier_async()
> 
> [...]
> 
> With this will it be composed using actors? I see a lot of discussion about 
> using serial queues, and I also have not seen any mechanism similar to 
> dispatch_barrier_async being discussed here or in other threads.

I tend to believe that such a read/write optimization could at least be 
implemented using the "Intra-actor concurrency" described by Chris Lattner at 
https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency.

But you generally ask the question of reader vs. writer actor methods, which 
could be backed by dispatch_xxx/dispatch_barrier_xxx. I'm not sure it's as 
simple as mutating vs. non-mutating. For example, a non-mutating method can 
still cache the result of some expensive computation without breaking the 
non-mutating contract. Unless this cache is itself a read/write-safe actor, 
such a non-mutating method is not a real reader method.

That's a very interesting topic, Wallacy!

Gwendal



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Wallacy via swift-evolution
Hello,

I have a little question about the actors.

On WWDC 2012 Session 712 one of the most important tips (for me at least)
was: Improve Performance with Reader-Writer Access

Basically:
• Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
• Use synchronous concurrent “reads”: dispatch_sync()
• Use asynchronous serialized “writes”: dispatch_barrier_async()

Example:

// ...
_someManagerQueue = dispatch_queue_create("SomeManager",
                                          DISPATCH_QUEUE_CONCURRENT);
// ...


And then:

- (id)getSomeArrayItem:(NSUInteger)index {
    __block id importantObj = nil;
    dispatch_sync(_someManagerQueue, ^{
        importantObj = [_importantArray objectAtIndex:index];
    });
    return importantObj;
}

- (void)removeSomeArrayItem:(id)object {
    dispatch_barrier_async(_someManagerQueue, ^{
        [_importantArray removeObject:object];
    });
}

- (void)addSomeArrayItem:(id)object {
    dispatch_barrier_async(_someManagerQueue, ^{
        [_importantArray addObject:object];
    });
}


That way you ensure that whenever you read a piece of information (e.g. an
array), all the "changes" have been made or are "waiting". And every time you
write a piece of information, your program will not be blocked waiting for the
operation to be completed.

That way, if you use several threads, none will have to wait for another to get
any value unless one of them is "writing", which is the right thing to do.
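The Objective-C pattern above translates roughly to Swift like this (a sketch of the same WWDC technique, not anything from the actors proposal): concurrent synchronous reads, barrier-serialized asynchronous writes.

```swift
import Dispatch

final class SomeManager {
    // Concurrent queue: reads run in parallel, writes take a barrier.
    private let queue = DispatchQueue(label: "SomeManager", attributes: .concurrent)
    private var importantArray: [String] = []

    func item(at index: Int) -> String {
        // Synchronous concurrent read: waits for any pending barrier writes.
        return queue.sync { importantArray[index] }
    }

    func add(_ object: String) {
        // Asynchronous serialized write: caller is never blocked.
        queue.async(flags: .barrier) {
            self.importantArray.append(object)
        }
    }

    func remove(_ object: String) {
        queue.async(flags: .barrier) {
            if let i = self.importantArray.firstIndex(of: object) {
                self.importantArray.remove(at: i)
            }
        }
    }
}
```

Because the queue is FIFO, a `sync` read submitted after a barrier write cannot start until that write completes, which gives exactly the "all changes made or waiting" guarantee described above.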

With this will it be composed using actors? I see a lot of discussion about
using serial queues, and I also have not seen any mechanism similar to
dispatch_barrier_async being discussed here or in other threads.

On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution <
swift-evolution@swift.org> wrote:

> Hello,
>
> first off, I’m following this discussion with great interest, even though
> my background (simulation software on HPC) has a different focus than the
> “usual” paradigms Swift seeks to (primarily) address.
>
> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution <
> swift-evolution@swift.org> wrote:
> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit 
> wrote:
> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit 
> wrote:
> >>>
> >>> Is there a specific important use case for being able to target an
> actor to an existing queue?  Are you looking for advanced patterns where
> multiple actors (each providing disjoint mutable state) share an underlying
> queue? Would this be for performance reasons, for compatibility with
> existing code, or something else?
> >>
> >> Mostly for interaction with current designs where being on a given
> bottom serial queue gives you the locking context for resources naturally
> attached to it.
> >
> > Ok.  I don’t understand the use-case well enough to know how we should
> model this.  For example, is it important for an actor to be able to change
> its queue dynamically as it goes (something that sounds really scary to me)
> or can the “queue to use” be specified at actor initialization time?
>
> I’m confused, but that may just be me misunderstanding things again. I’d
> assume each actor has its own (serial) queue that is used to serialize its
> messages, so the queue above refers to the queue used to actually process
> the messages the actor receives, correct?
>
> Sometimes, it’d probably make sense (or even be required) to fix this to a
> certain queue (in the thread(-pool?) sense), but at other times it may just
> make sense to execute the messages in place by the sender if they don’t
> block, so no context switch is incurred.
>
> > One plausible way to model this is to say that it is a “multithreaded
> actor” of some sort, where the innards of the actor allow arbitrary number
> of client threads to call into it concurrently.  The onus would be on the
> implementor of the NIC or database to implement the proper synchronization
> on the mutable state within the actor.
> >>
> >> I think what you said made sense.
> >
> > Ok, I captured this in yet-another speculative section:
> >
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>
> This seems like an interesting extension (where the actor-internal serial
> queue is not used / bypassed).
>
>
> Daniel.


Re: [swift-evolution] [Concurrency] A slightly different perspective

2017-09-04 Thread Wallacy via swift-evolution
Thanks Jonathan,

As you may already know, I am a great advocate of this feature. I will not
repeat my explanations and examples that I gave in the other threads. But I
think you got the idea right. Even without any kind of optimization, I
believe there is a clear gain in this approach.

It is extremely naive to believe that people will not need basic parallelism
already in the first version. 100% of people will have to look for
third-party libraries, or create their own custom versions on top of
async/await, on day one. Batching two (or more) things to do in the same
function is a pretty common pattern.

func processImageData1a() async -> Image {
  let dataResource  = async loadWebResource("dataprofile.txt")
  let imageResource = async loadWebResource("imagedata.dat")

  // ... other stuff can go here to cover load latency...
  // compiler error if await is not present:
  let imageTmp    = await decodeImage(dataResource, imageResource)
  let imageResult = await dewarpAndCleanupImage(imageTmp)
  return imageResult
}


If this (or something like it) is not implemented, people will create
several versions to solve the same problem, which will only be settled later
(Swift 6?) because people want this, and we will be left with a lot of bad
code to maintain.

Since I do not think we should encourage building bad coding patterns to
solve simple problems (and I've seen a few very bad examples of this right
here), I think we should sort this out soon, even without the obvious
optimizations in the first version.

Futures/Promises are another topic! They involve a lot of other concepts, and
it would also be a shame to need Futures/Promises to solve this problem,
because this is something we can actually do using GCD alone, without
involving any other type or control mechanism. It can also help build better
Futures/Promises.



On Mon, Sep 4, 2017 at 02:37, Jonathan Hull via swift-evolution <
swift-evolution@swift.org> wrote:

> I think we need to consider both the original proposal on its own, and
> also together with other potential proposals (including deferring ‘async’
> and futures). We don’t need to have them fully spelled out, but I think the
> other proposals will help inform us where the sharp edges of await/async
> are.  That was my original point on this thread.
>
> I keep seeing “Let’s wait to consider X until after async/await” and I am
> saying “Let’s consider how X would affect the async/await proposal”.
> Better to figure out any design issues and back-propagation now than after
> we have baked things in.
>
> Thanks,
> Jon
>
> On Sep 3, 2017, at 10:16 PM, BJ Homer  wrote:
>
> Okay, that's an interesting proposal. I'm not qualified to judge how big
> of a win the compiler optimizations would be there; I'm a little skeptical,
> but I admit a lack of expertise there. It seems like it's not essential to
> the core 'async/await' proposal, though; it could be added later with no
> source breakage, and in the meantime we can accomplish most of the same
> behavior through an explicit Future type. I think it makes more sense to
> consider the base proposal before adding more complexity to it.
>
> -BJ
>
> On Sep 3, 2017, at 10:15 PM, Jonathan Hull  wrote:
>
>
> On Sep 3, 2017, at 7:35 PM, BJ Homer  wrote:
>
> Jonathan,
>
> You've mentioned the desire to have 'async' defer calling 'await', but I
> haven't seen a detailed design yet.
>
> Oh, we discussed it a while back on a few other threads.  I am happy to
> help write up a more formal/detailed design if there is enough interest...
>
> For example, is the following code valid?
>
>   let image1 = async fetchImage()
>   let image2 = async fetchImage()
>   let deferredThings = [image1, image2]
>
> If so, what is the type of 'deferredThings'? And how does it not count as
> 'using' the values?
>
>
> No this code is not valid.  You would need to ‘await’ both image 1 & 2
> before they could be put in an array or transferred to another variable.
> You could combine the ‘await’s though (similar to try):
>
> let image1 = async fetchImage()
> let image2 = async fetchImage()
> let deferredThings = await [image1, image2]
>
>
> Note: You can return something which is deferred from an async function
> without awaiting though...
>
>
> If the above code is not valid, how is this situation better than the
> suggested use of a Future type to allow concurrent async requests?
>
>   let future1 = Future { await fetchImage() }
>   let future2 = Future { await fetchImage() }
>   let deferredThings = [future1, future2]
>
> Note that in this example, 'deferredThings' has a concrete type, and we
> can inspect its values.
>
>
> It isn’t meant to be used instead of Futures (though you may not need to
> reach for them as often), it is a much lower-level construct which would be
> used as a building block for things like 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Daniel Vollmer via swift-evolution
Hello,

first off, I’m following this discussion with great interest, even though my 
background (simulation software on HPC) has a different focus than the “usual” 
paradigms Swift seeks to (primarily) address.

> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>  wrote:
>> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
>>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
>>> 
>>> Is there a specific important use case for being able to target an actor to 
>>> an existing queue?  Are you looking for advanced patterns where multiple 
>>> actors (each providing disjoint mutable state) share an underlying queue? 
>>> Would this be for performance reasons, for compatibility with existing 
>>> code, or something else?
>> 
>> Mostly for interaction with current designs where being on a given bottom 
>> serial queue gives you the locking context for resources naturally attached 
>> to it.
> 
> Ok.  I don’t understand the use-case well enough to know how we should model 
> this.  For example, is it important for an actor to be able to change its 
> queue dynamically as it goes (something that sounds really scary to me) or 
> can the “queue to use” be specified at actor initialization time?

I’m confused, but that may just be me misunderstanding things again. I’d assume 
each actor has its own (serial) queue that is used to serialize its messages, 
so the queue above refers to the queue used to actually process the messages 
the actor receives, correct?

Sometimes, it’d probably make sense (or even be required) to fix this to a 
certain queue (in the thread(-pool?) sense), but at other times it may just 
make sense to execute the messages in place by the sender if they don’t block, 
so no context switch is incurred.

> One plausible way to model this is to say that it is a “multithreaded actor” 
> of some sort, where the innards of the actor allow arbitrary number of client 
> threads to call into it concurrently.  The onus would be on the implementor 
> of the NIC or database to implement the proper synchronization on the mutable 
> state within the actor.
>> 
>> I think what you said made sense.
> 
> Ok, I captured this in yet-another speculative section:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency

This seems like an interesting extension (where the actor-internal serial queue 
is not used / bypassed).


Daniel.


Re: [swift-evolution] Contextualizing async coroutines

2017-09-04 Thread Daniel Vollmer via swift-evolution
Hello,

> On 31. Aug 2017, at 20:35, Joe Groff via swift-evolution 
>  wrote:
> 
> # `onResume` hooks
> 
> Relying on coroutine context alone still leaves responsibility wholly on 
> suspending APIs to pay attention to the coroutine context and schedule the 
> continuation correctly. You'd still have the expression problem when 
> coroutine-spawning APIs from one framework interact with suspending APIs from 
> another framework that doesn't understand the spawning framework's desired 
> scheduling policy. We could provide some defense against this by letting the 
> coroutine control its own resumption with an "onResume" hook, which would run 
> when a suspended continuation is invoked instead of immediately resuming the 
> coroutine. That would let the coroutine-aware dispatch_async example from 
> above do something like this, to ensure the continuation always ends up back 
> on the correct queue:

I do feel the “correct” scheduling should be done by the actual scheduler / 
run-loop system, so we’re not paying the context switch cost twice. Of course 
this means that the run-time / OS needs enough information to do this correctly.

To be honest, the enqueuing of messages to actors seems quite straightforward, 
but I’m having trouble envisioning the draining / processing of the messages.
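Joe's "onResume" hook could be sketched speculatively like this (all names invented — this is not API from the proposal): the runtime wraps a suspended continuation so that invoking it hops back to the coroutine's home queue instead of resuming on whatever thread completed the work.

```swift
import Dispatch

struct CoroutineContext {
    let homeQueue: DispatchQueue

    // Wrap a raw continuation: whoever completes the async work may call
    // the wrapper from any thread, and the coroutine still resumes on
    // `homeQueue`, keeping the scheduling policy with the spawning framework.
    func onResume<T>(_ continuation: @escaping (T) -> Void) -> (T) -> Void {
        return { value in
            self.homeQueue.async { continuation(value) }
        }
    }
}
```

Daniel's concern would then be that this hop is a second context switch on top of whatever the completing subsystem already paid, which is why he'd rather see the scheduler itself route the resumption.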

Daniel.