[swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Susan Cheng via swift-evolution
 > Hi all,
>
> As Ted mentioned in his email, it is great to finally kick off
discussions for what concurrency should look like in Swift. This will
surely be an epic multi-year journey, but it is more important to find the
right design than to get there fast.
>
> I’ve been advocating for a specific model involving async/await and
actors for many years now. Handwaving only goes so far, so some folks asked
me to write them down to make the discussion more helpful and concrete.
While I hope these ideas help push the discussion on concurrency forward,
this isn’t in any way meant to cut off other directions: in fact I hope it
helps give proponents of other designs a model to follow: a discussion
giving extensive rationale, combined with the long term story arc to show
that the features fit together.
>
> Anyway, here is the document, I hope it is useful, and I’d love to hear
comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
>
> -Chris
>
>
>

Hi Chris,

Does an actor guarantee that messages are always processed one at a time?
In other words, can we assume that multiple threads will never try to modify
the state at the same time?


P.S. I have implemented a similar idea before:

https://github.com/SusanDoggie/Doggie/blob/master/Sources/Doggie/Thread/Thread.swift
https://github.com/SusanDoggie/Doggie/blob/master/Sources/Doggie/SDTriggerNode.swift
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Andre via swift-evolution


> On 2017/08/19, at 12:41, Chris Lattner wrote:
> 
> On Aug 18, 2017, at 8:18 PM, Andre  wrote:
>> If this was to be a feature at some point, when do you foresee a possible 
>> inclusion for it?
>> Swift 5,or 6 timeframe?
> 
> It’s hard to predict.  IMO, ABI stability and concurrency are the most 
> important things the community faces in the short term, so you’re looking at 
> Swift 6 at the earliest, possibly 7 or 8…. 
Got it, well I definitely look forward to it then.

Thanks!

Andre

> -Chris
> 
> 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Thomas via swift-evolution

> On 19 Aug 2017, at 07:30, Brent Royal-Gordon via swift-evolution 
>  wrote:
> 
>> On Aug 18, 2017, at 12:35 PM, Chris Lattner wrote:
>> 
>>> (Also, I notice that a fire-and-forget message can be thought of as an 
>>> `async` method returning `Never`, even though the computation *does* 
>>> terminate eventually. I'm not sure how to handle that, though)
>> 
>> Yeah, I think that actor methods deserve a bit of magic:
>> 
>> - Their bodies should be implicitly async, so they can call async methods 
>> without blocking their current queue or have to use beginAsync.
>> - However, if they are void “fire and forget” messages, I think the caller 
>> side should *not* have to use await on them, since enqueuing the message 
>> will not block.
> 
> I think we need to be a little careful here—the mere fact that a message 
> returns `Void` doesn't mean the caller shouldn't wait until it's done to 
> continue. For instance:
> 
>   listActor.delete(at: index)             // Void, so it doesn't wait
>   let count = await listActor.getCount()  // But we want the count *after* the deletion!

In fact this will just work. Because both messages go through the actor's 
internal serial queue, the "get count" message will only be processed after the 
deletion. Therefore the "delete" message can return to the caller immediately 
(the only synchronous work is the dispatch call that enqueues it).
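
To make the ordering argument concrete, here is a minimal sketch that uses a 
plain serial DispatchQueue to stand in for the actor's internal queue 
(ListActor, delete(at:) and getCount(completion:) are illustrative stand-ins, 
not part of the proposal):

import Dispatch

final class ListActor {
    private let queue = DispatchQueue(label: "list-actor") // serial by default
    private var items = [1, 2, 3]

    // "Fire and forget": returns to the caller as soon as the message is enqueued.
    func delete(at index: Int) {
        queue.async {
            self.items.remove(at: index)   // @discardableResult, return value ignored
        }
    }

    // Stands in for `await getCount()`: the completion runs only after every
    // previously enqueued message has been processed.
    func getCount(completion: @escaping (Int) -> Void) {
        queue.async { completion(self.items.count) }
    }
}

let listActor = ListActor()
listActor.delete(at: 0)            // enqueued first
listActor.getCount { count in      // enqueued second, so it observes the deletion
    print(count)                   // prints 2
}
// (in a command-line program, keep the process alive, e.g. with dispatchMain())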

Thomas

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Thomas via swift-evolution

> On 18 Aug 2017, at 21:35, Chris Lattner via swift-evolution 
>  wrote:
> 
> Yeah, I think that actor methods deserve a bit of magic:
> 
> - Their bodies should be implicitly async, so they can call async methods 
> without blocking their current queue or have to use beginAsync.

I have a question here. If the body calls an async method "without blocking 
their current queue", does that mean the queue gets to process other messages 
even though the current message isn't finished yet? Or should the queue somehow 
be suspended so that new messages will not be processed?

Thomas

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Georgios Moschovitis via swift-evolution
I am wondering, am I the only one that *strongly* prefers `yield` over `await`?

Superficially, `await` seems like the standard term, but given the fact that 
the proposal is about coroutines, I think `yield` is actually the proper name. 
Also, subjectively, it sounds much better/elegant to me!

-g.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Tino Heth via swift-evolution

> On 17 Aug 2017, at 20:11, Haravikk via swift-evolution wrote:
> 
> For me the whole point of a basic protocol is that it forces me to implement 
> some requirements in order to conform; I can throw a bunch of protocols onto 
> a type and know that it won't compile until I've finished it, developers get 
> distracted, leave things unfinished to go back to later, make typos etc. etc. 
> To me declaring a conformance is a declaration of "my type will meet the 
> requirements for this make, sure I do it", not "please, please use some magic 
> to do this for me"; there needs to be a clear difference between the two.

My conclusion isn't as pessimistic as yours, but I share your objections: 
Mixing a normal feature (protocols) with compiler magic doesn't feel right to 
me — whether it's Equatable, Hashable, Codable or Error.
It's two different concepts with a shared name*, so I think even AutoEquatable 
wouldn't be the right solution, and something like #Equatable would be a much 
better indicator for what is happening.

Besides that specific concern, I can't fight the feeling that the evolution 
process doesn't work well for proposals like this:
It's a feature that many people just want to have as soon as possible, and 
concerns regarding the long-term effects are more or less washed away with 
eagerness.

- Tino

* for the same reason, I have big concerns whenever someone proposes to blur 
the line between tuples and arrays
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Haravikk via swift-evolution

> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de> wrote:
>> On 17 Aug 2017, at 20:11, Haravikk via swift-evolution <swift-evolution@swift.org> wrote:
>> For me the whole point of a basic protocol is that it forces me to implement 
>> some requirements in order to conform; I can throw a bunch of protocols onto 
>> a type and know that it won't compile until I've finished it, developers get 
>> distracted, leave things unfinished to go back to later, make typos etc. 
>> etc. To me declaring a conformance is a declaration of "my type will meet 
>> the requirements for this make, sure I do it", not "please, please use some 
>> magic to do this for me"; there needs to be a clear difference between the 
>> two.
> 
> My conclusion isn't as pessimistic as yours, but I share your objections: 
> Mixing a normal feature (protocols) with compiler magic doesn't feel right to 
> me — wether it's Equatable, Hashable, Codable or Error.
> It's two different concepts with a shared name*, so I think even 
> AutoEquatable wouldn't be the right solution, and something like #Equatable 
> would be a much better indicator for what is happening.
> 
> Besides that specific concern, I can't fight the feeling that the evolution 
> process doesn't work well for proposals like this:
> It's a feature that many people just want to have as soon as possible, and 
> concerns regarding the long-term effects are more or less washed away with 
> eagerness.
> 
> - Tino
> 
> * for the same reason, I have big concerns whenever someone proposes to blur 
> the line between tuples and arrays

Agreed. To be clear, though: in spite of my pessimism this is a feature that I 
do want, but I would rather not have it at all than have it implemented in a 
way that hides bugs and sets a horrible precedent for the future.

I realise I may seem to be overreacting, but I really do feel that strongly 
about what I fully believe is a mistake. I understand people's enthusiasm for 
the feature, I do; I hate boilerplate as much as the next developer, but as you 
say, it's not a reason to rush forward, especially when this is not something 
that can be easily changed later.

That's a big part of the problem; the decisions here are not just about 
trimming boilerplate for Equatable/Hashable, it's also about the potential 
overreach of every synthesised feature now and in the future as well.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


[swift-evolution] [Concurrency] Fixing race conditions in async/await example

2017-08-19 Thread Jakob Egger via swift-evolution
I've read the async/await proposal, and I'm thrilled by the possibilities. Here's 
what I consider the canonical example:
@IBAction func buttonDidClick(sender: AnyObject) {
  beginAsync {
    let image = await processImage()
    imageView.image = image
  }
}
This is exactly the kind of thing I will use async/await for!

But while this example looks extremely elegant, it would suffer from a number 
of problems in practice:

1. There is no guarantee that you are on the main thread after `await 
processImage()`
2. There is no way to cancel processing 
3. Race Condition: If you click the button a second time before 
`processImage()` is done, two copies will run simultaneously and you don't know 
which image will "win".

So I wondered: What would a more thorough example look like in practice? How 
would I fix all these issues?

After some consideration, I came up with the following minimal example that 
addresses all these issues:
class ImageProcessingTask {
  var cancelled = false
  func process() async -> Image? { … }
}
var currentTask: ImageProcessingTask?
@IBAction func buttonDidClick(sender: AnyObject) {
  currentTask?.cancelled = true
  let task = ImageProcessingTask()
  currentTask = task
  beginAsync {
    guard let image = await task.process() else { return }
    DispatchQueue.main.async {
      guard task.cancelled == false else { return }
      imageView.image = image
    }
  }
}
If my example isn't obvious, I've documented my thinking (and some 
alternatives) in a gist:
https://gist.github.com/jakob/22c9725caac5125c1273ece93cc2e1e7

Anyway, this more realistic code sample doesn't look nearly as nice any more, 
and I actually think this could be implemented nicer without async/await:

class ImageProcessingTask {
  var cancelled = false
  func process(completionQueue: DispatchQueue, completionHandler: (Image?) -> ()) { … }
}
@IBAction func buttonDidClick(sender: AnyObject) {
  currentTask?.cancelled = true
  let task = ImageProcessingTask()
  currentTask = task
  task.process(completionQueue: DispatchQueue.main) { (image) in
    guard let image = image else { return }
    guard task.cancelled == false else { return }
    imageView.image = image
  }
}

So I wonder: What's the point of async/await if it doesn't result in nicer code 
in practice? How can we make async/await more elegant when calling from 
non-async functions?


Jakob

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] Fixing race conditions in async/await example

2017-08-19 Thread Thomas via swift-evolution
Maybe this will be handled more gracefully via the actor model.

1. if you're calling from an actor, you'd be called back on its internal queue. 
If you're calling from the 'main actor' (or just the main thread), you'd be 
called back on the main queue
2. you would just add a cancel() method to the actor
3. the processImage() method would call suspendAsync() and store the 
continuation block in an array; once the result is available, it would call all 
the continuation blocks with the result (see the sketch below)

That would only work if we're able to send messages to the actor while previous 
messages to processImage() are pending. It's not yet clear to me whether actors 
can work that way.
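
A rough sketch of point 3, written with plain completion handlers standing in 
for the proposed suspendAsync (ImageProcessor and its members are illustrative 
names, not from the proposal; assume the real work is kicked off elsewhere and 
eventually calls finish(with:) on the queue):

import Dispatch

struct Image {}   // placeholder for the real image type

final class ImageProcessor {
    private let queue = DispatchQueue(label: "image-processor") // stands in for the actor's queue
    private var waiters: [(Image?) -> Void] = []                // stored "continuations"
    private var result: Image?
    private var cancelled = false

    // Callers "await" by handing over a continuation; it is stored until the result is ready.
    func processImage(completion: @escaping (Image?) -> Void) {
        queue.async {
            if self.cancelled { completion(nil); return }
            if let result = self.result { completion(result); return }
            self.waiters.append(completion)
            // ... start the real work once; it calls self.finish(with:) on `queue` when done ...
        }
    }

    func cancel() {
        queue.async {
            self.cancelled = true
            self.finish(with: nil)
        }
    }

    // Always runs on `queue`; hands the result to every stored continuation.
    private func finish(with image: Image?) {
        result = image
        let pending = waiters
        waiters.removeAll()
        pending.forEach { $0(image) }
    }
}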

Thomas

> On 19 Aug 2017, at 13:56, Jakob Egger via swift-evolution 
>  wrote:
> 
> I've read async/await proposal, and I'm thrilled by the possibilities. Here's 
> what I consider the canonical example:
> @IBAction func buttonDidClick(sender:AnyObject) {
>   beginAsync {
> let image = await processImage()
> imageView.image = image
>   }
> }
> This is exactly the kind of thing I will use async/await for!
> 
> But while this example looks extremely elegant, it would suffer from a number 
> of problems in practice:
> 
> 1. There is no guarantee that you are on the main thread after `await 
> processImage()`
> 2. There is no way to cancel processing 
> 3. Race Condition: If you click the button a second time before 
> `processImage()` is done, two copies will run simultaneously and you don't 
> know which image will "win".
> 
> So I wondered: What would a more thorough example look like in practice? How 
> would I fix all these issues?
> 
> After some consideration, I came up with the following minimal example that 
> addresses all these issues:
> class ImageProcessingTask {
>   var cancelled = false
>   func process() async -> Image? { … }
> }
> var currentTask: ImageProcessingTask?
> @IBAction func buttonDidClick(sender:AnyObject) {
>   currentTask?.cancelled = true
>   let task = ImageProcessingTask()
>   currentTask = task
>   beginAsync {
> guard let image = await task.process() else { return }
> DispatchQueue.main.async {
>   guard task.cancelled == false else { return }
>   imageView.image = image
> }
>   }
> }
> If my example isn't obvious, I've documented my thinking (and some 
> alternatives) in a gist:
> https://gist.github.com/jakob/22c9725caac5125c1273ece93cc2e1e7 
> 
> 
> Anyway, this more realistic code sample doesn't look nearly as nice any more, 
> and I actually think this could be implemented nicer without async/await:
> 
> class ImageProcessingTask {
>   var cancelled = false
>   func process(completionQueue: DispatchQueue, completionHandler: 
> (Image?)->()) { … }
> }
> @IBAction func buttonDidClick(sender:AnyObject) {
>   currentTask?.cancelled = true
>   let task = ImageProcessingTask()
>   currentTask = task
>   task.process(completionQueue: DispatchQueue.main) { (image) in
> guard let image = image else { return }
> guard task.cancelled == false else { return }
> imageView.image = image
>   }
> }
> 
> So I wonder: What's the point of async/await if it doesn't result in nicer 
> code in practice? How can we make async/await more elegant when calling from 
> non-async functions?
> 
> 
> Jakob
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Michel Fortin via swift-evolution
>> For instance, does Array<UIView> have value semantics?
> 
> By the commonly accepted definition, Array<UIView> does not provide value 
> semantics.
> 
>> You might be tempted to say that it does not because it contains class 
>> references, but in reality that depends on what you do with those UIViews.
> 
> An aspect of the type (“does it have value semantics or not”) should not 
> depend on the clients.  By your definition, every type has value semantics if 
> none of the mutating operations are called :-)

No, not mutating operations. Access to mutable memory shared by multiple 
"values" is what breaks value semantics. You can get into this situation using 
pointers, object references, or global variables. It's all the same thing in 
the end: shared memory that can mutate.

For demonstration's sake, here's a silly example of how you can give Array<Int> 
literally the same semantics as Array<UIView>:

// shared UIView instances in global memory
var instances: [UIView] = []

extension Array where Element == Int {

    // append a new integer to the array pointing to our UIView instance
    mutating func append(view: UIView) {
        self.append(instances.count)
        instances.append(view)
    }

    // access views pointed to by the integers in the array
    subscript(viewAt index: Int) -> UIView {
        get {
            return instances[self[index]]
        }
        set {
            self[index] = instances.count
            instances.append(newValue)
        }
    }
}

And now you need to worry about passing Array<Int> to other threads. ;-)

It does not really matter whether the array contains pointers or whether it 
contains indices into a global table: in both cases the same mutable memory is 
accessible through multiple copies of the array, and this is what breaks value 
semantics.

Types cannot enforce value semantics. It's the functions you choose to call that 
matter. This is especially important to realize in a language with extensions, 
where you can't restrict what functions get attached to a type.


>> If you treat the class references as opaque pointers (never dereferencing 
>> them), you preserve value semantics. You can count the elements, shuffle 
>> them, all without dereferencing the UIViews it contains. Value semantics 
>> only end when you dereference the class references. And even then, there are 
>> some exceptions.
> 
> I agree with you that the model could permit all values to be sent in actor 
> messages, but doing so would give up the principle advantages of mutable 
> state separation.  You’d have to do synchronization, you’d have the same bugs 
> that have always existed, etc.

What the compiler should aim at is enforcing useful rules when it comes to 
accessing shared mutable state.


-- 
Michel Fortin
https://michelf.ca

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] typed throws

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 18, 2017, at 9:19 PM, Xiaodi Wu  wrote:
> 
> 
> 
>> On Fri, Aug 18, 2017 at 8:11 PM, Matthew Johnson  
>> wrote:
>> 
>> 
>> Sent from my iPad
>> 
>>> On Aug 18, 2017, at 6:56 PM, Xiaodi Wu  wrote:
>>> 
>>> Joe Groff wrote:
>>> 
>>> An alternative approach that embraces the open nature of errors could be to 
>>> represent domains as independent protocols, and extend the error types that 
>>> are relevant to that domain to conform to the protocol. That way, you don't 
>>> obscure the structure of the underlying error value with wrappers. If you 
>>> expect to exhaustively handle all errors in a domain, well, you'd almost 
>>> certainly going to need to have a fallback case in your wrapper type for 
>>> miscellaneous errors, but you could represent that instead without wrapping 
>>> via a catch-all, and as?-casting to your domain protocol with a ??-default 
>>> for errors that don't conform to the protocol. For example, instead of 
>>> attempting something like this:
>>> 
>>> enum DatabaseError {
>>>   case queryError(QueryError)
>>>   case ioError(IOError)
>>>   case other(Error)
>>> 
>>>   var errorKind: String {
>>> switch self {
>>>   case .queryError(let q): return "query error: \(q.query)"
>>>   case .ioError(let i): return "io error: \(i.filename)"
>>>   case .other(let e): return "\(e)"
>>> }
>>>   }
>>> }
>>> 
>>> func queryDatabase(_ query: String) throws /*DatabaseError*/ -> Table
>>> 
>>> do {
>>>   queryDatabase("delete * from users")
>>> } catch let d as DatabaseError {
>>>   os_log(d.errorKind)
>>> } catch {
>>>   fatalError("unexpected non-database error")
>>> }
>>> 
>>> You could do this:
>>> 
>>> protocol DatabaseError {
>>>   var errorKind: String { get }
>>> }
>>> 
>>> extension QueryError: DatabaseError {
>>>   var errorKind: String { return "query error: \(query)" }
>>> }
>>> extension IOError: DatabaseError {
>>>   var errorKind: String { return "io error: \(filename)" }
>>> }
>>> 
>>> extension Error {
>>>   var databaseErrorKind: String {
>>> return (error as? DatabaseError)?.errorKind ?? "unexpected non-database 
>>> error"
>>>   }
>>> }
>>> 
>>> func queryDatabase(_ query: String) throws -> Table
>>> 
>>> do {
>>>   queryDatabase("delete * from users")
>>> } catch {
>>>   os_log(error.databaseErrorKind)
>>> }
>> 
>> This approach isn't sufficient for several reasons.  Notably, it requires 
>> the underlying errors to already have a distinct type for every category we 
>> wish to place them in.  If all network errors have the same type and I want 
>> to categorize them based on network availability, authentication, dropped 
>> connection, etc I am not able to do that.  
> 
> Sorry, how does the presence or absence of typed throws play into this?

It provides a convenient way to drive an error conversion mechanism during 
propagation, whether in a library function used to wrap the throwing expression 
or ideally with language support.  If I call a function that throws FooError 
and my function throws BarError and we have a way to go from FooError to 
BarError we can invoke that conversion without needing to catch and rethrow the 
wrapped error.  
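
A sketch of the kind of conversion being described, written against today's 
untyped throws (FooError and BarError are from the example above; the init(_:) 
conversion and the `converting` helper are illustrative, not a proposed API):

struct FooError: Error { let code: Int }

enum BarError: Error {
    case upstream(FooError)
    case other(Error)

    init(_ error: Error) {
        switch error {
        case let foo as FooError: self = .upstream(foo)
        default:                  self = .other(error)
        }
    }
}

// What typed throws plus a conversion rule could do implicitly; today the
// wrapping has to be written (or generated) by hand at every propagation point.
func converting<T>(_ body: () throws -> T) throws -> T {
    do { return try body() }
    catch { throw BarError(error) }
}

func fetchValue() throws -> Int { throw FooError(code: 7) }   // conceptually "throws FooError"

func loadModel() throws -> Int {                              // conceptually "throws BarError"
    return try converting { try fetchValue() }
}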

It also provides convenient documentation of the categorization along with a 
straightforward way to match the cases (with code completion as Chris pointed 
out).  IMO, making this information immediately clear and with easy matching at 
call sites is crucial to improving how people handle errors in practice.  

Error handling is an afterthought all too often.  The value of making it 
immediately clear how to match important categories of errors should not be 
underestimated.  I really believe language support of some kind is warranted and 
would have an impact on the quality of software.  Maybe types aren't the right 
solution, but we do need one.

Deciding what categories are important is obviously subjective, but I do 
believe that libraries focused on a specific domain can often make reasonable 
guesses that are pretty close in the majority of use cases.  This is especially 
true for internal libraries where part of the purpose of the library may be to 
establish conventions for the app that are intended to be used (almost) 
everywhere.

>  
>> The kind of categorization I want to be able to do requires a custom 
>> algorithm.  The specific algorithm is used to categorize errors depends on 
>> the dynamic context (i.e. the function that is propagating it).  The way I 
>> usually think about this categorization is as a conversion initializer as I 
>> showed in the example, but it certainly wouldn't need to be accomplished 
>> that way.  The most important thing IMO is the ability to categorize during 
>> error propagation and make information about that categorization easy for 
>> callers to discover.
>> 
>> The output of the algorithm could use various mechanisms for categorization 
>> - an enum is one mechanism, distinct types conforming to appropriate 
>> categorization protocols i

Re: [swift-evolution] typed throws

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 18, 2017, at 11:24 PM, John McCall  wrote:
> 
> 
>>> On Aug 18, 2017, at 11:43 PM, Mark Lilback  wrote:
>>> 
>>> 
>>> On Aug 18, 2017, at 2:27 AM, John McCall via swift-evolution 
>>>  wrote:
>>> 
>>> Even for non-public code.  The only practical merit of typed throws I have 
>>> ever seen someone demonstrate is that it would let them use contextual 
>>> lookup in a throw or catch.  People always say "I'll be able to 
>>> exhaustively switch over my errors", and then I ask them to show me where 
>>> they want to do that, and they show me something that just logs the error, 
>>> which of course does not require typed throws.  Every.  Single.  Time.
>> 
>> We're doing it in the project I'm working on. Sure, there are some places 
>> where we just log the error. But the vast majority of the time, we're 
>> handling them. Maybe that's because we're using reactive programming and 
>> errors bubble to the top, so there's no need to write that many error 
>> handlers. And if I am just logging, either the error doesn't really matter 
>> or there is a TODO label reminding me to fix it at some point.
> 
> I'm not saying people only log errors instead of handling them in some more 
> reasonable way.  I'm saying that logging functions are the only place I've 
> ever seen someone switch over an entire error type.
> 
> I keep bringing exhaustive switches up because, as soon as you have a default 
> case, it seems to me that you haven't really lost anything vs. starting from 
> an opaque type like Error.

If the bar for typed errors is going to be exhaustive handling without an 
"other" / "unknown" then I doubt we will be able to meet it.  That is possible 
and useful in some kinds of code but will certainly continue to be the 
exception.  

On the other hand, if the bar is more generally whether typed errors can 
improve error handling in practice to a sufficient degree to justify the 
feature I think there is a good chance that they can, given the right design.  

Currently good error handling often requires a lot of time reading 
documentation, and often a lot of time trying to *find* the right 
documentation.  All too often that documentation doesn't even exist or is 
spotty and out of date.  There has been more than one occasion where the only 
way to get the information I needed was to read the source code of a dependency 
(thankfully they were open source).  A language is obviously not expected to 
solve the problem of poor documentation, but it can and should help surface 
more information regardless of the state of documentation.

Once you have the necessary information it is often necessary to write tedious 
error analysis code to categorize the error appropriately for the purpose of 
recovery.

I believe these problems are a significant driver behind the sad state of error 
handling in many (probably most) apps.  If a library author believes they have 
sufficient information about usage to categorize errors in a way that will be 
useful in practice for the majority of their users the language should support 
the library author in doing this.  Again, this is specifically *not* about 
providing an exhaustive list of every possible kind of error that might occur. 
It is about categorizing errors based on anticipated recovery strategy (while 
still retaining the underlying error information for cases where the catch site 
requires it).
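
For concreteness, one shape such a category-plus-underlying-error type can take 
(a minimal sketch of the pattern, not a proposed API; the Rc2Error quoted below 
is a real-world variant):

/// Categories chosen by anticipated recovery strategy, not by origin.
enum NetworkFailure: Error {
    case offline(underlying: Error)         // wait for connectivity, then retry
    case unauthenticated(underlying: Error) // re-authenticate, then retry
    case transient(underlying: Error)       // retry with backoff
    case permanent(underlying: Error)       // surface to the user

    /// The original error is retained for catch sites that need the details.
    var underlying: Error {
        switch self {
        case .offline(let e), .unauthenticated(let e),
             .transient(let e), .permanent(let e):
            return e
        }
    }
}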

Types seem like a convenient way to accomplish the goal of categorization.  
They are also already in use by at least some of us, but unfortunately the type 
information is currently discarded by the language.  This is unfortunate for 
callers and can also lead to bugs where an error that should have been 
categorized leaks out because it wasn't wrapped as intended.

> 
>>> On Aug 18, 2017, at 3:11 PM, Matthew Johnson via swift-evolution 
>>>  wrote:
>>> 
>>> The primary goal for me personally is to factor out and centralize code 
>>> that categorizes an error, allowing catch sites to focus on implementing 
>>> recovery instead of figuring out what went wrong.  Here’s some concrete 
>>> sample code building on the example scenario above:
>> 
>> I'm using a similar approach. Here is some stripped down code:
>> 
>> //error object used throughout project
>> public struct Rc2Error: LocalizedError, CustomStringConvertible, 
>> CustomDebugStringConvertible {
>>/// basic categories of errors
>>public enum Rc2ErrorType: String, Error {
>>/// a requested object was not found
>>case noSuchElement
>>/// a requested operation is already in progress
>>case alreadyInProgress
>>/// problem parsing json, Freddy error is nested
>>case invalidJson
>>/// nestedError will be the NSError 
>>case cocoa 
>>/// nested error is related to the file system 
>>case file 
>>/// a wrapped error from a websocket 
>>case websocket 
>>/// a generic network error 
>>c

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

On Aug 19, 2017, at 12:29 AM, Brent Royal-Gordon via swift-evolution 
 wrote:

>> On Aug 18, 2017, at 12:35 PM, Chris Lattner  wrote:
>> 
>>> (Also, I notice that a fire-and-forget message can be thought of as an 
>>> `async` method returning `Never`, even though the computation *does* 
>>> terminate eventually. I'm not sure how to handle that, though)
>> 
>> Yeah, I think that actor methods deserve a bit of magic:
>> 
>> - Their bodies should be implicitly async, so they can call async methods 
>> without blocking their current queue or have to use beginAsync.
>> - However, if they are void “fire and forget” messages, I think the caller 
>> side should *not* have to use await on them, since enqueuing the message 
>> will not block.
> 
> I think we need to be a little careful here—the mere fact that a message 
> returns `Void` doesn't mean the caller shouldn't wait until it's done to 
> continue. For instance:
> 
>    listActor.delete(at: index)             // Void, so it doesn't wait
>    let count = await listActor.getCount()  // But we want the count *after* the deletion!
> 
> Perhaps we should depend on the caller to use a future (or a `beginAsync(_:)` 
> call) when they want to fire-and-forget? And yet sometimes a message truly 
> *can't* tell you when it's finished, and we don't want APIs to over-promise 
> on when they tell you they're done. I don't know.
> 
>> I agree.  That is one reason that I think it is important for it to have a 
>> (non-defaulted) protocol requirement.  Requiring someone to implement some 
>> code is a good way to get them to think about the operation… at least a 
>> little bit.
> 
> I wondered if that might have been your reasoning.
> 
>> That said, the design does not try to *guarantee* memory safety, so there 
>> will always be an opportunity for error.
> 
> True, but I think we could mitigate that by giving this protocol a relatively 
> narrow purpose. If we eventually build three different features on 
> `ValueSemantical`, we don't want all three of those features to break when 
> someone abuses the protocol to gain access to actors.
> 
>>> I also worry that the type behavior of a protocol is a bad fit for 
>>> `ValueSemantical`. Retroactive conformance to `ValueSemantical` is almost 
>>> certain to be an unprincipled hack; subclasses can very easily lose the 
>>> value-semantic behavior of their superclasses, but almost certainly can't 
>>> have value semantics unless their superclasses do. And yet having 
>>> `ValueSemantical` conformance somehow be uninherited would destroy Liskov 
>>> substitutability.
>> 
>> Indeed.  See NSArray vs NSMutableArray.
>> 
>> OTOH, I tend to think that retroactive conformance is really a good thing, 
>> particularly in the transition period where you’d be dealing with other 
>> people’s packages who haven’t adopted the model.  You may be adopting it for 
>> their structs afterall.
>> 
>> An alternate approach would be to just say “no, you can’t do that.  If you 
>> want to work around someone else’s problem, define a wrapper struct and mark 
>> it as ValueSemantical”.  That design could also work.
> 
> Yeah, I think wrapper structs are a workable alternative to retroactive 
> conformance.
> 
> What I basically envision (if we want to go with a general 
> `ValueSemantical`-type solution) is that, rather than being a protocol, we 
> would have a `value` keyword that went before the `enum`, `struct`, `class`, 
> or `protocol` keyword. (This is somewhat similar to the proposed `moveonly` 
> keyword.) It would not be valid before `extension`, except perhaps on a 
> conditional extension that only applied when a generic or associated type was 
> `value`, so retroactive conformance wouldn't really be possible. You could 
> also use `value` in a generic constraint list just as you can use `class` 
> there.

A modifier on the type feels like the right approach to specifying value 
semantics to me.  Regardless of which approach we take, it feels like something 
that needs to be implicit for structs and enums where value semantics is 
trivially provable by way of transitivity.  When that is not the case we could 
require an explicit `value` or `nonvalue` annotation (specific keywords subject 
to bikeshedding of course).

> 
> I'm not totally sure how to reconcile this with mutable subclasses, but I 
> have a very vague sense it might be possible if `value` required some kind of 
> *non*-inheritable initializer, and passing to a `value`-constrained parameter 
> implicitly passed the value through that initializer. That is, if you had:
> 
>// As imported--in reality this would be an NS_SWIFT_VALUE_TYPE annotation 
> on the Objective-C definition
>value class NSArray: NSObject {
>init(_ array: NSArray) { self = array.copy() as! NSArray }
>}
> 
> Then Swift would implicitly add some code to an actor method like this:
> 
>actor Foo {
>actor func bar(_ array: NSArray) {
>let array =

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Karim Nassar via swift-evolution
This looks fantastic. Can’t wait (heh) for async/await to land, and the Actors 
pattern looks really compelling.

One thought that occurred to me reading through the section of the 
"async/await" proposal on whether async implies throws:

If ‘async’ implies ‘throws’ and therefore ‘await’ implies ‘try’, and we want to 
suppress the catch block with ?/!, does that mean we do it on the ‘await’?

guard let foo = await? getAFoo() else {  …  }

This looks a little odd to me, and it’s not extremely clear what is happening. 
Under what conditions will we get a nil instead of a Foo? Maybe it’s just that 
the syntax is new and I’ll get used to it.

But it gets even more complicated if we have:

func getAFoo() async -> Foo {
   …
}

func getABar() async(nonthrowing) -> Bar {
   …
}

Now, we have an odd situation where the ‘await’ keyword may sometimes accept 
?/! but in other cases may not (or it has no meaning):

guard 
   let foo = await? getAFoo(),
   let bar = await? getABar() // Is this an error?? If not, what does it mean?
   else {  …  }


Since this edge of throws/try wasn’t explicitly covered in the write-up (or did 
I miss it?), I was wondering about your thoughts.

—Karim


> Date: Thu, 17 Aug 2017 15:24:14 -0700
> From: Chris Lattner 
> To: swift-evolution 
> Subject: [swift-evolution] [Concurrency] async/await + actors
> 
> Hi all,
> 
> As Ted mentioned in his email, it is great to finally kick off discussions 
> for what concurrency should look like in Swift.  This will surely be an epic 
> multi-year journey, but it is more important to find the right design than to 
> get there fast.
> 
> I’ve been advocating for a specific model involving async/await and actors 
> for many years now.  Handwaving only goes so far, so some folks asked me to 
> write them down to make the discussion more helpful and concrete.  While I 
> hope these ideas help push the discussion on concurrency forward, this isn’t 
> in any way meant to cut off other directions: in fact I hope it helps give 
> proponents of other designs a model to follow: a discussion giving extensive 
> rationale, combined with the long term story arc to show that the features 
> fit together.
> 
> Anyway, here is the document, I hope it is useful, and I’d love to hear 
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
> 
> -Chris



___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Georgios Moschovitis via swift-evolution
Also, notice the consistency:

keyword: throw, return type marker: throws   (‘monadic’ Result)
keyword: yield, return type marker: yields   (‘monadic’ Future)

-g.

> On 19 Aug 2017, at 1:23 PM, Georgios Moschovitis via swift-evolution 
>  wrote:
> 
> I am wondering, am I the only one that *strongly* prefers `yield` over 
> `await`?
> 
> Superficially, `await` seems like the standard term, but given the fact that 
> the proposal is about coroutines, I think `yield` is actually the proper 
> name. Also, subjectively, it sounds much better/elegant to me!
> 
> -g.
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 8:16 AM, Michel Fortin via swift-evolution 
>  wrote:
> 
>>> For instance, does Array<UIView> have value semantics?
>> 
>> By the commonly accepted definition, Array<UIView> does not provide value 
>> semantics.
>> 
>>> You might be tempted to say that it does not because it contains class 
>>> references, but in reality that depends on what you do with those UIViews.
>> 
>> An aspect of the type (“does it have value semantics or not”) should not 
>> depend on the clients.  By your definition, every type has value semantics 
>> if none of the mutating operations are called :-)
> 
> No, not mutating operations. Access to mutable memory shared by multiple 
> "values" is what breaks value semantics. You can get into this situation 
> using pointers, object references, or global variables. It's all the same 
> thing in the end: shared memory that can mutate.
> 
> For demonstration's sake, here's a silly example of how you can give 
> Array<Int> literally the same semantics as Array<UIView>:
> 
>   // shared UIView instances in global memory
>   var instances: [UIView] = []
> 
>   extension Array where Element == Int {
> 
>   // append a new integer to the array pointing to our UIView 
> instance
>   func append(view: UIView) {
>   self.append(instances.count)
>   instances.append(newValue)
>   }
> 
>   // access views pointed to by the integers in the array
>   subscript(viewAt index: Int) -> UIView {
>   get {
>   return instances[self[index]]
>   }
>   set {
>   self[index] = instances.count
>   instances.append(newValue)
>   }
>   }
>   }
> 
> And now you need to worry about passing Array<Int> to other threads. ;-)
> 
> It does not really matter whether the array contains pointers or wether it 
> contains indices into a global table: in both cases access to the same 
> mutable memory is accessible through multiple copies of an array, and this is 
> what breaks value semantics.
> 
> Types cannot enforce value semantics. Its the functions you choose to call 
> that matters. This is especially important to realize in a language with 
> extensions where you can't restrict what functions gets attached to a type.

This gets deeper into the territory of the conversation Dave A and I had a 
while ago.  I think this conflates value semantics with pure functions, which I 
think is a mistake.  

I agree that, if you assume away reference counting, a function that takes 
Array<UIView> but never dereferences the pointers can still be a pure function. 
However, I disagree that Array<UIView> has value semantics.

The relationship of value semantics to purity is that value semantics can be 
defined in terms of the purity of the "salient operations" of the type - those 
which represent the meaning of the value represented by the type.  The purity 
of these operations is what gives the value independence from copies in terms 
of its meaning.  If somebody chooses to add a new impure operation in an 
extension of a type with value semantics it does not mean that the type itself 
no longer has value semantics.  The operation in the extension is not "salient".

This still begs the question: what operations are "salient"?  I think everyone 
can agree that those used in the definition of equality absolutely must be 
included.  If two values don't compare equal they clearly do not have the same 
meaning.  Thread safety is also usually implied for practical reasons as is the 
case in Chris's manifesto.  These properties are generally considered necessary 
for value semantics.

While these conditions are *necessary* for value semantics I do not believe 
they are *sufficient* for value semantics.  Independence of the value is also 
required.  When a reference type defines equality in terms of object identity 
copies of the reference are not truly independent.  

This is especially true in a language like Swift where dereference is implicit. 
 I argue that when equality is defined in terms of object identity copies of 
the reference are *not* independent.  The meaning of the reference is 
inherently tied up with the resource it references.  The resource has to be 
considered "salient" for the independence to be a useful property.  On the 
other hand, if all you really care about is the identity and not the resource, 
ObjectIdentifier is available and does have value semantics.  There is a very 
good reason this type exists.
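
For illustration, a small self-contained example of that distinction (my 
example, not from the thread):

final class Box { var value = 0 }

let a = Box()
let b = a                        // same object, two references

// Copies of the reference are not independent: mutation through one copy
// is visible through the other.
b.value = 42
print(a.value)                   // 42

// ObjectIdentifier captures only the identity, so its copies are independent
// values in the ordinary sense.
let idA = ObjectIdentifier(a)
let idB = ObjectIdentifier(b)
print(idA == idB)                // true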

I'm happy to see this topic emerging again and looking forward to seeing value 
semantics and pure functions eventually receive language support.  There are a 
lot of subtleties involved.  Working through them and clearly defining what 
they mean in Swift is really important.

> 
> 
>>> If you treat the class references as opaque pointers (nev

Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
Added those methods to the implementation and updated the proposal document.

On Fri, Aug 18, 2017 at 11:42 PM, Andrew Trick  wrote:

>
> On Aug 18, 2017, at 5:36 PM, Taylor Swift  wrote:
>
> Should the immutable buffer pointer types also get deallocate()?
>
>
> Both UnsafePointer and UnsafeBufferPointer should get deallocate. The Raw
> API already has those methods.
>
> -Andy
>
> On Fri, Aug 18, 2017 at 7:55 PM, Andrew Trick  wrote:
>
>>
>> On Aug 15, 2017, at 9:47 PM, Taylor Swift via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>> Implementation is here: https://github.com/apple/swift/pull/11464
>>
>> On Sat, Aug 12, 2017 at 8:23 PM, Taylor Swift 
>> wrote:
>>
>>> I’ve revised the proposal based on what I learned from trying to
>>> implement these changes. I think it’s worth tacking the existing methods
>>> that take Sequences at the same time as this actually makes the design
>>> a bit simpler.
>>> 
>>>
>>> *The previous version
>>>  of this
>>> document ignored the generic initialization methods on
>>> UnsafeMutableBufferPointer and UnsafeMutableRawBufferPointer, leaving them
>>> to be overhauled at a later date, in a separate proposal. Instead, this
>>> version of the proposal leverages those existing methods to inform a more
>>> compact API design which has less surface area, and is more future-proof
>>> since it obviates the need to design and add another (redundant) set of
>>> protocol-oriented pointer APIs later.*
>>>
>>> On Tue, Aug 8, 2017 at 12:52 PM, Taylor Swift 
>>> wrote:
>>>
 Since Swift 5 just got opened up for proposals, SE-184 Improved
 Pointers is ready for community review, and I encourage everyone to look it
 over and provide feedback. Thank you!
 


>>
>> Would you mind adding a deallocate method to (nonmutable)
>> UnsafePointer/UnsafeBufferPointer to take care of
>> [SR-3309](https://bugs.swift.org/browse/SR-3309)?
>>
>> There’s simply nothing in the memory model that requires mutable memory
>> for deallocation.
>>
>> It fits right in with this proposal and hardly seems worth a separate one.
>>
>> -Andy
>>
>>
>
>
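
For readers following along, here is a minimal sketch of the status-quo friction 
behind SR-3309 that the quoted request addresses, spelled with current pointer 
API names (the Swift 4-era spellings differed slightly):

let storage = UnsafeMutablePointer<Int>.allocate(capacity: 4)
storage.initialize(repeating: 0, count: 4)

// Hand out an immutable view of the memory.
let view = UnsafePointer(storage)

// ... readers use `view` ...

// To free through the immutable pointer, the owner must first convert back to
// a mutable pointer, even though deallocation never writes to the memory.
let mutableAgain = UnsafeMutablePointer(mutating: view)
mutableAgain.deinitialize(count: 4)
mutableAgain.deallocate()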
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] typed throws

2017-08-19 Thread Xiaodi Wu via swift-evolution
On Sat, Aug 19, 2017 at 08:29 Matthew Johnson 
wrote:

>
>
> Sent from my iPad
>
> On Aug 18, 2017, at 9:19 PM, Xiaodi Wu  wrote:
>
>
>
> On Fri, Aug 18, 2017 at 8:11 PM, Matthew Johnson 
> wrote:
>
>>
>>
>> Sent from my iPad
>>
>> On Aug 18, 2017, at 6:56 PM, Xiaodi Wu  wrote:
>>
>> Joe Groff wrote:
>>
>> An alternative approach that embraces the open nature of errors could be
>> to represent domains as independent protocols, and extend the error types
>> that are relevant to that domain to conform to the protocol. That way, you
>> don't obscure the structure of the underlying error value with wrappers. If
>> you expect to exhaustively handle all errors in a domain, well, you'd
>> almost certainly going to need to have a fallback case in your wrapper type
>> for miscellaneous errors, but you could represent that instead without
>> wrapping via a catch-all, and as?-casting to your domain protocol with a
>> ??-default for errors that don't conform to the protocol. For example,
>> instead of attempting something like this:
>>
>> enum DatabaseError {
>>   case queryError(QueryError)
>>   case ioError(IOError)
>>   case other(Error)
>>
>>   var errorKind: String {
>> switch self {
>>   case .queryError(let q): return "query error: \(q.query)"
>>   case .ioError(let i): return "io error: \(i.filename)"
>>   case .other(let e): return "\(e)"
>> }
>>   }
>> }
>>
>> func queryDatabase(_ query: String) throws /*DatabaseError*/ -> Table
>>
>> do {
>>   queryDatabase("delete * from users")
>> } catch let d as DatabaseError {
>>   os_log(d.errorKind)
>> } catch {
>>   fatalError("unexpected non-database error")
>> }
>>
>> You could do this:
>>
>> protocol DatabaseError {
>>   var errorKind: String { get }
>> }
>>
>> extension QueryError: DatabaseError {
>>   var errorKind: String { return "query error: \(query)" }
>> }
>> extension IOError: DatabaseError {
>>   var errorKind: String { return "io error: \(filename)" }
>> }
>>
>> extension Error {
>>   var databaseErrorKind: String {
>> return (error as? DatabaseError)?.errorKind ?? "unexpected
>> non-database error"
>>   }
>> }
>>
>> func queryDatabase(_ query: String) throws -> Table
>>
>> do {
>>   queryDatabase("delete * from users")
>> } catch {
>>   os_log(error.databaseErrorKind)
>> }
>>
>>
>> This approach isn't sufficient for several reasons.  Notably, it requires
>> the underlying errors to already have a distinct type for every category we
>> wish to place them in.  If all network errors have the same type and I want
>> to categorize them based on network availability, authentication, dropped
>> connection, etc I am not able to do that.
>>
>
> Sorry, how does the presence or absence of typed throws play into this?
>
>
> It provides a convenient way to drive an error conversion mechanism during
> propagation, whether in a library function used to wrap the throwing
> expression or ideally with language support.  If I call a function that
> throws FooError and my function throws BarError and we have a way to go
> from FooError to BarError we can invoke that conversion without needing to
> catch and rethrow the wrapped error.
>

But isn't that an argument *against* typed errors? You need this
language-level support to automatically convert FooErrors to BarErrors
*because* you've restricted yourself to throwing BarErrors and the function
you call is restricted to throwing FooErrors. Currently, without typed
errors, there is no need to convert a FooError to a BarError.

As mentioned above, it's difficult even internally to design a single
ontology of errors that works throughout a library, so compiler support for
typed errors would be tantamount to a compiler-enforced facility that
pervasively requires this laborious classification and re-classification of
errors whenever a function rethrows, much of which may be ultimately 
unnecessary. In other words, if you are a library vendor and wrap every
FooError from an upstream dependency into a BarError, your user is still
likely to have their own classification of errors and decide to handle
different groups of BarError cases differently anyway, so what was the
point of your laborious conversion of FooErrors to BarErrors?

It also provides convenient documentation of the categorization along with
> a straightforward way to match the cases (with code completion as Chris
> pointed out).  IMO, making this information immediately clear and with easy
> matching at call sites is crucial to improving how people handle errors in
> practice.
>

Again, I don't see documentation as a sufficient argument for this feature;
there is no reason why the Swift compiler could not extract comprehensive
information about what errors are thrown at compile time without typed
errors--and with more granularity than can be documented via types (since
only specific enum cases may ever be thrown in a particular function).

Error handling is an afterthought all too often.  The value of making it
> immediately clear how t

Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Xiaodi Wu via swift-evolution
On Sat, Aug 19, 2017 at 06:07 Haravikk via swift-evolution <
swift-evolution@swift.org> wrote:

>
> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de> wrote:
>
> On 17 Aug 2017, at 20:11, Haravikk via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> For me the whole point of a basic protocol is that it forces me to
> implement some requirements in order to conform; I can throw a bunch of
> protocols onto a type and know that it won't compile until I've finished
> it, developers get distracted, leave things unfinished to go back to later,
> make typos etc. etc. To me declaring a conformance is a declaration of "my
> type will meet the requirements for this make, sure I do it", not "please,
> please use some magic to do this for me"; there needs to be a clear
> difference between the two.
>
>
> My conclusion isn't as pessimistic as yours, but I share your objections:
> Mixing a normal feature (protocols) with compiler magic doesn't feel right
> to me — wether it's Equatable, Hashable, Codable or Error.
> It's two different concepts with a shared name*, so I think even
> AutoEquatable wouldn't be the right solution, and something like #Equatable
> would be a much better indicator for what is happening.
>
> Besides that specific concern, I can't fight the feeling that the
> evolution process doesn't work well for proposals like this:
> It's a feature that many people just want to have as soon as possible, and
> concerns regarding the long-term effects are more or less washed away with
> eagerness.
>
> - Tino
>
> * for the same reason, I have big concerns whenever someone proposes to
> blur the line between tuples and arrays
>
>
> Agreed. To be clear though; in spite of my pessimism this *is* a feature
> that I *do* want, but I would rather not have it at all than have it
> implemented in a way that hides bugs and sets a horrible precedent for the
> future.
>

This was already touched upon during review, but to reiterate, the analogy
to default protocol implementations is meant specifically to address this
point about "hiding bugs." Yes, this feature cannot currently be
implemented as a default protocol implementation without magic; with better
reflection facilities there's a good chance that one day it might be, but
that's not the reason why it's being compared to default protocol
implementations. The reason for the comparison is that this feature only
"hides bugs" like a default protocol implementation "hides bugs" (in the
I-conformed-my-type-and-forgot-to-override-the-default-and-the-compiler-won't-remind-me-anymore
sense of "hiding bugs"), and the addition of default protocol
implementations, unless I'm mistaken, isn't even considered an API change
that requires Swift Evolution review.

Given Swift's emphasis on progressive disclosure, I'm fairly confident that
once reflection facilities and/or code-generation facilities improve, many
boilerplate-y protocol requirements will be given default implementations
where they cannot be written today. With every advance in expressiveness,
more protocol requirements that cannot currently have a default
implementation will naturally acquire them. Since the degree to which the
compiler will cease to give errors about non-implementation is directly in
proportion to the boilerplate reduced, it's not a defect but a feature that
these compiler errors go away. At the moment, it is a great idea to enable
some of these improvements for specific common use cases before the general
facilites for reflection and/or code-generation are improved in later
versions of Swift, since the user experience would be expected to remain
the same once those full facilities arrive.

I realise I may seem to be overreacting, but I really do feel that strongly
> about what I fully believe is a mistake. I understand people's enthusiasm
> for the feature, I do; I hate boilerplate as much as the next developer,
> but as you say, it's not a reason to rush forward, especially when this is
> not something that can be easily changed later.
>
> That's a big part of the problem; the decisions here are not just about
> trimming boilerplate for Equatable/Hashable, it's also about the potential
> overreach of every synthesised feature now and in the future as well.
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Goffredo Marocchi via swift-evolution
We can override the protocol default implementation in the extension, but the 
issue I see with default implementations in Swift is that if I pass an object 
created this way around in a type-erased container (Any constrained to Protocol1, 
like it was common for many to pass id around in the Objective-C days, a good 
practice IMHO), then my override would not be called; the default implementation 
will be used instead. I would be far more comfortable with this “magic” being 
provided for free if default implementations were dynamically dispatched.
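
A minimal sketch of the dispatch behaviour being described (Greeter and Loud are 
illustrative names): through a protocol existential, a defaulted *requirement* 
that the conforming type overrides is dynamically dispatched via the witness 
table, while an extension-only method is statically dispatched to the default.

protocol Greeter {
    func greet() -> String                 // a requirement, defaulted below
}

extension Greeter {
    func greet() -> String { return "default greet" }
    func wave() -> String { return "default wave" }   // extension-only: NOT a requirement
}

struct Loud: Greeter {
    func greet() -> String { return "LOUD GREET" }
    func wave() -> String { return "LOUD WAVE" }
}

let erased: Greeter = Loud()               // type-erased, as in the scenario above
print(erased.greet())                      // "LOUD GREET"  — requirement: dynamic dispatch
print(erased.wave())                       // "default wave" — extension-only: static dispatch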

Sent from my iPhone

> On 19 Aug 2017, at 19:06, Xiaodi Wu via swift-evolution 
>  wrote:
> 
> 
>> On Sat, Aug 19, 2017 at 06:07 Haravikk via swift-evolution 
>>  wrote:
>> 
 On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de> wrote:
 On 17 Aug 2017, at 20:11, Haravikk via swift-evolution wrote:
 For me the whole point of a basic protocol is that it forces me to 
 implement some requirements in order to conform; I can throw a bunch of 
 protocols onto a type and know that it won't compile until I've finished 
 it, developers get distracted, leave things unfinished to go back to 
 later, make typos etc. etc. To me declaring a conformance is a declaration 
 of "my type will meet the requirements for this make, sure I do it", not 
 "please, please use some magic to do this for me"; there needs to be a 
 clear difference between the two.
>>> 
>>> My conclusion isn't as pessimistic as yours, but I share your objections: 
>>> Mixing a normal feature (protocols) with compiler magic doesn't feel right 
>>> to me — wether it's Equatable, Hashable, Codable or Error.
>>> It's two different concepts with a shared name*, so I think even 
>>> AutoEquatable wouldn't be the right solution, and something like #Equatable 
>>> would be a much better indicator for what is happening.
>>> 
>>> Besides that specific concern, I can't fight the feeling that the evolution 
>>> process doesn't work well for proposals like this:
>>> It's a feature that many people just want to have as soon as possible, and 
>>> concerns regarding the long-term effects are more or less washed away with 
>>> eagerness.
>>> 
>>> - Tino
>>> 
>>> * for the same reason, I have big concerns whenever someone proposes to 
>>> blur the line between tuples and arrays
>> 
>> Agreed. To be clear though; in spite of my pessimism this is a feature that 
>> I do want, but I would rather not have it at all than have it implemented in 
>> a way that hides bugs and sets a horrible precedent for the future.
> 
> This was already touched upon during review, but to reiterate, the analogy to 
> default protocol implementations is meant specifically to address this point 
> about "hiding bugs." Yes, this feature cannot currently be implemented as a 
> default protocol implementation without magic; with better reflection 
> facilities there's a good chance that one day it might be, but that's not the 
> reason why it's being compared to default protocol implementations. The 
> reason for the comparison is that this feature only "hides bugs" like a 
> default protocol implementation "hides bugs" (in the 
> I-conformed-my-type-and-forgot-to-override-the-default-and-the-compiler-won't-remind-me-anymore
>  sense of "hiding bugs"), and the addition of default protocol 
> implementations, unless I'm mistaken, isn't even considered an API change 
> that requires Swift Evolution review.
> 
> Given Swift's emphasis on progressive disclosure, I'm fairly confident that 
> once reflection facilities and/or code-generation facilities improve, many 
> boilerplate-y protocol requirements will be given default implementations 
> where they cannot be written today. With every advance in expressiveness, 
> more protocol requirements that cannot currently have a default 
> implementation will naturally acquire them. Since the degree to which the 
> compiler will cease to give errors about non-implementation is directly in 
> proportion to the boilerplate reduced, it's not a defect but a feature that 
> these compiler errors go away. At the moment, it is a great idea to enable 
> some of these improvements for specific common use cases before the general 
> facilities for reflection and/or code-generation are improved in later 
> versions of Swift, since the user experience would be expected to remain the 
> same once those full facilities arrive.
> 
>> I realise I may seem to be overreacting, but I really do feel that strongly 
>> about what I fully believe is a mistake. I understand people's enthusiasm 
>> for the feature, I do; I hate boilerplate as much as the next developer, but 
>> as you say, it's not a reason to rush forward, especially when this is not 
>> something that can be easily changed later.
>> 
>> That's a big part of the problem; the decisions here are not just about 
>> trimming boilerplate for Equatable/Hashable, it's also about the potential 
>> overreach of every synthesised feature now and in the 

Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Xiaodi Wu via swift-evolution
On Sat, Aug 19, 2017 at 1:13 PM, Goffredo Marocchi 
wrote:

> We can override the protocol default implementation in the extension, but
> the issue I see with default implementation in Swift is that if I pass the
> object created this way around in a type erased container (Any : Protocol1
>   like it was common for many to pass id around in the
> Objective-C days, a good practice IMHO) then my override would not be
> called, but the default implementation will be used instead. I would be far
> more comfortable with this “magic” provided for free if default
> implementations were dynamically dispatched.
>

Are you referring to protocol extension methods? Those are not default
implementations, do not have a corresponding protocol requirement that can
be overridden, and are not what's being discussed here.


Sent from my iPhone
>
> On 19 Aug 2017, at 19:06, Xiaodi Wu via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>
> On Sat, Aug 19, 2017 at 06:07 Haravikk via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>>
>> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de> wrote:
>>
>> On 17.08.2017 at 20:11, Haravikk via swift-evolution wrote <
>> swift-evolution@swift.org>:
>>
>> For me the whole point of a basic protocol is that it forces me to
>> implement some requirements in order to conform; I can throw a bunch of
>> protocols onto a type and know that it won't compile until I've finished
>> it, developers get distracted, leave things unfinished to go back to later,
>> make typos etc. etc. To me declaring a conformance is a declaration of "my
>> type will meet the requirements for this, make sure I do it", not "please,
>> please use some magic to do this for me"; there needs to be a clear
>> difference between the two.
>>
>>
>> My conclusion isn't as pessimistic as yours, but I share your objections:
>> Mixing a normal feature (protocols) with compiler magic doesn't feel right
>> to me — whether it's Equatable, Hashable, Codable or Error.
>> It's two different concepts with a shared name*, so I think even
>> AutoEquatable wouldn't be the right solution, and something like #Equatable
>> would be a much better indicator for what is happening.
>>
>> Besides that specific concern, I can't fight the feeling that the
>> evolution process doesn't work well for proposals like this:
>> It's a feature that many people just want to have as soon as possible,
>> and concerns regarding the long-term effects are more or less washed away
>> with eagerness.
>>
>> - Tino
>>
>> * for the same reason, I have big concerns whenever someone proposes to
>> blur the line between tuples and arrays
>>
>>
>> Agreed. To be clear though; in spite of my pessimism this *is* a feature
>> that I *do* want, but I would rather not have it at all than have it
>> implemented in a way that hides bugs and sets a horrible precedent for the
>> future.
>>
>
> This was already touched upon during review, but to reiterate, the analogy
> to default protocol implementations is meant specifically to address this
> point about "hiding bugs." Yes, this feature cannot currently be
> implemented as a default protocol implementation without magic; with better
> reflection facilities there's a good chance that one day it might be, but
> that's not the reason why it's being compared to default protocol
> implementations. The reason for the comparison is that this feature only
> "hides bugs" like a default protocol implementation "hides bugs" (in the
> I-conformed-my-type-and-forgot-to-override-the-
> default-and-the-compiler-won't-remind-me-anymore sense of "hiding bugs"),
> and the addition of default protocol implementations, unless I'm mistaken,
> isn't even considered an API change that requires Swift Evolution review.
>
> Given Swift's emphasis on progressive disclosure, I'm fairly confident
> that once reflection facilities and/or code-generation facilities improve,
> many boilerplate-y protocol requirements will be given default
> implementations where they cannot be written today. With every advance in
> expressiveness, more protocol requirements that cannot currently have a
> default implementation will naturally acquire them. Since the degree to
> which the compiler will cease to give errors about non-implementation is
> directly in proportion to the boilerplate reduced, it's not a defect but a
> feature that these compiler errors go away. At the moment, it is a great
> idea to enable some of these improvements for specific common use cases
> before the general facilities for reflection and/or code-generation are
> improved in later versions of Swift, since the user experience would be
> expected to remain the same once those full facilities arrive.
>
> I realise I may seem to be overreacting, but I really do feel that
>> strongly about what I fully believe is a mistake. I understand people's
>> enthusiasm for the feature, I do; I hate boilerplate as much as the next
>> developer, but as you say, it's not a reason to rush forward, especia

[swift-evolution] [Concurrency] modifying beginAsync, suspendAsync to support cancellation

2017-08-19 Thread Marc Schlichte via swift-evolution
Hi,

to support cancellation, I propose the following changes to `beginAsync()` and 
`suspendAsync()`:

`beginAsync()` returns an object adhering to a `Cancelable` protocol:

```
func beginAsync(_ body: () async throws -> Void) rethrows -> Cancelable

protocol Cancelable { func cancel() }
```

`suspendAsync()` takes a new thunk parameter:

```
func suspendAsync<T>(onCancel: () -> Void,
                     body: (_ cont: (T) -> Void, _ err: (Error) -> Void) -> Void) async -> T
```

Now, when `cancel()` is invoked, the `onCancel` thunk in the current suspension 
(if any) will be called.


Example:

```
var task: Cancelable?

@IBAction func buttonDidClick(sender: AnyObject) {
  task = beginAsync {
    do {
      let image = try await processImage()
      imageView.image = image
    } catch AsyncError.canceled {
      imageView.image = nil // or some fallback image...
    } catch {
      // other handling
    }
  }
}

@IBAction func cancelDidClick(sender: AnyObject) {
  task?.cancel()
}

func processImage() async throws -> UIImage {
  // This processing should be on a background queue (or better an Actor :-) -
  // but ignored for this example
  var cancelled = false
  return await suspendAsync(onCancel: {
    cancelled = true
  }, body: { cont, err in
    while !done && !cancelled {
      // do the processing on image until done or canceled
    }
    guard !cancelled else { err(AsyncError.canceled); return } // BTW, maybe change
    // signature of `suspendAsync` to allow to throw here instead
    cont(image)
  })
}
```

Cheers
Marc

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Pitch] Improve `init(repeating:count)`

2017-08-19 Thread Daryle Walker via swift-evolution
> On Aug 17, 2017, at 1:06 PM, Erica Sadun via swift-evolution 
>  wrote:
> 
> Also, for those of you here who haven't heard my previous rant on the 
> subject, I dislike using map for generating values that don't depend on 
> transforming a domain to a range. (It has been argued that `_ in` is mapping 
> from `Void`, but I still dislike it immensely)
> 
> Here are the ways that I have approached this:
> 
> // Ugh
> [UIView(), UIView(), UIView(), UIView(), UIView()]

What if we got a duplication “macro”:

[ #dup(5 ; UIView()) ]

It’s not a true macro because it reads the count when “constexpr” objects would 
be evaluated.
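
For comparison, a minimal sketch of the workaround available today (the map-over-a-range idiom Erica mentioned disliking); the count of 5 and UIView are purely illustrative:

```
import UIKit

// The closure runs once per index, so this produces five distinct UIView
// instances, unlike Array(repeating: UIView(), count: 5), which stores five
// references to a single view.
let views = (0..<5).map { _ in UIView() }
print(views.count)   // 5
```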

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Daryle Walker via swift-evolution
> On Aug 19, 2017, at 7:06 AM, Haravikk via swift-evolution 
>  wrote:
> 
>> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de > wrote:
>>> On 17.08.2017 at 20:11, Haravikk via swift-evolution wrote 
>>> mailto:swift-evolution@swift.org>>:
>>> For me the whole point of a basic protocol is that it forces me to 
>>> implement some requirements in order to conform; I can throw a bunch of 
>>> protocols onto a type and know that it won't compile until I've finished 
>>> it, developers get distracted, leave things unfinished to go back to later, 
>>> make typos etc. etc. To me declaring a conformance is a declaration of "my 
>>> type will meet the requirements for this, make sure I do it", not "please, 
>>> please use some magic to do this for me"; there needs to be a clear 
>>> difference between the two.
>> 
>> My conclusion isn't as pessimistic as yours, but I share your objections: 
>> Mixing a normal feature (protocols) with compiler magic doesn't feel right 
>> to me — whether it's Equatable, Hashable, Codable or Error.
>> It's two different concepts with a shared name*, so I think even 
>> AutoEquatable wouldn't be the right solution, and something like #Equatable 
>> would be a much better indicator for what is happening.
>> 
>> Besides that specific concern, I can't fight the feeling that the evolution 
>> process doesn't work well for proposals like this:
>> It's a feature that many people just want to have as soon as possible, and 
>> concerns regarding the long-term effects are more or less washed away with 
>> eagerness.
>> 
>> - Tino
>> 
>> * for the same reason, I have big concerns whenever someone proposes to blur 
>> the line between tuples and arrays
> 
> Agreed. To be clear though; in spite of my pessimism this is a feature that I 
> do want, but I would rather not have it at all than have it implemented in a 
> way that hides bugs and sets a horrible precedent for the future.

I tried to make a split thread for this, but would you object to synthesized 
conformance if we had to explicitly add a command within the definition block 
to trigger the synthesis? If we add strong type-aliases, we could reuse the 
directive to copy an interface (method, inner type, property, or conformed-to 
protocol) from the underlying type to the current type for synthesis too. The 
only problem would be backward compatibility; once added, we would urge users 
to explicitly list “publish Equatable” for synthesis, but what about code that 
already uses the implicit version (since this feature will probably be released 
for at least one Swift version by the time strong type-aliases happen), do we 
force users to change their code?
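
For reference, a minimal sketch of the implicit opt-in as SE-0185 was accepted; the explicit "publish Equatable" spelling above is hypothetical:

```
// Declaring the conformance is the whole opt-in: because every stored property
// is itself Equatable and Hashable, the compiler synthesizes == and the hash
// implementation for Point.
struct Point: Equatable, Hashable {
    var x: Int
    var y: Int
}

print(Point(x: 1, y: 2) == Point(x: 0, y: 0))   // false, via the synthesized ==
```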

> I realise I may seem to be overreacting, but I really do feel that strongly 
> about what I fully believe is a mistake. I understand people's enthusiasm for 
> the feature, I do; I hate boilerplate as much as the next developer, but as 
> you say, it's not a reason to rush forward, especially when this is not 
> something that can be easily changed later.
> 
> That's a big part of the problem; the decisions here are not just about 
> trimming boilerplate for Equatable/Hashable, it's also about the potential 
> overreach of every synthesised feature now and in the future as well.

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] typed throws

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 12:43 PM, Xiaodi Wu  wrote:
> 
> 
>> On Sat, Aug 19, 2017 at 08:29 Matthew Johnson  wrote:
>> 
>> 
>> Sent from my iPad
>> 
>>> On Aug 18, 2017, at 9:19 PM, Xiaodi Wu  wrote:
>>> 
>>> 
>>> 
 On Fri, Aug 18, 2017 at 8:11 PM, Matthew Johnson  
 wrote:
 
 
 Sent from my iPad
 
> On Aug 18, 2017, at 6:56 PM, Xiaodi Wu  wrote:
> 
> Joe Groff wrote:
> 
> An alternative approach that embraces the open nature of errors could be 
> to represent domains as independent protocols, and extend the error types 
> that are relevant to that domain to conform to the protocol. That way, 
> you don't obscure the structure of the underlying error value with 
> wrappers. If you expect to exhaustively handle all errors in a domain, 
> well, you'd almost certainly going to need to have a fallback case in 
> your wrapper type for miscellaneous errors, but you could represent that 
> instead without wrapping via a catch-all, and as?-casting to your domain 
> protocol with a ??-default for errors that don't conform to the protocol. 
> For example, instead of attempting something like this:
> 
> enum DatabaseError {
>   case queryError(QueryError)
>   case ioError(IOError)
>   case other(Error)
> 
>   var errorKind: String {
> switch self {
>   case .queryError(let q): return "query error: \(q.query)"
>   case .ioError(let i): return "io error: \(i.filename)"
>   case .other(let e): return "\(e)"
> }
>   }
> }
> 
> func queryDatabase(_ query: String) throws /*DatabaseError*/ -> Table
> 
> do {
>   try queryDatabase("delete * from users")
> } catch let d as DatabaseError {
>   os_log(d.errorKind)
> } catch {
>   fatalError("unexpected non-database error")
> }
> 
> You could do this:
> 
> protocol DatabaseError {
>   var errorKind: String { get }
> }
> 
> extension QueryError: DatabaseError {
>   var errorKind: String { return "query error: \(query)" }
> }
> extension IOError: DatabaseError {
>   var errorKind: String { return "io error: \(filename)" }
> }
> 
> extension Error {
>   var databaseErrorKind: String {
> return (self as? DatabaseError)?.errorKind ?? "unexpected 
> non-database error"
>   }
> }
> 
> func queryDatabase(_ query: String) throws -> Table
> 
> do {
>   try queryDatabase("delete * from users")
> } catch {
>   os_log(error.databaseErrorKind)
> }
 
 This approach isn't sufficient for several reasons.  Notably, it requires 
 the underlying errors to already have a distinct type for every category 
 we wish to place them in.  If all network errors have the same type and I 
 want to categorize them based on network availability, authentication, 
 dropped connection, etc I am not able to do that.  
>>> 
>>> Sorry, how does the presence or absence of typed throws play into this?
>> 
>> It provides a convenient way to drive an error conversion mechanism during 
>> propagation, whether in a library function used to wrap the throwing 
>> expression or ideally with language support.  If I call a function that 
>> throws FooError and my function throws BarError and we have a way to go from 
>> FooError to BarError we can invoke that conversion without needing to catch 
>> and rethrow the wrapped error.  
> 
> But isn't that an argument *against* typed errors? You need this 
> language-level support to automatically convert FooErrors to BarErrors 
> *because* you've restricted yourself to throwing BarErrors and the function 
> you call is restricted to throwing FooErrors. Currently, without typed 
> errors, there is no need to convert a FooError to a BarError.

No, this is a misunderstanding of the point of the conversion.  In that 
example, the point of performing a conversion is not because the types are 
arbitrarily chosen. It is because the initializer of BarError includes an 
algorithm that categorizes errors.  It may place different values of FooError 
into different cases.  What I am after is language-level support for 
categorizing errors during propagation and making the categories easily visible 
to anyone who looks at the signature of a function that chooses to categorize 
its errors.  Using types and the initializer is one way (the most obvious way) 
to do this.
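
A small sketch of that categorization (all type names are invented for illustration, and the typed-throws annotations appear only in comments because the feature is hypothetical):

```
enum FooError: Error {                 // upstream library's error
    case timeout, unauthorized, corruptPayload
}

enum BarError: Error {                 // my categorization of upstream failures
    case network, authentication, badData

    init(_ error: FooError) {          // the categorizing initializer
        switch error {
        case .timeout:        self = .network
        case .unauthorized:   self = .authentication
        case .corruptPayload: self = .badData
        }
    }
}

func fetch() throws /* hypothetically: throws FooError */ { }

func load() throws /* hypothetically: throws BarError */ {
    do {
        try fetch()
    } catch let error as FooError {
        throw BarError(error)          // the conversion typed throws could invoke automatically
    } catch {
        throw error                    // other errors still propagate unchanged
    }
}
```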

> 
> As mentioned above, it's difficult even internally to design a single 
> ontology of errors that works throughout a library, so compiler support for 
> typed errors would be tantamount to a compiler-enforced facility that 
> pervasively requires this laborious classification and re-classification of 
> errrors whenever a function rethrows, much of which may be ultimately 
> unnecessary.  In other words, if you are a library vendor and wrap every 
> FooError from an upstream dependen

Re: [swift-evolution] typed throws

2017-08-19 Thread Xiaodi Wu via swift-evolution
On Sat, Aug 19, 2017 at 2:04 PM, Matthew Johnson 
wrote:

>
>
> Sent from my iPad
>
> On Aug 19, 2017, at 12:43 PM, Xiaodi Wu  wrote:
>
>
> On Sat, Aug 19, 2017 at 08:29 Matthew Johnson 
> wrote:
>
>>
>>
>> Sent from my iPad
>>
>> On Aug 18, 2017, at 9:19 PM, Xiaodi Wu  wrote:
>>
>>
>>
>> On Fri, Aug 18, 2017 at 8:11 PM, Matthew Johnson 
>> wrote:
>>
>>>
>>>
>>> Sent from my iPad
>>>
>>> On Aug 18, 2017, at 6:56 PM, Xiaodi Wu  wrote:
>>>
>>> Joe Groff wrote:
>>>
>>> An alternative approach that embraces the open nature of errors could be
>>> to represent domains as independent protocols, and extend the error types
>>> that are relevant to that domain to conform to the protocol. That way, you
>>> don't obscure the structure of the underlying error value with wrappers. If
>>> you expect to exhaustively handle all errors in a domain, well, you'd
>>> almost certainly going to need to have a fallback case in your wrapper type
>>> for miscellaneous errors, but you could represent that instead without
>>> wrapping via a catch-all, and as?-casting to your domain protocol with a
>>> ??-default for errors that don't conform to the protocol. For example,
>>> instead of attempting something like this:
>>>
>>> enum DatabaseError {
>>>   case queryError(QueryError)
>>>   case ioError(IOError)
>>>   case other(Error)
>>>
>>>   var errorKind: String {
>>> switch self {
>>>   case .queryError(let q): return "query error: \(q.query)"
>>>   case .ioError(let i): return "io error: \(i.filename)"
>>>   case .other(let e): return "\(e)"
>>> }
>>>   }
>>> }
>>>
>>> func queryDatabase(_ query: String) throws /*DatabaseError*/ -> Table
>>>
>>> do {
>>>   try queryDatabase("delete * from users")
>>> } catch let d as DatabaseError {
>>>   os_log(d.errorKind)
>>> } catch {
>>>   fatalError("unexpected non-database error")
>>> }
>>>
>>> You could do this:
>>>
>>> protocol DatabaseError {
>>>   var errorKind: String { get }
>>> }
>>>
>>> extension QueryError: DatabaseError {
>>>   var errorKind: String { return "query error: \(query)" }
>>> }
>>> extension IOError: DatabaseError {
>>>   var errorKind: String { return "io error: \(filename)" }
>>> }
>>>
>>> extension Error {
>>>   var databaseErrorKind: String {
>>> return (self as? DatabaseError)?.errorKind ?? "unexpected
>>> non-database error"
>>>   }
>>> }
>>>
>>> func queryDatabase(_ query: String) throws -> Table
>>>
>>> do {
>>>   try queryDatabase("delete * from users")
>>> } catch {
>>>   os_log(error.databaseErrorKind)
>>> }
>>>
>>>
>>> This approach isn't sufficient for several reasons.  Notably, it
>>> requires the underlying errors to already have a distinct type for every
>>> category we wish to place them in.  If all network errors have the same
>>> type and I want to categorize them based on network availability,
>>> authentication, dropped connection, etc I am not able to do that.
>>>
>>
>> Sorry, how does the presence or absence of typed throws play into this?
>>
>>
>> It provides a convenient way to drive an error conversion mechanism
>> during propagation, whether in a library function used to wrap the throwing
>> expression or ideally with language support.  If I call a function that
>> throws FooError and my function throws BarError and we have a way to go
>> from FooError to BarError we can invoke that conversion without needing to
>> catch and rethrow the wrapped error.
>>
>
> But isn't that an argument *against* typed errors? You need this
> language-level support to automatically convert FooErrors to BarErrors
> *because* you've restricted yourself to throwing BarErrors and the function
> you call is restricted to throwing FooErrors. Currently, without typed
> errors, there is no need to convert a FooError to a BarError.
>
>
> No, this is a misunderstanding of the point of the conversion.  In that
> example, the point of performing a conversion is not because the types are
> arbitrarily chosen. It is because the initializer of BarError includes an
> algorithm that categorizes errors.  It may place different values of
> FooError into different cases.  What I am after is language-level support
> for categorizing errors during propagation and making the categories easily
> visible to anyone who looks at the signature of a function that chooses to
> categorize its errors.  Using types and the initializer is one way (the
> most obvious way) to do this.
>
>
> As mentioned above, it's difficult even internally to design a single
> ontology of errors that works throughout a library, so compiler support for
> typed errors would be tantamount to a compiler-enforced facility that
> pervasively requires this laborious classification and re-classification of
> errrors whenever a function rethrows, much of which may be ultimately
> unnecessary.  In other words, if you are a library vendor and wrap every
> FooError from an upstream dependency into a BarError, your user is still
> likely to have their own classification of errors and decide to handle
> dif

Re: [swift-evolution] typed throws

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 2:16 PM, Xiaodi Wu  wrote:
> 
>> On Sat, Aug 19, 2017 at 2:04 PM, Matthew Johnson  
>> wrote:
>> 
>> 
>> Sent from my iPad
>> 
>>> On Aug 19, 2017, at 12:43 PM, Xiaodi Wu  wrote:
>>> 
>>> 
 On Sat, Aug 19, 2017 at 08:29 Matthew Johnson  
 wrote:
 
 
 Sent from my iPad
 
> On Aug 18, 2017, at 9:19 PM, Xiaodi Wu  wrote:
> 
> 
> 
>> On Fri, Aug 18, 2017 at 8:11 PM, Matthew Johnson 
>>  wrote:
>> 
>> 
>> Sent from my iPad
>> 
>>> On Aug 18, 2017, at 6:56 PM, Xiaodi Wu  wrote:
>>> 
>>> Joe Groff wrote:
>>> 
>>> An alternative approach that embraces the open nature of errors could 
>>> be to represent domains as independent protocols, and extend the error 
>>> types that are relevant to that domain to conform to the protocol. That 
>>> way, you don't obscure the structure of the underlying error value with 
>>> wrappers. If you expect to exhaustively handle all errors in a domain, 
>>> well, you'd almost certainly going to need to have a fallback case in 
>>> your wrapper type for miscellaneous errors, but you could represent 
>>> that instead without wrapping via a catch-all, and as?-casting to your 
>>> domain protocol with a ??-default for errors that don't conform to the 
>>> protocol. For example, instead of attempting something like this:
>>> 
>>> enum DatabaseError {
>>>   case queryError(QueryError)
>>>   case ioError(IOError)
>>>   case other(Error)
>>> 
>>>   var errorKind: String {
>>> switch self {
>>>   case .queryError(let q): return "query error: \(q.query)"
>>>   case .ioError(let i): return "io error: \(i.filename)"
>>>   case .other(let e): return "\(e)"
>>> }
>>>   }
>>> }
>>> 
>>> func queryDatabase(_ query: String) throws /*DatabaseError*/ -> Table
>>> 
>>> do {
>>>   try queryDatabase("delete * from users")
>>> } catch let d as DatabaseError {
>>>   os_log(d.errorKind)
>>> } catch {
>>>   fatalError("unexpected non-database error")
>>> }
>>> 
>>> You could do this:
>>> 
>>> protocol DatabaseError {
>>>   var errorKind: String { get }
>>> }
>>> 
>>> extension QueryError: DatabaseError {
>>>   var errorKind: String { return "query error: \(query)" }
>>> }
>>> extension IOError: DatabaseError {
>>>   var errorKind: String { return "io error: \(filename)" }
>>> }
>>> 
>>> extension Error {
>>>   var databaseErrorKind: String {
>>> return (self as? DatabaseError)?.errorKind ?? "unexpected 
>>> non-database error"
>>>   }
>>> }
>>> 
>>> func queryDatabase(_ query: String) throws -> Table
>>> 
>>> do {
>>>   try queryDatabase("delete * from users")
>>> } catch {
>>>   os_log(error.databaseErrorKind)
>>> }
>> 
>> This approach isn't sufficient for several reasons.  Notably, it 
>> requires the underlying errors to already have a distinct type for every 
>> category we wish to place them in.  If all network errors have the same 
>> type and I want to categorize them based on network availability, 
>> authentication, dropped connection, etc I am not able to do that.  
> 
> Sorry, how does the presence or absence of typed throws play into this?
 
 It provides a convenient way to drive an error conversion mechanism during 
 propagation, whether in a library function used to wrap the throwing 
 expression or ideally with language support.  If I call a function that 
 throws FooError and my function throws BarError and we have a way to go 
 from FooError to BarError we can invoke that conversion without needing to 
 catch and rethrow the wrapped error.  
>>> 
>>> But isn't that an argument *against* typed errors? You need this 
>>> language-level support to automatically convert FooErrors to BarErrors 
>>> *because* you've restricted yourself to throwing BarErrors and the function 
>>> you call is restricted to throwing FooErrors. Currently, without typed 
>>> errors, there is no need to convert a FooError to a BarError.
>> 
>> No, this is a misunderstanding of the point of the conversion.  In that 
>> example, the point of performing a conversion is not because the types are 
>> arbitrarily chosen. It is because the initializer of BarError includes an 
>> algorithm that categorizes errors.  It may place different values of 
>> FooError into different cases.  What I am after is language-level support 
>> for categorizing errors during propagation and making the categories easily 
>> visible to anyone who looks at the signature of a function that chooses to 
>> categorize its errors.  Using types and the initializer is one way (the most 
>> obvious way) to do this.
>> 
>>> 
>>> As mentioned above, it's difficult even internally to design a single 
>>> ontology o

[swift-evolution] Preparing Swift compiler stage reentrancy in preparation for "constexpr"

2017-08-19 Thread Daryle Walker via swift-evolution
[I’m not sure which list should cover this.]

I once thought of having a “#protocols(SomeTypeOrProtocol)” that was a type 
alias to a composition of all protocols the given type/protocol conforms to (or 
“Any” if none). It seems simple, but all current uses of protocols requires 
explicit mentions by the user, while this new facility would require looking 
back at the table of existing protocols.

It seems that implementing something like C++’s “constexpr” would have the same 
problem. The compiler stages can’t be one way anymore; a non-literal that’s 
needed for a compile-time context has to be deferred until its components can 
be evaluated in compile-time contexts (down to literals). So we would partially 
evaluate down to SIL, run the SIL in the compiler’s environment to determine 
all the compile-time data (not a big deal for discrete data, but we’ll have to 
ignore differences between the compiler’s environment and the target platform’s 
environment w.r.t. floating-point processing), then go back to the semantic 
phase to fill in the “constexpr” constants and evaluate the SIL again, possibly 
taking several cycles depending on how deep “constexpr” is needed.

As an example, take a mythical function that converts a mythical 
one-dimensional fixed-size array to a tuple:

func tuple(from: [N; T]) -> ( #dup(N ; T) )

We would have to defer the return type until each call of “tuple(from:)”, where 
we look at the input’s type’s shape to determine N, then expand the #dup, then 
run the semantic phase again with the final type. What if N itself was based 
off a “constexpr” function? Then we would need (at least) two passes.

Should we start preparing the compiler stages for loopback now? Or do we have 
to figure out what exactly we want “Swift constexpr” to mean first?

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Haravikk via swift-evolution

> On 19 Aug 2017, at 19:46, Daryle Walker  wrote:
> 
>> On Aug 19, 2017, at 7:06 AM, Haravikk via swift-evolution 
>> mailto:swift-evolution@swift.org>> wrote:
>> 
>>> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de > 
>>> wrote:
 On 17.08.2017 at 20:11, Haravikk via swift-evolution wrote 
 mailto:swift-evolution@swift.org>>:
 For me the whole point of a basic protocol is that it forces me to 
 implement some requirements in order to conform; I can throw a bunch of 
 protocols onto a type and know that it won't compile until I've finished 
 it, developers get distracted, leave things unfinished to go back to 
 later, make typos etc. etc. To me declaring a conformance is a declaration 
 of "my type will meet the requirements for this make, sure I do it", not 
 "please, please use some magic to do this for me"; there needs to be a 
 clear difference between the two.
>>> 
>>> My conclusion isn't as pessimistic as yours, but I share your objections: 
>>> Mixing a normal feature (protocols) with compiler magic doesn't feel right 
>>> to me — whether it's Equatable, Hashable, Codable or Error.
>>> It's two different concepts with a shared name*, so I think even 
>>> AutoEquatable wouldn't be the right solution, and something like #Equatable 
>>> would be a much better indicator for what is happening.
>>> 
>>> Besides that specific concern, I can't fight the feeling that the evolution 
>>> process doesn't work well for proposals like this:
>>> It's a feature that many people just want to have as soon as possible, and 
>>> concerns regarding the long-term effects are more or less washed away with 
>>> eagerness.
>>> 
>>> - Tino
>>> 
>>> * for the same reason, I have big concerns whenever someone proposes to 
>>> blur the line between tuples and arrays
>> 
>> Agreed. To be clear though; in spite of my pessimism this is a feature that 
>> I do want, but I would rather not have it at all than have it implemented in 
>> a way that hides bugs and sets a horrible precedent for the future.
> 
> I tried to make a split thread for this, but would you object to synthesized 
> conformance if we had to explicitly add a command within the definition block 
> to trigger the synthesis? If we add strong type-aliases, we could reuse the 
> directive to copy an interface (method, inner type, property, or conformed-to 
> protocol) from the underlying type to the current type for synthesis too. The 
> only problem would be backward compatibility; once added, we would urge users 
> to explicitly list “publish Equatable” for synthesis, but what about code 
> that already uses the implicit version (since this feature will probably be 
> released for at least one Swift version by the time strong type-aliases 
> happen), do we force users to change their code?

I would rather no code at all use the implicit version; one of my points is 
that it's not something that's easily changed after the fact, which is why it 
needs to be done correctly now.

I'm open to any method that makes opting in to the synthesised conformance 
explicit; I still think a specifically named protocol is the simplest, but I'm 
not married to that as a solution; attributes, keywords etc. are all fine too, 
whatever is the easiest way to opt-in to the behaviour explicitly without 
ambiguity. I'm not 100% sure exactly what you mean by "add a command within the 
definition block", or is an attribute/keyword what you meant?___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Goffredo Marocchi via swift-evolution
Sorry, I thought that the default implementation in the protocol extension was 
how this was provided.

> Providing Default Implementations
> You can use protocol extensions to provide a default implementation to any 
> method or computed property requirement of that protocol
> 

https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Protocols.html#//apple_ref/doc/uid/TP40014097-CH25-ID521

Sent from my iPhone

>> On 19 Aug 2017, at 19:28, Xiaodi Wu  wrote:
>> 
>> On Sat, Aug 19, 2017 at 1:13 PM, Goffredo Marocchi  wrote:
>> We can override the protocol default implementation in the extension, but 
>> the issue I see with default implementation in Swift is that if I pass the 
>> object created this way around in a type erased container (Any : Protocol1   
>> like it was common for many to pass id around in the Objective-C 
>> days, a good practice IMHO) then my override would not be called, but the 
>> default implementation will be used instead. I would be far more comfortable 
>> with this “magic” provided for free if default implementations were 
>> dynamically dispatched.
> 
> Are you referring to protocol extension methods? Those are not default 
> implementations, do not have a corresponding protocol requirement that can be 
> overridden, and are not what's being discussed here.
> 
> 
>> Sent from my iPhone
>> 
 On 19 Aug 2017, at 19:06, Xiaodi Wu via swift-evolution 
  wrote:
 
 
 On Sat, Aug 19, 2017 at 06:07 Haravikk via swift-evolution 
  wrote:
 
>> On 19 Aug 2017, at 11:44, Tino Heth <2...@gmx.de> wrote:
>> On 17.08.2017 at 20:11, Haravikk via swift-evolution wrote 
>> :
>> For me the whole point of a basic protocol is that it forces me to 
>> implement some requirements in order to conform; I can throw a bunch of 
>> protocols onto a type and know that it won't compile until I've finished 
>> it, developers get distracted, leave things unfinished to go back to 
>> later, make typos etc. etc. To me declaring a conformance is a 
>> declaration of "my type will meet the requirements for this, make sure I 
>> do it", not "please, please use some magic to do this for me"; there 
>> needs to be a clear difference between the two.
> 
> My conclusion isn't as pessimistic as yours, but I share your objections: 
> Mixing a normal feature (protocols) with compiler magic doesn't feel 
> right to me — whether it's Equatable, Hashable, Codable or Error.
> It's two different concepts with a shared name*, so I think even 
> AutoEquatable wouldn't be the right solution, and something like 
> #Equatable would be a much better indicator for what is happening.
> 
> Besides that specific concern, I can't fight the feeling that the 
> evolution process doesn't work well for proposals like this:
> It's a feature that many people just want to have as soon as possible, 
> and concerns regarding the long-term effects are more or less washed away 
> with eagerness.
> 
> - Tino
> 
> * for the same reason, I have big concerns whenever someone proposes to 
> blur the line between tuples and arrays
 
 Agreed. To be clear though; in spite of my pessimism this is a feature 
 that I do want, but I would rather not have it at all than have it 
 implemented in a way that hides bugs and sets a horrible precedent for the 
 future.
>>> 
>>> This was already touched upon during review, but to reiterate, the analogy 
>>> to default protocol implementations is meant specifically to address this 
>>> point about "hiding bugs." Yes, this feature cannot currently be 
>>> implemented as a default protocol implementation without magic; with better 
>>> reflection facilities there's a good chance that one day it might be, but 
>>> that's not the reason why it's being compared to default protocol 
>>> implementations. The reason for the comparison is that this feature only 
>>> "hides bugs" like a default protocol implementation "hides bugs" (in the 
>>> I-conformed-my-type-and-forgot-to-override-the-default-and-the-compiler-won't-remind-me-anymore
>>>  sense of "hiding bugs"), and the addition of default protocol 
>>> implementations, unless I'm mistaken, isn't even considered an API change 
>>> that requires Swift Evolution review.
>>> 
>>> Given Swift's emphasis on progressive disclosure, I'm fairly confident that 
>>> once reflection facilities and/or code-generation facilities improve, many 
>>> boilerplate-y protocol requirements will be given default implementations 
>>> where they cannot be written today. With every advance in expressiveness, 
>>> more protocol requirements that cannot currently have a default 
>>> implementation will naturally acquire them. Since the degree to which the 
>>> compiler will cease to give errors about non-implementation is directly in 
>>> proportion to the boilerplate reduced, it's not a defect

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Michel Fortin via swift-evolution

> On 19 August 2017 at 11:38, Matthew Johnson  wrote:
> 
> 
> 
> Sent from my iPad
> 
> On Aug 19, 2017, at 8:16 AM, Michel Fortin via swift-evolution 
> mailto:swift-evolution@swift.org>> wrote:
> 
 For instance, has Array<UIView> value semantics? 
>>> 
>>> By the commonly accepted definition, Array<UIView> does not provide value 
>>> semantics.
>>> 
 You might be tempted to say that it does not because it contains class 
 references, but in reality that depends on what you do with those UIViews.
>>> 
>>> An aspect of the type (“does it have value semantics or not”) should not 
>>> depend on the clients.  By your definition, every type has value semantics 
>>> if none of the mutating operations are called :-)
>> 
>> No, not mutating operations. Access to mutable memory shared by multiple 
>> "values" is what breaks value semantics. You can get into this situation 
>> using pointers, object references, or global variables. It's all the same 
>> thing in the end: shared memory that can mutate.
>> 
>> For demonstration's sake, here's a silly example of how you can give 
>> Array<Int> literally the same semantics as Array<UIView>:
>> 
>>  // shared UIView instances in global memory
>>  var instances: [UIView] = []
>> 
>>  extension Array where Element == Int {
>> 
>>  // append a new integer to the array pointing to our UIView 
>> instance
>>  mutating func append(view: UIView) {
>>  self.append(instances.count)
>>  instances.append(view)
>>  }
>> 
>>  // access views pointed to by the integers in the array
>>  subscript(viewAt index: Int) -> UIView {
>>  get {
>>  return instances[self[index]]
>>  }
>>  set {
>>  self[index] = instances.count
>>  instances.append(newValue)
>>  }
>>  }
>>  }
>> 
>> And now you need to worry about passing Array<Int> to another thread. ;-)
>> 
>> It does not really matter whether the array contains pointers or whether it 
>> contains indices into a global table: in both cases access to the same 
>> mutable memory is accessible through multiple copies of an array, and this 
>> is what breaks value semantics.
>> 
>> Types cannot enforce value semantics. It's the functions you choose to call 
>> that matter. This is especially important to realize in a language with 
>> extensions where you can't restrict what functions get attached to a type.
> 
> This gets deeper into the territory of the conversation Dave A and I had a 
> while ago.  I think this conflates value semantics with pure functions, which 
> I think is a mistake.  
> 
> I agree that if you assume away reference counting a function that takes 
> Array<UIView> but never dereferences the pointers can still be a pure 
> function.  However, I disagree that Array<UIView> has value semantics.
> 

> The relationship of value semantics to purity is that value semantics can be 
> defined in terms of the purity of the "salient operations" of the type - 
> those which represent the meaning of the value represented by the type.  The 
> purity of these operations is what gives the value independence from copies 
> in terms of its meaning.  If somebody chooses to add a new impure operation 
> in an extension of a type with value semantics it does not mean that the type 
> itself no longer has value semantics.  The operation in the extension is not 
> "salient".
> 
> This still begs the question: what operations are "salient"?  I think 
> everyone can agree that those used in the definition of equality absolutely 
> must be included.  If two values don't compare equal they clearly do not have 
> the same meaning.  Thread safety is also usually implied for practical 
> reasons as is the case in Chris's manifesto.  These properties are generally 
> considered necessary for value semantics.
> 
> While these conditions are *necessary* for value semantics I do not believe 
> they are *sufficient* for value semantics.  Independence of the value is also 
> required.  When a reference type defines equality in terms of object identity 
> copies of the reference are not truly independent.  
> 
> This is especially true in a language like Swift where dereference is 
> implicit.  I argue that when equality is defined in terms of object identity 
> copies of the reference are *not* independent.  The meaning of the reference 
> is inherently tied up with the resource it references.  The resource has to 
> be considered "salient" for the independence to be a useful property.  On the 
> other hand, if all you really care about is the identity and not the 
> resource, ObjectIdentifier is available and does have value semantics.  There 
> is a very good reason this type exists.

The reason we're discussing value semantics here is because they are useful 
for making concurrency safer. If we define the meaning of 

Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread Xiaodi Wu via swift-evolution
On Sat, Aug 19, 2017 at 3:26 PM, Goffredo Marocchi 
wrote:

> Sorry, I thought that the default implementation in the protocol extension
> was how this was provided.
>
> Providing Default Implementations
>
> You can use protocol extensions to provide a default implementation to any
> method or computed property requirement of that protocol
>
> https://developer.apple.com/library/content/documentation/
> Swift/Conceptual/Swift_Programming_Language/Protocols.html#//apple_ref/
> doc/uid/TP40014097-CH25-ID521
>


There are default implementations and extension methods. Both are written
inside protocol extensions. Default implementations are dynamically
dispatched, but extension methods are not. A default implementation
implements a protocol requirement. An extension method adds a method to a
protocol which is not a requirement.
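
A minimal sketch of that distinction (Greeter, Person, and the method names are invented for illustration):

```
protocol Greeter {
    func greet() -> String                                    // requirement
}

extension Greeter {
    func greet() -> String { return "hello (default)" }      // default implementation
    func wave() -> String { return "waving (extension)" }    // extension method, not a requirement
}

struct Person: Greeter {
    func greet() -> String { return "hi, I'm a Person" }     // overrides the default
    func wave() -> String { return "Person waving" }         // merely shadows the extension method
}

let someone: Greeter = Person()
print(someone.greet())   // "hi, I'm a Person": requirements dispatch dynamically
print(someone.wave())    // "waving (extension)": extension methods dispatch statically
```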
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 3:29 PM, Michel Fortin  wrote:
> 
> 
>> Le 19 août 2017 à 11:38, Matthew Johnson  a écrit :
>> 
>> 
>> 
>> Sent from my iPad
>> 
>> On Aug 19, 2017, at 8:16 AM, Michel Fortin via swift-evolution 
>>  wrote:
>> 
> For instance, has Array<UIView> value semantics? 
 
 By the commonly accepted definition, Array<UIView> does not provide value 
 semantics.
 
> You might be tempted to say that it does not because it contains class 
> references, but in reality that depends on what you do with those UIViews.
 
 An aspect of the type (“does it have value semantics or not”) should not 
 depend on the clients.  By your definition, every type has value semantics 
 if none of the mutating operations are called :-)
>>> 
>>> No, not mutating operations. Access to mutable memory shared by multiple 
>>> "values" is what breaks value semantics. You can get into this situation 
>>> using pointers, object references, or global variables. It's all the same 
>>> thing in the end: shared memory that can mutate.
>>> 
>>> For demonstration's sake, here's a silly example of how you can give 
>>> Array<Int> literally the same semantics as Array<UIView>:
>>> 
>>> // shared UIView instances in global memory
>>> var instances: [UIView] = []
>>> 
>>> extension Array where Element == Int {
>>> 
>>> // append a new integer to the array pointing to our UIView 
>>> instance
>>> mutating func append(view: UIView) {
>>> self.append(instances.count)
>>> instances.append(view)
>>> }
>>> 
>>> // access views pointed to by the integers in the array
>>> subscript(viewAt index: Int) -> UIView {
>>> get {
>>> return instances[self[index]]
>>> }
>>> set {
>>> self[index] = instances.count
>>> instances.append(newValue)
>>> }
>>> }
>>> }
>>> 
>>> And now you need to worry about passing Array<Int> to another thread. ;-)
>>> 
>>> It does not really matter whether the array contains pointers or whether it 
>>> contains indices into a global table: in both cases access to the same 
>>> mutable memory is accessible through multiple copies of an array, and this 
>>> is what breaks value semantics.
>>> 
>>> Types cannot enforce value semantics. It's the functions you choose to call 
>>> that matter. This is especially important to realize in a language with 
>>> extensions where you can't restrict what functions get attached to a type.
>> 
>> This gets deeper into the territory of the conversation Dave A and I had a 
>> while ago.  I think this conflates value semantics with pure functions, 
>> which I think is a mistake.  
>> 
>> I agree that if you assume away reference counting a function that takes 
>> Array<UIView> but never dereferences the pointers can still be a pure 
>> function.  However, I disagree that Array<UIView> has value semantics.
>> 
> 
>> The relationship of value semantics to purity is that value semantics can be 
>> defined in terms of the purity of the "salient operations" of the type - 
>> those which represent the meaning of the value represented by the type.  The 
>> purity of these operations is what gives the value independence from copies 
>> in terms of its meaning.  If somebody chooses to add a new impure operation 
>> in an extension of a type with value semantics it does not mean that the 
>> type itself no longer has value semantics.  The operation in the extension 
>> is not "salient".
>> 
>> This still begs the question: what operations are "salient"?  I think 
>> everyone can agree that those used in the definition of equality absolutely 
>> must be included.  If two values don't compare equal they clearly do not 
>> have the same meaning.  Thread safety is also usually implied for practical 
>> reasons as is the case in Chris's manifesto.  These properties are generally 
>> considered necessary for value semantics.
>> 
>> While these conditions are *necessary* for value semantics I do not believe 
>> they are *sufficient* for value semantics.  Independence of the value is 
>> also required.  When a reference type defines equality in terms of object 
>> identity copies of the reference are not truly independent.  
>> 
>> This is especially true in a language like Swift where dereference is 
>> implicit.  I argue that when equality is defined in terms of object identity 
>> copies of the reference are *not* independent.  The meaning of the reference 
>> is inherently tied up with the resource it references.  The resource has to 
>> be considered "salient" for the independence to be a useful property.  On 
>> the other hand, if all you really care about is the identity and not the 
>> resource, ObjectIdentifier is available and does have value semantics.  
>> There is a very good reason this type exists.
> 

Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 15, 2017, at 9:47 PM, Taylor Swift via swift-evolution 
>  wrote:
> 
> Implementation is here: https://github.com/apple/swift/pull/11464 
> 
> 
> On Sat, Aug 12, 2017 at 8:23 PM, Taylor Swift  > wrote:
> I’ve revised the proposal based on what I learned from trying to implement 
> these changes. I think it’s worth tacking the existing methods that take 
> Sequences at the same time as this actually makes the design a bit simpler.
>  >
> 
> The previous version 
>  of this 
> document ignored the generic initialization methods on 
> UnsafeMutableBufferPointer and UnsafeMutableRawBufferPointer, leaving them to 
> be overhauled at a later date, in a separate proposal. Instead, this version 
> of the proposal leverages those existing methods to inform a more compact API 
> design which has less surface area, and is more future-proof since it 
> obviates the need to design and add another (redundant) set of 
> protocol-oriented pointer APIs later.
> 
> On Tue, Aug 8, 2017 at 12:52 PM, Taylor Swift  > wrote:
> Since Swift 5 just got opened up for proposals, SE-184 Improved Pointers is 
> ready for community review, and I encourage everyone to look it over and 
> provide feedback. Thank you!
>   
> >


Thanks for continuing to improve this proposal. It’s in great shape now.

Upon rereading it today I have to say I strongly object to the `count = 1` 
default in the following two cases:

+ UnsafeMutablePointer.withMemoryRebound(to: count: Int = 1)
+ UnsafeMutableRawPointer.bindMemory(to:T.Type, count:Int = 1)
  -> UnsafeMutablePointer

To aid understanding, it needs to be clear at the call-site that binding memory 
only applies to the specified number of elements. It's a common mistake for 
users to think they can obtain a pointer to a different type, then use that 
pointer as a base to access other elements. These APIs are dangerous expert 
interfaces. We certainly don't want to make their usage more concise at the 
expense of clarity.

In general, I think there's very little value in the `count=1` default, and it 
creates potential confusion on the caller side between the `BufferPointer` API 
and the `Pointer` API. For example:

+ initialize(repeating:Pointee, count:Int = 1)

Seeing `p.initialize(repeating: x)`, the user may think `p` refers to the 
buffer instead of a pointer into the buffer and misunderstand the behavior.

+ UnsafeMutablePointer.deinitialize(count: Int = 1)

Again, `p.deinitialize()` looks to me like it might be deinitializing an entire 
buffer.

If the `count` label is always explicit, then there's a clear distinction 
between the low-level `pointer` APIs and the `buffer` APIs.
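
A small sketch of that call-site difference, written with explicit counts throughout (the capacity of 4 is illustrative, and the defaulted spellings appear only in comments because they are the proposed API, not shipping behaviour):

```
let p = UnsafeMutablePointer<Int>.allocate(capacity: 4)

p.initialize(repeating: 0, count: 4)   // explicit count: clearly touches 4 elements
p.deinitialize(count: 4)               // explicit count: clearly deinitializes 4 elements

// With a `count = 1` default, p.initialize(repeating: 0) and p.deinitialize()
// would read like whole-buffer operations while touching only a single element.

p.deallocate()
```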

The pointer-to-single-element case never seemed interesting enough to me to 
worry about making convenient. If I'm wrong about that, is there some 
real-world code you can point to where the count=1 default significantly 
improves clarity?

-Andy
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 9, 2017, at 8:51 AM, Taylor Swift  wrote:
> 
> 
> 
> On Wed, Aug 9, 2017 at 2:34 AM, Andrew Trick  > wrote:
> 
>> On Aug 8, 2017, at 11:10 PM, Taylor Swift > > wrote:
>> 
>> 
>> On Wed, Aug 9, 2017 at 1:51 AM, Andrew Trick > > wrote:
>> 
>>> On Aug 8, 2017, at 8:44 PM, Taylor Swift >> > wrote:
>>> 
>>> cool, as for UnsafeMutableRawBufferPointer.copy(from:bytes:), I cannot 
>>> find such a function anywhere in the API. There is copyBytes(from:), 
>>>  but the documentation is messed up and mentions a nonexistent count: 
>>> argument over and over again. The documentation also doesn’t mention what 
>>> happens if there is a length mismatch, so users are effectively relying on 
>>> an implementation detail. I don’t know how to best resolve this.
>> 
>> We currently have `UnsafeMutableRawBufferPointer.copyBytes(from:)`. I don’t 
>> think your proposal changes that. The current docs refer to the `source` 
>> parameter, which is correct. Docs refer to the parameter name, not the label 
>> name. So `source.count` is the size of the input. I was pointing out that it 
>> has the semantics: `debugAssert(source.count <= self.count)`.
>> 
>> Your proposal changes `UnsafeRawPointer.copyBytes(from:count:)` to 
>> `UnsafeRawPointer.copy(from:bytes:)`. Originally we wanted to those API 
>> names to match, but I’m fine with your change. What is more important is 
>> that the semantics are the same as `copyBytes(from:)`. Furthermore, any new 
>> methods that you add that copy into a raw buffer (e.g. 
>> initializeMemory(as:from:count:)) should have similar behavior.
>> 
>>  
>> I’m fine with switching to taking the count from the source, though I think 
>> taking the count from the destination is slightly better because 1) the use 
>> cases I mentioned in the other email, and 2) all the other memorystate 
>> functions use self.count instead of source.count, if they take a source 
>> argument. But being consistent with the raw pointer version is more 
>> important.
> 
> If it’s copying from a buffer it should not take a count, if it’s copying 
> from a pointer it obviously needs to take a count. What I mean by the two 
> versions being named consistently is simply that they’re both named 
> `copyBytes`. That really isn’t important though. The overflow/underflow 
> semantics being consistent are important.
> 
> (Incidentally, the reason “bytes” needs to be in the name somewhere is 
> because this method isn’t capable of copying nontrivial values)
> 
>> Should the methods that don’t deal with raw buffers also be modified to use 
>> the source argument (i.e. UnsafeMutableBufferPointer.initialize(from:))?
> 
> I’m not sure what you mean by this. It also allows the destination to be 
> larger than the source. Initializing from a sequence does not trap on 
> overflow because we can’t guarantee the size of the sequence ahead of time. 
> When I talk about consistent overflow/underflow semantics, I’m only talking 
> about initializing one unsafe buffer/pointer from another unsafe 
> buffer/pointer.
> 
>> Also, was there a reason why UnsafeMutableRawBufferPointer.copyBytes(from:) 
>> uses the source’s count instead of its own? Right now this behavior is 
>> “technically” undocumented behavior (as the public docs haven’t been 
>> updated) so if there was ever a time to change it, now would be it.
> 
> Mainly because partial initialization is more expected than dropping data on 
> the floor. Ultimately, this should be whatever typical developers would 
> expect the behavior to be. I would be very hesitant to change the behavior 
> now though.
> 
> -Andy
> 
> The problem is I would expect to be able to safely call deinitialize() and 
> friends after calling initialize(from:). If Element is a class type and 
> initialize doesn’t fill the entire buffer range, calling deinitialize() will 
> crash. That being said, since copy(from:bytes:) and copyBytes(from:) don’t do 
> any initialization and have no direct counterparts in 
> UnsafeMutableBufferPointer, it’s okay if they have different behavior than 
> the other methods.

You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize() 
method is dangerous, and I asked you to add a warning to its comments. However, 
given the danger, I think we need to justify adding the method to begin with. 
Are there real use cases that greatly benefit from it?

We have to accept that UnsafeBufferPointer is simply not a safe API for 
managing initialization and deinitialization. Adding a convenience method only 
makes it less safe.

The standard library *should* vend a safe API for initializing and 
deinitializing manually allocated memory. However, that will require a new 
"buffer" type that wraps UnsafeBufferPointer. That will be a major design 
discussion that is both out o

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Chris Lattner via swift-evolution
On Aug 19, 2017, at 2:02 AM, Susan Cheng  wrote:
> Hi chris,
> 
> is a actor guarantee always process the messages in one by one?
> so, can it assume that never being multiple threads try to modify the state 
> at the same time?

Yep, that’s the idea.

> P.S. i have implemented similar idea before:
> 
> https://github.com/SusanDoggie/Doggie/blob/master/Sources/Doggie/Thread/Thread.swift
>  
> 
> https://github.com/SusanDoggie/Doggie/blob/master/Sources/Doggie/SDTriggerNode.swift
>  
> 

Cool.  That’s one of the other interesting things about the actor model.  We 
can prototype and build it completely as a library feature to get experience 
with the runtime model, then move to language support (providing the additional 
safety) when things seem to work well in practice.
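
For illustration, a minimal sketch of what such a library-level prototype might 
look like today, assuming a private serial DispatchQueue as the actor's 
"mailbox" (all names here are placeholders, not proposed API):

    import Dispatch

    // Sketch of an "actor": its state is only ever touched on its own serial queue.
    final class CounterActor {
        private let queue = DispatchQueue(label: "CounterActor")  // serial by default
        private var count = 0                                      // protected state

        // "Fire and forget" message: enqueue the work and return immediately.
        func increment() {
            queue.async { self.count += 1 }
        }

        // Message with a reply: deliver the result through a callback.
        func getCount(_ reply: @escaping (Int) -> Void) {
            queue.async { reply(self.count) }
        }
    }

Because the queue is serial, messages are processed one at a time, which is the 
guarantee discussed above.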

-Chris


___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Chris Lattner via swift-evolution

> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>  wrote:
> 
> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
> Actors pattern looks really compelling.
> 
> One thought that occurred to me reading through the section of the 
> "async/await" proposal on whether async implies throws:
> 
> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
> to suppress the catch block with ?/!, does that mean we do it on the ‘await’ 
> ? 
> 
> guard let foo = await? getAFoo() else {  …  }

Interesting question, I’d lean towards “no, we don’t want await? and await!”.  
My sense is that the try? and try! forms are only occasionally used, and await? 
implies heavily that the optional behavior has something to do with the async, 
not with the try.  I think it would be ok to have to write “try? await foo()” 
in the case that you’d want the thrown error to turn into an optional.  That 
would be nice and explicit.
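
For illustration, here is roughly how the two spellings would read under the 
proposal (Foo and getAFoo() are just placeholders, and the syntax is of course 
hypothetical until async/await actually lands):

    struct Foo { let value: Int }

    // Placeholder async throwing function.
    func getAFoo() async throws -> Foo {
        return Foo(value: 42)
    }

    func load() async {
        // 'await' alone marks the suspension point; 'try?' turns a thrown error into nil.
        guard let foo = try? await getAFoo() else { return }
        print(foo.value)
    }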

-Chris

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
On Sat, Aug 19, 2017 at 6:05 PM, Andrew Trick  wrote:

>
> On Aug 9, 2017, at 8:51 AM, Taylor Swift  wrote:
>
>
>
> On Wed, Aug 9, 2017 at 2:34 AM, Andrew Trick  wrote:
>
>>
>> On Aug 8, 2017, at 11:10 PM, Taylor Swift  wrote:
>>
>>
>> On Wed, Aug 9, 2017 at 1:51 AM, Andrew Trick  wrote:
>>
>>>
>>> On Aug 8, 2017, at 8:44 PM, Taylor Swift  wrote:
>>>
>>> cool,, as for UnsafeMutableRawBufferPointer.copy(from:bytes:), I cannot
>>> find such a function anywhere in the API. There is copyBytes(from:),
>>> but the documentation is messed up and mentions a nonexistent count: 
>>> argument
>>> over and over again. The documentation also doesn’t mention what happens if
>>> there is a length mismatch, so users are effectively relying on an
>>> implementation detail. I don’t know how to best resolve this.
>>>
>>>
>>> We currently have `UnsafeMutableRawBufferPointer.copyBytes(from:)`. I
>>> don’t think your proposal changes that. The current docs refer to the
>>> `source` parameter, which is correct. Docs refer to the parameter name, not
>>> the label name. So `source.count` is the size of the input. I was pointing
>>> out that it has the semantics: `debugAssert(source.count <= self.count)`.
>>>
>>> Your proposal changes `UnsafeRawPointer.copyBytes(from:count:)` to
>>> `UnsafeRawPointer.copy(from:bytes:)`. Originally we wanted to those API
>>> names to match, but I’m fine with your change. What is more important is
>>> that the semantics are the same as `copyBytes(from:)`. Furthermore, any new
>>> methods that you add that copy into a raw buffer (e.g.
>>> initializeMemory(as:from:count:)) should have similar behavior.
>>>
>>>
>> I’m fine with switching to taking the count from the source, though I
>> think taking the count from the destination is slightly better because
>> 1) the use cases I mentioned in the other email, and 2) all the other
>> memorystate functions use self.count instead of source.count, if they
>> take a source argument. But being consistent with the raw pointer
>> version is more important.
>>
>>
>> If it’s copying from a buffer it should not take a count, if it’s copying
>> from a pointer it obviously needs to take a count. What I mean by the two
>> versions being named consistently is simply that they’re both named
>> `copyBytes`. That really isn’t important though. The overflow/underflow
>> semantics being consistent are important.
>>
>> (Incidentally, the reason “bytes” needs to be in the somewhere name is
>> because this method isn’t capable of copying nontrivial values)
>>
>> Should the methods that don’t deal with raw buffers also be modified to
>> use the source argument (i.e. UnsafeMutableBufferPointer.ini
>> tialize(from:))?
>>
>>
>> I’m not sure what you mean by this. It also allows the destination to be
>> larger than the source. Initializing from a sequence does not trap on
>> overflow because we can’t guarantee the size of the sequence ahead of time.
>> When I talk about consistent overflow/underflow semantics, I’m only talking
>> about initializing one unsafe buffer/pointer from another unsafe
>> buffer/pointer.
>>
>> Also, was there a reason why UnsafeMutableRawBufferPoin
>> ter.copyBytes(from:) uses the source’s count instead of its own? Right
>> now this behavior is “technically” undocumented behavior (as the public
>> docs haven’t been updated) so if there was ever a time to change it, now
>> would be it.
>>
>>
>> Mainly because partial initialization is more expected than dropping data
>> on the floor. Ultimately, this should be whatever typical developers would
>> expect the behavior to be. I would be very hesitant to change the behavior
>> now though.
>>
>> -Andy
>>
>
> The problem is I would expect to be able to safely call deinitialize() and
> friends after calling initialize(from:). If Element is a class type and
> initialize doesn’t fill the entire buffer range, calling deinitialize()
> will crash. That being said, since copy(from:bytes:) and copyBytes(from:)
> don’t do any initialization and have no direct counterparts in
> UnsafeMutableBufferPointer, it’s okay if they have different behavior than
> the other methods.
>
>
> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize()
> method is dangerous, and I asked you to add a warning to its comments.
> However, given the danger, I think we need to justify adding the method to
> begin with. Are there real use cases that greatly benefit from it?
>

I agree that’s a problem, which is why I was iffy on supporting partial
initialization to begin with. The use case is for things like growing
collections where you have to periodically move to larger storage. However,
deinitialize is no more dangerous than moveInitialize,
assign(repeating:count:), or moveAssign; they all deinitialize at least one
entire buffer. If deinitialize is to be omitted, so must a majority of the
unsafe pointer API.
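
As a rough sketch of that use case, using only the existing pointer-plus-count 
API (the helper below and its parameters are made up for illustration):

    // Move `count` initialized elements from `old` (allocated with `oldCapacity`)
    // into a larger allocation and return the new storage.
    func grow<Element>(_ old: UnsafeMutablePointer<Element>,
                       count: Int, oldCapacity: Int, newCapacity: Int)
        -> UnsafeMutablePointer<Element>
    {
        let new = UnsafeMutablePointer<Element>.allocate(capacity: newCapacity)
        // moveInitialize deinitializes the source elements as it initializes the
        // destination, so afterwards only `new` holds initialized elements.
        new.moveInitialize(from: old, count: count)
        old.deallocate(capacity: oldCapacity)   // Swift 4 spelling of deallocation
        return new
    }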

Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
I agree it’s probably a bad idea to add the default arg to those two
functions. However, the default argument in initialize(repeating:count:) is
there for backwards compatibility, since it already had one before and
there are something like a hundred places in the stdlib that use this default value.

On Sat, Aug 19, 2017 at 6:02 PM, Andrew Trick  wrote:

>
> On Aug 15, 2017, at 9:47 PM, Taylor Swift via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Implementation is here: https://github.com/apple/swift/pull/11464
>
> On Sat, Aug 12, 2017 at 8:23 PM, Taylor Swift 
> wrote:
>
>> I’ve revised the proposal based on what I learned from trying to
>> implement these changes. I think it’s worth tacking the existing methods
>> that take Sequences at the same time as this actually makes the design a
>> bit simpler.
>> 
>>
>> *The previous version
>>  of this
>> document ignored the generic initialization methods on
>> UnsafeMutableBufferPointer and UnsafeMutableRawBufferPointer, leaving them
>> to be overhauled at a later date, in a separate proposal. Instead, this
>> version of the proposal leverages those existing methods to inform a more
>> compact API design which has less surface area, and is more future-proof
>> since it obviates the need to design and add another (redundant) set of
>> protocol-oriented pointer APIs later.*
>>
>> On Tue, Aug 8, 2017 at 12:52 PM, Taylor Swift 
>> wrote:
>>
>>> Since Swift 5 just got opened up for proposals, SE-184 Improved Pointers
>>> is ready for community review, and I encourage everyone to look it over and
>>> provide feedback. Thank you!
>>> >> als/0184-improved-pointers.md>
>>>
>>
> Thanks for continuing to improve this proposal. It’s in great shape now.
>
> Upon rereading it today I have to say I strongly object to the `count = 1`
> default in the following two cases:
>
> + UnsafeMutablePointer.withMemoryRebound(to: count: Int = 1)
> + UnsafeMutableRawPointer.bindMemory(to:T.Type, count:Int = 1)
>   -> UnsafeMutablePointer
>
> To aid understanding, it needs to be clear at the call-site that binding
> memory only applies to the specified number of elements. It's a common
> mistake for users to think they can obtain a pointer to a different type,
> then use that pointer as a base to access other elements. These APIs are
> dangerous expert interfaces. We certainly don't want to make their usage
> more concise at the expense of clarity.
>
> In general, I think there's very little value in the `count=1` default,
> and it creates potential confusion on the caller side between the
> `BufferPointer` API and the `Pointer` API. For example:
>
> + initialize(repeating:Pointee, count:Int = 1)
>
> Seeing `p.initialize(repeating: x)`, the user may think `p` refers to the
> buffer instead of a pointer into the buffer and misunderstand the behavior.
>
> + UnsafeMutablePointer.deinitialize(count: Int = 1)
>
> Again, `p.deinitialize()` looks to me like it might be deinitializing an
> entire buffer.
>
> If the `count` label is always explicit, then there's a clear distinction
> between the low-level `pointer` APIs and the `buffer` APIs.
>
> The pointer-to-single-element case never seemed interesting enough to me
> to worry about making convenient. If I'm wrong about that, is there some
> real-world code you can point to where the count=1 default significantly
> improves clarity?
>
> -Andy
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution
>> 
>> The problem is I would expect to be able to safely call deinitialize() and 
>> friends after calling initialize(from:). If Element is a class type and 
>> initialize doesn’t fill the entire buffer range, calling deinitialize() will 
>> crash. That being said, since copy(from:bytes:) and copyBytes(from:) don’t 
>> do any initialization and have no direct counterparts in 
>> UnsafeMutableBufferPointer, it’s okay if they have different behavior than 
>> the other methods.
> 
> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize() 
> method is dangerous, and I asked you to add a warning to its comments. 
> However, given the danger, I think we need to justify adding the method to 
> begin with. Are there real use cases that greatly benefit from it?
> 
> I agree that’s a problem, which is why i was iffy on supporting partial 
> initialization to begin with. The use case is for things like growing 
> collections where you have to periodically move to larger storage. However, 
> deinitialize is no more dangerous than moveInitialize, 
> assign(repeating:count:), or moveAssign; they all deinitialize at least one 
> entire buffer. If deinitialize is to be omitted, so must a majority of the 
> unsafe pointer API.

Here's an alternative. Impose the precondition(source.count == self.count) on 
the following UnsafeMutableBufferPointer convenience methods that you propose 
adding:

+++ func assign(from:UnsafeBufferPointer<Element>)
+++ func assign(from:UnsafeMutableBufferPointer<Element>)
+++ func moveAssign(from:UnsafeMutableBufferPointer<Element>)
+++ func moveInitialize(from:UnsafeMutableBufferPointer<Element>)
+++ func initialize(from:UnsafeBufferPointer<Element>)
+++ func initialize(from:UnsafeMutableBufferPointer<Element>)

I don't think that introduces any behavior that is inconsistent with other methods. 
`copyBytes` is a totally different thing that only works on trivial types. The 
currently dominant use case for UnsafeBufferPointer, partially initialized 
backing store, does not need to use your new convenience methods. It can 
continue dropping down to pointer+count style initialization/deinitialization.
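
To make the intended contract concrete, here is a sketch of how the whole-buffer 
form would be used; allocate(capacity:), initialize(from:), deinitialize() and 
deallocate() on the buffer are the methods under discussion, not shipping API:

    let source: [Int] = [1, 2, 3, 4]
    source.withUnsafeBufferPointer { src in
        let dst = UnsafeMutableBufferPointer<Int>.allocate(capacity: src.count)
        // With precondition(source.count == self.count), the whole buffer is
        // initialized by this one call...
        dst.initialize(from: src)
        // ...so a later blanket deinitialize() is known to be safe.
        dst.deinitialize()
        dst.deallocate()
    }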

-Andy
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
On Sat, Aug 19, 2017 at 8:52 PM, Andrew Trick  wrote:

>
>> The problem is I would expect to be able to safely call deinitialize()
>> and friends after calling initialize(from:). If Element is a class type and
>> initialize doesn’t fill the entire buffer range, calling deinitialize()
>> will crash. That being said, since copy(from:bytes:) and copyBytes(from:)
>> don’t do any initialization and have no direct counterparts in
>> UnsafeMutableBufferPointer, it’s okay if they have different behavior than
>> the other methods.
>>
>>
>> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize()
>> method is dangerous, and I asked you to add a warning to its comments.
>> However, given the danger, I think we need to justify adding the method to
>> begin with. Are there real use cases that greatly benefit from it?
>>
>
> I agree that’s a problem, which is why i was iffy on supporting partial
> initialization to begin with. The use case is for things like growing
> collections where you have to periodically move to larger storage. However,
> deinitialize is no more dangerous than moveInitialize,
> assign(repeating:count:), or moveAssign; they all deinitialize at least one
> entire buffer. If deinitialize is to be omitted, so must a majority of the
> unsafe pointer API.
>
>
> Here's an alternative. Impose the precondition(source.count == self.count)
> to the following UnsafeMutableBufferPointer convenience methods that you
> propose adding:
>
> +++ func assign(from:UnsafeBufferPointer)
> +++ func assign(from:UnsafeMutableBufferPointer)
> +++ func moveAssign(from:UnsafeMutableBufferPointer)
> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
> +++ func initialize(from:UnsafeBufferPointer)
> +++ func initialize(from:UnsafeMutableBufferPointer)
>
> I don't that introduces any behavior that is inconsistent with other
> methods. `copyBytes` is a totally different thing that only works on
> trivial types. The currently dominant use case for UnsafeBufferPointer,
> partially initialized backing store, does not need to use your new
> convenience methods. It can continue dropping down to pointer+count style
> initialization/deinitialization.
>
> -Andy
>

the latest draft does not have assign(from:UnsafeMutableBufferPointer<Element>)
or initialize(from:UnsafeMutableBufferPointer<Element>); it uses
the generic Sequence methods that are already there, which do not require
that precondition.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 19, 2017, at 6:03 PM, Taylor Swift  wrote:
> 
> 
> 
> On Sat, Aug 19, 2017 at 8:52 PM, Andrew Trick  > wrote:
>>> 
>>> The problem is I would expect to be able to safely call deinitialize() and 
>>> friends after calling initialize(from:). If Element is a class type and 
>>> initialize doesn’t fill the entire buffer range, calling deinitialize() 
>>> will crash. That being said, since copy(from:bytes:) and copyBytes(from:) 
>>> don’t do any initialization and have no direct counterparts in 
>>> UnsafeMutableBufferPointer, it’s okay if they have different behavior than 
>>> the other methods.
>> 
>> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize() 
>> method is dangerous, and I asked you to add a warning to its comments. 
>> However, given the danger, I think we need to justify adding the method to 
>> begin with. Are there real use cases that greatly benefit from it?
>> 
>> I agree that’s a problem, which is why i was iffy on supporting partial 
>> initialization to begin with. The use case is for things like growing 
>> collections where you have to periodically move to larger storage. However, 
>> deinitialize is no more dangerous than moveInitialize, 
>> assign(repeating:count:), or moveAssign; they all deinitialize at least one 
>> entire buffer. If deinitialize is to be omitted, so must a majority of the 
>> unsafe pointer API.
> 
> Here's an alternative. Impose the precondition(source.count == self.count) to 
> the following UnsafeMutableBufferPointer convenience methods that you propose 
> adding:
> 
> +++ func assign(from:UnsafeBufferPointer)
> +++ func assign(from:UnsafeMutableBufferPointer)
> +++ func moveAssign(from:UnsafeMutableBufferPointer)
> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
> +++ func initialize(from:UnsafeBufferPointer)
> +++ func initialize(from:UnsafeMutableBufferPointer)
> 
> I don't that introduces any behavior that is inconsistent with other methods. 
> `copyBytes` is a totally different thing that only works on trivial types. 
> The currently dominant use case for UnsafeBufferPointer, partially 
> initialized backing store, does not need to use your new convenience methods. 
> It can continue dropping down to pointer+count style 
> initialization/deinitialization.
> 
> -Andy
>  
> the latest draft does not have 
> assign(from:UnsafeMutableBufferPointer) or  
> initialize(from:UnsafeMutableBufferPointer), it uses the generic 
> Sequence methods that are already there that do not require that precondition.

Sorry, I was pasting from your original proposal. Here are the relevant methods 
from the latest draft:

https://github.com/kelvin13/swift-evolution/blob/1b7738513c00388b8de3b09769eab773539be386/proposals/0184-improved-pointers.md

+++ func moveInitialize(from:UnsafeMutableBufferPointer<Element>)
+++ func moveAssign(from:UnsafeMutableBufferPointer<Element>)

But with the precondition, the `assign` method could be reasonably added back, 
right?
+++ func assign(from:UnsafeMutableBufferPointer<Element>)

Likewise, I don’t have a problem with initialize(from: UnsafeBufferPointer<Element>) 
where self.count==source.count. The Sequence initializer is different. It’s 
designed for the Array use case and forces the caller to deal with partial 
initialization. 
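
For contrast, a small sketch of the existing Sequence-based initializer and the 
partial-initialization bookkeeping it pushes onto the caller (the (iterator, 
index) return shape is the Swift 4 API as I understand it):

    let capacity = 8
    let base = UnsafeMutablePointer<Int>.allocate(capacity: capacity)
    let buffer = UnsafeMutableBufferPointer(start: base, count: capacity)

    // Only three of the eight slots end up initialized.
    let (_, initializedUpTo) = buffer.initialize(from: [1, 2, 3])

    // The caller has to remember how much was initialized to clean up correctly.
    base.deinitialize(count: initializedUpTo)
    base.deallocate(capacity: capacity)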

UnsafeMutableRawBufferPointer.moveInitializeMemory on the other hand probably 
doesn't need that precondition since there's no way to deinitialize. It just 
needs clear comments.

-Andy
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 19, 2017, at 5:33 PM, Taylor Swift  wrote:
> 
> I agree it’s probably a bad idea to add the default arg to those two 
> functions. However, the default argument in initialize(repeating:count:) is 
> there for backwards compatibility since it already had it before and there’s 
> like a hundred places in the stdlib that use this default value.

Alright, I could agree to that if no one else wants to weigh in. As long as you 
remove the default from the memory binding API.

-Andy

> On Sat, Aug 19, 2017 at 6:02 PM, Andrew Trick  > wrote:
> 
>> On Aug 15, 2017, at 9:47 PM, Taylor Swift via swift-evolution 
>> mailto:swift-evolution@swift.org>> wrote:
>> 
>> Implementation is here: https://github.com/apple/swift/pull/11464 
>> 
>> 
>> On Sat, Aug 12, 2017 at 8:23 PM, Taylor Swift > > wrote:
>> I’ve revised the proposal based on what I learned from trying to implement 
>> these changes. I think it’s worth tacking the existing methods that take 
>> Sequences at the same time as this actually makes the design a bit simpler.
>> > >
>> 
>> The previous version 
>>  of this 
>> document ignored the generic initialization methods on 
>> UnsafeMutableBufferPointer and UnsafeMutableRawBufferPointer, leaving them 
>> to be overhauled at a later date, in a separate proposal. Instead, this 
>> version of the proposal leverages those existing methods to inform a more 
>> compact API design which has less surface area, and is more future-proof 
>> since it obviates the need to design and add another (redundant) set of 
>> protocol-oriented pointer APIs later.
>> 
>> On Tue, Aug 8, 2017 at 12:52 PM, Taylor Swift > > wrote:
>> Since Swift 5 just got opened up for proposals, SE-184 Improved Pointers is 
>> ready for community review, and I encourage everyone to look it over and 
>> provide feedback. Thank you!
>> >  
>> >
> 
> 
> Thanks for continuing to improve this proposal. It’s in great shape now.
> 
> Upon rereading it today I have to say I strongly object to the `count = 1` 
> default in the following two cases:
> 
> + UnsafeMutablePointer.withMemoryRebound(to: count: Int = 1)
> + UnsafeMutableRawPointer.bindMemory(to:T.Type, count:Int = 1)
>   -> UnsafeMutablePointer
> 
> To aid understanding, it needs to be clear at the call-site that binding 
> memory only applies to the specified number of elements. It's a common 
> mistake for users to think they can obtain a pointer to a different type, 
> then use that pointer as a base to access other elements. These APIs are 
> dangerous expert interfaces. We certainly don't want to make their usage more 
> concise at the expense of clarity.
> 
> In general, I think there's very little value in the `count=1` default, and 
> it creates potential confusion on the caller side between the `BufferPointer` 
> API and the `Pointer` API. For example:
> 
> + initialize(repeating:Pointee, count:Int = 1)
> 
> Seeing `p.initialize(repeating: x)`, the user may think `p` refers to the 
> buffer instead of a pointer into the buffer and misunderstand the behavior.
> 
> + UnsafeMutablePointer.deinitialize(count: Int = 1)
> 
> Again, `p.deinitialize()` looks to me like it might be deinitializing an 
> entire buffer.
> 
> If the `count` label is always explicit, then there's a clear distinction 
> between the low-level `pointer` APIs and the `buffer` APIs.
> 
> The pointer-to-single-element case never seemed interesting enough to me to 
> worry about making convenient. If I'm wrong about that, is there some 
> real-world code you can point to where the count=1 default significantly 
> improves clarity?
> 
> -Andy
> 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
What you’re describing is basically an earlier version of the proposal,
which had a slightly weaker precondition (source.count >= destination.count)
than yours (source.count == destination.count). That version ignored the
Sequence methods at the expense of greater API surface area.

On Sat, Aug 19, 2017 at 9:08 PM, Andrew Trick  wrote:

>
> On Aug 19, 2017, at 6:03 PM, Taylor Swift  wrote:
>
>
>
> On Sat, Aug 19, 2017 at 8:52 PM, Andrew Trick  wrote:
>
>>
>>> The problem is I would expect to be able to safely call deinitialize()
>>> and friends after calling initialize(from:). If Element is a class type and
>>> initialize doesn’t fill the entire buffer range, calling deinitialize()
>>> will crash. That being said, since copy(from:bytes:) and copyBytes(from:)
>>> don’t do any initialization and have no direct counterparts in
>>> UnsafeMutableBufferPointer, it’s okay if they have different behavior than
>>> the other methods.
>>>
>>>
>>> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize()
>>> method is dangerous, and I asked you to add a warning to its comments.
>>> However, given the danger, I think we need to justify adding the method to
>>> begin with. Are there real use cases that greatly benefit from it?
>>>
>>
>> I agree that’s a problem, which is why i was iffy on supporting partial
>> initialization to begin with. The use case is for things like growing
>> collections where you have to periodically move to larger storage. However,
>> deinitialize is no more dangerous than moveInitialize,
>> assign(repeating:count:), or moveAssign; they all deinitialize at least one
>> entire buffer. If deinitialize is to be omitted, so must a majority of the
>> unsafe pointer API.
>>
>>
>> Here's an alternative. Impose the precondition(source.count ==
>> self.count) to the following UnsafeMutableBufferPointer convenience methods
>> that you propose adding:
>>
>> +++ func assign(from:UnsafeBufferPointer)
>> +++ func assign(from:UnsafeMutableBufferPointer)
>> +++ func moveAssign(from:UnsafeMutableBufferPointer)
>> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
>> +++ func initialize(from:UnsafeBufferPointer)
>> +++ func initialize(from:UnsafeMutableBufferPointer)
>>
>> I don't that introduces any behavior that is inconsistent with other
>> methods. `copyBytes` is a totally different thing that only works on
>> trivial types. The currently dominant use case for UnsafeBufferPointer,
>> partially initialized backing store, does not need to use your new
>> convenience methods. It can continue dropping down to pointer+count style
>> initialization/deinitialization.
>>
>> -Andy
>>
>
> the latest draft does not have assign(from:UnsafeMutableBufferPointer<
> Element>) or  initialize(from:UnsafeMutableBufferPointer), it
> uses the generic Sequence methods that are already there that do not
> require that precondition.
>
>
> Sorry, I was pasting from your original proposal. Here are the relevant
> methods from the latest draft:
>
> https://github.com/kelvin13/swift-evolution/blob/
> 1b7738513c00388b8de3b09769eab773539be386/proposals/0184-
> improved-pointers.md
>
> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
> +++ func moveAssign(from:UnsafeMutableBufferPointer)
>
> But with the precondition, the `assign` method could be reasonably added
> back, right?
> +++ func assign(from:UnsafeMutableBufferPointer)
>
> Likewise, I don’t have a problem with initialize(from:
> UnsafeBufferPointer) where self.count==source.count. The Sequence
> initializer is different. It’s designed for the Array use case and forces
> the caller to deal with partial initialization.
>
> UnsafeMutableRawBufferPointer.moveInitializeMemory on the other hand
> probably doesn't need that precondition since there's no way to
> deinitialize. It just needs clear comments.
>
> -Andy
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 19, 2017, at 6:16 PM, Taylor Swift  wrote:
> 
> What you’re describing is basically an earlier version of the proposal which 
> had a slightly weaker precondition (source >= destination) than yours (source 
> == destination). That one basically ignored the Sequence methods at the 
> expense of greater API surface area.

The Sequence methods don’t provide the simpler, more convenient form of 
initialization/deinitialization that I thought you wanted. I see two reasonable 
options.

1. Don’t provide any new buffer initialization/deinitialization convenience. 
i.e. drop UnsafeMutableBufferPointer moveInitialize, moveAssign, and 
deinitialize from your proposal.

2. Provide the full set of convenience methods: initialize, assign, 
moveInitialize, and moveAssign assuming self.count==source.count. And provide 
deinitialize() to be used only in conjunction with those new initializers.

The question is really whether those new methods are going to significantly 
simplify your code. If not, #1 is the conservative choice. Don't provide 
convenience which could be misused. Put off solving that problem until we can 
design a new move-only buffer type that tracks partially initialized state.

-Andy 

> On Sat, Aug 19, 2017 at 9:08 PM, Andrew Trick  > wrote:
> 
>> On Aug 19, 2017, at 6:03 PM, Taylor Swift > > wrote:
>> 
>> 
>> 
>> On Sat, Aug 19, 2017 at 8:52 PM, Andrew Trick > > wrote:
 
 The problem is I would expect to be able to safely call deinitialize() and 
 friends after calling initialize(from:). If Element is a class type and 
 initialize doesn’t fill the entire buffer range, calling deinitialize() 
 will crash. That being said, since copy(from:bytes:) and copyBytes(from:) 
 don’t do any initialization and have no direct counterparts in 
 UnsafeMutableBufferPointer, it’s okay if they have different behavior than 
 the other methods.
>>> 
>>> You astutely pointed out that the UnsafeMutableBufferPointer.deinitialize() 
>>> method is dangerous, and I asked you to add a warning to its comments. 
>>> However, given the danger, I think we need to justify adding the method to 
>>> begin with. Are there real use cases that greatly benefit from it?
>>> 
>>> I agree that’s a problem, which is why i was iffy on supporting partial 
>>> initialization to begin with. The use case is for things like growing 
>>> collections where you have to periodically move to larger storage. However, 
>>> deinitialize is no more dangerous than moveInitialize, 
>>> assign(repeating:count:), or moveAssign; they all deinitialize at least one 
>>> entire buffer. If deinitialize is to be omitted, so must a majority of the 
>>> unsafe pointer API.
>> 
>> Here's an alternative. Impose the precondition(source.count == self.count) 
>> to the following UnsafeMutableBufferPointer convenience methods that you 
>> propose adding:
>> 
>> +++ func assign(from:UnsafeBufferPointer)
>> +++ func assign(from:UnsafeMutableBufferPointer)
>> +++ func moveAssign(from:UnsafeMutableBufferPointer)
>> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
>> +++ func initialize(from:UnsafeBufferPointer)
>> +++ func initialize(from:UnsafeMutableBufferPointer)
>> 
>> I don't that introduces any behavior that is inconsistent with other 
>> methods. `copyBytes` is a totally different thing that only works on trivial 
>> types. The currently dominant use case for UnsafeBufferPointer, partially 
>> initialized backing store, does not need to use your new convenience 
>> methods. It can continue dropping down to pointer+count style 
>> initialization/deinitialization.
>> 
>> -Andy
>>  
>> the latest draft does not have 
>> assign(from:UnsafeMutableBufferPointer) or  
>> initialize(from:UnsafeMutableBufferPointer), it uses the generic 
>> Sequence methods that are already there that do not require that 
>> precondition.
> 
> Sorry, I was pasting from your original proposal. Here are the relevant 
> methods from the latest draft:
> 
> https://github.com/kelvin13/swift-evolution/blob/1b7738513c00388b8de3b09769eab773539be386/proposals/0184-improved-pointers.md
>  
> 
> 
> +++ func moveInitialize(from:UnsafeMutableBufferPointer)
> +++ func moveAssign(from:UnsafeMutableBufferPointer)
> 
> But with the precondition, the `assign` method could be reasonably added 
> back, right?
> +++ func assign(from:UnsafeMutableBufferPointer)
> 
> Likewise, I don’t have a problem with initialize(from: UnsafeBufferPointer) 
> where self.count==source.count. The Sequence initializer is different. It’s 
> designed for the Array use case and forces the caller to deal with partial 
> initialization. 
> 
> UnsafeMutableRawBufferPointer.moveInitializeMemory on the other hand probably 
> doesn't need that precondition since there's no way to deinitialize. It 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 2:25 AM, Thomas  wrote:
> 
>> I think we need to be a little careful here—the mere fact that a message 
>> returns `Void` doesn't mean the caller shouldn't wait until it's done to 
>> continue. For instance:
>> 
>>  listActor.delete(at: index) // Void, so it 
>> doesn't wait
>>  let count = await listActor.getCount()  // But we want the count 
>> *after* the deletion!
> 
> In fact this will just work. Because both messages happen on the actor's 
> internal serial queue, the "get count" message will only happen after the 
> deletion. Therefore the "delete" message can return immediately to the caller 
> (you just need the dispatch call on the queue to be made).

Suppose `delete(at:)` needs to do something asynchronous, like ask a server to 
do the deletion. Is processing of other messages to the actor suspended until 
it finishes? (Maybe the answer is "yes"—I don't have experience with proper 
actors.)

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
On Sat, Aug 19, 2017 at 9:31 PM, Andrew Trick  wrote:

>
> On Aug 19, 2017, at 6:16 PM, Taylor Swift  wrote:
>
> What you’re describing is basically an earlier version of the proposal
> which had a slightly weaker precondition (source >= destination) than yours
> (source == destination). That one basically ignored the Sequence methods at
> the expense of greater API surface area.
>
>
> The Sequence methods don’t provide the simpler, more convenient form of
> initialization/deinitialization that I thought you wanted. I see two
> reasonable options.
>
> 1. Don’t provide any new buffer initialization/deinitialization
> convenience. i.e. drop UsafeMutableBufferPointer moveInitialize,
> moveAssign, and deinitialize from your proposal.
>
> 2. Provide the full set of convenience methods: initialize, assign,
> moveInitialize, and moveAssign assuming self.count==source.count. And
> provide deinitialize() to be used only in conjunction with those new
> initializers.
>
> The question is really whether those new methods are going to
> significantly simplify your code. If not, #1 is the conservative choice.
> Don't provide convenience which could be misused. Put off solving that
> problem until we can design a new move-only buffer type that tracks
> partially initialized state.
>
> -Andy
>
>
I’m not sure the answer is to just omit methods from
UnsafeMutableBufferPointer since most of the original complaints circulated
around having to un-nil baseAddress to do anything with them.

What if only the unary methods were added to UnsafeMutableBufferPointer
without count:, meaning:

initialize(repeating:)
assign(repeating:)
deinitialize()

and the other methods should take an *offset* parameter instead of a
count parameter:

initialize(from:at:)
assign(from:at:)
moveInitialize(from:at:)
moveAssign(from:at:)

which provides maximum explicitness. This requires improvements to buffer
pointer slicing though. But I’m not a fan of the mission creep that’s
working its way into this proposal (I only originally wrote the thing to get
allocate(capacity:) and deallocate() into UnsafeMutableBufferPointer!)
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Karim Nassar via swift-evolution

> On Aug 19, 2017, at 7:17 PM, Chris Lattner  wrote:
> 
> 
>> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>> mailto:swift-evolution@swift.org>> wrote:
>> 
>> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
>> Actors pattern looks really compelling.
>> 
>> One thought that occurred to me reading through the section of the 
>> "async/await" proposal on whether async implies throws:
>> 
>> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
>> to suppress the catch block with ?/!, does that mean we do it on the ‘await’ 
>> ? 
>> 
>> guard let foo = await? getAFoo() else {  …  }
> 
> Interesting question, I’d lean towards “no, we don’t want await? and await!”. 
>  My sense is that the try? and try! forms are only occasionally used, and 
> await? implies heavily that the optional behavior has something to do with 
> the async, not with the try.  I think it would be ok to have to write “try? 
> await foo()” in the case that you’d want the thrown error to turn into an 
> optional.  That would be nice and explicit.
> 
> -Chris
> 


I’d be curious to see numbers on the prevalence of try?/! across various kinds 
of codebases (i.e.: library code, app code, CLI utils, etc). I don’t use try? 
all that much, but I have used it. I also have found (IMHO) legitimate uses for 
try! in my unit tests where I want the test logic to be brittle with respect to 
expected conditions surrounding the test—to fail immediately if those 
conditions are not as I intend them to be.

If we can write "try? await foo()” (which I think is the right way to position 
this), will we be able to also write "try await foo()” ?  I’d hope that the 
latter wouldn’t be prohibited… as I think it would just make the former less 
discoverable. If I’m learning Swift and I see “try? await” on one line, and 
nearby just “await” it’s easier to assume that the first can produce an error 
and the second can’t, no?

If I’m honest, I *think* I’d rather we'd always have to be explicit about both 
try and await… yes it’s more to type, but it’s also a lot clearer about what 
exactly is happening. And if I always use "do try catch” together it forms a 
consistent pattern that’s easier to read and easier to learn. If sometimes it’s 
“do try catch” and sometimes "do await catch” and sometimes just “await" it 
seems to me we’ve lost some clarity and it’s harder for me to compose my 
understanding of the rules.

On the other hand, I can’t say I’ve given this a lot of deep thought either :)

—Karim

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 3:23 AM, Georgios Moschovitis via swift-evolution 
>  wrote:
> 
> I am wondering, am I the only one that *strongly* prefers `yield` over 
> `await`?
> 
> Superficially, `await` seems like the standard term, but given the fact that 
> the proposal is about coroutines, I think `yield` is actually the proper 
> name. Also, subjectively, it sounds much better/elegant to me!


Swift tends to take a pragmatic view of this kind of thing, naming features 
after their common uses rather than their formal names. For instance, there's 
no technical reason you *have* to use the error-handling features for 
errors—you could use them for routine but "special" return values like breaking 
out of a loop—but we still name things like the `Error` protocol and the `try` 
keyword in ways that emphasize their use for errors.

This feature is about coroutines, sure, but it's a coroutine feature strongly 
skewed towards use for asynchronous calls, so we prefer syntax that emphasizes 
its async-ness. When you're reading the code, the fact that you're calling a 
coroutine is not important; what's important is that the code may pause for a 
while during a given expression and run other stuff in the meantime. `await` 
says that more clearly than `yield` would.

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Andrew Trick via swift-evolution

> On Aug 19, 2017, at 6:42 PM, Taylor Swift  wrote:
> 
> 
> 
> On Sat, Aug 19, 2017 at 9:31 PM, Andrew Trick  > wrote:
> 
>> On Aug 19, 2017, at 6:16 PM, Taylor Swift > > wrote:
>> 
>> What you’re describing is basically an earlier version of the proposal which 
>> had a slightly weaker precondition (source >= destination) than yours 
>> (source == destination). That one basically ignored the Sequence methods at 
>> the expense of greater API surface area.
> 
> The Sequence methods don’t provide the simpler, more convenient form of 
> initialization/deinitialization that I thought you wanted. I see two 
> reasonable options.
> 
> 1. Don’t provide any new buffer initialization/deinitialization convenience. 
> i.e. drop UsafeMutableBufferPointer moveInitialize, moveAssign, and 
> deinitialize from your proposal.
> 
> 2. Provide the full set of convenience methods: initialize, assign, 
> moveInitialize, and moveAssign assuming self.count==source.count. And provide 
> deinitialize() to be used only in conjunction with those new initializers.
> 
> The question is really whether those new methods are going to significantly 
> simplify your code. If not, #1 is the conservative choice. Don't provide 
> convenience which could be misused. Put off solving that problem until we can 
> design a new move-only buffer type that tracks partially initialized state.
> 
> -Andy 
> 
> 
> I’m not sure the answer is to just omit methods from 
> UnsafeMutableBufferPointer since most of the original complaints circulated 
> around having to un-nil baseAddress to do anything with them.

I know un-nil’ing baseAddress is horrible, but I don’t think working around 
that is an important goal yet. Eventually there will be a much safer, more 
convenient mechanism for manual allocation that doesn’t involve “pointers". I 
also considered adding API surface to UnsafeMutableBufferPointer.Slice, but 
that’s beyond what we should do now and may also become irrelevant when we have 
a more sophisticated buffer type. 

> What if only unary methods should be added to UnsafeMutableBufferPointer 
> without count:, meaning:
> 
> initialize(repeating:)

I actually have no problem with this one... except that it could be confused 
with UnsafeMutablePointer.initialize(repeating:), but I’ll ignore that since we 
already discussed it.

> assign(repeating:)
> deinitialize()

These are fine only if we have use cases that warrant them AND those use cases 
are expected to fully initialize the buffer, either via initialize(repeating:) 
or initialize(from: buffer) with precondition(source.count==self.count). They 
don’t really make sense for the use case that I’m familiar with. Without clear 
motivating code patterns, they aren’t worth the risk. “API Completeness” 
doesn’t have intrinsic value.

> and the other methods should take both an offset parameter instead of a count 
> parameter:
> 
> initialize(from:at:)
> assign(from:at:)
> moveInitialize(from:at:)
> moveAssign(from:at:)
> 
> which provides maximum explicitness. This requires improvements to buffer 
> pointer slicing though. But I’m not a fan of the mission creep that’s working 
> into this proposal (i only originally wrote the thing to get 
> allocate(capacity:) and deallocate() into UnsafeMutableBufferPointer!)

I’m open to that, with source.count <= self.count + index. They are potentially 
ambiguous (the `at` could refer to a source index) but consistent with the idea 
that this API is for copying an entire source buffer into a slice of the 
destination buffer. Again, we need to find real code that benefits from this, 
but I expect the stdlib could use these.
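
A purely hypothetical sketch of that shape, treating `at` as an offset into the 
destination buffer (the label and the precondition spelling are assumptions, not 
proposal text):

    extension UnsafeMutableBufferPointer {
        func initialize(from source: UnsafeBufferPointer<Element>, at offset: Int) {
            precondition(offset >= 0 && source.count <= count - offset,
                         "source buffer does not fit at the given offset")
            guard let src = source.baseAddress, let dst = baseAddress else { return }
            (dst + offset).initialize(from: src, count: source.count)
        }
    }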

-Andy
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 7:41 AM, Matthew Johnson  wrote:
> 
> Regardless of which approach we take, it feels like something that needs to 
> be implicit for structs and enums where value semantics is trivially provable 
> by way of transitivity. When that is not the case we could require an 
> explicit `value` or `nonvalue` annotation (specific keywords subject to 
> bikeshedding of course).

There is no such thing as "trivially provable by way of transitivity". This 
type is comprised of only value types, and yet it has reference semantics:

struct EntryRef {
private var index: Int
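    // Note: `entries` below refers to some global mutable [Entry] store (not shown);
    // that shared state is what gives this struct reference semantics.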

var entry: Entry {
get { return entries[index] }
set { entries[index] = newValue }
}
}

This type is comprised of only reference types, and yet it has value semantics:

struct OpaqueToken: Equatable {
class Token {}
private let token: Token

static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
return lhs.token === rhs.token
}
}

I think it's better to have types explicitly declare that they have value 
semantics if they want to make that promise, and otherwise not have the 
compiler make any assumptions either way. Safety features should not be 
*guessing* that your code is safe. If you can somehow *prove* it safe, go 
ahead—but I don't see how that can work without a lot of manual annotations on 
bridged code.

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Thomas via swift-evolution

> On 20 Aug 2017, at 03:36, Brent Royal-Gordon  wrote:
> 
>> On Aug 19, 2017, at 2:25 AM, Thomas > > wrote:
>> 
>>> I think we need to be a little careful here—the mere fact that a message 
>>> returns `Void` doesn't mean the caller shouldn't wait until it's done to 
>>> continue. For instance:
>>> 
>>> listActor.delete(at: index) // Void, so it 
>>> doesn't wait
>>> let count = await listActor.getCount()  // But we want the count 
>>> *after* the deletion!
>> 
>> In fact this will just work. Because both messages happen on the actor's 
>> internal serial queue, the "get count" message will only happen after the 
>> deletion. Therefore the "delete" message can return immediately to the 
>> caller (you just need the dispatch call on the queue to be made).
> 
> Suppose `delete(at:)` needs to do something asynchronous, like ask a server 
> to do the deletion. Is processing of other messages to the actor suspended 
> until it finishes? (Maybe the answer is "yes"—I don't have experience with 
> proper actors.)

It seems like the answer should be "yes". But then how do you implement 
something like a cancel() method? I don't know how the actor model solves that 
problem.

Thomas

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 1:29 PM, Michel Fortin via swift-evolution 
>  wrote:
> 
> I'm not actually that interested in the meaning of value semantics here. I'm 
> debating the appropriateness of determining whether something can be done in 
> another thread based on the type a function is attached to. Because that's 
> what the ValueSemantical protocol wants to do. ValueSemantical, as a 
> protocol, is whitelisting the whole type while in reality it should only 
> vouch for a specific set of safe functions on that type.


To state more explicitly what I think you might be implying here: In principle, 
we could have developers annotate value-semantic *members* instead of 
value-semantic *types* and only allow value-semantic members to be used on 
parameters to an actor. But I worry this might spread through the type system 
like `const` in C++, forcing large numbers of APIs to annotate parameters with 
`value` and restrict themselves to value-only APIs just in case they happen to 
be used in an actor.

-- 
Brent Royal-Gordon
Architechies

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 9:33 PM, Brent Royal-Gordon  
> wrote:
> 
>> On Aug 19, 2017, at 7:41 AM, Matthew Johnson  wrote:
>> 
>> Regardless of which approach we take, it feels like something that needs to 
>> be implicit for structs and enums where value semantics is trivially 
>> provable by way of transitivity. When that is not the case we could require 
>> an explicit `value` or `nonvalue` annotation (specific keywords subject to 
>> bikeshedding of course).
> 
> There is no such thing as "trivially provable by way of transitivity". This 
> type is comprised of only value types, and yet it has reference semantics:
> 
>   struct EntryRef {
>   private var index: Int
>   
>   var entry: Entry {
>   get { return entries[index] }
>   set { entries[index] = newValue }
>   }
>   }

This type uses global mutable state in its implementation.  This is not hard 
for the compiler to detect and is pretty rare in most code.

> 
> This type is comprised of only reference types, and yet it has value 
> semantics:
> 
>   struct OpaqueToken: Equatable {
>   class Token {}
>   private let token: Token
>   
>   static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
>   return lhs.token === rhs.token
>   }
>   }

Yes, of course this is possible.  I believe this type should have to include an 
annotation declaring value semantics and should also need to annotate the 
`token` property with an acknowledgement that value semantics is being 
preserved by the implementation of the type despite this member not having 
value semantics.  The annotation on the property is to prevent bugs that might 
occur because the programmer didn't realize this type does not have value 
semantics.

> 
> I think it's better to have types explicitly declare that they have value 
> semantics if they want to make that promise, and otherwise not have the 
> compiler make any assumptions either way. Safety features should not be 
> *guessing* that your code is safe. If you can somehow *prove* it safe, go 
> ahead—but I don't see how that can work without a lot of manual annotations 
> on bridged code.

I agree with you that *public* types should have to declare that they have 
value semantics.  And I'm not suggesting we attempt to *prove* value semantics 
everywhere. 

I'm suggesting that the proportion of value types in most applications for 
which we can reasonably infer value semantics is pretty large.  If the stored 
properties of a value type all have value semantics and the implementation of 
the type does not use global mutable state, it has value semantics.
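
For example, a sketch of the kind of type that rule would cover automatically, 
next to one that would still need the explicit annotation (the `value` spelling 
is just the strawman from above):

    // Inferable: every stored property has value semantics and the implementation
    // touches no global mutable state.
    struct Temperature {
        var degrees: Double
        var scale: String
    }

    // A type like OpaqueToken above, whose only stored property is a class
    // reference, would instead need the explicit (strawman) `value` annotation.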

Whether we require annotation or not, value semantics will be decided by the 
declaring module.  If we don't infer it we'll end up having to write `value 
struct` and `value enum` a lot.  The design of Swift has been vigorous in 
avoiding keyword soup and I really believe that rule applies here.  The primary 
argument I can think of against inferring value semantics for non-public value 
types in these cases is if proving a type does not use global mutable state in 
its implementation would have too large an impact on build times.

> 
> -- 
> Brent Royal-Gordon
> Architechies
> 
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Accepted] SE-0185 - Synthesizing Equatable and Hashable conformance

2017-08-19 Thread David Ungar via swift-evolution
Chris Lattner wrote:

> Also, if I were to nitpick your argument a bit, it isn’t true that the 
> protocol knows “nothing" about the type anyway, because the protocol has 
> access to self.  The default implementation could conceptually use reflection 
> to access all the state in the type: we’re producing the same effect with 
> more efficient code.

I had to go back to first principles because it seems to violate the separation 
of reflection and base-level that has been a keystone of the mirrors I invented 
for Self. (When Gilad & I wrote our paper on this, he called this separation 
the principle of "stratification.") So, I thought about why I believed in that 
separation in the first place. I think the key thought was to insulate clients 
of an abstraction from changes in its implementation. 

I went back to the proposal and realized that it ensured such insulation by 
excluding extensions:

> Requirements will be synthesized only for protocol conformances that are part 
> of the type declaration itself; conformances added in extensions will not be 
> synthesized.

Also, the exclusion of classes avoids problems with inheritance.

Bottom line: The proposal won't create problems arising from coupling clients 
to implementations. The restrictions are crucial. When this new feature is 
documented (in the Swift book?), this decoupling might be helpful to motivate 
the restrictions.
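
Concretely, per the restriction quoted above (the types here are just placeholders):

    // Conformance declared on the type itself: Equatable/Hashable get synthesized.
    struct Point: Equatable, Hashable {
        var x: Int
        var y: Int
    }

    // Conformance declared in an extension: no synthesis; == is written by hand.
    struct Size {
        var width: Int
        var height: Int
    }
    extension Size: Equatable {
        static func == (lhs: Size, rhs: Size) -> Bool {
            return lhs.width == rhs.width && lhs.height == rhs.height
        }
    }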

- David
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] SE-184 Improved Pointers

2017-08-19 Thread Taylor Swift via swift-evolution
On Sat, Aug 19, 2017 at 10:28 PM, Andrew Trick  wrote:

>
> On Aug 19, 2017, at 6:42 PM, Taylor Swift  wrote:
>
>
>
> On Sat, Aug 19, 2017 at 9:31 PM, Andrew Trick  wrote:
>
>>
>> On Aug 19, 2017, at 6:16 PM, Taylor Swift  wrote:
>>
>> What you’re describing is basically an earlier version of the proposal
>> which had a slightly weaker precondition (source >= destination) than yours
>> (source == destination). That one basically ignored the Sequence methods at
>> the expense of greater API surface area.
>>
>>
>> The Sequence methods don’t provide the simpler, more convenient form of
>> initialization/deinitialization that I thought you wanted. I see two
>> reasonable options.
>>
>> 1. Don’t provide any new buffer initialization/deinitialization
>> convenience. i.e. drop UsafeMutableBufferPointer moveInitialize,
>> moveAssign, and deinitialize from your proposal.
>>
>> 2. Provide the full set of convenience methods: initialize, assign,
>> moveInitialize, and moveAssign assuming self.count==source.count. And
>> provide deinitialize() to be used only in conjunction with those new
>> initializers.
>>
>> The question is really whether those new methods are going to
>> significantly simplify your code. If not, #1 is the conservative choice.
>> Don't provide convenience which could be misused. Put off solving that
>> problem until we can design a new move-only buffer type that tracks
>> partially initialized state.
>>
>> -Andy
>>
>>
> I’m not sure the answer is to just omit methods from
> UnsafeMutableBufferPointer since most of the original complaints
> circulated around having to un-nil baseAddress to do anything with them.
>
>
> I know un-nil’ing baseAddress is horrible, but I don’t think working
> around that is an important goal yet. Eventually there will be a much
> safer, more convenient mechanism for manual allocation that doesn’t involve
> “pointers". I also considered adding API surface to
> UnsafeMutableBufferPointer.Slice, but that’s beyond what we should do now
> and may also become irrelevant when we have a more sophisticated buffer
> type.
>
> What if only unary methods should be added to UnsafeMutableBufferPointer
> without count:, meaning:
>
> initialize(repeating:)
>
>
> I actually have no problem with this one... except that it could be
> confused with UnsafeMutablePointer.initialize(repeating:), but I’ll
> ignore that since we already discussed it.
>
> assign(repeating:)
> deinitialize()
>
>
> These are fine only if we have use cases that warrant them AND those use
> cases are expected to fully initialize the buffer, either via
> initialize(repeating:) or initialize(from: buffer) with
> precondition(source.count==self.count). They don’t really make sense for
> the use case that I’m familiar with. Without clear motivating code
> patterns, they aren’t worth the risk. “API Completeness” doesn’t have
> intrinsic value.
>

An example use for assign(repeating:) would be to zero out an image buffer.
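
For instance, something along these lines, assuming the buffer-level allocate, 
initialize(repeating:), assign(repeating:) and deinitialize() methods proposed 
here (the pixel layout is made up):

let width = 640, height = 480
// Everything below is the *proposed* SE-184 API, not the current stdlib.
let pixels = UnsafeMutableBufferPointer<UInt8>.allocate(
    capacity: width * height * 4)
pixels.initialize(repeating: 0)   // buffer starts fully initialized to zero
// ... render a frame into `pixels` ...
pixels.assign(repeating: 0)       // zero out the initialized buffer for reuse
pixels.deinitialize()             // a no-op for UInt8, shown for completeness
pixels.deallocate()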


>
> and the other methods should take both an *offset* parameter instead of a
> count parameter:
>
> initialize(from:at:)
> assign(from:at:)
> moveInitialize(from:at:)
> moveAssign(from:at:)
>
> which provides maximum explicitness. This requires improvements to buffer
> pointer slicing though. But I’m not a fan of the mission creep that’s
> working into this proposal (i only originally wrote the thing to get
> allocate(capacity:) and deallocate() into UnsafeMutableBufferPointer!)
>
>
> I’m open to that, with source.count <= self.count + index. They are
> potentially ambiguous (the `at` could refer to a source index) but
> consistent with the idea that this API is for copying an entire source
> buffer into a slice of the destination buffer. Again, we need to find real
> code that benefits from this, but I expect the stdlib could use these.
>
> -Andy
>

The more I think about it, the more I believe using from:at: is the right approach.
The only problem is that it would have to be written as a generic on
Collection or Sequence to avoid having to provide up to 4 overloads for
each operation, since we would want these to work well with buffer slices
as well as buffers themselves. That puts them uncomfortably close to the
turf of the existing buffer pointer Sequence API though.
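
To make that concrete, this is roughly the shape I have in mind (purely a 
sketch; the name, the Collection constraint and the precondition are all up 
for debate):

extension UnsafeMutableBufferPointer {
    // Copies all of `source` into self, starting at `position`.
    func initialize<C: Collection>(from source: C, at position: Int)
        where C.Element == Element {
        var i = position
        for element in source {
            precondition(i < count, "source does not fit in destination")
            // un-nil'ing baseAddress is fine here; it is hidden from the caller
            (baseAddress! + i).initialize(to: element)
            i += 1
        }
    }
}

assign(from:at:), moveInitialize(from:at:) and moveAssign(from:at:) would 
follow the same pattern; a Sequence-based variant would look the same, minus 
the ability to check the counts up front.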

Or we could make UnsafeMutableBufferPointer its own slice type. Right now
MutableRandomAccessSlice<UnsafeMutableBufferPointer<Element>> takes up 4
words of storage when it really only needs two.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors: cancellation

2017-08-19 Thread Jan Tuitman via swift-evolution
Hi Joe,

Thanks for the answers so far! 

Abrupt cancellation is indeed not a good idea, but I wonder whether it is 
possible, at every place where “await” is used, to let the compiler generate 
code which handles cancellation, assuming that can be made cheap enough (and I 
am not qualified to judge whether that is indeed the case).

Especially in the case where “await” implies “throws”, part of what you need 
for that is already in place. I imagine it would work like this:
f(x) -> T is compiled into something that looks like f(x, callback: (T) -> 
Void). What if this became f(x, process, callback), where process is a simple 
pointer that goes out of scope together with callback? The compiler can use 
this pointer to reach compiler-generated mutable state and check whether the 
top-level beginAsync { } in whose context the call is executing has been 
canceled. The compiler could generate this check whenever it is about to make 
a new await call at a deeper level, and if the check says the top level has 
been canceled, throw an error.
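
In pseudo-Swift, what I have in mind is roughly the following (everything here 
is hypothetical: the ProcessState type, the CancellationError, and the lowered 
signature are all made up for illustration):

final class ProcessState {
    var isCancelled = false   // flipped by the caller via process.cancel()
}

struct CancellationError: Error {}

// Hand-written sketch of what the compiler might emit for
// `func f(_ x: Int) async throws -> T`:
func f_lowered<T>(_ x: Int,
                  process: ProcessState,
                  callback: @escaping (T) -> Void,
                  errback: @escaping (Error) -> Void) {
    // Before suspending at each nested `await`, the generated code checks
    // whether the enclosing beginAsync has been cancelled:
    if process.isCancelled {
        errback(CancellationError())
        return
    }
    // ... otherwise continue with the usual callback-transformed body ...
}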

Would that introduce too much overhead? It does not seem to need references to 
the top level any longer than the callback needs to be kept alive.

I am asking this because once async/await is there, it will probably become 
very popular immediately, and the use case of having to abort a task from the 
same location where you started it is of course a very common one. Think of a 
view controller downloading some resources and then being moved off the screen 
by the user.

If everybody needs to wrap async/await in classes which handle the 
cancellation and share state with the tasks that can be cancelled, it might be 
cleaner to solve this problem in an invisible, standardized way. That way 
there is more separation between the code of the task and the code that starts 
and cancels it.

I assume that in the future actors are also going to need a way to tell each 
other that pending messages can be cancelled, so I think you will need 
something for cancellation in the end anyway.

For the programmer it would look like this:

var result
let process = beginAsync {
    result = await someSlowFunctionWithManyAwaitsInside(x)
}

// If the result is no longer needed:
process.cancel()
// This raises an error inside someSlowFunctionWithManyAwaitsInside the next
// time it reaches an await, but not while it is waiting or actively doing
// something. So it is also not guaranteed to cancel the function.



Regards,
Jan



> On 18 Aug 2017, at 21:04, Joe Groff  wrote:
> 
> 
>> On Aug 17, 2017, at 11:53 PM, Jan Tuitman via swift-evolution 
>>  wrote:
>> 
>> Hi,
>> 
>> 
>> After reading Chris Lattners proposal for async/await I wonder if the 
>> proposal has any way to cancel outstanding tasks.
>> 
>> I saw these two:
>> 
>> @IBAction func buttonDidClick(sender:AnyObject) {
>>   // 1
>>   beginAsync {
>>     // 2
>>     let image = await processImage()
>>     imageView.image = image
>>   }
>>   // 3
>> }
>> 
>> 
>> And:
>> 
>> /// Shut down the current coroutine and give its memory back to the
>> /// shareholders.
>> func abandon() async -> Never {
>>   await suspendAsync { _ = $0 }
>> }
>> 
>> 
>> Now, if I understand this correctly, the second thing is abandoning the task 
>> from the context of the task by basically preventing the implicit callback 
>> of abandon() from ever being called.
>> 
>> But I don't see any way how the beginAsync {} block can be canceled after a 
>> certain amount of time by the synchronous thread/context that is running at 
>> location //3
> 
> This is not something the proposal aims to support, and as you noted, abrupt 
> cancellation from outside a thread is not something you should generally do, 
> and which is not really possible to do robustly with cooperatively-scheduled 
> fibers like the coroutine proposal aims to provide. The section above is 
> making the factual observation that, in our model, a coroutine once suspended 
> can end up being dropped entirely by releasing all references to its 
> continuation, and discusses the impact that possibility has on the model. 
> This shouldn't be mistaken for proper cancellation support; as David noted, 
> that's something you should still code explicit support for if you need it.
> 
> -Joe
> 
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread David Hart via swift-evolution


> On 20 Aug 2017, at 01:17, Chris Lattner via swift-evolution 
>  wrote:
> 
> 
>> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>>  wrote:
>> 
>> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
>> Actors pattern looks really compelling.
>> 
>> One thought that occurred to me reading through the section of the 
>> "async/await" proposal on whether async implies throws:
>> 
>> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
>> to suppress the catch block with ?/!, does that mean we do it on the ‘await’ 
>> ? 
>> 
>> guard let foo = await? getAFoo() else {  …  }
> 
> Interesting question, I’d lean towards “no, we don’t want await? and await!”. 
>  My sense is that the try? and try! forms are only occasionally used, and 
> await? implies heavily that the optional behavior has something to do with 
> the async, not with the try.  I think it would be ok to have to write “try? 
> await foo()” in the case that you’d want the thrown error to turn into an 
> optional.  That would be nice and explicit.

That seems like an argument in favor of having async and throws as orthogonal 
concepts.
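
For what it's worth, spelled out (purely illustrative, assuming a hypothetical 
getAFoo() that is async but does not throw, and a getABar() that is async and 
throws):

let foo = await getAFoo()        // orthogonal: no try needed if nothing throws
let bar = try? await getABar()   // throwing case composes the two markers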

> -Chris
> 
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Georgios Moschovitis via swift-evolution

> what's important is that the code may pause for a while during a given 
> expression and run other stuff in the meantime.

I think that’s what `yield` actually means. In your sentence there is nothing 
about (a)waiting, only about pausing and ‘yielding’ the CPU time to ‘run other 
stuff’.

-g.

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution