I think the important thing to remember is that the label check is intended to 
prevent cases like this:

        let a: (left: Int, right: Int) = (1, 2)
        var b: (right: Int, left: Int) = a

While the two tuples are compatible by type, the meaning of the values may 
differ because the labels differ; in this case the values appear in a 
different order, which a developer should have to reverse explicitly to show 
they aren’t making a mistake, or the labels could represent radically 
different concepts altogether.
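
Something like this explicit reordering is what I’d expect the developer to 
write instead (just a sketch of the same example):

        let a: (left: Int, right: Int) = (1, 2)

        // Spell out the swap rather than relying on an implicit label shuffle:
        let b: (right: Int, left: Int) = (right: a.right, left: a.left)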

It’s certainly annoying when the labels differ only in trivial ways, but the 
compiler doesn’t know that. So yeah, I think a warning should be raised in any 
case where the external labels differ; this then comes down to being able to 
suppress particular kinds of warnings later, which could avoid the boilerplate 
in future.

The alternative would be some syntax for mapping parameters more cleanly, for 
example:

        hi(1, y: 2, fn: sum1 where left = lhs, right = rhs)

Or something along those lines, anyway.
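
Without new syntax, the closest we can get today is the closure adapter David 
shows below; roughly (reusing the `hi` and `sum1` declarations from his 
example):

        // Roughly what the `where` mapping above could desugar to:
        hi(1, y: 2, fn: { left, right in sum1(lhs: left, rhs: right) })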

> On 21 Apr 2016, at 06:18, David Owens II via swift-evolution 
> <swift-evolution@swift.org> wrote:
> 
>> 
>> On Apr 20, 2016, at 4:47 PM, Chris Lattner <clatt...@apple.com> wrote:
>> 
>> On Apr 20, 2016, at 12:31 PM, David Owens II via swift-evolution 
>> <swift-evolution@swift.org> wrote:
>>> This is similar to another concern I raised with functions: being able to 
>>> essentially erase the function argument names and apply two differently 
>>> named parameters just because their types match.
>>> 
>>> It seems reasonable to me that you can go from (x: Int, y: Int) => (Int, 
>>> Int). However, going from (x: Int, y: Int) => (a: Int, b: Int) feels 
>>> somewhat odd. Yes, the types can obviously slot in there fine, but how much 
>>> importance do the labels for the types bring to the table?
>>> 
>>> Similarly, should this (Int, Int) => (x: Int, y: Int) be allowed through an 
>>> implicit means? If so, then it's really just an intermediate step for (x: 
>>> Int, y: Int) => (a: Int, b: Int) working.
>> 
>> I completely agree, I think it makes sense to convert from unlabeled to 
>> labeled (or back) but not from “labeled” to “differently labeled”.
>> 
>>> So what matters more, type signatures or label names?
>>> 
>>> Here's an example:
>>> 
>>> typealias Functor = (left: Int, right: Int) -> Int
>>> 
>>> func hi(x: Int, y: Int, fn: Functor) -> Int {
>>>     return fn(left: x, right: y)
>>> }
>>> 
>>> hi(1, y: 2, fn: +)
>>> hi(1, y: 2, fn: *)
>>> 
>>> If we say that the parameter names are indeed vital, then the above code 
>>> cannot work as the operators that match the type signature are defined as: 
>>> 
>>> public func +(lhs: Int, rhs: Int) -> Int
>>> 
>>> Obviously, giving a name to the parameter brings clarity and can be 
>>> self-documenting, but if we want the above to work while making names just 
>>> as vital as the type signature, then we need to declare `Functor` as such:
>>> 
>>> typealias Functor = (_ left: Int, _ right: Int) -> Int
>>> 
>>> However, that's not even legal code today, and even if it were, is that 
>>> really better?
>> 
>> I don’t think this follows, since operator parameters are always unlabeled.  
>> I suspect we don’t reject it, but I’d be in favor of rejecting:
>> 
>> func +(lhs xyz: Int, rhs abc: Int) -> Int { }
> 
> So maybe I think about this incorrectly, but I always think of any parameter 
> without an explicit label as having one that is equal to the parameter name. 
> So these two function signatures would be equivalent:
> 
> func sum1(lhs: Int, rhs: Int) -> Int
> func sum2(lhs lhs: Int, rhs rhs: Int) -> Int
> 
> It’s only when you explicitly “erase” the label that there is none:
> 
> func sum(_ lhs: Int, _ rhs: Int) -> Int
> 
> So back to the example above, it’s still somewhat odd that all of these are 
> valid:
> 
> hi(1, y: 2, fn: sum1)
> hi(1, y: 2, fn: sum2)
> hi(1, y: 2, fn: sum)   // makes the most sense, no label to labeled promotion
> 
> But if we did reject the differently labeled version, that would mean that we 
> would need to declare the `Functor` above as:
> 
> typealias Functor = (Int, Int) -> Int
> 
> Is that better? I’m not terribly convinced that it is.
> 
> If `Functor` keeps the labels, I suspect it would just lead to additional 
> boiler-plate code that would look like:
> 
> typealias Functor = (left: Int, right: Int) -> Int
> 
> hi(1, y: 2, fn: { left, right in sum1(lhs: left, rhs: right) })
> 
> While it does seem technically correct, is that really the kind of code we 
> want in Swift? 
> 
> -David
_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution
