> On Dec 30, 2017, at 1:21 PM, Félix Cloutier <felixclout...@icloud.com> wrote:
> 
> 
>> Le 29 déc. 2017 à 20:50, Michael Gottesman <mgottes...@apple.com> a écrit :
>> 
>> No worries. Happy to help = ). If you are interested, I would be happy to 
>> help guide you in implementing one of these optimizations.
> 
> That sounds fun. I'll have to check with my manager after the holidays.

Nerd snipe success? = p. In general, we haven't had many people from the open-source 
community work on the optimizer. This is unfortunate, so I am more than happy to help.

> 
>> The main downside is that in some cases you do want to have +1 parameters, 
>> namely places where you are forwarding values into a function for storage. 
>> An example of such a case is a value passed to a setter or an initializer. I 
>> would think of it more as an engineering trade-off.
>> 
>> I am currently experimenting with changing normal function arguments to +0 
>> for this purpose, leaving initializers and setters as taking values at +1. 
>> Regardless of the default, we can always provide an attribute for the 
>> purpose of allowing the user to specify such behavior.
> 
> It sounds like having flexible parameter ownership rules doesn't have too 
> much overhead if it can be user-specified (in some future). Would it be 
> feasible to use escape analysis to decide if a parameter should be +0 or +1?

No. A parameter's convention is ABI. You don't want ABI-related things like that to be 
decided by escape analysis, since it would mean that as a function changes due to the 
optimizer, the ABI can change =><=. *BUT* recently Adam Nemet added support for LLVM's 
opt-remark infrastructure to Swift. Something like that /could/, when run, suggest that 
a particular parameter would be profitable to change from +0 to +1 or vice versa. Then 
the user could make that change via an annotation. Keep in mind that this would most 
likely be an ABI-breaking change.
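
To make the annotation idea concrete, here is a rough sketch of what a user-specified 
convention could look like. The spellings below (__shared / __owned) are underscored 
parameter modifiers from later compiler builds, so treat the exact syntax as an 
assumption rather than a settled design:

final class Buffer {
    var storage: [Int] = []
}

// Caller keeps ownership; the callee only borrows the value (+0).
func count(of buffer: __shared Buffer) -> Int {
    return buffer.storage.count
}

final class Container {
    private var buffer: Buffer
    // The callee takes ownership (+1); natural for initializers and
    // setters that forward the value into storage.
    init(_ buffer: __owned Buffer) {
        self.buffer = buffer
    }
}

The point is only that the convention becomes an explicit, source-visible part of the 
signature, which is why changing it later is an ABI-breaking change.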

> 
> More ARC questions! I remember that some time ago, someone else (John 
> McCall?) said that references aren't tied to a scope. This was in the context 
> of deterministic deallocation, and the point was that contrary to C++, an 
> object could potentially be released immediately after its last use in the 
> function rather than at the end of its scope. However, that's not really what 
> you said ("When the assignment to xhat occurs we retain x and at the end of 
> the function [...], we release xhat")

You are extrapolating too far from my example. That said, I was unclear. What I was 
trying to show was the difference between a case where xhat is passed off to another 
function, e.g.:

func g(_ x: SomeClass) {
  let xhat = x       // retain of x into xhat
  h(xhat)            // xhat is forwarded (consumed) by h
}

and a situation where xhat is not consumed and is destroyed within g.
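
Filling in the placeholders from that snippet (SomeClass and h are stand-ins I am 
using purely for illustration), the second case would look something like:

final class SomeClass {}

var stored: SomeClass? = nil

func h(_ value: SomeClass) {
    stored = value        // h keeps the value, so taking it at +1 is natural
}

func gLocal(_ x: SomeClass) {
    let xhat = x          // retain of x into xhat
    _ = xhat              // only used locally
}                         // xhat is released here, inside gLocal

In the first case the +1 value can simply be forwarded into h; in the second, the 
retain/release pair for xhat never needs to leave the function.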

> , and it's not how Swift 4 works from what I can tell (release is not called 
> "as soon as possible" from my limited testing).

Cases like this are due to the optimizer seeing some use that it cannot understand. 
The optimizer must be conservative, so sometimes it cannot see through or understand 
things that the user thinks it should. The way to diagnose that is to look at the SIL 
and see what is stopping the code motion. There are ways to get debug output from the 
optimizer. This may additionally be a case where an opt-remark-like system could help 
explain to the user why code motion has stopped.
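
For example (a sketch; exact spellings depend on the toolchain), one way to look at 
what the optimizer produced is to dump the optimized SIL:

swiftc -O -emit-sil Silly.swift > Silly.sil

Searching the output for strong_retain/strong_release (or retain_value/release_value) 
around the call in question usually shows which use forced the optimizer to be 
conservative.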

> 
> It seems to me that this penalizes functional-style programming in at least 
> two ways:
> 
>       • This kills tail call optimizations, because often the compiler will 
> put a release call in the tail position
>       • Especially in recursion-heavy algorithms, objects can be kept around 
> for longer than they need to
> 
> Here's an example where both hit:
> 
>> class Foo {
>>      var value: [Int]
>>      
>>      init() {
>>              value = Array<Int>(repeating: 0, count: 10000)
>>      }
>>      
>>      init(from foo: Foo) {
>>              value = foo.value
>>              value[0] += 1
>>      }
>> }
>> 
>> func silly(loop: Int, foo: Foo) -> Foo {
>>      guard loop != 0 else { return foo }
>>      let copy = Foo(from: foo)
>>      return silly(loop: loop - 1, foo: copy)
>> }
>> 
>> print(silly(loop: 10000, foo: Foo()).value[0])
> 
> I wouldn't call that "expert Swift code" (indeed, if I come to my senses and 
> use a regular loop, it does just fine), but in this form, it does need about 
> 800MB of memory, and it can't use tail calls.
> 
> The function has the opposite problem from the pattern matching case. It is 
> specialized such that `foo` is passed at +0: it is retained at `return foo` 
> and released (in the caller) after the call to `silly`. However, the optimal 
> implementation would pass it at +1, do nothing for `return foo`, and release 
> it (in the callee) after the call to `Foo(from: foo)`. (Or, even better, it 
> would release it after `value = foo.value` in the init function.)
> 
> I'll note that escape analysis would correctly find that +1 is the "optimal" 
> ownership convention for `foo` in `silly` :) but it won't actually solve 
> either the memory use problem or the missed tail call unless the release call 
> is also moved up.
> 
> I guess that the question is: what does Swift gain by keeping objects around 
> for longer than they need to? Is it all about matching C++ or is there 
> something else?

Again, I think you are extrapolating a bit. Swift is not trying to keep objects around 
longer than they need to be. Such situations are more likely due to optimizer 
inadequacies or unimplemented optimizations [again, nerd snipe alert, patches 
welcome ; )]. All of these things take engineering time, and engineering time must be 
prioritized against the overall needs of the project.

When I compile this on my machine, here is what the SIL looks like:

[Attachment: viewcfg.pdf (control-flow graph of the SIL)]


From looking at this, it seems that perhaps function signature optimization is being 
too aggressive. One could write a heuristic that attempts to identify this case and 
not perform the +1->+0 transformation here.
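
To restate Félix's analysis in code terms (the pseudocode comments are mine, marking 
where the ARC traffic lands after the +1->+0 specialization):

// foo arrives at +0 (guaranteed): the caller keeps ownership.
func silly_specialized(loop: Int, foo: Foo) -> Foo {
    guard loop != 0 else {
        return foo              // must retain foo to produce a +1 result
    }
    let copy = Foo(from: foo)
    return silly_specialized(loop: loop - 1, foo: copy)
    // the caller's release of foo happens only after this call returns,
    // so every Foo in the recursion stays alive until the chain unwinds,
    // and the trailing release also blocks a tail call
}

That matches the roughly 800MB high-water mark Félix observed.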

Additionally, if/when we switch the default argument convention for "normal argument 
parameters" from +1 to +0, this example shows a pattern where we would need an 
opposite form of function signature optimization, one that transforms a +0 argument 
to +1.
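
For completeness, here is roughly what the +1 form Félix describes would look like if 
the convention were spelled out by hand (the __owned spelling is my assumption; the 
point is where the release can move, not the exact syntax):

func silly(loop: Int, foo: __owned Foo) -> Foo {
    guard loop != 0 else { return foo }   // foo is forwarded; no extra ARC traffic
    let copy = Foo(from: foo)
    // foo's last use is the line above, so its release can move up here,
    // before the recursive call, leaving that call in tail position
    return silly(loop: loop - 1, foo: copy)
}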

Did I answer your questions?
Michael

> 
> Félix
> 

_______________________________________________
swift-dev mailing list
swift-dev@swift.org
https://lists.swift.org/mailman/listinfo/swift-dev
