There are two reasonable semantics for deferred parameters:
1) lazy evaluation with caching, where evaluation of the actual expression in the call is deferred until the sub actually makes use of it; the result is then cached and reused as necessary. Any side effects happen only once.
2) ALGOL-style pass by name, where the actual expression from the call is turned into a closure, called a thunk, which is called whenever the sub accesses the parameter. Note that the thunk may need to be an lvalue closure to handle C<is rw> parameters. Side effects happen each time the thunk is called. Also, changes to the thunk's environment can affect its value when it is called.
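A minimal sketch of the two semantics, in Python rather than Perl 6; the names (make_counter, lazy_cached) are illustrative only, not part of any proposal:

```python
def make_counter():
    calls = {"n": 0}
    def expr():                 # the "actual expression", with a side effect
        calls["n"] += 1
        return 42
    return expr, calls

# (1) lazy evaluation with caching: evaluate on first use, then reuse.
def lazy_cached(expr):
    cache = []
    def get():
        if not cache:
            cache.append(expr())
        return cache[0]
    return get

expr, calls = make_counter()
param = lazy_cached(expr)
param(); param(); param()
assert calls["n"] == 1          # side effect happened only once

# (2) pass by name: the thunk is re-evaluated on every access.
expr, calls = make_counter()
thunk = expr                    # the thunk *is* the deferred expression
thunk(); thunk(); thunk()
assert calls["n"] == 3          # side effect happened on each access
```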
I think (2) would be best, because of:
while $a < $b { ... }
That wouldn't be possible if the condition were evaluated only once. I like the interface better, too (for the writer of C<while>), but that's just me.
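To see why a user-defined C<while> needs pass-by-name, here is a sketch in Python (the name my_while is hypothetical): the condition must be passed as a thunk and re-evaluated each iteration, so caching its first value would loop forever or never:

```python
def my_while(cond, body):
    while cond():               # re-evaluate the condition thunk each pass
        body()

state = {"a": 0}
seen = []

def body():
    seen.append(state["a"])
    state["a"] += 1

# The condition is a thunk over a mutable environment, as in semantics (2).
my_while(lambda: state["a"] < 3, body)
assert seen == [0, 1, 2]
```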
I prefer (1). When I posted the original thread, I wanted to emulate the effect of $a && $b or ($a) ?? $b :: $c. The parameters in these might get evaluated zero or one times, but never more than once. Having a side effect happen more than once brings back painful memories of C-style macros.
I tend to agree; but I note that it is possible to implement (1) on top of the semantics of (2), simply by assigning to a tmp variable. The reverse is not true. So if only one of the two is provided, it should be (2). Multiple properties can be applied, so "is deferred is cached" would give (1).
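A sketch of that reduction, again in Python with illustrative names: given only pass-by-name (a thunk), the sub recovers call-by-need by evaluating the thunk once into a local temporary and using only the temporary thereafter:

```python
calls = []

def thunk():                    # the pass-by-name parameter: semantics (2)
    calls.append(1)
    return 10

def sub_by_name(t):
    tmp = t()                   # assign to a tmp variable: (1) built on (2)
    return tmp + tmp            # reuse the cached value; no re-evaluation

assert sub_by_name(thunk) == 20
assert len(calls) == 1          # the actual expression ran exactly once
```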
Dave. -- http://dave.whipp.name