On 01/30/2015 12:05 PM, Rowan Collins wrote:
> On 30/01/2015 13:07, Alexander Lisachenko wrote:
>> What is wrong here? Emulated immutability. Every method creates a
>> separate instance of the request, so
>> $baseRequest->withUri()->withMethod()->withHeader()->withBody() will
>> create five object instances in total. That's a memory overhead and a
>> time cost for each method call.
> Like Andrea, I don't see how immutable variables/objects solve this
> problem. The overhead is not from emulating the immutability; it is a
> consequence of the design pattern choosing immutability. In fact, from
> the code shown, there is no way to know that immutability is in any
> way relevant, since the definition of withUri() could be a mutator
> which ends with "return $this", a pattern I've frequently used to create
> such "fluent" interfaces. Copy-on-write doesn't help, either, since
> all 5 calls are writes, so will still make 5 copies.
The with*() methods in PSR-7 are documented to return a new instance,
not modify the existing instance. Yes, there's no way in PHP itself to
force that syntactically, which is why documentation exists. :-)
Also, in the benchmarks we've run, the performance cost of all those new
objects is measured in nanoseconds, i.e., small enough that we're not
worried about it. (Hats off to the PHP Internals folks for making that
fast!)
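For anyone unfamiliar with the pattern, a with*() method on an immutable value object clones and returns, roughly along these lines (a simplified sketch, not the actual PSR-7 Request implementation; the properties shown are illustrative):

```php
<?php

// Simplified sketch of the immutable with*() pattern PSR-7 documents.
// Illustrative only -- not the real Request class.
class Request
{
    private $method = 'GET';
    private $uri = '';

    public function getMethod() { return $this->method; }
    public function getUri()    { return $this->uri; }

    // Returns a *new* instance; $this is never modified.
    public function withMethod($method)
    {
        $new = clone $this;
        $new->method = $method;
        return $new;
    }

    public function withUri($uri)
    {
        $new = clone $this;
        $new->uri = $uri;
        return $new;
    }
}

$base     = new Request();
$modified = $base->withMethod('POST')->withUri('/foo');

echo $base->getMethod(), "\n";     // GET -- the original is untouched
echo $modified->getMethod(), "\n"; // POST
```

Each call in the chain clones the previous instance, which is exactly where the "5 objects" in the quoted example come from.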
>> What I want to discuss is a true immutability flag for variables and
>> parameters. There are a lot of languages that use the "final" or
>> "const" keyword to prevent modification of variables.
> On the concrete suggestion, though, I do think there are possible
> advantages to this. I've long felt that the confusion people have over
> pass-by-value for primitives,
> pass-by-value-which-is-actually-a-pointer for objects, and
> pass-by-reference for a variable which might or might not be an object
> pointer, is a failure not of the programmer but of the programming
> language. In a blog post about it a few years ago [1], I suggested
> that deep immutability (along with deep cloning) could provide a
> better framework than by-value vs. by-reference in modern OO languages.
> This is rather different from defining a *type* that is immutable,
> since it implies temporary immutability of a particular instance; but
> it seems to be what at least some of your examples are hinting at.
> The problem is that deep cloning and deep immutability are
> non-trivial; PHP notably doesn't support deep cloning of objects,
> requiring each class to define what to do with any non-primitive
> members, since some may represent resources which can't be
> meaningfully cloned just by copying data in memory.
*snip*
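(As an aside, the per-class deep-clone handling Rowan mentions is what the magic __clone() method is for. A minimal sketch, with made-up class names:)

```php
<?php

// Why deep cloning is per-class in PHP: clone is shallow by default,
// so object members are shared unless the class opts in via __clone().
// Class names here are purely illustrative.
class Engine
{
    public $serial = 'A-1';
}

class Car
{
    public $engine;

    public function __construct()
    {
        $this->engine = new Engine();
    }

    // Without this, a cloned Car would share the same Engine instance.
    public function __clone()
    {
        $this->engine = clone $this->engine;
    }
}

$a = new Car();
$b = clone $a;
$b->engine->serial = 'B-2';

echo $a->engine->serial, "\n"; // A-1 -- independent after the deep clone
```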
Immutability, generally, offers two advantages:
1) It makes it easier for humans to reason about code.
2) It makes it easier for compilers/runtimes to reason about code.
For the former, good programming practices/standards can often suffice,
as there are cases where immutability makes code uglier, not better. For
the latter, it allows the compiler/runtime to do two things: catch code
errors early and optimize based on assumptions.
In practice, I don't see much value in a language allowing a variable to
be flagged as mutable or immutable *unless* the default is immutable, as
in F# or Rust. In those cases it encourages the developer to take more
functional, immutable approaches most of the time, which (it is argued)
lead to better code, and more optimizable code (because the compiler can
make more assumptions). Switching PHP to default-immutable variables is
clearly off the table, so allowing individual variables to be explicitly
marked as immutable, particularly scalars as Stanislav points out,
doesn't sound like it would offer much.
What *could* be valuable, however, is flagging *parameters*, i.e.:

    function foo(const MyClass $c, const $s) {
        $s = 'abc';          // Compiler error
        $c = new MyClass();  // Compiler error
        $c->foo = 'abc';     // Some kind of error?
    }
With PHP's existing copy-on-write there's not much memory-savings to be
had with "const reference" parameters, as in C. What the above would
allow is for the compiler to catch certain errors, especially if the
parameters are on a method in an interface. For example, the following:

    interface Foo {
        public function setFromThing(const Thing $t);
    }
Makes it explicitly clear that an implementer is *not allowed* to modify
$t in their setFromThing() implementation. Of course, as Rowan notes,
object encapsulation makes enforcing that quite difficult, which is
arguably by design.
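To make the enforcement problem concrete: even if a hypothetical "const" parameter blocked reassignment of $t and writes to its public properties, an ordinary method call could still mutate the object's internal state, and the caller can't tell from the signature. A contrived sketch (const parameters don't exist in PHP; the comment marks where one would go):

```php
<?php

// Contrived example: a hypothetical "const Thing $t" could block
// reassignment, but a method call can still mutate internal state
// behind an innocent-looking accessor.
class Thing
{
    private $count = 0;

    public function describe()
    {
        $this->count++; // hidden mutation inside a "read" method
        return 'called ' . $this->count . ' time(s)';
    }

    public function getCount()
    {
        return $this->count;
    }
}

function setFromThing(/* const */ Thing $t)
{
    // No assignment to $t and no property writes -- yet $t still changes.
    return $t->describe();
}

$t = new Thing();
setFromThing($t);
echo $t->getCount(), "\n"; // 1 -- mutated despite the "const" intent
```

Catching this statically would require knowing which methods are themselves non-mutating, which is a much bigger feature than a parameter flag.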
So I suppose the more general question is, are there code annotations
(immutable values, marking a function as pure, explicit scalar types,
etc.) that developers could insert that would:
1) Provide guidance for developers reasoning about the code.
2) Allow the compiler to catch more bugs.
3) Allow the compiler to better optimize the code by safely making more
assumptions.
4) Some combination of the above.
E.g., if the compiler knows a given function is pure (explicit inputs and
outputs, no side effects, immutability of its parameters) then it could
auto-memoize it, or inline safely (eliminating the function call
entirely), or other such things. Is there a way that a developer could
syntactically tell the compiler "you can safely make these assumptions
and optimize/error-check accordingly"? And could Zend Engine support
such optimizations in a meaningful way? (I have no idea on the latter.)
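For a sense of what auto-memoization would buy, here is what it looks like done by hand in userland today (a sketch; a purity-aware engine could in principle apply this caching transparently):

```php
<?php

// Userland memoization of a pure function. If the engine could *prove*
// purity, it could apply this caching (or inlining) automatically;
// today the developer has to opt in by hand.
function memoize(callable $fn)
{
    $cache = array();
    return function () use ($fn, &$cache) {
        $args = func_get_args();
        $key  = serialize($args);
        if (!array_key_exists($key, $cache)) {
            $cache[$key] = call_user_func_array($fn, $args);
        }
        return $cache[$key];
    };
}

$slowSquare = function ($n) {
    usleep(1000); // stand-in for expensive work
    return $n * $n;
};

$fastSquare = memoize($slowSquare);

echo $fastSquare(12), "\n"; // 144 -- computed
echo $fastSquare(12), "\n"; // 144 -- served from the cache
```

The catch, of course, is that memoizing an *impure* function silently changes behavior, which is why the engine would need a syntactic purity guarantee before doing this on its own.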
--Larry Garfield
--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php