On Monday, 7 September 2015 at 10:55:13 UTC, anonymous wrote:
On Monday 07 September 2015 12:40, Bahman Movaqar wrote:
I can see some serious advantages to this, most notably the minimisation of side effects and the predictability of the code. However, I suppose it's going to impact performance and memory footprint as well, though I have no idea how deep the impact will be.

I don't see how merely marking things immutable/pure would affect performance negatively. They're just marks on the type. If anything, you could get a performance boost from the stricter guarantees. But realistically, there won't be a difference.

Just "marks", eh?
I was under the impression that when a variable declared `immutable` is passed to a function, a copy of the value is passed. However, based on "marks", I can imagine that since the data is marked `immutable`, only a reference is passed, and the compiler guarantees that what it references never changes. Am I right?
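That is, in a sketch like this (a toy example of my own, names are mine), I'd expect only the slice -- pointer plus length -- to be passed, never a copy of the elements:

import std.stdio;

// Whether a copy happens depends on the kind of type, not on
// `immutable`: an int or a struct is copied either way, while a
// slice like this is passed as (pointer, length). `immutable`
// just guarantees nothing reachable through `xs` can change.
pure int sum(immutable(int)[] xs)
{
    int total = 0;
    foreach (x; xs)
        total += x;
    return total;
}

void main()
{
    immutable int[] data = [1, 2, 3, 4];
    writeln(sum(data)); // 10 -- the elements were never copied
}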

If you change your algorithms to avoid mutable/impure, then you may see worse performance than if you made use of them. But I suppose that would be "a reason not to" mark everything immutable/pure.

True.
Now that more algorithms are designed with parallelism and distribution in mind, though, I believe mutable values and impure functions will, at least in certain domains, cease to exist.
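To make the trade-off above concrete (a contrived sketch of my own, names are mine): the mutating version reuses the buffer it is given, while the pure/immutable version must allocate on every call:

import std.exception : assumeUnique;

// Mutating version: updates the buffer in place; no allocation.
void scaleInPlace(int[] xs, int k)
{
    foreach (ref x; xs)
        x *= k;
}

// Immutable-friendly version: pure, but it has to allocate a
// fresh array on every call because it may not touch its input.
immutable(int)[] scaled(immutable(int)[] xs, int k) pure
{
    auto result = new int[xs.length];
    foreach (i, x; xs)
        result[i] = x * k;
    // We built `result` here, so nothing else references it:
    // assumeUnique casts it to immutable without another copy.
    return assumeUnique(result);
}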
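At the same time, immutability is exactly what makes the parallel case easy. With std.parallelism, for instance (again just a sketch), immutable input can be read by every worker thread with no locks at all:

import std.parallelism : parallel;
import std.stdio : writeln;

void main()
{
    // Immutable input: safe to share across threads without
    // synchronisation, since the type system forbids mutation.
    immutable(int)[] data = [1, 2, 3, 4, 5];

    auto results = new int[data.length];
    foreach (i, x; parallel(data))
        results[i] = x * x; // each index written by exactly one task

    writeln(results); // [1, 4, 9, 16, 25]
}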
