On 21/08/2012, at 4:02 AM, Dobes Vandermeer wrote:

> 
> It seems like if the compiler can enforce this purity, it can detect it also.

Quite a few properties of interest can be detected by data flow
analysis, or at least constrained by it.

Unfortunately there is no such analysis routine in the compiler.
Such analysis is also rather slow (worse than quadratic).


>  And thus, it could "promote" any var or val to this const concept as well.

The compiler already does all that, and makes conservative assumptions.
It just doesn't catch the case where unknown closures are involved.
Even when you can see a closure is of a particular function, the compiler
may not be able to see that.

The problem you see is that a function is optimised on the assumption
it will be called directly which means it will behave in the way you'd
expect at the point of call. But the programmer cheats and makes
a closure at one time .. and evaluates the closure at another.

In that time a variable changed in value. What was previously determinate
with a direct call now depends on how the function was optimised.

To fix this, we have to do data flow analysis. We have to track the closure
value to see the path between its construction and call point and if the
variable in question changes along that path .. which also means
tracking the value of the variable.
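To make the failure mode concrete, here is a minimal sketch in OCaml (OCaml rather than Felix, since a ref cell shows the same capture behaviour): the closure captures the variable by reference, so the value observed at the call point differs from the value at the point of construction.

```ocaml
(* A closure captures a mutable variable by reference, not by value:
   what the call sees depends on WHEN the closure is called. *)
let () =
  let x = ref 1 in
  let f = fun () -> !x in   (* closure constructed here, x = 1 *)
  x := 2;                   (* the variable changes along the path *)
  Printf.printf "%d\n" (f ())   (* prints 2, not 1 *)
```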

You keep saying just to eagerly evaluate everything, and I keep
pointing out that this is a major performance killer AND it destroys
the semantics as well.

You really need to understand that in ALL programming languages
almost everything is lazily evaluated. Eager evaluation is the exception,
not the rule. In fact lazy evaluation IS control flow. Do one thing.
Do another. In order. That's lazy evaluation.
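An illustrative OCaml sketch of that point (the counter is just instrumentation I've added to make the effect observable): an ordinary conditional evaluates only the branch that is selected, which is exactly on-demand evaluation.

```ocaml
(* Ordinary control flow is lazy: of the two branch arms,
   only the selected one is ever evaluated. *)
let count = ref 0

let pick b =
  if b then (incr count; "then-arm")
  else (incr count; "else-arm")

let () =
  let r = pick true in
  (* only the then-arm ran, so count was bumped exactly once *)
  Printf.printf "%s %d\n" r !count
```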

To see why this is fast, you must understand the alternative: evaluate
all control paths first, then choose one. Eager evaluation is sometimes
useful -- for example if you have parallelism. Most CPUs do it these days:
they calculate both outcomes of a branch before the condition is known,
then later discard the one that isn't selected.

It's also important to understand the role of unspecified behaviour.
Allowing the compiler to choose increases performance.
It means certain things aren't predictable. This is not a problem if
the programmer can recognise and control it. 

Order of argument evaluation is one of these things in many languages.
The order isn't specified. The results aren't determinate if you depend
on the order. So long as you know this and have a way to fix the order,
it's OK -- you will get booby trapped occasionally because you think
left to right.

In Felix, the order of calling the function and evaluating
the arguments isn't specified. In C, the function is always called
after the arguments are evaluated. In Haskell the function is always
called before the arguments are evaluated.

Felix has more unspecified behaviour than either C or Haskell.
Hence it is faster. This is not a bug in the semantics, it's quite
deliberate.

The problem here is to ensure the programmer can control the
order, and also recognise when it needs to be controlled.

In Ocaml you can control argument order evaluation:

        let x = expr in let y = expr in f x y

this forces the order when you need to. If you want lazy evaluation,
so the function is called first and THEN the arguments are evaluated,
you can do that too: use closures. There's also a special lazy
construction which is evaluated on demand but memoises the
result, so a second demand just returns the same value
without computational overhead.
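For the memoised case, a short sketch using OCaml's built-in lazy / Lazy.force (the counter here is instrumentation I've added to show the body runs exactly once):

```ocaml
(* On-demand, memoised evaluation with OCaml's lazy construct. *)
let count = ref 0

let expensive () =
  incr count;          (* record each actual evaluation *)
  6 * 7

let v = lazy (expensive ())   (* suspension built; nothing evaluated yet *)

let () =
  assert (!count = 0);
  let a = Lazy.force v in     (* first demand: runs the body *)
  let b = Lazy.force v in     (* second demand: returns the cached result *)
  Printf.printf "%d %d %d\n" a b !count   (* 42 42 1 *)
```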

I don't care that Felix doesn't work the way you expect. I DO care
if you cannot recognise when you need to intervene in the default
behaviour to get what you want. You have to learn how it works,
like any system.

You can argue that in the val/closure case it is hard to recognise
when you need to intervene (use vars). In particular you can argue
that intervening in the function is wrong, that the intervention should
be at the call point. If you argue that, we can see that it is actually
very hard to do: the posted workaround is to make the function
no-inline and make the parameter a var: there's no easy way
to make the intervention at the point of call. You cannot force
copying a value at a particular time.

THAT would be a design fault: you can recognise an intervention
is needed but cannot actually do it without breaking open
a library function.

I happen to wish to put in this:

        inline call f (..)
        noinline call f(..)


for this reason. It gives more control and it is easy to implement.

It may be we need an expression like:

        now x

which gives the value of x "right now" whatever that means :)
Lazy evaluation of vars is easy (use a pointer, deref when
you want the value). Vars are more controllable.
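The distinction can be sketched in OCaml, with a ref standing in for a Felix var (the `now_x` binding merely models the proposed `now x`; it is not real syntax in either language):

```ocaml
(* Snapshot a var's value "right now" vs. holding a pointer
   and dereferencing on demand. *)
let () =
  let x = ref 1 in
  let now_x = !x in                (* eager: copy the value at this point *)
  let later_x = fun () -> !x in    (* lazy: keep the pointer, deref on demand *)
  x := 2;
  Printf.printf "%d %d\n" now_x (later_x ())   (* 1 2 *)
```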

I myself have taken to using vars a lot more now.


--
john skaller
skal...@users.sourceforge.net
http://felix-lang.org




_______________________________________________
Felix-language mailing list
Felix-language@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/felix-language
