Since I just love new features implemented before the existing
ones are completed ...
Consider this:
reduce assoc(x:int, y:int, z:int): x * z + y * z => (x + y) * z;
An interesting case is:
a * x * x * x + b * x * x + c * x + d
This is a standard polynomial evaluation, which can
perhaps be done more efficiently:
(a * x * x) * x + (b * x) * x
==> ((a * x * x) + (b * x)) * x
which saves one multiply:
((a * x * x) + (b * x)) * x + c * x + d
==> (((a * x * x) + (b * x)) + c) * x + d
which saves two multiplies in total .. Felix really WILL do this
reduction right now (if you code the reduction, and the code takes
the above form after inlining ..)
However there are axioms where simpler code isn't necessarily
faster, and axioms where neither the LHS nor the RHS is simpler.
Solving this problem in general is hard .. but so is proving
program correctness. That doesn't mean we shouldn't try to design
something which might help.
Let's suppose we have two functions:
fun f(x:float):float => ...
fun g(x:float):float => ...
where f x = g x. The functions are equivalent: which one should
we use?
Well, typically, that depends on the arguments. So here's
an idea:
if cost_f(x) < cost_g(x) then f x else g x endif
The idea is that you can write:
fun f(x:float): float => ... cost x * 2.0;
fun g(x:float): float => ... cost x * x;
so now, g is chosen for small x, whilst f is chosen for big x.
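Something like this Python sketch (again, not Felix; f, g and the
cost functions are placeholders, and the real bodies are elided)
is what the generated dispatch amounts to:

# Illustrative Python sketch of the run-time selection idea; the
# bodies of f and g are placeholders standing in for the elided '...'.

def f(x): return x * 1.0          # assume f(x) == g(x)
def g(x): return x * 1.0

def cost_f(x): return x * 2.0     # mirrors 'cost x * 2.0'
def cost_g(x): return x * x       # mirrors 'cost x * x'

def best(x):
    # the dispatch: if cost_f x < cost_g x then f x else g x endif
    return f(x) if cost_f(x) < cost_g(x) else g(x)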
The key thing here is that if you cost the individual operators,
the costs compose: for example, suppose you have costs for + and *;
then the *compiler* can cost:
x + y * z
and in particular given:
axiom assoc(x:int, y:int, z:int): x * z + y * z == (x + y) * z;
it can decide to use the reduction
reduce assoc(x:int, y:int, z:int): x * z + y * z => (x + y) * z;
if, and only if, it can prove the cost of the RHS is less than
the cost of the LHS for all x,y,z.
It could also conditionally apply the reduction based on
constraints on x, y, z: e.g. we might know they're small due
to a precondition.
Alternatively, it could generate a run-time test to choose
the more efficient form.
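A toy Python sketch of that cost comparison (the tuple encoding of
expressions and the unit cost per operator are just assumptions
made for illustration):

# Illustrative Python sketch: cost both sides of the assoc rule,
# charging one unit per run-time operator and nothing for variables.

def cost(expr):
    if isinstance(expr, str):            # a bare variable
        return 0
    op, left, right = expr               # e.g. ('*', 'x', 'z')
    return 1 + cost(left) + cost(right)

lhs = ('+', ('*', 'x', 'z'), ('*', 'y', 'z'))   # x*z + y*z -> cost 3
rhs = ('*', ('+', 'x', 'y'), 'z')               # (x+y)*z   -> cost 2

apply_reduction = cost(rhs) < cost(lhs)          # True for all x, y, z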
These ideas of estimation aren't new; they're applied regularly
by performance programmers .. but by mathematical analysis or
hand-coded tests. The latter at least obscures the semantics.
Here's an interesting case: a builtin primitive 'cost lit op lit = 0',
which says the cost of compile-time constant folding is zero,
and 'cost var op var = 1', which says it costs 1 unit of time to do
the operation at run time.
Now given symmetry and associativity the expression:
2 * x * 3 * y
can automatically be reduced to
6 * x * y
Felix's constant folder already tries to do this .. but of course
only for 'builtin types'. Controlling this in library code
is much more appealing.
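A rough Python sketch of what that folding amounts to (the list
encoding of a product is an assumption made for illustration):

# Illustrative Python sketch: using symmetry and associativity of *,
# gather the literal factors and fold them at zero cost, leaving only
# the var-by-var multiplies to be charged at run time.

from functools import reduce
import operator

def fold_product(factors):
    lits = [f for f in factors if isinstance(f, (int, float))]
    vars_ = [f for f in factors if isinstance(f, str)]
    folded = reduce(operator.mul, lits, 1)   # 'cost lit op lit = 0'
    return [folded] + vars_                  # remaining run-time multiplies

print(fold_product([2, 'x', 3, 'y']))        # [6, 'x', 'y'], i.e. 6 * x * y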
Any thoughts on how to add 'costing' information to Felix?
--
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net