Sergey Mechveliani:


> I asked the list, why the examples deal only with transformations like
> (map f).(map g) -> map (f.g),
> why not deal with algebraic simplification, say,
>        (x^2+x+1)*(x-1) -> x^3-1.

I am afraid that Sergey is dreaming of transforming Haskell into a
universal Computer Algebra system. We have known for more than 30 years
that the general problem of algebraic simplification is a mess. OK, some
polynomial transformations can be done, but I suspect that enhancing
a parser with such a rule-recognition system, which obviously needs
non-linear pattern matching (if not a full unifier...), is a deadly
idea.
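
Note what makes the map example benign: its left-hand side is a linear,
purely syntactic pattern. A minimal sketch in GHC's RULES-pragma
notation (assuming a rewrite facility of that style is what we are
discussing):

    module MapFusion where

    -- Each variable (f, g, xs) occurs exactly once on the left-hand
    -- side, so matching is plain first-order, linear pattern matching.
    {-# RULES
    "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}

Matching (x^2+x+1)*(x-1) against an arbitrary polynomial expression is
a different game entirely.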

His proposals, e.g.

>     {rules       (x*y)*z = x*(y*z)  ...}

>       x==x  &  (x==y ==> y==x)  &  (x==y & y==z ==> x==z)  &
>       (x==y ==> x*z == y*z)
>      )
>   }
>   {rules  belongs x & belongs y & belongs z ==>
>           (x*y)*z   = x*(y*z) &
>           unity*x   = x       &  x*unity   = x
>           x*(inv x) = unity   &  (inv x)*x = unity
>   }

are needlessly verbose. I would rather say:

(==) is an equivalence relation (symm., trans., etc.)
(*) is associative (possibly commutative)

etc. Surely associativity and other similar properties hold for
other operations as well. We know that for real (inexact) numbers this
is a touchy affair, and rearranging an expression may yield different
numeric answers, but let's skip that problem. I maintain my
minimalistic philosophy: there is no point in polluting a program with
rules unless a *concrete* implementation of them is provided. Sergey's
algebraic ideas do not seem to follow this principle.
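
To see how touchy: a minimal Haskell illustration, using the classic
decimal fractions that have no exact binary representation as Doubles:

    module FloatAssoc where

    main :: IO ()
    main = do
      print ((0.1 + 0.2) + 0.3 :: Double)   -- prints 0.6000000000000001
      print (0.1 + (0.2 + 0.3) :: Double)   -- prints 0.6

An associativity rule applied blindly here would silently change the
program's output.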


By the way, the proliferation of such rules as:
 unity*x=x;  x*unity=x
is slightly redundant as well; there are some provable subsumptions
in algebra, for example that a left unity and a right unity, if both
exist, must be the same entity. But how can the compiler use that?
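
(The standard one-line argument, for the record: if e is a left unity
and e' a right unity, then

    e' = e*e'    because e is a left unity
       = e       because e' is a right unity

so the two coincide.)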


> Again, this is *not* a bug in compiler to optimize the program 
> "around bottom" and throw the "bottom" out. This is done according to
> user program, that may arrange rules this or that way.
> And this is not necessarily an optimization. It is often hard to 
> control whether the rules make the program worse.


Hm. If you can't control your rules, and if the compiler cannot
do that either...
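
To make "throwing the bottom out" concrete, consider a hypothetical
rule of my own (the function mul and the rule are mine, not Sergey's),
again in GHC-style notation:

    module Annihilator where

    -- A multiplication of my own, strict in both Int arguments.
    mul :: Int -> Int -> Int
    mul x y = x * y
    {-# NOINLINE mul #-}   -- keep mul recognisable so the rule can match

    -- Algebraically innocent: multiplication by zero annihilates.
    -- Operationally it throws away mul's strictness in x.
    {-# RULES
    "annihilate"  forall x.  mul x 0 = 0
      #-}

    main :: IO ()
    main = print (mul undefined 0)
    -- If the rule fires during compilation (e.g. with -O), this
    -- prints 0; if it does not, the program diverges. The same
    -- source, two observable behaviours.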

If it is not a bug, it is the result of a very dubious design and
ambiguous semantics. I remember the old glorious times when the language
Snobol4 was popular. It was a real jewel for writing parsers; you
wrote them faster than the equivalent BNF expressions, with full
recursion, non-determinism, etc. Ralph Griswold then invented the
"Quickscan" protocol, which in a sense kicked out the null transitions
not by optimization but by sheer force: e.g., every ARB non-terminal
consumed at least one token from the input stream.

The result was that a whole bunch of badly designed, non-terminating
grammars generated terminating parsers. People wept sadly, and newer
implementations of the language, for example Spitbol, abandoned
Quickscan altogether.

In order to get rid of the bottom in functional languages in a serious
way, one has to adopt some formal discipline. See David Turner's
paper presented at the Nijmegen workshop FPLE'95.


Jerzy Karczmarczuk
Caen, France.

