On Mar 27, 2007, at 3:20 PM, Andre van Tonder wrote:
> On Mon, 26 Mar 2007, Abdulaziz Ghuloum wrote:
>> 1. the compiler has to insert additional runtime checks
>> in order to stop you from returning twice (which you
>> almost never do in practice). So, you end up
>> paying for a useless feature.
> Not if the requirement is "should" or "may", instead of
> "must". It would not be the only "should" affecting an
> aspect of the semantics of internal definitions - in
> chapter 8, "must" is used to describe the programmer's
> responsibility, while "should" is used to describe the
> implementation's responsibility, for detecting when "A
> definition in the sequence of forms must not define any
> identifier ...." (and that is not even runtime, where
> efficiency would provide a better excuse for using
> "should").
Okay. I was objecting to the possibility of this comment
being understood as "implementations must signal an error".
> In any case, implementing such a "should" should require
> a single extra runtime comparison per DEFINE if we
> translate the latter to something like SET!-IF-UNDEFINED
> with the obvious semantics. It does not seem as if this
> would often matter in comparison to time taken on typical
> right-hand-sides, but I am not pretending to any competency
> in efficiency issues, so feel free to correct me...
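For concreteness, a minimal sketch of the SET!-IF-UNDEFINED translation Andre describes; the sentinel value and the macro name are hypothetical, invented here for illustration, and appear nowhere in the draft report:

```scheme
;; Hypothetical sketch: each internal DEFINE expands into a guarded
;; assignment.  'undefined is an assumed sentinel meaning "not yet
;; defined"; no such value exists in the report itself.
(define-syntax set!-if-undefined
  (syntax-rules ()
    ((_ var rhs)
     (if (eq? var 'undefined)   ; the single extra comparison per DEFINE
         (set! var rhs)         ; first definition takes effect
         #f))))                 ; a later redefinition is silently ignored
```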
Three points:
1. I thought what you meant by "disallow" was an
"error-if-already-defined" semantics, as opposed to
"set!-if-undefined". "set!-if-undefined" feels a little
weird to me, since I would much prefer an "Error! cannot
redefine foo" over the implementation silently ignoring my
redefinition. The rest of this message explains why
"error-if-already-defined" is not a great idea either, as
far as efficiency is concerned.
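The same sketch under an "error-if-already-defined" policy, again with names invented for illustration:

```scheme
;; Hypothetical sketch: the comparison stays, but the failure branch
;; now raises an error instead of silently ignoring the redefinition.
(define-syntax define-or-error
  (syntax-rules ()
    ((_ var rhs)
     (if (eq? var 'undefined)                  ; same per-DEFINE check
         (set! var rhs)
         (error 'define "cannot redefine" 'var)))))
```

The error branch is what the rest of this message is concerned with: unlike the no-op branch above, it is a full procedure call that the compiler must emit code for.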
2. In a single-processor single-threaded implementation, the
cost of performing "error-if-already-defined" may be
negligible or substantial, depending on the time it takes to
evaluate the rhs. So, the overhead may vary anywhere between
0% (for a rhs that never returns) and 100% (for very small
leaf routines). In addition to the run-time overhead, there
is a compile-time overhead as well (this is usually ignored).
When the check fails, the implementation must handle the
situation somehow. This usually involves generating the code
for performing a procedure call (with all the junk around it
like saving all caller-save registers, adjusting the stack,
setting the return point, jump, readjust the stack, reload
registers, etc.). This code that has to be generated for
some defines can blow up the code size by a factor of 2.
3. In a multi-processor multi-threaded implementation, the
situation is much worse. The "error-if-already-defined"
translates to requiring global synchronization (using a CAS
instruction if applicable or using whatever mechanism is
available). I would think this is expensive. Yes, the
report does not address synchronization and threads but many
implementations have them and future versions of the report
may include them. Requirements that are unlikely to be
implemented should not be included in the report (my opinion
of course).
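A sketch of what the check becomes in a threaded implementation, using a hypothetical compare-and-swap! primitive; nothing like it exists in the report, and the names here are invented for illustration:

```scheme
;; Hypothetical: (compare-and-swap! box old new) atomically stores NEW
;; in BOX only if BOX currently holds OLD, returning #t on success.
;; Under "error-if-already-defined", every DEFINE would then pay for
;; one such globally synchronizing atomic operation.
(define (define-atomically! box rhs-thunk)
  (let ((value (rhs-thunk)))
    (if (compare-and-swap! box 'undefined value)
        value
        (error 'define "identifier defined concurrently"))))
```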
Aziz,,,
PS. In your example, the common subexpression elimination
can be performed regardless of whether a and c are the same
as y or different from y since the common subexpression was
a pure function of y only. In the same example, the
compiler can also derive that c and y are the same regardless
of your requirement. So, all you really managed to eliminate
was a single copy of c/y to a at the expense of either
1. ignoring programmer error (for "set!-if-undefined") or
2. inserting four additional checks (when following an
"error-if-already-defined" policy).
Which one would you pick?
_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss