On Feb 24, 2009, at 2:27 PM, Thomas Lord wrote:
>>
>>> It also suggests a parsimonious representation
>>> in Scheme for URI's and for XML's fully qualified element
>>> names.
>>
>> This seems an excellent argument against CI symbols in the language
>> when more than the 7-bit ASCII character set is allowed.
>
> Do you mean because basic URI equivalence
> is defined in a case-sensitive way?

Not quite. Every feature in the language has its costs and benefits.
Trying to have case-sensitive identifiers in the 50s was not a good
choice (APL's underscored letters aren't the most beautiful solution,
IMO). One can argue that the tradition of CI is worth preserving in
the ASCII world, because of the benefits of backward compatibility and
the simplicity of case folding. However, once we have Unicode, the
complexity of staying CI grows tremendously and threatens to result in
a ball-of-hair design. In addition, it begins to conflict with a
mutual-understandability meta-feature of the language. If R*RS
codifies case folding, then, given the realities of the human
languages involved, it will be an open-ended set of obscure algorithms
that will change over time, for no other reason than that the
corresponding rules of the human languages change. This in turn will
make the question of backward compatibility an interesting one. I do
not think we want a design that requires (read) to know which
particular set of rules to use in order to correctly read a given
file.
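
To make the flavor of the problem concrete, here is a minimal sketch,
assuming an R6RS-style system with (rnrs unicode); it is only meant to
show that folding is no longer the trivial operation it is in ASCII:

  (import (rnrs))  ; (rnrs) re-exports the (rnrs unicode) procedures

  ;; Full case folding can expand characters:
  (string-foldcase "Straße")   ; => "strasse"

  ;; ... and already disagrees with downcasing for Greek sigma:
  (string-downcase "ΧΑΟΣ")     ; => "χαος"  (context-sensitive final sigma)
  (string-foldcase "ΧΑΟΣ")     ; => "χαοσ"  (folding ignores the context)

  ;; A CI reader has to pick one such rule, freeze a Unicode version,
  ;; and still get Turkish dotted/dotless i "wrong" for Turkish users.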

> I suggest that that is more like an EQ? / EQV?
> distinction.

If you mean making some symbols eqv? but not eq?, I find that a very
unappealing change in semantics.
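
Spelled out, the semantics I read into that suggestion would look
roughly like this (a hypothetical sketch, not standard Scheme):

  ;; Hypothetical: two differently-cased spellings would read as
  ;; distinct objects that nevertheless compare eqv? via folding.
  (eq?  'Foo 'foo)   ; => #f
  (eqv? 'Foo 'foo)   ; => #t  (hypothetically, through case folding)

  ;; Today the two predicates agree on symbols:
  (eq?  'foo 'foo)   ; => #t
  (eqv? 'foo 'foo)   ; => #t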

--andrew


