On Wed, 2009-09-23 at 01:59 -0400, John Cowan wrote:
> Thomas Lord scripsit:
>
> > That is not a problem with Unicode. That is a problem with
> > the assumption that there is a bijection between upcase
> > and downcase characters - an assumption violated by one
> > character in one language.
>
> A lot more than one.
I stand corrected. There is more evidence for my point than I thought.

> That makes 103 characters altogether that don't work in char-upcase
> or char-downcase.

Cool.

> > A sequence of what now? What exactly is it represented as a
> > string of length 1?
>
> A Unicode codepoint. These languages have no representation of
> codepoints, but they do have representations of sequences of codepoints.
> This is not paradoxical.

Yes, that's my point. It's not completely absurd to imagine defining
strings and string lengths inductively (take length-0 and length-1
strings as axiomatic and define appending), but it is a bit like walking
the long way around the block instead of going two doors down. If
strings look and quack like finite sequences of something, it's nice to
be able to reflect on that domain of "something". A first-class
character type is a natural move.

It's a little unfair to suggest that because Javascript and Python lack
first-class characters, perhaps Scheme should do without them as well.
Neither Javascript nor Python is as general-purpose a language as
Scheme, nor is either deliberately conceived of as a multi-paradigm
language to the degree that Scheme is.

-t

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
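Both facts under discussion can be checked directly in Python, one of the languages named above. This is an illustrative sketch, not part of the original exchange: it shows a case mapping that is not character-to-character (German ß uppercases to the two-character "SS", so a char-upcase-style function cannot express it), and that Python models "characters" as strings of length 1 rather than as a distinct type.

```python
# Full Unicode case mapping is not one-to-one: U+00DF (ß)
# uppercases to the two-codepoint string "SS".
s = "ß"
up = s.upper()
print(up, len(up))  # SS 2

# Python has no first-class character type: indexing a string
# yields another string, of length 1.
c = "abc"[0]
print(type(c).__name__, len(c))  # str 1
```

Because `"ß".upper()` has length 2, any per-character upcase must either reject ß or return something that is not a single character, which is exactly the breakdown the thread is describing.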
