On Feb 24, Guillermo J. Rozas wrote:
> >>
> >> No. That's not the point. It's hiding a C parser in a macro
> >> that I find 'curious' to say the least.
> >>
> >> After all, just because you can do something doesn't mean that
> >> you should.
> >
> > I don't see any fundamental difference between a "generator" and a
> > "macro", they both generate code,
>
> At a Turing level, of course not. But at a practical level, it
> makes a big difference, in my experience.
(I never make any "Turing arguments". I think that Schemers in
general hardly ever make them -- seeing how one of the big advantages
of Scheme is its expressiveness rather than its obvious Turing
completeness.)
> >> [...]
> >> And that is not an issue of safety, but of clarity of the code.
> >
> > and since both produce code in the same way, they're both as
> > clear.
>
> No, I disagree, unless the macro happens to be an absolutely trivial
> wrapper around the rest of the procedural code.
That's the obvious way to implement such a macro (and why I consider
it just as clear).
> > (And BTW, we did do a couple of steps in the last decade. Macros
> > might still require a license, but they're much more well behaved.
> > This is regardless of the risk of making code more difficult to
> > deal with.)
>
> You misunderstand. It's not how well behaved the macros are. It is
> how they obfuscate code. When it comes to macros, parsimony is a
> good policy.
This is what I meant with "the risk of making code more difficult to
deal with".
> But again, there were two schools of thought about this 20 years
> ago, namely the 'MIT school' which viewed sharing code as sharing
> procedures, and macros as rarely used, and the 'Indiana school'
> which viewed sharing code as sharing macros (they were programming
> language researchers, by and large, while the MIT crowd largely was
> not).
[An unrelated side-note: one of the main goals of a good module
system, like the PLT modules or the R6RS libraries, is making it
possible to share macros without the pain that used to be very common
in such cases.]
> > Ah, well you're right that this does solve the problem, but it
> > does so at the expense of complicating the framework to support
> > it.
>
> OK. So we've gone from "there is no solution" to "it complicates
> things".
(Yes, and if I said "there's no solution" then it was clearly a
mistake on my part. There was always at least the |...| solution for
writing these identifiers.)
> So it is a value judgement whether it is worth the effort or not.
> [...]
As long as we have the Turing genie out of its bottle, then clearly
there always was "a solution", so it always was a "value judgement".
The same holds for practically any RnRS -- since the language was
Turing-complete from the beginning.
> > The least that you'll need (in practical cases at least) is to
> > make sure that this file is added to the repository, that the
> > generator (macro or not) will commit changes to the file when
> > needed (which also implies that everyone that uses this code
> > better have the same version of the library)
> > It will also need to communicate with some human to update the
> > documentation (for example, send an email -- because the tool
> > might run as part of a nightly automated build) -- or
> > alternatively it will need to generate the required bits to be
> > used by the documentation.
>
> If you look at industrial software systems, this kind of thing is
> bread and butter.
(a) So what? It's still a huge pile of hacks, one that disappears if
the language is case-sensitive. (b) I suspect that in any remotely
"industrial software system" project, if we compare the amount of
effort needed for the above with the effort needed to make the
language case-sensitive, then any manager whom I'd try to convince of
the latter option would send me home for a long vacation.
> Now we are talking about convenience... A very different argument.
I'll give you more. Even if in the near future
- it is decided that R7RS should be case-insensitive, and
- it is decided that R6RS should retroactively change to be
  case-insensitive, and
- as a result of a massive outcry PLT changes its default mode to be
  case-insensitive,
then the amount of changes I'd need to make to my files is zero for
most, and about three extra characters for the few files where I do
use case distinctions (and I do use them).
So yes, it was always a question of convenience.
In any case, if you remember, I didn't join this thread from this
side. What always disturbed me more was the arbitrary decision to
treat the case bit differently than many other similar bits. In the
ASCII world that Scheme was born into, this was a very minor wart. (I
don't know the details of punched cards, but I'd guess that Lisp was
born into a world that didn't have that bit.)
But these days ignoring something like Unicode is no longer an
option. Given this, one solution is to keep the symmetry: the
language is still case-insensitive, but it's done with Unicode
folding rules or something similar -- so all similar bits have the
same status. That would be, IMO, the proper way of keeping
case-insensitivity. But there is a big problem here -- Unicode has
versions, and the rules are likely to change, which means that code
can break as a result. The fundamental problem (again, IMO) here is
that it's a redundant mixture of cultural rules with a formal
language. For all I know, it might be
decided tomorrow that "a" and "A" are no longer related, or that the
capital form of "a" is "A" or "
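The fragility of tying identifier equivalence to Unicode case rules
can be seen in a small sketch (Python here, purely for illustration,
not Scheme): full case folding merges identifiers that a naive
ASCII-style rule keeps apart, and the folding tables themselves belong
to a particular Unicode version.

```python
import unicodedata

# Illustration: full Unicode case folding maps the German sharp s
# to "ss", so two visually distinct identifiers would collide under
# a folding-based case-insensitive reader:
assert "Maße".casefold() == "masse"
assert "MASSE".casefold() == "masse"

# Plain lowercasing keeps them apart, since U+00DF is already lowercase:
assert "Maße".lower() == "maße"

# And the rules are version-dependent: this prints the Unicode version
# that this Python build's character tables were generated from.
print(unicodedata.unidata_version)
```

Any reader that bakes in one version of these tables can see formerly
equal identifiers become distinct (or vice versa) when the tables are
updated -- which is exactly the "code can break" risk above.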
_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss