On Wed, Jul 6, 2011 at 1:16 AM, Peter T <ptaoussa...@gmail.com> wrote:
>> > > Does the file you are evaluating have more than 65535 characters?
>
> Nope. It's about 1400 LOC and not syntactically unique (no unusually
> long constants, etc.). It's also not the longest ns in the project:
> the longest is around 2000 LOC and is still evaluating fine. If I had
> to try to find something unusual about the ns, I'd say that it
> probably :requires more namespaces than others (22).

Well, this is odd!

> 1. I'm more or less satisfied: if I know I can always work around the
> problem by using shorter namespaces, I'm happy.

It creates a tension, though:

1. In another recent thread people have been arguing for relatively
large namespaces.

2. It isn't nice to have code modularization boundaries dictated as
much by bug workaround considerations as by architectural ones!

> 2. While the namespace size seems to be a factor, I'm not convinced
> that the problem is as linear as "big namespace = problem". I have
> other namespaces that are larger (in line count, character count, and
> definitions) that have been evaluating fine without a problem. This
> problem feels more random/subtle to me.

The fact, mentioned recently in that other thread, that clojure.core
is around 200 KB and 6 kLOC and compiles fine also points in this
direction.

My guess would be that it's some function of namespace size in bytes,
number of vars, and possibly, as you say, the number of referred
vars -- so maybe also imports and anything else that grows the
namespace's symbol table.
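
If you want to sanity-check that guess, something like this at a REPL
would make it easy to compare the failing namespace against the ones
that load fine (untested sketch; my.project.broken-ns is just a
placeholder for your actual namespace):

(require 'my.project.broken-ns)

(let [n (find-ns 'my.project.broken-ns)]
  {:interns (count (ns-interns n))   ; vars defined in the ns itself
   :refers  (count (ns-refers n))    ; vars referred from other namespaces
   :imports (count (ns-imports n))   ; imported Java classes
   :all     (count (ns-map n))})     ; everything visible in the ns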

> 4. There seems to be a discrepancy in behaviour depending on how the
> compilation is requested: project-wide command-line compilation seems
> to keep working even when Slime/Swank evaluation fails.

It looks like it affects load-file but not AOT compilation. Both
presumably use eval, and eventually Compiler.java, to get the job
done, so the problem is probably in load-file's implementation
specifically. The simplest hypothesis -- that it's granularity, i.e.
the compiler chokes on a single huge wodge of forms crammed down its
throat in one go but works fine if given the same forms one by one --
doesn't fully explain the data: it predicts that AOT should fail on
the same namespaces where load-file fails, yet AOT succeeds, while
evaluating the top-level forms one by one works as expected. That
again suggests load-file is the problem rather than Compiler.java, or
possibly that AOT feeds forms to the compiler differently from
load-file.
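
If load-file really is the weak link, one crude way to test that (and
to work around it in the meantime) would be to read and eval the
file's top-level forms one at a time, which is roughly what you're
already doing by hand. Untested sketch; the path is a placeholder:

(require 'clojure.java.io)

(with-open [r (java.io.PushbackReader.
                (clojure.java.io/reader "src/my/project/broken_ns.clj"))]
  (loop []
    (let [form (read r false ::eof)]   ; read one top-level form
      (when-not (= form ::eof)
        (eval form)                    ; compile/eval it on its own
        (recur)))))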

> 5. Personally I don't have any problem with hard limits (e.g. keep
> your namespaces/whatever below X lines/definitions/whatever) even if
> they're aggressive -- but I think it'd be preferable to have an error
> message to point out the limit when it gets hit (if that's indeed
> what's happening).

I *do* have a problem with such limits, including limits on function
size. Architecture should be up to the programmer, and even if it is a
*bad idea* for programmers to write huge namespaces or huge individual
functions, IMO that isn't a judgment call appropriate for the
*language compiler* to make. And let's not forget that we're a Lisp,
so we do meta-programming, and so we probably want to be able to
digest files, functions, and individual s-expressions that no sane
human would ever write but that may very well occur in
machine-generated inputs to the compiler.
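
Toy illustration (not from anyone's real project): a macro like this
expands into a single top-level form defining thousands of vars --
perfectly legal input that nobody would type by hand, and exactly the
sort of thing a compiler-imposed size limit would trip up:

(defmacro def-many [n]
  ;; expand into one big (do (def generated-0 0) (def generated-1 1) ...)
  (cons 'do
        (for [i (range n)]
          (list 'def (symbol (str "generated-" i)) i))))

(def-many 5000)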

-- 
Protege: What is this seething mass of parentheses?!
Master: Your father's Lisp REPL. This is the language of a true
hacker. Not as clumsy or random as C++; a language for a more
civilized age.
