Maybe it's only because I'm coming from Ruby, in which number promotion is automatic and everything is slow, but if I have to choose between correctness and performance as a *default*, I'll choose correctness every time. I think there's a good reason that GCC, for instance, makes you explicitly opt into aggressive optimization with flags like -O3 or -ffast-math if you want to squeeze extra performance out of a program and accept the corresponding brittleness that often comes with it. I also always thought that the transparent promotion of arithmetic was one of the strongest selling points of Common Lisp.
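To make the trade-off concrete, here's a small sketch in plain Java (since Clojure's numeric ops bottom out in JVM arithmetic anyway) contrasting the three behaviors under discussion: silent wraparound, a checked op that throws on overflow, and transparent promotion to a bignum. The class name and structure are just for illustration, not anything from Clojure's implementation.

```java
import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        // 1. Unchecked primitive arithmetic: fast, but silently wraps around.
        System.out.println(Long.MAX_VALUE + 1);  // wraps to Long.MIN_VALUE

        // 2. Checked arithmetic: still primitive speed, but fails loudly
        //    with an exception instead of producing a wrong answer.
        try {
            Math.addExact(Long.MAX_VALUE, 1L);
        } catch (ArithmeticException e) {
            System.out.println("checked op threw: " + e.getMessage());
        }

        // 3. Transparent promotion: always correct, at the cost of boxing,
        //    which is roughly what Ruby and Common Lisp give you by default.
        BigInteger promoted =
            BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE);
        System.out.println(promoted);
    }
}
```

The point being: options 2 and 3 are both "correct" in the sense that they never hand you a silently wrong number; they just differ in whether an out-of-range result is an error or a bignum.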
My impression has always been that performance of numerics is rarely the bottleneck in typical code (web stuff, text processing, network code etc), but that unexpected exceptions in such code are the source of a lot of programmer heartache. On the other hand, I think 99% of the cases in which I've had a number exceed a 64-bit value were also examples of errors that might as well have been exceptions, because they indicated a flaw in the code.