On 19 June 2010 15:26, cageface <milese...@gmail.com> wrote:
> Maybe it's only because I'm coming from Ruby, in which number
> promotion is automatic and everything is slow, but if I have to choose
> between correctness and performance as a *default*, I'll choose
> correctness every time. I think there's a good reason that GCC, for
> instance, makes you push the compiler harder with compiler flags if
> you want to squeeze extra performance out of a program and accept the
> corresponding brittleness that it often brings. I also always thought
> that the transparent promotion of arithmetic was one of the strongest
> selling points of Common Lisp.
>
> My impression has always been that performance of numerics is rarely
> the bottleneck in typical code (web stuff, text processing, network
> code etc), but that unexpected exceptions in such code are the source
> of a lot of programmer heartache. On the other hand, I think 99% of
> the cases in which I've had a number exceed a 64 bit value were also
> examples of errors that might as well have been exceptions because
> they indicated a flaw in the code.

s/Ruby/Python/ and this is pretty much precisely my view.

I haven't yet hit this situation in Clojure, but a common type of
program I write collects and summarises database stats for a range
of systems we support. Here, tablespace sizes range from a few
hundred MB to the half-terabyte mark. To handle figures like these,
big integers are a necessity - but common test cases don't need
them. I understand that, by judiciously scattering annotations
around my code, I can ensure that I don't hit primitive integer
size limits. I'm just not sure I'll understand numeric limits and
promotion behaviour well enough to know *where* to put the
annotations, so I'll likely scatter them around until I stop
hitting bugs. (I'd assume that people for whom numeric performance
is crucial will be a lot more expert in numeric behaviour, and so
better able to annotate precisely - but I accept that's nothing
more than an assumption.)
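
To make this concrete, here's roughly what I have in mind. This is
only a sketch based on my (possibly wrong) understanding of the
proposal - plain operators on primitives throw on overflow, while
the primed ones (+' and friends) promote:

    ;; ~512 GB in bytes - comfortably within a 64-bit long
    (def half-tb (* 512 1024 1024 1024))

    ;; Sum twenty million of these and the total exceeds
    ;; Long/MAX_VALUE. With overflow-checked primitive ops I'd
    ;; expect this to throw rather than silently wrap:
    ;;   (reduce + (repeat 20000000 half-tb))  ; ArithmeticException?

    ;; whereas the promoting operator should just hand back a bignum:
    (reduce +' (repeat 20000000 half-tb))      ; => a BigInt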

But suppose I want to use some library code - for example, something
like clojure.contrib.accumulators, to summarise my data. It's not
clear to me from the things I've read whether I can expect it to work
with the big numbers I'm giving it. The documentation says nothing, as
it was written before this change. So do I have to read and understand
the code? If library code *won't* "just work" by adapting to the types
given to it, will we end up with a split between bignum and primitive
libraries? It seems to me that many of these libraries would be just
as good for people wanting fast math as for people like me slinging
about "big" numbers like disk file sizes.

The "it's only about which is default" argument is fine (I contend
that people wanting performance are likely to be the more numerically
expert and hence the better capable to enable non-default behaviour,
but that's just my view). But if libraries get split into
bignum-supporting vs fastnum-supporting, that seems to me to be a far
more significant issue.

Paul.
