On 29 Sep 2009, at 5:46 pm, Brian Mastenbrook wrote:

> On Sep 28, 2009, at 5:16 PM, David Rush wrote:
>
>> Please no. It is far simpler to just provide the O(1) limiting size.
>> If inexacts are good enough for overflow they're good enough for the
>> entire computation.
>
> It's a mistake to assume that operations on fixnums are O(1). Your
> processor might actually take longer for a 32-bit multiply than it
> does for an addition, increment, compare, or any other operation. It
> might multiply 8-bit numbers faster than 32-bit numbers.
Yes, but big-O complexity hand-waves over all that: multiplying two
machine words on any CPU I've seen takes time with a small constant
bound. It might go faster if the CPU detects you're multiplying by 0,
or 1, or a power of 2, or a number with only a few bits set; but those
are all optimisations. There's a relatively small worst-case time, so
you can assume that, and be pleasantly surprised if it's faster.
Bignums, on the other hand, because their largest possible size is far
bigger than the *common* average sizes, have a worst-case running time
much higher than the expected case, so people are then interested in a
little more fine-grained analysis.

When I've said there may be some benefit, in Schemes whose fixnums
overflow to flonums, for the world of real-time systems, it's purely
because such a system can place an upper bound on the time taken to
perform arithmetic on its values - which may be either fixnums or
flonums - that's relatively low (see the sketch in the P.S. below),
and therefore it's practical to use that bound as the worst-case
timing for operations and still produce a useful system on affordable
amounts of silicon.

However, this is hardly a consideration for Schemes running on
workstation- or server-class processors, or even set-top-box, handheld
or industrial-control levels of embedded system, so it's not actually
worth all this fuss and debate IMHO...

> I think there's a fair amount of incorrect assumptions about
> arithmetic complexity floating around.

Definitely ;-)

> Arithmetic is fast on small numbers. Many algorithms are fast on
> small objects. This is no argument for or against anything.

Indeed.

ABS

--
Alaric Snell-Pym
Work: http://www.snell-systems.co.uk/
Play: http://www.snell-pym.org.uk/alaric/
Blog: http://www.snell-pym.org.uk/archives/author/alaric/
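P.S. To make the "fixnums overflowing to flonums" idea a bit more
concrete, here's a rough sketch in R6RS-ish Scheme. The procedure name
add/overflow-to-flonum is made up purely for illustration, and a real
implementation would detect overflow at the machine level rather than
by computing an exact sum first; the point is only that every path
does a bounded amount of work, because no result is ever promoted to a
bignum.

(import (rnrs base) (rnrs arithmetic fixnums))

(define (add/overflow-to-flonum a b)
  (if (and (fixnum? a) (fixnum? b))
      ;; Exact sum for illustration; a real implementation would just
      ;; test the hardware overflow flag after a machine-word add.
      (let ((sum (+ a b)))
        (if (and (<= sum (greatest-fixnum))
                 (>= sum (least-fixnum)))
            sum                 ; still a fixnum: return it exactly
            (inexact sum)))     ; out of range: fall back to a flonum
      ;; Anything already inexact just uses flonum arithmetic, which
      ;; is also constant-time on typical hardware.
      (+ (inexact a) (inexact b))))

;; (add/overflow-to-flonum 1 2)                 => 3
;; (add/overflow-to-flonum (greatest-fixnum) 1) => a flonum approximation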
