I've written a prototype lib that does arithmetic on rational numbers (fractions). I got the idea from the Maxima computer algebra system (http://maxima.sourceforge.net). It's templated to work on any integer type whose operators are properly overloaded, though in practice you'd probably want to use an arbitrary-precision type, since adding and subtracting fractions can yield really big numerators and denominators. And if you don't care that much about accuracy, floats are faster anyhow.
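For context, the core operation is the usual cross-multiply-and-reduce scheme, which is also where the numerator/denominator growth comes from. A minimal sketch follows; the `Fraction` struct and hand-rolled `gcd` here are illustrative only, not the lib's actual API:

```d
import std.bigint, std.stdio;

// Hand-rolled Euclidean gcd, since generic gcd support for BigInt
// may vary between Phobos versions.
BigInt gcd(BigInt a, BigInt b)
{
    while (b != 0)
    {
        auto t = a % b;
        a = b;
        b = t;
    }
    return a;
}

struct Fraction
{
    BigInt num, den;

    this(BigInt n, BigInt d)
    {
        // Reduce to lowest terms on construction.
        auto g = gcd(n, d);
        num = n / g;
        den = d / g;
    }

    Fraction opBinary(string op : "+")(Fraction rhs)
    {
        // a/b + c/d = (a*d + c*b) / (b*d); the constructor reduces,
        // but b*d can still be huge when b and d share no factors.
        return Fraction(num * rhs.den + rhs.num * den, den * rhs.den);
    }
}

void main()
{
    auto sum = Fraction(BigInt(1), BigInt(3)) + Fraction(BigInt(1), BigInt(6));
    writefln("%s / %s", sum.num, sum.den); // 1 / 2
}
```

With coprime denominators the reduced denominator is their full product, which is why an arbitrary-precision backing type matters.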
I'm still cleaning things up, etc., but usage is something like this:

import std.stdio, std.bigint, fractions;

void main()
{
    auto f1 = fraction(BigInt("314159265"), BigInt("27182818"));
    auto f2 = fraction(BigInt("8675309"), BigInt("362436"));
    f1 += f2;
    assert(f1 == fraction(BigInt("174840986505151"), BigInt("4926015912324")));

    // Print the result. Prints:
    // "174840986505151 / 4926015912324"
    writeln(f1);

    // Print the result in decimal form. Prints:
    // "35.4934"
    writeln(cast(real) f1);
}

Some questions for the community:

1. Does this look useful to anyone?

2. What might be some non-obvious key features for a lib like this?

3. What is the status of arbitrary-precision integer arithmetic in D2? Will we be getting something better than std.bigint in the foreseeable future? This lib isn't very useful without a fast BigInt underneath it.

4. There is one small part (conversion to float) where I had to assume the BigInt implementation was the one in std.bigint, in order to cast certain division results back to native types. Will there eventually be a de facto standard way to cast BigInts to native types, so I can get rid of this dependency?

5. Is there any use for approximate rational arithmetic built on machine-sized integers? For example, if adding two fractions would generate an overflow, try to find the closest answer that wouldn't? I would guess that if you want to do something like this, you're better off just using floats, but I could be wrong.
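Regarding question 4, one implementation-agnostic workaround is to go through BigInt's decimal-string rendering rather than its internals. This is slow and loses precision for huge values, so it's only a sketch of the problem, not a proposed solution; `toReal` is a hypothetical helper:

```d
import std.bigint, std.conv, std.stdio;

// Hypothetical helper: convert a BigInt ratio to real without
// touching std.bigint internals, by round-tripping through strings.
// Precision is limited to what to!real can parse, and performance
// is poor for large operands.
real toReal(BigInt num, BigInt den)
{
    return to!real(num.toDecimalString) / to!real(den.toDecimalString);
}

void main()
{
    // The ratio from the usage example above; prints approximately
    // the same 35.4934 shown there.
    writefln("%.4f", toReal(BigInt("174840986505151"),
                            BigInt("4926015912324")));
}
```

A real fix would be a standard `cast(real)`/`opCast` on BigInt itself, which is exactly what the question is asking about.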