Be aware: Bernard Parisse has just helped me track down why the Flint timings 
for the sparse division-only benchmark looked so ridiculously low. It turns 
out that, due to an accident of interfacing between Nemo and Flint, the 
benchmark was using reflected lexicographical ordering instead of true 
lexicographical ordering. If I had labelled the timings "exact division" 
instead of "quotient only" and not included the x^(n - 3) term in the 
benchmark itself, they could be considered correct (though Giac would also 
have been able to do the computation much faster in that case). Unfortunately, 
this discovery means I had to change the Flint timings for that benchmark; 
the corrected figures are now on the blog.
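
For anyone who wants to see the difference concretely, here is a minimal 
sketch in plain Python (purely illustrative; the names lex_key and 
reflected_lex_key are made up, and this is not the actual Nemo/Flint 
interface code). The point is simply that reversing the variable order when 
comparing exponent vectors changes which monomial is the leading term, so a 
computation driven by leading terms can behave very differently under the 
two orderings.

    # Exponent vectors are written with respect to the variables (x, y),
    # ordered x > y. "True" lex compares from the first (most significant)
    # variable; reflected (inverse) lex compares from the last variable.

    def lex_key(expvec):
        return tuple(expvec)

    def reflected_lex_key(expvec):
        return tuple(reversed(expvec))

    m1 = (2, 1)  # x^2 * y
    m2 = (1, 2)  # x * y^2

    print(max([m1, m2], key=lex_key))            # (2, 1): x^2*y leads under lex
    print(max([m1, m2], key=reflected_lex_key))  # (1, 2): x*y^2 leads under reflected lex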

The timings for mppp are really good. I'm surprised you beat the Flint 
timings there, since we use pretty sophisticated templating in our C++ 
interface. But clearly there are tricks we missed!

Bill. 

On Wednesday, 12 July 2017 12:16:33 UTC+2, bluescarni wrote:
>
> Interesting timings; they give me some motivation to revisit the dense 
> multiplication algorithm in piranha :)
>
> As an aside (and apologies if this is a slight thread hijack), I have 
> been spending some time in the last few weeks decoupling the multiprecision 
> arithmetic bits from piranha into their own project, called mp++:
>
> https://github.com/bluescarni/mppp
>
> So far I have extracted the integer and rational classes, and I am 
> currently working on the real class (arbitrary precision FP).
>
> Cheers,
>
>   Francesco.
>
