So why is it faster than Flint, say? Leaving aside the overhead of the 
Flint fmpz type, which uses a single word initially and only upgrades to an 
mpz_t on overflow, mp++ should currently be doing more allocations than 
Flint. And Flint should be faster for something like a dot product, 
especially if the integers are all small, since it never actually allocates 
mpz_t's in that case. What is the new innovation?
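
For reference, the trick I mean is roughly the following (a sketch of the 
general small-value optimisation, not Flint's actual fmpz encoding, which 
packs the tag and value into a single word):

  #include <gmp.h>

  // A machine word is kept inline; an mpz_t is allocated only on overflow.
  // Copying and most arithmetic are omitted for brevity.
  struct small_int {
      bool big = false; // false: value lives in 'small'; true: in 'large'
      long small = 0;
      mpz_t large;      // initialized only once 'big' becomes true

      small_int() = default;
      small_int(const small_int &) = delete;
      small_int &operator=(const small_int &) = delete;

      void add(long x)
      {
          long r;
          // __builtin_add_overflow is a GCC/Clang builtin.
          if (!big && !__builtin_add_overflow(small, x, &r)) {
              small = r; // fast path: no allocation at all
          } else {
              if (!big) {
                  mpz_init_set_si(large, small);
                  big = true;
              }
              if (x >= 0)
                  mpz_add_ui(large, large, (unsigned long) x);
              else
                  mpz_sub_ui(large, large, -(unsigned long) x);
          }
      }

      ~small_int()
      {
          if (big)
              mpz_clear(large);
      }
  };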

Bill.

On Wednesday, 12 July 2017 16:00:16 UTC+2, bluescarni wrote:
>
> In the benchmarks I use the C++ interfaces of FLINT and 
> Boost.Multiprecision only for ease of initialization/destruction; the bulk 
> of the operations is performed directly through the C APIs of FLINT and 
> GMP. mp++ itself has some moderate template metaprogramming in place, but 
> it currently lacks expression template support (unlike fmpzxx), the focus 
> at the moment being on fast low-level primitives (add/sub/mul/addmul, 
> etc.).
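>
> For illustration, the hot loop of such a benchmark looks roughly like 
> this (a sketch with hypothetical names, not the actual benchmark code):
>
>   #include <cstddef>
>   #include <gmp.h>
>
>   // Accumulate sum += a[i] * b[i] directly through GMP's C API, so the
>   // timing reflects the low-level primitives rather than wrapper overhead.
>   void dot_product(mpz_t sum, const mpz_t *a, const mpz_t *b, std::size_t n)
>   {
>       mpz_set_ui(sum, 0u);
>       for (std::size_t i = 0; i < n; ++i) {
>           mpz_addmul(sum, a[i], b[i]); // fused multiply-accumulate primitive
>       }
>   }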
>
> Cheers,
>
>   Francesco.
>
> On 12 July 2017 at 15:13, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com> wrote:
>
>> Beware: Bernard Parisse has just helped me track down why the Flint 
>> timings for the sparse division-only benchmark looked so ridiculously low. 
>> It turns out that, due to an accident of interfacing between Nemo and 
>> Flint, it was using reflected lexicographical ordering instead of true 
>> lexicographical ordering. If I had labelled them "exact division" instead 
>> of "quotient only", and not included the x^(n - 3) term in the benchmark 
>> itself, the timings could be considered correct (though Giac would also 
>> have been able to do the computation much faster in that case). But 
>> unfortunately this discovery means I had to change the Flint timings for 
>> that benchmark. They are now correct on the blog.
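>>
>> Concretely, if I understand the mix-up correctly, the difference is just 
>> which end of the exponent vector gets compared first; roughly (hypothetical 
>> helpers, not the actual Nemo/Flint code, assuming equal-length vectors):
>>
>>   #include <cstddef>
>>   #include <vector>
>>
>>   // True lex compares exponents starting from the first variable...
>>   int lex_cmp(const std::vector<int> &a, const std::vector<int> &b)
>>   {
>>       for (std::size_t i = 0; i < a.size(); ++i)
>>           if (a[i] != b[i])
>>               return a[i] < b[i] ? -1 : 1;
>>       return 0;
>>   }
>>
>>   // ...while reflected lex is lex with the variables reversed, i.e. it
>>   // compares starting from the last variable.
>>   int reflected_lex_cmp(const std::vector<int> &a, const std::vector<int> &b)
>>   {
>>       for (std::size_t i = a.size(); i-- > 0; )
>>           if (a[i] != b[i])
>>               return a[i] < b[i] ? -1 : 1;
>>       return 0;
>>   }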
>>
>> The timings for mppp are really good. I'm surprised you beat the Flint 
>> timings there, since we use pretty sophisticated templating in our C++ 
>> interface. But clearly there are tricks we missed!
>>
>> Bill. 
>>
>> On Wednesday, 12 July 2017 12:16:33 UTC+2, bluescarni wrote:
>>>
>>> Interesting timings! They give me some motivation to revisit the dense 
>>> multiplication algorithm in piranha :)
>>>
>>> As an aside (and apologies if this is a slight thread hijack?), I have 
>>> been spending some time in the last few weeks decoupling the multiprecision 
>>> arithmetic bits from piranha into its own project, called mp++:
>>>
>>> https://github.com/bluescarni/mppp
>>>
>>> So far I have extracted the integer and rational classes, and I am 
>>> currently working on the real class (arbitrary-precision FP).
>>>
>>> Cheers,
>>>
>>>   Francesco.
>>>
