On 10 June 2015 at 02:32, John Colvin via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:
> On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
>>
>> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
>> <digitalmars-d@puremagic.com> wrote:
>>>
>>>
>>>> I believe that Phobos must support some common methods of linear algebra
>>>> and general mathematics. I have no desire to join D with Fortran libraries
>>>> :)
>>>
>>>
>>>
>>> D definitely needs BLAS API support for matrix multiplication. The best
>>> BLAS libraries, like OpenBLAS, are written in assembler. Otherwise D will
>>> place last in the corresponding maths benchmarks.
>>
>>
>> A complication for linear algebra (or other mathsy things in general)
>> is the inability to detect and implement compound operations.
>> We don't declare mathematical operators to be algebraic operations,
>> which I think is a lost opportunity.
>> If we defined the operators along with their properties
>> (commutativity, associativity, invertibility, etc), then the compiler
>> could potentially do an algebraic simplification on expressions before
>> performing codegen and optimisation.
>> There are a lot of situations where the optimiser can't simplify
>> expressions because it runs into an arbitrary function call, and I've
>> never seen an optimiser that understands exp/log/roots, etc, to the
>> point where it can reduce those expressions properly. To compete with
>> maths benchmarks, we need some means to simplify expressions properly.
>
>
> Optimising floating point is a massive pain because of precision concerns
> and IEEE-754 conformance. Just because something is analytically the same
> doesn't mean you want the optimiser to go ahead and make the switch for you.

We have flags to control this sort of thing (fast-math, strict ieee, etc).
I will worry about my precision; I just want the optimiser to do its
job and do the very best it possibly can. In the case of linear
algebra, the optimiser generally fails, and I have to simplify
expressions by hand as much as possible.
When the expressions emerge from a series of inlines, or from generic
code (the sort that appears frequently in stream/range-based
programming), there's nothing you can do except flatten and unroll
your work loops yourself.
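As an illustration (a minimal sketch of my own, not from any particular codebase): identities like exp(a) * exp(b) == exp(a + b) are analytically trivial, but because exp is an opaque function call to the optimiser, the fusion has to be done by hand.

```d
import std.math : exp, isClose;

// Analytically, exp(a) * exp(b) == exp(a + b), but optimisers treat
// exp as an opaque call and won't fuse the two invocations.
double naive(double a, double b)
{
    return exp(a) * exp(b);   // two transcendental calls
}

double simplified(double a, double b)
{
    return exp(a + b);        // one call, same analytic result
}

void main()
{
    assert(isClose(naive(1.5, 2.5), simplified(1.5, 2.5)));
}
```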

> Of the things that can be done, lazy operations should make it
> easier/possible for the optimiser to spot.

My experience is that they possibly make it harder, although I don't
know why. I find the compiler becomes very unpredictable when
optimising deep lazy expressions. Perhaps the backend inline
heuristics aren't tuned for typical D expressions of this kind?
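For instance, a dot product written as a deep lazy pipeline (a purely illustrative sketch; the function name is mine) leans entirely on the inliner to collapse several layers of range adapters into one tight loop:

```d
import std.algorithm : fold, map;
import std.range : zip;

// zip -> map -> fold: whether this collapses into a single loop depends
// entirely on the backend inlining each adapter's front/popFront/empty.
double lazyDot(double[] a, double[] b)
{
    return zip(a, b)
        .map!(t => t[0] * t[1])
        .fold!((acc, x) => acc + x)(0.0);
}

void main()
{
    assert(lazyDot([1.0, 2.0], [3.0, 4.0]) == 11.0); // 1*3 + 2*4
}
```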

I often wish I could address common compound operations myself, by
implementing something like a compound operator which I could
special-case with an optimised path for particular expressions. But I
can't think of any reasonable way to approach that.
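One partial workaround today is the expression-template idiom (this is a hypothetical sketch with invented names, not a proposal for the language): have operators return lazy nodes, so a compound pattern like a * b + c can be recognised and dispatched to a fused path at the point it is consumed.

```d
// Minimal expression-template sketch: a * b builds a lazy node instead
// of computing, so (a * b) + c can be special-cased as a fused op.
struct Vec
{
    double[3] data;

    // a * b yields a lazy Mul node, not a result
    auto opBinary(string op : "*")(Vec rhs) const
    {
        return Mul(this, rhs);
    }
}

struct Mul
{
    Vec l, r;

    // (a * b) + c is recognised here as a compound multiply-add
    Vec opBinary(string op : "+")(Vec c) const
    {
        Vec result;
        foreach (i; 0 .. 3)
            result.data[i] = l.data[i] * r.data[i] + c.data[i]; // fused path
        return result;
    }
}

void main()
{
    auto a = Vec([2.0, 3.0, 4.0]);
    auto b = Vec([5.0, 6.0, 7.0]);
    auto c = Vec([1.0, 1.0, 1.0]);
    Vec r = a * b + c;  // dispatches to the fused overload
    assert(r.data == [11.0, 19.0, 29.0]);
}
```

The catch, of course, is that every interesting pattern needs its own overload, which doesn't scale the way compiler-level algebraic knowledge would.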
