Manu wrote:
> On 3 November 2012 01:41, Walter Bright <newshou...@digitalmars.com> wrote:
>
> > On 11/2/2012 3:10 PM, Jens Mueller wrote:
> >
> >> I see. Thanks for clarifying.
> >> If I want fast vector operations I have to use core.simd. The built-in
> >> vector operations won't fit the bill.
> >>
> >
> I think a better quote would be "If I want *HARDWARE* vector
> operations..."; this is not automatically faster by nature. It requires
> strict self-control in how it is applied, and very careful attention if
> you want your code to be portable.
>
> > At the moment, yes.
> >
> > However, Manu is working on developing a higher order layer.
>
> I have a fork; some people are using it already. It still needs a lot of
> work though: some compilers are missing parts, and some platforms aren't
> supported yet.
> That said, it's not an effort to address D's natural vector syntax; the
> key goal is to provide a hardware SIMD API that is as orthogonal and
> portable as possible (with confidence that it will run reasonably well).
> I wonder, though, whether druntime could be enhanced to use the SIMD
> stuff in the functions that perform the natural vector operations; that
> might offer some nice little boosts.
Cool. It would be nice, though, if D's built-in vector operations could be
expressed on top of it; a[] + b[] is so much easier to read than explicit
SIMD code (a rough comparison is sketched below).

Jens
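To make that contrast concrete, here is a minimal sketch using plain
core.simd as shipped with druntime (not Manu's fork). The function names
addArrays/addArraysSimd are invented for illustration, and the SIMD version
assumes x86 with SSE, 16-byte-aligned data, and a length that is a multiple
of four:

import core.simd;

// Built-in array operation syntax: the druntime array-op runtime does the
// looping (and, ideally, the vectorisation) for you.
void addArrays(float[] dst, const(float)[] a, const(float)[] b)
{
    dst[] = a[] + b[];
}

// Hand-written core.simd equivalent: one hardware vector add per four
// floats. Assumes 16-byte-aligned inputs and dst.length % 4 == 0;
// real code must also handle unaligned data and leftover elements.
version (D_SIMD)
void addArraysSimd(float[] dst, const(float)[] a, const(float)[] b)
{
    foreach (i; 0 .. dst.length / 4)
    {
        float4 va = *cast(const(float4)*)&a[i * 4];
        float4 vb = *cast(const(float4)*)&b[i * 4];
        *cast(float4*)&dst[i * 4] = va + vb;
    }
}

The second version is roughly what the "strict self-control" mentioned above
refers to: its correctness depends on alignment and length guarantees that
the one-line array operation never exposes to the caller.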