f16 in AVX?

~greg heil
picsrp.github.io
i.tgu.ca/real_cal

--

from: Eric Iverson <eric.b.iver...@gmail.com>
date: Feb 28, 2022, 10:36 AM
subject: [Jprogramming] arrayfire addon updated

arrayfire addon updated with minor fixes

>The main change is that matmul now has wrappers for best performance with either
>f64 or f32 (they avoid the expensive conversion of row major to column major with
>square matrices by using the matmul transpose flag).
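
(Not part of the addon, just plain J:) the layout trick rests on the identity
AB = ((B^T)(A^T))^T, so a row-major buffer handed to a column-major engine can be
used without physically reordering it. You can check the identity with standard
primitives; the names A, B and the small size here are mine, not from the addon:

  mp =: +/ . *                        NB. matrix product
  A =: ? 4 4 $ 0                      NB. random square f64 matrices
  B =: ? 4 4 $ 0
  (A mp B) -: |: (|: B) mp (|: A)     NB. 1: product of transposes, reversed and transposed back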

Get started:
  load '~addons/math/arrayfire/arrayfire.ijs'
...
  init_jaf_ 'cpu' NB. or 'cuda' if you have an nvidia card and cuda software installed
  tut_jaf_ '' NB. tutorials
...
  tut_jaf_ 'qmp' NB. matmul tutorial
...
  tut_jaf_ 'xaf_cpp' NB. C++ tutorial that opens up new possibilities
...
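
For a rough CPU baseline to compare against whatever the 'qmp' tutorial reports,
plain J works in any session (no addon verbs assumed; the 1000x1000 size is
arbitrary):

  A =: ? 1000 1000 $ 0
  B =: ? 1000 1000 $ 0
  6!:2 'A (+/ . *) B'   NB. seconds for one f64 matrix product in base J
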
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
