If you get them all to export the same API, you could, in principle, just 
switch `using VML` to `using Yeppp`.

My question: are we finally conceding that `add!` and co. are probably worth 
having?
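
The idea can be sketched as follows. This is a hypothetical back-end, not an 
existing package; `add!` and the module name `NaiveBackend` are assumed names, 
and VML.jl/Yeppp.jl would only be drop-in replacements if they agreed to 
export the same functions:

```julia
# Hypothetical reference back-end exporting a common in-place API.
module NaiveBackend

export add!

# In-place elementwise addition: dest[i] = x[i] + y[i].
function add!(dest::AbstractVector, x::AbstractVector, y::AbstractVector)
    length(dest) == length(x) == length(y) || error("length mismatch")
    @inbounds for i in 1:length(x)
        dest[i] = x[i] + y[i]
    end
    return dest
end

end # module

# Swapping back-ends would then just be swapping this line,
# e.g. for `using VML` or `using Yeppp`, if they exported add!.
using .NaiveBackend

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
dest = similar(x)
add!(dest, x, y)  # dest == [5.0, 7.0, 9.0]
```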

 — John

On Feb 28, 2014, at 7:10 AM, Dahua Lin <linda...@gmail.com> wrote:

> This is very nice.
> 
> Now that we have several "back-ends" for vectorized computation (VML, Yeppp, 
> Julia's builtin functions, as well as the @simd-ized versions), I am 
> wondering whether there is a way to switch back-ends without affecting 
> client code.
> 
> - Dahua
> 
> 
> On Thursday, February 27, 2014 11:58:20 PM UTC-6, Stefan Karpinski wrote:
> We really need to stop using libm for those.
> 
> 
> On Fri, Feb 28, 2014 at 12:40 AM, Simon Kornblith <si...@simonster.com> wrote:
> Some of the poorest performers here are trunc, ceil, floor, and round (as of 
> LLVM 3.4). We currently call out to libm for these, but there are LLVM 
> intrinsics that are optimized into a single instruction. It looks like the 
> loop vectorizer may even vectorize these intrinsics automatically.
> 
> Simon
> 
> 
> On Thursday, February 27, 2014 9:04:18 PM UTC-5, Simon Kornblith wrote:
> I created a package that makes Julia use Intel's Vector Math Library for 
> operations on arrays of Float32/Float64. VML provides vectorized versions of 
> many of the functions in openlibm with equivalent (<1 ulp) accuracy (in 
> VML_HA mode). The speedup is often quite stunning; see benchmarks at 
> https://github.com/simonster/VML.jl.
> 
> Simon
> 
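
A minimal sketch of the rounding point above: calling `floor` elementwise in a 
tight loop, which the loop vectorizer can lower to vector rounding instructions 
instead of per-element libm calls. The function name `floor_all!` is 
illustrative, and the actual codegen depends on the LLVM version:

```julia
# Elementwise floor over an array; written so the loop can vectorize.
function floor_all!(out::Vector{Float64}, x::Vector{Float64})
    length(out) == length(x) || error("length mismatch")
    @inbounds @simd for i in 1:length(x)
        out[i] = floor(x[i])
    end
    return out
end

x = [1.2, -0.7, 3.9]
floor_all!(similar(x), x)  # -> [1.0, -1.0, 3.0]
```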
