-----Original Message-----
From: Ulanov, Alexander
Sent: Wednesday, March 25, 2015 2:31 PM
To: Sam Halliday
Cc: dev@spark.apache.org; Xiangrui Meng; Joseph Bradley; Evan R. Sparks;
jfcanny
Subject: RE: Using CUDA within Spark / boosting linear algebra
Hi again,
I finally managed to use nvblas within Spark+netlib.
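For anyone trying to reproduce this: NVBLAS is typically wired in by giving it a CPU BLAS to fall back to and preloading it so its BLAS3 symbols are resolved first. A minimal sketch follows; the library paths are examples for a typical Linux install, not the exact setup used in this thread:

```shell
# Write a minimal nvblas.conf: NVBLAS intercepts BLAS3 calls and needs
# a CPU BLAS library for everything it does not handle on the GPU.
cat > nvblas.conf <<'EOF'
NVBLAS_LOGFILE nvblas.log
NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
NVBLAS_GPU_LIST ALL
EOF
# Point NVBLAS at the config and preload it so its dgemm/dtrsm symbols
# shadow the CPU library's when the JVM loads its native BLAS:
export NVBLAS_CONFIG_FILE="$PWD/nvblas.conf"
export LD_PRELOAD=/usr/local/cuda/lib64/libnvblas.so
```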
On an even deeper level, using natives has consequences for JIT and GC that aren't suitable for everybody, and we'd really like people to go into that with their eyes wide open.
On 26 Mar 2015 07:43, Sam Halliday sam.halli...@gmail.com wrote:
I'm not at all surprised ;-) I fully expect the GPU
On Wed, Mar 25, 2015 at 3:07 PM, Sam Halliday sam.halli...@gmail.com
wrote:
Yeah, MultiBLAS... it is dynamic.
Except, I haven't written it yet :-P
On 25 Mar 2015 22:06, Ulanov, Alexander alexander.ula...@hp.com
wrote:
Netlib knows nothing about the GPU (or CPU); it just uses cblas symbols from BLAS.
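A quick way to see this in practice is to list the dynamic symbols a BLAS shared library exports: netlib-java binds at load time to whichever library provides those cblas_/dgemm_ symbols, regardless of what implements them. A sketch; the library path is a common Debian/Ubuntu location and varies by distro:

```shell
# List exported dgemm-related symbols from the system BLAS, if present.
# The path below is an example, not universal.
BLAS_LIB=/usr/lib/x86_64-linux-gnu/libblas.so.3
if [ -e "$BLAS_LIB" ]; then
  nm -D "$BLAS_LIB" | grep -i dgemm
else
  echo "no BLAS at $BLAS_LIB; adjust the path for your system"
fi
```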
Best regards, Alexander
-----Original Message-----
From: Ulanov, Alexander
Sent: Tuesday, March 24, 2015 6:57 PM
To: Sam Halliday
Cc: dev@spark.apache.org; Xiangrui Meng; Joseph Bradley; Evan R. Sparks
Subject: RE: Using CUDA within Spark / boosting linear algebra
Hi,
I am trying
/edit?usp=sharing
Best regards, Alexander
-----Original Message-----
From: Sam Halliday [mailto:sam.halli...@gmail.com]
Sent: Tuesday, March 03, 2015 1:54 PM
To: Xiangrui Meng; Joseph Bradley
Cc: Evan R. Sparks; Ulanov, Alexander; dev@spark.apache.org
Subject: Re: Using CUDA within Spark
suggested. I wonder, are John Canny (BIDMat) and Sam Halliday
(Netlib-java) interested in comparing their libraries?
Best regards, Alexander
From: Evan R. Sparks [mailto:evan.spa...@gmail.com]
Sent: Friday, February 06, 2015 5:58 PM
To: Ulanov, Alexander
Cc: Joseph
to load
nvblas.so first and then some CPU BLAS library in JNI. I wonder
whether the setup was correct.
Alexander, could you check whether the GPU is used in the netlib-cublas
experiments? You can tell by watching CPU/GPU usage.
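One way to do that check, assuming an NVIDIA driver is installed (the flags below are standard nvidia-smi query options):

```shell
# Snapshot GPU utilization and memory while the benchmark is running;
# utilization stuck near 0% suggests the BLAS calls never reached the GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv
else
  echo "nvidia-smi not found; no NVIDIA driver on this machine"
fi
```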
Best,
Xiangrui
On Thu, Feb 26, 2015 at 10:47 PM, Sam Halliday
seconds.
I'm not sure whether my CPU-GPU-CPU code simulates the netlib-cublas
path. But based on the result, the data copying overhead is definitely
not as big as 20x at n = 1.
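A back-of-envelope model makes the scaling intuitive: an n x n dgemm moves O(n^2) bytes over PCIe but does O(n^3) flops on the GPU, so the copy share of total time shrinks roughly as 1/n. The throughput numbers below are illustrative assumptions, not measurements from this thread:

```python
# Back-of-envelope model of CPU->GPU->CPU dgemm transfer overhead.
# Assumed figures (illustrative only): ~6 GB/s effective PCIe bandwidth,
# ~500 double-precision GFLOP/s on the GPU.
PCIE_BYTES_PER_S = 6e9
GPU_FLOPS = 500e9

def copy_vs_compute(n: int) -> float:
    """Ratio of host-device transfer time to compute time for n x n dgemm."""
    bytes_moved = 3 * 8 * n * n   # A and B in, C out; 8-byte doubles
    flops = 2 * n ** 3            # multiply-adds in a dense dgemm
    t_copy = bytes_moved / PCIE_BYTES_PER_S
    t_compute = flops / GPU_FLOPS
    return t_copy / t_compute

# The ratio shrinks as 1/n: transfers dominate for small matrices only.
for n in (256, 1024, 4096, 16384):
    print(n, round(copy_vs_compute(n), 3))
```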
Best,
Xiangrui
On Thu, Feb 26, 2015 at 2:21 PM, Sam Halliday sam.halli...@gmail.com
wrote:
I've had some email
precompiled MKL BLAS performs better than precompiled OpenBLAS, given that
BIDMat and Netlib-java are supposed to be on par with JNI overheads.
Though, it might be interesting to link Netlib-java with Intel MKL, as
you suggested. I wonder, are John Canny (BIDMat) and Sam Halliday
(Netlib-java) interested in comparing their libraries?
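In case it helps with that experiment: netlib-java's NativeSystemBLAS binds to whatever the system libblas resolves to, so one sketched (untested here) way to get MKL underneath it on Linux is to swap the system BLAS for MKL's single dynamic library. The MKL install path and the application jar below are hypothetical:

```shell
# Point the system BLAS alternative at MKL's single dynamic library
# (MKL path is an example for a typical install):
sudo update-alternatives --install /usr/lib/libblas.so.3 libblas.so.3 \
  /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
# Ask netlib-java for the system BLAS instead of its bundled F2J fallback:
java -Dcom.github.fommil.netlib.BLAS=com.github.fommil.netlib.NativeSystemBLAS \
  -jar app.jar   # app.jar is a placeholder for the benchmark application
```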
Best regards, Alexander
From: Evan R. Sparks [mailto:evan.spa...@gmail.com]
Sent: Friday, February 06, 2015 5:58 PM
To be sure, I am going to ask
the developer of BIDMat at his upcoming talk.
Best regards, Alexander
From: Sam Halliday [mailto:sam.halli...@gmail.com]
Sent: Thursday, February 26, 2015 1:56 PM
To: Xiangrui Meng
Cc: dev@spark.apache.org; Joseph Bradley; Ulanov, Alexander; Evan R. Sparks
Subject