replace CUSPARSE?
Regards,
Keita Teranishi
Scientific Library Group
Cray Inc.
keita at cray.com
==
Hello,
I'd like to know if you have any plan to release the pipelined GMRES method recently published by Intel Research in Europe.
If so, are you going to include it in the next release?
Thank you,
Keita Teranishi
Scientific Library Group
Cray Inc.
keita at cray.com
…with 32- and 64-bit integers.
Thanks,
Keita Teranishi
Scientific Library Group
Cray Inc.
keita at cray.com
==
Barry,
As long as the GPU stuff can coexist with the existing external packages (SuperLU, MUMPS, etc.), I am very happy to grab it.
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
Barry,
Do you have any plan to introduce new features of MPICH3?
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at mcs.anl.gov
Hi,
Does the AIJ format of PETSc allow a sparse matrix in which the nonzero
elements in each row are not ordered by increasing column index?
I think it's OK for MatMult, but how about other routines?
Thank you,
====
Keita Teranishi
Scientific Library Group
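Regarding the MatMult half of the question above: a plain-Python CSR sketch (not PETSc code; the 3x3 matrix here is made up for illustration) shows why the product is insensitive to the ordering of column indices within a row. Each output entry is just a sum of val*x[col] terms, and the order of addition does not change the exact result.

```python
# CSR (compressed sparse row) matrix-vector product. Columns within a
# row are visited in storage order, so sorting is irrelevant to MatMult.

def csr_matmult(row_ptr, col_idx, vals, x):
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += vals[k] * x[col_idx[k]]
        y.append(s)
    return y

# Matrix [[1, 0, 2], [0, 3, 0], [4, 5, 6]], with rows 0 and 2 stored
# with their column indices deliberately out of order:
row_ptr = [0, 2, 3, 6]
col_idx = [2, 0, 1, 2, 0, 1]
vals = [2.0, 1.0, 3.0, 6.0, 4.0, 5.0]
x = [1.0, 1.0, 1.0]
print(csr_matmult(row_ptr, col_idx, vals, x))  # [3.0, 3.0, 15.0]
```

Other routines (factorizations, or anything that searches within a row) may well assume sorted column indices, so this sketch only addresses the MatMult part of the question.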
Does this version of PETSc use the timers from CUDA?
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-boun...@mcs.anl.gov]
On Behalf Of Barry Smith
Sent: Tuesday
[petsc-dev] [GPU] Performance of ex19
Your MatMult is now slower. Are your results reproducible? If you run 5 times,
how similar are they?
Barry
On Aug 31, 2010, at 2:57 PM, Keita Teranishi wrote:
[garbled -log_summary excerpt: a VecDot line (2 calls) and another event reporting 3.53e+07 flops at about 2812 MFlop/s]
On Aug 31, 2010, at 2:45 PM, Keita Teranishi wrote:
Barry,
Your performance data is identical to mine. Could you repost?
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-boun
with the options ./ex19 -da_vec_type seqcuda -da_mat_type
seqaijcuda -pc_type none -dmmg_nlevels 1 -da_grid_x 100 -da_grid_y 100
-log_summary -mat_no_inode -preload off -cuda_synchronize
On Aug 31, 2010, at 11:45 AM, Keita Teranishi wrote:
Hi PETSc Developer team,
I have just measured the performance of ex19 program running on Fermi GPU. I
hope it will help you to develop GPU-enabled PETSc further.
Thanks,
Keita
./ex19 -pc_type jacobi -dmmg_nlevels 5 -da_vec_type cuda -da_mat_type aijcuda
-log_summary -cuda_synchronize
This particular case is just a mistake in the configure script (and it's not a big
deal to fix), but it would be great if you have any ideas for avoiding picking up
wrong settings.
Thanks,
====
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
Yes, I replaced all the compiler flags with -O3.
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: Jed Brown [mailto:five...@gmail.com] On Behalf Of Jed Brown
Sent: Friday
Barry,
The CPU timing I reported was after recompiling the code (I removed
PETSC_USE_DEBUG and GDB macros from petscconf.h).
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
[0]PETSC ERROR: Argument out of range!
[0]PETSC ERROR: Given Bad partition!
[0]PETSC ERROR:
====
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
Barry,
The CPU version takes an order of magnitude longer: 1.6 sec on Fermi versus 17 sec on 1 CPU core.
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at
Barry,
Yes. It improves the performance dramatically, but the execution time for
KSPSolve stays the same.
MatMult: 5.2 Gflop/s
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
: 1.5 sec
1 core Istanbul: 1.7 sec
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-boun...@mcs.anl.gov]
On
Barry,
I already see a big difference in MatMult routine of
ksp/ksp/examples/tutorials/ex2.c, and I am very happy to try that example
program.
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
Satish,
Now I got the latest copy using mercurial. Thanks!
I am going to check the performance with Fermi. Is there any command-line
option available to switch to CUSP? Or do I have to apply MatConvert() with PETSc
function calls?
Thanks,
Keita Teranishi
Satish,
Thanks. I do not see a Mercurial package for SUSE; let me see if it works.
Regards,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at
Satish,
I still have the same error with the tar ball I downloaded several minutes ago.
Regards,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at
I downloaded a nightly tar ball.
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-boun...@mcs.anl.gov]
On Behalf Of
Hi,
I haven't been able to run the configure script of petsc-dev. The error
message is, "No module named cmakegen." What does the message mean?
Thanks,
========
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
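As an aside on the error above: "No module named cmakegen" is Python's standard ImportError wording, meaning the configure script (which runs under Python) could not find a module named cmakegen on its search path, typically a sign of an incomplete checkout or tarball. A generic sketch of the failure mode, not PETSc's actual configure code (the module name is taken only from the reported message):

```python
# Python raises ImportError when a module cannot be found on sys.path;
# configure surfaces that exception text as its error message.
try:
    import cmakegen  # expected to fail when the module is absent
except ImportError as exc:
    print("configure-style failure:", exc)
```

If the module genuinely ships with the distribution, the usual suspects are a damaged tarball or a PYTHONPATH that hides it.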
The default solver is GMRES, which can solve indefinite systems.
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-boun...@mcs.anl.gov]
On Behalf Of
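The point above, that GMRES needs only a nonsingular matrix rather than a definite one, can be illustrated with a toy example. Below is a minimal full-GMRES sketch in plain Python (not PETSc's implementation; the 2x2 matrix, right-hand side, and tolerance are made up for illustration) applied to an indefinite matrix with eigenvalues 2 and -1, which would rule out CG but not GMRES.

```python
# Minimal full GMRES (no restarts): Arnoldi via modified Gram-Schmidt,
# then a small least-squares solve for the residual-minimizing iterate.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def solve_dense(M, rhs):
    # Tiny Gaussian elimination with partial pivoting, used only for
    # the small normal-equations system below.
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gmres(A, b):
    n = len(b)
    beta = norm(b)
    V = [[bi / beta for bi in b]]   # orthonormal Krylov basis
    H = []                          # columns of the Hessenberg matrix
    for j in range(n):
        w = matvec(A, V[j])
        col = []
        for v in V:                 # modified Gram-Schmidt
            h = dot(w, v)
            col.append(h)
            w = [wi - h * vi for wi, vi in zip(w, v)]
        hlast = norm(w)
        col.append(hlast)
        H.append(col)
        if hlast < 1e-12:           # happy breakdown: solution is exact
            break
        V.append([wi / hlast for wi in w])
    m = len(H)
    rows = m + 1
    Hbar = [[H[c][r] if r < len(H[c]) else 0.0 for c in range(m)]
            for r in range(rows)]
    g = [beta] + [0.0] * (rows - 1)
    # Solve min ||g - Hbar y|| via normal equations (fine at this size).
    N = [[sum(Hbar[r][i] * Hbar[r][j] for r in range(rows))
          for j in range(m)] for i in range(m)]
    rhs = [sum(Hbar[r][i] * g[r] for r in range(rows)) for i in range(m)]
    y = solve_dense(N, rhs)
    return [sum(y[j] * V[j][i] for j in range(m)) for i in range(n)]

A = [[2.0, 0.0], [0.0, -1.0]]  # indefinite, nonsingular
b = [2.0, 1.0]
print(gmres(A, b))  # approximately [1.0, -1.0]
```

A production GMRES (PETSc's included) updates the least-squares problem with Givens rotations instead of forming normal equations; the sketch trades that robustness for brevity.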
parameter (MAT_SHIFT_NONE=0)
parameter (MAT_SHIFT_NONZERO=1)
parameter (MAT_SHIFT_POSITIVE_DEFINITE=2)
parameter (MAT_SHIFT_INBLOCKS=3)
Thank you,
====
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
-----Original Message-----
Victor,
I am wondering if you have any plan to implement the Induced Dimension
Reduction method (IDR(s), by Sonneveld and van Gijzen) in PETSc.
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
From: petsc
Barry,
It is great to hear about the new release of PETSc. I am wondering if you have any
performance numbers for the new triangular solve. Is there any
document/publication?
Thanks,
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
…if you have any
plan to introduce this version in the forthcoming PETSc release. I would
appreciate your comments on this issue.
Thanks in advance,
====
Keita Teranishi
Scientific Library Group
Cray, Inc.
keita at cray.com
Hi Barry,
I'd like to grab the dev version with new direct solver interface.
Just a question: will the packaging (bmake and configure script) of the
final release be similar to the dev version?
Thanks,
====
Keita Teranishi
Math Software Group
Cray, Inc.
===
Keita Teranishi, Ph.D.
Math Software Team
Cray Inc.
===