> On Jan 8, 2017, at 6:22 PM, Manuel Valera wrote:
>
> Ok many thanks Barry,
>
> For the cpu:sockets binding I get an ugly error:
You need to find out what binding option to use for your MPI. Sadly it is
different for different MPI implementations and can change over time. The
material in ...
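For example, the usual spellings for the two most common MPIs would be roughly the following (hedged: the exact flag depends on which MPI and version is installed on this machine, so check its mpiexec man page):

  # Open MPI style
  make streams NPMAX=4 MPI_BINDING="--bind-to socket"
  # MPICH (Hydra) style
  make streams NPMAX=4 MPI_BINDING="-bind-to socket"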
Ok many thanks Barry,
For the cpu:sockets binding I get an ugly error:
[valera@ocean petsc]$ make streams NPMAX=4 MPI_BINDING="--binding cpu:sockets"
cd src/benchmarks/streams; /usr/bin/gmake --no-print-directory PETSC_DIR=/home/valera/petsc PETSC_ARCH=arch-linux2-c-debug streams
/home/valera ...
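One quick way to find out which MPI the mpiexec on the path actually is (and therefore which binding syntax it expects) is to ask it directly; both Open MPI and MPICH understand --version:

  mpiexec --version   # Open MPI reports "mpiexec (OpenRTE) ...", MPICH reports "HYDRA build details ..."
  which mpiexec       # confirm it is the same MPI PETSc was configured against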
Manuel,

Ok, there are two (actually three) distinct things you need to deal with to
get any kind of performance out of this machine.

0) When running on the machine you cannot share it with other people's jobs,
or you will get timings all over the place, so run streams and benchmarks of
you ...
Ok, I just did the streams and log_summary tests; I'm attaching the output
for each run, with NPMAX=4 and NPMAX=32, plus -log_summary runs with
-pc_type hypre and without it, on 1 and 2 cores, all of this with
debugging turned off.

The matrix is 200,000 x 200,000, full curvilinear 3d meshes, non-hy ...
We need to see the -log_summary output with hypre on 1 and 2 processes (with
debugging turned off), and we also need to see the output from

  make streams NPMAX=4

run in the PETSc directory.
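A sketch of the requested runs, with ./yourprog standing in for Manuel's application and its usual arguments (placeholder name; -log_summary was the option name in the PETSc of that era):

  mpiexec -n 1 ./yourprog <usual args> -pc_type hypre -log_summary
  mpiexec -n 2 ./yourprog <usual args> -pc_type hypre -log_summary
  cd $PETSC_DIR && make streams NPMAX=4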
> On Jan 7, 2017, at 7:38 PM, Manuel Valera wrote:
>
> Ok great, I tried those command line args and this is ...
I suggest you check that the code is valgrind clean.
See the PETSc FAQ page for details of how to do this.

Thanks,
Dave
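The PETSc FAQ recipe Dave refers to is roughly the following (flags quoted from memory, so verify against the FAQ; valgrind's %p expands to the process id, giving one log file per rank, and ./yourprog is a placeholder):

  mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 \
      --log-file=valgrind.log.%p ./yourprog <usual args> -malloc off
  # -malloc off disables PETSc's own malloc wrapper so valgrind sees the raw allocations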
On Sun, 8 Jan 2017 at 04:57, Mark Adams wrote:
> This error seems to be coming from the computation of the extreme
> eigenvalues of the matrix for smoothing in smoothed aggregation ...
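One low-risk way to see what eigenvalue estimates the smoothed-aggregation (GAMG) setup actually produced, and which smoother they feed, is to have the solver print its full configuration and true residuals; these are standard PETSc options rather than anything specific to this error, and ./yourprog is again a placeholder:

  mpiexec -n 2 ./yourprog <usual args> -pc_type gamg -ksp_view -ksp_monitor_true_residual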