Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Smith, Barry F. via petsc-users
> On Jun 2, 2019, at 11:52 PM, Sajid Ali wrote: > @Barry: Perhaps config/BuildSystem/config/packages/fftw.py should use the --host option when configure for PETSc is run with --with-batch=1. > If anyone here knows what --host must be set to for KNL, I'd appreciate it. We've tried …
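For context, the kind of cross-compile being discussed would pass --host to FFTW's own configure along these lines. This is only a sketch, not a value confirmed in the thread: KNL is x86_64, so a generic x86_64 triple is a plausible guess, and the compiler wrappers (cc) and the -xMIC-AVX512 flag are assumptions that depend on the site's Cray/Intel programming environment.

    # Sketch: building FFTW for KNL from a login node with an explicit --host.
    # Verify the host triple, compiler wrappers and flags for the local setup.
    ./configure --host=x86_64-unknown-linux-gnu \
                CC=cc MPICC=cc CFLAGS="-xMIC-AVX512" \
                --enable-mpi --prefix=$HOME/software/fftw-3.3.8
    make -j8 && make install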

Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
@Barry: Perhaps config/BuildSystem/config/packages/fftw.py should use the --host option when configure for PETSc is run with --with-batch=1. If anyone here knows what --host must be set to for KNL, I'd appreciate it. PS: I know that Intel-MKL-FFT provides the FFTW API. If I'd want to try with this, is …
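A hedged sketch of what pointing PETSc at MKL's FFTW3 wrappers could look like, assuming PETSc's generic --with-fftw-include / --with-fftw-lib options and MKL's usual wrapper-header location; the exact MKL link line, and whether the MPI (cluster FFT) wrappers are available prebuilt, are assumptions that would need checking.

    # Sketch: use MKL's FFTW3 compatibility headers/libraries in place of FFTW.
    # $MKLROOT/include/fftw is where MKL ships its fftw3.h wrapper header; the
    # link line below is a common sequential-MKL combination, not a verified one.
    ./configure --with-batch=1 \
                --with-fftw-include=$MKLROOT/include/fftw \
                --with-fftw-lib="-L$MKLROOT/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core"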

Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
Hi Barry, fftw-configure fails on the login node. I'm attaching the error message at the bottom of this email. I tried requesting 1 hour of time on a compute node to compile fftw on it, but for some reason 1 hour is not enough to compile fftw, hence I was forced to use cray-fftw-3.3.8.1, for which I had …
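For reference, using the system Cray FFTW instead of --download-fftw might look like the sketch below. Whether the module exports FFTW_DIR (versus FFTW_ROOT or only pkg-config paths) depends on the cray-fftw version, so the variable name is an assumption to check with "module show cray-fftw".

    # Sketch: point PETSc's configure at the cray-fftw module instead of
    # building FFTW from source on the login node.
    module load cray-fftw/3.3.8.1
    ./configure --with-batch=1 --with-fftw-dir=$FFTW_DIR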

Re: [petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Smith, Barry F. via petsc-users
I assume the example runs fine with --download-fftw on Theta? Is cray-fftw-3.3.8.1 compatible with the MPI you are using? Perhaps cray-fftw-3.3.8.1 assumes extra padding in the array lengths than standard fftw. You could add some extra length to the arrays allocated by PETSc …
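A sketch of the comparison being asked about, i.e. rebuilding with PETSc's own FFTW to isolate whether cray-fftw is the culprit; the PETSC_ARCH name is a placeholder and only --download-fftw is the point.

    # Sketch: configure a second PETSC_ARCH that downloads and builds FFTW,
    # then rerun the example against it to compare with the cray-fftw build.
    ./configure PETSC_ARCH=arch-theta-download-fftw --with-batch=1 --download-fftw
    make PETSC_DIR=$PWD PETSC_ARCH=arch-theta-download-fftw all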

[petsc-users] MPI-FFTW example crashes

2019-06-02 Thread Sajid Ali via petsc-users
Hi PETSc-developers, I'm trying to run ex143 on a cluster (alcf-theta). I compiled PETSc on the login node with cray-fftw-3.3.8.1 and there was no error in either configure or make. When I try running ex143 with 1 MPI rank on a compute node, everything works fine, but with 2 MPI ranks it crashes due …
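For concreteness, the working and failing runs described here would look roughly like this on Theta (aprun is the launcher there; the binary path and any runtime options are assumed):

    # Sketch: 1 rank works, 2 ranks crash, as reported.
    aprun -n 1 ./ex143
    aprun -n 2 ./ex143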