Daniel,

    The way we have things set up currently is kind of stupid. We have two 
cuda.py files: one sets up the CUDA compiler, and the other sets up PETSc to 
use CUDA and cusp/thrust. The first one handles the --with-cudac="" option, 
which makes sense, but the second one handles the --with-cuda-arch option, 
which does not make sense; essentially the configureType() method from the 
second cuda.py should be moved to the first cuda.py and used there. Then the 
second cuda.py should be renamed to something else; perhaps the way to tell 
PETSc to use everything could be --with-cuda-cusp-thrust or some other 
better option.
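
   To make that concrete, here is a rough sketch of what the moved method 
could look like on the compiler side. This is only a sketch, not working 
code: the setupHelp()/addArgument() pattern and the setCompilers calls 
follow the usual BuildSystem conventions, but the exact hooks would need 
checking against the real classes.

    # hypothetical sketch, not actual PETSc code
    def setupHelp(self, help):
      import nargs
      help.addArgument('CUDA', '-with-cuda-arch=<arch>',
                       nargs.Arg(None, None, 'Target GPU architecture, e.g. sm_20'))
      return

    def configureType(self):
      # Pick the nvcc -arch flag; when the user gave no --with-cuda-arch,
      # fall back based on precision (double needs compute capability >= 1.3)
      if 'with-cuda-arch' in self.argDB:
        arch = self.argDB['with-cuda-arch']
      elif self.argDB['with-precision'] == 'double':
        arch = 'sm_13'
      else:
        arch = 'sm_10'
      self.setCompilers.pushLanguage('CUDA')
      self.setCompilers.addCompilerFlag('-arch='+arch)
      self.setCompilers.popLanguage()
      return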

   Anyways, you or anyone else can try moving configureType() from 
config/PETSc/packages/cuda.py to config/BuildSystem/config/compile/CUDA.py 
and reworking it to use the precision-provider machinery, as is done in 
config/BuildSystem/config/packages/BlasLapack.py. Then you would just 
provide --with-cudac and --with-cuda-arch and not use --with-cuda.
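
   With that rework done, a configure line like the one below (reusing the 
values from your script) is what I would expect to suffice; this assumes the 
rework keeps the current option names:

    ./config/configure.py \
      --with-cudac="nvcc -m64" \
      --with-cuda-arch=sm_20 \
      --with-precision=double \
      --with-clanguage=c \
      PETSC_ARCH=structgrid_cuda

   That is, no --with-cuda, and no cusp/thrust options unless you actually 
want them.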



   Barry




On Sep 12, 2011, at 4:00 PM, Daniel Lowell wrote:

> Is there a way to configure PETSc to build with nvcc CUDA support, but 
> without the CUSP flags being set? I'd like to set up PETSc independent of any 
> CUSP flags or MACROs.
> 
> By default, any configuration requires that you specify CUSP and THRUST 
> support.
> 
> Here is my current configuration file:
> 
> CFLG="--CFLAGS=-O2 -fopenmp -msse2 -mfpmath=sse -g -ggdb"
> FFLG="--FFLAGS=-O2 -fopenmp -g"
> WCC="--with-cc=gcc"
> MPI_DIR="--with-mpi-dir=$MPICH2_HOME"
> WBL="--with-blas-lapack-dir=$INTEL_MKL"
> LAD="--download-c-blas-lapack=yes --download-f-blas-lapack=yes"
> HYP="--download-hypre=1"
> HDF="--with-hdf5=1 --download-hdf5=1"
> CUD="--with-cuda-dir=/soft/cuda-4.0/cuda"
> USP="--with-thrust-dir=/soft/cuda-4.0/cuda/include --with-cusp-dir=/soft/cuda-4.0/cuda/include"
> 
> 
> ./config/configure.py $MPI_DIR $LAD $CFLG $FFLG $HYP $HDF $CUD \
> --with-cudac="nvcc -m64" \
> --with-precision=double \
> --with-clanguage=c \
> --with-cuda-arch=sm_20 \
> PETSC_ARCH=structgrid_cuda
> 
> 
> 
> Note that my USP variable is not included in the configure invocation.
> 
> 
> 
> Thanks,
> 
> Daniel Lowell

