I think I solved this issue. It popped up when I updated to CUDA 11. I had to
patch three files: dnn_fwd.c, dnn_gi.c and dnn_gw.c. I added the config option
gcc.cxxflags=-DCUDAVERSION=11 (see the snippet below for how I set it), then
added version checks in the code. It could use some cleaning up. If someone
wants, they can take this fix and check it in to the repo.
cheers, Paul


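For reference, this is how I set that flag. Either form should work; the
script name below is just a placeholder:

    # one-off, on the command line
    THEANO_FLAGS='gcc.cxxflags=-DCUDAVERSION=11' python your_script.py

    # or permanently, in ~/.theanorc
    [gcc]
    cxxflags = -DCUDAVERSION=11
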
In dnn_fwd.c, I put in this check:

        #if CUDAVERSION > 10
           // PMB: cuDNN 8 removed cudnnGetConvolutionForwardAlgorithm and its
           // preference enum; keep a stand-in define in case the constant is
           // referenced elsewhere, and use the _v7 heuristic instead.
           #define CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT 2
           int ret_alg_cnt;
           cudnnConvolutionFwdAlgoPerf_t ta;

           // Request only the best algorithm; _v7 returns results sorted
           // best-first.
           err = cudnnGetConvolutionForwardAlgorithm_v7(
             params->handle, APPLY_SPECIFIC(input), APPLY_SPECIFIC(kerns),
             desc, APPLY_SPECIFIC(output),
             1, &ret_alg_cnt, &ta);
           algo = ta.algo;
        #else
           err = cudnnGetConvolutionForwardAlgorithm(
             params->handle, APPLY_SPECIFIC(input), APPLY_SPECIFIC(kerns),
             desc, APPLY_SPECIFIC(output),
             CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT, maxfree, &algo);
        #endif

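One caveat with my patch: the old call passed maxfree as a workspace size
limit, while the _v7 heuristic just returns candidates ordered by expected
speed, so the algorithm it picks could in principle need more workspace than
is available. If that turns out to matter, here is a rough, untested sketch of
how the same spot in dnn_fwd.c could filter the _v7 results by memory instead
(NUM_REQUESTED is a made-up name):

        #if CUDAVERSION > 10
           // Sketch only: ask for several candidates and keep the fastest one
           // whose workspace requirement fits into maxfree.
           #define NUM_REQUESTED 8
           int ret_alg_cnt;
           cudnnConvolutionFwdAlgoPerf_t perf[NUM_REQUESTED];

           err = cudnnGetConvolutionForwardAlgorithm_v7(
             params->handle, APPLY_SPECIFIC(input), APPLY_SPECIFIC(kerns),
             desc, APPLY_SPECIFIC(output),
             NUM_REQUESTED, &ret_alg_cnt, perf);

           if (err == CUDNN_STATUS_SUCCESS && ret_alg_cnt > 0) {
             // Results come back best-first; fall back to the top result if
             // nothing fits into the available workspace.
             algo = perf[0].algo;
             for (int i = 0; i < ret_alg_cnt; i++) {
               if (perf[i].status == CUDNN_STATUS_SUCCESS && perf[i].memory <= maxfree) {
                 algo = perf[i].algo;
                 break;
               }
             }
           }
        #endif

The same idea carries over to the backward-data and backward-filter calls
below, with the corresponding *AlgoPerf_t structs.
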
In dnn_gi.c, I put in this check:

        #if CUDAVERSION > 10
           // PMB: same idea as in dnn_fwd.c, but for the backward-data pass.
           #define CUDNN_CONVOLUTION_BWD_DATA_SPECIFY_WORKSPACE_LIMIT 2
           int ret_alg_cnt;
           cudnnConvolutionBwdDataAlgoPerf_t at;

           err = cudnnGetConvolutionBackwardDataAlgorithm_v7(
             params->handle, APPLY_SPECIFIC(kerns), APPLY_SPECIFIC(output),
             desc, APPLY_SPECIFIC(input),
             1, &ret_alg_cnt, &at);
           algo = at.algo;
        #else
           err = cudnnGetConvolutionBackwardDataAlgorithm(
             params->handle, APPLY_SPECIFIC(kerns), APPLY_SPECIFIC(output),
             desc, APPLY_SPECIFIC(input),
             CUDNN_CONVOLUTION_BWD_DATA_SPECIFY_WORKSPACE_LIMIT, maxfree, &algo);
        #endif

In dnn_gw.c, I put in this check:

        #if CUDAVERSION > 10
           // PMB: same idea again, for the backward-filter (weight gradient) pass.
           #define CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT 2
           int ret_alg_cnt;
           cudnnConvolutionBwdFilterAlgoPerf_t ta;

           err = cudnnGetConvolutionBackwardFilterAlgorithm_v7(
             params->handle, APPLY_SPECIFIC(input), APPLY_SPECIFIC(output),
             desc, APPLY_SPECIFIC(kerns),
             1, &ret_alg_cnt, &ta);
           algo = ta.algo;
        #else
           err = cudnnGetConvolutionBackwardFilterAlgorithm(
             params->handle, APPLY_SPECIFIC(input), APPLY_SPECIFIC(output),
             desc, APPLY_SPECIFIC(kerns),
             CUDNN_CONVOLUTION_BWD_FILTER_SPECIFY_WORKSPACE_LIMIT, maxfree, &algo);
        #endif

On Thursday, September 1, 2022 at 8:37:48 AM UTC+2 drb3...@gmail.com wrote:

>
> My installed packages are: CUDA 11.7, cuDNN 8.5, Theano 1.0.5.
>
> When I run the convolution network in the test file of Michael Nielsen's
> tutorial on Deep Learning
> <https://github.com/MichalDanielDobrzanski/DeepLearningPython>:
>     import network3
>     from network3 import Network, ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer
>     training_data, validation_data, test_data = network3.load_data_shared()
>     mini_batch_size = 10
>     net = Network([
>             ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
>                           filter_shape=(20, 1, 5, 5),
>                           poolsize=(2, 2)),
>             FullyConnectedLayer(n_in=20*12*12, n_out=100),
>             SoftmaxLayer(n_in=100, n_out=10)],
>             mini_batch_size)
>     net.SGD(training_data, 60, mini_batch_size, 0.1,
>             validation_data, test_data)
>
> The problem occurred during compilation, with the errors below:
>     dnn_fwd.c:326:60: error: invalid conversion from 'size_t {aka long
>         unsigned int}' to 'int*' [-fpermissive]
>     dnn_fwd.c:326:60: error: cannot convert 'cudnnConvolutionFwdAlgo_t*' to
>         'cudnnConvolutionFwdAlgoPerf_t* {aka cudnnConvolutionFwdAlgoPerfStruct*}'
>         for argument '8' to 'cudnnStatus_t cudnnGetConvolutionForwardAlgorithm_v7(
>         cudnnHandle_t, cudnnTensorDescriptor_t, cudnnFilterDescriptor_t,
>         cudnnConvolutionDescriptor_t, cudnnTensorDescriptor_t, int, int*,
>         cudnnConvolutionFwdAlgoPerf_t*)'
>
> I understand that a package incompatibility is probably the cause; however,
> I was wondering whether there is any way to resolve this by modifying the
> module or Theano's config file, without downgrading the packages?
>
>
