That is correct, and we would be happy to merge a PR to change that.

Jörn

On Mon, Mar 6, 2017 at 10:49 AM, Damiano Porta <damianopo...@gmail.com>
wrote:

> Jörn, I think it is really important. For the moment we should allow more
> threads for perceptron training. If I remember correctly, it is currently
> only supported for the MAXENT classifier, right?
>
> 2017-03-06 10:17 GMT+01:00 Joern Kottmann <kottm...@gmail.com>:
>
> > Hello,
> >
> > no, we don't support CUDA. At some point we will probably add support for
> > one of the deep learning packages, and those usually use CUDA.
> >
> > Jörn
> >
> > On Sat, Mar 4, 2017 at 5:17 PM, Damiano Porta <damianopo...@gmail.com>
> > wrote:
> >
> > > Hello everybody,
> > >
> > > does OpenNLP support CUDA parallel computing?
> > >
> > > Damiano
> > >
> >
>
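
For reference, the thread count is passed to OpenNLP trainers through
TrainingParameters. Below is a minimal sketch of how that configuration
typically looks (constant names as in opennlp.tools.util.TrainingParameters;
exact signatures may vary slightly by version), illustrating the limitation
discussed above:

import opennlp.tools.util.TrainingParameters;

public class ThreadsParamSketch {

    public static void main(String[] args) {
        // Start from OpenNLP's default training parameters.
        TrainingParameters params = TrainingParameters.defaultParams();

        // Multi-threaded training: per this thread, the "Threads" parameter
        // is only honored by the MAXENT (GIS) trainer.
        params.put(TrainingParameters.ALGORITHM_PARAM, "MAXENT");
        params.put(TrainingParameters.THREADS_PARAM, "4");

        // Switching to the perceptron would leave "Threads" unused, i.e.
        // training stays single-threaded -- the limitation the proposed PR
        // would address.
        // params.put(TrainingParameters.ALGORITHM_PARAM, "PERCEPTRON");

        // The params object would then be passed to a component trainer,
        // e.g. NameFinderME.train(...) or DocumentCategorizerME.train(...).
    }
}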
