Dear Marcin and Moses community,
Thanks for the tips!
Yeah, g2.8xlarge is painfully expensive... Training on separate instances
sounds more reasonable. Now I have to explain to the devs why I need 2
instances ;P
Regards,
Liling
On Tue, Apr 4, 2017 at 4:25 PM, liling tan wrote:
Why would you train your SMT model on a GPU instance? That's far too
expensive. Train on a GPU-less instance, then, when done, attach the
storage to an instance that has a GPU. That's what I did. For deployment,
15GB might be enough if you make your SMT models small enough.
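For what it's worth, that volume swap can be scripted. Below is a minimal
sketch using boto3 (the AWS SDK for Python); the region, volume ID, and
instance IDs are hypothetical placeholders, and it assumes the trained
models sit on a separate EBS data volume:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

VOLUME_ID = "vol-0123456789abcdef0"    # hypothetical EBS volume with the models
CPU_INSTANCE = "i-0aaaaaaaaaaaaaaaaa"  # hypothetical GPU-less training instance
GPU_INSTANCE = "i-0bbbbbbbbbbbbbbbbb"  # hypothetical GPU instance

# Detach the data volume from the training instance and wait until it is free.
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=CPU_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Attach the same volume to the GPU instance, then mount it from the guest OS.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=GPU_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])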
On 04.04.2017 at 10:25, liling tan wrote:
Dear Marcin and Moses community,
Are you running on g2.8xlarge on AWS?
I think I went with the cheap g2.2xlarge, and 15GB RAM is a little too low
for MGIZA++; it's taking forever... I think I've got to create a new,
larger instance.
Regards,
Liling
Hi Liling,
I did both on AWS for my WMT2016 en-ru/ru-en systems. No problems with
that. What problems did you run into?
On 04.04.2017 at 09:30, liling tan wrote:
Dear Moses community,
Amittai has written a nice package and setup guide for Moses on AWS. But to
do some NMT on GPU, the instances usually don't have enough RAM for
Moses.
Does anyone have experience deploying Moses on GPU instances on AWS or
any other cloud servers?
Regards,
Liling