Why would you train your SMT model on a GPU instance? That's far too 
expensive. Train on a GPU-less instance, then when you're done attach the 
data volume to an instance that has a GPU. That's what I did. For 
deployment, 15GB might be enough if you make your SMT models small enough.
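If it helps, here's a rough (untested) sketch of that hand-off with boto3,
assuming "attach" means moving an EBS volume that holds your corpora and
models; the region, volume ID, instance ID, and device name are all
placeholders for your own resources:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # your region here

VOL = "vol-0123456789abcdef0"         # placeholder: volume with data/models
GPU_INSTANCE = "i-0123456789abcdef0"  # placeholder: the GPU instance

# Detach the data volume from the cheap CPU training instance...
ec2.detach_volume(VolumeId=VOL)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOL])

# ...and re-attach it to the GPU instance for the GPU-dependent steps.
ec2.attach_volume(VolumeId=VOL, InstanceId=GPU_INSTANCE, Device="/dev/sdf")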

On 04.04.2017 at 10:25, liling tan wrote:
> Dear Marcin and Moses community,
>
> Are you running on g2.8xlarge on AWS?
>
> I think I went with the cheap g2.2xlarge, and 15GB RAM is a little too 
> low for MGIZA++; it's taking forever... I think I've got to create a new, 
> larger instance.
>
> Regards,
> Liling
>


_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
