That would make for very cool student projects.
Also, that video is acing it; even the voice-over is synthetic :)

On 23.06.2015 00:27, Ondrej Bojar wrote:
> ...and I wouldn't be surprised to find Moses also behind this Java-to-C# 
> automatic translation:
>
> https://www.youtube.com/watch?v=CHDDNnRm-g8
>
> O.
>
> ----- Original Message -----
>> From: "Marcin Junczys-Dowmunt" <junc...@amu.edu.pl>
>> To: moses-support@mit.edu
>> Sent: Friday, 19 June, 2015 19:21:45
>> Subject: Re: [Moses-support] Major bug found in Moses
>> On that interesting idea that Moses should be naturally good at
>> translating things, just some general considerations.
>>
>> Since some said this thread has educational value, I would like to share
>> something that might not be obvious from the SMT-biased posts here.
>> Moses is also the _leading_ tool for automatic grammatical error
>> correction (GEC) right now. The first- and third-placed systems of the
>> CoNLL 2014 shared task were based on Moses. By now I have results that
>> surpass the CoNLL results by far, obtained by adding some specialized
>> features to Moses (which, thanks to Hieu, is very easy).
>>
>> It even gets good results for GEC when you do crazy things like
>> inverting the TM (so it should actually make the input worse), provided
>> you tune on the correct metric and for the correct task. The interaction
>> of all the other features after tuning makes that possible.
>>
>> So, if anything, Moses is just a very flexible text-rewriting tool.
>> Tuning (and data) turns it into a translator, a GEC tool, a POS tagger,
>> a chunker, a semantic tagger, etc.
>>
>> On 19.06.2015 18:40, Lane Schwartz wrote:
>>> On Fri, Jun 19, 2015 at 11:28 AM, Read, James C <jcr...@essex.ac.uk> wrote:
>>>
>>>      What I take issue with is the en-masse denial that there is a
>>>      problem with the system if it behaves in such a way with no LM +
>>>      no pruning and/or tuning.
>>>
>>>
>>> There is no mass denial taking place.
>>>
>>> Regardless of whether or not you tune, the decoder will do its best to
>>> find translations with the highest model score. That is the expected
>>> behavior.
>>>
>>> What I have tried to tell you, and what other people have tried to
>>> tell you, is that translations with high model scores are not
>>> necessarily good translations.
>>>
>>> We all want our models to be such that high model scores correspond to
>>> good translations, and low model scores correspond to bad
>>> translations. But unfortunately, our models do not innately have this
>>> characteristic. We all know this. We also know a good way to deal with
>>> this shortcoming, namely tuning. Tuning is the process by which we
>>> attempt to ensure that high model scores correspond to high quality
>>> translations, and that low model scores correspond to low quality
>>> translations.
>>>
>>> If you can design models that naturally correspond with translation
>>> quality without tuning, that's great. If you can do that, you've got a
>>> great shot at winning a Best Paper award at ACL.
>>>
>>> In the meantime, you may want to consider an apology for your rude
>>> behavior and unprofessional attitude.
>>>
>>> Goodbye.
>>> Lane
>>>
>>>
>>>
>>> _______________________________________________
>>> Moses-support mailing list
>>> Moses-support@mit.edu
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
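As a footnote to Lane's tuning point above: the linear model at work can be sketched in a few lines. The decoder's score for a candidate is a weighted sum of feature values, and tuning searches for feature weights under which higher scores track translation quality. This is only a toy illustration, not Moses internals; the feature names and numbers below are invented, and real tuning uses algorithms like MERT or kb-MIRA over n-best lists rather than hand-picked weights.

```python
# Toy sketch of linear-model scoring and what tuning changes.
# Feature names ("tm", "lm", "length") and all values are invented.

def model_score(weights, features):
    """Decoder-style score: weighted sum of feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Two candidate translations with made-up feature values.
candidates = {
    "good translation": {"tm": -2.0, "lm": -1.0, "length": 5},
    "bad translation":  {"tm": -0.5, "lm": -6.0, "length": 2},
}

# Untuned weights: the highest-scoring candidate is the bad one,
# exactly the "high model score != good translation" situation.
untuned = {"tm": 1.0, "lm": 0.1, "length": 0.1}
best_untuned = max(candidates, key=lambda c: model_score(untuned, candidates[c]))

# "Tuned" weights: chosen so that the known-good candidate wins.
# Tuning automates this search against a quality metric.
tuned = {"tm": 0.5, "lm": 1.0, "length": 0.1}
best_tuned = max(candidates, key=lambda c: model_score(tuned, candidates[c]))

print(best_untuned)  # bad translation
print(best_tuned)    # good translation
```

The same model with the same features makes the opposite ranking decision purely because the weights changed; that is all "high model scores correspond to high quality" amounts to in a linear model.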
