Re: [Moses-support] Issue with run moses with PhraseDictionaryCompact is not registered

2016-11-24 Thread Hieu Hoang

hiya

the build.log you sent is uninformative. You should look at it yourself.

If the error says Boost is not installed, then you should install it 
before compiling Moses. There are instructions for Fedora here:


http://www.statmt.org/moses/?n=Development.GetStarted
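
As a rough sketch (the package names and paths are from memory, not checked
against your image; adjust them to your setup), installing Boost from the
Fedora packages and rebuilding usually looks something like this:

  sudo dnf install boost boost-devel     # system-wide Boost (use yum on older Fedora)
  cd ~/mosesdecoder
  ./bjam --with-cmph=$HOME/workspace/cmph-2.0 -j4
  # only needed if Boost lives in a non-standard prefix:
  # ./bjam --with-boost=$HOME/workspace/boost_1_57_0 --with-cmph=$HOME/workspace/cmph-2.0 -j4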



On 24/11/2016 12:39, ZELENKA - Jiří Šrotíř wrote:


Dear all,

First of all, I would like to say that I am testing Moses through the 
prepared Fedora image for VirtualBox.


At the end of the procedure I got the same error message as in this post: 
https://www.mail-archive.com/moses-support@mit.edu/msg14751.html


1) Following the answer from Hieu Hoang, I tried to compile Moses with
./bjam --with-cmph=/home/hieu/workspace/cmph-2.0/
2) I got the next error message: “Boost does not seem to be installed or G++ 
is confused”.
3) So I added a path to Boost too:
./bjam --with-boost=/home/hieu/workspace/boost_1_57_0 --with-cmph=/home/hieu/workspace/cmph-2.0/
4) The compile process started but the build failed. Please find attached 
the build.log.gz and the error log from the compile process.

In case it is needed, I am sending moses.ini attached too.
I will be very grateful for any support because I can’t wait to see how it 
works :)

Thank you in advance
Jiri Srotir


___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Moses-support Digest, Vol 121, Issue 39

2016-11-24 Thread Terence Lewis

On 24/11/2016 11:03, moses-support-requ...@mit.edu wrote:

> Imagine a translator using MT who has somehow translated the
> sentence before and just wants the exact translation. A TM would
> solve the problem, and Moses could surely emulate the TM, but NMT tends
> to get overly creative and produces something else.
>
> Then just use a TM for this. Fast and simple.


Barry,

We've been doing that for years and have now integrated our MT system 
with the memoQ TM environment.


Quick & simples.

Cheers,

Terence

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] NMT vs Moses

2016-11-24 Thread Barry Haddow

Hi Nat

> Imagine a translator using MT who has somehow translated
> the sentence before and just wants the exact translation. A TM would
> solve the problem, and Moses could surely emulate the TM, but NMT tends
> to get overly creative and produces something else.

Then just use a TM for this. Fast and simple.

You can probably create a seq2seq model which will do the copying when 
appropriate (see e.g. 
https://www.aclweb.org/anthology/P/P16/P16-1154.pdf), but in the 
scenario you describe I think there is really no need.
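
For the exact-repeat case, even a trivial lookup over the parallel training
data will do. A minimal sketch (the file names train.src, train.trg and
input.src are placeholders, and the data is assumed to be one sentence per
line with no tabs):

  paste train.src train.trg > tm.tsv      # tab-separated source/target segment pairs
  awk -F'\t' '
      NR == FNR { tm[$1] = $2; next }                          # pass 1: index the training pairs
      { out = ($0 in tm) ? tm[$0] : "<NO_TM_HIT>"; print out } # pass 2: exact match or marker
  ' tm.tsv input.src > output.trg

Lines marked <NO_TM_HIT> are the ones you would still send to Moses or the
NMT system.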


cheers - Barry

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] NMT vs Moses

2016-11-24 Thread Marcin Junczys-Dowmunt

Hi,
the short answer to your problem would be that the typical encoder-decoder 
models are not really meant to do what you want them to do. There is, 
however, interesting new work on arXiv:


https://arxiv.org/abs/1611.01874

which could solve exactly your problem. However, I am always wary of 
results from that particular group of researchers. It seems reproducing 
their results for anything but Chinese does not really work; also, their 
training sets are really small, so it is not clear what the effects really 
are. Maybe those models are just dealing better with smaller data.

Best,
Marcin






___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


[Moses-support] NMT vs Moses

2016-11-24 Thread Nat Gillin
Dear Moses Community,

This seems to be a prickly topic to discuss, but my experiments on a different
kind of data set than WMT or WAT (Workshop on Asian Translation) have not
been able to achieve the stellar scores that recent advances in MT
have been reporting.

Using a state-of-the-art encoder-attention-decoder framework, just by running
things like lamtram or TensorFlow, I'm unable to beat Moses' scores on
sentences that appear in both the train and test data.

Imagine a translator using MT who has somehow translated the
sentence before and just wants the exact translation. A TM would solve the
problem, and Moses could surely emulate the TM, but NMT tends to get overly
creative and produces something else. Although it is consistent in giving
the same output for the same sentence, it is just unable to regurgitate the
sentence that was seen in the training data. In that respect, Moses does it
pretty well.

For sentences that are in the test data but not in the training data, NMT
does about the same as, or sometimes better than, Moses.

So the question is: has anyone encountered similar problems? Is the solution
simply to do a lookup in the training set before translating? Or a
system/output chooser to rerank outputs?

Are there any other ways to resolve such a problem? What could have
happened such that NMT is not "remembering"? (Maybe it needs some
memberberries)

Any tips/hints/discussion on this is much appreciated.

Regards,
Nat
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Moses SMT Evaluation

2016-11-24 Thread Maxim Khalilov
Try http://asiya.cs.upc.edu/demo/asiya_online.php
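
If an offline check is also an option, BLEU can be computed with the
multi-bleu.perl script that ships with Moses. A minimal sketch, assuming a
Moses checkout in ~/mosesdecoder and tokenised, one-sentence-per-line files
(the file names are placeholders):

  ~/mosesdecoder/scripts/generic/multi-bleu.perl reference.tok.trg < system_output.tok.trg
  # prints corpus-level BLEU, the 1-4-gram precisions and the brevity penalty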

On Thu, Nov 24, 2016 at 8:41 AM, Emmanuel Dennis wrote:

> Is there a way to evaluate an SMT system online using BLEU?
>
>
> I will appreciate your feedback.
>


-- 

Maxim Khalilov

Mob.: +31 615 602 017

Skype: Maxim Khalilov (desperbcn)

Twitter: twitter.com/maximkhalilov

LinkedIn: https://nl.linkedin.com/in/maximkhalilov
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support