Hello,
You could also have a look at force alignment using mgiza++ for aligning
additional documents without retraining (if you have your previous
training models saved).
Regards,
Amit
On Wednesday, January 12, 2011, Sébastien Druon wrote:
> Hello,
> Is it possible to train moses incrementally?
> As
Since it is a binary phrase table with 'on-demand' loading, is that why it
takes the same time irrespective of phrase table size?
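Roughly, the picture I have of 'on-demand' loading (just a toy Python sketch
of the idea, not the actual Moses binary phrase-table code) is that only the
entries whose source side occurs in the input are ever read, so lookup cost
follows the input, not the file size:

def build_index(table_path):
    # One pass over a sorted text phrase table ("src ||| tgt ||| scores"):
    # remember the byte offset where each source phrase first appears.
    index = {}
    offset = 0
    with open(table_path, "rb") as f:
        for line in f:
            src = line.split(b" ||| ", 1)[0]
            index.setdefault(src, offset)
            offset += len(line)
    return index

def lookup(table_path, index, source_phrase):
    # Seek straight to the block for one source phrase and read only that.
    key = source_phrase.encode("utf-8")
    if key not in index:
        return []
    options = []
    with open(table_path, "rb") as f:
        f.seek(index[key])
        for line in f:
            src, _, rest = line.partition(b" ||| ")
            if src != key:  # table is sorted, so stop at the next phrase
                break
            options.append(rest.decode("utf-8").rstrip("\n"))
    return options

With that picture, decoding time depends on how many distinct source phrases
the input produces, not on how big the table file is.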
Amit
On Tue, Nov 23, 2010 at 12:23 PM, Amit Abbi wrote:
> Hi,
>
> When trying to decode using lattice input on an unfiltered binarized phr
Hi,
When trying to decode using lattice input on an unfiltered binarized phrase
table (~692 MB), decoding took the following times:
Single path for each sentence: ~1 hr 30 minutes
Multiple paths for each sentence (~5000 on average): ~1 hr 35 minutes
However, on using a filtered binarized phrase table
Thanks Chris, Hieu.
> These should be identical. However, it is possible that the beam
> search used by the decoder might run slightly differently in the two
> cases, leading to different results. With no pruning and unlimited
> stack sizes, the results should not be different (unless you have
> fea
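For my own understanding, here is a toy Python sketch (invented scores,
nothing to do with Moses internals) of how histogram pruning on a stack can
discard the hypothesis that would have won once the rest of its score is
added:

# Two decoding steps; each hypothesis picks one word per step.
STEP_CHOICES = [
    {"a": 0.0, "b": -1.0},   # step 1: "a" looks better than "b" locally
    {"x": 0.0, "y": 0.0},    # step 2: both options score the same
]
# ...but hypotheses that picked "b" at step 1 gain a lot at the end,
# like a future contribution that the pruning step never sees.
COMPLETION_BONUS = {"b": 5.0}

def beam_search(stack_limit=None):
    stack = [((), 0.0)]
    for choices in STEP_CHOICES:
        expanded = [(words + (w,), score + s)
                    for words, score in stack
                    for w, s in choices.items()]
        expanded.sort(key=lambda h: h[1], reverse=True)
        if stack_limit is not None:      # histogram pruning
            expanded = expanded[:stack_limit]
        stack = expanded
    finished = [(w, s + COMPLETION_BONUS.get(w[0], 0.0)) for w, s in stack]
    return max(finished, key=lambda h: h[1])

print(beam_search())               # (('b', 'x'), 4.0): the true best path
print(beam_search(stack_limit=1))  # (('a', 'x'), 0.0): pruning lost it

The unpruned run keeps the eventual winner around; the pruned run throws it
away at step 1, so once stacks are capped the two input modes can drift apart
even on the same sentence.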
Hi,
Could someone kindly help me out with respect to the following?
(Why should there be a difference in the translations produced when I am
providing essentially the same input, just with a different inputtype,
namely lattice format?)
On Fri, Nov 12, 2010 at 6:49 PM, Amit Abbi wrote:
>
Hi,
I had a query regarding the use of lattice input in Moses.
There is a small difference in the translations generated when I run Moses
using the 'normal' input format and when I run it with the 'lattice input'
format.
The translations weren't radically different - only a few phrases were
different.
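For reference, Moses reads word lattices in PLF (the Python Lattice Format,
selected via the -inputtype option), where a plain sentence corresponds to a
single-path lattice: one node per word, each with a single edge of the form
(word, probability, distance-to-next-node). A small helper of my own (not
part of Moses) that wraps a tokenized sentence that way:

def sentence_to_plf(sentence):
    # One node per token, each with a single edge (word, prob 1.0, skip 1).
    nodes = tuple(((word, 1.0, 1),) for word in sentence.split())
    # PLF uses literal Python tuple syntax, so repr() prints it directly.
    return repr(nodes)

print(sentence_to_plf("das ist ein haus"))
# ((('das', 1.0, 1),), (('ist', 1.0, 1),), (('ein', 1.0, 1),), (('haus', 1.0, 1),))

Such a single-path lattice encodes the same sentence as the plain input, so
any difference in output should come from how the decoder handles the two
input types (e.g. pruning), not from the lattice itself.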