Hieu Hoang
> http://www.hoang.co.uk/hieu
>
> On 25 April 2016 at 15:10, Rajnath Patel wrote:
>
>> Hi Hieu,
>>
>> We are using the standard tuning command with default settings, as given below.
>> Kindly suggest, what is missing here?
>> Thank you!
> …you'll be in deep water if you don't …
>https://www.mail-archive.com/moses-support%40mit.edu/msg12446.html
>
> If they produce bad results, it indicates there's something wrong
> somewhere in your pipeline
>
> Hieu Hoang
> http://www.hoang.co.uk/hieu
>
> On 25 April 2016 at …
…using
> the tuning set.
>
> Best,
> Jasneet
>
> > On Apr 25, 2016, at 2:38 AM, Rajnath Patel
> wrote:
> >
> > Hi all,
> >
> > I am trying to tune a phrase based model with default tuning parameters
> (MERT, BLEU). But, instead of improvement gett…
Hi all,
I am trying to tune a phrase-based model with the default tuning parameters
(MERT, BLEU), but instead of improving, the BLEU score on the test set goes down.
Kindly help me choose an appropriate tuning algorithm and metric for
English-French SMT.
Thank you!
--
Regards,
Raj Nath Patel
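For reference, a minimal MERT tuning invocation has roughly the shape below. This is a hedged sketch, not the poster's actual command: `$MOSES`, `dev.en`, `dev.fr`, and `model/moses.ini` are placeholder paths, and the dev set must be disjoint from the test set.

```shell
# Sketch: tune a phrase-based en->fr model with MERT/BLEU (the defaults).
# All paths below are placeholders; adjust to your installation.
$MOSES/scripts/training/mert-moses.pl \
    dev.en dev.fr \
    $MOSES/bin/moses model/moses.ini \
    --working-dir mert-work \
    --mertdir $MOSES/bin \
    --decoder-flags "-threads 4"
```

The tuned weights end up in `mert-work/moses.ini`, which is then used for decoding the test set.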
_
Thanks,
I guess I misinterpreted Rico's response. :(
I tried the configuration you suggested and got improved results (12.10 to
19.37). But I have used a 5-gram ARPA LM built by KenLM (using the "lmplz"
command). Is that OK, or should it be trained in some other way?
--
Regards
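For what it's worth, a 5-gram KenLM built with `lmplz` is a standard setup. A minimal sketch, with placeholder file names:

```shell
# Estimate a 5-gram ARPA language model from the target-side corpus.
bin/lmplz -o 5 < corpus.target > lm.arpa

# Optionally compile it to KenLM's binary format for faster loading in Moses.
bin/build_binary lm.arpa lm.binary
```

Moses can load either the ARPA file or the binarized form via the LM path in moses.ini.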
On Mon, Sep 14, 2015 at 11:…
…I am not 100% sure but it should work.
>
>
>
>
> On Mon, Sep 14, 2015 at 1:54 PM, Rajnath Patel
> wrote:
>
>> Thanks for quick response.
>>
>> @Raj Dabre
>> Corpus statistics are as follows:
>> approx. 65k sentences, 1,200k words, 50k vocabulary.
>> Plea…
> …Neural LM for
> English-Hindi SMT
> To: Rajnath Patel
> Cc: moses-support
>
> Hi,
> I have had a similar experience with NPLM.
> Do you perhaps h…
Hi all,
I have tried a neural LM (NPLM) with phrase-based English-Hindi SMT, but the
translation quality is not as good as with an n-gram LM (scores are
given below). I have trained 3-gram and 5-gram LMs with the default
settings (as described on statmt.org/moses). Kindly suggest, if someone has
tr…
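For context, an NPLM model is loaded in Moses through a feature line in moses.ini. The fragment below is a hypothetical sketch only: the feature name, factor, order, and path depend on how your Moses was built and are not taken from this thread.

```ini
# Hypothetical moses.ini fragment for a neural LM feature
# (names and paths are assumptions; check your Moses build)
[feature]
NeuralLM factor=0 name=NLM0 order=5 path=/path/to/nplm.model

[weight]
NLM0= 0.5
```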
…
nofiledumpsyn 1
t2 0
t2to3 0
t3 0
t345 0
th 1

Updated:
model2dumpfrequency 1
model345dumpfrequency 1
model3dumpfrequency 1
nodumps 0
nofiledumpsyn 0
t2 1
t2to3 1
t3 1
t345 1
--
Regards
Raj Nath Patel
On Wed, Jan 21, 2015 at 4:12 PM, Rajnath Patel
wrote:
> Hi all,
>
> I was trying to use force…
Hi all,
I was trying to use the force alignment script provided with mgiza. Full
alignment training generates just "eng-hin.A3.final", whereas the script
requires 'eng-hin.t3.final', 'eng-hin.d3.final', 'eng-hin.n3.final', etc.
Kindly suggest how to get these files from full training with mgiza.
Thank you
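The missing `*.t3.final`, `*.d3.final`, `*.n3.final` tables are only written when the corresponding dump options are enabled in the mgiza training config. A hedged sketch of the additions (option names as they appear in mgiza configs; exact defaults may vary by version):

```ini
# Hypothetical mgiza config additions: enable model dumps so that
# the t3/d3/n3 tables needed by force alignment are written out.
nodumps 0
nofiledumpsyn 0
model3dumpfrequency 1
model345dumpfrequency 1
t3 1
t345 1
```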
>
> https://docs.google.com/document/d/1G9RjczZXWGHU6byJFORf6uToItph1jU_piL53wQhGXg/edit
>
> I have tried following the documentation and was able to run it
> successfully.
>
> Regards
> Anoop.
>
> On Tue, Jan 6, 2015 at 4:27 PM, Rajnath Patel
> wrote:
>
The NIST scorer is not included, but you can add it to Moses if you wish.
>
> Christophe
>
> 2015-01-06 11:35 GMT+01:00 Rajnath Patel :
>
>> Thanks all for your quick response.
>>
>> Hi Christophe,
>>
>> Is it not possible to use any other metrics …
Hi all,
I am trying to use the instructions given on the Moses site (
http://www.statmt.org/moses/?n=Moses.AdvancedFeatures#ntoc5) to train a
transliteration model using a parallel bilingual corpus, but I am unable to
train it. Actually, it's not clear to me what exactly the 'alignment' switch is
supposed to refer to. If pos…
>> Moses' tuning script can use other metrics. Hopefully, someone who knows
>> how will tell you shortly
>>
>> On 6 January 2015 at 12:37, HOANG Cong Duy Vu wrote:
>>
>>> Hi,
>>>
>>> You can use other metrics by using ZMERT together with Mos…
Hi All,
As we know, Moses uses BLEU for evaluation in the tuning process. We want to
use the NIST evaluation metric instead of BLEU. Please suggest how this can be
done.
Thank you.
--
Regards:
Raj Nath Patel
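For reference, the tuning metric is chosen by the scorer passed to the internal mert tool. Since NIST does not ship with Moses, the sketch below shows switching to TER as an example of a non-default scorer; paths are placeholders and the exact flag set may vary by Moses version.

```shell
# Hypothetical: run MERT with the TER scorer instead of the default BLEU.
# Paths are placeholders; adjust to your installation.
scripts/training/mert-moses.pl dev.src dev.ref bin/moses model/moses.ini \
    --working-dir mert-ter \
    --mertargs "--sctype TER"
```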
___
Moses-support mailing list
Moses-support@mit.edu
Very useful. Adding some more resources, available at -
http://kbcs.in/tools.html
On Tue, Nov 25, 2014 at 4:33 PM, wrote:
> Send Moses-support mailing list submissions to
> moses-support@mit.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mailman.mit.ed
Hi Mahima,
Set the environment variable "LD_LIBRARY_PATH" to the directory containing
libxmlrpc*.so.7 on your system. In my case it was "/usr/local/lib", so I added
the following line to my ".bashrc":
export LD_LIBRARY_PATH=/usr/local/lib
Thank you.
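The step above amounts to extending the dynamic linker's search path. A minimal sketch; the /usr/local/lib location is an assumption and is system-dependent:

```shell
# Prepend the directory that holds libxmlrpc*.so.7 to the search path.
# (/usr/local/lib is an assumption; adjust for your system.)
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

# Confirm the directory is now on the path.
case ":$LD_LIBRARY_PATH:" in
  *:/usr/local/lib:*) echo "linker path OK" ;;
esac
```

To make the change permanent, append the export line to ~/.bashrc as described above.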
On 28-Jun-2014 9:48 PM, wrote:
> Similar functionality can be obtained with the parameter
> [alternate-weight-setting]
> See here for details
> http://www.statmt.org/moses/?n=Moses.AdvancedFeatures#ntoc59
>
> Or just run multiple instances of the Moses server, one for each system.
>
> On 19/04/2…
Hi,
I am trying to use the same Moses server for multiple translation systems, but
it is giving the following error:
"ERROR:Unknown parameter translation-systems"
The moses.ini file is as follows:
# D - decoding path, R - reordering model, L - language model
[translation-systems]
hi D 0 R 0 L 0
ta D 1 …
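The "one server instance per system" route suggested in the reply looks roughly like the sketch below; the ini paths and port numbers are placeholders, not taken from this thread.

```shell
# Hypothetical: one mosesserver process per language pair,
# each listening on its own port (paths/ports are placeholders).
mosesserver -f hi-en/moses.ini --server-port 8080 &
mosesserver -f ta-en/moses.ini --server-port 8081 &
```

The client then selects a system simply by the port it connects to.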
Thanks Lakshya and Barry,
Now it's working fine.
On Thu, Apr 17, 2014 at 10:32 AM, Rajnath Patel wrote:
> Hi Lakshya,
>
> Thanks for the clarification. Could you please share the code snippet showing
> how you are passing "nbest" as an integer?
>
>
>
>
> On Thu, Ap
> …integer parameter.
>
>
> Regards
> Lakshya
>
>
>
> -- Forwarded message --
> From: Barry Haddow
> Date: Thu, Apr 17, 2014 at 12:43 AM
> Subject: Re: [Moses-support] Moses server with -n-best-list option
> To: Rajnath Patel
> Cc: moses-support@
…have a look at the Java code also and suggest. What should I do? Using
Moses v2.1.
On Wed, Apr 16, 2014 at 8:55 PM, Barry Haddow wrote:
> Hi Raj
>
> You need to specify nbest=true in the request you send to moses server.
> There was a thread last week on this, and I think
> nbest lists
Hi,
We are using mosesserver for our translation system. We use the Java API
provided with the decoder, and it is working fine. We want multiple
translations for the same text. In one Moses support conversation it was
stated, in Sept 2012, that mosesserver does not support the -n-best-list
option. Is this…
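For illustration, an n-best request to mosesserver over XML-RPC might look like the sketch below. The "nbest" parameter name follows the discussion in this thread (pass it as an integer in the request struct); the port and the exact parameter names may vary by Moses version.

```shell
# Hypothetical XML-RPC "translate" call asking mosesserver for a 5-best list.
# Endpoint and parameter names are assumptions; check your Moses version.
curl -s http://localhost:8080/RPC2 --data @- <<'EOF'
<?xml version="1.0"?>
<methodCall>
  <methodName>translate</methodName>
  <params><param><value><struct>
    <member><name>text</name><value><string>hello world</string></value></member>
    <member><name>nbest</name><value><int>5</int></value></member>
  </struct></value></param></params>
</methodCall>
EOF
```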
Hi,
There is no problem if factors are at the source as well as at the target.
Everything (training + testing) works fine without binarization when "no
factors source + some factors target".
The following error occurs with the binarized phrase table at the time of
testing when "no factors source + some fa…
Hi,
I have trained the factored model using Moses (release 0.91) and
GIZA++ (version 1.0.7). Everything was working fine after training
as well as after tuning. But after converting the phrase table and
reordering model to binary format, I am stuck with the following decoding
error: …