[Moses-support] Negative weights for translation model and reordering models

2015-01-29 Thread HOANG Cong Duy Vu
Hi,

I trained a conventional baseline system with a translation model, a lexical
reordering model (wbe-msd-bidirectional-fe-allff), and a language model.

I've encountered the following problems:

1) When I *add* another hierarchical reordering model
(hier-mslr-bidirectional-fe-allff, with 8 dense features), after tuning
with MIRA, some of the weights of the lexical reordering model become
negative, e.g.:
...
LexicalReordering0= -0.00398112195593915 0.00177901253382393
-0.00561620995293243 0.0115717603473397 0.00416629648355966
0.0123725554271622
LexicalReordering1= 0.0603959132391883 0.0637905029984998
0.0453474898750001 0.0255712011871197 0.0333243029043151 0.0360465652275341
0.0102744718438895 0.037224797723084
...

2) When I *add* the bilingual NPLM model, after tuning with MIRA, one of
the weights of the translation model becomes negative, e.g.:
...
NNJM0= 0.0425000449522696
TranslationModel0= 0.0528812807140059 0.0459795108631494 0.0322642254177683
-0.00103257479908177
...
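
(For reference: the decoder combines all feature scores linearly,

score(e, f) = Σ_i λ_i · h_i(e, f),

so a negative weight λ_i simply turns feature h_i into a penalty rather than
a reward; the sign alone is not an error, though large sign changes after
adding a strongly correlated model could indeed point to over-fitting on the
tuning set.)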

I suspect this is a sign of an *over-fitting* problem.
Have you encountered this before?

May I ask for your advice?

Thank you very much!

--
Cheers,
Vu


Re: [Moses-support] Sparse features and overfitting

2015-01-15 Thread HOANG Cong Duy Vu
Thanks for your replies!

Hi Prashant,

> there is definitely an option for sparse l1/l2 regularization with mira. I
> don't know how to call it through command line though.


Yes. For MIRA, we can set the *C* parameter to control its regularization.
I tried different C values (0.01, 0.001) but it didn't work in my case.

Hi Matthias,

> Do the sparse features give you any large improvement on the tuning set?


Yes. The improvement is around 2-3 BLEU points on the tuning set.

> Does this mean that there are hundreds of sentences in your original
> tuning and test sets that are equal on the source side but have
> different references? That sounds a bit odd. Maybe it indicates that
> something about your data is generally problematic.


Yes, I agree it's quite odd. But this data (Chinese-to-English) was
extracted from an official competition.
I will probably have to remove the overlap before moving on to other kinds
of features.

--
Cheers,
Vu

On Fri, Jan 16, 2015 at 6:31 AM, Matthias Huck  wrote:

> On Thu, 2015-01-15 at 13:54 +0800, HOANG Cong Duy Vu wrote:
>
>
> > - tune & test
> > (based on source)
> > size of overlap set = 624
> > (based on target)
> > size of overlap set = 386
>
> >
> > (tune & test overlap heavily on the source side, but about half of the
> > overlapping pairs have different target sentences)
>
>
>
> Does this mean that there are hundreds of sentences in your original
> tuning and test sets that are equal on the source side but have
> different references? That sounds a bit odd. Maybe it indicates that
> something about your data is generally problematic.
>
>
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
>


[Moses-support] Sparse features and overfitting

2015-01-14 Thread HOANG Cong Duy Vu
Hi,

I am working on applying sparse features to a *phrase-based* system in the
*conversational* domain (e.g. SMS, chat).

I used sparse features such as: TargetWordInsertionFeature,
SourceWordDeletionFeature, WordTranslationFeature, PhraseLengthFeature.
The sparse features are restricted to the top source and target words (100,
150, 200, 250, 300).

My parallel data comprise: train (201K); tune (6214); test (641).
My system configuration: tuning with MIRA, a 5-gram LM with KenLM,
everything else at default settings.

Here are the results:

                   BLEU    NIST    METEOR
Baseline           0.2009  5.2175  0.2603
Baseline + SP100   0.2021  5.1350  0.2645
Baseline + SP150   0.2048  5.1804  0.2653
Baseline + SP200   0.2093  5.2272  0.2671
Baseline + SP250   0.2148  5.2603  0.2680
Baseline + SP300   0.2146  5.2631  0.2680
(SP: sparse features)

Although I got a significantly improved result with SP250, I believe it was
due to over-fitting.
I therefore studied the overlap between the train, tune and test data sets.
The overlap statistics are as follows:
- *train & test*:
*(based on source)*
size of test set = 625 ( 641 with duplicates )
size of overlap set = 65
proportion of train set inside test set = 6394 / 201301
*(based on target)*
size of test set = 621 ( 641 with duplicates )
size of overlap set = 69
proportion of train set inside test set = 13808 / 201301

- *tune & test*
*(based on source)*
size of test set = 625 ( 641 with duplicates )
size of overlap set = 624
proportion of tune set inside test set = 939 / 6214
*(based on target)*
size of test set = 621 ( 641 with duplicates )
size of overlap set = 386
proportion of tune set inside test set = 706 / 6214

(tune & test overlap heavily on the source side, but about half of the
overlapping pairs have different target sentences)
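
The overlap numbers above were computed along these lines (a self-contained
sketch, not the exact script; run it on the source side and on the target
side by passing the corresponding files):

#include <fstream>
#include <iostream>
#include <string>
#include <unordered_set>

//count how many unique sentences of the second file also occur in the first
int main(int argc, char **argv)
{
  if (argc != 3) {
    std::cerr << "usage: overlap <tune-or-train.txt> <test.txt>" << std::endl;
    return 1;
  }

  std::unordered_set<std::string> ref;
  std::string line;
  std::ifstream refIn(argv[1]);
  while (std::getline(refIn, line)) ref.insert(line);

  std::unordered_set<std::string> uniqueTest; //e.g. 641 lines -> 625 unique
  std::size_t overlap = 0;
  std::ifstream testIn(argv[2]);
  while (std::getline(testIn, line))
    if (uniqueTest.insert(line).second && ref.count(line) > 0) ++overlap;

  std::cout << "size of test set = " << uniqueTest.size()
            << ", size of overlap set = " << overlap << std::endl;
  return 0;
}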

After filtering out the overlapping parts (based on source sentences) from
train and tune with respect to test, the resulting parallel data are:
train (194K); tune (5274); test (641).

And here are the results:

                   BLEU    NIST    METEOR
Baseline           0.1990  5.1764  0.2589
Baseline + SP250   0.1967  5.0109  0.2606

Only METEOR improved slightly; the other metrics dropped markedly.

Is there any way to prevent over-fitting when applying sparse features?
Or do sparse features simply not generalize well to "unseen" data in this
case?
I am seeking your advice.

Thanks so much!

--
Cheers,
Vu


Re: [Moses-support] Using evaluation metrics other than BLEU in tuning

2015-01-05 Thread HOANG Cong Duy Vu
Hi,

You can use other metrics by running ZMERT together with Moses:
*ZMERT toolkit*: http://cs.jhu.edu/~ozaidan/zmert/


--
Cheers,
Vu

On Tue, Jan 6, 2015 at 12:35 PM, Rajnath Patel 
wrote:

> Hi All,
> As we know, Moses uses BLEU for evaluation in the tuning process. We want
> to use the NIST evaluation metric instead of BLEU. Please suggest how this
> can be done.
>
> Thank you.
>
> --
> Regards:
> Raj Nath Patel
>
>


Re: [Moses-support] string of Words + states in feature functions

2014-12-10 Thread HOANG Cong Duy Vu
More:

word_str = source_sent.GetWord(pos).GetString(m_factorType)
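
Putting these together, rough usage inside the feature function would look
like this (a sketch only: exact signatures can differ between Moses
versions, and as_string() is assumed from util::StringPiece):

//iterate over the source positions covered by the current hypothesis
const WordsRange &range = cur_hypo.GetCurrSourceWordsRange();
for (size_t pos = range.GetStartPos(); pos <= range.GetEndPos(); ++pos) {
  //surface string of the source word at this position, for one factor
  const std::string word_str =
      source_sent.GetWord(pos).GetString(m_factorType).as_string();
  //... use word_str and pos in the feature ...
}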

--
Cheers,
Vu

On Wed, Dec 10, 2014 at 5:26 PM, HOANG Cong Duy Vu 
wrote:

> Hi Amir,
>
>> I'm implementing a feature function in moses-chart. I need the source
>> words string and also their indexes in the source sentence. I've written a
>> function that gets the source words but I don't know how to extract the
>> word string from a word.
>> Could anyone guide me how to do that? As I know, each word is implemented
>> as an array of factors; which of them is its string?
>
>
> You can utilize some of the following functions to get the source
> information:
>
> //the target phrase and the source span it covers
> const TargetPhrase& currTargetPhrase = cur_hypo.GetCurrTargetPhrase();
> const WordsRange& sourceWordsRange = cur_hypo.GetCurrSourceWordsRange();
>
> //the source sentence
> Manager& manager = cur_hypo.GetManager();
> const Sentence& source_sent = static_cast<const Sentence&>(manager.GetSource());
>
> //word alignment within the current phrase pair
> const AlignmentInfo& alignments = currTargetPhrase.GetAlignTerm();
>
>> I have also some questions about the states in the stateful features:
>> what kind of variables should be stored in each state? Only those
>> that should be used in the compare function? Or any variable from the
>> previous hypothesis that we use in our feature?
>
>
> Normally, for stateful feature functions, one stores, for instance, the
> previous target words.
>
>
> --
> Cheers,
> Vu
>
> On Wed, Dec 10, 2014 at 4:11 PM, amir haghighi wrote:
>
>> Hi everyone
>>
>> I'm implementing a feature function in moses-chart. I need the source
>> words string and also their indexes in the source sentence. I've written a
>> function that gets the source words but I don't know how to extract the
>> word string from a word.
>> Could anyone guide me how to do that? As I know, each word is implemented
>> as an array of factors; which of them is its string?
>>
>> I have also some questions about the states in the stateful features,
>> what kind of variables should be stored in each state? Only those
>> that should be used in the compare function? Or any variable from the
>> previous hypothesis that we use in our feature?
>>
>> Thanks in advance!
>>
>> Cheers
>> Amir


Re: [Moses-support] string of Words + states in feature functions

2014-12-10 Thread HOANG Cong Duy Vu
Hi Amir,

> I'm implementing a feature function in moses-chart. I need the source words
> string and also their indexes in the source sentence. I've written a
> function that gets the source words but I don't know how to extract the
> word string from a word.
> Could anyone guide me how to do that? As I know, each word is implemented
> as an array of factors; which of them is its string?


You can utilize some of the following functions to get the source
information:

//the target phrase and the source span it covers
const TargetPhrase& currTargetPhrase = cur_hypo.GetCurrTargetPhrase();
const WordsRange& sourceWordsRange = cur_hypo.GetCurrSourceWordsRange();

//the source sentence
Manager& manager = cur_hypo.GetManager();
const Sentence& source_sent = static_cast<const Sentence&>(manager.GetSource());

//word alignment within the current phrase pair
const AlignmentInfo& alignments = currTargetPhrase.GetAlignTerm();

> I have also some questions about the states in the stateful features:
> what kind of variables should be stored in each state? Only those
> that should be used in the compare function? Or any variable from the
> previous hypothesis that we use in our feature?


Normally, for stateful feature functions, one stores, for instance, the
previous target words.
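
As a rough sketch of such a state (hypothetical class and member names,
assuming the usual Moses FFState interface from moses/FF/FFState.h, where
Compare() drives hypothesis recombination):

class PrevTargetWordState : public FFState
{
public:
  explicit PrevTargetWordState(const Word &word) : m_word(word) {}

  //states that compare equal allow the decoder to recombine hypotheses,
  //so compare exactly (and only) what future scoring depends on
  int Compare(const FFState &other) const {
    const PrevTargetWordState &o =
        static_cast<const PrevTargetWordState&>(other);
    if (m_word == o.m_word) return 0;
    return m_word < o.m_word ? -1 : 1;
  }

private:
  Word m_word; //the previous target word; nothing more is needed here
};

Variables that are only needed while scoring the current phrase can be read
from the hypothesis itself; the state should carry just what Compare() and
the scoring of the next extension require.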


--
Cheers,
Vu

On Wed, Dec 10, 2014 at 4:11 PM, amir haghighi 
wrote:

> Hi everyone
>
> I'm implementing a feature function in moses-chart. I need the source
> words string and also their indexes in the source sentence. I've written a
> function that gets the source words but I don't know how to extract the
> word string from a word.
> Could anyone guide me how to do that? As I know, each word is implemented
> as an array of factors; which of them is its string?
>
> I have also some questions about the states in the stateful features,
> what kind of variables should be stored in each state? Only those
> that should be used in the compare function? Or any variable from the
> previous hypothesis that we use in our feature?
>
> Thanks in advance!
>
> Cheers
> Amir
>


Re: [Moses-support] Add a new LM feature in Moses

2014-08-14 Thread HOANG Cong Duy Vu
Hi Christian,

It works now. Thanks!

--
Cheers,
Vu


On Fri, Aug 15, 2014 at 12:00 PM, Christian Hadiwinoto <
chr...@comp.nus.edu.sg> wrote:

> Hi,
>
>
>
> Have you added the “HybKen.cpp” after “alias LM” in the Jamfile?
>
>
>
> I hope this helps.
>
>
>
> Regards,
>
> Christian Hadiwinoto
>
>
>
> *From:* moses-support-boun...@mit.edu [mailto:
> moses-support-boun...@mit.edu] *On Behalf Of *HOANG Cong Duy Vu
> *Sent:* Friday, August 15, 2014 10:44 AM
> *To:* moses-support@mit.edu
> *Subject:* [Moses-support] Add a new LM feature in Moses
>
>
>
> Hi,
>
>
>
> I would like to add a new simple LM named HybLanguageModelKen (HybKen.h
> and HybKen.cpp) which will inherit from LanguageModelKen.
>
>
>
> In Factory.cpp, I added as follows:
>
>
>
> ...
>
> //#include "moses/LM/Ken.h"
>
> #include "moses/LM/HybKen.h"
>
> ...
>
>
>
> class KenFactory : public FeatureFactory
>
> {
>
> public:
>
>   void Create(const std::string &line) {
>
> DefaultSetup(ConstructKenLM(line));
>
>   }
>
> };
>
>
>
> class HybKenFactory : public FeatureFactory
>
> {
>
> public:
>
>   void Create(const std::string &line) {
>
> DefaultSetup(ConstructHybKenLM(line));
>
>   }
>
> };
>
>
>
> ...
>
> Add("KENLM", new KenFactory());
>
>
>
> Add("HKENLM", new HybKenFactory());
>
>
>
> ...
>
>
>
> I've created HybKen.h as follows:
>
>
>
> #ifndef moses_LanguageModelHybKen_h
>
> #define moses_LanguageModelHybKen_h
>
>
>
> //#include 
>
> //#include 
>
>
>
> //#include "lm/word_index.hh"
>
>
>
> //#include "moses/LM/Base.h"
>
> //#include "moses/Hypothesis.h"
>
> //#include "moses/TypeDef.h"
>
> //#include "moses/Word.h"
>
>
>
> #include "moses/LM/Ken.h"
>
> namespace Moses
>
> {
>
>
>
> LanguageModel *ConstructHybKenLM(const std::string &line);
>
>
>
> //! This will also load. Returns a templated KenLM class
>
> LanguageModel *ConstructHybKenLM(const std::string &line, const
> std::string &file, const std::string &fileM, FactorType factorType, bool
> lazy);
>
>
>
> void LoadMapping(const std::string &f, std::map&
> m);
>
>
>
> /*
>
>  * An implementation of single factor LM using Kenneth's code.
>
>  */
>
> template <class Model> class LanguageModelHybKen : public
> LanguageModelKen<Model>
>
> {
>
> ...
>
>
>
> Factory.cpp, HybKen.h and HybKen.cpp are attached for your reference.
>
>
>
> But I always get a link-time error: "moses/FF/Factory.cpp:166:
> error: undefined reference to 'Moses::ConstructHybKenLM(std::string const&)'".

> I understand that Moses::ConstructHybKenLM(std::string const&) is already
> defined in the Moses namespace.
>
>
>
> May I ask for your help?
>
>
>
> Thank you!
>
>
>
> --
> Cheers,
> Vu
>


[Moses-support] Add a new LM feature in Moses

2014-08-14 Thread HOANG Cong Duy Vu
Hi,

I would like to add a new simple LM named HybLanguageModelKen (HybKen.h and
HybKen.cpp) which will inherit from LanguageModelKen.

In Factory.cpp, I added as follows:

...
//#include "moses/LM/Ken.h"
#include "moses/LM/HybKen.h"
...

class KenFactory : public FeatureFactory
{
public:
  void Create(const std::string &line) {
DefaultSetup(ConstructKenLM(line));
  }
};

class HybKenFactory : public FeatureFactory
{
public:
  void Create(const std::string &line) {
DefaultSetup(ConstructHybKenLM(line));
  }
};

...
Add("KENLM", new KenFactory());

Add("HKENLM", new HybKenFactory());

...

I've created HybKen.h as follows:

#ifndef moses_LanguageModelHybKen_h
#define moses_LanguageModelHybKen_h

//#include 
//#include 

//#include "lm/word_index.hh"

//#include "moses/LM/Base.h"
//#include "moses/Hypothesis.h"
//#include "moses/TypeDef.h"
//#include "moses/Word.h"

#include "moses/LM/Ken.h"
namespace Moses
{

LanguageModel *ConstructHybKenLM(const std::string &line);

//! This will also load. Returns a templated KenLM class
LanguageModel *ConstructHybKenLM(const std::string &line, const std::string
&file, const std::string &fileM, FactorType factorType, bool lazy);

void LoadMapping(const std::string &f, std::map&
m);

/*
 * An implementation of single factor LM using Kenneth's code.
 */
template <class Model> class LanguageModelHybKen : public
LanguageModelKen<Model>
{
...

Factory.cpp, HybKen.h and HybKen.cpp are attached for your reference.

But I always get a link-time error: "moses/FF/Factory.cpp:166:
error: undefined reference to 'Moses::ConstructHybKenLM(std::string const&)'".
I understand that Moses::ConstructHybKenLM(std::string const&) is already
defined in the Moses namespace.
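
Note: "undefined reference" is a linker error rather than a compiler error,
so the declaration in HybKen.h satisfies the compiler; the question is
whether the definition in HybKen.cpp is actually compiled and linked into
the moses binary. A minimal sketch of the definition the linker is looking
for (the delegated argument values are placeholders only):

//in HybKen.cpp
#include "moses/LM/HybKen.h"

namespace Moses
{

LanguageModel *ConstructHybKenLM(const std::string &line)
{
  //real code would parse "line" for the model file, factor and lazy flag,
  //analogously to ConstructKenLM() in moses/LM/Ken.cpp; what matters to
  //the linker is that this definition lives in a compiled source file
  return ConstructHybKenLM(line, "lm.file", "map.file", 0, false);
}

}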

May I ask for your help?

Thank you!

--
Cheers,
Vu
#include "moses/FF/Factory.h"
#include "moses/StaticData.h"

#include "moses/TranslationModel/PhraseDictionaryTreeAdaptor.h"
#include "moses/TranslationModel/RuleTable/PhraseDictionaryOnDisk.h"
#include "moses/TranslationModel/PhraseDictionaryMemory.h"
#include "moses/TranslationModel/PhraseDictionaryMultiModel.h"
#include "moses/TranslationModel/PhraseDictionaryMultiModelCounts.h"
#include "moses/TranslationModel/RuleTable/PhraseDictionaryALSuffixArray.h"
#include "moses/TranslationModel/PhraseDictionaryDynSuffixArray.h"
#include "moses/TranslationModel/PhraseDictionaryScope3.h"
#include "moses/TranslationModel/PhraseDictionaryTransliteration.h"
#include "moses/TranslationModel/RuleTable/PhraseDictionaryFuzzyMatch.h"

#include "moses/FF/LexicalReordering/LexicalReordering.h"

#include "moses/FF/BleuScoreFeature.h"
#include "moses/FF/TargetWordInsertionFeature.h"
#include "moses/FF/SourceWordDeletionFeature.h"
#include "moses/FF/GlobalLexicalModel.h"
#include "moses/FF/GlobalLexicalModelUnlimited.h"
#include "moses/FF/UnknownWordPenaltyProducer.h"
#include "moses/FF/WordTranslationFeature.h"
#include "moses/FF/TargetBigramFeature.h"
#include "moses/FF/TargetNgramFeature.h"
#include "moses/FF/PhraseBoundaryFeature.h"
#include "moses/FF/PhrasePairFeature.h"
#include "moses/FF/PhraseLengthFeature.h"
#include "moses/FF/DistortionScoreProducer.h"
#include "moses/FF/SparseHieroReorderingFeature.h"
#include "moses/FF/WordPenaltyProducer.h"
#include "moses/FF/InputFeature.h"
#include "moses/FF/PhrasePenalty.h"
#include "moses/FF/OSM-Feature/OpSequenceModel.h"
#include "moses/FF/ControlRecombination.h"
#include "moses/FF/ExternalFeature.h"
#include "moses/FF/ConstrainedDecoding.h"
#include "moses/FF/CoveredReferenceFeature.h"
#include "moses/FF/TreeStructureFeature.h"
#include "moses/FF/SoftMatchingFeature.h"
#include "moses/FF/SourceGHKMTreeInputMatchFeature.h"
#include "moses/FF/HyperParameterAsWeight.h"
#include "moses/FF/SetSourcePhrase.h"
#include "CountNonTerms.h"
#include "ReferenceComparison.h"
#include "RuleScope.h"
#include "MaxSpanFreeNonTermSource.h"
#include "NieceTerminal.h"
#include "SpanLength.h"
#include "SyntaxRHS.h"
#include "SkeletonChangeInput.h"

#include "moses/FF/SkeletonStatelessFF.h"
#include "moses/FF/SkeletonStatefulFF.h"
#include "moses/LM/SkeletonLM.h"
#include "moses/TranslationModel/SkeletonPT.h"

#ifdef HAVE_CMPH
#include "moses/TranslationModel/CompactPT/PhraseDictionaryCompact.h"
#endif
#ifdef PT_UG
#include "moses/TranslationModel/UG/mmsapt.h"
#endif
#ifdef HAVE_PROBINGPT
#include "moses/TranslationModel/ProbingPT/ProbingPT.h"
#endif

//#include "moses/LM/Ken.h"
#include "moses/LM/HybKen.h"

#ifdef LM_IRST
#include "moses/LM/IRST.h"
#endif

#ifdef LM_SRI
#include "moses/LM/SRI.h"
#endif

#ifdef LM_MAXENT_SRI
#include "moses/LM/MaxEntSRI.h"
#endif

#ifdef LM_RAND
#include "moses/LM/Rand.h"
#endif

#ifdef HAVE_SYNLM
#include "moses/SyntacticLanguageModel.h"
#endif

#ifdef LM_NEURAL
#include "moses/LM/NeuralLMWrapper.h"
#endif

#ifdef LM_DALM
#include "moses/LM/DALMWrapper.h"
#endif

#ifdef LM_LBL
#include "moses/LM/oxlm/LBLLM.h"
#endif

#include "ExampleSLFF.h"

#include "ExampleSFFF.h"

#include "util/exception.hh"

#include 

namespace Moses
{

c

Re: [Moses-support] Creating Language Model from google 1gram file

2013-01-24 Thread HOANG Cong Duy Vu
Hi,

I guess you can run something like this:

build-sublm.pl --size <order> --ngrams <ngram-count-file> --sublm
<sublm-prefix> [--prune-singletons] [--kneser-ney|--witten-bell]
merge-sublm.pl --size <order> --sublm <sublm-prefix> -lm iARPA_LM.gz
(then, with the ARPA files, you can use KenLM to build binary LM files)

--
Cheers,
Vu


On Thu, Jan 24, 2013 at 6:14 AM, Peled Guy  wrote:

> Hi,
>
> I'm working on a Transliteration project.
> The input is a word in one language and the output is the same word in
> English (not translated).
> My language Model will created from google 1gram file - while each letter
> of a word should be a word.
> This is the original file:
>
> 95119665584
>  95119665584
> ,   30578667846
> .   22077031422
>21594821357
> the 19401194714
> -   16337125274
> of  12765289150
> and 12522922536
>
> This is the file after inserting spaces between the letters of each word:
>
> t h e 19401194714
> -   16337125274
> o f  12765289150
> a n d 12522922536
>
> Now I have a "1gram" file that contains not just 1-grams (1 word per
> line), but also 2-grams/3-grams/etc.
> How can I run the SRILM "ngram-count" tool to create a language model?
> When I run it normally, the integers are treated as words too, and not as
> probabilities/numbers of occurrences.
>
> Can anyone help me please?
>
> Thank you,
> Guy.
>


[Moses-support] Google Web1T 5-gram

2012-12-05 Thread HOANG Cong Duy Vu
Hi everyone,

I would like to build large LMs from the Google Web1T 5-gram corpus.
I tried the goograms2ngrams.pl script from the IRSTLM toolkit to extract raw
n-gram counts, but I don't know how to build LMs (e.g. an ARPA file) from
those count files.

Has anyone dealt with this before? Please advise.

Thanks in advance!

--
Cheers,
Vu


Re: [Moses-support] recaser error

2012-09-19 Thread HOANG Cong Duy Vu
Hi,

You need to train the recaser first, then recase, something like this:

#train recaser
~/smt_tools/moses/scripts/recaser/train-recaser.perl -train-script
~/smt_tools/moses/scripts/training/train-model.perl -ngram-count
~/smt_tools/srilm/bin/i686-m64/ngram-count -corpus
corpus/viva-phase1-final.train.tok.en -dir ~/smt_tools/working/smt-recaser/
-scripts-root-dir ~/smt_tools/moses/scripts/

#run recaser
~/smt_tools/moses/scripts/recaser/recase.perl -model smt-recaser/moses.ini
-in smt-translated/viva-phase1-final.test.tok.low.translated-Vb.en -moses
~/smt_tools/moses/moses-cmd/src/moses >
smt-translated/viva-phase1-final.test.tok.recased.en

Hope it helps!

--
Cheers,
Vu


2012/9/20 Henry Hu 

> Hi Folks,
>
> I ran into issues with the recaser. After decoding, I issue the following
> command:
>
> ~/moses/mosesdecoder/scripts/recaser/recase.perl -model
> ~/train/model/moses.ini  -in ~/decoding/translated.txt -moses
> ~/moses/mosesdecoder/dist/bin/moses > ~/decoding/translated.recased
>
> Most resulting lines are correct: only the first letter has been
> capitalized. But in some lines, the first WORD has been replaced entirely.
> Even worse, some sentences are totally different. I give 4 examples
> below, from English to Italian:
>
> 1.
> sarà quindi visualizzare un {g} che hanno richiesto l' accesso per
> Supporto senza utente {g} messaggio . il cliente avrà completato il
> passaggio successivo .
>
> Recased:
>
> Sarà quindi visualizzare Riduci {g} che hanno richiesto l' accesso per
> Supporto senza utente {g} messaggio . Il cliente avrà completato il
> passaggio successivo .
>
> 2.
> se che desidera consentono l' accesso senza utente al suo computer
> memorizzandovi i loro Windows accedere password dell' applicazione di
> supporto ai in Remote , dovrebbero lasciare il {g} INVIO password di
> Windows selezionato {g} casella di controllo , inserire la propria
> password di Windows e fare clic su Consenti {30} {/30} Supporto senza
> utente .
>
> Recased:
>
> Tilizza che desidera consentono l' accesso senza utente al suo
> computer memorizzandovi i loro Windows accedere password dell'
> applicazione di supporto ai in campi remoti , dovrebbero lasciare il
> {g} INVIO password di Windows selezionato {g} casella di controllo ,
> inserire la propria password di Windows e tariffa clic su Consenti
> {30} {/30} Supporto senza utente .
>
> 3.
> le password salvate dall' dell' applicazione di supporto ai remoto non
> sono visibili in il tecnico , nemmeno per istanze disponibile a Citrix
> Online .
>
> Recased:
>
> Tile password salvate dall' dell' applicazione di supporto ai remoto
> non sono visibili in il tecnico , nemmeno per istanze disponibile
> Citrix Online .
>
> 4.
> 8 . un popup verrà visualizzata nella notifica computer del cliente
> {g} Support è una volta senza utente riuscita .
>
> Recased:
>
> 8 . Riduci menu di scelta rapida ) . Verrà visualizzata nella notifica
> computer Isole del Mar dei Coralli &bar; cliente {g} supporto è una
> volta senza utente riuscita .
>
>
> What is the correct recasing process? Thanks for any suggestions.
>
> Best regards,
> Henry
>


Re: [Moses-support] minimum amount of parallel data required for SMT to perform well

2012-05-10 Thread HOANG Cong Duy Vu
Hi,

You may want to read this paper
(http://aclweb.org/anthology-new/E/E12/E12-1016.pdf) to find an answer to
your question.

--
Cheers,
Vu


On Fri, May 11, 2012 at 9:00 AM, Wang Pidong  wrote:

> In my opinion, that depends on the differences between the source language
> and the target language, and also depends on the domain of the test set.
>
> 1. if the two languages are quite different, e.g. Chinese-English: the
> words are totally different, and the grammars are also different, so we
> need more training data;
>
> 2. if the test set contains texts from many different domains, the
> training data of course also needs to cover these domains in order to get
> good performance.
>
> Best wishes!
> Pidong
>
>  On 11 May 2012 00:02, tharaka weheragoda  wrote:
>
>>  Hi All,
>>   If anybody knows about the minimum amount of parallel data required for
>> If anybody knows the minimum amount of parallel data required for
>> SMT to perform well, please let me know.
>> Thanks in advance!
>> Tharaka
>>
>
>
> --
> Wang Pidong
>
> Department of Computer Science
> School of Computing
> National University of Singapore
>
>


[Moses-support] Incremental training for SMT

2011-10-05 Thread HOANG Cong Duy Vu
Hi all,

I am working on developing an SMT system that can learn incrementally. The
scenario is as follows:

- A state-of-the-art SMT system translates a source-language sentence
submitted by a user.
- The user identifies some translation errors in the translated sentence and
provides a correction.
- The SMT system receives the correction and learns from it immediately.

In other words: can the SMT system learn from user corrections
incrementally, without full re-training?

Do you know of any similar work, or do you have any advice or suggestions?

Thanks in advance!

--
Cheers,
Vu