Re: [Moses-support] Sparse features and overfitting

2015-01-15 Thread HOANG Cong Duy Vu
Thanks for your replies!

Hi Prashant,

> there is definitely an option for sparse l1/l2 regularization with mira. I
> don't know how to call it through command line though.

Yes. For MIRA, we can set the *C* parameter to control its regularization. I tried different C values (0.01, 0.001) but it di
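For readers following along, a hedged sketch of how the *C* parameter is typically passed to batch MIRA (kbmira) through Moses's tuning script. Paths are placeholders, and the exact flag spelling should be checked against your Moses version's `mert-moses.pl` and `kbmira --help`:

```shell
# Sketch only: tune with batch MIRA and forward the C regularization
# constant to kbmira. $MOSES, file names, and paths are placeholders.
$MOSES/scripts/training/mert-moses.pl \
    tune.src tune.ref \
    $MOSES/bin/moses moses.ini \
    --batch-mira \
    --batch-mira-args "-C 0.001" \
    --return-best-dev
```

Smaller C values constrain the weight updates more strongly, which is one way to damp overfitting on sparse features.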

Re: [Moses-support] Sparse features and overfitting

2015-01-15 Thread Matthias Huck
On Thu, 2015-01-15 at 13:54 +0800, HOANG Cong Duy Vu wrote:
> - tune & test
> (based on source)
> size of overlap set = 624
> (based on target)
> size of overlap set = 386
>
> (tune & test have high overlapping parts based on source sentences,
> but half of them have different target sentences)
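The overlap counts quoted above can be reproduced with a simple set intersection over the source (or target) sentences of the tuning and test sets. A minimal sketch, with placeholder data rather than the actual corpora from the thread:

```python
# Sketch: count sentences shared between the tuning and test sets.
# The sentences below are illustrative placeholders, not thread data.

def overlap_size(tune_sentences, test_sentences):
    """Number of distinct sentences appearing in both sets."""
    return len(set(tune_sentences) & set(test_sentences))

if __name__ == "__main__":
    tune = ["how r u", "c u later", "ok thx"]
    test = ["how r u", "gd nite"]
    print("size of overlap set =", overlap_size(tune, test))  # → 1
```

Running this separately on the source sides and on the target sides gives the two numbers reported above (624 vs. 386 in the original message).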

Re: [Moses-support] Sparse features and overfitting

2015-01-15 Thread Matthias Huck
We typically try to increase the tuning set in order to obtain more reliable sparse feature weights. But in your case it's rather the test set that seems a bit small for trusting the BLEU scores. Do the sparse features give you any large improvement on the tuning set? On Thu, 2015-01-15 at 13:

[Moses-support] Sparse features and overfitting

2015-01-14 Thread HOANG Cong Duy Vu
Hi, I am working on applying sparse features to a *phrase-based* system in a *conversational* domain (e.g. SMS, chat). I used sparse features such as TargetWordInsertionFeature, SourceWordDeletionFeature, WordTranslationFeature, and PhraseLengthFeature. Sparse features are used only for top source and
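For context, these feature functions are enabled in the `[feature]` section of `moses.ini`. A hedged sketch is below; the attribute names (`factor`, `path`, etc.) and the restriction to top-N word lists vary by Moses version, so the file names and options here are illustrative placeholders, not the poster's actual configuration:

```ini
[feature]
; Sketch only: restrict sparse features to top-N word lists (placeholder paths)
TargetWordInsertionFeature name=TWI factor=0 path=top-target-words.txt
SourceWordDeletionFeature name=SWD factor=0 path=top-source-words.txt
WordTranslationFeature name=WT input-factor=0 output-factor=0 simple=1 source-context=0 target-context=0
PhraseLengthFeature name=PL
```

Restricting the word-level features to frequent words keeps the number of sparse weights manageable, which matters for the overfitting issue discussed in this thread.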