Thanks for your replies!
Hi Prashant,
> there is definitely an option for sparse l1/l2 regularization with mira. I
> don't know how to call it through command line though.
Yes. For MIRA, we can set the *C* parameter to control its regularization.
I tried different C values (0.01, 0.001) but it di
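For reference, this is roughly how the C value gets passed down to kbmira via mert-moses.pl. It's only a sketch: the paths are placeholders, and the exact option spellings should be checked against your Moses version:

  # paths are hypothetical; -C is kbmira's regularization constant
  $MOSES/scripts/training/mert-moses.pl \
      tune.src tune.ref \
      $MOSES/bin/moses moses.ini \
      --batch-mira \
      --batch-mira-args="-C 0.001" \
      --return-best-dev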
On Thu, 2015-01-15 at 13:54 +0800, HOANG Cong Duy Vu wrote:
> - tune & test
> (based on source)
> size of overlap set = 624
> (based on target)
> size of overlap set = 386
>
> (tune & test have high overlapping parts based on source sentences,
> but half of them have different target sentences)
We typically try to increase the tuning set in order to obtain more
reliable sparse feature weights. In your case, though, it is rather the
test set that seems a bit small to trust the BLEU scores.
Do the sparse features give you any large improvement on the tuning set?
On Thu, 2015-01-15 at 13:54 +0800, HOANG Cong Duy Vu wrote:
Hi,
I am working on applying sparse features for a *phrase-based* system on
the *conversational* domain (e.g. SMS, Chat).
I used sparse features such as: TargetWordInsertionFeature,
SourceWordDeletionFeature, WordTranslationFeature, PhraseLengthFeature.
Sparse features are used only for the top source and target words.
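In case it helps others, here is roughly how such feature functions can
be declared in moses.ini. This is only a sketch: the word-list file names
(top.src, top.tgt) are made up, and the option names may differ slightly
between Moses versions, so compare against the sparse features page of
the Moses documentation:

  [feature]
  # word-level features restricted to the words listed in the
  # (hypothetical) top.src / top.tgt files
  TargetWordInsertionFeature factor=0 path=top.tgt
  SourceWordDeletionFeature factor=0 path=top.src
  WordTranslationFeature input-factor=0 output-factor=0 simple=1 source-context=0 target-context=0 source-path=top.src target-path=top.tgt
  PhraseLengthFeature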