Hi,

I am not familiar with that, but somewhat related is
Arne Mauser's global lexical model, which also exists
as a secret feature in Moses (secret because no
efficient training procedure exists for it):

Citation:
A. Mauser, S. Hasan, and H. Ney. Extending Statistical Machine
Translation with Discriminative and Trigger-Based Lexicon Models. In
Conference on Empirical Methods in Natural Language Processing
(EMNLP), Singapore, August 2009.
http://www-i6.informatik.rwth-aachen.de/publications/download/628/MauserArneHasanSav%7Bs%7DaNeyHermann--ExtendingStatisticalMachineTranslationwithDiscriminativeTrigger-BasedLexiconModels--2009.pdf

-phi


On Fri, Oct 22, 2010 at 7:02 PM, Francis Tyers <fty...@prompsit.com> wrote:
> Hello all,
>
> I have a rather strange request. Does anyone know of any papers (or
> implementations) on bag-of-words language models? That is, a language
> model which does not take into account the order in which the words
> appear in an n-gram, so if you have the string 'police chief of' in your
> model, you will get a result for both 'chief of police' and 'police
> chief of'. I have thought of using IRSTLM or some generic model and
> scoring all the permutations, but wondered if a more efficient
> implementation already exists. I have searched without much luck
> on Google, but perhaps I am searching with the wrong words.
>
> Best regards,
>
> Fran
>
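
For what it's worth, a rough Python sketch of the permutation-scoring
idea in the quote above: treat the n-gram as a bag, look every ordering
up in an ordinary (order-sensitive) n-gram table, and keep the best
score. The toy probability table and the score_bow_ngram helper are
made-up names for illustration, not any existing toolkit's API.

from itertools import permutations

# Toy order-sensitive trigram log10 probabilities, purely illustrative;
# in practice these lookups would go to a real LM (e.g. via IRSTLM).
NGRAM_LOGPROB = {
    ("chief", "of", "police"): -1.2,
    ("police", "chief", "of"): -2.7,
}

def score_bow_ngram(words, table=NGRAM_LOGPROB, floor=-9.0):
    """Best log-probability over all orderings of `words`, so
    'police chief of' and 'chief of police' get the same score."""
    scores = [table[p] for p in permutations(words) if p in table]
    return max(scores) if scores else floor

print(score_bow_ngram(("police", "chief", "of")))   # -1.2
print(score_bow_ngram(("of", "police", "chief")))   # -1.2

That is n! lookups per n-gram, which is fine for trigrams (6 orderings);
storing each n-gram under a sorted (canonical) key at training time would
avoid the permutation loop altogether.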
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
