Hi Lakshya

There has been very little change in the Moses server between those two releases.

Try running the following before you launch the Moses server:

export XMLRPC_TRACE_XML=1

You should then get a dump of the XML-RPC messages, which may help you
debug the problem.
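
If you want to rule out the client side, you can also poke the server
directly with a small XML-RPC client. Here is a minimal sketch in Python
(assuming the usual mosesserver "translate" method that takes a struct with
a "text" field; the host, port and sample sentence are placeholders, so
adjust them to whatever you passed to --server-port):

    import xmlrpc.client

    # Point this at the host/port mosesserver was started on.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8080/RPC2")

    try:
        # mosesserver expects a struct whose "text" field holds the source.
        result = proxy.translate({"text": "das ist ein kleines haus"})
        print(result["text"])
    except ConnectionRefusedError:
        # Same "Connection refused" you are seeing: nothing is listening
        # on that host/port (the server died, or it is on a different port).
        print("connection refused - is mosesserver running on this port?")
    except xmlrpc.client.Fault as fault:
        # The server is up, but it rejected the call.
        print("server reachable, but the call failed:", fault)

If this test also gets "connection refused", the server itself has most
likely crashed or is listening on a different port, rather than anything
being wrong on the client side.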

cheers - Barry

On 09/04/14 20:47, Lakshya wrote:
> Hi Everybody,
>
> I am also facing a problem with the Mosesdecoder v2.1.1 mosesserver. I have
> compiled mosesserver without any error, and it is listening on its port.
> But when a translation request is sent from the interface, there is no
> response from the mosesserver.
>
> I am getting the following exception:
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response:
> Connection refused
>
> Is there any difference in the mosesserver connection between Mosesdecoder
> Release 1.0 and Mosesdecoder v2.1.1?
>
> Could anybody please clarify these doubts and explain how I can establish
> the mosesserver connection?
>
>
> Regards
> Lakshya
>
> ---------- Forwarded message ----------
> From: <moses-support-requ...@mit.edu>
> Date: Tue, Apr 8, 2014 at 3:57 AM
> Subject: Moses-support Digest, Vol 90, Issue 19
> To: moses-support@mit.edu
>
>
> Today's Topics:
>
>    1. moses server segmentation fault (core dumped) (kamel nebhi)
>    2. Re: Monolingual Word alignment (Philipp Koehn)
>    3. Call for Participation: Automatic and Manual Metrics for
>       Operational Translation Evaluation (Lucia Specia)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 7 Apr 2014 18:25:35 +0100
> From: kamel nebhi <k.ne...@sheffield.ac.uk>
> Subject: [Moses-support] moses server segmentation fault (core dumped)
> To: moses-support <moses-support@mit.edu>
> Message-ID: <CAG66Y3c2UFq5+2w4eV00RrwVrMWTtVxpQ7=jgxfwng+afnr...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> I am trying to install mosesserver on localhost. I have installed XML-RPC
> and rebuilt Moses with no problem.
>
> Next I use this command to run the server:
> ~/mosesdecoder/bin/mosesserver -f working/model/moses.ini --server-port 8999
>
> But it fails with this message:
>
> Defined parameters (per moses.ini or switch):
>  config: /home/kamelnebhi/recaser/training/moses.ini
> distortion-limit: 6
> feature: UnknownWordPenalty WordPenalty PhrasePenalty
> PhraseDictionaryMemory name=TranslationModel0 table-limit=20 
> num-features=4
> path=/home/kamelnebhi/recaser/training/phrase-table.gz input-factor=0
> output-factor=0 Distortion KENLM lazyken=0 name=LM0 factor=0
> path=/home/kamelnebhi/recaser/training//cased.srilm.gz order=3
>  input-factors: 0
> mapping: 0 T 0
> weight: UnknownWordPenalty0= 1 WordPenalty0= -1 PhrasePenalty0= 0.2
> TranslationModel0= 0.2 0.2 0.2 0.2 Distortion0= 0.3 LM0= 0.5
> /home/kamelnebhi/mosesdecoder/bin
> line=UnknownWordPenalty
> FeatureFunction: UnknownWordPenalty0 start: 0 end: 0
> line=WordPenalty
> FeatureFunction: WordPenalty0 start: 1 end: 1
> line=PhrasePenalty
> FeatureFunction: PhrasePenalty0 start: 2 end: 2
> line=PhraseDictionaryMemory name=TranslationModel0 table-limit=20
> num-features=4 path=/home/kamelnebhi/recaser/training/phrase-table.gz
> input-factor=0 output-factor=0
> FeatureFunction: TranslationModel0 start: 3 end: 6
> line=Distortion
> FeatureFunction: Distortion0 start: 7 end: 7
> line=KENLM lazyken=0 name=LM0 factor=0
> path=/home/kamelnebhi/recaser/training//cased.srilm.gz order=3
> FeatureFunction: LM0 start: 8 end: 8
> Loading the LM will be faster if you build a binary file.
> Reading /home/kamelnebhi/recaser/training//cased.srilm.gz
> ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
> *The ARPA file is missing <unk>.  Substituting log10 probability -100.
> ***************************************************************************************************
> Loading UnknownWordPenalty0
> Loading WordPenalty0
> Loading PhrasePenalty0
> Loading Distortion0
> Loading LM0
> Loading TranslationModel0
> Start loading text SCFG phrase table. Moses  format : [3.69361] seconds
> Reading /home/kamelnebhi/recaser/training/phrase-table.gz
> ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
> ****************************************************************************************************
> Segmentation fault (core dumped)
> root@kamelnebhi-MacBookPro:/home/kamelnebhi#
> /home/kamelnebhi/mosesdecoder/bin/mosesserver -f
> /home/kamelnebhi/recaser/training/moses.ini  --server-port 80
> Defined parameters (per moses.ini or switch):
>  config: /home/kamelnebhi/recaser/training/moses.ini
> distortion-limit: 6
> feature: UnknownWordPenalty WordPenalty PhrasePenalty
> PhraseDictionaryMemory name=TranslationModel0 table-limit=20 
> num-features=4
> path=/home/kamelnebhi/recaser/training/phrase-table.gz input-factor=0
> output-factor=0 Distortion KENLM lazyken=0 name=LM0 factor=0
> path=/home/kamelnebhi/recaser/training//cased.srilm.gz order=3
>  input-factors: 0
> mapping: 0 T 0
> weight: UnknownWordPenalty0= 1 WordPenalty0= -1 PhrasePenalty0= 0.2
> TranslationModel0= 0.2 0.2 0.2 0.2 Distortion0= 0.3 LM0= 0.5
> /home/kamelnebhi/mosesdecoder/bin
> line=UnknownWordPenalty
> FeatureFunction: UnknownWordPenalty0 start: 0 end: 0
> line=WordPenalty
> FeatureFunction: WordPenalty0 start: 1 end: 1
> line=PhrasePenalty
> FeatureFunction: PhrasePenalty0 start: 2 end: 2
> line=PhraseDictionaryMemory name=TranslationModel0 table-limit=20
> num-features=4 path=/home/kamelnebhi/recaser/training/phrase-table.gz
> input-factor=0 output-factor=0
> FeatureFunction: TranslationModel0 start: 3 end: 6
> line=Distortion
> FeatureFunction: Distortion0 start: 7 end: 7
> line=KENLM lazyken=0 name=LM0 factor=0
> path=/home/kamelnebhi/recaser/training//cased.srilm.gz order=3
> FeatureFunction: LM0 start: 8 end: 8
> Loading the LM will be faster if you build a binary file.
> Reading /home/kamelnebhi/recaser/training//cased.srilm.gz
> ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
> *The ARPA file is missing <unk>.  Substituting log10 probability -100.
> ***************************************************************************************************
> Loading UnknownWordPenalty0
> Loading WordPenalty0
> Loading PhrasePenalty0
> Loading Distortion0
> Loading LM0
> Loading TranslationModel0
> Start loading text SCFG phrase table. Moses  format : [3.69152] seconds
> Reading /home/kamelnebhi/recaser/training/phrase-table.gz
> ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
> ****************************************************************************************************
> Segmentation fault (core dumped)
>
> Thanks for your help
>
> ------------------------------
>
> Message: 2
> Date: Mon, 7 Apr 2014 16:00:47 -0400
> From: Philipp Koehn <pko...@inf.ed.ac.uk>
> Subject: Re: [Moses-support] Monolingual Word alignment
> To: Mostafa Dehghani <dehghani.most...@gmail.com>
> Cc: "moses-support@mit.edu" <moses-support@mit.edu>
> Message-ID: <CAAFADDBXoZv7=u5rqjay3brfwob92ywkrvfekdz6yrx29m7...@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> this outcome is not that surprising to me.
>
> If you align identical sentences, then translating each word to itself
> is a pretty good model.
>
> Since your goal is paraphrasing words into synonyms, you should
> rather use methods such as the one proposed by Bannard and
> Callison-Burch: http://acl.ldc.upenn.edu/P/P05/P05-1074.pdf
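>
> The pivot idea there is simple: score a paraphrase pair through the foreign
> translations it shares, p(e2|e1) = sum over f of p(e2|f) * p(f|e1). A rough
> sketch of that computation in Python (the phrase-table entries below are
> made-up toy values, only to show the mechanics):
>
>     from collections import defaultdict
>
>     # p(f|e) and p(e|f) as read off a bilingual phrase table -- toy numbers.
>     p_f_given_e = {"small": {"petit": 0.75, "peu": 0.25}}
>     p_e_given_f = {"petit": {"small": 0.5, "little": 0.5},
>                    "peu": {"small": 0.5, "few": 0.5}}
>
>     def paraphrase_probs(e1):
>         """p(e2|e1) = sum over pivot phrases f of p(e2|f) * p(f|e1)."""
>         scores = defaultdict(float)
>         for f, p_fe in p_f_given_e[e1].items():
>             for e2, p_ef in p_e_given_f[f].items():
>                 if e2 != e1:
>                     scores[e2] += p_ef * p_fe
>         return dict(scores)
>
>     print(paraphrase_probs("small"))   # {'little': 0.375, 'few': 0.125}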
>
> -phi
>
> On Sun, Apr 6, 2014 at 11:11 AM, Mostafa Dehghani
> <dehghani.most...@gmail.com> wrote:
> > Dear all,
> >
> > I am working on a method for Multilingual Information Retrieval. In my
> > method I expand the text of each document by probabilistically translating
> > its words into other languages' words (interlingual expansion). However,
> > to satisfy some axioms, I also need to expand the text of each document in
> > its own language (intralingual expansion). So, besides bilingual word
> > alignments, I need a monolingual word alignment table (one that contains
> > the alignment of each word to the words that are related to or co-occur
> > with it). To do so, I used one side of each language's sentences and a
> > copy of it as the parallel corpus. Then I used the following command:
> >
> >
> > train-model.perl -root-dir train  -corpus corpus/fr-fr -f fr1 -e fr2
> > -alignment grow-diag-final-and -reordering msd-bidirectional-fe
> > -external-bin-dir externalbin -last-step 4
> >
> >
> > such that fr-fr.fr1 and fr-fr.fr2 are identical files containing French
> > sentences.
> > However, the f2e and e2f files I got contain only alignments of each
> > word to itself, with probability 1.
> > I am wondering whether there is any parameter I should set to obtain word
> > alignments (e2f/f2e) that are suitable for intralingual expansion.
> >
> > Regards,
> >
> > --
> > Mostafa
> > ,
> >
> > http://khorshid.ut.ac.ir/~m.dehghani
> >
> > _______________________________________________
> > Moses-support mailing list
> > Moses-support@mit.edu
> > http://mailman.mit.edu/mailman/listinfo/moses-support
> >
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 7 Apr 2014 23:27:07 +0100
> From: Lucia Specia <lspe...@gmail.com>
> Subject: [Moses-support] Call for Participation: Automatic and Manual
>         Metrics for Operational Translation Evaluation
> To: moses-support@mit.edu, wmt-ta...@googlegroups.com
> Message-ID: <caaleuxzvnsv0xp-uvxok0z8qwtkd-jt11ftoporho0ras9u...@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Dear all,
>
> This workshop may be relevant for those of you interested in MT evaluation
> metrics.
>
> ----
>
> Automatic and Manual Metrics for Operational Translation Evaluation
>
> http://mte2014.github.io/
>
> 26 May 2014
>
> Workshop at Language Resources and Evaluation Conference (LREC) 2014
>
> http://lrec2014.lrec-conf.org
>
> In brief:
>
> We invite you to join us for an interesting day of work (and play!) as we
> discuss metrics for machine translation quality assessment and participate
> in some hands-on task-based translation evaluation.
>
>  This workshop on Automatic and Manual Metrics for Operational Translation
> Evaluation (MTE 2014) will be a full-day LREC workshop to be held on
> Monday, May 26, 2014 in Reykjavik, Iceland. The format of MTE 2014 will be
> interactive and energizing:  a half-day of short presentations and
> discussion of recent work on machine translation quality assessment,
> followed by a half-day of hands-on collaborative work with MT metrics that
> show promise for the prediction of task suitability of MT output. The
> afternoon hands-on work will follow from the morning's presentations, with
> some of the hands-on exercises developed directly from the submissions to
> the workshop.
>
>  Details:
>
> While a significant body of work has been done by the machine translation
> (MT) research community towards the development and meta-evaluation of
> automatic metrics to assess overall MT quality, less attention has been
> dedicated to more operational evaluation metrics aimed at testing whether
> translations are adequate within a specific context: purpose, end-user,
> task, etc., and why the MT system fails in some cases. Both of these can
> benefit from some form of manual analysis. Most work in this area is
> limited to productivity tests (e.g. contrasting time for human translation
> and MT post-editing). A few initiatives consider more detailed metrics for
> the problem, which can also be used to understand and diagnose errors 
> in MT
> systems. These include the Multidimensional Quality Metrics (MQM) recently
> proposed by the EU F7 project QTLaunchPad, the TAUS Dynamic Quality
> Framework, and past projects such as FEMTI, EAGLES and ISLE. Some of these
> metrics are also applicable to human translation evaluation. A number of
> task-based metrics have also been proposed for applications such as topic
> ID / triage and reading comprehension. The purpose of this workshop is to
> bring together representatives from academia, industry and government
> institutions to discuss and assess metrics for manual and automatic 
> quality
> evaluation, with an eye toward how they might be leveraged or further
> developed into task-based metrics for more objective "fitness for purpose"
> assessment. We will also consider comparisons to well-established metrics
> for automatic evaluation such as BLEU, METEOR and others, including
> reference-less metrics for quality prediction. The workshop will benefit
> from datasets already collected and manually annotated for translation
> errors by the QTLaunchPad project (http://www.qt21.eu/launchpad/) and will
> cover concepts from many of the metrics proposed by participants in the
> half-day of hands-on tasks.
>
> Up-to-the-minute information and (most importantly) Registration:
>
> Additional details and schedule will be posted at the workshop website
> http://mte2014.github.io/ as they become available. Register to attend via
> the LREC registration site at http://lrec2014.lrec-conf.org/en/registration/.
>
>  We look forward to seeing you there!
>
>  The MTE 2014 Organizing Committee
>
> Keith J. Miller (MITRE)
>
> Lucia Specia (University of Sheffield)
>
> Kim Harris (GALA and text & form)
>
> Stacey Bailey  (MITRE)
>
> ------------------------------
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
> End of Moses-support Digest, Vol 90, Issue 19
> *********************************************
>


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
