Re: [Moses-support] Segmentation fault (core dumped) while translating

2019-12-10 Thread Hieu Hoang
hi there

the input sentence can't contain raw < or > characters.

If you had tokenized the input with the Moses script

     /scripts/tokenizer/tokenizer.perl

then those characters would have been converted to &lt; and &gt;.

If you used your own tokenizer, be sure to run

    /scripts/tokenizer/escape-special-chars.perl

on the text before feeding it to the Moses decoder.
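For example, a minimal sketch (the file names are placeholders and the script 
paths assume a standard Moses checkout):

    # tokenizer.perl escapes <, >, &, | etc. by default
    ~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l nl < input.raw > input.tok

    # if the text was tokenized by another tool, apply only the escaping step
    ~/mosesdecoder/scripts/tokenizer/escape-special-chars.perl < input.tok > input.esc

    ~/mosesdecoder/bin/moses -config model/moses.ini -input-file input.esc > output.txt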

And of course, you should process your training data the same way you 
process your test and tuning data.

On 12/10/2019 2:13 AM, Claudia Matos Veliz wrote:
> Hello everyone
>
> I’m getting a very strange error while translating using Moses. I’m 
> doing 10-fold CV experiments and in one of the folds I get a Segmentation 
> Fault error. For the rest, everything finishes without problems. I hope 
> you can help me with that. Below you can see the log. I get the error 
> on the first line of the file I want to translate. I’ve tried removing 
> the line and I got the same error.
>
> $ /opt/mosesdecoder-4.0-prebuilt/moses/bin/moses -config 
> ../model/moses.ini -input-file token.all.fold2.test.clean.ori > 
> token.all.fold2.test.norm
> Defined parameters (per moses.ini or switch):
> config: ../model/moses.ini
> distortion-limit: 6
> feature: UnknownWordPenalty WordPenalty PhrasePenalty 
> PhraseDictionaryOnDisk name=TranslationModel0 num-features=4 
> path=/home/claudia/SMT_ext_data/data/mixed/amica/1folds/2/model/phrase-table.table
>  
> input-factor=0 output-factor=0 LexicalReordering 
> name=LexicalReordering0 num-features=6 
> type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0 
> path=/home/claudia/SMT_ext_data/data/mixed/amica/1folds/2/model/reordering-table.wbe-msd-bidirectional-fe.gz
>  
> Distortion KENLM name=LM0 factor=0 
> path=/home/claudia/SMT_ext_data/data/lm/lm/sms.cgn.token.lm order=5
> input-factors: 0
> input-file: token.all.fold2.test.clean.ori
> mapping: 0 T 0
> weight: UnknownWordPenalty0= 1 WordPenalty0= -1 PhrasePenalty0= 0.2 
> TranslationModel0= 0.2 0.2 0.2 0.2 LexicalReordering0= 0.3 0.3 0.3 0.3 
> 0.3 0.3 Distortion0= 0.3 LM0= 0.5
> line=UnknownWordPenalty
> FeatureFunction: UnknownWordPenalty0 start: 0 end: 0
> line=WordPenalty
> FeatureFunction: WordPenalty0 start: 1 end: 1
> line=PhrasePenalty
> FeatureFunction: PhrasePenalty0 start: 2 end: 2
> line=PhraseDictionaryOnDisk name=TranslationModel0 num-features=4 
> path=/home/claudia/SMT_ext_data/data/mixed/amica/1folds/2/model/phrase-table.table
>  
> input-factor=0 output-factor=0
> FeatureFunction: TranslationModel0 start: 3 end: 6
> line=LexicalReordering name=LexicalReordering0 num-features=6 
> type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0 
> path=/home/claudia/SMT_ext_data/data/mixed/amica/1folds/2/model/reordering-table.wbe-msd-bidirectional-fe.gz
> Initializing Lexical Reordering Feature..
> FeatureFunction: LexicalReordering0 start: 7 end: 12
> line=Distortion
> FeatureFunction: Distortion0 start: 13 end: 13
> line=KENLM name=LM0 factor=0 
> path=/home/claudia/SMT_ext_data/data/lm/lm/sms.cgn.token.lm order=5
> Loading the LM will be faster if you build a binary file.
> Reading /home/claudia/SMT_ext_data/data/lm/lm/sms.cgn.token.lm
> 5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
> 
> FeatureFunction: LM0 start: 14 end: 14
> Loading UnknownWordPenalty0
> Loading WordPenalty0
> Loading PhrasePenalty0
> Loading LexicalReordering0
> Loading table into memory...done.
> Loading Distortion0
> Loading LM0
> Loading TranslationModel0
> Created input-output object : [2.910] seconds
> Translating:  ik vind die zo mooii . ik hou van je 
> Segmentation fault (core dumped)
>
>
>
>
>
>
>
> ___
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support

-- 
Hieu Hoang
http://statmt.org/hieu

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation Fault (Core Dumped)

2016-01-07 Thread gozde gul
Dear Rajeen,

Thanks a lot for your help. When I change t2 from 0-0 to 0-0,1,2 as you
suggested, I don't get a segmentation fault anymore. I tried reading the
manual but I don't really understand why. Can you briefly explain how the
language models are used during decoding?

I thought that when I define my translation factors as
1-1+2-2+0-0
language model 1 (lemmaLM) would be used for 1-1, language model 2 (postagLM)
for 2-2, and language model 0 (surLM) would be sufficient for 0-0. Why do I
need to generate the other factors when translating surface forms? I don't
really get it.

Thanks a lot for your help,

Gözde
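
A rough sketch of the difference (a reconstruction for illustration; factor
indices follow the --lm lines of the training command quoted further down,
and only the last translation step changes):

    # failing: decoding path t2 produces only factor 0 (surface form), so the
    # lemma LM (factor 1) and the POS-tag LM (factor 2) have nothing to score
    --translation-factors 1-1+2-2+0-0
    --decoding-steps t2:t0,t1,g0

    # working: every factor scored by an LM is generated on that path
    --translation-factors 1-1+2-2+0-0,1,2
    --decoding-steps t2:t0,t1,g0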


On Mon, Dec 28, 2015 at 7:29 PM, Rajen Chatterjee <
rajen.k.chatter...@gmail.com> wrote:

> Hi Gozde,
>
> Apart from the possible memory problem, another problem can be that your
> mapping does not generate the LM factors.
> I think that for "t2" you should have 0-0,1,2 instead of 0-0 (since you are
> using 3 LMs, one for each factor), and similar changes will be required for
> the other decoding path.
>
> On Mon, Dec 28, 2015 at 1:30 PM, Hieu Hoang  wrote:
>
>> you should start with simple factored models first, perhaps using only 1
>> translation model with 2 factors. Then move on to 1 translation model and 1
>> generation model.
>>
>> The factored models you are using are difficult to control; they use a lot
>> of memory and take a lot of time. You may be getting errors because the
>> decoder runs out of memory.
>>
>>
>> On 28/12/15 10:01, gozde gul wrote:
>>
>> Hi,
>>
>> I am trying to perform a 3 factored translation from English to Turkish.
>> My example input is as follows:
>>
>> En: Life+NNP|Life|NNP end+VBZ_never+RB|end|VBZ_never+RB but+CC|but|CC
>> earthly+JJ|earthly|JJ life+NN|life|NN do+VBZ|do|VBZ .+.|.|.
>> Tr:  Hayat|hayat|+Noun+A3sg+Pnon+Nom hiç|hiç|+Adverb
>> bitmez|bit|+Verb+Neg+Aor+A3sg fakat|fakat|+Conj
>> dünyadaki|dünya|+Noun+A3sg+Pnon+Loc^DB+Adj+Rel
>> hayat|hayat|+Noun+A3sg+Pnon+Nom biter|bit|+Verb+Pos+Aor+A3sg .|.|+Punc
>>
>> My translation and generation factors and decoding steps are as follows.
>> I am pretty sure they are correct:
>> --translation-factors 1-1+2-2+0-0 \
>> --generation-factors 1,2-0 \
>> --decoding-steps t2:t0,t1,g0
>>
>> I have created language models for all 3 factors with IRSTLM, following
>> the steps explained on the Moses website.
>>
>> If I train with the following model, it creates a moses.ini. When I
>> manually check the phrase tables and generation table they look meaningful.
>> ~/mosesdecoder/scripts/training/train-model.perl \
>> --parallel --mgiza \
>> --external-bin-dir ~/workspace/bin/training-tools/mgizapp \
>> --root-dir ~/FactoredModel/SmallModel/  \
>> --corpus
>> ~/FactoredModel/SmallModel/factored-corpus/training/korpus_1000K.en-tr.KO.recleaned_new
>> \
>> --f en --e tr --alignment grow-diag-final-and \
>> --reordering msd-bidirectional-fe \
>> --lm 0:3:$HOME/corpus/FilteredCorpus/training/lm/surLM/sur.lm.blm.tr:8 \
>> --lm 1:3:$HOME/corpus/FilteredCorpus/training/lm/lemmaLM/
>> lemma.lm.blm.tr:8 \
>> --lm 2:3:$HOME/corpus/FilteredCorpus/training/lm/postagLM/
>> postags.lm.blm.tr:8 \
>> --alignment-factors 1-1 \
>> --translation-factors 1-1+2-2+0-0 \
>> --generation-factors 1,2-0 \
>> --decoding-steps t2:t0,t1,g0 >& ~/FactoredModel/trainingSmall3lm.out
>>
>> But when I try to decode a very simple one-line sentence, I get a
>> "Segmentation fault (Core Dumped)" message. You can see the detailed
>> decoding log here. I tried many things and I'm at a dead end, so I would
>> really appreciate your help.
>>
>> Thanks,
>>
>> Gozde
>>
>>
>>
>> ___
>> Moses-support mailing list
>> Moses-support@mit.edu
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>>
>> --
>> Hieu Hoang
>> http://www.hoang.co.uk/hieu
>>
>>
>> ___
>> Moses-support mailing list
>> Moses-support@mit.edu
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>>
>
>
> --
> -Regards,
>  Rajen Chatterjee.
>
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation Fault (Core Dumped)

2015-12-28 Thread Hieu Hoang
you should start with simple factored models first, perhaps using only 1 
translation model with 2 factors. Then move on to 1 translation model and 
1 generation model.

The factored models you are using are difficult to control; they use a lot 
of memory and take a lot of time. You may be getting errors because the 
decoder runs out of memory.
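
A rough sketch of such a starting point (an assumed simplification, not the
configuration used in this thread; factor 0 = surface form, factor 1 = lemma,
as in the corpus below):

    # a single translation table over two factors, no generation step
    --alignment-factors 0-0 \
    --translation-factors 0,1-0,1 \
    --decoding-steps t0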


On 28/12/15 10:01, gozde gul wrote:

Hi,

I am trying to perform a 3 factored translation from English to 
Turkish. My example input is as follows:


En: Life+NNP|Life|NNP end+VBZ_never+RB|end|VBZ_never+RB but+CC|but|CC 
earthly+JJ|earthly|JJ life+NN|life|NN do+VBZ|do|VBZ .+.|.|.
Tr:  Hayat|hayat|+Noun+A3sg+Pnon+Nom hiç|hiç|+Adverb 
bitmez|bit|+Verb+Neg+Aor+A3sg fakat|fakat|+Conj 
dünyadaki|dünya|+Noun+A3sg+Pnon+Loc^DB+Adj+Rel 
hayat|hayat|+Noun+A3sg+Pnon+Nom biter|bit|+Verb+Pos+Aor+A3sg .|.|+Punc


My translation and generation factors and decoding steps are as 
follows. I am pretty sure they are correct:

--translation-factors 1-1+2-2+0-0 \
--generation-factors 1,2-0 \
--decoding-steps t2:t0,t1,g0

I have created language models for all 3 factors with IRSTLM, following the 
steps explained on the Moses website.


If I train with the following model, it creates a moses.ini. When I 
manually check the phrase tables and generation table they look 
meaningful.

~/mosesdecoder/scripts/training/train-model.perl \
--parallel --mgiza \
--external-bin-dir ~/workspace/bin/training-tools/mgizapp \
--root-dir ~/FactoredModel/SmallModel/  \
--corpus 
~/FactoredModel/SmallModel/factored-corpus/training/korpus_1000K.en-tr.KO.recleaned_new 
\

--f en --e tr --alignment grow-diag-final-and \
--reordering msd-bidirectional-fe \
--lm 0:3:$HOME/corpus/FilteredCorpus/training/lm/surLM/sur.lm.blm.tr:8 
 \
--lm 
1:3:$HOME/corpus/FilteredCorpus/training/lm/lemmaLM/lemma.lm.blm.tr:8 
 \
--lm 
2:3:$HOME/corpus/FilteredCorpus/training/lm/postagLM/postags.lm.blm.tr:8 
 \

--alignment-factors 1-1 \
--translation-factors 1-1+2-2+0-0 \
--generation-factors 1,2-0 \
--decoding-steps t2:t0,t1,g0 >& ~/FactoredModel/trainingSmall3lm.out

But when I try to decode a very simple one-line sentence, I get a 
"Segmentation fault (Core Dumped)" message. You can see the detailed 
decoding log here. I tried many things and I'm at a dead end, so I would 
really appreciate your help.


Thanks,

Gozde



___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


--
Hieu Hoang
http://www.hoang.co.uk/hieu

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation Fault (Core Dumped)

2015-12-28 Thread Rajen Chatterjee
Hi Gozde,

Apart from the possible memory problem, another problem can be that your
mapping does not generate the LM factors.
I think that for "t2" you should have 0-0,1,2 instead of 0-0 (since you are
using 3 LMs, one for each factor), and similar changes will be required for
the other decoding path.

On Mon, Dec 28, 2015 at 1:30 PM, Hieu Hoang  wrote:

> you should start with simple factored models first, perhaps using only 1
> translation model with 2 factors. Then move on to 1 translation model and 1
> generation model.
>
> The factored models you are using are difficult to control; they use a lot
> of memory and take a lot of time. You may be getting errors because the
> decoder runs out of memory.
>
>
> On 28/12/15 10:01, gozde gul wrote:
>
> Hi,
>
> I am trying to perform a 3 factored translation from English to Turkish.
> My example input is as follows:
>
> En: Life+NNP|Life|NNP end+VBZ_never+RB|end|VBZ_never+RB but+CC|but|CC
> earthly+JJ|earthly|JJ life+NN|life|NN do+VBZ|do|VBZ .+.|.|.
> Tr:  Hayat|hayat|+Noun+A3sg+Pnon+Nom hiç|hiç|+Adverb
> bitmez|bit|+Verb+Neg+Aor+A3sg fakat|fakat|+Conj
> dünyadaki|dünya|+Noun+A3sg+Pnon+Loc^DB+Adj+Rel
> hayat|hayat|+Noun+A3sg+Pnon+Nom biter|bit|+Verb+Pos+Aor+A3sg .|.|+Punc
>
> My translation and generation factors and decoding steps are as follows. I
> am pretty sure they are correct:
> --translation-factors 1-1+2-2+0-0 \
> --generation-factors 1,2-0 \
> --decoding-steps t2:t0,t1,g0
>
> I have created language models for all 3 factors with IRSTLM, following
> the steps explained on the Moses website.
>
> If I train with the following model, it creates a moses.ini. When I
> manually check the phrase tables and generation table they look meaningful.
> ~/mosesdecoder/scripts/training/train-model.perl \
> --parallel --mgiza \
> --external-bin-dir ~/workspace/bin/training-tools/mgizapp \
> --root-dir ~/FactoredModel/SmallModel/  \
> --corpus
> ~/FactoredModel/SmallModel/factored-corpus/training/korpus_1000K.en-tr.KO.recleaned_new
> \
> --f en --e tr --alignment grow-diag-final-and \
> --reordering msd-bidirectional-fe \
> --lm 0:3:$HOME/corpus/FilteredCorpus/training/lm/surLM/sur.lm.blm.tr:8 \
> --lm 1:3:$HOME/corpus/FilteredCorpus/training/lm/lemmaLM/lemma.lm.blm.tr:8
>  \
> --lm 2:3:$HOME/corpus/FilteredCorpus/training/lm/postagLM/
> postags.lm.blm.tr:8 \
> --alignment-factors 1-1 \
> --translation-factors 1-1+2-2+0-0 \
> --generation-factors 1,2-0 \
> --decoding-steps t2:t0,t1,g0 >& ~/FactoredModel/trainingSmall3lm.out
>
> But when I try to decode a very simple one-line sentence, I get a
> "Segmentation fault (Core Dumped)" message. You can see the detailed
> decoding log here. I tried many things and I'm at a dead end, so I would
> really appreciate your help.
>
> Thanks,
>
> Gozde
>
>
>
> ___
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
> --
> Hieu Hoang
> http://www.hoang.co.uk/hieu
>
>
> ___
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>


-- 
-Regards,
 Rajen Chatterjee.
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation fault (core dumped)

2014-03-24 Thread Hieu Hoang
It's quite difficult to know without some additional information.
  1. What OS are you running on? 32 or 64 bit?
  2. What is the exact command you ran when you got the error?
  3. Is your training data encoded in UTF8?
  4. Are you sure the source and target side of your training corpus have
exactly the same number of sentences?
  5. Did you clean your data with the Moses script
scripts/training/clean-corpus-n.perl?
  This gets rid of lines which are too long, double spaces, etc.
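
For points 4 and 5, something along these lines (a sketch; the corpus file
names are placeholders):

    # both sides must have exactly the same number of lines
    wc -l corpus.src corpus.tgt

    # drop empty, over-long (here >80 tokens) and badly length-ratioed pairs
    ~/mosesdecoder/scripts/training/clean-corpus-n.perl corpus src tgt corpus.clean 1 80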



On 23 March 2014 12:14, Arezoo Arjomand arezooarjom...@yahoo.com wrote:


 I've run GIZA++ on Cygwin, but it crashed with a "Segmentation fault
 (core dumped)" error. I can't find any config file. Please help me, how can
 I fix it?
 --
 Best Regards
 Arezoo Arjomandzadeh
 MSc student in Artificial Intelligence
 Computer & IT engineering
 Shahrood University of Technology, Iran


 ___
 Moses-support mailing list
 Moses-support@mit.edu
 http://mailman.mit.edu/mailman/listinfo/moses-support




-- 
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation fault (core dumped) in moses_chart during TUNING for tree-to-tree models

2013-03-25 Thread Kenneth Heafield
Hi,

There's not much we can do with the information you have provided.  Can 
you run the command again with gdb and post a backtrace?  It will be 
more helpful if you use the release or the current version.
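
Something like the following, for example (a sketch only; the decoder
arguments are abridged from the failing command quoted below):

    gdb --args ~/Work/moses/mosesdecoder/bin/moses_chart \
        -config .../tuning/moses.filtered.ini.1 -inputtype 0 \
        -n-best-list run1.best100.out 100 -input-file input.tc.1.split27625-aa
    (gdb) run
    (gdb) backtrace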

Kenneth

On 03/25/13 10:47, Guchun Zhang wrote:
 Hi,

 Hope someone can give me some clues for this issue. Basically, tuning
 dies in the first run and the error is

 /var/spool/gridengine/execd/ubuntu11/job_scripts/82585: line 9: 23008
 Segmentation fault  (core dumped)
 /home/guchun/Work/moses/mosesdecoder/bin/moses_chart -w -0.285714 -lm
 0.142857 -tm 0.057143 0.057143 0.057143 0.057143 0.057143 0.285714
 -config
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/moses.filtered.ini.1
 -inputtype 0 -n-best-list
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/run1.best100.out.split27625-aa
 100 -input-file input.tc.1.split27625-aa 
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/input.tc.1.split27625-aa.trans

 The language pair is DE-EN. I am using Berkeley Parser for both
 languages. The Moses version is about 6 months old. If I omit
 n-best-list and inputtype, the decoder runs OK. But since
 n-best-list is essential to tuning, what can I do to solve this issue?

 Many thanks,
 Guchun


 ___
 Moses-support mailing list
 Moses-support@mit.edu
 http://mailman.mit.edu/mailman/listinfo/moses-support

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation fault (core dumped) in moses_chart during TUNING for tree-to-tree models

2013-03-25 Thread Guchun Zhang
Many thanks, Kenneth.

Here it is.

...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x76ae1700 (LWP 11397)]
Moses::ChartManager::CalcNBest (this=<optimised out>, count=100, ret=...,
onlyDistinct=false) at moses/src/ChartManager.cpp:217
217  const HypoList topHypos = *(lastCell.GetSortedHypotheses(*w));
(gdb) backtrace
#0  Moses::ChartManager::CalcNBest (this=<optimised out>, count=100,
ret=..., onlyDistinct=false) at moses/src/ChartManager.cpp:217
#1  0x0041027b in TranslationTask::Run (this=0x2a67f50) at
moses-chart-cmd/src/Main.cpp:122
#2  0x0052e5b1 in Moses::ThreadPool::Execute (this=0x7fffd960)
at moses/src/ThreadPool.cpp:58
#3  0x0068d034 in thread_proxy ()
#4  0x771aae9a in start_thread (arg=0x76ae1700) at
pthread_create.c:308
#5  0x76ed7cbd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#6  0x in ?? ()

It seems something is pointing at address 0. Hope the info helps you (and me).

Regards,
Guchun

On 25 March 2013 10:47, Guchun Zhang gzh...@alphacrc.com wrote:

 Hi,

 Hope someone can give me some clues for this issue. Basically, tuning dies
 in the first run and the error is

 /var/spool/gridengine/execd/ubuntu11/job_scripts/82585: line 9: 23008
 Segmentation fault  (core dumped)
 /home/guchun/Work/moses/mosesdecoder/bin/moses_chart -w -0.285714 -lm
 0.142857 -tm 0.057143 0.057143 0.057143 0.057143 0.057143 0.285714 -config
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/moses.filtered.ini.1
 -inputtype 0 -n-best-list
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/run1.best100.out.split27625-aa
 100 -input-file input.tc.1.split27625-aa 
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/input.tc.1.split27625-aa.trans

 The language pair is DE-EN. I am using Berkeley Parser for both languages.
 The Moses version is about 6 months old. If I omit n-best-list and
 inputtype, the decoder runs OK. But since n-best-list is essential to
 tuning, what can I do to solve this issue?

 Many thanks,
 Guchun

___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation fault (core dumped) in moses_chart during TUNING for tree-to-tree models

2013-03-25 Thread Hieu Hoang
If you have parsed data for the source and target, you should change the
-inputtype from
  0 (SentenceInput)
to
  3 (TreeInputType)

It's quite difficult to debug this error. Can you make available a small
model and test data that replicate this problem?

Also, bear in mind tree-to-tree models will have coverage problems, so
they're unlikely to get competitive BLEU scores.
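
For example, the failing invocation quoted below with only that flag changed
(paths abridged; a sketch, not a tested command):

    ~/Work/moses/mosesdecoder/bin/moses_chart -w -0.285714 -lm 0.142857 \
        -tm 0.057143 0.057143 0.057143 0.057143 0.057143 0.285714 \
        -config .../tuning/moses.filtered.ini.1 \
        -inputtype 3 \
        -n-best-list run1.best100.out.split27625-aa 100 \
        -input-file input.tc.1.split27625-aa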
On 25 March 2013 10:47, Guchun Zhang gzh...@alphacrc.com wrote:

 Hi,

 Hope someone can give me some clues for this issue. Basically, tuning dies
 in the first run and the error is

 /var/spool/gridengine/execd/ubuntu11/job_scripts/82585: line 9: 23008
 Segmentation fault  (core dumped)
 /home/guchun/Work/moses/mosesdecoder/bin/moses_chart -w -0.285714 -lm
 0.142857 -tm 0.057143 0.057143 0.057143 0.057143 0.057143 0.285714 -config
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/moses.filtered.ini.1
 -inputtype 0 -n-best-list
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/run1.best100.out.split27625-aa
 100 -input-file input.tc.1.split27625-aa 
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/input.tc.1.split27625-aa.trans

 The language pair is DE-EN. I am using Berkeley Parser for both languages.
 The Moses version is about 6 months old. If I omit n-best-list and
 inputtype, the decoder runs OK. But since n-best-list is essential to
 tuning, what can I do to solve this issue?

 Many thanks,
 Guchun

 ___
 Moses-support mailing list
 Moses-support@mit.edu
 http://mailman.mit.edu/mailman/listinfo/moses-support




-- 
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


Re: [Moses-support] Segmentation fault (core dumped) in moses_chart during TUNING for tree-to-tree models

2013-03-25 Thread Guchun Zhang
Thanks for the advice, Hieu. I will prepare a sample model later.

Cheers,
Guchun

On 25 March 2013 17:33, Hieu Hoang hieu.ho...@ed.ac.uk wrote:

 If you have parsed data for the source and target, you should change the 
 -inputtype
 from
   0 (SentenceInput)
 to
   3 (TreeInputType)
 it's quite difficult to debug this error. Can you make available a small
 model and test data that replicate this problem?

 also, bear in mind tree-to-tree models will have coverage problems so
 they're unlikely to get competitive bleu scores.
 On 25 March 2013 10:47, Guchun Zhang gzh...@alphacrc.com wrote:

 Hi,

 Hope someone can give me some clues for this issue. Basically, tuning
 dies in the first run and the error is

 /var/spool/gridengine/execd/ubuntu11/job_scripts/82585: line 9: 23008
 Segmentation fault  (core dumped)
 /home/guchun/Work/moses/mosesdecoder/bin/moses_chart -w -0.285714 -lm
 0.142857 -tm 0.057143 0.057143 0.057143 0.057143 0.057143 0.285714 -config
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/moses.filtered.ini.1
 -inputtype 0 -n-best-list
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/run1.best100.out.split27625-aa
 100 -input-file input.tc.1.split27625-aa 
 /home/guchun/Work/moses/tests/exp-german/model/DE_DE-EN_US/tuning/tmp.1/tmp27625/input.tc.1.split27625-aa.trans

 The language pair is DE-EN. I am using Berkeley Parser for both
 languages. The Moses version is about 6 months old. If I omit
 n-best-list and inputtype, the decoder runs OK. But since n-best-list
 is essential to tuning, what can I do to solve this issue?

 Many thanks,
 Guchun

 ___
 Moses-support mailing list
 Moses-support@mit.edu
 http://mailman.mit.edu/mailman/listinfo/moses-support




 --
 Hieu Hoang
 Research Associate
 University of Edinburgh
 http://www.hoang.co.uk/hieu


___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support