Paper submissions:
Paper submission deadline: August 15th, 2020
Notification of acceptance: September 29th, 2020
Camera-ready deadline: October 10th, 2020
Online conference: November 19-20th, 2020
Barry Haddow
(On behalf of the organisers)
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336
some initial MT results.
https://arxiv.org/abs/2001.09907
Barry Haddow and Faheem Kirefu
--
IMPORTANT DATES
Paper submissions:
Paper submission deadline: May 17th, 2019
Notification of acceptance: June 7th, 2019
Camera-ready deadline: June 17th, 2019
Conference in Florence: August 1-2, 2019
Barry Haddow
(On behalf of the organisers)
encouraged to register with the mailing list
for further announcements
(https://groups.google.com/forum/#!forum/wmt-tasks)
For all tasks, participants will also be invited to submit a short
paper describing their system.
Best wishes
.
Barry Haddow
(On behalf of the organisers)
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
> pieces of 10K sentences each and loop
> over the files. I usually have bad experience with trying to translate
> large batches of text with moses.
>
> Is still trying to load the entire corpus into memory? It used to do that.
>
On 12.12.2017 at 10:16, Barry Haddow wrote:
>
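The chunk-and-loop workflow described above can be sketched with standard tools. A minimal demo, in which the chunk size and file names are illustrative and `tr` stands in for the actual moses invocation:

```shell
# Demo of translating a large input in 10K-line chunks rather than one batch.
# 'tr' is a placeholder for the real decoder call (e.g. moses -f moses.ini);
# the input here is generated stand-in data.
seq 1 25000 > input.tok            # stand-in for a large tokenised input
split -l 10000 -d input.tok chunk.
for f in chunk.??; do
  tr -d '\r' < "$f" > "$f.trans"   # placeholder: preserves lines unchanged
done
cat chunk.??.trans > output.tok    # chunks concatenate back in order
```

Because each chunk is translated to its own output file, a failure partway through only costs one chunk, and the outputs concatenate back in order.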
Hi Liling
The short answer is you need to prune/filter your phrase table
prior to creating the compact phrase table. I don't mean "filter model
given input", because that won't make much difference if you have a very
large input, I mean getting rid of rare translations which won't be used
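A minimal sketch of that kind of pruning, assuming the phrase table is sorted by source phrase with the better options first (the real tools, e.g. filter-pt or prunePhraseTable mentioned elsewhere on this list, handle this properly):

```shell
# keep_topn N: keep at most N translation options per source phrase.
# Assumes the phrase table is sorted by source phrase, best options first;
# a crude illustration, not a replacement for proper significance pruning.
keep_topn() {
  awk -F' \\|\\|\\| ' -v n="$1" '
    { if ($1 == prev) k++; else { prev = $1; k = 1 } }
    k <= n'
}
# e.g.: gzip -cd phrase-table.gz | keep_topn 20 | gzip > phrase-table.pruned.gz
```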
Hi All
I did produce a version of experiment.perl for Groundhog (remember
that?) but it's not much use for any other nmt system. The problem (well
actually the big advantage!) of nmt is that the pipeline is too simple
for a tool like experiment.perl. And the experiments that do need tool
suppo
Hi Jorg
Since the operation sequence model is based on minimal phrase pairs, its
training code should be able to do the extraction (although I'm not
familiar with this code)
cheers - Barry
On 08/11/17 19:12, Jorg Tiedemann wrote:
Hi,
Can I use moses extract or any other tool to extract onl
Hi Vincent
Looks fine to me:
> wc -l news-commentary-v12.de-en.*
> 270769 news-commentary-v12.de-en.de
> 270769 news-commentary-v12.de-en.en
> 541538 total
What are you running that shows you different line numbers?
cheers - Barry
On 12/09/17 10:06, Vincent Nguyen wrote:
> Hi,
> Is there
15th, 2017
Notification of acceptance: June 30th, 2017
Camera-ready deadline: July 14th, 2017
Workshop in Copenhagen preceding EMNLP: September 7-8th, 2017
Barry Haddow
(On behalf of the organisers)
--
Hi Amir
You could also try this paper for a derivation of the complexity of PBMT
decoding
https://www.aclweb.org/anthology/E/E09/E09-1061v2.pdf
cheers - Barry
On 27/02/17 15:54, Philipp Koehn wrote:
> Hi,
>
> I am not sure if you follow your question - in the formula you cite,
> there are expon
.
IMPORTANT DATES
Paper submissions:
Paper submission deadline: June 9th, 2017
Notification of acceptance: June 30th, 2017
Camera-ready deadline: July 14th, 2017
Workshop in Copenhagen preceding EMNLP: September 7-8th, 2017
For shared task timetable, see website.
Barry Haddow
(On behalf of the
participants are encouraged to register
with the mailing list for further announcements
(https://groups.google.com/forum/#!forum/wmt-tasks)
For all tasks, participants will also be invited to submit a short
paper describing their system.
Best wishes
Barry Haddow
(On behalf of the
In steps/0
On 05/12/16 22:36, Fred Blain wrote:
hi Lane,
if you omit the '-exec' in your call to experiment.perl, it will only
generate the required scripts without running anything. you will find the
scripts under the steps/ folder.
best,
Hi Nat
Imagine it's a translator using MT and somehow he/she has translated
the sentence before and just wants the exact translation. A TM would
solve the problem and Moses surely could emulate the TM but NMT tends
to go overly creative and produces something else.
Then just use a TM for this
<https://drive.google.com/file/d/0BxvJK3H5ZKsnYzJiZmhjUWI0Qlk/view?usp=drive_web>
2016-11-02 14:28 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Adding
-first-step 5 -last-step 5
will just run step 5 (phrase extraction)
On 02/11/16 12:01, Hasan Sait ARSLAN wrote:
For instance, could you show me an example?
Thanks,
2016-11-02 13:57 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Hasan
You can use
FactoredTraining.BuildReorderingModel>
* 8 Generation model
<http://www.statmt.org/moses/?n=FactoredTraining.BuildGenerationModel>
* 9 Configuration file
<http://www.statmt.org/moses/?n=FactoredTraining.CreateConfigurationFile>
steps manually?
201
11:13, Hasan Sait ARSLAN wrote:
Hi Barry,
Unfortunately I didn't keep the log file. Is it really a hopeless
situation?
2016-11-02 13:10 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Hasan
You should have run train_model.perl something like this:
.
Cheers,
2016-11-02 12:55 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Hasan
If your phrase table is empty, that would explain why tuning
didn't work. Something went wrong earlier in the process. Could
you post your log file from train_model.perl
have trained my data for 5
days, and the folder "train" is 39 G, but there are no phrases
saved in the phrase table. It is annoying. What should I do now? I hope I
won't need to rerun everything from scratch
2016-11-02 12:26 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.
Hi Hasan
The error message should be written into filterphrases.err, inside your
working directory,
cheers - Barry
On 02/11/16 10:02, Hasan Sait ARSLAN wrote:
Hi Hieu,
I did in the way, you want. Plus, I am sure the path and file names
are correctly spelt.
But still, I get the same error
nce ? (with the wmt11 files)
>
>
>
> On 04/10/2016 at 21:46, Barry Haddow wrote:
>> Hi Vincent
>>
>> Are you comparing compressed with uncompressed files?
>>
>> cheers - Barry
>>
>> On 04/10/16 14:40, Vincent Nguyen wrote:
>>> Hi,
>
Hi Vincent
Are you comparing compressed with uncompressed files?
cheers - Barry
On 04/10/16 14:40, Vincent Nguyen wrote:
> Hi,
>
> on this link:
>
> http://www.statmt.org/wmt11/translation-task.html
>
> on the download section for monolingual data, there is :
>
> one big file : http://www.statmt
N-Best Hypotheses Generation Time: : [401.645] seconds
Sentence Decoding Time: : [401.650] seconds
Translation took 1907.342 seconds
Should I check anything else?
Regards
Arefeh
On Sat, Aug 20, 2016 at 4:53 PM, Barry Haddow
<bhad...@staffmail.ed.ac.uk> wrote:
Hi Arefeh
Attached.
6:13 PM, Barry Haddow
<bhad...@staffmail.ed.ac.uk> wrote:
Hi Arefeh
The quickest way to see if Moses is using your feature is to put a
debug message in it to see if it gets called. You can also
increase the debug of Moses (try -v 2) to see if your feature's
Moses runs normally but weights file remains empty. It
seems moses doesn't use my feature.
Regards
Arefeh
On Wed, Aug 17, 2016 at 12:57 PM, Barry Haddow
<bhad...@staffmail.ed.ac.uk> wrote:
Hi Arefeh
That seems OK. Tuning (with kbmira or pro) will create a weights
Hi Arefeh
That seems OK. Tuning (with kbmira or pro) will create a weights file
for the sparse features, which you can add with:
[weight-file]
/path/to/sparse/weights
What goes wrong when you run moses?
cheers - Barry
On 17/08/16 07:50, arefeh kazemi wrote:
Hi
This is just a kindly reminde
Hi Bogdan
Why do you set the maximum phrase length to 20? Such long phrases are
unlikely to be useful, and could be the cause of the excessive resource
usage.
Other than that, the system you describe should not be using up 192G ram.
cheers - Barry
On 01/08/16 20:40, Bogdan Vasilescu wrote:
>
Hi Tomasz
The error message about missing the ini file is a consequence of the
tuning crash, so just ignore this.
To find out why Moses is failing, run it again in the console like this:
/home/moses/src/mosesdecoder/bin/moses -threads 16 -v 0 -config
/home/moses/working/experiments/NGRAM5/m
Hi Joe
You could also look at the entropy of the distribution. I'll leave Matt
to post the one-liner for that one,
cheers - Barry
On 13/05/16 15:10, Matt Post wrote:
gzip -cd model/phrase-table.gz | cut -d\| -f1 | sort | uniq -c | sort
-nr | head -n5
(according to one definition of "ambigu
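The entropy one-liner hinted at above was never posted in this excerpt. One possible reading (my construction, not the thread's) is the entropy of the source-phrase frequency distribution, mirroring the cut/sort/uniq pipeline:

```shell
# phrase_entropy: entropy in bits of the source-phrase frequency distribution
# read from a phrase table on stdin. A sketch of the suggestion above, not
# the thread's actual one-liner.
phrase_entropy() {
  cut -d'|' -f1 | sort | uniq -c |
    awk '{ c[NR] = $1; N += $1 }
         END { for (i in c) { p = c[i] / N; H -= p * log(p) / log(2) }
               printf "%.3f\n", H }'
}
# e.g.: gzip -cd model/phrase-table.gz | phrase_entropy
```

A higher value means the table's mass is spread over more source phrases; per-source translation entropy would be the analogous measure of ambiguity.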
Hi Dorra
I think this is the classic paper
http://dl.acm.org/citation.cfm?id=778824
Although a quick google turned up this paper, which is more specific to
your question
http://www.mt-archive.info/MTS-2007-Wu.pdf
cheers - Barry
On 10/05/16 23:51, haoua...@iro.umontreal.ca wrote:
> Hi,
>
> In
Hi Ales
Well, bitPos=18446744073708512633 looks bogus. Marcin?
cheers - Barry
On 13/04/16 17:23, Aleš Tamchyna wrote:
Hi all,
sorry for the delay. I'm attaching the debug backtrace.
Best,
Ales
On Wed, Apr 13, 2016 at 1:49 PM, Barry Haddow
<bhad...@staffmail.ed.ac.uk>
Hi
The backtrace would be more informative if you run with a debug build
(add variant=debug to bjam). Sometimes this makes bugs go away, or new
bugs appear, but if not then it will give more information. You can run
with core files enabled (ulimit -c unlimited) to save having to run
Moses ins
o the Moses directory. Which files are missing from the
*/tools/* directory?
Thanks.
Sergey
2016-03-19 17:01 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Sergey
It's looking for mgiza, which you don't have. Either install mgiza
into your tools
Hi Sergey
It's looking for mgiza, which you don't have. Either install mgiza into
your tools directory, or remove the mgiza arguments from your
train-model.perl command line.
cheers - Barry
On 19/03/16 13:56, Sergey A. wrote:
Hello Hieu Hoang.
Thank you for your suggestion, everything work
Hi Lane
SRILM is no longer required, since Nadir made some EMS updates last
October. Try upgrading to a recent version,
cheers - Barry
On 16/02/16 15:04, Lane Schwartz wrote:
Hi,
This is mostly an FYI, but I thought I'd point it out. The OSM
documentation (http://www.statmt.org/moses/?n=Ad
to silently not
translate complete sentences.
(I must admit that I didn't look into it in too much detail, but it
should be easy to confirm.)
Cheers,
Matthias
On Fri, 2016-01-29 at 20:28 +, Barry Haddow wrote:
Hi All
I think I see what happened now.
When you give the input "dies ist ein haus" to the sam
it should not crash.
In the log pasted by Martin, he passed "das ist ein haus" to
command-line Moses, which works, and gives a translation.
I think ideally the sample models should handle unknown words, and give
a translation. Maybe adding a glue rule would be sufficient?
cheers - B
Hi
When I run command-line Moses, I get the output below - i.e. no best
translation. The server crashes for me since it does not check for the
null pointer, but the command-line version does.
I think there should be a translation for this example.
cheers - Barry
[gna]bhaddow: echo 'dies ist e
Hi
We are looking for a new researcher to join the statmt group in Edinburgh
Link to the advert:
https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=035233
About the group:
http://www.statmt.org/ued/
cheers - Barry
Hi Dingyuan
What platform are you running on? I could not reproduce your error on
Ubuntu 12.04, and valgrind is clean,
cheers - Barry
On 19/01/16 16:31, Barry Haddow wrote:
> Hi Dingyuan
>
> I ran for over 200 iterations and saw no problem. I tried with your LANG
> and LANGUAGE
"zh_CN.UTF-8"
> LC_MONETARY="zh_CN.UTF-8"
> LC_MESSAGES="zh_CN.UTF-8"
> LC_PAPER="zh_CN.UTF-8"
> LC_NAME="zh_CN.UTF-8"
> LC_ADDRESS="zh_CN.UTF-8"
> LC_TELEPHONE="zh_CN.UTF-8"
> LC_MEASUREMENT="zh_CN.UTF-8"
'repeatnbest.sh' which runs moses
> repeatedly until encoding error occurs.
>
> The file run7.best100.out and run7.out in the archive is the last run
> that produces the error.
>
> It seems that it is WordTranslationFeature that causes the problem.
>
> On 2016-01-19 00
in nbest list occurs only in the
> feature list (3 different samples), without affecting translation
> result. Therefore, the phrase table or training corpus may not be the
> problem.
>
> On 2016-01-18 23:04, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> Are these encoding errors
>
> https://gist.github.com/gumblex/0d9d0848b435e4f9818f
>
> On 2016-01-18 20:42, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> The extractor expects feature names to contain an underscore (not sure
>> exactly why) but some of yours don't, and Moses skips them, interpre
middle of line 61 a few bytes are corrupted. Is that
a moses problem or my memory has a problem?
I also checked other files using iconv, they are all OK in UTF-8.
On 2016-01-18 19:32, Barry Haddow wrote:
Hi Dingyuan
Yes, that's very possible. The error could be in extracting features.dat
from
es = "target-word-insertion top 50, source-word-deletion
> top 50, word-translation top 50 50, phrase-length"
>
> I suspect there is something unexpected in the extractor.
>
>
> On 2016-01-18 19:03, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> In fact it is not
> Hi,
>
> I've attached that. The line number is 1694.
>
> On 2016-01-18 16:43, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> Is it possible to attach the features.dat file that is causing the
>> error? Almost certainly Moses is failing to parse the line because of
>
Hi Dingyuan
Is it possible to attach the features.dat file that is causing the
error? Almost certainly Moses is failing to parse the line because of
the Asian characters in the feature names,
cheers - Barry
On 16/01/16 15:58, Dingyuan Wang wrote:
> I ran
>
> ~/software/moses/bin/kbmira -J 75
Hi Lane
Can you get a stack trace to see which line the message is coming from?
That error message is repeated in a few files.
From looking at the code, I'd guess that the OutputFactorOrder is not
being initialised correctly. Possibly due to the refactoring of the
config code. Does your exam
:
Paper submission deadline: May 8th, 2016
Notification of acceptance: June 5th, 2016
Camera-ready deadline: June 22nd, 2016
Workshop in Berlin following ACL: August 11-12th, 2016
For shared task timetable, see website.
Barry Haddow
(On behalf of the organisers)
ef van Genabith, Deutsches Forschungszentrum für Künstliche
Intelligenz (DFKI), Germany
Barry Haddow, University of Edinburgh, UK
Jan Hajic, Charles University in Prague, Czech Republic
Kim Harris, text&form, Germany
Matthias Heyn, SDL, Belgium
Philipp Koehn, Johns Hopkins University
The aim is to find translated document pairs from a large collection of
documents in two languages.
Best wishes
Barry Haddow
(On behalf of the organisers)
Hi James
sh: 1:
/media/bigdata/jcread/3rd_party_software/mosesdecoder/scripts/../bin/symal:
not found
The script expects to be able to navigate the file system and find the
binaries. If you've built Moses with a "--prefix" option then it won't
be able to find the binaries. If you are runni
tère de ||| LexicalReordering0= -4.41886 0 0 0 0 0
Distortion0= 0 LM0= -57.5157 WordPenalty0= -2 PhrasePenalty0= 1
PhraseDictionaryMultiModel0= -1.09861 -1.4366 -1.53505 -1.59179 |||
-1.6079
Vito
2015-11-26 12:16 GMT+01:00 Barry Haddow <bhad...@inf.ed.ac.uk>:
Hi
know which could be the cause.
Sometimes there is this message on loading the phrase-tables
tcmalloc: large alloc 1149427712 bytes == 0x28a54000 @
After re-tuning however the difference in BLEU score gets smaller even
with compact phrase-table.
Best regards,
Vito
2015-11-25 21:23 GMT+01:00
Hi Vito
The 0.2 difference is after retuning? That's normal then.
But a difference of 5 bleu without retuning suggests a bug. Did you say
that this only happens with PhraseDictionaryMultiModel?
cheers - Barry
On 25/11/15 13:53, Vito Mandorino wrote:
Thank you. In our tests it seems that with
Hi Nick
The best solution is to use the compact phrase table, and for this just add
ttable-binarizer = $moses-bin-dir/processPhraseTableMin
to the general section.
If you need to use the ondisk phrase table (sparse features, properties
etc.) then replace the above with
ttable-binarizer = "$
,
I tried it, I got a decrease in BLEU score, say from 16.39 to 14.35,
but the size of the PT was greatly reduced. When I tried some positive
values the BLEU score varied. The following is a sample table.
On Tue, Nov 24, 2015 at 3:40 PM, Barry Haddow
<bhad
parameters -l -n.
On Nov 24, 2015 2:44 PM, "Barry Haddow" <bhad...@staffmail.ed.ac.uk> wrote:
Hi
You're better off using the Johnson pruning method
http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5 . The
relent code is no longer maintained,
c
Hi
You're better off using the Johnson pruning method
http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5 . The relent
code is no longer maintained,
cheers - Barry
On 24/11/15 05:42, Sanjanashree Palanivel wrote:
Dear All,
I just tried to prune the phrase table using relent-fi
Hi Tomasz
The moseserver is just the decoder, so it doesn't do any of the pre- and
post-processing steps that you also need. In particular it does not do
tokenisation. You need to send it tokenised text, and then de-tokenise
the output,
cheers - Barry
On 12/11/15 13:40, Tomasz Gawryl wrote:
>
Hi Davood
The first command you give has a quote missing at the end - is this correct?
Another difference is that you have "-v 0", so moses will run silently.
What was the actual output when you ran this command? What you have
below looks correct to me.
cheers - Barry
On 28/10/15 21:57, Dav
Hi Hieu
That's exactly why I took to pre-pruning the phrase table, as I
mentioned on Friday. I had something like 750,000 translations of the
most common word, and it took half-an-hour to get the first sentence
translated.
cheers - Barry
On 05/10/15 15:48, Hieu Hoang wrote:
what pt implemen
And there's prunePhraseTable, which prunes according to weighted TM
score (as Moses does at runtime).
Some day there will be one pruner to rule them all ...
On 02/10/15 18:39, Philipp Koehn wrote:
Hi,
there is also scripts/training/threshold-filter.perl
which filters out phrase pairs based on
Hi Nakul
The Emille project released parallel corpora for several South Asian
languages
http://catalog.elra.info/product_info.php?products_id=696
cheers - Barry
On 27/09/15 15:45, nakul sharma wrote:
> Dear All,
>
> Is there any online repository of parallel corpus for Indian Regional
> languag
Hi Jian
You could also try using dropout. Adding something like
--dropout 0.8 --input_dropout 0.9 --null_index 1
to nplm training can help - look at your vocabulary file to see what the
null index should be set to. This works with the Moses version of nplm,
cheers - Barry
On 21/09/15 08:45,
Hi Tomek
Yes, that's quite a low score. Have a look at the translation output, do
the sentences have lots of English words in them, are they very long,
very short, or scrambled in some other way?
The commonest problem is that something went wrong in corpus
preparation, for example the corpor
Hi Sujay
Could you post the log of the output of filter-pt ?
cheers - Barry
On 17/08/15 08:19, Hegde, Sujay wrote:
Dear Moses support/Joy,
We are pruning the phrase Tables using the method
shown in the link(SALM):
http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5
You could try this tutorial
http://www.statmt.org/mtma15/uploads/mtma15-domain-adaptation.pdf
On 14/08/15 20:20, Vincent Nguyen wrote:
> I had read this section, which deals with translation model combination.
> not much on language model or tuning.
>
> For instance: if I want to make sure that
ith signal 9, without coredump
any clue what signal 9 means?
On 04/08/2015 17:28, Barry Haddow wrote:
Hi Vincent
If you are comparing to the results of WMT11, then you can look at
the system descriptions to see what the authors did. In fact it's
worth looking at the WMT14 descrip
>> another reason I do not understand.
>> it's at build_ttable time
>> I attached the error + config
>> cheers,
>> Vincent
>>
>> On 04/08/2015 17:28, Barry Haddow wrote:
>>> Hi Vincent
>>>
>>> If you are comparing to the results
Hi Vincent
If you are comparing to the results of WMT11, then you can look at the
system descriptions to see what the authors did. In fact it's worth
looking at the WMT14 descriptions (WMT15 will be available next month)
to see how state-of-the-art systems are built.
For fr-en or en-fr, the fi
John
>
>
> On 8/3/15, Barry Haddow wrote:
>> Hi John
>>
>>> Is there a reason the example weight file has this feature name that I’m
>>> missing?
>> My fault I'm afraid. I streamlined bilingual-lm in EMS, but didn't
>> realise th
Hi John
> Is there a reason the example weight file has this feature name that I’m
> missing?
My fault I'm afraid. I streamlined bilingual-lm in EMS, but didn't
realise that the example bypassed tuning. I've fixed it now according to
your suggestion,
cheers - Barry
On 02/08/15 15:46, John Jos
Do I have to binarize first or can I convert directly to Compact?
(i.e. can I skip the CreateOnDisk stuff)
If so, is there a predefined script or should I do it manually?
thanks
On 28/07/2015 15:44, Barry Haddow wrote:
Hi Vincent
I think the quotes are getting stripped off further down the
Try using the -b option in the tokenizer / detokenizer to disable buffering.
On 29/07/15 18:47, Vincent Nguyen wrote:
> Hi,
>
> As is, it was working fine except the tokenizer / detokenizer .perl code
> is outdated.
> It causes problem with the apostrophe in French.
>
> so I changed the translate.
Hi Fatma
I don't see any error in the file. What do you mean by "the output was
wrong"?
cheers - Barry
On 28/07/15 19:13, fatma elzahraa Eltaher wrote:
Dear All,
I try to build a model but I get the attached error file. Does this mean
that there is a problem in the model? Because I test it by w
00 2"
echo 'finished at '`date`
touch /home/moses/working/steps/6/TRAINING_binarize-config.6.DONE
On 28/07/2015 14:47, Barry Haddow wrote:
Hi Vincent
It could be a bug. Could you edit
mosesdecoder/scripts/ems/experiment.meta and change the line:
template: $binarize-all IN
e I can add the 5 arguments or if I need to reference
> ttable-binarizer somewhere
>
>
> On 28/07/2015 13:49, Barry Haddow wrote:
>> Hi Vincent
>>
>> If you look at the error log, you will see:
>>
>>> Usage: /home/moses/mosesdecoder/bin/CreateOnDiskPt nu
Hi Vincent
If you look at the error log, you will see:
> Usage: /home/moses/mosesdecoder/bin/CreateOnDiskPt numSourceFactors
> numTargetFactors numScores tableLimit sortScoreIndex inputPath outputPath
You are missing the first 5 arguments to CreateOnDiskPt, as given in
config.basic.
cheers -
Hi Vincent
On 28/07/15 10:18, Vincent Nguyen wrote:
> Thanks Barry. Answers and other questions below.
>
> On 28/07/2015 10:25, Barry Haddow wrote:
>> Hi Vincent
>>
>>> 2 bugs report :
>>> in the LM Corpus definition for Europarl : the $pair-extension i
Hi Vincent
> 2 bugs report :
> in the LM Corpus definition for Europarl : the $pair-extension is
> missing before .$output-extension
> in the step 5 (maybe for others too) generation of the moses.tuned.ini.5
> file there is a missing ".gz" at the end of phrase-table.5
> in the PhraseDictionaryMemo
server: --threads: 8 (i.e. abyss: 16)
client: shoots 10 threads => about 11 seconds, server shows busy CPU
workload => OK
5.)
server: --threads: 16 (i.e. abyss: 32)
client: shoots 20 threads => about 11 seconds, server shows busy CPU
workload => OK
Helps. :-)
Best wishes,
Martin
On 24.07.2015 at 13:2
Hi Martin
Thanks for the detailed information. It's a bit strange since
command-line Moses uses the same threadpool, and we always overload the
threadpool since the entire test set is read in and queued.
The server was refactored somewhat recently - which git revision are you
using?
In the
ng these
models into memory will require raising our already excessive RAM
requirements...
Thanks again for the help.
On Wednesday, July 22, 2015, Barry Haddow wrote:
Hi Oren
I'm not aware of any threading problems with
PhraseDictionaryMemory,
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu
On 21 July 2015 at 18:07, Barry Haddow wrote:
On 21/07/15 14:51, Oren wrote:
I am using the in-memory mode, using about 50GB of RAM. (No
swap issues as far as I can tell.) Could that
to be
something configurable beyond the -threads switch. Am I missing something?
The commit enables you to set the maximum number of connections to be
the same as the maximum number of threads.
On Tuesday, July 21, 2015, Barry Haddow <bhad...@staffmail.ed.ac.uk> wrote:
Hi
rver.
The slowness issue persists but in a different form. Most requests
return right away, even under heavy load, but some requests (about 5%)
take far longer - about 15-20seconds.
Perhaps there are other relevant switches?
Thanks again.
On Monday, July 20, 2015, Barry Haddow <
Hi Oren
The threading model is different. In v1, the server created a new thread
for every request, v3 uses a thread pool. Try increasing the number of
threads.
Also, make sure you use the compact phrase table and KenLM as they are
normally faster, and pre-pruning your phrase table can help,
(Sent on behalf of Jan Hajic)
We cordially invite you to take part in the first Deep Machine
Translation Workshop, which will take place in Prague, Czech Republic,
on 3rd-4th September 2015.
https://ufal.mff.cuni.cz/events/deep-machine-translation-workshop
This is the first workshop on "Deep M
Hi Jeroen
> Am I right in thinking this comparison is more or less arbitrary, as
> long as the result is consistent and only zero if the two pointers are
> both null? If so, would anyone mind if I made it compare just the
> nullness of the two pointers?
From memory, I think you are correct. For
Just remove steps/1/TUNING_tune.1.DONE (replacing 1 with your experiment
id) and then re-run.
It would be nice if EMS supported multiple tuning runs without
intervention, but afaik it doesn't.
On 22/06/15 16:15, Lane Schwartz wrote:
Given a successful run of EMS, what do I need to do to confi
Do you think that my medium system is effective? (Core i5 2400, 4GB
RAM, Ubuntu 32bit 14.04). Of course I wanted to train about 5
sentences.
For a small data set of 50k sentences, this should work. You could try
on 10k sentences to be sure.
On 17/06/15 13:46, Davood Mohammadifar wrote:
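The quick 10k-sentence check suggested above can be set up by taking the first lines of both sides in parallel. A sketch with generated stand-in data (file names illustrative):

```shell
# Take the first 10k sentence pairs as a quick sanity-check corpus; taking
# the same number of lines from both sides keeps them parallel.
# Demo data stands in for a real tokenised corpus.
seq 1 50000 | sed 's/^/src line /' > corpus.de
seq 1 50000 | sed 's/^/tgt line /' > corpus.en
head -n 10000 corpus.de > small.de
head -n 10000 corpus.en > small.en
wc -l small.de small.en
```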
Hi Davood
From line 20113 onwards there's a whole bunch of error messages
indicating that the giza alignment didn't run properly, so then the
resulting phrase extraction didn't work. I can't actually see why giza
failed though - possibly the corpus was not preprocessed correctly. I'm
not fami