FULLY FUNDED FOUR-YEAR PHD STUDENTSHIPS
- UKRI CENTRE FOR DOCTORAL TRAINING IN NATURAL LANGUAGE PROCESSING
Based at the University of Edinburgh, in conjunction with the School of
Informatics and the School of Philosophy, Psychology and Language Sciences.
Deadlines:
* Non-UK: 25th November 2022
- Friday April 21, 2017
- Notification of acceptance: Friday May 19, 2017
- Camera ready submission due: Friday May 26, 2017
- Early registration deadline (ACL'17): TBD
- Workshop: August 3 or 4, 2017
Workshop Organizers
- Alexandra Birch (Edinburgh)
Hi Fatma,
Models are routinely trained with millions of parallel sentence pairs. You
need more data. Please read the Moses software documentation and Philipp
Koehn's book for more background.
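A quick sanity check, sketched here with placeholder file names, is to count
the lines on each side of the parallel corpus; the two counts must match, and
a few thousand pairs is far too little for a usable phrase-based model:

wc -l corpus.src corpus.tgt   # counts must be identical on both sides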
Lexi
On Thu, Jul 2, 2015 at 9:42 AM, fatma elzahraa Eltaher <fatmaelta...@gmail.com> wrote:
> Dear Al
Hi there,
I have a seg fault with a normal master branch of Moses from 1 month ago,
on a normal-seeming test sentence. This was an en-cs system, and it
translated the first 6000+ sentences fine. It also translates a short
version of the sentence fine.
So "Daniel , the previous owner" translates OK,
Here are 1-4:
1. You would normally train the bilingual LM on the same corpus as the SMT
model, but it is not required.
2. Yes, but there are also other ways to make training faster which you
might want to explore.
3. Yes, it is important that the bilingual LM corpus matches the format that
will be p
Hi Kenneth,
In train-model.perl, the -e and -f arguments are used to determine
filenames and extensions, so they could easily be changed to src and tgt
within the script. But Tom has a handle on how this could be painful to
change in the wrapper code. clean-corpus-n.perl doesn't have a -e and -f
ar
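For reference, a sketch of how the two scripts are typically invoked in a
baseline setup (corpus names and LM path below are placeholders):
clean-corpus-n.perl takes the two language extensions as positional arguments,
whereas train-model.perl picks the source and target sides of the corpus stem
with -f and -e.

# Length-filter the tokenised corpus (stem, source ext, target ext, out stem, min, max)
~/mosesdecoder/scripts/training/clean-corpus-n.perl \
    corpus/train.tok fr en corpus/train.clean 1 80

# Train the translation model; -f/-e select the .fr/.en sides of the same stem
~/mosesdecoder/scripts/training/train-model.perl \
    -root-dir train -corpus corpus/train.clean -f fr -e en \
    -alignment grow-diag-final-and -reordering msd-bidirectional-fe \
    -lm 0:3:$HOME/lm/train.blm.en:8 -external-bin-dir ~/mosesdecoder/tools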
the phrase tables and the translation process itself (I
> > guess this is actually the problem). Lazy loading was unbearably slow
> > for me with the above-mentioned configuration, but I was using 64
> > threads in total, so a lot of concurrent disk access happening, no wonder
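One common remedy for slow on-disk phrase-table access under many threads is
the compact, memory-mapped table format; a sketch along the lines of the Moses
baseline instructions, with placeholder paths:

~/mosesdecoder/bin/processPhraseTableMin \
    -in train/model/phrase-table.gz -nscores 4 \
    -out binarised-model/phrase-table
~/mosesdecoder/bin/processLexicalTableMin \
    -in train/model/reordering-table.wbe-msd-bidirectional-fe.gz \
    -out binarised-model/reordering-table

The phrase-table and reordering entries in moses.ini then point at the
binarised files instead of the gzipped text tables.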
use it because the disk does not have to seek during decoding. Lazy loading
works best with local disk and is not recommended for networked filesystems.
On Thu, Mar 20, 2014 at 2:32 PM, Alexandra Birch wrote:
> Hi there,
>
> I want to run 4 MT servers at the same time on a machine wit
Hi there,
I want to run 4 MT servers at the same time on a machine with limited
memory. KenLM seems to reserve the amount of memory which the language
model would have taken if it had been loaded into memory. So I don't have
enough memory to run all these servers and the machine grinds to a halt i
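The lazy-loading behaviour discussed above is set on the LM line of moses.ini;
a sketch, assuming the lazyken switch exposed by KenLM-backed Moses of that era
(path and order are placeholders):

[feature]
# lazyken=1 memory-maps the binary LM lazily instead of reading it all up front;
# lazyken=0 forces a full read, which is the safer choice on networked filesystems.
KENLM lazyken=1 name=LM0 factor=0 path=/path/to/lm.binary order=5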
Hi Yvette,
Barry was spot on as usual. The code is in a branch:
svn co https://mosesdecoder.svn.sourceforge.net/svnroot/mosesdecoder/branches/mert-mtm5/ moseslrscore
You could also use the latest version of moses and just take the files
I changed for this branch. There weren't too many.
There