Hi Hieu,
I just sent you the patch for mosesserver, because my attempts to push to
GitHub failed for some unknown reason ;-)
Best regards,
Martin
On 05.08.2015 at 14:23, Hieu Hoang wrote:
It would be good if you could check in your change and take charge of it.
If you're waiting for us academics to fix it, you'll be waiting a long
time. We rarely use the server, we don't know what the issues are, and
we won't know whether we've really fixed it when we change it.
Hieu Hoang
Sent while bumping into things
On 5 Aug 2015 4:15 pm, "Martin Baumgärtner"
<martin.baumgaert...@star-group.net> wrote:
Hi Oren,
we temporarily fixed this issue with the following quick hack in the
Abyss server's constructor call:
xmlrpc_c::serverAbyss myAbyssServer(
    xmlrpc_c::serverAbyss::constrOpt()
    .registryP(&myRegistry)
    .portNumber(port)          // TCP port on which to listen
    .logFileName(logfile)
    .allowOrigin("*")
    .maxConn((unsigned int)numThreads * 4)  // *4: unofficial quick hack for the performance issue
);
I'm also looking forward to the official fix, i.e. a configurable
value for the Abyss connection limit ...
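As a rough sketch of what such a configurable value could look like
(abyssMaxConn and whatever command-line option fills it are hypothetical
here, not an existing mosesserver parameter):

    // Hypothetical: take the Abyss connection limit from its own setting
    // instead of deriving it from the --threads value.
    unsigned int abyssMaxConn = 32;   // e.g. filled from a new command-line option

    xmlrpc_c::serverAbyss myAbyssServer(
        xmlrpc_c::serverAbyss::constrOpt()
        .registryP(&myRegistry)
        .portNumber(port)
        .logFileName(logfile)
        .allowOrigin("*")
        .maxConn(abyssMaxConn)        // configurable instead of numThreads*4
    );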
Kind regards,
Martin
On 04.08.2015 at 09:08, Oren wrote:
Hi Barry and Martin,
Has this issue been fixed in the source code? Should I take the
current master branch and compile it myself to avoid this issue?
Thanks.
On Friday, July 24, 2015, Barry Haddow
<bhad...@staffmail.ed.ac.uk> wrote:
Hi Martin
So it looks like it was the Abyss connection limit that was
causing the problem? I'm not sure why that should be; it should
either queue the jobs up or discard them.
Moses server should probably allow users to configure the
number of Abyss connections directly rather than tying it to
the number of Moses threads.
cheers - Barry
On 24/07/15 14:17, Martin Baumgärtner wrote:
Hi Barry,
thanks for your quick reply!
We're currently testing on SHA
e53ad4085942872f1c4ce75cb99afe66137e1e17 (master, from
2015-07-23). This version includes the fix for mosesserver
recently mentioned by Hieu in the performance thread.
Following my first intuition, I re-ran the critical experiments
after modifying mosesserver.cpp to simply double the given
--threads value, but only for the Abyss server:
.maxConn((unsigned int)numThreads*2):
2.)
server: --threads: 8 (i.e. Abyss: 16)
client: shoots 10 threads => about 11 seconds, server shows busy CPU workload => OK

5.)
server: --threads: 16 (i.e. Abyss: 32)
client: shoots 20 threads => about 11 seconds, server shows busy CPU workload => OK

Helps. :-)
Best wishes,
Martin
On 24.07.2015 at 13:26, Barry Haddow wrote:
Hi Martin
Thanks for the detailed information. It's a bit strange
since command-line Moses uses the same threadpool, and we
always overload the threadpool since the entire test set is
read in and queued.
The server was refactored somewhat recently - which git
revision are you using?
In the case where Moses takes a long time, and cpu activity
is low, it could be either waiting on IO, or waiting on
locks. If the former, I don't know why it works fine for
command-line Moses, and if the latter then it's odd how it
eventually frees itself.
Is it possible to run scenario 2, then attach a debugger
whilst Moses is in the low-CPU phase to see what it is
doing? (You can do this in gdb with "info threads")
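For example (assuming the server binary is called mosesserver and is
the only matching process), attaching and dumping the threads looks
roughly like this:

    gdb -p $(pidof mosesserver)
    (gdb) info threads
    (gdb) thread apply all bt

"thread apply all bt" prints a backtrace for every thread, which should
show whether they are stuck in I/O or waiting on a lock.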
cheers - Barry
On 24/07/15 12:07, Martin Baumgärtner wrote:
Hi,
we have followed your discussion about the mosesserver performance
issue with much interest so far.
We're seeing similar behaviour in our performance tests with a
current GitHub master clone. Both mosesserver and the complete
engine run on the same local machine, i.e. no NFS.
The machine is a virtualized CentOS 7 guest under Hyper-V:
> lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 30
Model name: Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz
Stepping: 5
CPU MHz: 2667.859
BogoMIPS: 5335.71
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
The following experiments use an engine with 75,000 segments
for TM/LM (--minphr-memory, --minlexr-memory):
1.)
server: --threads: 8
client: shoots 8 threads => about 12 seconds, server shows full CPU workload => OK

2.)
server: --threads: 8
client: shoots 10 threads => about 85 seconds, server shows mostly low activity, full CPU workload only near end of process => NOT OK

3.)
server: --threads: 16
client: shoots 10 threads => about 12 seconds, server shows busy CPU workload => OK

4.)
server: --threads: 16
client: shoots 16 threads => about 11 seconds, server shows busy CPU workload => OK

5.)
server: --threads: 16
client: shoots 20 threads => about 40-60 seconds (depending), server shows mostly low activity, full CPU workload only near end of process => NOT OK
We consistently see a breakdown in performance whenever the number
of client threads exceeds the number given by the --threads
parameter.
Kind regards,
Martin
--
*STAR Group* <http://www.star-group.net>
*Martin Baumgärtner*
STAR Language Technology & Solutions GmbH
Umberto-Nobile-Straße 19 | 71063 Sindelfingen | Germany
Tel. +49 70 31-4 10 92-0 martin.baumgaert...@star-group.net
Fax +49 70 31-4 10 92-70 www.star-group.net
Managing Directors: Oliver Rau, Bernd Barth
Commercial Register Stuttgart HRB 245654 | Tax No. 56098/11677
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support