On 12 December 2014 at 13:53, Martin Liška <[email protected]> wrote:
>
> On 12 December 2014 at 14:19, Hieu Hoang <[email protected]> wrote:
> > thanks for that.
> >
> > Can you tell me what LTO is?
>
> Hello.
>
> It's Link-Time Optimization. Briefly, the idea is to emit an
> intermediate language for every translation unit (cpp file) into the
> object files. During linking, all these object files are collected and
> the compiler can optimize the entire program (so-called
> inter-procedural optimizations). The classic compilation approach
> emits assembly language for each TU (translation unit), so
> optimizations are performed only locally within these units.
>
> It brings a few percent; it would be interesting to identify why Moses
> does not benefit much from it.
>
> And there's one more interesting approach: profile-guided
> optimization, where you compile a program twice:
> 1) The first time, functions are annotated with special profiling
> instructions that count the number of executions of each branch,
> function, etc. This training run can use a test suite.
> 2) The second compilation benefits from these hints (the collected
> statistics) and can produce faster code.
> Even with a fairly poor test suite, these data are much better than
> the compiler's built-in heuristics.
>
> >
> > From the results, you get a 2% improvement by not using dynamic cast, and
> > another 2.65% by using unordered_set? Is that correct?
>
> You are right.
>
> >
> > The biggest use of std::set is the stacks (classes ChartHypothesisCollection,
> > HypothesisStack). We can change that to unordered_set, but that'll require
> > redoing the state information classes of all stateful FFs. It's a big
> > job to redo.
>
> In general, any std::set that does not require a particular iteration
> order can be replaced with unordered_set, where a potential speed-up
> can occur.
>

Our data structures implement operator<, which is required to add them
to a std::set.

To use unordered_set, they would need to implement operator== and a
hash function.

There are a few dozen of these data structures. That's why it's a big
job, and having an estimate of the speed-up would be useful before we
embark on it.

>
> Martin
>
> >
> > However, it would be good to measure how much time the stack operation
> > takes so we can size up the enemy :)
> >
> >
> > On 12 December 2014 at 12:45, Martin Liška <[email protected]> wrote:
> >>
> >> Hello.
> >>
> >> As part of my SUSE Hackweek project ([1]), I've spent a couple of days
> >> playing with Moses performance tuning. I cooperated with Aleš, and our
> >> effort produced two patches that have just been merged to mainline. If
> >> you are interested in more details, please visit my blog post: [2].
> >> I would be really happy if my blog post became a kick-off for
> >> further performance tuning.
> >>
> >> Thanks,
> >> Martin Liška,
> >> SUSE Labs
> >>
> >> [1] https://hackweek.suse.com/11/projects/284
> >> [2] http://marxin.github.io/posts/moses-performance-tuning/
> >>
> >> _______________________________________________
> >> Moses-support mailing list
> >> [email protected]
> >> http://mailman.mit.edu/mailman/listinfo/moses-support
> >
> >
> >
> > --
> > Hieu Hoang
> > Research Associate
> > University of Edinburgh
> > http://www.hoang.co.uk/hieu
> >
>
>

-- 
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu