On 06/05/11 17:13, Andreas Jonsson wrote:
> I had not analyzed the parts of the core parser that I consider
> "preproprocessing", and it came as a suprise to me that it was as slow
> as the Barack Obama benchmark shows.  But integrating template
> expansion with the parser would solve this performance problem, and is
> therefore in itself a strong argument for working towards replacing
> it.  I will write about this on wikitext-l.

That benchmark didn't have any templates in it; I expanded them with
Special:ExpandTemplates before I started. So it's unlikely that a
significant amount of the time was spent in the preprocessor.

It was a really quick benchmark, with no profiling or further testing
whatsoever. It took a few minutes to do. You shouldn't base
architecture decisions on it; it might be totally invalid. It might
not be a parser benchmark at all. I might have made some configuration
error, causing it to test an unrelated region of the code.

All I know is, I sent in wikitext, the CPU usage went to 100% for a
while, then HTML came back.

I've spent a lot of time profiling and optimising the parser in the
past. It's a complex process. You can't just look at one number for a
large amount of very complex text and conclude that you've found an
optimisation target.
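
To make that concrete, here is a minimal sketch (not what I actually
ran) of what repeated end-to-end timing through the action API's
action=parse might look like; the URL and sample wikitext are
placeholders, and even doing it this way only gives you wall-clock
numbers with some spread, not a real profile of where the parser
spends its time:

    #!/usr/bin/env python3
    # Illustrative only: time repeated parses via the MediaWiki action API
    # (action=parse) instead of trusting a single number. The endpoint and
    # wikitext below are placeholders; real optimisation work needs a PHP
    # profiler on the parser internals, not wall-clock timing like this.
    import statistics
    import time

    import requests

    API_URL = "https://example.org/w/api.php"  # placeholder endpoint

    def time_parse(wikitext, runs=10):
        """Time `runs` parses of the same wikitext; return the samples."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            resp = requests.post(API_URL, data={
                "action": "parse",
                "format": "json",
                "contentmodel": "wikitext",
                "text": wikitext,
            }, timeout=120)
            resp.raise_for_status()
            samples.append(time.perf_counter() - start)
        return samples

    if __name__ == "__main__":
        samples = time_parse("'''Hello''' [[world]]" * 1000)
        print("median %.3fs, stdev %.3fs" % (
            statistics.median(samples), statistics.stdev(samples)))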

-- Tim Starling

