Author: Romain Guillebert <[email protected]>
Branch: extradoc
Changeset: r4155:83876e46ca36
Date: 2012-03-20 01:32 +0100
http://bitbucket.org/pypy/extradoc/changeset/83876e46ca36/
Log:	Typos

diff --git a/talk/ep2012/tools/abstract.rst b/talk/ep2012/tools/abstract.rst
--- a/talk/ep2012/tools/abstract.rst
+++ b/talk/ep2012/tools/abstract.rst
@@ -4,9 +4,9 @@
 When writing code in C, the mapping between written code and compiled
 assembler is generally pretty well understood. Compilers employ tricks, but
 barring few extreme examples they're, typically within 20% performance difference
-from "naive" compilation. Tools used to asses performance are usually
+from "naive" compilation. Tools used to assess performance are usually
 per-function profilers, like valgrind, oprofile or gnuprof. They all can
-display time spent per function as well as cumultative times and call
+display time spent per function as well as cumulative times and call
 chains. This works very well for small-to-medium programs, however large,
 already profiled programs tend to have flat performance profile with
 unclear message from profilers.
@@ -16,7 +16,7 @@
 not well known. Writing "good" vs "bad" python (from the JIT perspective),
 can make a 20x performance difference.
 
-This talk will cover current profilers available for Pyhon and PyPy,
-as well as other tools that can be used to asses performance. I'll also
+This talk will cover current profilers available for Python and PyPy,
+as well as other tools that can be used to assess performance. I'll also
 present in which cases using current tools does not give necessary
 information and what kind of tools can address this problem in the future.
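As an aside, a minimal sketch of the kind of per-function output the abstract refers to, assuming only the standard-library cProfile and pstats modules as a stand-in for the profilers it names; the workload function is made up purely for illustration:

    import cProfile
    import pstats

    def busy_loop(n):
        # Hypothetical workload, just to give the profiler something to measure.
        return sum(i * i for i in range(n))

    def main():
        total = 0
        for _ in range(100):
            total += busy_loop(10000)
        return total

    if __name__ == "__main__":
        profiler = cProfile.Profile()
        profiler.runcall(main)
        stats = pstats.Stats(profiler)
        # Per-function and cumulative times, sorted by cumulative time.
        stats.sort_stats("cumulative").print_stats(10)
        # Call-chain information: which callers account for each function's time.
        stats.print_callers(10)

Running this prints a table of ncalls, tottime and cumtime per function plus a caller breakdown, which is roughly the per-function picture the abstract describes for valgrind, oprofile or gnuprof on the C side.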
