Author: Carl Friedrich Bolz <cfb...@gmx.de>
Branch: extradoc
Changeset: r3704:20adeea40d74
Date: 2011-06-16 16:59 +0200
http://bitbucket.org/pypy/extradoc/changeset/20adeea40d74/

Log:    a motivation example

diff --git a/talk/iwtc11/paper.tex b/talk/iwtc11/paper.tex
--- a/talk/iwtc11/paper.tex
+++ b/talk/iwtc11/paper.tex
@@ -171,7 +171,94 @@
 % jump(i2, i3)
 % none of the operations is loop-invariant, but loop peeling will still remove the second addition
 
-\subsection{Running Example}
+\section{Motivation}
+\label{sec:Motivation}
+
+To motivate the approach we propose here, let us look at a trivial
+(unrealistic) trace that corresponds to an infinite loop:
+
+\begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
+$l_0$($i_{0}$):
+$i_1$ = $i_0$ + 1
+print($i_1$)
+jump($l_0$, $i_0$)
+\end{lstlisting}
+
+The first line is a label $l_0$ with argument $i_{0}$. Every label has a list
+of arguments. The \lstinline{print} operation just prints its argument (it is
+not an operation that PyPy's tracing JIT really supports; we use it here only
+for the example). The \lstinline{jump} operation jumps back to the beginning
+of the trace, listing the new values of the arguments of the trace. In this
+case, the new value of $i_0$ is $i_0$, which makes it loop-invariant.
+
+Because $i_0$ is loop-invariant, the addition could be moved out of the loop.
+However, we want to get this effect using our existing optimization passes
+without changing them too much. To achieve this, we peel one iteration off the
+loop before running the optimizations. This peeling gives the following trace:
+
+\begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
+$l_0$($i_{0}$):
+$i_1$ = $i_0$ + 1
+print($i_1$)
+jump($l_1$, $i_0$)
+
+$l_1$($i_{0}$):
+$i_2$ = $i_0$ + 1
+print($i_2$)
+jump($l_1$, $i_0$)
+\end{lstlisting}
+
+The iteration of the loop that was peeled off (lines 1-4) is called the
+\emph{preamble}; the loop that follows it (lines 6-9) is called the
+\emph{peeled loop}.
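+
+To make the peeling step concrete, the following minimal Python sketch (an
+illustration only, not PyPy's actual implementation) represents each trace
+operation as a \lstinline{(result, op, args)} tuple, where
+\lstinline{result} is \lstinline{None} for operations such as
+\lstinline{print} that produce no value; this representation and the helper
+are assumptions made for this example:
+
+\begin{lstlisting}[language=Python,basicstyle=\ttfamily\scriptsize]
+def peel(body, fresh):
+    # copy the loop body once, renaming every variable the
+    # body defines; 'fresh' supplies the new variable names
+    renaming = {}
+    peeled = []
+    for result, op, args in body:
+        args = tuple(renaming.get(a, a) for a in args)
+        if result is not None:
+            new = next(fresh)
+            renaming[result] = new
+            result = new
+        peeled.append((result, op, args))
+    return peeled
+
+body = [("i1", "+", ("i0", "1")), (None, "print", ("i1",))]
+# yields the body of l1 in the listing above:
+# [("i2", "+", ("i0", "1")), (None, "print", ("i2",))]
+peel(body, iter(["i2"]))
+\end{lstlisting}
+
+This sketch ignores the general case where the arguments of the original
+\lstinline{jump} differ from the arguments of the label, which would require
+an additional substitution.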
+
+Now the optimizer optimizes these two iterations of the loop together,
+disregarding the \lstinline{jump} and the label in lines 4-6. Doing this,
+common subexpression elimination will discover that the two additions compute
+the same value and replace $i_2$ with $i_1$. This leads to the following
+trace:
+
+\begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
+$l_0$($i_{0}$):
+$i_1$ = $i_0$ + 1
+print($i_1$)
+jump($l_1$, $i_0$)
+
+$l_1$($i_{0}$):
+print($i_1$)
+jump($l_1$, $i_0$)
+\end{lstlisting}
+
+This trace is malformed, because $i_1$ is used after the label $l_1$ without
+being passed there. We therefore need to add $i_1$ as an argument to the
+label and pass it along the \lstinline{jump}s:
+
+\begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
+$l_0$($i_{0}$):
+$i_1$ = $i_0$ + 1
+print($i_1$)
+jump($l_1$, $i_0$, $i_1$)
+
+$l_1$($i_{0}$, $i_1$):
+print($i_1$)
+jump($l_1$, $i_0$, $i_1$)
+\end{lstlisting}
+
+The final result is that the loop-invariant code has been moved out of the
+loop into the peeled-off iteration. Thus the addition is executed only in the
+first iteration, and its result is reused in all further iterations.
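+
+Such a one-pass common subexpression elimination can be sketched in the same
+toy representation as above (again an illustration, not PyPy's actual code;
+this simplified pass assumes every result-producing operation is pure):
+
+\begin{lstlisting}[language=Python,basicstyle=\ttfamily\scriptsize]
+def cse(trace):
+    available = {}  # (op, args) -> variable computed earlier
+    replace = {}    # variable -> equivalent earlier variable
+    optimized = []
+    for result, op, args in trace:
+        args = tuple(replace.get(a, a) for a in args)
+        key = (op, args)
+        if result is not None and key in available:
+            # the same operation was performed earlier:
+            # reuse its result and drop the duplicate
+            replace[result] = available[key]
+            continue
+        if result is not None:
+            available[key] = result
+        optimized.append((result, op, args))
+    return optimized, replace
+
+# preamble followed by peeled loop, as in the listing above;
+# cse drops the second addition and rewrites print(i2) to print(i1)
+trace = [("i1", "+", ("i0", "1")), (None, "print", ("i1",)),
+         ("i2", "+", ("i0", "1")), (None, "print", ("i2",))]
+\end{lstlisting}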
+
+This scheme is quite powerful. It allows simple linear optimization passes to
+perform loop-aware optimizations, such as loop-invariant code motion, without
+changing them at all. All that is needed is to peel off one iteration, then
+apply simple one-pass optimizations, and make sure that the necessary extra
+arguments are inserted into the label of the loop itself and the jumps
+afterwards. Peeling off one iteration gives the optimizations enough context
+to remove operations from the peeled loop, because they detect that the
+operation was already performed in the preamble.
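+
+In the same toy representation, the necessary extra arguments are exactly the
+variables defined in the preamble that the optimized peeled loop still uses
+(once more a sketch, not PyPy's actual code):
+
+\begin{lstlisting}[language=Python,basicstyle=\ttfamily\scriptsize]
+def extra_label_arguments(preamble, peeled):
+    # variables the preamble defines ...
+    defined = {result for result, _, _ in preamble
+               if result is not None}
+    # ... that the peeled loop still uses after optimization
+    used = {arg for _, _, args in peeled for arg in args}
+    # these are appended to the label of the peeled loop and
+    # to the argument lists of both jumps
+    return sorted(used & defined)
+
+preamble = [("i1", "+", ("i0", "1")), (None, "print", ("i1",))]
+peeled = [(None, "print", ("i1",))]  # after the CSE pass above
+extra_label_arguments(preamble, peeled)  # == ["i1"]
+\end{lstlisting}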
+
+
+% section Motivation (end)
+
+\section{Running Example}
 \label{sub:example}
 
 For the purpose of this paper, we are going to use a tiny interpreter for a dynamic language with
@@ -346,7 +433,6 @@
 the preamble will be executed only once while the peeled loop will
 be used for every further iteration.
 
-
 When applying the following optimizations to this two-iteration trace
 some care has to be taken as to how the arguments of the two
 \lstinline{jump} operations and the input arguments of the peeled loop are
@@ -686,7 +772,7 @@
 Note that virtuals are only exploded into their attributes when
 constructing the arguments of the jump of the preamble. This
 explosion can't be repeated when constructing the arguments of the
-jump of the peeled loop as it has to mach the first jump. This means that
+jump of the peeled loop as it has to match the first jump. This means that
 the objects that were passed as pointers (non-virtuals) from the first
 iteration to the second (from preamble to peeled loop) also have to be
 passed as pointers from the second iteration to the third (from peeled
_______________________________________________
pypy-commit mailing list
pypy-commit@python.org
http://mail.python.org/mailman/listinfo/pypy-commit
