Author: Maciej Fijalkowski <fij...@gmail.com>
Branch: extradoc
Changeset: r4920:82a0c82fdcf9
Date: 2012-11-16 15:52 +0100
http://bitbucket.org/pypy/extradoc/changeset/82a0c82fdcf9/

Log:    merge

diff too long, truncating to 2000 out of 21927 lines

diff --git a/blog/draft/arm-status-update.rst b/blog/draft/arm-status-update.rst
--- a/blog/draft/arm-status-update.rst
+++ b/blog/draft/arm-status-update.rst
@@ -1,12 +1,17 @@
 ARM Backend News Update
 =======================
 
-Starting with the good news, we finally merged the ``arm-backend-2`` branch
-into PyPy's main development line. As described in previous_ posts_ the main
-goal of this branch was to add support for ARM processors to PyPY's JIT.  As a
-general byproduct PyPy should now do a better job supporting non-x86
+As announced in the previous release notes for the PyPy 2.0 beta, the ARM JIT
+compiler backend branch ``arm-backend-2`` has been merged into the main
+development line and is going to be part of the upcoming release.
+As a general byproduct PyPy should now do a better job supporting non-x86
 architectures such as ARM and the in-progress support for PPC64.
 
+In this post I want to describe in a bit more detail the current state of
+PyPy's ARM support.  We currently provide two versions of PyPy for ARM, one
+with and one without the JIT compiler; both target the ARM soft-float ABI.
+Here I'm going to focus on the JIT version.
+
 On ARM, the JIT requires an ARMv7 processor or newer with a VFP unit targeting
 the ARM application profile. Although this sounds like a strong restriction,
 most of the ARM processors used in mobile devices and development boards are
@@ -30,10 +35,12 @@
 as *softfp*, *soft-float* and *hard-float*. The first two use the core
 registers to pass floating point arguments and do not make any assumptions
 about a floating point unit. The first uses a software based
-float-implementation, while the second can use a floating point unit. The
-latter and incompatible one requires a floating point unit and uses the
-coprocessor registers to pass floating arguments to calls. A detailed
-comparison can be found `here`_.
+float-implementation, while the second can use a floating point unit to
+actually perform the floating point operations while still copying the
+arguments to and from core registers to perform calls. The latter and
+incompatible one requires a floating point unit and uses the coprocessor
+registers to pass floating arguments to calls. A detailed comparison can be
+found `here`_.
 
 At the time we started implementing the float support in the ARM backend of the
 JIT, the soft-float calling conventions were the most commonly supported ones
@@ -62,7 +69,7 @@
 We also have some hardware to run the subset of the PyPy test-suite relevant to
 the ARM-JIT backend and to run the tests suite that tests the translated ARM
 binaries. The nightly tests are run on a Beagleboard-xM_ and an i.MX53_
-versatile board (kindly provided by Michael Foord), both boards run the ARM port `Ubuntu
+versatile board (kindly provided by Michael Foord), both boards run the ARM port of `Ubuntu
 12.04 Precise Pangolin`_. The current results for the different builders can be
 seen on the `PyPy buildbot`_. As can be seen there are still some issues to be
 fixed, but we are getting there.
@@ -94,8 +101,9 @@
  a regular basis. A fully emulated approach using QEMU might still be worth trying.*
 * Improve the tools, i.e. integrate with jitviewer_.
 
-Long term on open topics/projects for ARM:
+Long term or open topics/projects for ARM:
 
+* Fully support the hard-float calling convention and have nightly builds and tests.
 * Review of the generated machine code the JIT generates on ARM to see if the
   instruction selection makes sense for ARM.
 * Build a version that runs on Android.
diff --git a/blog/draft/numpy-status-update-5.rst b/blog/draft/numpy-status-update-5.rst
--- a/blog/draft/numpy-status-update-5.rst
+++ b/blog/draft/numpy-status-update-5.rst
@@ -7,12 +7,13 @@
 and there has been quite a bit of progress on the NumPy front in PyPy in the
 past two months. Things that happened:
 
-* **complex dtype support** - thanks to matti picus, NumPy on PyPy now supports
+* **complex dtype support** - thanks to Matti Picus, NumPy on PyPy now supports
  complex dtype (only complex128 so far; work on the other parts is in progress)
 
 * **big refactoring** - probably the biggest issue we did was finishing
   a big refactoring that disabled some speedups (notably lazy computation
   of arrays), but lowered the barrier of implementing cool new features.
+  XXX: explain better why removing a speedup is a good thing, maybe?
 
 * **fancy indexing support** - all fancy indexing tricks should now work,
   including ``a[b]`` where ``b`` is an array of integers.
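As a small illustration of what ``a[b]`` means when ``b`` is an array of integers, here is a pure-Python sketch of the semantics (the helper ``fancy_index`` is a hypothetical name for this post, not NumPy's or PyPy's actual implementation):

```python
# Sketch of NumPy-style "fancy indexing": a[b] with b a sequence of
# integers returns a new array with a[i] for every i in b.
def fancy_index(a, b):
    """Return a new list [a[i] for i in b]; negative indices wrap."""
    n = len(a)
    out = []
    for i in b:
        if i < -n or i >= n:
            raise IndexError("index %d out of bounds for size %d" % (i, n))
        out.append(a[i % n])  # i % n maps negative indices into range
    return out

a = [10, 20, 30, 40]
print(fancy_index(a, [2, 0, -1, 2]))  # [30, 10, 40, 30]
```

Note that, unlike plain slicing, the result may repeat elements and reorder them freely.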
@@ -30,8 +31,8 @@
 
 * **pickling support for numarray** - hasn't started yet, but next on the list
 
-More importantly, we're getting very close to able to import the python part
-of the original numpy with only import modifications and running it's tests.
+More importantly, we're getting very close to being able to import the python part
+of the original numpy with only import modifications, and running its tests.
 Most tests will fail at this point, however it'll be a good start for another
 chapter :-)
 
diff --git a/blog/draft/py3k-status-update-7.rst b/blog/draft/py3k-status-update-7.rst
--- a/blog/draft/py3k-status-update-7.rst
+++ b/blog/draft/py3k-status-update-7.rst
@@ -12,38 +12,38 @@
 Python standard library tests.
 
 We currently pass 160 out of approximately 355 modules of CPython's standard
-test suite, fail 144 and skip apprixmately 51.
+test suite, fail 144 and skip approximately 51.
 
 Some highlights:
 
-o dictviews (the objects returned by dict.keys/values/items) has been greatly
+* dictviews (the objects returned by dict.keys/values/items) have been greatly
  improved, and now they fully support set operators
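For concreteness, these are the plain Python 3 semantics the branch now supports, runnable on any py3 interpreter:

```python
# Dict views behave like sets: they support &, |, - and ^ directly.
d1 = {'a': 1, 'b': 2, 'c': 3}
d2 = {'b': 2, 'c': 30, 'd': 4}

common_keys = d1.keys() & d2.keys()   # keys present in both dicts
only_in_d1 = d1.keys() - d2.keys()    # keys only in d1
same_items = d1.items() & d2.items()  # (key, value) pairs equal in both

print(sorted(common_keys))  # ['b', 'c']
print(sorted(only_in_d1))   # ['a']
print(sorted(same_items))   # [('b', 2)]  ('c' differs in value)
```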
 
-o a lot of tests has been fixed wrt complex numbers (and in particular the
-``__complex__`` method)
+* a lot of tests have been fixed wrt complex numbers (and in particular the
+  ``__complex__`` method)
 
-o _csv has been fixed and now it correctly handles unicode instead of bytes
+* _csv has been fixed and now it correctly handles unicode instead of bytes
 
-o more parser fixes, py3k list comprehension semantics; now you can no longer
+* more parser fixes, py3k list comprehension semantics; now you can no longer
   access the list comprehension variable after it finishes
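The new scoping rule is easy to check directly; in py3k the comprehension variable lives in its own scope, whereas Python 2 leaked it into the enclosing one:

```python
# In Python 3 the list-comprehension variable is not visible afterwards.
squares = [x * x for x in range(5)]

try:
    x  # NameError here: x only existed inside the comprehension
    leaked = True
except NameError:
    leaked = False

print(squares)  # [0, 1, 4, 9, 16]
print(leaked)   # False
```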
 
-o 2to3'd most of our custom lib_pypy
+* 2to3'd most of the lib_pypy modules (pypy's custom standard lib
+  replacements/additions)
 
-o py3-enabled pyrepl: this means that finally readline works at the command
+* py3-enabled pyrepl: this means that finally readline works at the command
   prompt, as well as builtins.input(). ``pdb`` seems to work, as well as
   fancycompleter_ to get colorful TAB completions :-)
 
-o py3 round
+* py3 round
 
-o further tightening/cleanup of the unicode handling (more usage of
-surrogateescape, surrogatepass among other things)
+* further tightening/cleanup of the unicode handling (more usage of
+  surrogateescape, surrogatepass among other things)
 
-o as well as keeping up with some big changes happening on the default branch
+* as well as keeping up with some big changes happening on the default branch
+  and of course various other fixes.
 
-Finally, I would like to thank Amaury Forgeot d'Arc for his significant
-contributions; among other things, Amaury recently worked on <all kinds of
-stuff listed above>
-
+Finally, we would like to thank Amaury Forgeot d'Arc for his significant
+contributions.
 
 cheers,
 Philip&Antonio
@@ -52,3 +52,4 @@
 .. _`py3k proposal`: http://pypy.org/py3donate.html
 .. _`py3k branch`: https://bitbucket.org/pypy/pypy/src/py3k
 .. _`py3k buildbots`: http://buildbot.pypy.org/summary?branch=py3k
+.. _`fancycompleter`: http://pypi.python.org/pypi/fancycompleter
diff --git a/talk/fscons2012/GIL.fig b/talk/fscons2012/GIL.fig
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/GIL.fig
@@ -0,0 +1,25 @@
+#FIG 3.2  Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4      
+100.00
+Single
+-2
+1200 2
+2 2 0 1 0 9 50 -1 20 0.000 0 0 7 0 0 5
+        1125 1350 2475 1350 2475 1800 1125 1800 1125 1350
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        3600 1350 4725 1350 4725 1800 3600 1800 3600 1350
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        5850 1350 7200 1350 7200 1800 5850 1800 5850 1350
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        2475 1800 3600 1800 3600 2250 2475 2250 2475 1800
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        4725 1800 5850 1800 5850 2250 4725 2250 4725 1800
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        1125 2025 7425 2025
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        1125 1575 7425 1575
diff --git a/talk/fscons2012/GIL.png b/talk/fscons2012/GIL.png
new file mode 100644
index 0000000000000000000000000000000000000000..aba489c1cacb9e419d76cff1b59c4a395c27abb1
GIT binary patch

[cut]

diff --git a/talk/fscons2012/Makefile b/talk/fscons2012/Makefile
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/Makefile
@@ -0,0 +1,18 @@
+# you can find rst2beamer.py here:
+# http://codespeak.net/svn/user/antocuni/bin/rst2beamer.py
+
+# WARNING: to work, it needs this patch for docutils
+# https://sourceforge.net/tracker/?func=detail&atid=422032&aid=1459707&group_id=38414
+
+talk.pdf: talk.rst author.latex title.latex stylesheet.latex
+	rst2beamer.py --stylesheet=stylesheet.latex --documentoptions=14pt talk.rst talk.latex || exit
+       sed 's/\\date{}/\\input{author.latex}/' -i talk.latex || exit
+       sed 's/\\maketitle/\\input{title.latex}/' -i talk.latex || exit
+	sed 's/\\usepackage\[latin1\]{inputenc}/\\usepackage[utf8]{inputenc}/' -i talk.latex || exit
+       pdflatex talk.latex  || exit
+
+view: talk.pdf
+       evince talk.pdf &
+
+xpdf: talk.pdf
+       xpdf talk.pdf &
diff --git a/talk/fscons2012/STM.fig b/talk/fscons2012/STM.fig
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/STM.fig
@@ -0,0 +1,27 @@
+#FIG 3.2  Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4      
+100.00
+Single
+-2
+1200 2
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        1125 1575 7425 1575
+2 2 0 1 0 9 50 -1 20 0.000 0 0 7 0 0 5
+        1125 1350 2475 1350 2475 1800 1125 1800 1125 1350
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        1125 2025 7425 2025
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        1125 1800 2250 1800 2250 2250 1125 2250 1125 1800
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        2700 1350 3825 1350 3825 1800 2700 1800 2700 1350
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        2475 1800 3600 1800 3600 2250 2475 2250 2475 1800
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        4050 1350 5400 1350 5400 1800 4050 1800 4050 1350
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        3825 1800 4950 1800 4950 2250 3825 2250 3825 1800
diff --git a/talk/fscons2012/STM.png b/talk/fscons2012/STM.png
new file mode 100644
index 0000000000000000000000000000000000000000..2de6551346d2db3ad3797f265f2b88a232b387fa
GIT binary patch

[cut]

diff --git a/talk/fscons2012/author.latex b/talk/fscons2012/author.latex
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/author.latex
@@ -0,0 +1,8 @@
+\definecolor{rrblitbackground}{rgb}{0.0, 0.0, 0.0}
+
+\title[Software Transactional Memory "for real"]{Software Transactional Memory{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }{ }"for real"}
+\author[Armin Rigo]
+{Armin Rigo}
+
+\institute{FSCONS 2012}
+\date{November 10, 2012}
diff --git a/talk/fscons2012/beamerdefs.txt b/talk/fscons2012/beamerdefs.txt
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/beamerdefs.txt
@@ -0,0 +1,108 @@
+.. colors
+.. ===========================
+
+.. role:: green
+.. role:: red
+
+
+.. general useful commands
+.. ===========================
+
+.. |pause| raw:: latex
+
+   \pause
+
+.. |small| raw:: latex
+
+   {\small
+
+.. |end_small| raw:: latex
+
+   }
+
+.. |scriptsize| raw:: latex
+
+   {\scriptsize
+
+.. |end_scriptsize| raw:: latex
+
+   }
+
+.. |strike<| raw:: latex
+
+   \sout{
+
+.. closed bracket
+.. ===========================
+
+.. |>| raw:: latex
+
+   }
+
+
+.. example block
+.. ===========================
+
+.. |example<| raw:: latex
+
+   \begin{exampleblock}{
+
+
+.. |end_example| raw:: latex
+
+   \end{exampleblock}
+
+
+
+.. alert block
+.. ===========================
+
+.. |alert<| raw:: latex
+
+   \begin{alertblock}{
+
+
+.. |end_alert| raw:: latex
+
+   \end{alertblock}
+
+
+
+.. columns
+.. ===========================
+
+.. |column1| raw:: latex
+
+   \begin{columns}
+      \begin{column}{0.45\textwidth}
+
+.. |column2| raw:: latex
+
+      \end{column}
+      \begin{column}{0.45\textwidth}
+
+
+.. |end_columns| raw:: latex
+
+      \end{column}
+   \end{columns}
+
+
+
+.. |snake| image:: ../../img/py-web-new.png
+           :scale: 15%
+           
+
+
+.. nested blocks
+.. ===========================
+
+.. |nested| raw:: latex
+
+   \begin{columns}
+      \begin{column}{0.85\textwidth}
+
+.. |end_nested| raw:: latex
+
+      \end{column}
+   \end{columns}
diff --git a/talk/fscons2012/stylesheet.latex b/talk/fscons2012/stylesheet.latex
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/stylesheet.latex
@@ -0,0 +1,11 @@
+\usetheme{Boadilla}
+\usecolortheme{whale}
+\setbeamercovered{transparent}
+\setbeamertemplate{navigation symbols}{}
+
+\definecolor{darkgreen}{rgb}{0, 0.5, 0.0}
+\newcommand{\docutilsrolegreen}[1]{\color{darkgreen}#1\normalcolor}
+\newcommand{\docutilsrolered}[1]{\color{red}#1\normalcolor}
+
+\newcommand{\green}[1]{\color{darkgreen}#1\normalcolor}
+\newcommand{\red}[1]{\color{red}#1\normalcolor}
diff --git a/talk/fscons2012/summary.txt b/talk/fscons2012/summary.txt
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/summary.txt
@@ -0,0 +1,36 @@
+
+- multicore machines, manycore machines
+
+- focus on cases where using more cores is harder than making more processes
+
+
+- using locks
+- using stm/htm
+
+- HTM: Haswell
+- STM: read/write barriers, picture
+
+
+- the deeper problem is: you have to use threads
+- messy
+
+- parallel with GC: common programming languages are either fully GC'ed or
+  not at all
+
+- what do you get if you run *everything* in STM/HTM?
+
+- longer transactions, corresponding to larger parts of the program
+
+- the thread model becomes implicit
+
+- demo
+
+
+- GIL/STM pictures
+- can pretend it is one-core
+
+- always gives correct results, but maybe too many conflicts
+
+- "the right side" of the problem, in my opinion: debugging tools etc.
+  (same as: GC is "the right side": no crashes, but you need occasionally
+  to understand memory-not-freed problems)
diff --git a/talk/fscons2012/talk.latex b/talk/fscons2012/talk.latex
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/talk.latex
@@ -0,0 +1,722 @@
+\documentclass[14pt]{beamer}
+\usepackage{hyperref}
+\usepackage{fancyvrb}
+
+\makeatletter
+\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax%
+    \let\PY@ul=\relax \let\PY@tc=\relax%
+    \let\PY@bc=\relax \let\PY@ff=\relax}
+\def\PY@tok#1{\csname PY@tok@#1\endcsname}
+\def\PY@toks#1+{\ifx\relax#1\empty\else%
+    \PY@tok{#1}\expandafter\PY@toks\fi}
+\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{%
+    \PY@it{\PY@bf{\PY@ff{#1}}}}}}}
+\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
+
+\def\PY@tok@gd{\def\PY@bc##1{\fcolorbox[rgb]{0.80,0.00,0.00}{1.00,0.80,0.80}{##1}}}
+\def\PY@tok@gu{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.00}{##1}}}
+\def\PY@tok@gt{\def\PY@tc##1{\textcolor[rgb]{0.60,0.80,0.40}{##1}}}
+\def\PY@tok@gs{\let\PY@bf=\textbf}
+\def\PY@tok@gr{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}}
+\def\PY@tok@cm{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.00,0.60,1.00}{##1}}}
+\def\PY@tok@vg{\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.20}{##1}}}
+\def\PY@tok@m{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@mh{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@cs{\let\PY@bf=\textbf\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.00,0.60,1.00}{##1}}}
+\def\PY@tok@ge{\let\PY@it=\textit}
+\def\PY@tok@vc{\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.20}{##1}}}
+\def\PY@tok@il{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@go{\def\PY@tc##1{\textcolor[rgb]{0.67,0.67,0.67}{##1}}}
+\def\PY@tok@cp{\def\PY@tc##1{\textcolor[rgb]{0.00,0.60,0.60}{##1}}}
+\def\PY@tok@gi{\def\PY@bc##1{\fcolorbox[rgb]{0.00,0.80,0.00}{0.80,1.00,0.80}{##1}}}
+\def\PY@tok@gh{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.00}{##1}}}
+\def\PY@tok@ni{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}}
+\def\PY@tok@nl{\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,1.00}{##1}}}
+\def\PY@tok@nn{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.80,1.00}{##1}}}
+\def\PY@tok@no{\def\PY@tc##1{\textcolor[rgb]{0.20,0.40,0.00}{##1}}}
+\def\PY@tok@na{\def\PY@tc##1{\textcolor[rgb]{0.20,0.00,0.60}{##1}}}
+\def\PY@tok@nb{\def\PY@tc##1{\textcolor[rgb]{0.20,0.40,0.40}{##1}}}
+\def\PY@tok@nc{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.67,0.53}{##1}}}
+\def\PY@tok@nd{\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,1.00}{##1}}}
+\def\PY@tok@ne{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.80,0.00,0.00}{##1}}}
+\def\PY@tok@nf{\def\PY@tc##1{\textcolor[rgb]{0.80,0.00,1.00}{##1}}}
+\def\PY@tok@si{\def\PY@tc##1{\textcolor[rgb]{0.67,0.00,0.00}{##1}}}
+\def\PY@tok@s2{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@vi{\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.20}{##1}}}
+\def\PY@tok@nt{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.20,0.00,0.60}{##1}}}
+\def\PY@tok@nv{\def\PY@tc##1{\textcolor[rgb]{0.00,0.20,0.20}{##1}}}
+\def\PY@tok@s1{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@gp{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.60}{##1}}}
+\def\PY@tok@sh{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@ow{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.00}{##1}}}
+\def\PY@tok@sx{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@bp{\def\PY@tc##1{\textcolor[rgb]{0.20,0.40,0.40}{##1}}}
+\def\PY@tok@c1{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.00,0.60,1.00}{##1}}}
+\def\PY@tok@kc{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@c{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.00,0.60,1.00}{##1}}}
+\def\PY@tok@mf{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@err{\def\PY@tc##1{\textcolor[rgb]{0.67,0.00,0.00}{##1}}\def\PY@bc##1{\colorbox[rgb]{1.00,0.67,0.67}{##1}}}
+\def\PY@tok@kd{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@ss{\def\PY@tc##1{\textcolor[rgb]{1.00,0.80,0.20}{##1}}}
+\def\PY@tok@sr{\def\PY@tc##1{\textcolor[rgb]{0.20,0.67,0.67}{##1}}}
+\def\PY@tok@mo{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@mi{\def\PY@tc##1{\textcolor[rgb]{1.00,0.40,0.00}{##1}}}
+\def\PY@tok@kn{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@o{\def\PY@tc##1{\textcolor[rgb]{0.33,0.33,0.33}{##1}}}
+\def\PY@tok@kr{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@s{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@kp{\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@w{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}}
+\def\PY@tok@kt{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.47,0.53}{##1}}}
+\def\PY@tok@sc{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@sb{\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@k{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.40,0.60}{##1}}}
+\def\PY@tok@se{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+\def\PY@tok@sd{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.80,0.20,0.00}{##1}}}
+
+\def\PYZbs{\char`\\}
+\def\PYZus{\char`\_}
+\def\PYZob{\char`\{}
+\def\PYZcb{\char`\}}
+\def\PYZca{\char`\^}
+\def\PYZsh{\char`\#}
+\def\PYZpc{\char`\%}
+\def\PYZdl{\char`\$}
+\def\PYZti{\char`\~}
+% for compatibility with earlier versions
+\def\PYZat{@}
+\def\PYZlb{[}
+\def\PYZrb{]}
+\makeatother
+
+
+\definecolor{rrblitbackground}{rgb}{0.55, 0.3, 0.1}
+
+\newenvironment{rtbliteral}{
+
+\begin{ttfamily}
+
+\color{rrblitbackground}
+
+}{
+
+\end{ttfamily}
+
+}
+
+% generated by Docutils <http://docutils.sourceforge.net/>
+\usepackage{fixltx2e} % LaTeX patches, \textsubscript
+\usepackage{cmap} % fix search and cut-and-paste in Acrobat
+\usepackage{ifthen}
+\usepackage[T1]{fontenc}
+\usepackage[utf8]{inputenc}
+\usepackage{graphicx}
+
+%%% Custom LaTeX preamble
+% PDF Standard Fonts
+\usepackage{mathptmx} % Times
+\usepackage[scaled=.90]{helvet}
+\usepackage{courier}
+
+%%% User specified packages and stylesheets
+\input{stylesheet.latex}
+
+%%% Fallback definitions for Docutils-specific commands
+
+% hyperlinks:
+\ifthenelse{\isundefined{\hypersetup}}{
+  \usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue]{hyperref}
+  \urlstyle{same} % normal text font (alternatives: tt, rm, sf)
+}{}
+\hypersetup{
+  pdftitle={Software Transactional Memory ``for real''},
+}
+
+%%% Title Data
+\title{\phantomsection%
+  Software Transactional Memory ``for real''%
+  \label{software-transactional-memory-for-real}}
+\author{}
+\input{author.latex}
+
+%%% Body
+\begin{document}
+\input{title.latex}
+
+% colors
+
+% ===========================
+
+% general useful commands
+
+% ===========================
+
+% closed bracket
+
+% ===========================
+
+% example block
+
+% ===========================
+
+% alert block
+
+% ===========================
+
+% columns
+
+% ===========================
+
+% nested blocks
+
+% ===========================
+\begin{frame}
+\frametitle{Introduction}
+
+%
+\begin{itemize}
+
+\item This talk is about programming multi- or many-core machines
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{About myself}
+
+%
+\begin{itemize}
+
+\item Armin Rigo
+
+\item ``Language implementation guy''
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item PyPy project
+%
+\begin{itemize}
+
+\item Python in Python
+
+\item includes a Just-in-Time Compiler ``Generator'' for Python
+and any other dynamic language
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Motivation}
+
+%
+\begin{itemize}
+
+\item A single-core program is getting exponentially slower than a multi-core one
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Using several processes exchanging data
+%
+\begin{itemize}
+
+\item works fine in some cases
+
+\item but becomes a large mess in others
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Using several threads
+%
+\begin{itemize}
+
+\item this talk!
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Common solution}
+
+%
+\begin{itemize}
+
+\item Organize your program in multiple threads
+
+\item Add synchronization when accessing shared, non-read-only data
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Synchronization with locks}
+
+%
+\begin{itemize}
+
+\item Carefully place locks around every access to shared data
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item How do you know if you missed a place?
+%
+\begin{itemize}
+
+\item hard to catch by writing tests
+
+\item instead you get obscure rare run-time crashes
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Issues when scaling to a large program
+%
+\begin{itemize}
+
+\item order of acquisition
+
+\item deadlocks
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}[containsverbatim,fragile]
+\frametitle{Synchronization with TM}
+
+%
+\begin{itemize}
+
+\item TM = Transactional Memory
+
+\end{itemize}
+
+\pause
+
+\begin{Verbatim}[commandchars=\\\{\}]
+----------------   --------------------
+Locks              Transactional Memory
+----------------   --------------------
+
+mylock.acquire();    atomic \PYZob{}
+x = list1.pop();       x = list1.pop();
+list2.append(x);       list2.append(x);
+mylock.release();    \PYZcb{}
+\end{Verbatim}
+
+\end{frame}
+\begin{frame}
+\frametitle{Locks versus TM}
+
+%
+\begin{itemize}
+
+\item Locks
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{withlock.png}}
+%
+\begin{itemize}
+
+\item TM
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{withstm0.png}}
+\end{frame}
+\begin{frame}
+\frametitle{Locks versus TM}
+
+%
+\begin{itemize}
+
+\item Locks
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{withlock.png}}
+%
+\begin{itemize}
+
+\item TM in case of conflict
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{withstm.png}}
+\end{frame}
+\begin{frame}
+\frametitle{Synchronization with TM}
+
+%
+\begin{itemize}
+
+\item ``Optimistic'' approach:
+%
+\begin{itemize}
+
+\item no lock to protect shared data in memory
+
+\item instead, track all memory accesses
+
+\item detect actual conflicts
+
+\item if conflict, restart the whole ``transaction''
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Easier to use
+%
+\begin{itemize}
+
+\item no need to name locks
+
+\item no deadlocks
+
+\item ``composability''
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{HTM versus STM}
+
+%
+\begin{itemize}
+
+\item HTM = Hardware Transactional Memory
+%
+\begin{itemize}
+
+\item Intel Haswell CPU, 2013
+
+\item and others
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item STM = Software Transactional Memory
+%
+\begin{itemize}
+
+\item various approaches
+
+\item large overhead (2x-10x), but getting faster
+
+\item experimental in PyPy: read/write barriers, as with GC
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{The catch}
+
+
+\pause
+%
+\begin{itemize}
+
+\item You Still Have To Use Threads
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Threads are hard to debug, non-reproducible
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Threads are Messy
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Issue with threads}
+
+%
+\begin{itemize}
+
+\item TM does not solve this problem:
+
+\item How do you know if you missed a place to put \texttt{atomic} around?
+%
+\begin{itemize}
+
+\item hard to catch by writing tests
+
+\item instead you get obscure rare run-time crashes
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item What if we put \texttt{atomic} everywhere?
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Analogy with Garbage Collection}
+
+%
+\begin{itemize}
+
+\item Explicit Memory Management:
+%
+\begin{itemize}
+
+\item messy, hard to debug rare leaks or corruptions
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Automatic GC solves it
+%
+\begin{itemize}
+
+\item common languages either have a GC or not
+
+\item if they have a GC, it controls almost \emph{all} objects
+
+\item not just a small part of them
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Proposed solution}
+
+%
+\begin{itemize}
+
+\item Put \texttt{atomic} everywhere...
+
+\item in other words, Run Everything with TM
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Proposed solution}
+
+%
+\begin{itemize}
+
+\item Really needs TM.  With locks, you'd get this:
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{GIL.png}}
+%
+\begin{itemize}
+
+\item With TM you can get this:
+
+\end{itemize}
+
+\noindent\makebox[\textwidth][c]{\includegraphics[scale=0.700000]{STM.png}}
+\end{frame}
+\begin{frame}
+\frametitle{In a few words}
+
+%
+\begin{itemize}
+
+\item Longer transactions
+
+\item Corresponding to larger parts of the program
+
+\item The underlying multi-threaded model becomes implicit
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Typical example}
+
+%
+\begin{itemize}
+
+\item You want to run \texttt{f1()} and \texttt{f2()} and \texttt{f3()}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Assume they are ``mostly independent''
+%
+\begin{itemize}
+
+\item i.e. we expect that we can run them in parallel
+
+\item but we cannot prove it, we just hope that in the common case we can
+
+\end{itemize}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item In case of conflicts, we don't want random behavior
+%
+\begin{itemize}
+
+\item i.e. we don't want thread-like non-determinism and crashes
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Pooling and atomic statements}
+
+%
+\begin{itemize}
+
+\item Solution: use a library that creates a pool of threads
+
+\item Each thread picks a function from the list and runs it
+with \texttt{atomic}
+
+\end{itemize}
+\end{frame}
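The pooling pattern on this slide can be sketched with the standard library; since plain CPython has no ``atomic``, a single lock stands in for it here (an assumption for the demo), while with pypy-stm the same structure would run the function bodies optimistically in parallel:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A lock playing the role of `atomic` in this sketch only.
_atomic = threading.Lock()

results = []

def f1(): results.append('f1')
def f2(): results.append('f2')
def f3(): results.append('f3')

def run_atomically(func):
    # Each pooled thread picks a function and runs it "atomically".
    with _atomic:
        func()

with ThreadPoolExecutor(max_workers=3) as pool:
    for func in [f1, f2, f3]:
        pool.submit(run_atomically, func)

print(sorted(results))  # ['f1', 'f2', 'f3']
```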
+\begin{frame}
+\frametitle{Results}
+
+%
+\begin{itemize}
+
+\item The behavior is ``as if'' we had run \texttt{f1()}, \texttt{f2()}
+and \texttt{f3()} sequentially
+
+\item The programmer chooses whether he wants this fixed order,
+or whether any order is fine
+
+\item Threads are hidden from the programmer
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{More generally}
+
+%
+\begin{itemize}
+
+\item This was an example only
+
+\item \textbf{TM gives various new ways to hide threads under a nice interface}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{Not the Ultimate Solution}
+
+%
+\begin{itemize}
+
+\item Much easier for the programmer to get reproducible results
+
+\item But maybe too many conflicts
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item ``The right side'' of the problem
+%
+\begin{itemize}
+
+\item start with a working program, and improve performance
+
+\item as opposed to: with locks, start with a fast program, and debug crashes
+
+\item we will need new debugging tools
+
+\end{itemize}
+
+\end{itemize}
+\end{frame}
+\begin{frame}
+\frametitle{PyPy-STM}
+
+%
+\begin{itemize}
+
+\item PyPy-STM: a version of PyPy with Software Transactional Memory
+%
+\begin{itemize}
+
+\item in-progress, but basically working
+
+\item solves the ``GIL issue'' but more importantly adds \texttt{atomic}
+
+\end{itemize}
+
+\item \url{http://pypy.org/}
+
+\end{itemize}
+
+\pause
+%
+\begin{itemize}
+
+\item Thank you!
+
+\end{itemize}
+\end{frame}
+
+\end{document}
diff --git a/talk/fscons2012/talk.pdf b/talk/fscons2012/talk.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1a6257d12b0ddced11a617033ddb00dbafc7612d
GIT binary patch

[cut]

diff --git a/talk/fscons2012/talk.rst b/talk/fscons2012/talk.rst
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/talk.rst
@@ -0,0 +1,340 @@
+.. include:: beamerdefs.txt
+
+============================================
+Software Transactional Memory "for real"
+============================================
+
+
+Introduction
+------------------
+
+* This talk is about programming multi- or many-core machines
+
+
+About myself
+------------------
+
+* Armin Rigo
+
+* "Language implementation guy"
+
+|pause|
+
+* PyPy project
+
+  - Python in Python
+
+  - includes a Just-in-Time Compiler "Generator" for Python
+    and any other dynamic language
+
+
+Motivation
+----------------------
+
+* A single-core program is getting exponentially slower than a multi-core one
+
+|pause|
+
+* Using several processes exchanging data
+
+  - works fine in some cases
+
+  - but becomes a large mess in others
+
+|pause|
+
+* Using several threads
+
+  - this talk!
+
+
+Common solution
+----------------------
+
+* Organize your program in multiple threads
+
+* Add synchronization when accessing shared, non-read-only data
+
+
+Synchronization with locks
+--------------------------
+
+* Carefully place locks around every access to shared data
+
+|pause|
+
+* How do you know if you missed a place?
+
+  - hard to catch by writing tests
+
+  - instead you get obscure rare run-time crashes
+
+|pause|
+
+* Issues when scaling to a large program
+
+  - order of acquisition
+
+  - deadlocks
+
+
+Synchronization with TM
+-----------------------
+
+* TM = Transactional Memory
+
+|pause|
+
+.. sourcecode:: plain
+
+    ----------------   --------------------
+    Locks              Transactional Memory
+    ----------------   --------------------
+
+    mylock.acquire();    atomic {
+    x = list1.pop();       x = list1.pop();
+    list2.append(x);       list2.append(x);
+    mylock.release();    }
+
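The same contrast can be tried in ordinary Python. Plain CPython has no ``atomic`` statement, so the sketch below emulates it with a global lock; under PyPy-STM the ``atomic`` block would be a real transaction. The names ``move_with_lock`` and ``move_with_atomic`` are illustrative only:

```python
import threading

# Stand-in for an STM ``atomic`` block: a global lock gives the same
# all-or-nothing semantics for this example (no real transactions here).
_atomic = threading.RLock()

list1 = [1, 2, 3]
list2 = []

def move_with_lock(mylock):
    # Lock style: the programmer names and manages the lock explicitly.
    mylock.acquire()
    try:
        x = list1.pop()
        list2.append(x)
    finally:
        mylock.release()

def move_with_atomic():
    # TM style: just declare the block atomic; no lock to name.
    with _atomic:
        x = list1.pop()
        list2.append(x)

move_with_lock(threading.Lock())
move_with_atomic()
print(list2)  # → [3, 2]
```

Either way the pop and the append happen together; the difference is that the TM version needs no lock-naming discipline from the programmer.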
+
+Locks versus TM
+---------------
+
+* Locks
+
+.. image:: withlock.png
+   :scale: 70%
+   :align: center
+
+* TM
+
+.. image:: withstm0.png
+   :scale: 70%
+   :align: center
+
+
+Locks versus TM
+---------------
+
+* Locks
+
+.. image:: withlock.png
+   :scale: 70%
+   :align: center
+
+* TM in case of conflict
+
+.. image:: withstm.png
+   :scale: 70%
+   :align: center
+
+
+Synchronization with TM
+-----------------------
+
+* "Optimistic" approach:
+
+  - no lock to protect shared data in memory
+
+  - instead, track all memory accesses
+
+  - detect actual conflicts
+
+  - if conflict, restart the whole "transaction"
+
+|pause|
+
+* Easier to use
+
+  - no need to name locks
+
+  - no deadlocks
+
+  - "composability"
+
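A minimal sketch of this optimistic protocol, assuming a toy version-per-cell design. ``Cell``, ``run_transaction`` and the commit-time validation are all invented for illustration; real STM implementations handle contention and isolation far more carefully:

```python
import threading

class Cell:
    """A shared memory cell with a version counter for conflict detection."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()   # serializes only the short commit step

def run_transaction(body):
    while True:                          # restart loop on conflict
        reads, writes = {}, {}
        def read(cell):
            if cell in writes:           # see our own buffered writes
                return writes[cell]
            reads[cell] = cell.version   # track what we read, and when
            return cell.value
        def write(cell, value):
            writes[cell] = value         # buffer writes until commit
        result = body(read, write)
        with _commit_lock:
            # validate: did anything we read change under us?
            if all(cell.version == v for cell, v in reads.items()):
                for cell, value in writes.items():
                    cell.value = value
                    cell.version += 1
                return result
        # conflict detected: fall through and retry the whole transaction

a, b = Cell(10), Cell(0)

def transfer(read, write):
    x = read(a)
    write(a, x - 3)
    write(b, read(b) + 3)

run_transaction(transfer)
print(a.value, b.value)  # → 7 3
```

The point of the design is visible in the commit step: no lock is held while ``body`` runs, only while the reads are re-validated and the buffered writes are applied.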
+
+HTM versus STM
+--------------
+
+* HTM = Hardware Transactional Memory
+
+  - Intel Haswell CPU, 2013
+
+  - and others
+
+|pause|
+
+* STM = Software Transactional Memory
+
+  - various approaches
+
+  - large overhead (2x-10x), but getting faster
+
+  - experimental in PyPy: read/write barriers, as with GC
+
+
+The catch
+---------
+
+|pause|
+
+* You Still Have To Use Threads
+
+|pause|
+
+* Threads are hard to debug, non-reproducible
+
+|pause|
+
+* Threads are Messy
+
+
+Issue with threads
+------------------
+
+* TM does not solve this problem:
+
+* How do you know if you missed a place to put ``atomic`` around?
+
+  - hard to catch by writing tests
+
+  - instead you get obscure rare run-time crashes
+
+|pause|
+
+* What if we put ``atomic`` everywhere?
+
+
+Analogy with Garbage Collection
+-------------------------------
+
+* Explicit Memory Management:
+
+  - messy, hard to debug rare leaks or corruptions
+
+|pause|
+
+* Automatic GC solves it
+
+  - common languages either have a GC or not
+
+  - if they have a GC, it controls almost *all* objects
+
+  - not just a small part of them
+
+
+Proposed solution
+-----------------
+
+* Put ``atomic`` everywhere...
+
+* in other words, Run Everything with TM
+
+
+Proposed solution
+-----------------
+
+* Really needs TM.  With locks, you'd get this:
+
+.. image:: GIL.png
+   :scale: 70%
+   :align: center
+
+* With TM you can get this:
+
+.. image:: STM.png
+   :scale: 70%
+   :align: center
+
+
+In a few words
+--------------
+
+* Longer transactions
+
+* Corresponding to larger parts of the program
+
+* The underlying multi-threaded model becomes implicit
+
+
+Typical example
+---------------
+
+* You want to run ``f1()`` and ``f2()`` and ``f3()``
+
+|pause|
+
+* Assume they are "mostly independent"
+
+  - i.e. we expect that we can run them in parallel
+
+  - but we cannot prove it, we just hope that in the common case we can
+
+|pause|
+
+* In case of conflicts, we don't want random behavior
+
+  - i.e. we don't want thread-like non-determinism and crashes
+
+
+Pooling and atomic statements
+-----------------------------
+
+* Solution: use a library that creates a pool of threads
+
+* Each thread picks a function from the list and runs it
+  with ``atomic``
+
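A hypothetical sketch of such a pool, with ``atomic`` again emulated by a global lock (under PyPy-STM the same structure could run non-conflicting functions in parallel; all names here are illustrative):

```python
import threading

atomic = threading.RLock()        # stand-in for the STM atomic block
results = []

def f1(): results.append("f1")
def f2(): results.append("f2")
def f3(): results.append("f3")

def worker(jobs):
    while True:
        try:
            f = jobs.pop(0)       # each thread picks the next function...
        except IndexError:
            return                # ...until the list is empty
        with atomic:              # ...and runs it atomically
            f()

jobs = [f1, f2, f3]
threads = [threading.Thread(target=worker, args=(jobs,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → ['f1', 'f2', 'f3']
```

Which thread runs which function is hidden inside the pool; the program only sees that every function ran exactly once, atomically.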
+
+Results
+-------
+
+* The behavior is "as if" we had run ``f1()``, ``f2()``
+  and ``f3()`` sequentially
+
+* The programmer chooses whether this fixed order is required,
+  or whether any order is fine
+
+* Threads are hidden from the programmer
+
+
+More generally
+--------------
+
+* This was an example only
+
+* **TM gives various new ways to hide threads under a nice interface**
+
+
+Not the Ultimate Solution
+-------------------------
+
+* Much easier for the programmer to get reproducible results
+
+* But maybe too many conflicts
+
+|pause|
+
+* "The right side" of the problem
+
+  - start with a working program, and improve performance
+
+  - as opposed to: with locks, start with a fast program, and debug crashes
+
+  - we will need new debugging tools
+
+
+PyPy-STM
+--------
+
+* PyPy-STM: a version of PyPy with Software Transactional Memory
+
+  - in-progress, but basically working
+
+  - solves the "GIL issue" but more importantly adds ``atomic``
+
+* http://pypy.org/
+
+|pause|
+
+* Thank you!
diff --git a/talk/fscons2012/title.latex b/talk/fscons2012/title.latex
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/title.latex
@@ -0,0 +1,5 @@
+\begin{titlepage}
+\begin{figure}[h]
+\includegraphics[width=60px]{../../logo/pypy_small128.png}
+\end{figure}
+\end{titlepage}
diff --git a/talk/fscons2012/withlock.fig b/talk/fscons2012/withlock.fig
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/withlock.fig
@@ -0,0 +1,27 @@
+#FIG 3.2  Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4      
+100.00
+Single
+-2
+1200 2
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        3600 1350 4725 1350 4725 1800 3600 1800 3600 1350
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        4725 1800 5850 1800 5850 2250 4725 2250 4725 1800
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 2025 8010 2025
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 1575 8010 1575
+2 2 0 1 0 6 50 -1 20 0.000 0 0 7 0 0 5
+        1125 1350 3600 1350 3600 1800 1125 1800 1125 1350
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        4725 1350 7200 1350 7200 1800 4725 1800 4725 1350
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        5850 1800 7515 1800 7515 2250 5850 2250 5850 1800
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        1125 1800 3735 1800 3735 2295 1125 2295 1125 1800
diff --git a/talk/fscons2012/withlock.png b/talk/fscons2012/withlock.png
new file mode 100644
index 0000000000000000000000000000000000000000..005e6f0d2dc8168971fa293603656d515687de5e
GIT binary patch

[cut]

diff --git a/talk/fscons2012/withstm.fig b/talk/fscons2012/withstm.fig
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/withstm.fig
@@ -0,0 +1,29 @@
+#FIG 3.2  Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4      
+100.00
+Single
+-2
+1200 2
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        4725 1350 7200 1350 7200 1800 4725 1800 4725 1350
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 1575 8010 1575
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 2025 8010 2025
+2 2 0 1 0 6 50 -1 20 0.000 0 0 7 0 0 5
+        1125 1350 3600 1350 3600 1800 1125 1800 1125 1350
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        3600 1350 4725 1350 4725 1800 3600 1800 3600 1350
+2 2 0 1 0 18 50 -1 20 0.000 0 0 -1 0 0 5
+        3735 1800 4860 1800 4860 2295 3735 2295 3735 1800
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        4275 1800 5400 1800 5400 2295 4275 2295 4275 1800
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        5400 1800 7065 1800 7065 2295 5400 2295 5400 1800
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        1125 1800 3735 1800 3735 2295 1125 2295 1125 1800
diff --git a/talk/fscons2012/withstm.png b/talk/fscons2012/withstm.png
new file mode 100644
index 0000000000000000000000000000000000000000..f266dceadc96324510e1919d661d81245d24a74c
GIT binary patch

[cut]

diff --git a/talk/fscons2012/withstm0.fig b/talk/fscons2012/withstm0.fig
new file mode 100644
--- /dev/null
+++ b/talk/fscons2012/withstm0.fig
@@ -0,0 +1,27 @@
+#FIG 3.2  Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4      
+100.00
+Single
+-2
+1200 2
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        4725 1350 7200 1350 7200 1800 4725 1800 4725 1350
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 1575 8010 1575
+2 1 0 1 0 4 51 -1 20 0.000 0 0 7 1 0 2
+       0 0 1.00 60.00 120.00
+        630 2025 8010 2025
+2 2 0 1 0 6 50 -1 20 0.000 0 0 7 0 0 5
+        1125 1350 3600 1350 3600 1800 1125 1800 1125 1350
+2 2 0 1 0 9 50 -1 20 0.000 0 0 -1 0 0 5
+        3600 1350 4725 1350 4725 1800 3600 1800 3600 1350
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        4860 1800 6525 1800 6525 2295 4860 2295 4860 1800
+2 2 0 1 0 4 50 -1 20 0.000 0 0 -1 0 0 5
+        3735 1800 4860 1800 4860 2295 3735 2295 3735 1800
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+        1125 1800 3735 1800 3735 2295 1125 2295 1125 1800
diff --git a/talk/fscons2012/withstm0.png b/talk/fscons2012/withstm0.png
new file mode 100644
index 0000000000000000000000000000000000000000..19fddb01f1ec2fd774299bde2d3995ca59b18eec
GIT binary patch

[cut]

diff --git a/talk/vmil2012/Makefile b/talk/vmil2012/Makefile
--- a/talk/vmil2012/Makefile
+++ b/talk/vmil2012/Makefile
@@ -1,11 +1,23 @@
 
-jit-guards.pdf: paper.tex paper.bib zotero.bib figures/log.tex figures/example.tex figures/benchmarks_table.tex figures/backend_table.tex figures/ops_count_table.tex figures/loop_bridge.pdf figures/guard_table.tex figures/resume_data_table.tex figures/failing_guards_table.tex
+jit-guards.pdf: paper.tex paper.bib zotero.bib figures/log.tex figures/example.tex figures/benchmarks_table.tex figures/backend_table.tex figures/ops_count_table.tex figures/resume_data.pdf figures/loop_bridge.pdf figures/guard_table.tex figures/resume_data_table.tex figures/failing_guards_table.tex
        pdflatex paper
        bibtex paper
        pdflatex paper
        pdflatex paper
        mv paper.pdf jit-guards.pdf
 
+print: paper.tex paper.bib zotero.bib figures/log.tex figures/example.tex figures/benchmarks_table.tex figures/backend_table.tex figures/ops_count_table.tex figures/resume_data.eps figures/loop_bridge.eps figures/guard_table.tex figures/resume_data_table.tex figures/failing_guards_table.tex
+       latex paper
+       bibtex paper
+       latex paper
+       latex paper
+       dvips -t letter paper
+       sed -i '' -e "s/paper.dvi/The Efficient Handling of Guards in the Design of RPython's Tracing JIT/g" paper.ps
+       ps2pdf -sPAPERSIZE=letter paper.ps      
+       mv paper.pdf vmil01-schneider.pdf
+       mv paper.ps vmil01-schneider.ps
+       @echo "Or run Adobe Distiller"  
+
 UNAME := $(shell "uname")
 view: jit-guards.pdf
 ifeq ($(UNAME), Linux)
@@ -48,4 +60,6 @@
 clean:
        rm -f *.aux *.bbl *.blg *.log *.tdo
        rm -f *.pdf
+       rm -f *.dvi
+       rm -f *.ps
        rm -f figures/*table.tex figures/*table.aux
diff --git a/talk/vmil2012/figures/loop_bridge.eps b/talk/vmil2012/figures/loop_bridge.eps
new file mode 100644
--- /dev/null
+++ b/talk/vmil2012/figures/loop_bridge.eps
@@ -0,0 +1,15289 @@
+%!PS-Adobe-3.0 EPSF-3.0
+%%HiResBoundingBox: 0.000000 0.000000 407.000000 544.000000
+%APL_DSC_Encoding: UTF8
+%APLProducer: (Version 10.8.2 (Build 12C60) Quartz PS Context)
+%%Title: (Unknown)
+%%Creator: (Unknown)
+%%CreationDate: (Unknown)
+%%For: (Unknown)
+%%DocumentData: Clean7Bit
+%%LanguageLevel: 2
+%%Pages: 1
+%%BoundingBox: 0 0 407 544
+%%EndComments
+%%BeginProlog
+%%BeginFile: cg-pdf.ps
+%%Copyright: Copyright 2000-2004 Apple Computer Incorporated.
+%%Copyright: All Rights Reserved.
+currentpacking true setpacking
+/cg_md 141 dict def
+cg_md begin
+/L3? languagelevel 3 ge def
+/bd{bind def}bind def
+/ld{load def}bd
+/xs{exch store}bd
+/xd{exch def}bd
+/cmmtx matrix def
+mark
+/sc/setcolor
+/scs/setcolorspace
+/dr/defineresource
+/fr/findresource
+/T/true
+/F/false
+/d/setdash
+/w/setlinewidth
+/J/setlinecap
+/j/setlinejoin
+/M/setmiterlimit
+/i/setflat
+/rc/rectclip
+/rf/rectfill
+/rs/rectstroke
+/f/fill
+/f*/eofill
+/sf/selectfont
+/s/show
+/xS/xshow
+/yS/yshow
+/xyS/xyshow
+/S/stroke
+/m/moveto
+/l/lineto
+/c/curveto
+/h/closepath
+/n/newpath
+/q/gsave
+/Q/grestore
+counttomark 2 idiv
+{ld}repeat pop
+/SC{   
+    /ColorSpace fr scs
+}bd
+/sopr /setoverprint where{pop/setoverprint}{/pop}ifelse ld
+/soprm /setoverprintmode where{pop/setoverprintmode}{/pop}ifelse ld
+/cgmtx matrix def
+/sdmtx{cgmtx currentmatrix pop}bd
+/CM {cgmtx setmatrix}bd                
+/cm {cmmtx astore CM concat}bd 
+/W{clip newpath}bd
+/W*{eoclip newpath}bd
+statusdict begin product end dup (HP) anchorsearch{
+    pop pop pop        
+    true
+}{
+    pop        
+   (hp) anchorsearch{
+       pop pop true
+    }{
+       pop false
+    }ifelse
+}ifelse
+{      
+    { 
+       { 
+           pop pop 
+           (0)dup 0 4 -1 roll put
+           F charpath
+       }cshow
+    }
+}{
+    {F charpath}
+}ifelse
+/cply exch bd
+/cps {cply stroke}bd
+/pgsave 0 def
+/bp{/pgsave save store}bd
+/ep{pgsave restore showpage}def                
+/re{4 2 roll m 1 index 0 rlineto 0 exch rlineto neg 0 rlineto h}bd
+/scrdict 10 dict def
+/scrmtx matrix def
+/patarray 0 def
+/createpat{patarray 3 1 roll put}bd
+/makepat{
+scrmtx astore pop
+gsave
+initgraphics
+CM 
+patarray exch get
+scrmtx
+makepattern
+grestore
+setpattern
+}bd
+/cg_BeginEPSF{
+    userdict save/cg_b4_Inc_state exch put
+    userdict/cg_endepsf/cg_EndEPSF load put
+    count userdict/cg_op_count 3 -1 roll put 
+    countdictstack dup array dictstack userdict/cg_dict_array 3 -1 roll put
+    3 sub{end}repeat
+    /showpage {} def
+    0 setgray 0 setlinecap 1 setlinewidth 0 setlinejoin
+    10 setmiterlimit [] 0 setdash newpath
+    false setstrokeadjust false setoverprint   
+}bd
+/cg_EndEPSF{
+  countdictstack 3 sub { end } repeat
+  cg_dict_array 3 1 index length 3 sub getinterval
+  {begin}forall
+  count userdict/cg_op_count get sub{pop}repeat
+  userdict/cg_b4_Inc_state get restore
+  F setpacking
+}bd
+/cg_biproc{currentfile/RunLengthDecode filter}bd
+/cg_aiproc{currentfile/ASCII85Decode filter/RunLengthDecode filter}bd
+/ImageDataSource 0 def
+L3?{
+    /cg_mibiproc{pop pop/ImageDataSource{cg_biproc}def}bd
+    /cg_miaiproc{pop pop/ImageDataSource{cg_aiproc}def}bd
+}{
+    /ImageBandMask 0 def
+    /ImageBandData 0 def
+    /cg_mibiproc{
+       string/ImageBandMask xs
+       string/ImageBandData xs
+       /ImageDataSource{[currentfile/RunLengthDecode filter dup ImageBandMask/readstring cvx
+           /pop cvx dup ImageBandData/readstring cvx/pop cvx]cvx bind}bd
+    }bd
+    /cg_miaiproc{      
+       string/ImageBandMask xs
+       string/ImageBandData xs
+       /ImageDataSource{[currentfile/ASCII85Decode filter/RunLengthDecode filter
+           dup ImageBandMask/readstring cvx
+           /pop cvx dup ImageBandData/readstring cvx/pop cvx]cvx bind}bd
+    }bd
+}ifelse
+/imsave 0 def
+/BI{save/imsave xd mark}bd
+/EI{imsave restore}bd
+/ID{
+counttomark 2 idiv
+dup 2 add      
+dict begin
+{def} repeat
+pop            
+/ImageType 1 def
+/ImageMatrix[Width 0 0 Height neg 0 Height]def
+currentdict dup/ImageMask known{ImageMask}{F}ifelse exch
+L3?{
+    dup/MaskedImage known
+    { 
+       pop
+       <<
+           /ImageType 3
+           /InterleaveType 2
+           /DataDict currentdict
+           /MaskDict
+           <<  /ImageType 1
+               /Width Width
+               /Height Height
+               /ImageMatrix ImageMatrix
+               /BitsPerComponent 1
+               /Decode [0 1]
+               currentdict/Interpolate known
+               {/Interpolate Interpolate}if
+           >>
+       >>
+    }if
+}if
+exch
+{imagemask}{image}ifelse       
+end    
+}bd
+/cguidfix{statusdict begin mark version end
+{cvr}stopped{cleartomark 0}{exch pop}ifelse
+2012 lt{dup findfont dup length dict begin
+{1 index/FID ne 2 index/UniqueID ne and
+{def} {pop pop} ifelse}forall
+currentdict end definefont pop
+}{pop}ifelse
+}bd
+/t_array 0 def
+/t_i 0 def
+/t_c 1 string def
+/x_proc{ 
+    exch t_array t_i get add exch moveto
+    /t_i t_i 1 add store
+}bd
+/y_proc{ 
+    t_array t_i get add moveto
+    /t_i t_i 1 add store
+}bd
+/xy_proc{
+        
+       t_array t_i 2 copy 1 add get 3 1 roll get 
+       4 -1 roll add 3 1 roll add moveto
+       /t_i t_i 2 add store
+}bd
+/sop 0 def             
+/cp_proc/x_proc ld     
+/base_charpath         
+{
+    /t_array xs
+    /t_i 0 def
+    { 
+       t_c 0 3 -1 roll put
+        currentpoint
+       t_c cply sop
+        cp_proc
+    }forall
+    /t_array 0 def
+}bd
+/sop/stroke ld         
+/nop{}def
+/xsp/base_charpath ld
+/ysp{/cp_proc/y_proc ld base_charpath/cp_proc/x_proc ld}bd
+/xysp{/cp_proc/xy_proc ld base_charpath/cp_proc/x_proc ld}bd
+/xmp{/sop/nop ld /cp_proc/x_proc ld base_charpath/sop/stroke ld}bd
+/ymp{/sop/nop ld /cp_proc/y_proc ld base_charpath/sop/stroke ld}bd
+/xymp{/sop/nop ld /cp_proc/xy_proc ld base_charpath/sop/stroke ld}bd
+/refnt{ 
+findfont dup length dict copy dup
+/Encoding 4 -1 roll put 
+definefont pop
+}bd
+/renmfont{ 
+findfont dup length dict copy definefont pop
+}bd
+L3? dup dup{save exch}if
+/Range 0 def
+/DataSource 0 def
+/val 0 def
+/nRange 0 def
+/mulRange 0 def
+/d0 0 def
+/r0 0 def
+/di 0 def
+/ri 0 def
+/a0 0 def
+/a1 0 def
+/r1 0 def
+/r2 0 def
+/dx 0 def
+/Nsteps 0 def
+/sh3tp 0 def
+/ymax 0 def
+/ymin 0 def
+/xmax 0 def
+/xmin 0 def
+/setupFunEval 
+{
+    begin
+       /nRange Range length 2 idiv store
+       /mulRange   
+                   
+       [ 
+           0 1 nRange 1 sub
+           { 
+                   2 mul/nDim2 xd              
+                   Range nDim2 get             
+                   Range nDim2 1 add get       
+                   1 index sub                 
+                                               
+                   255 div                     
+                   exch                        
+           }for
+       ]store
+    end
+}bd
+/FunEval 
+{
+    begin
+       
+       nRange mul /val xd      
+                               
+       0 1 nRange 1 sub
+       {
+           dup 2 mul/nDim2 xd 
+           val 
+           add DataSource exch get 
+           mulRange nDim2 get mul      
+           mulRange nDim2 1 add get 
+           add 
+       }for    
+    end
+}bd
+/max 
+{
+       2 copy lt
+       {exch pop}{pop}ifelse
+}bd
+/sh2
+{      
+       /Coords load aload pop  
+       3 index 3 index translate       
+                                       
+       3 -1 roll sub   
+       3 1 roll exch   
+       sub                             
+       2 copy
+       dup mul exch dup mul add sqrt   
+       dup
+       scale  
_______________________________________________
pypy-commit mailing list
pypy-commit@python.org
http://mail.python.org/mailman/listinfo/pypy-commit