Author: Armin Rigo <[email protected]>
Branch: extradoc
Changeset: r5370:3190704f9cd2
Date: 2014-07-22 18:01 +0200
http://bitbucket.org/pypy/extradoc/changeset/3190704f9cd2/
Log:	Tweaks

diff --git a/talk/ep2014/stm/talk.html b/talk/ep2014/stm/talk.html
--- a/talk/ep2014/stm/talk.html
+++ b/talk/ep2014/stm/talk.html
@@ -488,7 +488,7 @@
 <div class="slide" id="transactional-memory">
 <h1>Transactional Memory</h1>
 <ul class="simple">
-<li>like GIL, but instead of locking, each thread runs optimistically</li>
+<li>like GIL, but instead of blocking, each thread runs optimistically</li>
 <li>"easy" to implement:<ul>
 <li>GIL acquire -> transaction start</li>
 <li>GIL release -> transaction commit</li>
@@ -507,7 +507,7 @@
 <ul class="simple">
 <li>application-level locks still needed...</li>
 <li>but <em>can be very coarse:</em><ul>
-<li>even two big transactions can hopefully run in parallel</li>
+<li>even two big transactions can optimistically run in parallel</li>
 <li>even if they both <em>acquire and release the same lock</em></li>
 </ul>
 </li>
@@ -546,6 +546,7 @@
 <li>current status:<ul>
 <li>basics work</li>
 <li>best case 25-40% overhead (much better than originally planned)</li>
+<li>parallelizing user locks not done yet</li>
 <li>tons of things to improve</li>
 <li>tons of things to improve</li>
 <li>tons of things to improve</li>
@@ -563,18 +564,38 @@
 <li>counting primes</li>
 </ul>
 </div>
+<div class="slide" id="benefits">
+<h1>Benefits</h1>
+<ul class="simple">
+<li>Keep locks coarse-grained</li>
+<li>Potential to enable parallelism:<ul>
+<li>in CPU-bound multithreaded programs</li>
+<li>or as a replacement of <tt class="docutils literal">multiprocessing</tt></li>
+<li>but also in existing applications not written for that</li>
+<li>as long as they do multiple things that are "often independent"</li>
+</ul>
+</li>
+</ul>
+</div>
+<div class="slide" id="issues">
+<h1>Issues</h1>
+<ul class="simple">
+<li>Performance hit: 25-40% everywhere (may be ok)</li>
+<li>Keep locks coarse-grained:<ul>
+<li>but in case of systematic conflicts, performance is bad again</li>
+<li>need to track and fix them</li>
+<li>need tool support (debugger/profiler)</li>
+</ul>
+</li>
+</ul>
+</div>
 <div class="slide" id="summary">
 <h1>Summary</h1>
 <ul class="simple">
 <li>Transactional Memory is still too researchy for production</li>
-<li>Potential to enable parallelism:<ul>
-<li>as a replacement of <tt class="docutils literal">multiprocessing</tt></li>
-<li>but also in existing applications not written for that</li>
-<li>as long as they do multiple things that are "often independent"</li>
-</ul>
-</li>
-<li>Keep locks coarse-grained:<ul>
-<li>need to track and fix issues in case of systematic conflicts</li>
+<li>But it has the potential to enable "easier parallelism"</li>
+<li>Still alpha but slowly getting there!<ul>
+<li>see <a class="reference external" href="http://morepypy.blogspot.com/">http://morepypy.blogspot.com/</a></li>
 </ul>
 </li>
 </ul>
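The talk.html hunks above describe the model only in bullet form: GIL acquire -> transaction start, GIL release -> transaction commit, and coarse application-level locks whose big critical sections can still overlap. For illustration only (none of this code is from the talk; the names, ranges and the prime-counting workload are made up, loosely following the "counting primes" demo bullet), here is the kind of coarsely locked, CPU-bound program the slides have in mind, written with nothing but the standard threading module. Under the GIL, or with this lock taken literally, the two sections run strictly one after the other; the slides' claim is that under pypy-stm each locked section runs as an optimistic transaction, so the two big sections can commit in parallel when they do not actually touch the same data, with the caveat noted in this same diff that "parallelizing user locks" is not done yet.

    # Illustrative sketch only, not from the talk: a CPU-bound program written
    # in the coarse-lock style the slides describe.  Workload and numbers are
    # made up.
    import threading

    big_lock = threading.Lock()   # one coarse-grained, application-level lock
    results = {}

    def count_primes(name, start, stop):
        # One long critical section; the slides' idea is that this whole block
        # becomes a single optimistic transaction under pypy-stm.
        with big_lock:
            found = 0
            for n in range(start, stop):
                if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1)):
                    found += 1
            results[name] = found

    threads = [threading.Thread(target=count_primes, args=("low", 2, 50000)),
               threading.Thread(target=count_primes, args=("high", 50000, 100000))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)   # prints both counts once the threads have finished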
diff --git a/talk/ep2014/stm/talk.rst b/talk/ep2014/stm/talk.rst
--- a/talk/ep2014/stm/talk.rst
+++ b/talk/ep2014/stm/talk.rst
@@ -122,7 +122,7 @@
 Transactional Memory
 --------------------
 
-* like GIL, but instead of locking, each thread runs optimistically
+* like GIL, but instead of blocking, each thread runs optimistically
 
 * "easy" to implement:
 
@@ -145,7 +145,7 @@
 
 * but *can be very coarse:*
 
-  - even two big transactions can hopefully run in parallel
+  - even two big transactions can optimistically run in parallel
 
   - even if they both *acquire and release the same lock*
 
@@ -186,6 +186,7 @@
   - basics work
 
   - best case 25-40% overhead (much better than originally planned)
+  - parallelizing user locks not done yet
   - tons of things to improve
   - tons of things to improve
   - tons of things to improve
@@ -201,22 +202,46 @@
 * counting primes
 
 
+Benefits
+--------
+
+* Keep locks coarse-grained
+
+* Potential to enable parallelism:
+
+  - in CPU-bound multithreaded programs
+
+  - or as a replacement of ``multiprocessing``
+
+  - but also in existing applications not written for that
+
+  - as long as they do multiple things that are "often independent"
+
+
+Issues
+------
+
+* Performance hit: 25-40% everywhere (may be ok)
+
+* Keep locks coarse-grained:
+
+  - but in case of systematic conflicts, performance is bad again
+
+  - need to track and fix them
+
+  - need tool support (debugger/profiler)
+
+
 Summary
 -------
 
 * Transactional Memory is still too researchy for production
 
-* Potential to enable parallelism:
+* But it has the potential to enable "easier parallelism"
 
-  - as a replacement of ``multiprocessing``
+* Still alpha but slowly getting there!
 
-  - but also in existing applications not written for that
-
-  - as long as they do multiple things that are "often independent"
-
-* Keep locks coarse-grained:
-
-  - need to track and fix issues in case of systematic conflicts
+  - see http://morepypy.blogspot.com/
 
 
 Part 2 - Under The Hood
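The new "Issues" slide added in both files only says that "systematic conflicts" make performance bad again and that tracking them down needs debugger/profiler support. Below is a hypothetical before/after sketch of such a conflict, again using only the standard threading module and invented names, not any pypy-stm API: in the first pattern every worker repeatedly writes the same shared counter, so even optimistic transactions are forced to serialize; in the second each worker fills its own slot and the results are merged only after join(), so the parallel parts touch disjoint data. Spotting the first pattern inside a large program is exactly the kind of job the slide asks tool support for.

    # Hypothetical illustration of a "systematic conflict" and one way to fix it.
    # Plain threading only; nothing here comes from pypy-stm itself.
    import threading

    N_THREADS = 4
    CHUNK = 100000

    # Conflict-prone: every worker increments the same shared counter, so every
    # transaction (or locked section) writes the same object and they serialize.
    total = {"value": 0}
    total_lock = threading.Lock()

    def worker_shared():
        for _ in range(CHUNK):
            with total_lock:
                total["value"] += 1

    # Conflict-free: each worker accumulates privately and writes one distinct
    # slot at the end, so the parallel parts touch disjoint data.
    partial = [0] * N_THREADS

    def worker_private(index):
        count = 0
        for _ in range(CHUNK):
            count += 1
        partial[index] = count

    def run(target, args_for):
        threads = [threading.Thread(target=target, args=args_for(i))
                   for i in range(N_THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    run(worker_shared, lambda i: ())
    run(worker_private, lambda i: (i,))
    print(total["value"], sum(partial))   # both equal N_THREADS * CHUNK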
