[pypy-commit] extradoc extradoc: minor changes

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5211:94a4f114fe63
Date: 2014-05-02 09:31 +0200
http://bitbucket.org/pypy/extradoc/changeset/94a4f114fe63/

Log: minor changes

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -123,15 +123,15 @@
 
 \section{Discussion}
 
-\paragraph{dynamic language VM problems}
-XXX:
-- high allocation rate (short lived objects)\\
-- (don't know anything about the program that runs until it actually runs: 
arbitrary atomic block size)
+%% \paragraph{dynamic language VM problems}
+%% XXX:
+%% - high allocation rate (short lived objects)\\
+%% - (don't know anything about the program that runs until it actually runs: 
arbitrary atomic block size)
 
 
 \subsection{Why is there a GIL?}
 The GIL is a very simple synchronisation mechanism for supporting
-multi-threading in the interpreter. The basic guarantee is that the
+multithreading in the interpreter. The basic guarantee is that the
 GIL may only be released in-between bytecode instructions. The
 interpreter can thus rely on complete isolation and atomicity of these
 instructions. Additionally, it provides the application with a
@@ -139,7 +139,7 @@
 on certain operations to be atomic and that they will always be
 executed in the order in which they appear in the code. While
 depending on this may not always be a good idea, it is done in
-practice. A solution replacing the GIL should therefore uphold these
+practice. A GIL-replacement should therefore uphold these
 guarantees, while preferably also be as easily implementable as a GIL
 for the interpreter.
 [xxx mention that the interpreter is typically very large and maintained
@@ -151,7 +151,7 @@
 thread-safe can voluntarily release the GIL themselves in order to
 still provide some parallelism. This is done for example for
 potentially long I/O operations. Consequently, I/O-bound,
-multi-threaded applications can actually parallelise to some
+multithreaded applications can actually parallelise to some
 degree. Again, a potential solution should be able to integrate with
 external libraries with similar ease. We will however focus our
 argumentation more on running code in the interpreted language in
@@ -277,9 +277,7 @@
 recently gained a lot of popularity by its introduction in common
 desktop CPUs from Intel (Haswell generation).
 
-\paragraph{HTM}
-
-HTM provides us with transactions like any TM system does. It can
+\paragraph{HTM} provides us with transactions like any TM system does. It can
 be used as a direct replacement for the GIL. However, as is common
 with hardware-only solutions, there are quite a few limitations
 that can not be lifted easily. For this comparison, we look at
@@ -304,16 +302,14 @@
 synchronisation mechanism for the application. It is not possible
 in general to expose the hardware-transactions to the application
 in the form of atomic blocks because that would require much
-longer transactions. 
+longer transactions.
 
 %% - false-sharing on cache-line level\\
 %% - limited capacity (caches, undocumented)\\
 %% - random aborts (haswell)\\
 %% - generally: transaction-length limited (no atomic blocks)
 
-\paragraph{STM}
-
-STM provides all the same benefits as HTM except for its performance.
+\paragraph{STM} provides all the same benefits as HTM except for its 
performance.
 It is not unusual for the overhead introduced by STM to be between
 100\% to even 1000\%. While STM systems often scale very well to a big
 number of threads and eventually overtake the single-threaded
@@ -347,9 +343,9 @@
 & \textbf{GIL} & \textbf{Fine-grained locking}
 & \textbf{Shared-nothing} & \textbf{HTM} & \textbf{STM}\\
 \hline
-Performance (single-threaded) & ++   & +  & ++   & ++ & -{-} \\
+Performance (single threaded) & ++   & +  & ++   & ++ & -{-} \\
 \hline
-Performance (multi-threaded)  & -{-} & +  & +& +  & +\\
+Performance (multithreaded)   & -{-} & +  & +& +  & +\\
 \hline
 Existing applications & ++   & ++ & -{-} & ++ & ++   \\
 \hline
@@ -357,7 +353,7 @@
 \hline
 Implementation& ++   & -  & ++   & ++ & ++   \\
 \hline
-External libra\-ries  & ++   & ++ & ++   & ++ & ++   \\
+External libraries& ++   & ++ & ++   & ++ & ++   \\
 \hline
   \end{tabular}
   \caption{Comparison between the approaches (-{-}/-/o/+/++)}
@@ -405,7 +401,7 @@
 
 \acks
 
-Acknowledgments...
+Acknowledgements...
 
 % We recommend abbrvnat bibliography style.
 
@@ -423,4 +419,3 @@
 
 
 \end{document}
-
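
A side note on the paper text above: the claim that applications rely on the
GIL making individual bytecode-level operations atomic can be illustrated with
a small CPython snippet (illustration only, not taken from the paper):

    import threading

    counter = []

    def worker():
        for _ in range(100000):
            # list.append executes as a single bytecode-level C call, so under
            # a GIL it is atomic and needs no explicit lock.
            counter.append(1)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(counter))   # always 400000 on a GIL-based interpreter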


[pypy-commit] extradoc extradoc: start citing things

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5213:25fe21890b67
Date: 2014-05-02 10:11 +0200
http://bitbucket.org/pypy/extradoc/changeset/25fe21890b67/

Log: start citing things

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -10,6 +10,7 @@
 
 \usepackage[utf8]{inputenc}
 \usepackage{array}
+\usepackage{hyperref}
 \usepackage{amsmath}
 
 
@@ -213,7 +214,7 @@
 single-threaded case compared to the GIL, which requires much less
 acquire-release operations.
 
-Jython\footnote{www.jython.org} is one project that implements an
+Jython\cite{webjython} is one project that implements an
 interpreter for Python on the JVM\footnote{Java Virtual Machine} and
 that uses fine-grained locking to correctly synchronise the
 interpreter. For a language like Python, one needs quite a few,
@@ -282,46 +283,48 @@
 recently gained a lot of popularity by its introduction in common
 desktop CPUs from Intel (Haswell generation).
 
-\paragraph{HTM} provides us with transactions like any TM system does. It can
-be used as a direct replacement for the GIL. However, as is common
-with hardware-only solutions, there are quite a few limitations
-that can not be lifted easily. For this comparison, we look at
-the implementation of Intel in recent Haswell generation CPUs.
+\paragraph{HTM} provides us with transactions like any TM system does.
+It can be used as a direct replacement for the GIL. However, as is
+common with hardware-only solutions, there are quite a few limitations
+that can not be lifted easily. For this comparison, we look at the
+implementation of Intel in recent Haswell generation CPUs.
 
 HTM in these CPUs works on the level of caches. This has a few
 consequences like false-sharing on the cache-line level, and most
 importantly it limits the amount of memory that can be accessed within
 a transaction. This transaction-length limitation makes it necessary
 to have a fallback in place in case this limit is reached. In recent
-attempts, the usual fallback is the GIL (XXX: cite). The current
-generation of HTM hits this limit very often for our use case (XXX:
-cite ruby GIL paper) and therefore does not parallelise that well.
+attempts, the usual fallback is the GIL\cite{odaira14}. In our
+experiments, the current generation of HTM proved to be very fragile
+and thus needing the fallback very often. Consequently, scalability
+suffered a lot from this.
 
-The performance of HTM is pretty good (XXX: cite again...) as it does
-not introduce much overhead. And it can transparently parallelise
-existing applications to some degree. The implementation is very
-straight-forward because it directly replaces the GIL in a central
-place. HTM is also directly compatible with any external library that
-needs to be integrated and synchronised for use in multiple
-threads. The one thing that is missing is support for a better
-synchronisation mechanism for the application. It is not possible
-in general to expose the hardware-transactions to the application
-in the form of atomic blocks because that would require much
-longer transactions.
+The performance of HTM is pretty good as it does not introduce much
+overhead ($<40\%$ overhead\cite{odaira14}). And it can transparently
+parallelise existing applications to some degree. The implementation
+is very straight-forward because it directly replaces the GIL in a
+central place. HTM is also directly compatible with any external
+library that needs to be integrated and synchronised for use in
+multiple threads. The one thing that is missing is support for a
+better synchronisation mechanism for the application. It is not
+possible in general to expose the hardware-transactions to the
+application in the form of atomic blocks because that would require
+much longer transactions.
 
 %% - false-sharing on cache-line level\\
 %% - limited capacity (caches, undocumented)\\
 %% - random aborts (haswell)\\
 %% - generally: transaction-length limited (no atomic blocks)
 
-\paragraph{STM} provides all the same benefits as HTM except for its 
performance.
-It is not unusual for the overhead introduced by STM to be between
-100\% to even 1000\%. While STM systems often scale very well to a big
-number of threads and eventually overtake the single-threaded
-execution, they often provide no benefits at all for low numbers of
-threads (1-8). There are some attempts (XXX: cite fastlane) that can
-reduce the overhead a lot, but also scale very badly so that their
-benefit on more than one thread is little.
+\paragraph{STM} provides all the same benefits as HTM except for its
+performance.  It is not unusual for the overhead introduced by STM to
+be between 100\% to even 1000\% \cite{cascaval08,drago11}. While STM
+systems often scale very well to a big number of threads and
+eventually overtake the single-threaded execution, they often provide
+no benefits at all for lo

[pypy-commit] extradoc extradoc: Avoid using the word 'hardware' to describe our purely-STM system.

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: extradoc
Changeset: r5214:2c6c55e951e5
Date: 2014-05-02 10:17 +0200
http://bitbucket.org/pypy/extradoc/changeset/2c6c55e951e5/

Log: Avoid using the word 'hardware' to describe our purely-STM system.

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -383,9 +383,10 @@
 synchronise memory accesses using atomic blocks.
 
 Unfortunately, STM has a big performance problem. One way to approach
-this problem is to make STM systems that use the available hardware
-better. We are currently working on a STM system that makes use of
-several hardware features like virtual memory and memory segmentation.
+this problem is to make STM systems that make better use of low-level
+features in existing OS kernels.
+We are currently working on a STM system that makes use of
+several such features like virtual memory and memory segmentation.
 We further tailor the system to the discussed use case which gives us
 an advantage over other STM systems that are more general.  With this
 approach, initial results suggest that we can keep the overhead of STM


[pypy-commit] pypy default: backout da193f0b119d. Done in this way, "print()" will print an empty

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: 
Changeset: r71193:0bdf01779070
Date: 2014-05-02 10:25 +0200
http://bitbucket.org/pypy/pypy/changeset/0bdf01779070/

Log: backout da193f0b119d. Done in this way, "print()" will print an
empty tuple in Python 2. Is this really necessary for the py3k
branch?
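
The behaviour the log refers to, sketched for Python 2 (illustration only):

    # Python 2, without "from __future__ import print_function":
    # "print" is a statement, so the parentheses are only expression grouping.
    print()            # prints "()": an empty tuple, not an empty line
    print('x')         # prints "x": ('x') is just a parenthesised string
    print('a', 'b')    # prints "('a', 'b')": a real tuple this time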

diff --git a/pypy/module/_lsprof/test/test_cprofile.py 
b/pypy/module/_lsprof/test/test_cprofile.py
--- a/pypy/module/_lsprof/test/test_cprofile.py
+++ b/pypy/module/_lsprof/test/test_cprofile.py
@@ -43,7 +43,7 @@
 )
 by_id = set()
 for entry in stats:
-if entry.code == f1.__code__:
+if entry.code == f1.func_code:
 assert len(entry.calls) == 2
 for subentry in entry.calls:
 assert subentry.code in expected
@@ -144,8 +144,8 @@
 entries = {}
 for entry in stats:
 entries[entry.code] = entry
-efoo = entries[foo.__code__]
-ebar = entries[bar.__code__]
+efoo = entries[foo.func_code]
+ebar = entries[bar.func_code]
 assert 0.9 < efoo.totaltime < 2.9
 # --- cannot test .inlinetime, because it does not include
 # --- the time spent doing the call to time.time()
@@ -219,12 +219,12 @@
 lines.remove(line)
 break
 else:
-print('NOT FOUND: %s' % pattern.rstrip('\n'))
-print('--- GOT ---')
-print(got)
-print()
-print('--- EXPECTED ---')
-print(expected)
+print 'NOT FOUND:', pattern.rstrip('\n')
+print '--- GOT ---'
+print got
+print
+print '--- EXPECTED ---'
+print expected
 assert False
 assert not lines
 finally:


[pypy-commit] pypy default: Skip this line of the test on 32-bit, with an explanation about why.

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: 
Changeset: r71194:0e00b9b987eb
Date: 2014-05-02 10:32 +0200
http://bitbucket.org/pypy/pypy/changeset/0e00b9b987eb/

Log: Skip this line of the test on 32-bit, with an explanation about why.

diff --git a/pypy/module/pypyjit/test_pypy_c/test_cprofile.py 
b/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
--- a/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
@@ -1,4 +1,4 @@
-import py
+import py, sys
 from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC
 
 class TestCProfile(BaseTestPyPyC):
@@ -26,6 +26,10 @@
 for method in ['append', 'pop']:
 loop, = log.loops_by_id(method)
 print loop.ops_by_id(method)
-assert ' call(' not in repr(loop.ops_by_id(method))
+# on 32-bit, there is f1=read_timestamp(); ...;
+# f2=read_timestamp(); f3=call(llong_sub,f1,f2)
+# which should turn into a single PADDQ/PSUBQ
+if sys.maxint != 2147483647:
+assert ' call(' not in repr(loop.ops_by_id(method))
 assert ' call_may_force(' not in repr(loop.ops_by_id(method))
 assert ' cond_call(' in repr(loop.ops_by_id(method))
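
For reference, the 32-bit check above (via sys.maxint) and the one used later
in this digest for the cppyy tests (via sys.maxsize) amount to the same test;
a minimal sketch with made-up variable names:

    import sys

    # test_cprofile.py uses the Python-2-only sys.maxint; fall back for Python 3.
    is_32bit = getattr(sys, 'maxint', sys.maxsize) == 2147483647
    # test_datatypes.py (later commit) spells the same check portably.
    assert is_32bit == (sys.maxsize < 2 ** 31)

    if is_32bit:
        # Per the commit comment: the read_timestamp()/call(llong_sub, ...)
        # pair is not folded away on 32-bit, so ' call(' stays in the trace.
        print("expecting a leftover call() in the JIT trace")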


[pypy-commit] extradoc extradoc: more random citescites...

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5215:72d2204d81d3
Date: 2014-05-02 10:42 +0200
http://bitbucket.org/pypy/extradoc/changeset/72d2204d81d3/

Log: more random citescites...

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -284,7 +284,7 @@
 desktop CPUs from Intel (Haswell generation).
 
 \paragraph{HTM} provides us with transactions like any TM system does.
-It can be used as a direct replacement for the GIL. However, as is
+It can be used as a direct replacement for the 
GIL\cite{nicholas06,odaira14,fuad10}. However, as is
 common with hardware-only solutions, there are quite a few limitations
 that can not be lifted easily. For this comparison, we look at the
 implementation of Intel in recent Haswell generation CPUs.
@@ -294,7 +294,7 @@
 importantly it limits the amount of memory that can be accessed within
 a transaction. This transaction-length limitation makes it necessary
 to have a fallback in place in case this limit is reached. In recent
-attempts, the usual fallback is the GIL\cite{odaira14}. In our
+attempts, the usual fallback is the GIL\cite{odaira14,fuad10}. In our
 experiments, the current generation of HTM proved to be very fragile
 and thus needing the fallback very often. Consequently, scalability
 suffered a lot from this.
@@ -387,12 +387,12 @@
 better. We are currently working on a STM system that makes use of
 several hardware features like virtual memory and memory segmentation.
 We further tailor the system to the discussed use case which gives us
-an advantage over other STM systems that are more general.  With this
+an advantage over other STM systems that are more general. With this
 approach, initial results suggest that we can keep the overhead of STM
-already below 50\%. A hybrid TM system, which also uses HTM to
-accelerate certain tasks, looks like a very promising direction of
-research too. In general we believe that further work to reduce the
-overhead of STM is very worthwhile.
+below 50\%. A hybrid TM system, which also uses HTM to accelerate
+certain tasks, looks like a very promising direction of research
+too. In general we believe that further work to reduce the overhead of
+STM is very worthwhile.
 
 
 
@@ -446,6 +446,18 @@
   Cascaval, Calin, et al. "Software transactional memory: Why is it
   only a research toy?." \emph{Queue} 6.5 (2008): 40.
 
+\bibitem{nicholas06}
+  Nicholas Riley and Craig Zilles. 2006. Hardware tansactional memory
+  support for lightweight dynamic language evolution. \emph{In
+Companion to the 21st ACM SIGPLAN symposium on Object-oriented
+programming systems, languages, and applications} (OOPSLA
+  '06). ACM, New York, NY, USA
+
+\bibitem{fuad10}
+  Fuad Tabba. 2010. Adding concurrency in python using a commercial
+  processor's hardware transactional memory support. \emph{SIGARCH
+  Comput. Archit. News 38}, 5 (April 2010)
+
 \end{thebibliography}
 
 


[pypy-commit] extradoc extradoc: merge

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5216:d7be425514e9
Date: 2014-05-02 10:43 +0200
http://bitbucket.org/pypy/extradoc/changeset/d7be425514e9/

Log: merge

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -383,9 +383,10 @@
 synchronise memory accesses using atomic blocks.
 
 Unfortunately, STM has a big performance problem. One way to approach
-this problem is to make STM systems that use the available hardware
-better. We are currently working on a STM system that makes use of
-several hardware features like virtual memory and memory segmentation.
+this problem is to make STM systems that make better use of low-level
+features in existing OS kernels.
+We are currently working on a STM system that makes use of
+several such features like virtual memory and memory segmentation.
 We further tailor the system to the discussed use case which gives us
 an advantage over other STM systems that are more general. With this
 approach, initial results suggest that we can keep the overhead of STM


[pypy-commit] extradoc extradoc: remove example citation

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5217:e020fe62fcfc
Date: 2014-05-02 10:49 +0200
http://bitbucket.org/pypy/extradoc/changeset/e020fe62fcfc/

Log: remove example citation

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -421,9 +421,6 @@
 \begin{thebibliography}{}
 \softraggedright
 
-\bibitem[Smith et~al.(2009)Smith, Jones]{smith02}
-  P. Q. Smith, and X. Y. Jones. ...reference text...
-
 \bibitem{webjython}
   The Jython Project, \url{www.jython.org}
 


[pypy-commit] extradoc extradoc: typo

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: extradoc
Changeset: r5218:616a76d3b5b9
Date: 2014-05-02 10:54 +0200
http://bitbucket.org/pypy/extradoc/changeset/616a76d3b5b9/

Log: typo

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -445,7 +445,7 @@
   only a research toy?." \emph{Queue} 6.5 (2008): 40.
 
 \bibitem{nicholas06}
-  Nicholas Riley and Craig Zilles. 2006. Hardware tansactional memory
+  Nicholas Riley and Craig Zilles. 2006. Hardware transactional memory
   support for lightweight dynamic language evolution. \emph{In
 Companion to the 21st ACM SIGPLAN symposium on Object-oriented
 programming systems, languages, and applications} (OOPSLA


[pypy-commit] extradoc extradoc: add something about automatic barrier placement

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5219:511162cc1095
Date: 2014-05-02 11:35 +0200
http://bitbucket.org/pypy/extradoc/changeset/511162cc1095/

Log: add something about automatic barrier placement

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -337,6 +337,14 @@
 parallel programming forward. Together with sequential consistency it
 provides a lot of simplification for parallel applications.
 
+While one can argue that STM requires the insertion of read and write
+barriers in the whole program, this can be done automatically and
+locally by a program transformation\cite{felber07}. There are attempts
+to do the same for fine-grained locking\cite{bill06} but they require
+a whole program analysis since locks are inherently non-composable.
+The effectiveness of these approaches still has to be proven for our
+use case.
+
 %% - overhead (100-1000\%) (barrier reference resolution, kills performance on 
low \#cpu)
 %% (FastLane: low overhead, not much gain)\\
 %% - unlimited transaction length (easy atomic blocks)
@@ -456,6 +464,17 @@
   processor's hardware transactional memory support. \emph{SIGARCH
   Comput. Archit. News 38}, 5 (April 2010)
 
+\bibitem{felber07}
+  Felber, Pascal, et al. "Transactifying applications using an open
+  compiler framework." \emph{TRANSACT}, August (2007): 4-6.
+
+\bibitem{bill06}
+  Bill McCloskey, Feng Zhou, David Gay, and Eric
+  Brewer. 2006. Autolocker: synchronization inference for atomic
+  sections. \emph{In Conference record of the 33rd ACM SIGPLAN-SIGACT
+  symposium on Principles of programming languages (POPL '06)}. ACM,
+  New York, NY, USA
+
 \end{thebibliography}
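
The automatic barrier placement mentioned in the added paragraph can be
pictured roughly as follows; stm_read/stm_write are hypothetical placeholders,
not the API of any particular system:

    def stm_read(obj):       # placeholder: record obj in the read set
        pass

    def stm_write(obj):      # placeholder: log obj for conflict detection
        pass

    # Hand-written application code:
    def transfer(src, dst, amount):
        src.balance -= amount
        dst.balance += amount

    # What a local, automatic program transformation would emit, with no
    # whole-program analysis needed:
    def transfer_transformed(src, dst, amount):
        stm_read(src); stm_write(src)
        src.balance -= amount
        stm_read(dst); stm_write(dst)
        dst.balance += amount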
 
 


[pypy-commit] lang-smalltalk 64bit-c2: Merged default

2014-05-02 Thread Hubert Hesse
Author: Hubert Hesse 
Branch: 64bit-c2
Changeset: r796:e6c406381c28
Date: 2014-04-24 14:41 +0200
http://bitbucket.org/pypy/lang-smalltalk/changeset/e6c406381c28/

Log: Merged default

diff --git a/.hgignore b/.hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -12,3 +12,5 @@
 versions
 coglinux
 *.orig
+spy-*.log
+SDL.dll
diff --git a/spyvm/model.py b/spyvm/model.py
--- a/spyvm/model.py
+++ b/spyvm/model.py
@@ -15,12 +15,14 @@
 that create W_PointersObjects of correct size with attached shadows.
 """
 import sys, weakref
-from spyvm import constants, error, system
+from spyvm import constants, error, version, system
+from spyvm.version import elidable_for_version
 
 from rpython.rlib import rrandom, objectmodel, jit, signature
 from rpython.rlib.rarithmetic import intmask, r_uint32, r_uint, r_int
+from rpython.rlib.debug import make_sure_not_resized
 from rpython.tool.pairtype import extendabletype
-from rpython.rlib.objectmodel import instantiate, compute_hash
+from rpython.rlib.objectmodel import instantiate, compute_hash, 
import_from_mixin
 from rpython.rtyper.lltypesystem import lltype, rffi
 from rsdl import RSDL, RSDL_helper
 
@@ -448,7 +450,7 @@
 
 def __str__(self):
 if isinstance(self, W_PointersObject) and self.has_shadow():
-return self._shadow.getname()
+return self._get_shadow().getname()
 else:
 name = None
 if self.has_class():
@@ -488,15 +490,20 @@
 
 class W_AbstractPointersObject(W_AbstractObjectWithClassReference):
 """Common object."""
-_attrs_ = ['_shadow']
+_attrs_ = ['shadow']
+
+def changed(self):
+# This is invoked when an instance-variable is changed.
+# Kept here in case it might be usefull in the future.
+pass
 
-_shadow = None # Default value
+shadow = None # Default value
 
 @jit.unroll_safe
 def __init__(self, space, w_class, size):
 """Create new object with size = fixed + variable size."""
 W_AbstractObjectWithClassReference.__init__(self, space, w_class)
-self._shadow = None # Default value
+self.store_shadow(None)
 
 def fillin(self, space, g_self):
 from spyvm.fieldtypes import fieldtypes_of
@@ -514,12 +521,12 @@
 
 def fetch(self, space, n0):
 if self.has_shadow():
-return self._shadow.fetch(n0)
+return self._get_shadow().fetch(n0)
 return self._fetch(n0)
 
 def store(self, space, n0, w_value):
 if self.has_shadow():
-return self._shadow.store(n0, w_value)
+return self._get_shadow().store(n0, w_value)
 return self._store(n0, w_value)
 
 def varsize(self, space):
@@ -533,13 +540,17 @@
 
 def size(self):
 if self.has_shadow():
-return self._shadow.size()
+return self._get_shadow().size()
 return self.basic_size()
 
 def store_shadow(self, shadow):
-assert self._shadow is None or self._shadow is shadow
-self._shadow = shadow
+assert self.shadow is None or self.shadow is shadow
+self.shadow = shadow
+self.changed()
 
+def _get_shadow(self):
+return self.shadow
+
 @objectmodel.specialize.arg(2)
 def attach_shadow_of_class(self, space, TheClass):
 shadow = TheClass(space, self)
@@ -549,7 +560,7 @@
 
 @objectmodel.specialize.arg(2)
 def as_special_get_shadow(self, space, TheClass):
-shadow = self._shadow
+shadow = self._get_shadow()
 if not isinstance(shadow, TheClass):
 if shadow is not None:
 raise DetachingShadowError(shadow, TheClass)
@@ -568,7 +579,7 @@
 # Should only be used during squeak-image loading.
 def as_class_get_penumbra(self, space):
 from spyvm.shadow import ClassShadow
-s_class = self._shadow
+s_class = self._get_shadow()
 if s_class is None:
 s_class = ClassShadow(space, self)
 self.store_shadow(s_class)
@@ -587,7 +598,7 @@
 def as_context_get_shadow(self, space):
 from spyvm.shadow import ContextPartShadow
 # XXX TODO should figure out itself if its method or block context
-if self._shadow is None:
+if self._get_shadow() is None:
 if ContextPartShadow.is_block_context(self, space):
 return self.as_blockcontext_get_shadow(space)
 return self.as_methodcontext_get_shadow(space)
@@ -606,17 +617,19 @@
 return self.as_special_get_shadow(space, ObserveeShadow)
 
 def has_shadow(self):
-return self._shadow is not None
+return self._get_shadow() is not None
 
 def become(self, w_other):
 if not isinstance(w_other, W_AbstractPointersObject):
 return False
 # switching means also switching shadows
-self._shadow, w_other._shadow = w_other._shadow, self._shadow
+self.shadow, w_other.shadow = w_other.shadow, self.shadow
  

[pypy-commit] lang-smalltalk 64bit-c2: STM GC has no support for destructors atm.

2014-05-02 Thread Hubert Hesse
Author: Hubert Hesse 
Branch: 64bit-c2
Changeset: r797:0e0c70ccb883
Date: 2014-05-02 12:44 +0200
http://bitbucket.org/pypy/lang-smalltalk/changeset/0e0c70ccb883/

Log: STM GC has no support for destructors atm.

diff --git a/spyvm/model.py b/spyvm/model.py
--- a/spyvm/model.py
+++ b/spyvm/model.py
@@ -1076,7 +1076,8 @@
 return self._real_depth_buffer
 
 def __del__(self):
-lltype.free(self._real_depth_buffer, flavor='raw')
+pass
+#lltype.free(self._real_depth_buffer, flavor='raw')
 
 
 class W_16BitDisplayBitmap(W_DisplayBitmap):


[pypy-commit] extradoc extradoc: add some text

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5220:c40d093dd7ec
Date: 2014-05-02 13:33 +0200
http://bitbucket.org/pypy/extradoc/changeset/c40d093dd7ec/

Log: add some text

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -390,13 +390,18 @@
 simple memory model (sequential consistency) and a composable way to
 synchronise memory accesses using atomic blocks.
 
-Unfortunately, STM has a big performance problem. One way to approach
-this problem is to make STM systems that make better use of low-level
-features in existing OS kernels.
-We are currently working on a STM system that makes use of
-several such features like virtual memory and memory segmentation.
-We further tailor the system to the discussed use case which gives us
-an advantage over other STM systems that are more general. With this
+Unfortunately, STM has a big performance problem. Particularly, for
+our use case there is not much static information available since we
+are executing a program only known at runtime. Additionally, replacing
+the GIL means running everything in transactions, so there is not much
+code that can run outside and be optimized better.
+
+One way to get more performance is to make STM systems that make
+better use of low-level features in existing OS kernels.  We are
+currently working on a STM system that makes use of several such
+features like virtual memory and memory segmentation.  We further
+tailor the system to the discussed use case which gives us an
+advantage over other STM systems that are more general. With this
 approach, initial results suggest that we can keep the overhead of STM
 below 50\%. A hybrid TM system, which also uses HTM to accelerate
 certain tasks, looks like a very promising direction of research


[pypy-commit] extradoc extradoc: minor changes

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5221:b88e3053f8a9
Date: 2014-05-02 13:46 +0200
http://bitbucket.org/pypy/extradoc/changeset/b88e3053f8a9/

Log: minor changes

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -393,10 +393,11 @@
 Unfortunately, STM has a big performance problem. Particularly, for
 our use case there is not much static information available since we
 are executing a program only known at runtime. Additionally, replacing
-the GIL means running everything in transactions, so there is not much
-code that can run outside and be optimized better.
+the GIL means running every part of the application in transactions,
+so there is not much code that can run outside and that can be
+optimized better. The performance of the TM system is vital.
 
-One way to get more performance is to make STM systems that make
+One way to get more performance is to develop STM systems that make
 better use of low-level features in existing OS kernels.  We are
 currently working on a STM system that makes use of several such
 features like virtual memory and memory segmentation.  We further
@@ -405,8 +406,8 @@
 approach, initial results suggest that we can keep the overhead of STM
 below 50\%. A hybrid TM system, which also uses HTM to accelerate
 certain tasks, looks like a very promising direction of research
-too. In general we believe that further work to reduce the overhead of
-STM is very worthwhile.
+too. We believe that further work to reduce the overhead of STM is
+very worthwhile.
 
 
 


[pypy-commit] extradoc extradoc: additional cite

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5222:5d9030322a0b
Date: 2014-05-02 13:53 +0200
http://bitbucket.org/pypy/extradoc/changeset/5d9030322a0b/

Log: additional cite

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -322,9 +322,9 @@
 systems often scale very well to a big number of threads and
 eventually overtake the single-threaded execution, they often provide
 no benefits at all for low numbers of threads (1-8). There are some
-attempts \cite{warmhoff13} that can reduce the overhead a lot, but
-also scale very badly so that their benefit on more than one thread is
-little.
+attempts \cite{warmhoff13,spear09} that can reduce the overhead a lot,
+but scale badly or only for certain workloads. Often the benefits
+on more than one thread are too little in real world applications.
 
 However, STM compared to HTM does not suffer from the same restricting
 limitations. Transactions can be arbitrarily long.  This makes it
@@ -481,6 +481,10 @@
   symposium on Principles of programming languages (POPL '06)}. ACM,
   New York, NY, USA
 
+\bibitem{spear09}
+  Spear, Michael F., et al. "Transactional mutex locks." \emph{SIGPLAN
+Workshop on Transactional Computing.} 2009.
+
 \end{thebibliography}
 
 


[pypy-commit] pypy default: fixup (arigato)

2014-05-02 Thread mattip
Author: Matti Picus 
Branch: 
Changeset: r71195:67fb13391dc6
Date: 2014-05-02 12:45 +
http://bitbucket.org/pypy/pypy/changeset/67fb13391dc6/

Log: fixup (arigato)

diff --git a/pypy/doc/release-2.3.0.rst b/pypy/doc/release-2.3.0.rst
--- a/pypy/doc/release-2.3.0.rst
+++ b/pypy/doc/release-2.3.0.rst
@@ -18,17 +18,18 @@
 http://pypy.org/download.html
 
 We would like to thank our donors for the continued support of the PyPy
-project. We showed quite a bit of progress on all three projects (see below)
-and we're slowly running out of funds.
-Please consider donating more so we can finish those projects!  The three
-projects are:
+project, and for those who donate to our three sub-projects.
+We showed quite a bit of progress 
+but we're slowly running out of funds.
+Please consider donating more, or even better convince your employer to donate,
+so we can finish those projects!  The three sub-projects are:
 
 * `Py3k`_ (supporting Python 3.x): the release PyPy3 2.2 is imminent.
 
 * `STM`_ (software transactional memory): a preview will be released very soon,
   once we fix a few bugs
 
-* `NumPy`_ the work done is included in the PyPy 2.2 release. More details 
below.
+* `NumPy`_ which is included in the PyPy 2.3 release. More details below.
 
 .. _`Py3k`: http://pypy.org/py3donate.html
 .. _`STM`: http://pypy.org/tmdonate2.html
@@ -44,8 +45,8 @@
 =
 
 PyPy is a very compliant Python interpreter, almost a drop-in replacement for
-CPython 2.7. It's fast (`pypy 2.2 and cpython 2.7.2`_ performance comparison;
-note that the latest cpython is not faster than cpython 2.7.2)
+CPython 2.7. It's fast (`pypy 2.3 and cpython 2.7.x`_ performance comparison;
+note that cpython's speed has not changed since 2.7.2)
 due to its integrated tracing JIT compiler.
 
 This release supports x86 machines running Linux 32/64, Mac OS X 64, Windows,
@@ -56,13 +57,13 @@
 bit python is still stalling, we would welcome a volunteer
 to `handle that`_.
 
-.. _`pypy 2.2 and cpython 2.7.2`: http://speed.pypy.org
+.. _`pypy 2.3 and cpython 2.7.x`: http://speed.pypy.org
 .. _`handle that`: 
http://doc.pypy.org/en/latest/windows.html#what-is-missing-for-a-full-64-bit-translation
 
 Highlights
 ==
 
-Bugfixes
+Bugfixes 
 
 
 Many issues were cleaned up after being reported by users to 
https://bugs.pypy.org (ignore the bad SSL certificate) or on IRC at #pypy. Note 
that we consider
@@ -71,7 +72,7 @@
 * The ARM port no longer crashes on unaligned memory access to floats and 
doubles,
   and singlefloats are supported in the JIT.
 
-* Generators are faster since they now skip unecessary cleanup
+* Generators are faster since they now skip unnecessary cleanup
 
 * A first time contributor simplified JIT traces by adding integer bound
   propagation in indexing and logical operations.
@@ -84,6 +85,8 @@
 
 * Fix a rpython bug with loop-unrolling that appeared in the `HippyVM`_ PHP 
port
 
+* Support for corner cases on objects with __int__ and __float__ methods
+
 .. _`HippyVM`: http://www.hippyvm.com
 
 New Platforms and Features
@@ -97,8 +100,6 @@
 
 * Support for precompiled headers in the build process for MSVC
 
-* Support for objects with __int__ and __float__ methods
-
 * Tweak support of errno in cpyext (the PyPy implemenation of the capi)
 
 
@@ -127,8 +128,12 @@
 * A cffi-based ``numpy.random`` module is available as a branch in the numpy
   repository, it will be merged soon after this release.
 
-* enhancements to the PyPy JIT were made to support virtualizing the 
raw_store/raw_load memory operations used in numpy arrays. Further work remains 
here in virtualizing the alloc_raw_storage when possible. This will allow 
scalars to have storages but still be virtualized when possible in loops.
+* enhancements to the PyPy JIT were made to support virtualizing the 
raw_store/raw_load 
+  memory operations used in numpy arrays. Further work remains here in 
virtualizing the 
+  alloc_raw_storage when possible. This will allow scalars to have storages 
but still be 
+  virtualized when possible in loops.
 
 Cheers
+
 The PyPy Team
 


[pypy-commit] extradoc extradoc: add citation

2014-05-02 Thread Raemi
Author: Remi Meier 
Branch: extradoc
Changeset: r5223:bd5dc3629fa4
Date: 2014-05-02 15:30 +0200
http://bitbucket.org/pypy/extradoc/changeset/bd5dc3629fa4/

Log: add citation

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -13,7 +13,6 @@
 \usepackage{hyperref}
 \usepackage{amsmath}
 
-
 \begin{document}
 
 \special{papersize=8.5in,11in}
@@ -132,19 +131,19 @@
 
 \subsection{Why is there a GIL?}
 The GIL is a very simple synchronisation mechanism for supporting
-multithreading in the interpreter. The basic guarantee is that the
-GIL may only be released in-between bytecode instructions. The
-interpreter can thus rely on complete isolation and atomicity of these
+multithreading in the interpreter. The basic guarantee is that the GIL
+may only be released in-between bytecode instructions. The interpreter
+can thus rely on complete isolation and atomicity of these
 instructions. Additionally, it provides the application with a
-sequential consistency model. As a consequence, applications can rely
-on certain operations to be atomic and that they will always be
-executed in the order in which they appear in the code. While
-depending on this may not always be a good idea, it is done in
-practice. A GIL-replacement should therefore uphold these
+sequential consistency model\cite{lamport79}. As a consequence,
+applications can rely on certain operations to be atomic and that they
+will always be executed in the order in which they appear in the
+code. While depending on this may not always be a good idea, it is
+done in practice. A GIL-replacement should therefore uphold these
 guarantees, while preferably also be as easily implementable as a GIL
 for the interpreter.
-[xxx mention that the interpreter is typically very large and maintained
-by open-source communities]
+[xxx mention that the interpreter is typically
+  very large and maintained by open-source communities]
 
 The GIL also allows for easy integration with external C libraries that
 may not be thread-safe. For the duration of the calls, we
@@ -350,9 +349,10 @@
 %% - unlimited transaction length (easy atomic blocks)
 
 
+
 \section{The Way Forward}
 
-\begin{table*}[!ht]
+\begin{table*}[h]
   \centering
   \begin{tabular}{|l|c|c|c|c|c|}
 \hline
@@ -422,9 +422,9 @@
 
 %% This is the text of the appendix, if you need one.
 
-\acks
+%% \acks
 
-Acknowledgements...
+%% Acknowledgements...
 
 % We recommend abbrvnat bibliography style.
 
@@ -485,6 +485,12 @@
   Spear, Michael F., et al. "Transactional mutex locks." \emph{SIGPLAN
 Workshop on Transactional Computing.} 2009.
 
+\bibitem{lamport79}
+  Lamport, Leslie. "How to make a multiprocessor computer that
+  correctly executes multiprocess programs." \emph{Computers, IEEE
+Transactions} on 100.9 (1979): 690-691.
+
+
 \end{thebibliography}
 
 


[pypy-commit] pypy stmgc-c7: Bah, can't pass anything via r11 across the call --- the CALL instruction itself may need r11

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: stmgc-c7
Changeset: r71196:34b987aa50c1
Date: 2014-05-02 17:04 +0200
http://bitbucket.org/pypy/pypy/changeset/34b987aa50c1/

Log: Bah, can't pass anything via r11 across the call --- the CALL
instruction itself may need r11

diff --git a/rpython/jit/backend/x86/assembler.py 
b/rpython/jit/backend/x86/assembler.py
--- a/rpython/jit/backend/x86/assembler.py
+++ b/rpython/jit/backend/x86/assembler.py
@@ -395,10 +395,6 @@
 mc = codebuf.MachineCodeBlockWrapper()
 #
 if not for_frame:
-if self.cpu.gc_ll_descr.stm:
-assert IS_X86_64
-mc.PUSH_r(X86_64_SCRATCH_REG.value)
-#
 self._push_all_regs_to_frame(mc, [], withfloats, callee_only=True)
 #
 if self.cpu.gc_ll_descr.stm:
@@ -407,7 +403,7 @@
 # current 'stm_location' so that it is found.  The easiest
 # is to simply push it on the shadowstack, from its source
 # location as two extra arguments on the machine stack
-# (at this point containing: [ref][retaddr][num][obj]...)
+# (at this point containing: [retaddr][ref][num][obj]...)
 # XXX this should also be done if 'for_frame' is true...
 mc.MOV(esi, self.heap_shadowstack_top())
 mc.MOV_rs(edi.value, 2 * WORD)   # [num]
@@ -416,9 +412,8 @@
 mc.LEA_ra(edi.value, (self.SEGMENT_NO, rx86.NO_BASE_REGISTER,
   edi.value, 1, +1))
 mc.MOV_mr((self.SEGMENT_NO, esi.value, 0), edi.value)
-mc.MOV_rs(edi.value, 0 * WORD)   # [ref]
+mc.MOV_rs(edi.value, 1 * WORD)   # [ref]
 mc.MOV_mr((self.SEGMENT_NO, esi.value, WORD), edi.value)
-mc.MOV_sr(0 * WORD, esi.value)   # save org shadowstack_top
 mc.LEA_rm(esi.value, (self.SEGMENT_NO, esi.value, 2 * WORD))
 mc.MOV(self.heap_shadowstack_top(), esi)
 mc.MOV_rs(edi.value, 3 * WORD)   # [obj]
@@ -466,13 +461,17 @@
 #
 
 if not for_frame:
+if self.cpu.gc_ll_descr.stm:
+# SUB touches CPU flags
+mc.MOV(esi, self.heap_shadowstack_top())
+mc.LEA_rm(esi.value, (self.SEGMENT_NO, esi.value, -2 * WORD))
+mc.MOV(self.heap_shadowstack_top(), esi)
 if IS_X86_32:
 # ADD touches CPU flags
 mc.LEA_rs(esp.value, 2 * WORD)
 self._pop_all_regs_from_frame(mc, [], withfloats, callee_only=True)
 if self.cpu.gc_ll_descr.stm:
-mc.POP(self.heap_shadowstack_top())
-mc.RET16_i(2 * WORD)
+mc.RET16_i(3 * WORD)
 else:
 mc.RET16_i(WORD)
 else:
@@ -2269,7 +2268,7 @@
 num, ref = extract_raw_stm_location(
 self._regalloc.stm_location)
 mc.PUSH(imm(rffi.cast(lltype.Signed, num)))
-mc.MOV(X86_64_SCRATCH_REG, imm(rffi.cast(lltype.Signed, ref)))
+mc.PUSH(imm(rffi.cast(lltype.Signed, ref)))
 if is_frame and align_stack:
 mc.SUB_ri(esp.value, 16 - WORD) # erase the return address
 mc.CALL(imm(self.wb_slowpath[helper_num]))


[pypy-commit] pypy release-2.3.x: merge default into branch

2014-05-02 Thread mattip
Author: mattip 
Branch: release-2.3.x
Changeset: r71197:119b5f4c257f
Date: 2014-05-02 15:00 +0300
http://bitbucket.org/pypy/pypy/changeset/119b5f4c257f/

Log: merge default into branch

diff --git a/pypy/module/_lsprof/test/test_cprofile.py 
b/pypy/module/_lsprof/test/test_cprofile.py
--- a/pypy/module/_lsprof/test/test_cprofile.py
+++ b/pypy/module/_lsprof/test/test_cprofile.py
@@ -43,7 +43,7 @@
 )
 by_id = set()
 for entry in stats:
-if entry.code == f1.__code__:
+if entry.code == f1.func_code:
 assert len(entry.calls) == 2
 for subentry in entry.calls:
 assert subentry.code in expected
@@ -144,8 +144,8 @@
 entries = {}
 for entry in stats:
 entries[entry.code] = entry
-efoo = entries[foo.__code__]
-ebar = entries[bar.__code__]
+efoo = entries[foo.func_code]
+ebar = entries[bar.func_code]
 assert 0.9 < efoo.totaltime < 2.9
 # --- cannot test .inlinetime, because it does not include
 # --- the time spent doing the call to time.time()
@@ -219,12 +219,12 @@
 lines.remove(line)
 break
 else:
-print('NOT FOUND: %s' % pattern.rstrip('\n'))
-print('--- GOT ---')
-print(got)
-print()
-print('--- EXPECTED ---')
-print(expected)
+print 'NOT FOUND:', pattern.rstrip('\n')
+print '--- GOT ---'
+print got
+print
+print '--- EXPECTED ---'
+print expected
 assert False
 assert not lines
 finally:
diff --git a/pypy/module/pypyjit/test_pypy_c/test_cprofile.py 
b/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
--- a/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_cprofile.py
@@ -1,4 +1,4 @@
-import py
+import py, sys
 from pypy.module.pypyjit.test_pypy_c.test_00_model import BaseTestPyPyC
 
 class TestCProfile(BaseTestPyPyC):
@@ -26,6 +26,10 @@
 for method in ['append', 'pop']:
 loop, = log.loops_by_id(method)
 print loop.ops_by_id(method)
-assert ' call(' not in repr(loop.ops_by_id(method))
+# on 32-bit, there is f1=read_timestamp(); ...;
+# f2=read_timestamp(); f3=call(llong_sub,f1,f2)
+# which should turn into a single PADDQ/PSUBQ
+if sys.maxint != 2147483647:
+assert ' call(' not in repr(loop.ops_by_id(method))
 assert ' call_may_force(' not in repr(loop.ops_by_id(method))
 assert ' cond_call(' in repr(loop.ops_by_id(method))


[pypy-commit] pypy default: skip tests on 32 bit

2014-05-02 Thread mattip
Author: mattip 
Branch: 
Changeset: r71198:c03087ca1843
Date: 2014-05-02 18:07 +0300
http://bitbucket.org/pypy/pypy/changeset/c03087ca1843/

Log: skip tests on 32 bit

diff --git a/pypy/module/cppyy/test/test_datatypes.py 
b/pypy/module/cppyy/test/test_datatypes.py
--- a/pypy/module/cppyy/test/test_datatypes.py
+++ b/pypy/module/cppyy/test/test_datatypes.py
@@ -7,6 +7,8 @@
 def setup_module(mod):
 if sys.platform == 'win32':
 py.test.skip("win32 not supported so far")
+if sys.maxsize < 2 ** 31:
+py.test.skip("32 bit not supported so far")
 err = os.system("cd '%s' && make datatypesDict.so" % currpath)
 if err:
 raise OSError("'make' failed (see stderr)")
@@ -30,7 +32,7 @@
 def test02_instance_data_read_access(self):
 """Test read access to instance public data and verify values"""
 
-import cppyy, sys
+import cppyy
 cppyy_test_data = cppyy.gbl.cppyy_test_data
 
 c = cppyy_test_data()
@@ -117,7 +119,7 @@
 def test03_instance_data_write_access(self):
 """Test write access to instance public data and verify values"""
 
-import cppyy, sys
+import cppyy
 cppyy_test_data = cppyy.gbl.cppyy_test_data
 
 c = cppyy_test_data()
@@ -489,14 +491,14 @@
 
 import cppyy
 four_vector = cppyy.gbl.four_vector
-
+
 t1 = four_vector(1., 2., 3., -4.)
 t2 = four_vector(0., 0., 0.,  0.)
 t3 = four_vector(t1)
-  
+
 assert t1 == t3
 assert t1 != t2
-
+
 for i in range(4):
 assert t1[i] == t3[i]
 
@@ -625,8 +627,8 @@
 
 def test18_object_and_pointer_comparisons(self):
 """Verify object and pointer comparisons"""
-
-import cppyy 
+
+import cppyy
 gbl = cppyy.gbl
 
 c1 = cppyy.bind_object(0, gbl.cppyy_test_data)
@@ -662,11 +664,11 @@
 
 def test19_object_validity(self):
 """Test object validity checking"""
-
+
 from cppyy import gbl
 
 d = gbl.cppyy_test_pod()
- 
+
 assert d
 assert not not d
 


[pypy-commit] pypy release-2.3.x: merge default into branch

2014-05-02 Thread mattip
Author: mattip 
Branch: release-2.3.x
Changeset: r71199:6c9fdc71640b
Date: 2014-05-02 18:09 +0300
http://bitbucket.org/pypy/pypy/changeset/6c9fdc71640b/

Log: merge default into branch

diff --git a/pypy/doc/release-2.3.0.rst b/pypy/doc/release-2.3.0.rst
--- a/pypy/doc/release-2.3.0.rst
+++ b/pypy/doc/release-2.3.0.rst
@@ -18,17 +18,18 @@
 http://pypy.org/download.html
 
 We would like to thank our donors for the continued support of the PyPy
-project. We showed quite a bit of progress on all three projects (see below)
-and we're slowly running out of funds.
-Please consider donating more so we can finish those projects!  The three
-projects are:
+project, and for those who donate to our three sub-projects.
+We showed quite a bit of progress 
+but we're slowly running out of funds.
+Please consider donating more, or even better convince your employer to donate,
+so we can finish those projects!  The three sub-projects are:
 
 * `Py3k`_ (supporting Python 3.x): the release PyPy3 2.2 is imminent.
 
 * `STM`_ (software transactional memory): a preview will be released very soon,
   once we fix a few bugs
 
-* `NumPy`_ the work done is included in the PyPy 2.2 release. More details 
below.
+* `NumPy`_ which is included in the PyPy 2.3 release. More details below.
 
 .. _`Py3k`: http://pypy.org/py3donate.html
 .. _`STM`: http://pypy.org/tmdonate2.html
@@ -44,8 +45,8 @@
 =
 
 PyPy is a very compliant Python interpreter, almost a drop-in replacement for
-CPython 2.7. It's fast (`pypy 2.2 and cpython 2.7.2`_ performance comparison;
-note that the latest cpython is not faster than cpython 2.7.2)
+CPython 2.7. It's fast (`pypy 2.3 and cpython 2.7.x`_ performance comparison;
+note that cpython's speed has not changed since 2.7.2)
 due to its integrated tracing JIT compiler.
 
 This release supports x86 machines running Linux 32/64, Mac OS X 64, Windows,
@@ -56,13 +57,13 @@
 bit python is still stalling, we would welcome a volunteer
 to `handle that`_.
 
-.. _`pypy 2.2 and cpython 2.7.2`: http://speed.pypy.org
+.. _`pypy 2.3 and cpython 2.7.x`: http://speed.pypy.org
 .. _`handle that`: 
http://doc.pypy.org/en/latest/windows.html#what-is-missing-for-a-full-64-bit-translation
 
 Highlights
 ==
 
-Bugfixes
+Bugfixes 
 
 
 Many issues were cleaned up after being reported by users to 
https://bugs.pypy.org (ignore the bad SSL certificate) or on IRC at #pypy. Note 
that we consider
@@ -71,7 +72,7 @@
 * The ARM port no longer crashes on unaligned memory access to floats and 
doubles,
   and singlefloats are supported in the JIT.
 
-* Generators are faster since they now skip unecessary cleanup
+* Generators are faster since they now skip unnecessary cleanup
 
 * A first time contributor simplified JIT traces by adding integer bound
   propagation in indexing and logical operations.
@@ -84,6 +85,8 @@
 
 * Fix a rpython bug with loop-unrolling that appeared in the `HippyVM`_ PHP 
port
 
+* Support for corner cases on objects with __int__ and __float__ methods
+
 .. _`HippyVM`: http://www.hippyvm.com
 
 New Platforms and Features
@@ -97,8 +100,6 @@
 
 * Support for precompiled headers in the build process for MSVC
 
-* Support for objects with __int__ and __float__ methods
-
 * Tweak support of errno in cpyext (the PyPy implemenation of the capi)
 
 
@@ -127,8 +128,12 @@
 * A cffi-based ``numpy.random`` module is available as a branch in the numpy
   repository, it will be merged soon after this release.
 
-* enhancements to the PyPy JIT were made to support virtualizing the 
raw_store/raw_load memory operations used in numpy arrays. Further work remains 
here in virtualizing the alloc_raw_storage when possible. This will allow 
scalars to have storages but still be virtualized when possible in loops.
+* enhancements to the PyPy JIT were made to support virtualizing the 
raw_store/raw_load 
+  memory operations used in numpy arrays. Further work remains here in 
virtualizing the 
+  alloc_raw_storage when possible. This will allow scalars to have storages 
but still be 
+  virtualized when possible in loops.
 
 Cheers
+
 The PyPy Team
 
diff --git a/pypy/module/cppyy/test/test_datatypes.py 
b/pypy/module/cppyy/test/test_datatypes.py
--- a/pypy/module/cppyy/test/test_datatypes.py
+++ b/pypy/module/cppyy/test/test_datatypes.py
@@ -7,6 +7,8 @@
 def setup_module(mod):
 if sys.platform == 'win32':
 py.test.skip("win32 not supported so far")
+if sys.maxsize < 2 ** 31:
+py.test.skip("32 bit not supported so far")
 err = os.system("cd '%s' && make datatypesDict.so" % currpath)
 if err:
 raise OSError("'make' failed (see stderr)")
@@ -30,7 +32,7 @@
 def test02_instance_data_read_access(self):
 """Test read access to instance public data and verify values"""
 
-import cppyy, sys
+import cppyy
 cppyy_test_data = cppyy.gbl.cppyy_test_data
 
 c = cppyy_test_data()
@@ -117,7 +119,7 @@
 def test03_instance_data_write_acce

[pypy-commit] stmgc marker: More symmetrically, put also in the "other"

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: marker
Changeset: r1194:937201ff1335
Date: 2014-05-02 17:46 +0200
http://bitbucket.org/pypy/stmgc/changeset/937201ff1335/

Log: More symmetrically, put  also in the
"other" field when reporting a contention where "self" writes and
"other" reads.

diff --git a/c7/stm/marker.c b/c7/stm/marker.c
--- a/c7/stm/marker.c
+++ b/c7/stm/marker.c
@@ -165,32 +165,27 @@
 
 /* For some categories, we can also collect the relevant information
for the other segment. */
+char *outmarker = abort_other ? other_pseg->marker_self
+  : my_pseg->marker_other;
 switch (kind) {
 case WRITE_WRITE_CONTENTION:
 marker_fetch_obj_write(other_segment_num, obj, other_marker);
+   marker_expand(other_marker, other_segment_base, outmarker);
 break;
 case INEVITABLE_CONTENTION:
 assert(abort_other == false);
 other_marker[0] = other_pseg->marker_inev[0];
 other_marker[1] = other_pseg->marker_inev[1];
+   marker_expand(other_marker, other_segment_base, outmarker);
 break;
+case WRITE_READ_CONTENTION:
+   strcpy(outmarker, "");
+   break;
 default:
-other_marker[0] = 0;
-other_marker[1] = 0;
+   outmarker[0] = 0;
 break;
 }
 
-marker_expand(other_marker, other_segment_base,
-  abort_other ? other_pseg->marker_self
-  : my_pseg->marker_other);
-
-if (abort_other && other_pseg->marker_self[0] == 0) {
-if (kind == WRITE_READ_CONTENTION)
-strcpy(other_pseg->marker_self, "");
-else
-strcpy(other_pseg->marker_self, "");
-}
-
 release_marker_lock(other_segment_base);
 }
 
diff --git a/c7/test/test_marker.py b/c7/test/test_marker.py
--- a/c7/test/test_marker.py
+++ b/c7/test/test_marker.py
@@ -260,7 +260,8 @@
 tl = self.get_stm_thread_local()
 assert tl.longest_marker_state == lib.STM_TIME_RUN_ABORTED_WRITE_READ
 assert ffi.string(tl.longest_marker_self) == '19'
-assert ffi.string(tl.longest_marker_other) == ''
+assert ffi.string(tl.longest_marker_other) == (
+'')
 
 def test_double_remote_markers_cb_write_write(self):
 @ffi.callback("void(char *, uintptr_t, object_t *, char *, size_t)")


[pypy-commit] pypy default: readd less controversial py3k compat

2014-05-02 Thread pjenvey
Author: Philip Jenvey 
Branch: 
Changeset: r71200:2b5234972cd9
Date: 2014-05-02 09:30 -0700
http://bitbucket.org/pypy/pypy/changeset/2b5234972cd9/

Log: readd less controversial py3k compat

diff --git a/pypy/module/_lsprof/test/test_cprofile.py 
b/pypy/module/_lsprof/test/test_cprofile.py
--- a/pypy/module/_lsprof/test/test_cprofile.py
+++ b/pypy/module/_lsprof/test/test_cprofile.py
@@ -43,7 +43,7 @@
 )
 by_id = set()
 for entry in stats:
-if entry.code == f1.func_code:
+if entry.code == f1.__code__:
 assert len(entry.calls) == 2
 for subentry in entry.calls:
 assert subentry.code in expected
@@ -144,8 +144,8 @@
 entries = {}
 for entry in stats:
 entries[entry.code] = entry
-efoo = entries[foo.func_code]
-ebar = entries[bar.func_code]
+efoo = entries[foo.__code__]
+ebar = entries[bar.__code__]
 assert 0.9 < efoo.totaltime < 2.9
 # --- cannot test .inlinetime, because it does not include
 # --- the time spent doing the call to time.time()
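
The attribute flip-flopping in this and the earlier backout commit comes down
to a naming difference between Python versions; a quick illustration (not part
of the commit):

    def f():
        return 42

    # Python 2.6+ and Python 3 both expose the code object as __code__;
    # func_code is the legacy, Python-2-only spelling of the same thing.
    assert f.__code__.co_name == 'f'
    assert getattr(f, 'func_code', f.__code__) is f.__code__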


[pypy-commit] pypy default: merge heads

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71202:b635f1afc5ce
Date: 2014-05-02 12:33 -0400
http://bitbucket.org/pypy/pypy/changeset/b635f1afc5ce/

Log: merge heads



[pypy-commit] pypy default: redo test_cprofile py3k compat so it works the same

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71201:e89381f9b054
Date: 2014-05-02 12:32 -0400
http://bitbucket.org/pypy/pypy/changeset/e89381f9b054/

Log: redo test_cprofile py3k compat so it works the same

diff --git a/pypy/module/_lsprof/test/test_cprofile.py 
b/pypy/module/_lsprof/test/test_cprofile.py
--- a/pypy/module/_lsprof/test/test_cprofile.py
+++ b/pypy/module/_lsprof/test/test_cprofile.py
@@ -43,7 +43,7 @@
 )
 by_id = set()
 for entry in stats:
-if entry.code == f1.func_code:
+if entry.code == f1.__code__:
 assert len(entry.calls) == 2
 for subentry in entry.calls:
 assert subentry.code in expected
@@ -144,8 +144,8 @@
 entries = {}
 for entry in stats:
 entries[entry.code] = entry
-efoo = entries[foo.func_code]
-ebar = entries[bar.func_code]
+efoo = entries[foo.__code__]
+ebar = entries[bar.__code__]
 assert 0.9 < efoo.totaltime < 2.9
 # --- cannot test .inlinetime, because it does not include
 # --- the time spent doing the call to time.time()
@@ -219,12 +219,12 @@
 lines.remove(line)
 break
 else:
-print 'NOT FOUND:', pattern.rstrip('\n')
-print '--- GOT ---'
-print got
-print
-print '--- EXPECTED ---'
-print expected
+print('NOT FOUND: %s' % pattern.rstrip('\n'))
+print('--- GOT ---')
+print(got)
+print('')
+print('--- EXPECTED ---')
+print(expected)
 assert False
 assert not lines
 finally:
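The rewritten output stays the same on Python 2 and 3 because every call passes exactly one pre-formatted string; a small sketch of the pattern with made-up values:

    got = 'sample got text'
    pattern = 'expected line\n'

    # A single %-formatted argument parses as a print statement on Python 2
    # and as a print() call on Python 3, so no __future__ import is required.
    print('NOT FOUND: %s' % pattern.rstrip('\n'))
    print('--- GOT ---')
    print(got)
    print('')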


[pypy-commit] pypy default: support __future__ flags in gateway appdef from func obj

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71203:b72c31dbc037
Date: 2014-05-02 13:35 -0400
http://bitbucket.org/pypy/pypy/changeset/b72c31dbc037/

Log:support __future__ flags in gateway appdef from func obj

diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py
--- a/pypy/interpreter/gateway.py
+++ b/pypy/interpreter/gateway.py
@@ -1074,7 +1074,9 @@
return x+y
 ''')
 """
+prefix = ""
 if not isinstance(source, str):
+flags = source.__code__.co_flags
 source = py.std.inspect.getsource(source).lstrip()
 while source.startswith(('@py.test.mark.', '@pytest.mark.')):
 # these decorators are known to return the same function
@@ -1083,12 +1085,21 @@
 source = source[source.find('\n') + 1:].lstrip()
 assert source.startswith("def "), "can only transform functions"
 source = source[4:]
+import __future__
+if flags & __future__.CO_FUTURE_DIVISION:
+prefix += "from __future__ import division\n"
+if flags & __future__.CO_FUTURE_ABSOLUTE_IMPORT:
+prefix += "from __future__ import absolute_import\n"
+if flags & __future__.CO_FUTURE_PRINT_FUNCTION:
+prefix += "from __future__ import print_function\n"
+if flags & __future__.CO_FUTURE_UNICODE_LITERALS:
+prefix += "from __future__ import unicode_literals\n"
 p = source.find('(')
 assert p >= 0
 funcname = source[:p].strip()
 source = source[p:]
 assert source.strip()
-funcsource = "def %s%s\n" % (funcname, source)
+funcsource = prefix + "def %s%s\n" % (funcname, source)
 #for debugging of wrong source code: py.std.parser.suite(funcsource)
 a = applevel(funcsource, filename=filename)
 return a.interphook(funcname)
diff --git a/pypy/interpreter/test/test_gateway.py 
b/pypy/interpreter/test/test_gateway.py
--- a/pypy/interpreter/test/test_gateway.py
+++ b/pypy/interpreter/test/test_gateway.py
@@ -1,18 +1,20 @@
-
 # -*- coding: utf-8 -*-
 
+from __future__ import division, print_function  # for test_app2interp_future
 from pypy.interpreter import gateway, argument
 from pypy.interpreter.gateway import ObjSpace, W_Root, WrappedDefault
 from pypy.interpreter.signature import Signature
 import py
 import sys
 
+
 class FakeFunc(object):
 def __init__(self, space, name):
 self.space = space
 self.name = name
 self.defs_w = []
 
+
 class TestBuiltinCode:
 def test_signature(self):
 def c(space, w_x, w_y, hello_w):
@@ -89,8 +91,8 @@
 w_result = code.funcrun(FakeFunc(self.space, "c"), args)
 assert self.space.eq_w(w_result, w(1020))
 
+
 class TestGateway:
-
 def test_app2interp(self):
 w = self.space.wrap
 def app_g3(a, b):
@@ -117,6 +119,14 @@
 args = gateway.Arguments(self.space, [w(6)], ['hello', 'world'], 
[w(7), w(8)])
 assert self.space.int_w(gg(self.space, w(3), args)) == 213
 
+def test_app2interp_future(self):
+w = self.space.wrap
+def app_g3(a, b):
+print(end='')
+return a / b
+g3 = gateway.app2interp_temp(app_g3)
+assert self.space.eq_w(g3(self.space, w(1), w(4),), w(0.25))
+
 def test_interp2app(self):
 space = self.space
 w = space.wrap
@@ -616,7 +626,7 @@
 w_app_f = self.space.wrap(app_f)
 
 assert isinstance(w_app_f.code, gateway.BuiltinCode2)
-
+
 called = []
 fastcall_2 = w_app_f.code.fastcall_2
 def witness_fastcall_2(space, w_func, w_a, w_b):
@@ -736,7 +746,6 @@
 
 
 class TestPassThroughArguments:
-
 def test_pass_trough_arguments0(self):
 space = self.space
 
@@ -834,7 +843,6 @@
 
 
 class AppTestKeywordsToBuiltinSanity(object):
-
 def test_type(self):
 class X(object):
 def __init__(self, **kw):
@@ -873,4 +881,3 @@
 
 d.update(**{clash: 33})
 dict.update(d, **{clash: 33})
-
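As background for the `co_flags` checks in gateway.py, here is a small standalone sketch (not the gateway code itself) of how a code object records the `__future__` features that were active at compile time; the constants and `all_feature_names` come from the standard `__future__` module:

    from __future__ import division, print_function

    import __future__

    def active_futures(func):
        """Names of __future__ features compiled into `func`'s code object."""
        flags = func.__code__.co_flags
        return [name for name in __future__.all_feature_names
                if flags & getattr(__future__, name).compiler_flag]

    def sample(a, b):
        return a / b

    # On Python 2 this reports 'division' and 'print_function' for `sample`;
    # on Python 3 those features are always on and no flag is recorded.
    print(active_futures(sample))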


[pypy-commit] pypy fix-tpname: fix cpyext slot wrapper repr

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: fix-tpname
Changeset: r71205:f1afb02b246a
Date: 2014-05-02 15:31 -0400
http://bitbucket.org/pypy/pypy/changeset/f1afb02b246a/

Log:fix cpyext slot wrapper repr

diff --git a/pypy/module/cpyext/methodobject.py 
b/pypy/module/cpyext/methodobject.py
--- a/pypy/module/cpyext/methodobject.py
+++ b/pypy/module/cpyext/methodobject.py
@@ -174,7 +174,7 @@
 def descr_method_repr(self):
 return self.space.wrap("<slot wrapper '%s' of '%s' objects>" %
(self.method_name,
-self.w_objclass.getname(self.space)))
+self.w_objclass.name))
 
 def cwrapper_descr_call(space, w_self, __args__):
 self = space.interp_w(W_PyCWrapperObject, w_self)
diff --git a/pypy/module/cpyext/test/test_typeobject.py 
b/pypy/module/cpyext/test/test_typeobject.py
--- a/pypy/module/cpyext/test/test_typeobject.py
+++ b/pypy/module/cpyext/test/test_typeobject.py
@@ -33,7 +33,7 @@
 assert "copy" in repr(module.fooType.copy)
 assert repr(module.fooType) == ""
 assert repr(obj2) == ""
-assert repr(module.fooType.__call__) == ""
+assert repr(module.fooType.__call__) == ""
 assert obj2(foo=1, bar=2) == dict(foo=1, bar=2)
 
 print(obj.foo)
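For reference, the repr being matched is CPython's own slot-wrapper repr; a quick check on a stock CPython interpreter (an illustration, not part of the diff):

    # CPython 2.7 and 3.x both print the same form for slot wrappers:
    print(repr(dict.__setitem__))   # <slot wrapper '__setitem__' of 'dict' objects>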


[pypy-commit] pypy fix-tpname: fix tp_name of cpyext typeobjects

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: fix-tpname
Changeset: r71204:2af214ac9f7e
Date: 2014-05-02 15:31 -0400
http://bitbucket.org/pypy/pypy/changeset/2af214ac9f7e/

Log:fix tp_name of cpyext typeobjects

diff --git a/pypy/module/cpyext/typeobject.py b/pypy/module/cpyext/typeobject.py
--- a/pypy/module/cpyext/typeobject.py
+++ b/pypy/module/cpyext/typeobject.py
@@ -291,14 +291,9 @@
 convert_getset_defs(space, dict_w, pto.c_tp_getset, self)
 convert_member_defs(space, dict_w, pto.c_tp_members, self)
 
-full_name = rffi.charp2str(pto.c_tp_name)
-if '.' in full_name:
-module_name, extension_name = rsplit(full_name, ".", 1)
-dict_w["__module__"] = space.wrap(module_name)
-else:
-extension_name = full_name
+name = rffi.charp2str(pto.c_tp_name)
 
-W_TypeObject.__init__(self, space, extension_name,
+W_TypeObject.__init__(self, space, name,
 bases_w or [space.w_object], dict_w)
 if not space.is_true(space.issubtype(self, space.w_type)):
 self.flag_cpytype = True


[pypy-commit] extradoc extradoc: Add a suggestion

2014-05-02 Thread arigo
Author: Armin Rigo 
Branch: extradoc
Changeset: r5224:5fdf06cfaac9
Date: 2014-05-02 22:55 +0200
http://bitbucket.org/pypy/extradoc/changeset/5fdf06cfaac9/

Log:Add a suggestion

diff --git a/talk/icooolps2014/position-paper.tex 
b/talk/icooolps2014/position-paper.tex
--- a/talk/icooolps2014/position-paper.tex
+++ b/talk/icooolps2014/position-paper.tex
@@ -343,6 +343,10 @@
 a whole program analysis since locks are inherently non-composable.
 The effectiveness of these approaches still has to be proven for our
 use case.
+[xxx or maybe: "The effectiveness of these approaches is doubtful in our
+use case --- for example, it makes it close to impossible to order the
+locks consistently or to know in advance which locks a transaction will
+need."]
 
 %% - overhead (100-1000\%) (barrier reference resolution, kills performance on 
low \#cpu)
 %% (FastLane: low overhead, not much gain)\\


[pypy-commit] pypy fix-tpname: fix translation

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: fix-tpname
Changeset: r71206:af335604eacc
Date: 2014-05-02 17:19 -0400
http://bitbucket.org/pypy/pypy/changeset/af335604eacc/

Log:fix translation

diff --git a/pypy/module/cpyext/methodobject.py 
b/pypy/module/cpyext/methodobject.py
--- a/pypy/module/cpyext/methodobject.py
+++ b/pypy/module/cpyext/methodobject.py
@@ -6,6 +6,7 @@
 from pypy.interpreter.gateway import interp2app
 from pypy.interpreter.typedef import (
 GetSetProperty, TypeDef, interp_attrproperty, interp_attrproperty_w)
+from pypy.objspace.std.typeobject import W_TypeObject
 from pypy.module.cpyext.api import (
 CONST_STRING, METH_CLASS, METH_COEXIST, METH_KEYWORDS, METH_NOARGS, METH_O,
 METH_STATIC, METH_VARARGS, PyObject, PyObjectFields, bootstrap_function,
@@ -158,7 +159,9 @@
 self.doc = doc
 self.func = func
 pyo = rffi.cast(PyObject, pto)
-self.w_objclass = from_ref(space, pyo)
+w_type = from_ref(space, pyo)
+assert isinstance(w_type, W_TypeObject)
+self.w_objclass = w_type
 
 def call(self, space, w_self, w_args, w_kw):
 if self.wrapper_func is None:
diff --git a/pypy/module/cpyext/typeobject.py b/pypy/module/cpyext/typeobject.py
--- a/pypy/module/cpyext/typeobject.py
+++ b/pypy/module/cpyext/typeobject.py
@@ -513,7 +513,7 @@
 from pypy.module.cpyext.stringobject import PyString_AsString
 pto.c_tp_name = PyString_AsString(space, heaptype.c_ht_name)
 else:
-pto.c_tp_name = rffi.str2charp(w_type.getname(space))
+pto.c_tp_name = rffi.str2charp(w_type.name)
 pto.c_tp_basicsize = -1 # hopefully this makes malloc bail out
 pto.c_tp_itemsize = 0
 # uninitialized fields:
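The new `assert isinstance(w_type, W_TypeObject)` is the usual way to let RPython's annotator prove the attribute access that follows; a plain-Python sketch of the same narrowing pattern, using simplified stand-in classes:

    class W_Root(object):
        pass

    class W_TypeObjectStub(W_Root):       # stand-in for the real W_TypeObject
        def __init__(self, name):
            self.name = name

    def objclass_name(w_obj):
        # A static checker (or RPython's annotator) only knows `w_obj` is a
        # W_Root here; the isinstance assert narrows it so `.name` is well-typed.
        assert isinstance(w_obj, W_TypeObjectStub)
        return w_obj.name

    print(objclass_name(W_TypeObjectStub('fooType')))   # -> fooType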


[pypy-commit] pypy reflex-support: consistency in use of integers

2014-05-02 Thread wlav
Author: Wim Lavrijsen 
Branch: reflex-support
Changeset: r71209:8282572159fe
Date: 2014-05-02 14:37 -0700
http://bitbucket.org/pypy/pypy/changeset/8282572159fe/

Log:consistency in use of integers

diff --git a/pypy/module/cppyy/capi/capi_types.py 
b/pypy/module/cppyy/capi/capi_types.py
--- a/pypy/module/cppyy/capi/capi_types.py
+++ b/pypy/module/cppyy/capi/capi_types.py
@@ -1,8 +1,8 @@
 from rpython.rtyper.lltypesystem import rffi, lltype
 
 # shared ll definitions
-_C_OPAQUE_PTR = rffi.LONG
-_C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO
+_C_OPAQUE_PTR = rffi.ULONG
+_C_OPAQUE_NULL = lltype.nullptr(rffi.ULONGP.TO)# ALT: _C_OPAQUE_PTR.TO
 
 C_SCOPE = _C_OPAQUE_PTR
 C_NULL_SCOPE = rffi.cast(C_SCOPE, _C_OPAQUE_NULL)
diff --git a/pypy/module/cppyy/capi/loadable_capi.py 
b/pypy/module/cppyy/capi/loadable_capi.py
--- a/pypy/module/cppyy/capi/loadable_capi.py
+++ b/pypy/module/cppyy/capi/loadable_capi.py
@@ -91,7 +91,7 @@
 
 # TODO: the following need to match up with the globally defined C_XYZ 
low-level
 # types (see capi/__init__.py), but by using strings here, that isn't 
guaranteed
-c_opaque_ptr = nt.new_primitive_type(space, 'long')
+c_opaque_ptr = nt.new_primitive_type(space, 'unsigned long')
  
 c_scope  = c_opaque_ptr
 c_type   = c_scope
@@ -259,10 +259,10 @@
 return c_call.ctype.rcall(c_call._cdata, args)
 
 def _cdata_to_cobject(space, w_cdata):
-return rffi.cast(C_OBJECT, space.int_w(w_cdata))
+return rffi.cast(C_OBJECT, space.uint_w(w_cdata))
 
 def _cdata_to_size_t(space, w_cdata):
-return rffi.cast(rffi.SIZE_T, space.int_w(w_cdata))
+return rffi.cast(rffi.SIZE_T, space.uint_w(w_cdata))
 
 def _cdata_to_ptr(space, w_cdata): # TODO: this is both a hack and dreadfully 
slow
 return rffi.cast(rffi.VOIDP,
@@ -281,12 +281,12 @@
 def c_resolve_name(space, name):
 return charp2str_free(space, call_capi(space, 'resolve_name', 
[_Arg(s=name)]))
 def c_get_scope_opaque(space, name):
-return rffi.cast(C_SCOPE, space.int_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
+return rffi.cast(C_SCOPE, space.uint_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
 def c_get_template(space, name):
-return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
+return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
 def c_actual_class(space, cppclass, cppobj):
 args = [_Arg(l=cppclass.handle), _Arg(l=cppobj)]
-return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'actual_class', 
args)))
+return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'actual_class', 
args)))
 
 # memory management --
 def c_allocate(space, cppclass):
@@ -302,7 +302,7 @@
 call_capi(space, 'call_v', args)
 def c_call_b(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
-return rffi.cast(rffi.UCHAR, space.c_int_w(call_capi(space, 'call_b', 
args)))
+return rffi.cast(rffi.UCHAR, space.c_uint_w(call_capi(space, 'call_b', 
args)))
 def c_call_c(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
 return rffi.cast(rffi.CHAR, space.str_w(call_capi(space, 'call_c', 
args))[0])
@@ -452,7 +452,7 @@
 
 def c_get_method(space, cppscope, index):
 args = [_Arg(l=cppscope.handle), _Arg(l=index)]
-return rffi.cast(C_METHOD, space.int_w(call_capi(space, 'get_method', 
args)))
+return rffi.cast(C_METHOD, space.uint_w(call_capi(space, 'get_method', 
args)))
 def c_get_global_operator(space, nss, lc, rc, op):
 if nss is not None:
 args = [_Arg(l=nss.handle), _Arg(l=lc.handle), _Arg(l=rc.handle), 
_Arg(s=op)]
diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py
--- a/pypy/module/cppyy/converter.py
+++ b/pypy/module/cppyy/converter.py
@@ -386,7 +386,7 @@
 try:
 # TODO: accept a 'capsule' rather than naked int
 # (do accept int(0), though)
-obj = rffi.cast(rffi.VOIDP, space.int_w(w_obj))
+obj = rffi.cast(rffi.VOIDP, space.uint_w(w_obj))
 except Exception:
 obj = rffi.cast(rffi.VOIDP, get_rawobject(space, w_obj))
 return obj
diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h
--- a/pypy/module/cppyy/include/capi.h
+++ b/pypy/module/cppyy/include/capi.h
@@ -7,10 +7,10 @@
 extern "C" {
 #endif // ifdef __cplusplus
 
-typedef long cppyy_scope_t;
+typedef unsigned long cppyy_scope_t;
 typedef cppyy_scope_t cppyy_type_t;
-typedef long cppyy_object_t;
-typedef long cppyy_method_t;
+typedef unsigned long cppyy_object_t;
+typedef unsigned long cppyy_method_t;
 typedef long cppyy_index_t;
 typedef void* (*cppyy_methptrgetter_t)(cppyy_object_t);
 

[pypy-commit] pypy reflex-support: fix potential overflow problems

2014-05-02 Thread wlav
Author: Wim Lavrijsen 
Branch: reflex-support
Changeset: r71208:563e581721d0
Date: 2014-05-02 14:35 -0700
http://bitbucket.org/pypy/pypy/changeset/563e581721d0/

Log:fix potential overflow problems

diff --git a/pypy/module/cppyy/capi/cint_capi.py 
b/pypy/module/cppyy/capi/cint_capi.py
--- a/pypy/module/cppyy/capi/cint_capi.py
+++ b/pypy/module/cppyy/capi/cint_capi.py
@@ -249,7 +249,7 @@
 
 def activate_branch(space, w_branch):
 w_branches = space.call_method(w_branch, "GetListOfBranches")
-for i in range(space.int_w(space.call_method(w_branches, 
"GetEntriesFast"))):
+for i in range(space.r_longlong_w(space.call_method(w_branches, 
"GetEntriesFast"))):
 w_b = space.call_method(w_branches, "At", space.wrap(i))
 activate_branch(space, w_b)
 space.call_method(w_branch, "SetStatus", space.wrap(1))
@@ -292,7 +292,7 @@
 activate_branch(space, w_branch)
 
 # figure out from where we're reading
-entry = space.int_w(space.call_method(w_self, "GetReadEntry"))
+entry = space.r_longlong_w(space.call_method(w_self, "GetReadEntry"))
 if entry == -1:
 entry = 0
 
@@ -341,7 +341,7 @@
 self.w_tree = w_tree
 
 self.current  = 0
-self.maxentry = space.int_w(space.call_method(w_tree, 
"GetEntriesFast"))
+self.maxentry = space.r_longlong_w(space.call_method(w_tree, 
"GetEntriesFast"))
 
 space = self.space = tree.space  # holds the class cache in 
State
 space.call_method(w_tree, "SetBranchStatus", space.wrap("*"), 
space.wrap(0))


[pypy-commit] pypy fix-tpname: additional fixes

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: fix-tpname
Changeset: r71210:1a89979e6d52
Date: 2014-05-02 18:52 -0400
http://bitbucket.org/pypy/pypy/changeset/1a89979e6d52/

Log:additional fixes

diff --git a/pypy/module/_io/interp_bufferedio.py 
b/pypy/module/_io/interp_bufferedio.py
--- a/pypy/module/_io/interp_bufferedio.py
+++ b/pypy/module/_io/interp_bufferedio.py
@@ -211,17 +211,16 @@
 return space.call_method(self.w_raw, "isatty")
 
 def repr_w(self, space):
-typename = space.type(self).getname(space)
-module = space.str_w(space.type(self).get_module())
+typename = space.type(self).name
 try:
 w_name = space.getattr(self, space.wrap("name"))
 except OperationError, e:
 if not e.match(space, space.w_AttributeError):
 raise
-return space.wrap("<%s.%s>" % (module, typename,))
+return space.wrap("<%s>" % (typename,))
 else:
 name_repr = space.str_w(space.repr(w_name))
-return space.wrap("<%s.%s name=%s>" % (module, typename, 
name_repr))
+return space.wrap("<%s name=%s>" % (typename, name_repr))
 
 # __
 
diff --git a/pypy/objspace/std/test/test_typeobject.py 
b/pypy/objspace/std/test/test_typeobject.py
--- a/pypy/objspace/std/test/test_typeobject.py
+++ b/pypy/objspace/std/test/test_typeobject.py
@@ -701,7 +701,9 @@
 class A(object):
 pass
 assert repr(A) == ""
-assert repr(type(type)) == "" 
+A.__module__ = 123
+assert repr(A) == ""
+assert repr(type(type)) == ""
 assert repr(complex) == ""
 assert repr(property) == ""
 assert repr(TypeError) == ""
diff --git a/pypy/objspace/std/typeobject.py b/pypy/objspace/std/typeobject.py
--- a/pypy/objspace/std/typeobject.py
+++ b/pypy/objspace/std/typeobject.py
@@ -1098,7 +1098,7 @@
 
 def repr__Type(space, w_obj):
 w_mod = w_obj.get_module()
-if not space.isinstance_w(w_mod, space.w_str):
+if w_mod is None or not space.isinstance_w(w_mod, space.w_str):
 mod = None
 else:
 mod = space.str_w(w_mod)


[pypy-commit] pypy release-2.3.x: merge default

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: release-2.3.x
Changeset: r71214:16b7931bb487
Date: 2014-05-02 21:20 -0400
http://bitbucket.org/pypy/pypy/changeset/16b7931bb487/

Log:merge default

diff --git a/pypy/doc/whatsnew-2.3.0.rst b/pypy/doc/whatsnew-2.3.0.rst
--- a/pypy/doc/whatsnew-2.3.0.rst
+++ b/pypy/doc/whatsnew-2.3.0.rst
@@ -154,7 +154,7 @@
 Improve optimization of small allocation-heavy loops in the JIT
 
 .. branch: reflex-support
-   
+
 .. branch: asmosoinio/fixed-pip-installation-url-github-githu-1398674840188
 
 .. branch: lexer_token_position_class
@@ -164,4 +164,3 @@
 
 .. branch: issue1430
 Add a lock for unsafe calls to gethostbyname and gethostbyaddr
-
diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -3,5 +3,5 @@
 ===
 
 .. this is a revision shortly after release-2.3.x
-.. startrev: 0f75ad4d14ce
+.. startrev: 773fc6275c69
 
diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py
--- a/pypy/interpreter/gateway.py
+++ b/pypy/interpreter/gateway.py
@@ -1074,7 +1074,9 @@
return x+y
 ''')
 """
+prefix = ""
 if not isinstance(source, str):
+flags = source.__code__.co_flags
 source = py.std.inspect.getsource(source).lstrip()
 while source.startswith(('@py.test.mark.', '@pytest.mark.')):
 # these decorators are known to return the same function
@@ -1083,12 +1085,21 @@
 source = source[source.find('\n') + 1:].lstrip()
 assert source.startswith("def "), "can only transform functions"
 source = source[4:]
+import __future__
+if flags & __future__.CO_FUTURE_DIVISION:
+prefix += "from __future__ import division\n"
+if flags & __future__.CO_FUTURE_ABSOLUTE_IMPORT:
+prefix += "from __future__ import absolute_import\n"
+if flags & __future__.CO_FUTURE_PRINT_FUNCTION:
+prefix += "from __future__ import print_function\n"
+if flags & __future__.CO_FUTURE_UNICODE_LITERALS:
+prefix += "from __future__ import unicode_literals\n"
 p = source.find('(')
 assert p >= 0
 funcname = source[:p].strip()
 source = source[p:]
 assert source.strip()
-funcsource = "def %s%s\n" % (funcname, source)
+funcsource = prefix + "def %s%s\n" % (funcname, source)
 #for debugging of wrong source code: py.std.parser.suite(funcsource)
 a = applevel(funcsource, filename=filename)
 return a.interphook(funcname)
diff --git a/pypy/interpreter/test/test_gateway.py 
b/pypy/interpreter/test/test_gateway.py
--- a/pypy/interpreter/test/test_gateway.py
+++ b/pypy/interpreter/test/test_gateway.py
@@ -1,18 +1,20 @@
-
 # -*- coding: utf-8 -*-
 
+from __future__ import division, print_function  # for test_app2interp_future
 from pypy.interpreter import gateway, argument
 from pypy.interpreter.gateway import ObjSpace, W_Root, WrappedDefault
 from pypy.interpreter.signature import Signature
 import py
 import sys
 
+
 class FakeFunc(object):
 def __init__(self, space, name):
 self.space = space
 self.name = name
 self.defs_w = []
 
+
 class TestBuiltinCode:
 def test_signature(self):
 def c(space, w_x, w_y, hello_w):
@@ -89,8 +91,8 @@
 w_result = code.funcrun(FakeFunc(self.space, "c"), args)
 assert self.space.eq_w(w_result, w(1020))
 
+
 class TestGateway:
-
 def test_app2interp(self):
 w = self.space.wrap
 def app_g3(a, b):
@@ -117,6 +119,14 @@
 args = gateway.Arguments(self.space, [w(6)], ['hello', 'world'], 
[w(7), w(8)])
 assert self.space.int_w(gg(self.space, w(3), args)) == 213
 
+def test_app2interp_future(self):
+w = self.space.wrap
+def app_g3(a, b):
+print(end='')
+return a / b
+g3 = gateway.app2interp_temp(app_g3)
+assert self.space.eq_w(g3(self.space, w(1), w(4),), w(0.25))
+
 def test_interp2app(self):
 space = self.space
 w = space.wrap
@@ -616,7 +626,7 @@
 w_app_f = self.space.wrap(app_f)
 
 assert isinstance(w_app_f.code, gateway.BuiltinCode2)
-
+
 called = []
 fastcall_2 = w_app_f.code.fastcall_2
 def witness_fastcall_2(space, w_func, w_a, w_b):
@@ -736,7 +746,6 @@
 
 
 class TestPassThroughArguments:
-
 def test_pass_trough_arguments0(self):
 space = self.space
 
@@ -834,7 +843,6 @@
 
 
 class AppTestKeywordsToBuiltinSanity(object):
-
 def test_type(self):
 class X(object):
 def __init__(self, **kw):
@@ -873,4 +881,3 @@
 
 d.update(**{clash: 33})
 dict.update(d, **{clash: 33})
-
diff --git a/pypy/module/_lsprof/test/test_cprofile.py 
b/pypy/module/_lsprof/test/test_cprofile.py
--- a/pypy/module/_lsprof/test/test_cprofile.py
+++ b/pypy/module/_lsprof/test/test_cprofile.py
@@ -43,7 +43,7 @@
 )
   

[pypy-commit] pypy default: fix whatsnew

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71213:6066ed126e77
Date: 2014-05-02 21:20 -0400
http://bitbucket.org/pypy/pypy/changeset/6066ed126e77/

Log:fix whatsnew

diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -3,5 +3,5 @@
 ===
 
 .. this is a revision shortly after release-2.3.x
-.. startrev: 0f75ad4d14ce
+.. startrev: 773fc6275c69
 


[pypy-commit] pypy default: whitespace

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71212:773fc6275c69
Date: 2014-05-02 21:19 -0400
http://bitbucket.org/pypy/pypy/changeset/773fc6275c69/

Log:whitespace

diff --git a/pypy/doc/whatsnew-2.3.0.rst b/pypy/doc/whatsnew-2.3.0.rst
--- a/pypy/doc/whatsnew-2.3.0.rst
+++ b/pypy/doc/whatsnew-2.3.0.rst
@@ -154,7 +154,7 @@
 Improve optimization of small allocation-heavy loops in the JIT
 
 .. branch: reflex-support
-   
+
 .. branch: asmosoinio/fixed-pip-installation-url-github-githu-1398674840188
 
 .. branch: lexer_token_position_class
@@ -164,4 +164,3 @@
 
 .. branch: issue1430
 Add a lock for unsafe calls to gethostbyname and gethostbyaddr
-


[pypy-commit] pypy default: merge reflex-support

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71211:c296ea98cf63
Date: 2014-05-02 21:14 -0400
http://bitbucket.org/pypy/pypy/changeset/c296ea98cf63/

Log:merge reflex-support

diff --git a/pypy/module/cppyy/capi/capi_types.py 
b/pypy/module/cppyy/capi/capi_types.py
--- a/pypy/module/cppyy/capi/capi_types.py
+++ b/pypy/module/cppyy/capi/capi_types.py
@@ -1,8 +1,8 @@
 from rpython.rtyper.lltypesystem import rffi, lltype
 
 # shared ll definitions
-_C_OPAQUE_PTR = rffi.LONG
-_C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO
+_C_OPAQUE_PTR = rffi.ULONG
+_C_OPAQUE_NULL = lltype.nullptr(rffi.ULONGP.TO)# ALT: _C_OPAQUE_PTR.TO
 
 C_SCOPE = _C_OPAQUE_PTR
 C_NULL_SCOPE = rffi.cast(C_SCOPE, _C_OPAQUE_NULL)
diff --git a/pypy/module/cppyy/capi/cint_capi.py 
b/pypy/module/cppyy/capi/cint_capi.py
--- a/pypy/module/cppyy/capi/cint_capi.py
+++ b/pypy/module/cppyy/capi/cint_capi.py
@@ -249,7 +249,7 @@
 
 def activate_branch(space, w_branch):
 w_branches = space.call_method(w_branch, "GetListOfBranches")
-for i in range(space.int_w(space.call_method(w_branches, 
"GetEntriesFast"))):
+for i in range(space.r_longlong_w(space.call_method(w_branches, 
"GetEntriesFast"))):
 w_b = space.call_method(w_branches, "At", space.wrap(i))
 activate_branch(space, w_b)
 space.call_method(w_branch, "SetStatus", space.wrap(1))
@@ -292,7 +292,7 @@
 activate_branch(space, w_branch)
 
 # figure out from where we're reading
-entry = space.int_w(space.call_method(w_self, "GetReadEntry"))
+entry = space.r_longlong_w(space.call_method(w_self, "GetReadEntry"))
 if entry == -1:
 entry = 0
 
@@ -341,7 +341,7 @@
 self.w_tree = w_tree
 
 self.current  = 0
-self.maxentry = space.int_w(space.call_method(w_tree, 
"GetEntriesFast"))
+self.maxentry = space.r_longlong_w(space.call_method(w_tree, 
"GetEntriesFast"))
 
 space = self.space = tree.space  # holds the class cache in 
State
 space.call_method(w_tree, "SetBranchStatus", space.wrap("*"), 
space.wrap(0))
diff --git a/pypy/module/cppyy/capi/loadable_capi.py 
b/pypy/module/cppyy/capi/loadable_capi.py
--- a/pypy/module/cppyy/capi/loadable_capi.py
+++ b/pypy/module/cppyy/capi/loadable_capi.py
@@ -91,7 +91,7 @@
 
 # TODO: the following need to match up with the globally defined C_XYZ 
low-level
 # types (see capi/__init__.py), but by using strings here, that isn't 
guaranteed
-c_opaque_ptr = nt.new_primitive_type(space, 'long')
+c_opaque_ptr = nt.new_primitive_type(space, 'unsigned long')
  
 c_scope  = c_opaque_ptr
 c_type   = c_scope
@@ -259,10 +259,10 @@
 return c_call.ctype.rcall(c_call._cdata, args)
 
 def _cdata_to_cobject(space, w_cdata):
-return rffi.cast(C_OBJECT, space.int_w(w_cdata))
+return rffi.cast(C_OBJECT, space.uint_w(w_cdata))
 
 def _cdata_to_size_t(space, w_cdata):
-return rffi.cast(rffi.SIZE_T, space.int_w(w_cdata))
+return rffi.cast(rffi.SIZE_T, space.uint_w(w_cdata))
 
 def _cdata_to_ptr(space, w_cdata): # TODO: this is both a hack and dreadfully 
slow
 return rffi.cast(rffi.VOIDP,
@@ -281,12 +281,12 @@
 def c_resolve_name(space, name):
 return charp2str_free(space, call_capi(space, 'resolve_name', 
[_Arg(s=name)]))
 def c_get_scope_opaque(space, name):
-return rffi.cast(C_SCOPE, space.int_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
+return rffi.cast(C_SCOPE, space.uint_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
 def c_get_template(space, name):
-return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
+return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
 def c_actual_class(space, cppclass, cppobj):
 args = [_Arg(l=cppclass.handle), _Arg(l=cppobj)]
-return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'actual_class', 
args)))
+return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'actual_class', 
args)))
 
 # memory management --
 def c_allocate(space, cppclass):
@@ -302,7 +302,7 @@
 call_capi(space, 'call_v', args)
 def c_call_b(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
-return rffi.cast(rffi.UCHAR, space.c_int_w(call_capi(space, 'call_b', 
args)))
+return rffi.cast(rffi.UCHAR, space.c_uint_w(call_capi(space, 'call_b', 
args)))
 def c_call_c(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
 return rffi.cast(rffi.CHAR, space.str_w(call_capi(space, 'call_c', 
args))[0])
@@ -452,7 +452,7 @@
 
 def c_get_method(space, cppscope, index):
 args = [_Arg(l=cppscope.handle), _Arg(l=index)]
-return rffi.cast(C_METHOD, space.int_w(call_capi(space, 'get_method', 
args)))
+return rffi.cast(C_METHOD, space.uint_w(call_capi(space, 'get_method', args)))

[pypy-commit] pypy default: fix whatsnew

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71217:5ff0f9f58158
Date: 2014-05-02 21:24 -0400
http://bitbucket.org/pypy/pypy/changeset/5ff0f9f58158/

Log:fix whatsnew

diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -5,3 +5,5 @@
 .. this is a revision shortly after release-2.3.x
 .. startrev: 773fc6275c69
 
+.. branch: fix-tpname
+Changes hacks surrounding W_TypeObject.name to match CPython's tp_name


[pypy-commit] pypy fix-tpname: close branch for merging

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: fix-tpname
Changeset: r71215:d307272e3d81
Date: 2014-05-02 21:22 -0400
http://bitbucket.org/pypy/pypy/changeset/d307272e3d81/

Log:close branch for merging



[pypy-commit] pypy default: backout premature merge of reflex-support: doesn't translate

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71219:56fa4b56da86
Date: 2014-05-02 21:51 -0400
http://bitbucket.org/pypy/pypy/changeset/56fa4b56da86/

Log:backout premature merge of reflex-support: doesn't translate

diff --git a/pypy/module/cppyy/capi/capi_types.py 
b/pypy/module/cppyy/capi/capi_types.py
--- a/pypy/module/cppyy/capi/capi_types.py
+++ b/pypy/module/cppyy/capi/capi_types.py
@@ -1,8 +1,8 @@
 from rpython.rtyper.lltypesystem import rffi, lltype
 
 # shared ll definitions
-_C_OPAQUE_PTR = rffi.ULONG
-_C_OPAQUE_NULL = lltype.nullptr(rffi.ULONGP.TO)# ALT: _C_OPAQUE_PTR.TO
+_C_OPAQUE_PTR = rffi.LONG
+_C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO
 
 C_SCOPE = _C_OPAQUE_PTR
 C_NULL_SCOPE = rffi.cast(C_SCOPE, _C_OPAQUE_NULL)
diff --git a/pypy/module/cppyy/capi/cint_capi.py 
b/pypy/module/cppyy/capi/cint_capi.py
--- a/pypy/module/cppyy/capi/cint_capi.py
+++ b/pypy/module/cppyy/capi/cint_capi.py
@@ -249,7 +249,7 @@
 
 def activate_branch(space, w_branch):
 w_branches = space.call_method(w_branch, "GetListOfBranches")
-for i in range(space.r_longlong_w(space.call_method(w_branches, 
"GetEntriesFast"))):
+for i in range(space.int_w(space.call_method(w_branches, 
"GetEntriesFast"))):
 w_b = space.call_method(w_branches, "At", space.wrap(i))
 activate_branch(space, w_b)
 space.call_method(w_branch, "SetStatus", space.wrap(1))
@@ -292,7 +292,7 @@
 activate_branch(space, w_branch)
 
 # figure out from where we're reading
-entry = space.r_longlong_w(space.call_method(w_self, "GetReadEntry"))
+entry = space.int_w(space.call_method(w_self, "GetReadEntry"))
 if entry == -1:
 entry = 0
 
@@ -341,7 +341,7 @@
 self.w_tree = w_tree
 
 self.current  = 0
-self.maxentry = space.r_longlong_w(space.call_method(w_tree, 
"GetEntriesFast"))
+self.maxentry = space.int_w(space.call_method(w_tree, 
"GetEntriesFast"))
 
 space = self.space = tree.space  # holds the class cache in 
State
 space.call_method(w_tree, "SetBranchStatus", space.wrap("*"), 
space.wrap(0))
diff --git a/pypy/module/cppyy/capi/loadable_capi.py 
b/pypy/module/cppyy/capi/loadable_capi.py
--- a/pypy/module/cppyy/capi/loadable_capi.py
+++ b/pypy/module/cppyy/capi/loadable_capi.py
@@ -91,7 +91,7 @@
 
 # TODO: the following need to match up with the globally defined C_XYZ 
low-level
 # types (see capi/__init__.py), but by using strings here, that isn't 
guaranteed
-c_opaque_ptr = nt.new_primitive_type(space, 'unsigned long')
+c_opaque_ptr = nt.new_primitive_type(space, 'long')
  
 c_scope  = c_opaque_ptr
 c_type   = c_scope
@@ -259,10 +259,10 @@
 return c_call.ctype.rcall(c_call._cdata, args)
 
 def _cdata_to_cobject(space, w_cdata):
-return rffi.cast(C_OBJECT, space.uint_w(w_cdata))
+return rffi.cast(C_OBJECT, space.int_w(w_cdata))
 
 def _cdata_to_size_t(space, w_cdata):
-return rffi.cast(rffi.SIZE_T, space.uint_w(w_cdata))
+return rffi.cast(rffi.SIZE_T, space.int_w(w_cdata))
 
 def _cdata_to_ptr(space, w_cdata): # TODO: this is both a hack and dreadfully 
slow
 return rffi.cast(rffi.VOIDP,
@@ -281,12 +281,12 @@
 def c_resolve_name(space, name):
 return charp2str_free(space, call_capi(space, 'resolve_name', 
[_Arg(s=name)]))
 def c_get_scope_opaque(space, name):
-return rffi.cast(C_SCOPE, space.uint_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
+return rffi.cast(C_SCOPE, space.int_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
 def c_get_template(space, name):
-return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
+return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
 def c_actual_class(space, cppclass, cppobj):
 args = [_Arg(l=cppclass.handle), _Arg(l=cppobj)]
-return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'actual_class', 
args)))
+return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'actual_class', 
args)))
 
 # memory management --
 def c_allocate(space, cppclass):
@@ -302,7 +302,7 @@
 call_capi(space, 'call_v', args)
 def c_call_b(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
-return rffi.cast(rffi.UCHAR, space.c_uint_w(call_capi(space, 'call_b', 
args)))
+return rffi.cast(rffi.UCHAR, space.c_int_w(call_capi(space, 'call_b', 
args)))
 def c_call_c(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
 return rffi.cast(rffi.CHAR, space.str_w(call_capi(space, 'call_c', 
args))[0])
@@ -452,7 +452,7 @@
 
 def c_get_method(space, cppscope, index):
 args = [_Arg(l=cppscope.handle), _Arg(l=index)]
-return rffi.cast(C_METHOD, space.uint_w(call_capi(space, 'get_method', 
args)))
+return rffi.cast(C_METHOD, space.int_w(call_capi(space, 'get_method', args)))

[pypy-commit] pypy release-2.3.x: backout premature merge of reflex-support: doesn't translate

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: release-2.3.x
Changeset: r71218:773bc2c8bfc5
Date: 2014-05-02 21:51 -0400
http://bitbucket.org/pypy/pypy/changeset/773bc2c8bfc5/

Log:backout premature merge of reflex-support: doesn't translate

diff --git a/pypy/module/cppyy/capi/capi_types.py 
b/pypy/module/cppyy/capi/capi_types.py
--- a/pypy/module/cppyy/capi/capi_types.py
+++ b/pypy/module/cppyy/capi/capi_types.py
@@ -1,8 +1,8 @@
 from rpython.rtyper.lltypesystem import rffi, lltype
 
 # shared ll definitions
-_C_OPAQUE_PTR = rffi.ULONG
-_C_OPAQUE_NULL = lltype.nullptr(rffi.ULONGP.TO)# ALT: _C_OPAQUE_PTR.TO
+_C_OPAQUE_PTR = rffi.LONG
+_C_OPAQUE_NULL = lltype.nullptr(rffi.LONGP.TO)# ALT: _C_OPAQUE_PTR.TO
 
 C_SCOPE = _C_OPAQUE_PTR
 C_NULL_SCOPE = rffi.cast(C_SCOPE, _C_OPAQUE_NULL)
diff --git a/pypy/module/cppyy/capi/cint_capi.py 
b/pypy/module/cppyy/capi/cint_capi.py
--- a/pypy/module/cppyy/capi/cint_capi.py
+++ b/pypy/module/cppyy/capi/cint_capi.py
@@ -249,7 +249,7 @@
 
 def activate_branch(space, w_branch):
 w_branches = space.call_method(w_branch, "GetListOfBranches")
-for i in range(space.r_longlong_w(space.call_method(w_branches, 
"GetEntriesFast"))):
+for i in range(space.int_w(space.call_method(w_branches, 
"GetEntriesFast"))):
 w_b = space.call_method(w_branches, "At", space.wrap(i))
 activate_branch(space, w_b)
 space.call_method(w_branch, "SetStatus", space.wrap(1))
@@ -292,7 +292,7 @@
 activate_branch(space, w_branch)
 
 # figure out from where we're reading
-entry = space.r_longlong_w(space.call_method(w_self, "GetReadEntry"))
+entry = space.int_w(space.call_method(w_self, "GetReadEntry"))
 if entry == -1:
 entry = 0
 
@@ -341,7 +341,7 @@
 self.w_tree = w_tree
 
 self.current  = 0
-self.maxentry = space.r_longlong_w(space.call_method(w_tree, 
"GetEntriesFast"))
+self.maxentry = space.int_w(space.call_method(w_tree, 
"GetEntriesFast"))
 
 space = self.space = tree.space  # holds the class cache in 
State
 space.call_method(w_tree, "SetBranchStatus", space.wrap("*"), 
space.wrap(0))
diff --git a/pypy/module/cppyy/capi/loadable_capi.py 
b/pypy/module/cppyy/capi/loadable_capi.py
--- a/pypy/module/cppyy/capi/loadable_capi.py
+++ b/pypy/module/cppyy/capi/loadable_capi.py
@@ -91,7 +91,7 @@
 
 # TODO: the following need to match up with the globally defined C_XYZ 
low-level
 # types (see capi/__init__.py), but by using strings here, that isn't 
guaranteed
-c_opaque_ptr = nt.new_primitive_type(space, 'unsigned long')
+c_opaque_ptr = nt.new_primitive_type(space, 'long')
  
 c_scope  = c_opaque_ptr
 c_type   = c_scope
@@ -259,10 +259,10 @@
 return c_call.ctype.rcall(c_call._cdata, args)
 
 def _cdata_to_cobject(space, w_cdata):
-return rffi.cast(C_OBJECT, space.uint_w(w_cdata))
+return rffi.cast(C_OBJECT, space.int_w(w_cdata))
 
 def _cdata_to_size_t(space, w_cdata):
-return rffi.cast(rffi.SIZE_T, space.uint_w(w_cdata))
+return rffi.cast(rffi.SIZE_T, space.int_w(w_cdata))
 
 def _cdata_to_ptr(space, w_cdata): # TODO: this is both a hack and dreadfully 
slow
 return rffi.cast(rffi.VOIDP,
@@ -281,12 +281,12 @@
 def c_resolve_name(space, name):
 return charp2str_free(space, call_capi(space, 'resolve_name', 
[_Arg(s=name)]))
 def c_get_scope_opaque(space, name):
-return rffi.cast(C_SCOPE, space.uint_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
+return rffi.cast(C_SCOPE, space.int_w(call_capi(space, 'get_scope', 
[_Arg(s=name)])))
 def c_get_template(space, name):
-return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
+return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
 def c_actual_class(space, cppclass, cppobj):
 args = [_Arg(l=cppclass.handle), _Arg(l=cppobj)]
-return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'actual_class', 
args)))
+return rffi.cast(C_TYPE, space.int_w(call_capi(space, 'actual_class', 
args)))
 
 # memory management --
 def c_allocate(space, cppclass):
@@ -302,7 +302,7 @@
 call_capi(space, 'call_v', args)
 def c_call_b(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
-return rffi.cast(rffi.UCHAR, space.c_uint_w(call_capi(space, 'call_b', 
args)))
+return rffi.cast(rffi.UCHAR, space.c_int_w(call_capi(space, 'call_b', 
args)))
 def c_call_c(space, cppmethod, cppobject, nargs, cargs):
 args = [_Arg(l=cppmethod), _Arg(l=cppobject), _Arg(l=nargs), 
_Arg(vp=cargs)]
 return rffi.cast(rffi.CHAR, space.str_w(call_capi(space, 'call_c', 
args))[0])
@@ -452,7 +452,7 @@
 
 def c_get_method(space, cppscope, index):
 args = [_Arg(l=cppscope.handle), _Arg(l=index)]
-return rffi.cast(C_METHOD, space.uint_w(call_capi(space, 'get_method', 
args)))
+return rffi.cast(C_METHOD, space.int_w(call_capi(space, 'get_method', args)))

[pypy-commit] pypy default: test/fix cpyext version number

2014-05-02 Thread bdkearns
Author: Brian Kearns 
Branch: 
Changeset: r71220:12b3395cfa84
Date: 2014-05-02 21:59 -0400
http://bitbucket.org/pypy/pypy/changeset/12b3395cfa84/

Log:test/fix cpyext version number

diff --git a/pypy/module/cpyext/include/patchlevel.h 
b/pypy/module/cpyext/include/patchlevel.h
--- a/pypy/module/cpyext/include/patchlevel.h
+++ b/pypy/module/cpyext/include/patchlevel.h
@@ -21,7 +21,7 @@
 /* Version parsed out into numeric values */
 #define PY_MAJOR_VERSION   2
 #define PY_MINOR_VERSION   7
-#define PY_MICRO_VERSION   3
+#define PY_MICRO_VERSION   6
 #define PY_RELEASE_LEVEL   PY_RELEASE_LEVEL_FINAL
 #define PY_RELEASE_SERIAL  0
 
diff --git a/pypy/module/cpyext/test/test_version.py 
b/pypy/module/cpyext/test/test_version.py
--- a/pypy/module/cpyext/test/test_version.py
+++ b/pypy/module/cpyext/test/test_version.py
@@ -9,11 +9,17 @@
 if (Py_IsInitialized()) {
 PyObject *m = Py_InitModule("foo", NULL);
 PyModule_AddStringConstant(m, "py_version", PY_VERSION);
+PyModule_AddIntConstant(m, "py_major_version", PY_MAJOR_VERSION);
+PyModule_AddIntConstant(m, "py_minor_version", PY_MINOR_VERSION);
+PyModule_AddIntConstant(m, "py_micro_version", PY_MICRO_VERSION);
 PyModule_AddStringConstant(m, "pypy_version", PYPY_VERSION);
 }
 """
 module = self.import_module(name='foo', init=init)
 assert module.py_version == sys.version[:5]
+assert module.py_major_version == sys.version_info.major
+assert module.py_minor_version == sys.version_info.minor
+assert module.py_micro_version == sys.version_info.micro
 v = sys.pypy_version_info
 s = '%d.%d.%d' % (v[0], v[1], v[2])
 if v.releaselevel != 'final':
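The new assertions simply compare the header macros against what the running interpreter reports; a quick way to see those values (nothing here is specific to cpyext):

    import sys

    # sys.version_info exposes the same numbers that patchlevel.h declares as
    # PY_MAJOR_VERSION / PY_MINOR_VERSION / PY_MICRO_VERSION.
    major, minor, micro = sys.version_info[:3]
    print('PY_MAJOR_VERSION=%d PY_MINOR_VERSION=%d PY_MICRO_VERSION=%d'
          % (major, minor, micro))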


[pypy-commit] pypy reflex-support: untangle the use of signed and unsigned integers for the benefit of the rtyper

2014-05-02 Thread wlav
Author: Wim Lavrijsen 
Branch: reflex-support
Changeset: r71221:1481152c36d5
Date: 2014-05-02 22:22 -0700
http://bitbucket.org/pypy/pypy/changeset/1481152c36d5/

Log:untangle the use of signed and unsigned integers for the benefit of
the rtyper

diff --git a/pypy/module/cppyy/capi/loadable_capi.py 
b/pypy/module/cppyy/capi/loadable_capi.py
--- a/pypy/module/cppyy/capi/loadable_capi.py
+++ b/pypy/module/cppyy/capi/loadable_capi.py
@@ -21,10 +21,11 @@
 
 class _Arg: # poor man's union
 _immutable_ = True
-def __init__(self, l = 0, s = '', vp = rffi.cast(rffi.VOIDP, 0) ):
-self._long = l
+def __init__(self, h = 0, l = -1, s = '', vp = rffi.cast(rffi.VOIDP, 0)):
+self._handle = h
+self._long   = l
 self._string = s
-self._voidp = vp
+self._voidp  = vp
 
 # For the loadable CAPI, the calls start and end in RPython. Therefore, the 
standard
 # _call of W_CTypeFunc, which expects wrapped objects, does not quite work: 
some
@@ -57,7 +58,7 @@
 if isinstance(argtype, ctypeprim.W_CTypePrimitiveSigned):
 misc.write_raw_signed_data(data, rffi.cast(rffi.LONG, 
obj._long), argtype.size)
 elif isinstance(argtype, ctypeprim.W_CTypePrimitiveUnsigned):
-misc.write_raw_unsigned_data(data, rffi.cast(rffi.ULONG, 
obj._long), argtype.size)
+misc.write_raw_unsigned_data(data, rffi.cast(rffi.ULONG, 
obj._handle), argtype.size)
 elif obj._voidp != rffi.cast(rffi.VOIDP, 0):
 data = rffi.cast(rffi.VOIDPP, data)
 data[0] = obj._voidp
@@ -116,6 +117,8 @@
 c_voidp  = nt.new_pointer_type(space, c_void)
 c_size_t = nt.new_primitive_type(space, 'size_t')
 
+c_ptrdiff_t = nt.new_primitive_type(space, 'ptrdiff_t')
+
 self.capi_call_ifaces = {
 # name to opaque C++ scope representation
 'num_scopes'   : ([c_scope],  c_int),
@@ -152,7 +155,7 @@
 'get_methptr_getter'   : ([c_scope, c_index], 
c_voidp), # TODO: verify
 
 # handling of function argument buffer
-'allocate_function_args'   : ([c_size_t], c_voidp),
+'allocate_function_args'   : ([c_int],c_voidp),
 'deallocate_function_args' : ([c_voidp],  c_void),
 'function_arg_sizeof'  : ([], 
c_size_t),
 'function_arg_typeoffset'  : ([], 
c_size_t),
@@ -169,7 +172,7 @@
 'base_name': ([c_type, c_int],
c_ccharp),
 'is_subtype'   : ([c_type, c_type],   c_int),
 
-'base_offset'  : ([c_type, c_type, c_object, c_int],   
 c_long),
+'base_offset'  : ([c_type, c_type, c_object, c_int],   
 c_ptrdiff_t),
 
 # method/function reflection information
 'num_methods'  : ([c_scope],  c_int),
@@ -199,7 +202,7 @@
 'num_datamembers'  : ([c_scope],  c_int),
 'datamember_name'  : ([c_scope, c_int],   
c_ccharp),
 'datamember_type'  : ([c_scope, c_int],   
c_ccharp),
-'datamember_offset': ([c_scope, c_int],   
c_size_t),
+'datamember_offset': ([c_scope, c_int],   
c_ptrdiff_t),
 
 'datamember_index' : ([c_scope, c_ccharp],c_int),
 
@@ -264,6 +267,9 @@
 def _cdata_to_size_t(space, w_cdata):
 return rffi.cast(rffi.SIZE_T, space.uint_w(w_cdata))
 
+def _cdata_to_ptrdiff_t(space, w_cdata):
+return rffi.cast(rffi.LONG, space.int_w(w_cdata))
+
 def _cdata_to_ptr(space, w_cdata): # TODO: this is both a hack and dreadfully 
slow
 return rffi.cast(rffi.VOIDP,
 space.interp_w(cdataobj.W_CData, w_cdata, can_be_None=False)._cdata)
@@ -273,9 +279,9 @@
 
 # name to opaque C++ scope representation 
 def c_num_scopes(space, cppscope):
-return space.int_w(call_capi(space, 'num_scopes', 
[_Arg(l=cppscope.handle)]))
+return space.int_w(call_capi(space, 'num_scopes', 
[_Arg(h=cppscope.handle)]))
 def c_scope_name(space, cppscope, iscope):
-args = [_Arg(l=cppscope.handle), _Arg(l=iscope)]
+args = [_Arg(h=cppscope.handle), _Arg(l=iscope)]
 return charp2str_free(space, call_capi(space, 'scope_name', args))
 
 def c_resolve_name(space, name):
@@ -285,62 +291,62 @@
 def c_get_template(space, name):
 return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'get_template', 
[_Arg(s=name)])))
 def c_actual_class(space, cppclass, cppobj):
-args = [_Arg(l=cppclass.handle), _Arg(l=cppobj)]
+args = [_Arg(h=cppclass.handle), _Arg(h=cppobj)]
 return rffi.cast(C_TYPE, space.uint_w(call_capi(space, 'actual_class', 
args)