Re: [pypy-dev] Installation layout of the PyPy 3.8 Fedora package
On Thu, Dec 2, 2021 at 11:49 PM Armin Rigo wrote: > > https://cffi.readthedocs.io/en/latest/embedding.html. If we want to > go the full CPython way, we need to rename and move > "$localdir/libpypy3-c.so" to something like "/usr/lib/libpypy38.so" > and have /usr/bin/pypy3.8 be a program that is linked to > "libpypy38.so" instead of "$localdir/libpypy3-c.so". This would have > consequences for my own habits---e.g. the executable would no longer > find its .so if it lives simply in the same directory. Arguably not a > big issue :-) A freshly translated pypy would not work without > LD_LIBRARY_PATH tricks, just like a freshly compiled CPython does not. I fear this would break much more than your own habits. To start with, it would break mine :) Jokes aside, it would break three big use cases: 1. Download&run: if you want to try pypy, currently you can download the tarball, unpack and run, and it just works 2. non-privileged installation: currently you can install PyPy for your own user even if you don't have root power 3. install pypy in docker: this is basically the same as (1) but it's worth mentioning because it is a pattern which I saw a lot in real-life production code: it's very easy to install pypy in a Dockerfile, you just wget&unpack the tarball. If we require libpypy3.so to be in a system-wide directory, people will have to figure out for themselves how to "install" pypy and/or we would need to provide an "install" script which would be very complex because of the zillions of slightly different details of linux distros. So, I'm -1 on removing the possibility of $localdir/libpypy3-c.so. Maybe a solution which solves both problems is a compilation flag: 1. by default, we link to $localdir/libpypy3-c.so as we do now (and maybe we could even rename the file nowadays; the '-c' suffix is a relic of the past and it's probably no longer needed); 2. 
if you translate pypy with a special option, we link to a system-wide libpypy38.so, so that distros can compile a pypy which suits their needs. Also, note that a poor man's (2) is already possible nowadays: we link to $localdir/libpypy3-c.so because pypy3-c has an rpath set to $ORIGIN, but I think you can just patch the binary to remove it: https://stackoverflow.com/questions/13769141/can-i-change-rpath-in-an-already-compiled-binary ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] New official IRC channel
On Fri, Jun 4, 2021 at 11:06 AM anatoly techtonik wrote: > I like how silently Freenode is being buried. Reminds me of my beloved > Belarus. So, what is the reason? A quick google reveals many articles which explain what happened to freenode and why many projects decided to migrate to another IRC server. One such article, for example, is this: https://fosspost.org/freenode-collapse/ > On Sun, May 30, 2021 at 10:47 PM Antonio Cuni wrote: > > [full announcement snipped; it appears as its own post below] > > -- > anatoly t. > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] New official IRC channel
Following the example of many other FOSS projects, the PyPy team has decided to move its official #pypy IRC channel from Freenode to Libera.Chat: irc://irc.libera.chat/pypy The core devs will no longer be present on the Freenode channel, so we recommend joining the new channel as soon as possible. https://www.pypy.org/posts/2021/05/pypy-irc-moves-to-libera-chat.html ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] moving blog posts to pypy.org
Hi Matti, +1 to everything for me. It sounds like a big step forward, thank you for doing this. I think it would be nice to preserve the ability to comment, though. If the simplest way to do that is to move the repo to github, +1 for that as well. On Tue, Feb 2, 2021 at 11:55 PM Matti Picus wrote: > I imported the blog posts and comments to pypy/pypy.org on the blogposts > branch. If you want a preview, you can update to the branch, and run "make > build" which should create a nikola virtualenv, install the sidebar plugin, > build the site, and put a sidebar into the blog post pages. > > > The blogspot site served us well for 13 years, it is time to move on. > > The operative change will be that from now on blog posts become a merge > request to the pypy.org repo, using the nikola workflow: > > - nikola new_post > > - edit the post in RST or markdown or jupyter notebook or ... > > - add tags and a blurb > > - merge it to default > > > The repo has a CI job to build and push the site, so it is no longer > necessary to render locally and commit the pages to /public, they will be > rebuilt on a merge. > > > We may want to move this repo to github to get a nice preview in github > pages, and to use the utterances system for comments based on github issues. > > > Any thoughts? > > Matti > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy.org 2021 redesign suggestion
On Mon, Jan 4, 2021 at 12:43 PM Panos Laganakos wrote: > Glad you like it! > > Yeah, while the project itself is a banger, the "image" of PyPy is a bit > lacking. But nothing that can't be fixed. I've set up my work schedule to > have time to contribute to one non-work project, so I'm here to help with > it if I can. > wonderful! Let's fill this non-work project time with PyPy :) You will probably need a heptapod account to comment on issues and/or make MRs: please follow the instructions here and we will be glad to give you access: https://doc.pypy.org/en/latest/contributing.html#get-access Also, most of us hang out on #pypy on freenode, so feel free to join if you'd like more real-time communication. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy.org 2021 redesign suggestion
Hello Pãnoș, thank you for this! Personally I like it a lot. As Matti pointed out, in order to be used it needs to be turned into a Nikola theme, but hopefully it's not too hard. Historically we as a group have been very bad at designing and implementing the website, so having someone who cares and can do this is awesome! My personal hope is that this is the starting point for you to become a regular contributor to PyPy; we would appreciate it a lot :). ciao, Anto On Sat, Jan 2, 2021 at 1:25 AM Panos Laganakos wrote: > Hello, > > I was going over the PyPy website the other day, and something kept > bugging me. While the project itself is in great condition and really amazing > at what it does, the website felt that it wasn't giving that information to > the average visitor. > > So, I took a stab at it, with a basic mockup: > https://www.dropbox.com/s/hnabjxy4aybfp1y/pypy.org-0.3.png?dl=0 > > And here is an annotated version: > https://www.dropbox.com/s/r817xirkhcm9dks/pypy.org-0.3-annotations.png?dl=0 > (hover over the image to see the annotations) > > And an announcement header one: > > https://www.dropbox.com/s/3fnvjcdm4aak7ys/pypy.org-0.3-announcement.png?dl=0 > > Let me know what you think. > > > -- > Pãnoș > https://panoslaganakos.com > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Differences performance Julia / PyPy on very similar codes
On Mon, Dec 21, 2020 at 11:19 PM PIERRE AUGIER < pierre.aug...@univ-grenoble-alpes.fr> wrote: > class Point3D: > def __init__(self, x, y, z): > self.x = x > self.y = y > self.z = z > > def norm_square(self): > return self.x**2 + self.y**2 + self.z**2 > you could try to store x, y and z inside a list instead of 3 different attributes: PyPy will use the specialized implementation which stores them unboxed, which might help the subsequent code. You can even use @property to expose them as .x, .y and .z, since the JIT should happily optimize the abstraction away. ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
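A sketch of the suggestion above (the attribute and field names are illustrative, not taken from the original mail): the three coordinates go into a list of floats, which PyPy can store with its specialized unboxed-list strategy, while @property preserves the original .x/.y/.z API:

```python
class Point3D:
    def __init__(self, x, y, z):
        # A homogeneous list of floats lets PyPy pick its specialized
        # list implementation that stores the items unboxed.
        self._coords = [x, y, z]

    # Properties keep the original attribute API; the JIT should be
    # able to optimize this extra level of indirection away.
    @property
    def x(self):
        return self._coords[0]

    @property
    def y(self):
        return self._coords[1]

    @property
    def z(self):
        return self._coords[2]

    def norm_square(self):
        return self.x**2 + self.y**2 + self.z**2


p = Point3D(1.0, 2.0, 3.0)
print(p.norm_square())  # 14.0
```

Whether this actually helps depends on the surrounding code; it is worth measuring on the real workload rather than assuming.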
[pypy-dev] PyPy hpy branch workflow
Hi pypy-dev (cc:ing hpy-dev), I have just merged the hpy branch into py3.6. This means that from now on, pypy3.6 nightly builds will automatically support hpy, and it will be much easier for interested users to try it out. I propose the following workflow for continuing HPy development on PyPy:

1. the development will continue on the hpy branch, in "update_vendored" steps: i.e., periodically we run the script to update to a newer hpy version and implement all the new features. As soon as we do it, new tests are introduced and they (hopefully :)) will start failing

2. note that it is not necessary to ./update_vendored.sh to the LATEST git version: this will probably introduce too many features and will make it harder/longer to make all tests green again. So I suggest running ./update_vendored.sh gradually, in smaller steps (you can do it by doing "git checkout REV" in your main hpy working copy before running update_vendored)

3. Once the tests are green again, we can merge hpy into py3.6. The merge can happen directly or with a gitlab MR if you want someone to review the code. Personally, I volunteer to review all hpy-related MRs, so feel free to ping me if you want :)

4. goto 1

At the moment, all tests inside module/_hpy_universal pass, so we are in a green state: I would like to try hard to keep them green, and merge changes only after the tests pass: the hpy tests are run automatically on gitlab-ci whenever you push to the hpy branch, so it should be doable. A note about nightly builds: as soon as we catch up with hpy git revision 0a46d31, it will be possible to run "import hpy; hpy.get_version()". This will be very useful for nightly builds, because it will tell you exactly which hpy.devel revision to check out in order to be compatible with your nightly build. As soon as the API stabilizes we will want to use official version numbers, but I think that for now it's a reasonable approach. Please let me know if you have any questions. 
ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Interaction between HPy_CAST and HPy_AsPyObject
After an IRC discussion with Armin, we designed the following solution: https://github.com/hpyproject/hpy/issues/83 If you have comments, please post them on the github issue, to avoid splitting discussions half here and half there :) ciao, Antonio On Sat, Sep 19, 2020 at 11:50 PM Antonio Cuni wrote: > [full original message snipped; it appears as its own post below] ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] Interaction between HPy_CAST and HPy_AsPyObject
Consider the following snippet of code:

    typedef struct {
        HPyObject_HEAD
        long x;
        long y;
    } PointObject;

    void foo(HPyContext ctx, HPy h_point)
    {
        PointObject *p1 = HPy_CAST(ctx, PointObject, h_point);
        PyObject *py_point = HPy_AsPyObject(ctx, h_point); // [1]
        PointObject *p2 = (PointObject *)py_point;
        ...
    }

[1] Note that it does not need to be a call to HPy_AsPyObject: it might be a legacy method which takes a PyObject *self, or other similar ways.

It is obvious that HPy_CAST and HPy_AsPyObject need to return the very same address. This is straightforward to implement on CPython, but it poses some challenges on PyPy (and probably GraalPython).

Things to consider:

1. currently, in PyPy we allocate the PointObject at a non-movable address, but so far the API does not REQUIRE it. I think it would be reasonable to have an implementation in which objects are movable and HPy_CAST pins the memory until the originating handle is closed. OTOH, the only reasonable semantics is that multiple calls to HPy_AsPyObject always return the same address.

2. HPyObject_HEAD consists of two words which can be used by the implementation as they like. On CPython, it is obviously mapped to PyObject_HEAD, but in PyPy we (usually) don't need these two extra words, so we allocate sizeof(PointObject)-16 and return a pointer to malloc()-16, which works well since nobody is accessing those two words. I think that GraalPython could use a similar approach.

3. On PyPy, PyObject_HEAD is *three words*, because it also contains ob_pypy_link. But, since the code uses *H*PyObject_HEAD, PointObject will contain only 2 extra words.

4. In real-world usage, there will be "pure hpy types" and "legacy hpy types", which use legacy methods & co. It would be nice if the pure hpy types did NOT have to pay penalties in case they are never cast to PyObject*.

With this in mind, how do we implement HPy_AsPyObject on PyPy? One easy way is:

1. we allocate sizeof(PointObject)+8
2. we tweak cpyext to find ob_pypy_link at p-8
3. we teach cpyext how to convert W_HPyObject into PyObject* and vice versa.

However, this means that we need to always allocate 24 extra bytes for each object, even if nobody ever calls HPy_AsPyObject on it, which looks bad. Moreover, without changes in the API, the pin/unpin implementation of HPy_CAST becomes de facto impossible.

So, my proposal is to distinguish between "legacy hpy types" and "pure hpy types". An HPyType_Spec is legacy if:

1. it uses .legacy_slots = ... OR
2. it sets .legacy = true (i.e., you can explicitly mark a type as legacy even if you no longer have any legacy method/slot. This is useful if you pass it to ANOTHER type which expects to be able to cast the PyObject* into the struct).

If a type is "legacy", the snippet shown above works as expected; if it's not legacy, it is still possible to call HPy_AsPyObject on it, but then you are no longer allowed to C-cast it to PointObject* (on pypy, this will mean that you will get a "standard" PyObject* which is a proxy to W_HPyObject). Ideally, in that case it would be nice to catch the invalid cast in the debug mode, but I don't think this is possible... too bad.

What do you think? ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Making the most of internal UTF8
To expand Armin's answer, the two most "visible" effects for end users are: - some_unicode.encode('utf-8') is essentially for free (because it is already UTF-8 internally) - some_bytes.decode('utf-8') is very cheap (it just needs to check that some_bytes is valid utf-8) ciao, Anto On Wed, Feb 26, 2020 at 4:47 PM Armin Rigo wrote: > Hi Jerry, > > On Wed, 26 Feb 2020 at 16:09, Jerry Spicklemire > wrote: > > Is there a tutorial about how to best take advantage of PyPy's internal > UTF8? > > For better or for worse, this is only an internal feature. It has no > effect for the end user. In particular, Python programs written for > PyPy3.6 and for CPython3.6 should work identically. The fact that it > uses internally utf-8 is not visible to the Python > program---otherwise, it would never be cross-compatible. > > > A bientôt, > > Armin. > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
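A small illustration of the semantics involved (this only demonstrates correctness, not the performance effect described above, which you would have to measure on PyPy itself):

```python
# Round-tripping through UTF-8 is lossless; this property is what
# allows PyPy to keep str objects as UTF-8 internally, making
# encode('utf-8') essentially a no-op copy and decode('utf-8') just a
# validity check over the bytes.
s = "héllo wörld \N{SNOWMAN}"
data = s.encode('utf-8')
back = data.decode('utf-8')
assert back == s
print(len(s), len(data))  # 13 code points, 17 bytes
```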
Re: [pypy-dev] We have officially moved to foss.heptapod.net/pypy
Thank you for having handled all of this, Matti! :) On Sun, Feb 16, 2020 at 7:13 PM Matti Picus wrote: > As reported in the recent blog post > > https://morepypy.blogspot.com/2020/02/pypy-and-cffi-have-moved-to-heptapod.html, > > please do not add content to bitbucket.org/pypy or any of the repos > there. From now on use https://foss.heptapod.net/pypy/pypy. The repos > will continue to live on bitbucket until May 31, and we are still > hosting our wiki and downloads there, but activity should move to the > new instance. Anyone can open an issue, but in order to commit (as > explained in the previous mail) you will need permissions on the repo. > The FOSS heptapod instance does not support personal forks, so for now > PRs will be branches (heptapod recommends topic branches but some of us > are not convinced) on the main repo. > > > Also take a look at the facelift on www.pypy.org. > > > Matti > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Leysin Winter sprint 2020
Hi, I have added myself to people.txt. I didn't specify my dates yet because I am flexible, so I suppose I will decide the exact arrival/departure days depending on what the other people will do. If possible I'd like a single room, if available. ciao, Anto On Tue, Jan 14, 2020 at 10:24 AM Armin Rigo wrote: > Hi all, > > We will do again this year a Winter sprint in Leysin, Switzerland. > > The exact work topics are not precisely defined, but will certainly > involve HPy (https://github.com/pyhandle/hpy) as well as the Python > 3.7 support in PyPy (the py3.7 branch in the pypy repo). > > More details will be posted here, but for now, here is the early > planning: it will occur for one week starting around the 27 or 28th of > February. It will be in Les Airelles, a different bed-and-breakfast > place from the traditional one in Leysin. It is a nice old house at > the top of the village. > > There are various rooms for 2, 4 or 5 people, costing 40 to 85 CHF per > person per night. I'd recommend the spacious, 5 people room (divided > in two subrooms of 2 and 3), with a great balcony, at 50 CHF pp. > > We'd like to get some idea soon about the number of people coming. > Please reply to this mail to me personally, or directly put your name > in > https://bitbucket.org/pypy/extradoc/src/extradoc/sprintinfo/leysin-winter-2020/ > . > > > A bientôt, > > Armin. > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] upcoming release
Hi Matti, On Fri, Nov 22, 2019 at 7:03 PM Matti Picus wrote: > I would like to do a release of pypy soon. The biggest change from 7.2 > is that we changed the SOABI to pypy27_pp73 (or pp36_pp73) to better > handle packaging. Since a new release of pip will depend on our SOABI, I > would like the release to happen soon. I am proposing this be "7.3.0" > and not "8.0.0". > +1 Are there any outstanding issues we should fix before a new release? > if it has not been done already, I think it would be a good idea to update the bundled version of pip so that you can install manylinux2010 wheels with just `pypy -m ensurepip`. > Packaging PyPy: > > I have been working on changing our build/packaging on linux to produce a binary > based on portable-pypy. My motivation is to enable projects like > multibuild or cibuildwheels to download binary versions of PyPy from a > single source. Is this a priority? > +1 also on this. For my wheels repo, I have to use the unofficial portable build. If we aim for packages to release their own pypy wheels, we should make the process as straightforward as possible. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] Portable PyPy builds
Hi, I think it would be a good idea to make squeaky's portable pypy builds [1] more official by moving the repo to something like github.com/pypy/portable. I think it's a good idea for various reasons, in particular: 1. we already link them from pypy.org, although not very prominently 2. they are used as a base for my manylinux-pypy docker image [2], which I'd like to become "the official way to build PyPy wheels" (and thus move it to something like github.com/pypy/manylinux or so). [1] https://github.com/squeaky-pl/portable-pypy [2] https://travis-ci.org/antocuni/manylinux-pypy Does anybody have opinions on this? ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Continual Increase In Memory Utilization Using PyPy 6.0 (python 2.7) -- Help!?
Hi Robert, are you using any package which relies on cpyext? I.e., modules written in C and/or with Cython (cffi is fine). IIRC, at the moment PyPy doesn't detect GC cycles which involve cpyext objects. So if you have a cycle like e.g. Py_foo -> C_bar -> Py_foo (where Py_foo is a pure-python object and C_bar a cpyext object), they will never be collected unless you break the cycle manually. Other than that: have you tried running it with PyPy 7.0 and/or 7.1? On Thu, Mar 28, 2019 at 8:35 AM Robert Whitcher wrote: > So I have a process that uses PyPy and pymongo in a loop. > It does basically the same thing every loop: query a table via > pymongo, do a few non-saving calculations, and then wait and loop again. > > The RSS of the process continually increased (PYPY_GC_MAX is set > pretty high). > So I hooked in the GC stats output per: > http://doc.pypy.org/en/latest/gc_info.html > I also ensure that gc.collect() was called at least every 3 minutes. > > What I see is that... the memory, while high, is fairly constant for a long > time: > > 2019-03-27 00:04:10.033-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 144244736 > ... 
> 2019-03-27 01:01:46.841-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 144420864 > 2019-03-27 01:02:36.943-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 144269312 > > > Then it decides (and the exact per-loop behavior is the same each time) to > chew up much more memory: > > 2019-03-27 01:04:17.184-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 145469440 > 2019-03-27 01:05:07.305-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 158175232 > 2019-03-27 01:05:57.401-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 173191168 > 2019-03-27 01:06:47.490-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 196943872 > 2019-03-27 01:07:37.575-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 205406208 > 2019-03-27 01:08:27.659-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 254562304 > 2019-03-27 01:09:17.770-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 256020480 > 2019-03-27 01:10:07.866-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 289779712 > > > That's ~140 MB. Where is all that memory going? 
> What's more is that the PyPy GC stats do not show anything different: > > Here are the GC stats from GC-Complete when we were at *144MB*: > > 2019-03-26 23:55:49.127-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 140632064 > 2019-03-26 23:55:49.133-0600 [-] main_thread(29621)log > (async_worker_process.py:308): DBG0: Total memory consumed: > GC used:56.8MB (peak: 69.6MB) >in arenas:39.3MB >rawmalloced: 14.5MB >nursery: 3.0MB > raw assembler used: 521.6kB > - > Total: 57.4MB > > Total memory allocated: > GC allocated:63.0MB (peak: 71.2MB) >in arenas:43.9MB >rawmalloced: 22.7MB >nursery: 3.0MB > raw assembler allocated: 1.0MB > - > Total: 64.0MB > > > Here are the GC stats from GC-Complete when we are at *285MB*: > > 2019-03-27 01:42:41.751-0600 [-] main_thread(29621)log > (async_worker_process.py:304): INFO_FLUSH: RSS: 285147136 > 2019-03-27 01:42:41.751-0600 [-] main_thread(29621)log > (async_worker_process.py:308): DBG0: Total memory consumed: > GC used:57.5MB (peak: 69.6MB) >in arenas:39.9MB >rawmalloced: 14.6MB >nursery: 3.0MB > raw assembler used: 1.5MB > - > Total: 58.9MB > > Total memory allocated: > GC allocated:63.1MB (peak: 71.2MB) >in arenas:43.9MB >rawmalloced: 22.7MB >nursery: 3.0MB > raw assembler allocated: 2.0MB > - > Total: 65.1MB > > > How is this possible? > > I am measuring RSS with: > > def get_rss_mem_usage(): > ''' > Get the RSS memory usage in bytes > @return: memory size in bytes; -1 if error occurs > ''' > try: > process = psutil.Process(os.getpid()) > return process.get_memory_info().rss > except: > return -1 > > > And cross referencing with "ps -orss -p " and th
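The manual cycle-breaking suggested in the reply can be illustrated in pure Python (the Node class below is a hypothetical stand-in for the Py_foo / C_bar pair; with only pure-Python objects the GC would find the cycle anyway, so this shows just the pattern):

```python
import gc
import weakref

class Node:
    """Stand-in for the Py_foo / C_bar objects mentioned above; the
    point is only the manual cycle-breaking pattern."""
    def __init__(self):
        self.ref = None

def make_cycle():
    a, b = Node(), Node()
    a.ref, b.ref = b, a   # a -> b -> a: a reference cycle
    return a

a = make_cycle()
witness = weakref.ref(a)

# If one member of this cycle were a cpyext object, PyPy's GC (per the
# reply above) would not reclaim it.  Breaking the cycle by hand makes
# the objects collectable on any implementation:
a.ref.ref = None      # remove the b -> a edge
del a
gc.collect()
print(witness() is None)  # True: the objects were reclaimed
```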
Re: [pypy-dev] Using Opencv-python with Pypy
Hello Amy, On Wed, Feb 13, 2019 at 10:38 AM Amy wrote: > "pypy3 -m pip install opencv-python". However, it gives an error: "Could > not find a version that satisfies the requirement opencv-python (from > versions: ) > No matching distribution found for opencv-python > This happens because opencv decided to release only binary wheels on PyPI, as you can see here; note that the only files available are *.whl: https://pypi.org/project/opencv-python/#files The great advantage of binary wheels is that you don't have to recompile the package yourself; however, they are tied to a particular combination of OS/python version: opencv didn't release any binary wheel for PyPy, so pip cannot find any. When pip cannot locate a wheel, it tries to download a source package (like .tar.bz2 or .zip) and compile it on the fly; however, opencv didn't release any; that's why you get the error. Your best bet to have opencv on PyPy is to clone the source repo from github and run setup.py yourself: $ git clone https://github.com/skvark/opencv-python $ cd opencv-python $ pypy3 setup.py install ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] let's clean up open branches
On Mon, Feb 11, 2019 at 4:52 PM Carl Friedrich Bolz-Tereick wrote: > Antonio Cuni 2019-02-11 11:50 +0100 default > let's close this one :) ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] PyPy 7.0.0 is out!
== PyPy v7.0.0: triple release of 2.7, 3.5 and 3.6-alpha == The PyPy team is proud to release the version 7.0.0 of PyPy, which includes three different interpreters: - PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7 - PyPy3.5, which supports Python 3.5 - PyPy3.6-alpha: this is the first official release of PyPy to support 3.6 features, although it is still considered alpha quality. All the interpreters are based on much the same codebase, thus the triple release. Until we can work with downstream providers to distribute builds with PyPy, we have made packages for some common packages `available as wheels`_. The `GC hooks`_, which can be used to gain more insight into GC performance, have been improved, and it is now possible to manually manage the GC by using a combination of ``gc.disable`` and ``gc.collect_step``. See the `GC blog post`_. .. _`GC hooks`: http://doc.pypy.org/en/latest/gc_info.html#semi-manual-gc-management We updated the `cffi`_ module included in PyPy to version 1.12, and the `cppyy`_ backend to 1.4. Please use these to wrap your C and C++ code, respectively, for a JIT friendly experience. As always, this release is 100% compatible with the previous one and fixed several issues and bugs raised by the growing community of PyPy users. We strongly recommend updating. The PyPy3.6 release and the Windows PyPy3.5 release are still not production quality so your mileage may vary. There are open issues with incomplete compatibility and c-extension support. The utf8 branch that changes the internal representation of unicode to utf8 did not make it into the release, so there is still more goodness coming. You can download the v7.0 releases here: http://pypy.org/download.html We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work. 
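The semi-manual GC management mentioned above could be used roughly like this. This is a sketch, not taken from the release notes: ``gc.collect_step`` and the ``major_is_done`` attribute of its result are PyPy-specific APIs, so the code guards on their presence and also runs unchanged on CPython:

```python
import gc

def handle_requests(requests, handle):
    # PyPy-only API: gc.collect_step; guarded so the sketch also runs
    # on CPython, where only the normal collector is available.
    has_step = hasattr(gc, "collect_step")
    if has_step:
        # Latency-sensitive phase: stop automatic major collections.
        gc.disable()
    try:
        results = [handle(r) for r in requests]
    finally:
        if has_step:
            # Idle time: advance the incremental GC one step at a time
            # until the major collection cycle is complete.
            while not gc.collect_step().major_is_done:
                pass
            gc.enable()
    return results

print(handle_requests([1, 2, 3], lambda r: r * 2))  # [2, 4, 6]
```

See the linked GC blog post for the measured latency effect of this pattern on PyPy.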
We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: `PyPy`_ and `RPython`_ documentation improvements, tweaking popular modules to run on pypy, or general `help`_ with making RPython's JIT even better. .. _`PyPy`: index.html .. _`RPython`: https://rpython.readthedocs.org .. _`help`: project-ideas.html .. _`cffi`: http://cffi.readthedocs.io .. _`cppyy`: https://cppyy.readthedocs.io .. _`available as wheels`: https://github.com/antocuni/pypy-wheels .. _`GC blog post`: https://morepypy.blogspot.com/2019/01/pypy-for-low-latency-systems.html What is PyPy? = PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7, 3.5 and 3.6. It's fast (`PyPy and CPython 2.7.x`_ performance comparison) due to its integrated tracing JIT compiler. We also welcome developers of other `dynamic languages`_ to see what RPython can do for them. The PyPy release supports: * **x86** machines on most common operating systems (Linux 32/64 bits, Mac OS X 64 bits, Windows 32 bits, OpenBSD, FreeBSD) * big- and little-endian variants of **PPC64** running Linux, * **s390x** running Linux Unfortunately at the moment of writing our ARM buildbots are out of service, so for now we are **not** releasing any binary for the ARM architecture. .. _`PyPy and CPython 2.7.x`: http://speed.pypy.org .. _`dynamic languages`: http://rpython.readthedocs.io/en/latest/examples.html ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Feedback on pypy.org website revamp
I agree, I like the old logo better. On Fri, Feb 8, 2019 at 2:15 PM Ronan Lamy wrote: > Do we really need a new logo? The old one has been our identity for years > and was more distinctive. > > Apart from that, I think this is an improvement. > > > Le ven. 8 févr. 2019 à 11:18, Maciej Fijalkowski a > écrit : > >> Hi everyone >> >> We are looking to redesign the main pypy website, how do people feel >> about the new quick look: >> >> https://baroquesoftware.com/pypy-website/web/ >> >> Best, >> Maciej Fijalkowski >> ___ >> pypy-dev mailing list >> pypy-dev@python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] PyPy 7.0.0 release candidate
Hi, I have uploaded all the packages for PyPy 7.0.0; the release is not yet official and we still need to write the release announcement, but the packages are already available here, for various platforms: https://bitbucket.org/pypy/pypy/downloads/ feel free to try them and please let me know if something is obviously wrong :) ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] Procedure for releasing PyPy
Hi, I have started the procedure for doing a 7.0 release, and I'd like to have some feedback on the process, in particular for bumping the version number. If we don't do it correctly, the risk is having to revert the version change whenever we merge between default and the release branch or vice versa, resulting in the need for commits like this: https://bitbucket.org/pypy/pypy/commits/6a1df86a6f7a

So what I did was this:

1) hg up -r default
2) hg branch release-pypy2.7-7.x
3) bump the version number to 7.0.0-final (commit d47849ba8135)
4) hg up -r default
5) hg merge release-pypy2.7-7.x (commit c4dc91f2e037)
6) bump the version number (on default) to 7.1.0-alpha0 (commit f3cf624ab14c)
7) merge default into release-pypy2.7-7.x, editing the files before the commit to avoid changing the version again (commit 7986159ef4d8)

This way we should be able to freely merge default into release and vice versa without problems. Also, what to do with pypy3.5? I think that at this point the best way would be to merge default into py3.5, and do the same commit dance for release-pypy3.5-7.x. I don't like this complicated procedure, but it is the only one I could come up with: do you think it's reasonable? If so, I'll update how-to-release.rst accordingly. Can we think of something better? To start with, ideally we should have the version number in a single place instead of 3 (4 if you also count the hg branch name) :( ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Review for blog post
On Tue, Dec 25, 2018 at 8:59 AM Armin Rigo wrote: > Any clue about why the "purple line" graph, after adding some > gc.disable() and gc.collect_step(), is actually 10% faster than the > baseline? Is that because "purple" runs the GC when "yellow" would be > sleeping waiting for the next input, and you don't count that time in > the performance? If so, maybe we could clarify that we don't expect > better overall performance by adding some gc.disable() and > gc.collect_step() in a program doing just computations---in this case > it works because it is reorganizing tasks in such a way that the GC > runs at a moment where it is "free". > Yes, that's exactly the reason: the GC still runs, but runs "somewhere else" which is not shown in the graph. I added a paragraph to explain it better, thanks for the suggestion. Btw, the final blog post has been published here: https://morepypy.blogspot.com/2019/01/pypy-for-low-latency-systems.html ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] Review for blog post
Hi, better to ask it here since I suspect that most people won't be much on irc during the next days: I wrote a blog post draft about the gc-disable branch which I have recently merged; it is available on extradoc: https://bitbucket.org/pypy/extradoc/src/extradoc/blog/draft/2018-12-gc-disable/gc-disable.rst?at=extradoc&fileviewer=file-view-default Reviews, comments and remarks are welcome. I think I'd like to publish it after Christmas. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] last benchmark run was June 2
Even if we decide to run them less often, we still need to set up the whole machinery: so, once we have done that, running them nightly or once a week doesn't change much. If we can get usage of speed.python.org it would be awesome I think: it would also immediately enable comparison between PyPy and CPython On Fri, Aug 31, 2018 at 9:41 AM Matti Picus wrote: > We lost the machine that was running our benchmark suite, the last run > seems to have been June 2 http://speed.pypy.org/changes/. > > Choices: > - Ask the PSF to run on the machine that is used to run benchmarks for > speed.python.org. The machine, speed-python.osuosl.org, is very > powerful and not heavily used, a description of it is > https://speed.python.org/about/ under "The Machine". The users with > psf-authorized access to the machine, from > https://github.com/python/psf-chef, are fijal, zware, mattip, haypo > (results of "grep python-speed -r ." in that repo) > > - Use one of bencher4, baroquesoftware.com, or any other pypy-specific > donated machine. > > - Stop running benchmarks > > Any thoughts? Do we need nightly benchmarks or should we run them less > often? Should we also be running py3.5 benchmarks? Should we upload to > speed.python.org, speed.pypy.org or both? > Matti > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] bencher4 failing own py3 builds
On Mon, Aug 27, 2018 at 6:38 PM Matti Picus wrote: > . Perhaps we should retire bencher4 and find > another build machine, one that we can control a bit more Another thing we could try, to get more repeatable builds, is to use docker containers; for example, some time ago I used this approach to debug an issue which appeared only on 32bit builds: https://github.com/antocuni/dockerpypy This is orthogonal to using a machine on which we have more control, of course. Probably the best is to have full control of the machine AND use docker containers :) ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] EuroScipy
Hi, I am planning to go, since it's close and it's a beautiful place. Has anybody else already submitted a pypy talk? On Sat, May 12, 2018 at 5:03 PM, Carl Friedrich Bolz-Tereick wrote: > Hi all, > > Somebody on Twitter asked whether there would be a pypy talk at EuroScipy > which is from August 28 to September 1 in Trento, Italy. Here is the Call > for Presentations: https://pretalx.com/euroscipy18/cfp > NB: the deadline is tomorrow! > > Carl Friedrich > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-commit] pypy cpyext-faster-arg-passing: document branch
On Wed, Jan 31, 2018 at 12:34 PM, Carl Friedrich Bolz-Tereick wrote: > Hi Anto, > > Yes, I ran your benchmarks and they are improved, particularly the ones > that pass arguments. > cool :) > I need to rerun them now that I merged default. However, I would prefer it > if we could find a real life cpyext benchmark (maybe using numpy?). > yes sure, mine are microbenchmarks, although they are still useful for guiding optimizations. About a real life-ish benchmark, what about this (the first version of course, the one using cpyext)? https://morepypy.blogspot.co.uk/2017/10/how-to-make-your-code-80-times-faster.html IIRC, at some point I tried to run it before and after the branches we did in Cape Town, and I measured a good speedup (although I don't remember how much). It'd be interesting to check how much we win with your work. I cannot do it now easily because I'm not at home until the end of the week, though. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-commit] pypy cpyext-faster-arg-passing: document branch
Hi Carl, wow, this looks awesome. Did you run benchmarks to measure the speedup? If yes, should we add them to my repo? https://github.com/antocuni/cpyext-benchmarks ciao, Anto On Tue, Jan 30, 2018 at 1:31 PM, cfbolz wrote: > Author: Carl Friedrich Bolz-Tereick > Branch: cpyext-faster-arg-passing > Changeset: r93724:627a1425607c > Date: 2018-01-30 14:30 +0100 > http://bitbucket.org/pypy/pypy/changeset/627a1425607c/ > > Log:document branch > > diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst > --- a/pypy/doc/whatsnew-head.rst > +++ b/pypy/doc/whatsnew-head.rst > @@ -23,3 +23,11 @@ > added, then the performance using mapdict is linear in the number of > attributes. This is now fixed (by switching to a regular dict after 80 > attributes). > + > + > +.. branch: cpyext-faster-arg-passing > + > +When using cpyext, improve the speed of passing certain objects from PyPy > to C > +code, most notably None, True, False, types, all instances of C-defined > types. > +Before, a dict lookup was needed every time such an object crossed over, > now it > +is just a field read. > ___ > pypy-commit mailing list > pypy-com...@python.org > https://mail.python.org/mailman/listinfo/pypy-commit > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Safety of replacing the instance dict
sidenote: if you do the following, you can replace the __dict__ without incurring performance penalties (Armin, please correct me if I'm wrong):

    import __pypy__

    def __init__(self):
        self.__dict__ = __pypy__.newdict('instance')

this is not directly useful for your use case (because newdict() always returns an empty dict), but it might be useful to know in general On Mon, Jan 29, 2018 at 1:41 PM, Armin Rigo wrote: > Hi, > > On 29 January 2018 at 11:22, Tin Tvrtković wrote: > > It's just that doing it this way is unconventional and a little scary. > Would > > we be violating a Python rule somewhere and making stuff blow up later > if we > > went this way? > > No, it's semantically fine. But it comes with a heavy penalty on > PyPy. I guess you don't see it because you measured something tiny, > like creating the instance and then throwing it away---the JIT > optimizes that to nothing at all in both cases. Not only is the > creation time larger, but attribute access is slower, and the memory > usage is larger. > > > A bientôt, > > Armin. > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy sprint in Poland
Hi, I'd be happy to come. Since I have already been to Warsaw, I vote for Krakow or Wroclaw. The only thing is that April is quite busy for me; ATM, the only reasonable dates are somewhere between the 3rd and 13th. May is surely easier :) On Sun, Jan 7, 2018 at 10:26 AM, Maciej Fijalkowski wrote: > Hi Everyone. > > It looks like I would be in Europe through April and maybe May. Anyone > fancy a sprint somewhere in Poland? Potential venues: > > * we can have a sprint at my climbing spot - it's quite a problem to > get to (~2-3h by train from either Prague or Wroclaw), but it's > incredibly lovely at this time of the year. There is a venue and > internet that we can use. Limited restaurant options are a con. > Endless hiking options are a plus. > > * we can try to organize a venue in Warsaw. We don't have a place just > yet, but it's easy to get to, relatively cheap, abundant in places to > eat out. We don't have a place to sprint at just yet, but maybe we can > organize something at the Uni. > > * Krakow. Might be easier to organize venue. Slightly harder to get to > than Warsaw, quite a bit nicer. > > * Wroclaw. We had a tiny sprint there with Armin when we merged the > JIT to pypy :-) A bit harder to get to than Warsaw, a very nice city, > we don't have a venue organized yet (but it's possible I think), > easier to do a few days excursion from there. > > Thoughts? > fijal > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Improving the documentation on how we test
On Mon, Dec 11, 2017 at 1:11 PM, Carl Friedrich Bolz wrote: > we require and have always required app-level tests for every new > feature up to the finer details. The CPython test suite is often not > very thorough, and we often work under the assumption that if our own > tests about a feature work, the feature works. > > The coding guide states that already, but is maybe not forceful or not > detailed enough: > > "adding features requires adding appropriate tests." > > I am open to suggestions how to make this more explicit. > About app-tests vs CPython tests, I'd state the following:

- app-tests are what we use to check that the code we write behaves as we intend (hence, you **need** an app test for every piece of code you write)
- cpython tests are what we use to check that our implementation is compatible with cpython

___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] a question about translating pypy for raspberry pi
Hi Gelin, Please make sure to include pypy-dev in the CC, so that the others can read the answer. I suggest you try two things:

1. Try to run pypy under strace: this way you can see where it tries to find .so libraries and which one it cannot find (it might be a dependency of libpypy-c itself)
2. Try a nightly build from here: http://buildbot.pypy.org/nightly/trunk/

On 10 Nov 2017, 7:51 PM, "Gelin Yan" wrote: > > > On Sat, Nov 11, 2017 at 1:50 AM, Antonio Cuni wrote: > >> Maybe it's a stupid advice but: are you sure to have copied also >> libpypy-c.so to the raspberry? >> Is it in the same directory as the "pypy" or "pypy-c" executable? >> >> I suggest you to run the script "pypy/tool/release/package.py", which >> builds a tarball containing all the necessary stuff (including the stdlib). >> Then you can simply untar on the raspberry and run it. >> >> You should run it this way: >> python package.py --archive-name=my-pypy-for-rpi >> >> >> >> >> > Hi Antonio > > Yes. I did run the package.py to make a pypy package for ARM. I did > check libpypy-c.so which is placed on the same directory of where pypy is. > > Regards > > gelin yan > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] a question about translating pypy for raspberry pi
Maybe it's stupid advice, but: are you sure you have also copied libpypy-c.so to the raspberry? Is it in the same directory as the "pypy" or "pypy-c" executable? I suggest you run the script "pypy/tool/release/package.py", which builds a tarball containing all the necessary stuff (including the stdlib). Then you can simply untar it on the raspberry and run it. You should run it this way:

    python package.py --archive-name=my-pypy-for-rpi

On Fri, Nov 10, 2017 at 6:32 PM, Gelin Yan wrote: > Hi All > >I followed the instructions from > > http://doc.pypy.org/en/release-2.4.x/arm.html > > and succeeded to build a pypy (it works in qemu environment) > > when I copied to my raspberry pi 3 (OS: raspbian) and tried to run pypy > > the program complained: > > "error while loading shared libraries: libpypy-c.so: cannot open shared > object file: No such file or directory" > > Did I do something wrong here? > > Regards > > gelin yan > > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-commit] pypy default: "eh". On pypy we need to be careful in which order we have pendingblocks.
Hi, I suppose that the explanation that you put in the commit message should also go in a comment inside the source code, else when someone sees it it's just obscure. Also, it'd be nice to have some tests about ShuffleDict :) On Thu, Nov 9, 2017 at 2:55 AM, fijal wrote: > Author: fijal > Branch: > Changeset: r92981:cb9634421fa2 > Date: 2017-11-08 17:54 -0800 > http://bitbucket.org/pypy/pypy/changeset/cb9634421fa2/ > > Log:"eh". On pypy we need to be careful in which order we have > pendingblocks. Otherwise we end up in a setup where we have blocks > a, b and c where a and b are blocked because c needs to add an > attribute, but c is never appended since popitem() would always > return an a or b. I wonder if the same condition can be repeated on > CPython, but I cannot. Unclear how would you write a test for it > since it depends on dictionary order. > > diff --git a/rpython/annotator/annrpython.py b/rpython/annotator/ > annrpython.py > --- a/rpython/annotator/annrpython.py > +++ b/rpython/annotator/annrpython.py > @@ -15,10 +15,34 @@ > typeof, s_ImpossibleValue, SomeInstance, intersection, difference) > from rpython.annotator.bookkeeper import Bookkeeper > from rpython.rtyper.normalizecalls import perform_normalizations > +from collections import deque > > log = AnsiLogger("annrpython") > > > +class ShuffleDict(object): > +def __init__(self): > +self._d = {} > +self.keys = deque() > + > +def __setitem__(self, k, v): > +if k in self._d: > +self._d[k] = v > +else: > +self._d[k] = v > +self.keys.append(k) > + > +def __getitem__(self, k): > +return self._d[k] > + > +def popitem(self): > +key = self.keys.popleft() > +item = self._d.pop(key) > +return (key, item) > + > +def __nonzero__(self): > +return bool(self._d) > + > class RPythonAnnotator(object): > """Block annotator for RPython. 
> See description in doc/translation.txt.""" > @@ -33,7 +57,7 @@ > translator = TranslationContext() > translator.annotator = self > self.translator = translator > -self.pendingblocks = {} # map {block: graph-containing-it} > +self.pendingblocks = ShuffleDict() # map {block: > graph-containing-it} > self.annotated = {} # set of blocks already seen > self.added_blocks = None # see processblock() below > self.links_followed = {} # set of links that have ever been > followed > ___ > pypy-commit mailing list > pypy-com...@python.org > https://mail.python.org/mailman/listinfo/pypy-commit > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
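Anto's request above for ShuffleDict tests could look like the following sketch. The class is transcribed from the diff (slightly simplified: the two identical assignments in `__setitem__` are merged), and spelled with `__bool__` so it runs on Python 3; the RPython original uses `__nonzero__`.

```python
from collections import deque

class ShuffleDict(object):
    """FIFO-ordered dict from the commit above (Python 3 spelling:
    __bool__ instead of RPython/Python 2's __nonzero__)."""
    def __init__(self):
        self._d = {}
        self.keys = deque()

    def __setitem__(self, k, v):
        if k not in self._d:
            self.keys.append(k)    # record insertion order only once per key
        self._d[k] = v

    def __getitem__(self, k):
        return self._d[k]

    def popitem(self):
        key = self.keys.popleft()  # oldest key first, unlike dict.popitem()
        return (key, self._d.pop(key))

    def __bool__(self):
        return bool(self._d)

# a few sanity checks of the properties the annotator relies on
d = ShuffleDict()
d['a'] = 1
d['b'] = 2
d['a'] = 3                        # overwriting must not duplicate the key
assert d['a'] == 3
assert d.popitem() == ('a', 3)    # FIFO: 'a' was inserted first
assert d.popitem() == ('b', 2)
assert not d
```

The FIFO `popitem` is exactly what the commit message is after: blocked blocks cannot starve a later block forever, because the oldest pending block is always returned first.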
Re: [pypy-dev] Leysin Winter sprint
I don't know the exact climate behavior of Switzerland, but in the past few winters in Italy it has basically been a lottery; you just don't know when it's going to be cold or to have snow. For example, last year I skied on snow which ranged from ok-ish to very bad from december to april, and then I found the best snow/climate of the season on 2 May. But anyway, from my very personal point of view "sprint in Leysin with potentially higher temperatures" > "no sprint in Leysin" 😅 On Wed, Nov 1, 2017 at 10:18 AM, Maciej Fijalkowski wrote: > Hi Anto > > With the climate change >15/03 is not a very good winter sprint, is it? > > On Sun, Oct 29, 2017 at 11:47 PM, Antonio Cuni > wrote: > > Hi Armin, > > > > I would probably be unable to come during the first two weeks of march, > so > > for me the preference is basically "anything >= 15/03" > > > > On Sun, Oct 29, 2017 at 11:11 PM, Armin Rigo > wrote: > >> > >> Hi all, > >> > >> I'm trying to organise the next winter sprint a bit in advance. It > >> might tentatively be occurring in March 2018. If people have > >> preferences for the exact week, please do tell :-) > >> > >> > >> A bientôt, > >> > >> Armin. > >> ___ > >> pypy-dev mailing list > >> pypy-dev@python.org > >> https://mail.python.org/mailman/listinfo/pypy-dev > > > > > > > > ___ > > pypy-dev mailing list > > pypy-dev@python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Leysin Winter sprint
Hi Armin, I would probably be unable to come during the first two weeks of march, so for me the preference is basically "anything >= 15/03" On Sun, Oct 29, 2017 at 11:11 PM, Armin Rigo wrote: > Hi all, > > I'm trying to organise the next winter sprint a bit in advance. It > might tentatively be occurring in March 2018. If people have > preferences for the exact week, please do tell :-) > > > A bientôt, > > Armin. > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Custom scanning methods?
Hi Timothy, I'm surely not an expert in this area and others can probably explain better how it works, but I think that you are looking for `rgc.register_custom_trace_hook`; for an example of usage, see e.g. pypy/module/micronumpy/concrete.py ciao, Anto On Fri, Sep 15, 2017 at 10:52 PM, Timothy Baldridge wrote: > I have a rather complicated structure I'd like to create in RPython. The > structure consists of a heterogeneous array of RPython classes laid out in > a single "byte array". The problem is these structures will contain GC'd > pointers. > > Is there a way (and can someone point me to the place) to tell the GC to > use a special scanning method when looking for pointers in a specific > object type? I've read the docs on rstrategies, but that seems to be > dealing mostly with swapping out primitive arrays for object arrays, which > isn't exactly what I'm looking for. > > Can anyone help? > > Thanks! > > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Need to rebuild wheels for every pypy minor version
Hi, note that this is not because of PyPy: it's the wheel package which chooses what to include in the wheel filename: https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/pep425tags.py?at=default&fileviewer=file-view-default#pep425tags.py-39 PyPy reports only the ABI version, which is pypy_41. This is probably wrong for the opposite reasons, i.e. it claims it's backward compatible even when it's not: https://bitbucket.org/pypy/pypy/issues/2613/fix-the-abi-tag ciao, Antonio On Tue, Jul 25, 2017 at 8:47 PM, Daniele Rolando wrote: > Hi guys. > > Right now pypy wheels names include both the major and minor pypy version > in them: e.g. uWSGI-2.0.14.0.*pp257*-pypy_41-linux_x86_64.whl > This means that if we want to upgrade pypy from 5.7.1 to 5.8 we'd need to > rebuild all our wheels and this is not scalable since there are new pypy > releases every 3/4 months. > > Wouldn't it be enough to only include the major version in the wheel name? > Are minor pypy versions really incompatible between them? > > Thanks, > Daniele > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
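To make the naming concrete: a wheel filename is composed of PEP 425 tags, and it is the *python tag* (`pp257` here) that bakes the PyPy major+minor version into the name, while the ABI tag (`pypy_41`) would stay stable across more releases. A toy decomposition (a sketch for illustration only; the emphasis markers from the mail are dropped, and pip's real parsing lives in the pep425tags.py code linked above):

```python
# PEP 425: {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
fname = "uWSGI-2.0.14.0-pp257-pypy_41-linux_x86_64.whl"

dist, version, python_tag, abi_tag, platform_tag = fname[:-len(".whl")].split("-")

assert dist == "uWSGI"
assert python_tag == "pp257"    # encodes the PyPy minor version -> rebuild per release
assert abi_tag == "pypy_41"     # the ABI tag alone changes less often
assert platform_tag == "linux_x86_64"
```

This is why fixing the tags (as in issue 2613) is what determines how often wheels must be rebuilt.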
[pypy-dev] cpyext performance
Hello, recently I have been playing a bit with cpyext, to see if there are low-hanging fruits to be taken to improve the performance. I didn't get any real result but I think it's interesting to share my findings. The benchmark I'm using is here: https://github.com/antocuni/cpyext-benchmarks It contains a simple C extension defining three methods, one for each of the METH_NOARGS, METH_O and METH_VARARGS flags. So first, the results with CPython and PyPy 5.8:

    $ python bench.py
    noargs : 0.78 secs
    onearg : 0.89 secs
    varargs: 1.05 secs

    $ pypy bench.py
    noargs : 1.67 secs
    onearg : 2.13 secs
    varargs: 4.89 secs

Then, I tried my cpyext-jit branch; this branch does two things:

1) it makes cpyext visible to the JIT, and adds enough @jit.dont_look_inside so that it actually compiles

2) it merges part of the cpyext-callopt branch, up to rev 9cbc8bd76297 (more on this later): this adds fast paths for METH_NOARGS and METH_O to avoid going through the slow __args__.unpack():

    $ pypy-cpyext-jit bench.py
    noargs : 0.30 secs
    onearg : 0.31 secs
    varargs: 4.90 secs

So, apparently this is enough to greatly speed up the calls, and be even faster than CPython. Note that "onearg" calls "simple.onearg(None)". However, things become more complicated as soon as I start passing various kinds of objects to onearg():

    $ pypy bench_oneargs.py  # pypy 5.8
    onearg(None): 2.09 secs
    onearg(1)   : 2.07 secs
    onearg(i)   : 4.98 secs
    onearg(i%2) : 4.92 secs
    onearg(X)   : 2.13 secs
    onearg((1,)): 2.30 secs
    onearg((i,)): 9.80 secs

    $ pypy-cpyext-jit bench_oneargs.py
    onearg(None): 0.30 secs
    onearg(1)   : 0.30 secs
    onearg(i)   : 2.52 secs
    onearg(i%2) : 2.56 secs
    onearg(X)   : 0.30 secs
    onearg((1,)): 0.30 secs
    onearg((i,)): 7.45 secs

So, the call optimization still helps, but as soon as we need to convert one object from pypy to cpython we are horribly slow.
However, it is interesting to note that:

1) if we pass a constant object, we are fast: None, 1, (1,)
2) if we pass X (which is a global X=100), we are still fast
3) any other object which is created on the fly is slow

Looking at the traces, they look more or less the same in the three cases, so I don't really understand what the difference is. Finally, about the branch cpyext-callopt, which was started in Leysin by Richard, Armin and me: I am not sure I fully understand the purpose of dbba78b270fd: apparently, the optimization done in 9cbc8bd76297 seems to work well, so what am I missing? ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
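For readers who want to reproduce this kind of measurement, a microbenchmark harness in the spirit of bench.py might look like the following sketch. These are pure-Python stand-ins: the real cpyext-benchmarks repo calls methods of a compiled C extension whose methods are declared with METH_NOARGS, METH_O and METH_VARARGS.

```python
import time

def bench(label, fn, *args, n=100000):
    # the real benchmark presumably uses many more iterations
    t0 = time.time()
    for _ in range(n):
        fn(*args)
    elapsed = time.time() - t0
    print("%-8s: %.2f secs" % (label, elapsed))
    return elapsed

class Simple:
    def noargs(self):      pass   # stand-in for a METH_NOARGS C function
    def onearg(self, x):   pass   # stand-in for METH_O
    def varargs(self, *a): pass   # stand-in for METH_VARARGS

s = Simple()
bench("noargs", s.noargs)
bench("onearg", s.onearg, None)
bench("varargs", s.varargs, 1, 2, 3)
```

The interesting cases in the mail (passing `i`, `i%2`, `(i,)`) correspond to calling `bench` with a freshly created object per iteration, which is what forces a pypy-to-cpython conversion on every call.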
Re: [pypy-dev] okay to rename cppyy -> _cppyy
+1 On Tue, Jul 18, 2017 at 11:33 PM, wrote: > Hi, > > any objections to renaming cppyy into _cppyy? > > I want to be able to do a straight 'pip install cppyy' and then use it > w/o further gymnastics (this works today for CPython), but then I can't > have 'cppyy' be a built-in module. > > (You can pip install PyPy-cppyy-backend, but then you'd still have to deal > with LD_LIBRARY_PATH and certain cppyy features are PyPy version dependent > even as they need not be as they are pure Python.) > > The pip-installed cppyy will still use the built-in _cppyy for the PyPy > specific parts (low-level manipulations etc.). > > I'm also moving the cppyy documentation out of the pypy documentation and > placing it on its own (http://cppyy.readthedocs.io/), given that the > CPython side of things now works, too. > > Yes, no, conditional? > > Thanks, > Wim > -- > wlavrij...@lbl.gov--+1 (510) 486 6411--www.lavrijsen.net > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] cppyy fails to build on gcc 5 and clang
Hi Tobias, On Wed, Jan 18, 2017 at 5:27 PM, Tobias Oberstein < tobias.oberst...@gmail.com> wrote: > Are you aware of > > https://github.com/alex/zero_buffer > > ? > > This emulates Python strings using zero-copy read-only buffer views. > yes, I saw it in the past and I considered using it for capnpy. IIRC, I measured that at the end of the day, the overhead of using it was larger than simply doing string slicing, especially for short strings. It might be useful for very large strings, however. Depending on what you need to do, for zero copy you could also consider returning a memoryview slice of the original underlying buffer. > Holy grail for me (use case being IPC) would be: > > Python process 1 mmap's a file shared with Python process 2. > > Python process 1 puts a string into mmap'ed file, pointer to that is > "somehow transferred" to process 2 (eg pushing the index into the mmap'ed > file over Unix domain socket .. a single uint64), and Python code in > process 2 can do stuff with this string _without_ copying - probably via > zero_buffer. > > Have you actually measured that copying the data between processes is the bottleneck? Using shared memory is something I tried also for a client of mine, but in the end we switched back to passing messages over the network because the extra complexity was not worth the gain. But again, I suppose it depends on the size of the message. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
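The memoryview suggestion can be illustrated with a short sketch: slicing a memoryview yields a view into the same memory rather than a copy, so changes to the underlying buffer remain visible through the slice.

```python
buf = bytearray(b"hello world")
mv = memoryview(buf)

word = mv[6:11]                      # a view, not a copy
assert word.tobytes() == b"world"

buf[6:11] = b"pypy!"                 # mutate the underlying buffer...
assert word.tobytes() == b"pypy!"    # ...and the slice sees the change

assert word.obj is buf               # the slice still references the original
```

The same idea applies to an mmap'ed file: `memoryview(mm)[i:j]` in the receiving process gives zero-copy access to the bytes the sender wrote, with only the offsets passed over the socket.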
Re: [pypy-dev] cppyy fails to build on gcc 5 and clang
Hi Wim, On Wed, Jan 18, 2017 at 5:09 PM, wrote: > since you wrote the initial data access part yourself way back when, I'd > expect you to fix cppyy if it were slower! :) > > that's a fair point, indeed :). But on top of that you need to put a layer which exposes a pythonic interface (for example, offering list-like classes with an __iter__ and a __getitem__). So I have no idea of how the final speed of the thing will be, until someone tries and measures :). ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] numpypy include files
Ok, thank you, I fixed & committed. About me being the first to report: actually, I think it's because there is a lot of confusion around numpy+pypy. I started to investigate this because I saw by chance a colleague switching from installing numpypy to installing numpy. When I asked, he just said that his C extension no longer worked with numpypy on newer versions of PyPy. And as a result his code ran slower (because for his particular workload numpypy is better). So, lessons learnt:

- people are confused about this numpy vs numpypy thing. Also, the fact that the numpypy repo is called "numpy" does not help, as they tend to think it's just an outdated copy of the official numpy.
- it's hard to find docs about this: the only docs I found are http://pypy.org/download.html which is not the first place I'd look at :)

So: what is the current status of numpy vs numpypy? Is the latter still maintained or is it slowly dying? If numpypy is still relevant, I propose to hack things so that:

1) we have a dedicated page about numpy either on pypy.org or readthedocs
2) when you execute pip install numpy, it installs the upstream numpy but also displays a link to that page
3) pip install numpypy works as well (and displays the same link).

What do you think? On Wed, Jan 18, 2017 at 9:05 PM, Matti Picus wrote: > On 18/01/17 20:38, Antonio Cuni wrote: > >> Hello Matti, >> I am having some troubles with the latest pypy and numpypy: basically, >> numpy.get_include() returns the wrong directory. >> >> I investigated a bit and I think the culprit are these two commits: >> https://bitbucket.org/pypy/pypy/commits/ad36a29d0fcc >> https://bitbucket.org/pypy/numpy/commits/26e09b343f >> >> Probably it's just a typo in the numpy commit, which specify '_numpy' >> instead of '_numpypy'. >> >> However, before committing the fix I wanted to ask you, to make sure I am >> not missing anything important :). 
>> >> ciao, >> Anto >> > Indeed, the changeset https://bitbucket.org/pypy/numpy/commits/26e09b343f > seems to be an error, should be _numpypy. Interesting that you are the > first to report it, I guess not many people try to build c-extensions with > numpypy > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] numpypy include files
Hello Matti, I am having some trouble with the latest pypy and numpypy: basically, numpy.get_include() returns the wrong directory. I investigated a bit and I think the culprits are these two commits: https://bitbucket.org/pypy/pypy/commits/ad36a29d0fcc https://bitbucket.org/pypy/numpy/commits/26e09b343f Probably it's just a typo in the numpy commit, which specifies '_numpy' instead of '_numpypy'. However, before committing the fix I wanted to ask you, to make sure I am not missing anything important :). ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] cppyy fails to build on gcc 5 and clang
Hi Tobias, On Tue, Jan 17, 2017 at 7:27 PM, Tobias Oberstein < tobias.oberst...@gmail.com> wrote:
> Hi Antonio,
>
> Fantastic!! Actually, if this is true (performance), this would be wonderful - I can spare my nerves (hello C++, you are a monster!), and eat my cake ..
>
> The benchmarks http://capnpy.readthedocs.io/en/latest/benchmarks.html for scalar attribute access look awesome (close to mere instance access).
>
> Structured value access (lists/dict): bigger difference. I am also concerned about GC pressure - is it still "zero copy"?

yes: under the hood, capnpy objects are represented as immutable strings; then the generated classes contain accessors which look like this (overly simplified):

    class Point(_Struct):
        @property
        def x(self):
            offset = statically_known_x_offset + self._offset
            return struct.unpack_from('q', self._buf.s, offset)

where self._buf.s is a string. The nice thing is that the pypy JIT does a very good job at optimizing struct.unpack_from: if you look at the generated code, you see that it loads the 8 bytes directly from the in-memory buffer. So, it's very close to optimal performance. For lists it is the same: when you look up a list field, you get a wrapper around the very same buffer, then the custom __getitem__ does the actual lookup in memory. Note that currently lists on pypy are slowish, but I should be able to solve the problem in the near future. The only exception is strings: getting a text attribute means taking a slice of the buffer. However, if the returned value is short-lived, the JIT might be able to optimize the slicing away. (note: there is no automatic conversion to unicode, although I might add an option for that in the future). I don't understand what you mean by dicts, as there are no dictionaries in capnproto. Note also that capnpy is serialization only: there is no support for the RPC stuff.
> Note: I am purely interested in performance on PyPy .. 
> > In general: I thought it would be a good idea to use capnproto C++ > generator, and then cppyy to get the best performance (on pypy). Given > there is "antocuni/capnpy", do you think this is a pointless endeavour? > The original goal of capnpy was to be as fast as possible on PyPy. However, if you find that C++ + cppyy is faster, I'd be very interested to know :). ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] cppyy fails to build on gcc 5 and clang
Hi, On Tue, Jan 17, 2017 at 5:51 PM, Tobias Oberstein < tobias.oberst...@gmail.com> wrote: > Note: for now I am fine, I managed to build it using gcc 4.9 toolchain. > Need to see how far I get with captnproto now .. > if you are interested in capnproto, you might want to try capnpy: it's written in pure python and it has been designed to be super fast on pypy (and it's very fast on CPython as well): https://github.com/antocuni/capnpy http://capnpy.readthedocs.io/en/latest/ ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Leysin sprint?
Hi, I'd prefer the classic Leysin sprint :). I will probably be unavailable the first two weeks of March; other than that, any week should be fine for me. On Mon, Jan 16, 2017 at 8:59 PM, Maciej Fijalkowski wrote: > I have a place to stay in Zurich so it's a bit of a win for me :-) > > I would prefer this to be somewhere around early March I think, but I > can't yet commit > > Cheers, > fijal > > On Mon, Jan 16, 2017 at 8:43 PM, Manuel Jacob wrote: > > Hi, > > > > Fortunately, this year I can place my exams quite flexibly. I'm staying > in > > Brussels until the February 7th after FOSDEM, so anything after that > would > > work perfectly for me. > > > > I'd prefer a Leysin sprint, but mostly for reasons of nostalgia rather > than > > more serious considerations. > > > > -Manuel > > > > > > On 2017-01-15 09:41, Armin Rigo wrote: > >> > >> Hi all, > >> > >> I'm starting to organize the Leysin sprint for this winter. > >> > >> The first note is that the Swiss Python Summit will take place on Feb > >> 17th near Zurich (http://www.python-summit.ch/). I'll give a talk > >> about RevDB, the reverse debugger. > >> > >> One option would be to add a few days of sprint at or near the > >> conference location. That may be a way to attract more people. The > >> other option would be a regular sprint in Leysin. For that case, the > >> dates are still completely open. (It snowed a lot!) > >> > >> Anyone that thinks about coming, please tell me your preferences and > >> dates! > >> > >> > >> A bientôt, > >> > >> Armin. > >> ___ > >> pypy-dev mailing list > >> pypy-dev@python.org > >> https://mail.python.org/mailman/listinfo/pypy-dev > > > > ___ > > pypy-dev mailing list > > pypy-dev@python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] integration with java
Hi, you could also check pyjnius, which seems more mature than jpype and is used in real-life apps to run e.g. python on android: http://pyjnius.readthedocs.io/en/latest/ I quickly tried to compile pyjnius with pypy (using cpyext) and it seems to work (at least, the tests pass). However, since it uses cpyext I don't know what the performance is like. ciao, Anto On Thu, Jul 14, 2016 at 11:08 AM, Maciej Fijalkowski wrote: > Hi Andrey > > There are no java bindings for PyPy just yet. You would need to write > your own based on cffi. http://jpype.sourceforge.net/ is one of the > examples of how those things can be done, but you would need to use > cffi as opposed to CPython C API. If you need a quick solution, you > can try compiling jpype against PyPy's CPython C API compatibility > layer, but it would be at least very slow. > > Best regards, > Maciej Fijalkowski > > On Wed, Jul 13, 2016 at 9:49 PM, Andrey Rubik > wrote: > > Hi guys! > > > > I'm new at pypy development and need help. > > > > I want to do integration between pypy and jmeter (this project is on java: > > http://jmeter.apache.org/). > > > > As I learned on the jmeter dev list, I need some pypy-java bindings. It may be > > some .jar file for example. > > > > Where can I get some pypy-java binding (.jar file)? > > > > > > Thanks in advance! > > > > > > -- > > Best Regards, > > Andrey Rubik > > > > ___ > > pypy-dev mailing list > > pypy-dev@python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Multiprocessing - CPython and PyPy
fwiw, you might want to look at this: https://github.com/felipecruz/gurobi_cffi from what I read in the readme, it probably exposes a different interface than gurobipy, but it might be enough for your use case. On Tue, Nov 24, 2015 at 9:35 PM, Luis José Novoa wrote: > Hi Maciej, > > Thanks for your reply. The problem is that I'm using gurobipy (for the > interaction with the mathematical programming solver Gurobi) to solve a > master problem and then using the output to solve the subproblems. Now, > gurobipy is not compatible with PyPy. > > > On Tue, Nov 24, 2015 at 3:23 PM, Maciej Fijalkowski > wrote: > >> Hi Luis. >> >> Multiprocessing works under pypy, so just run your program under pypy >> with no changes and see what happens >> >> On Tue, Nov 24, 2015 at 7:30 PM, Luis José Novoa >> wrote: >> > Hi everyone. >> > >> > I'm trying to solve some problems in parallel using the multiprocessing >> > module in Python 2.7. A general model is built using CPython and then >> the >> > subproblems are solved in parallel, returning results in a queue. This is >> > currently working fine. I would like to solve the subproblems using >> PyPy to >> > increase speed. >> > I found >> > http://project-trains.tumblr.com/post/102076598295/multiprocessing-pypy >> , >> > but there it says that the procedure only works with CPython 3.4. >> > >> > I wonder if there is any clean, direct way to do this. >> > >> > Appreciate any help. >> > >> > -- >> > Luis J. Novoa >> > >> > ___ >> > pypy-dev mailing list >> > pypy-dev@python.org >> > https://mail.python.org/mailman/listinfo/pypy-dev >> > >> > > > > -- > Luis J. Novoa > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] ARM failing test
Hi David, I am developing the "faster-rstruct" branch, which aims to speed up struct.unpack by reading the values at once from memory, instead of byte-by-byte as it does right now. The branch works fine on x86 but fails on armhf (see e.g. http://buildbot.pypy.org/summary/longrepr?testname=AppTestStruct.%28%29.test_unpack_standard_little&builder=pypy-c-jit-linux-armhf-v7&build=843&mod=module.struct.test.test_struct). I suspect a big/little endian issue. Two questions: 1) I know that ARM CPUs can be either little or big endian. What is the case for our armhf machine? 2) is it possible to have ssh access to one or more of our ARM machines so I can test easily? thank you :) ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
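Question (1) above can be answered from a Python prompt on the target machine itself; a stdlib-only check (not part of the branch) is to compare the native byte order against an explicit little-endian encoding. For what it's worth, Linux armhf distributions normally run the CPU in little-endian mode, but the check below settles it on the actual buildbot machine.

```python
import struct
import sys

# struct's '=' prefix means native byte order (standard sizes);
# '<' forces little-endian. If the two encodings agree, the CPU is
# little-endian; otherwise it is big-endian.
native = struct.pack('=I', 0x01020304)
order = 'little' if native == struct.pack('<I', 0x01020304) else 'big'
print(order)

# sys.byteorder reports the same information directly
assert order == sys.byteorder
```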
Re: [pypy-dev] PyPy 15.11 release is imminent
Why use yy.mm instead of just a single increasing int number like Chrome? To avoid confusion, we should probably skip pypy 3 and start releasing from pypy 4. It looks just simpler than 15.11 and friends to me. ciao, Anto On Fri, Oct 16, 2015 at 7:12 AM, Matti Picus wrote: > I have started a major release cycle, and consensus was to start a new > numbering scheme, based on yy.mm > > While every release is a major event (yes 2.5.0, you can get a > participation award too) this one really is a biggie. > Warmup and tracing memory improvements, internal refactoring, SIMD > vectorization on x86, and more. > > Please let me know if there are more good things worth waiting for and > help flesh out the release notice > https://bitbucket.org/pypy/pypy/src/default/pypy/doc/release-15.11.0.rst > > Matti > > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-commit] pypy default: (fijal, arigo) merge optresult-unroll - this branch improves warmup by about
Hi, On Tue, Sep 8, 2015 at 3:11 PM, fijal wrote:
> Author: Maciej Fijalkowski
> Branch:
> Changeset: r79543:3c45f447b1e3
> Date: 2015-09-08 15:11 +0200
> http://bitbucket.org/pypy/pypy/changeset/3c45f447b1e3/
>
> Log: (fijal, arigo) merge optresult-unroll - this branch improves warmup
>      by about 20% by changing the underlying structure of the
>      ResOperations by killing Boxes. It also rewrites unrolling to
>      something (hopefully) a bit saner
>
> diff too long, truncating to 2000 out of 44326 lines
>
> diff --git a/pypy/goal/targetpypystandalone.py b/pypy/goal/targetpypystandalone.py
> --- a/pypy/goal/targetpypystandalone.py
> +++ b/pypy/goal/targetpypystandalone.py
> @@ -341,8 +341,8 @@
>
>      def jitpolicy(self, driver):
>          from pypy.module.pypyjit.policy import PyPyJitPolicy
> -        from pypy.module.pypyjit.hooks import pypy_hooks
> -        return PyPyJitPolicy(pypy_hooks)
> +        #from pypy.module.pypyjit.hooks import pypy_hooks
> +        return PyPyJitPolicy()#pypy_hooks)

Is this still intended, or is it a typo? I suspect it disables pypyjit.set_compile_hook etc? ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] syntax sugar for stm TransactionQueue
hi Armin, following the discussion we had today, that TransactionQueue could be easier to understand for people if you explain it as "a for loop in which you don't know the order of the iteration", I figured out that we might even introduce some syntactic sugar for it; not sure if it makes things simpler or more complicated, though :). Anyway, I'm thinking of something like this:

    def parallel(iterable):
        def decorator(f):
            tr = TransactionQueue()
            for item in iterable:
                tr.add(f, item)
            tr.run()
        return decorator

to be used in this way:

    mylist = [1, 2, 3]

    @parallel(mylist)
    def for_(item):
        # do something with item
        pass

ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
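For readers without a pypy-stm build at hand, the decorator shape from the message above can be emulated with a thread pool. This sketch only captures the "unordered for loop" idea, not the actual TransactionQueue semantics (there is no conflict detection or transactional serializability here).

```python
from concurrent.futures import ThreadPoolExecutor


def parallel(iterable):
    # mirrors the proposed sugar: the decorated function becomes the loop
    # body, applied to every item in an order you must not rely on
    def decorator(f):
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(f, iterable))
        # nothing is returned: as in the original sketch, the decorated
        # name is consumed by the decorator (it becomes None)
    return decorator


results = []

@parallel([1, 2, 3])
def for_(item):
    results.append(item * 2)

print(sorted(results))  # -> [2, 4, 6]
```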
Re: [pypy-dev] Porting PyPy/rpython to Python 3
Hi, sorry for responding so late, I was at a conference. On Sat, Apr 18, 2015 at 11:02 AM, Armin Rigo wrote: > I would imagine that a better way would be to not care about > restricted style at all. If we really decide to move to Python 3, > then maybe we should drop 2.7 altogether and all do one sprint whose > goal is to fully switch to Python 3.N (both "default" and the major > branches open at the time). It would be a documented move that occurs > at some date --- I imagine this to be in the "far future", say when > Python 3 is becoming dominant over Python 2. > The question is also WHETHER Python 3 will become dominant over Python 2. This is a broad topic and I'm not sure pypy-dev and this particular thread are the right place to discuss it, but in my experience, I see a lot of large 2.7 codebases which will likely never be ported to python3. The problem for such codebases is what happens when python2.7 is no longer supported, but for PyPy this is not a problem since we are self-hosting: we DO decide when to stop supporting pypy-2.7, and for all I know it might be perfectly reasonable to support pypy-2.7 + rpython-on-python-2.7 for a long time. My final point of view is similar to Armin's: +0 as long as the compatibility does not affect the readability/maintainability of the code base, -1 as soon as it does. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] EuroPython?
Hi, my plan was to submit a talk about profiling/optimizing, possibly together with fijal if he comes (but I didn't do it yet :)). Probably the talk which suits best for talking about the general status is Romain's? On Sat, Apr 11, 2015 at 12:10 AM, Romain Guillebert wrote: > Hi Armin > > I submitted the talk I gave at fosdem. > > Romain > > On Fri, Apr 10, 2015 at 5:51 PM, Armin Rigo wrote: > > Hi all, > > > > I'm preparing a EuroPython submission about STM and/or about CFFI, and > > wondering if someone else also planned to submit a talk. If not, I'll > > include a general "status of PyPy" part in my submission. > > > > Armin > > ___ > > pypy-dev mailing list > > pypy-dev@python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] numpy installation
On Wed, Feb 11, 2015 at 10:09 PM, Matti Picus wrote: > The current installation procedure ensures the users get the latest > version, with no lag for an extra packaging step after pushing to bitbucket. > this also has the drawback that it can break at any time for pypy releases. For example, the current HEAD of numpypy does not work with pypy 2.4 (and it didn't work for a while, even before we released pypy 2.5) ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] libdynd
Hi, this looks interesting, but from a quick look it seems they are only offering a C++ API? In that case, it might be better/easier to wrap it through cppyy than cffi. Also, did Travis tell you what the plans are for scipy? On Fri, Jul 25, 2014 at 10:24 AM, Armin Rigo wrote: > Hi, > > Feedback, from Travis Oliphant at EuroPython: libdynd > (https://github.com/ContinuumIO/libdynd) might be the longer-term > future of NumPy, and it looks like it would be much more natural to > bind to it from PyPy (via cffi). Worth a look I believe. It > certainly looks to me like such a cffi binding would be much more > user-friendly than numpypy, in the sense that missing functionality > would be far easier to contribute back. > > > A bientôt, > > Armin. > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] wrong user mapped in the issue tracker
Hi, I was looking at this issue: https://bitbucket.org/pypy/pypy/issue/1514/module-behaviour-incompatibility-which and I noticed that pypy's user "amaury" has been mapped to bitbucket's user "amaury", which unfortunately is another physical person. I don't know much about the migration of the issue tracker, so I don't even know whether it's possible to fix and how easy or hard that would be; I just wanted to point it out. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] compiling with 4G ram
Hello Benedek, On Fri, Jul 4, 2014 at 10:42 PM, Armin Rigo wrote: > If you have exactly 4 GB of RAM, > and you don't have any PyPy to start with, then CPython is probably > using just too much memory indeed. > note that what Armin says applies only if you want to *translate* pypy by yourself. If you just want to try it, you can simply download a prebuilt binary from pypy.org or from this site, which offers portable binaries which are supposed to run on any linux distro: https://github.com/squeaky-pl/portable-pypy ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] How to get transparent proxy working.
Hi Armin, > As you have noticed, this is an old feature not supported any more > for a long time (and mostly untested). It was an experimental > feature that turned out not to be useful in practice. Just using > regular Python, you can mostly achieve the same effects, with the > exception (mostly) of pretending to have an object of some built-in > type for built-in function calls. > actually, jinja2 uses transparent proxies to create fictitious traceback chains: https://github.com/mitsuhiko/jinja2/blob/master/jinja2/debug.py if we decide that transparent proxies are not supported or are deprecated, we should remove them and tell people to stop using them. ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy current status
Hi David, thank you for the prompt answer. One more question, since I am sure that people will ask me :). Does PyPy work on android? I suppose the answer is "yes, but of course without integration with the UI", but better to check. I'll also point out that the Raspberry Pi Foundation funded part of the ARM development. It also funded pygame_cffi, right? On Wed, May 21, 2014 at 2:31 PM, David Schneider wrote: > Hi all, > > On 21.05.2014, at 01:56, Antonio Cuni wrote: > > > David: what is the current status of PyPy on ARM? Should I say "it just > works" or there is something more to add? What about performance? > > > > Regarding ARM, the status is indeed "it just works" (although the JIT > lacks a few features compared to x86). For performance the best overview is > this blogpost > http://morepypy.blogspot.de/2013/05/pypy-20-alpha-for-arm.html from last > year, which should still be mostly accurate. > > It might be worth pointing out that PyPy is being distributed as part of > the Raspbian OS images for the Raspberry-Pi > > Cheers > > David ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy current status
Hi, > What do you exactly want to know about hippy performance? > I simply want to put in a slide "hippy is N times faster than standard PHP", for some reasonable value of N. Nothing more :) ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy current status
Hi Alex, On Wed, May 21, 2014 at 1:58 AM, Alex Gaynor wrote: > Oh, performance, the only Ruby implementation that's competitive with it > is the Oracle Ruby VM with Truffle, I think they've started merging that > into JRuby by now, so I'm not sure how that compares. Definitely faster > than MRI though :-) > > do you have a number to put on the slides? People like hearing "N times faster than X" better than "faster than X" :) thanks! ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] PyPy current status
Hi all, I am preparing the usual "PyPy status talk" which I'll give at the upcoming PyCon Italy, and which is going to cover what happened in the last two years of PyPy. If you are interested, the draft slides are here: https://bitbucket.org/pypy/extradoc/src/tip/talk/pycon-italy-2014/talk.rst?at=extradoc In the talk, I will give an overview of the current status of the various subprojects, so I'd be glad if you could help, because you surely know the status of your area of competence better than me :) David: what is the current status of PyPy on ARM? Should I say "it just works" or is there something more to add? What about performance? Matti, Brian: what about numpy? Since people like numbers, what percentage of numpy can we consider completed? Philip: same question for py3k. Is it still considered beta quality or can we say it's stable? Alex, Maciej: I'll also briefly talk about the other frontends, Topaz and Hippy. How complete are they? What is their performance like? I know that hippy is still actively developed, but what about Topaz? Other than what I asked, I'll also highlight CFFI and STM. If anyone has ideas for other cool things which happened in PyPy since 2012, suggestions are welcome :) thank you very much! Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] pypy buildbot hook overall design
Hi Matti, IIRC, the repo copy is needed to compute the diff, since the payload only contains the hash of the relevant revisions On Thu, May 15, 2014 at 7:56 PM, Matti Picus wrote: > Hi. I am looking at adding the ability to monitor a bitbucket-hosted git > repo to the bbhook module in pypy's buildbot. The current design requires a > repo copy to monitor. Couldn't we just use the payload from bitbucket > instead of requiring a repo copy? > Matti > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Virtualizables in RPython
On Wed, Apr 30, 2014 at 1:11 PM, Anton Gulenko < anton.gule...@student.hpi.uni-potsdam.de> wrote:
> I'll try to make the example that Tim mentioned more clear.
> Building up the deep stack was done INSIDE the loop. It was also the only thing that happened inside the loop.
> That's why we expected the traces for deep and shallow stacks to be very similar - shouldn't the optimizer simply eliminate the additional frame objects?
> Also, the relevant fields in the frame objects are indeed marked as virtualizable in the SPy VM.
> The smalltalk code was basically this, with a varying argument to buildStack:
>
> buildStack: depth
>     depth <= 0 ifTrue: [ ^ nil ].
>     self buildStack: depth - 1
>
> 10 timesRepeat: [ self buildStack: 100 ]
>
> Do you have any thoughts regarding this example?

just a wild guess: it might be possible that in this example the trace becomes too long, so tracing aborts and the function is marked as "trace from start"? In that case, the function call cannot be inlined and needs to be turned into a real assembler recursive call, which means that the frame cannot be a virtual because it needs to be passed as an argument. ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Python C-API on Windows
Hi Johan, the extension module needs to have an extension like *.pypy-22.pyd: this is to avoid trying to load cpython modules by mistake. You should use setup.py+distutils to build your module, so that such details are taken into account automatically. ciao, Anto On Mon, Feb 24, 2014 at 1:50 PM, Johan Råde wrote: > Hi Yury, > > On Windows a dynamic library, by default, has the extension .dll. But a > Python extension module must have the extension .pyd, at least under > CPython. I assumed that the extension should be .pyd under PyPy too. > > For what it is worth, here is a link to the Visual Studio solution I used > to build the extension module > https://dl.dropboxusercontent.com/u/525329/C-API/Test.zip > > --Johan > > > > > On 2014-02-24 13:13, Yury V. Zaytsev wrote: > >> >> Back when I was experimenting with CPyExt, I've learned that PyPy will >> not load foo.so (not sure what the name should be like on Windows) by >> default, because it doesn't have the right extension, which it will have >> if it's built with PyPy distutils, but not if you have done this >> manually with a Makefile. >> >> The name should be something like 'foo.pypy-22.so', and the right suffix >> is defined somewhere in distutils. >> >> > > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
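For debugging cases like the one above, the suffix that the build machinery will append to extension modules can be queried from the stdlib. This is a generic check, not specific to the PyPy 2.2/Windows case; the exact value varies per interpreter and version.

```python
import sysconfig

# distutils/sysconfig records the per-interpreter extension-module suffix.
# Building through setup.py applies it automatically, which is why a
# hand-written Makefile that hard-codes ".pyd" or ".so" produces modules
# the interpreter silently refuses to load.
# On Python 3 the variable is EXT_SUFFIX; very old versions used SO.
suffix = sysconfig.get_config_var('EXT_SUFFIX') or sysconfig.get_config_var('SO')
print(suffix)
```

Comparing this value against the filename on disk is usually the fastest way to diagnose an "ImportError: No module named foo" for a module that is clearly sitting right there.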
[pypy-dev] Europython 2014
Hi Armin, hi Romain, in Leysin we discussed the idea of giving the usual "PyPy status talk" at the upcoming EuroPython, considering that people complained that there was none at the last one :) The CfP ends on the 9th: should we file a joint proposal? Do you also have plans for giving a more detailed talk about STM/numpy progress, or will it all go together? Is there anyone else coming to EP interested in doing it? ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] AttributeError: 'socket' object has no attribute '_reuse'
On 19/12/13 17:01, Alex Gaynor wrote: No, this isn't a bug in PyPy. If gevent wants to use the internal details of the socket module in ways that aren't defined, they need to pass something which matches the required interface. it's worth noting that eventlet had the same issue, and it was fixed here (thanks to... Alex :)): https://github.com/eventlet/eventlet/commit/2633322d6581beacd39d832284f17d461eb25098 ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy3 release?
On 02/12/13 21:08, Philip Jenvey wrote: It's a bit weird w/ PyPy3 and PyPy sharing the version numbering scheme, at least for now, since it implies the release schedules are tied together. Maybe they should be though? Calling it PyPy3 w/ the same version scheme seemed to make the most sense vs the other options. A PyPy3 v0.1 could have broken some cases of code like sys.pypy_version_tuple < (1, 5) in the wild. Calling it PyPy 3.0 would have made sense but forced the CPython 2.7 compat PyPy to stick with a 2.x scheme forever. another issue is with cpyext: if sys.pypy_version_number is the same, pypy3 extension modules will have the same .pypy-22.so extension as the pypy2 version, potentially causing lots of trouble. I cannot think of a good way to solve the problem, though. One possibility is to have pypy_version_number incremented by 3000, so that this would be PyPy 3002.2. Note that this would still break code like pypy_version_number > (2, 2). ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Proposal for sprint in Tallinn, Estonia
Hi, On 01/12/13 10:21, Ahti Heinla wrote: Hi, OK, thanks! To make the scheduling easier, I created this Doodle poll. As I understand, the critical thing is that most of the core developers fill out their preferences (or mark all dates as unsuitable, if not interested in the Tallinn sprint at all), since if there are just one or two core developers coming, this is not going to be a successful sprint. http://doodle.com/4uiwehbfed7ryqf8 The sprint lasts for a week. Select the starting dates (Mondays) suitable for you. One of the dates is already in December, and some are very close to the Leysin sprint, but I'm including them just in case. I checked my schedule more carefully and noticed that I cannot make it in the winter :-( It would work for me only from mid-March. Sorry for the confusion :-( ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] Proposal for sprint in Tallinn, Estonia
Hi, as I already said on IRC, I'd be glad to do a sprint in Tallinn, as I've already been there and I really liked the city :) On Tue, Nov 26, 2013 at 1:15 PM, Ahti Heinla wrote: > Hi, > > I am new to PyPy, but very impressed with what you guys have done. > Myself I am best known for having been a founding engineer and Chief > Technical > Architect for Skype. Python is my favourite language, and I want to help > PyPy > somehow. > > How about I organise a sprint in Tallinn, Estonia (where I live)? > I can hopefully > drum up some interest among local developers, perhaps also get some top > ex-Skype engineers to join. The requirement to contribute a week full-time > is a > deterrent for many though, so I am not sure yet how many people I can get. > > Myself I have no background in interpreters/compilers, but have > written > code for 32 years, often optimised lowlevel code (assembly, C++). I would > need some hand-holding to get started, but I am sure I'd be productive in > a day or two. > > Ahti > ahtih on IRC > ___ > pypy-dev mailing list > pypy-dev@python.org > https://mail.python.org/mailman/listinfo/pypy-dev > ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyArray_Type cpyext bug
Hi Matti, On 23/11/13 20:55, Matti Picus wrote: Can we just say "don't do that?" I guess the answer is no... Going down the initialization route seems to be the way numpy does it, I see import_array(); used extensively in numpy c code. Although making sure it is only called once seems to really complicate the header files, with API defines and strange macros. I'm not sure about what you are saying. Of course we need to support PyArray_Type because it's part (a very important part) of the numpy C API. "import_array()" is unrelated because its role is to setup the functions to be called from C code, while PyArray_Type is part of static data. Anyway, I fixed it in 7f3a776cc72a, and I think it's the "correct" fix. ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
[pypy-dev] PyArray_Type cpyext bug
Hi, I committed a cpyext+numpy failing test in a3c3b75a7f2b. In short, the following fails:

    if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &obj))
        return NULL;

The problem is that PyArray_Type is currently defined in ndarrayobject.c but never initialized:

    PyTypeObject PyArray_Type;

no surprise that we get a segfault when we try to use it. However, I'm not sure about the best way to fix it, so I ask the cpyext wizards :) I think that at the end what we want is an object for which &PyArray_Type is equal to the PyObject* that we get when we pass _numpypy.multiarray.ndarray to C. One possibility is to run the following code in some initialization function:

    static PyObject* _PyArray_Type;
    #define PyArray_Type (*_PyArray_Type)

    PyObject* np = PyImport_ImportModule("numpy");
    if (!np)
        return;
    _PyArray_Type = PyObject_GetAttrString(np, "ndarray");
    if (!_PyArray_Type)
        return;

I'm sure there is a better way to do this. However, I tried to play with the cpyext source for a while and didn't manage to find it. Any suggestion? ciao, Anto ___ pypy-dev mailing list pypy-dev@python.org https://mail.python.org/mailman/listinfo/pypy-dev
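For readers less familiar with the C API: the check that PyArg_ParseTuple's "O!" format performs boils down to an isinstance-style test against the type object whose address you pass, which is why an uninitialized PyTypeObject segfaults instead of failing cleanly. Roughly, in Python terms (the helper name is made up):

```python
def parse_O_bang(obj, expected_type):
    # "O!" succeeds if obj is an instance of expected_type (subclasses
    # included) and raises TypeError otherwise; with an uninitialized
    # PyTypeObject there is no valid type to compare against, hence the
    # segfault described above.
    if not isinstance(obj, expected_type):
        raise TypeError('argument must be %s, not %s'
                        % (expected_type.__name__, type(obj).__name__))
    return obj


print(parse_O_bang([1, 2], list))  # -> [1, 2]
try:
    parse_O_bang([1, 2], dict)
except TypeError as e:
    print('rejected:', e)
```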
Re: [pypy-dev] More strategies
yes, +1 for IntFloatNoneButNotStrangeNans as well. It seems like the best we can do, and it probably covers 99.9% of the real-world use cases.

On Fri, Nov 15, 2013 at 8:07 AM, Maciej Fijalkowski wrote:
> On Thu, Nov 14, 2013 at 11:07 PM, Armin Rigo wrote:
> > Hi Antonio,
> >
> > On Thu, Nov 14, 2013 at 2:35 PM, Antonio Cuni wrote:
> >> W_FloatObjectPreservingTheBits will be created only by operations like
> >> struct.unpack, cffi.cast, etc.
> >
> > That's not enough: if you read one such float into a variable and then
> > append that variable into another list, then the other list also needs
> > to record the fact that it contains special NaNs.
> >
> > It seems we could pick the following solution instead: keep
> > FloatStrategy, storing an RPython list of floats --- including possibly
> > special NaNs; and add FloatIntStrategy, which cannot store special
> > NaNs. We check for the special NaNs when we add into a
> > FloatIntStrategy list, and when converting from FloatStrategy to
> > FloatIntStrategy. For the latter case we do indeed need to check all
> > items, which sounds a bit pointless, but (1) this is already good
> > progress over the current situation, which is that we need to allocate
> > a W_FloatObject per item and a new RPython list to hold them; and (2)
> > doing the check over all items upon conversion is actually the same
> > total amount of work as it would be if we checked each item as it was
> > added to the FloatStrategy list.
> >
> > (Fwiw, I'm also fond of the idea that it should actually be a
> > "FloatIntNoneStrategy"; it would improve the situation even for lists
> > of int-or-None.)
> >
> > A bientôt,
> >
> > Armin.
>
> +1 for IntFloatNoneButNotStrangeNans strategy
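For the curious, the "strange NaNs" above are floats whose bit pattern is a NaN other than the single canonical quiet NaN: only the canonical one can be safely regenerated after being stored untagged. A sketch of the check, in plain Python (the canonical pattern 0x7FF8000000000000 is an assumption about the platform, not something stated in the thread):

```python
import struct

CANONICAL_NAN = 0x7FF8000000000000  # assumed canonical quiet-NaN bit pattern

def is_special_nan(x):
    # A double is a NaN when its exponent bits are all ones and its mantissa
    # is non-zero; it is a "special" NaN when it is a NaN but not the
    # canonical one, so its payload would be lost (or misread) by a strategy
    # that does not store the raw bits.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    is_nan = exponent == 0x7FF and mantissa != 0
    return is_nan and bits != CANONICAL_NAN
```

A FloatIntStrategy list would run this kind of check on every append and on every conversion from FloatStrategy, exactly as described in Armin's proposal.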
Re: [pypy-dev] More strategies
On 14/11/13 10:57, Armin Rigo wrote:
> Bah, there is another issue. The following code happens to work right now:
>
>     x, = struct.unpack("d", "ABCDxx\xff\x7f")
>     y = struct.pack("d", x)
>     assert y == "ABCDxx\xff\x7f"
>
> This works even though x happens to be a NaN; its bit pattern is
> preserved. Such an x could not be stored into a FloatIntegerListStrategy:
> if it has the wrong bit pattern, we'd get the nonsensical result that
> storing it in a list and reading it back suddenly gives us an integer
> object with a random value... Unsure what to do about that.

I'm tempted to say that this is an implementation detail, although for all I know there might be some code relying on this. If we REALLY want to support this case, we can always have a W_FloatObjectPreservingTheBits which cannot be put in a FloatIntegerListStrategy. W_FloatObjectPreservingTheBits will be created only by operations like struct.unpack, cffi.cast, etc. Not sure if it's worth the pain.
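The round trip above can be checked directly; this sketch uses a bytes literal and an explicit "<d" byte order so it also runs on Python 3 (the original thread is Python 2, and payload preservation is a platform behavior, not a language guarantee):

```python
import struct

raw = b"ABCDxx\xff\x7f"        # 8 bytes whose pattern happens to be a quiet NaN
x, = struct.unpack("<d", raw)  # x != x: it's a NaN...
y = struct.pack("<d", x)       # ...but packing it back
assert y == raw                # ...reproduces the exact bit pattern
```

A FloatIntStrategy that stored only the numeric value untagged would break this assertion for "special" NaN patterns, which is exactly the problem being discussed.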
Re: [pypy-dev] progress with numpy and removal of numpy.py
Hi,

On 15/10/13 12:45, matti picus wrote:
> and what about cloning the numpy repo into bitbucket/pypy to make it more
> of a "pypy owned" thing?

I think it's a good idea. I propose the following:

1) we move your repo to bitbucket/pypy/numpypy
2) we package numpypy, so that people can just do "pip install numpypy"
3) once numpypy is installed, we no longer require the ugly "import numpypy"; a simple "import numpy" will just work
4) for some time at least, we distribute a numpypy.py which, when imported, prints an error message explaining how to get the newer numpypy

What do you think?
Re: [pypy-dev] [pypy-commit] pypy fast_cffi_list_init: implement the fast-path for intstrategy and long[] only
Hi,

On 09/10/13 18:48, Carl Friedrich Bolz wrote:
> Hi Anto,
> just said it on IRC, just so that it doesn't get lost: I think
> module/_cffi_backend should use the generic interfaces and not touch the
> internals of listobject.py. It can just call space.listview_int and
> space.listview_float; they are a no-copy operation on int/float strategy
> lists.

indeed, that's a very good idea. I should have thought of it :)
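The no-copy property comes from the strategy pattern: when a list already uses the int strategy, listview_int can hand back the backing RPython list directly instead of building a copy. A toy model of the idea (class and attribute names are illustrative, not PyPy's actual internals):

```python
class IntStrategy(object):
    """Toy model of PyPy's int list strategy (names are illustrative)."""
    def listview_int(self, storage):
        return storage            # no-copy: return the backing list itself

class ObjectStrategy(object):
    """Generic strategy: not specialised for ints."""
    def listview_int(self, storage):
        return None               # caller must fall back to the slow path

class WList(object):
    # the single generic interface a module like _cffi_backend would use,
    # instead of poking at listobject.py internals
    def __init__(self, strategy, storage):
        self.strategy = strategy
        self.storage = storage
    def listview_int(self):
        return self.strategy.listview_int(self.storage)
```

The key point is identity, not equality: the consumer gets the very same list the strategy stores, so reading it is free.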
Re: [pypy-dev] ndarray cpyext api on the pypy-pyarray branch
Hi,

On 09/09/13 21:37, Matti Picus wrote:
> I reverted the changes I made to the pypy-pyarray branch that changed
> c-api functions like PyArray_NDIM(arr). The original code had no real
> answer to what happens if these are called when arr is not an ndarray.
> [cut]
> The discussion we had on IRC starts here
> http://www.tismer.com/pypy/irc-logs/pypy/pypy.2013-09-08.log.html#t22:15
> so if my explanation is unclear please read the log.

the discussion convinced me as well, so I think it's fine to leave things as they are now :)

ciao,
Anto
Re: [pypy-dev] [pypy-commit] jitviewer argparse-collect: (RichardN, Edd) Add the jitviewer path to PYTHONPATH automatically.
Hi Edd,

On 06/09/13 15:45, Edd Barrett wrote:
> +script_path = os.path.abspath(__file__)
> +pythonpath = os.path.dirname(os.path.dirname(script_path))
> +sys.path.append(pythonpath)
>
> Here we are appending to the path, not overriding it, hence this is safe
> for either method. Right?

yes, if you do setup.py develop those lines are both safe and pointless :) But I saw that you removed them in a later checkin, so no problem.

ciao,
Anto
Re: [pypy-dev] [pypy-commit] jitviewer argparse-collect: (RichardN, Edd) Add the jitviewer path to PYTHONPATH automatically.
Hi,

On 05/09/13 17:09, vext01 wrote:
> Log: (RichardN, Edd) Add the jitviewer path to PYTHONPATH automatically.
>
> diff --git a/bin/jitviewer.py b/bin/jitviewer.py
> --- a/bin/jitviewer.py
> +++ b/bin/jitviewer.py
> @@ -1,4 +1,10 @@
>  #!/usr/bin/env pypy
>  import sys
> +import os.path
> +
> +script_path = os.path.abspath(__file__)
> +pythonpath = os.path.dirname(os.path.dirname(script_path))
> +sys.path.append(pythonpath)

this looks wrong. I think that the jitviewer is supposed to be installed as a normal package inside the pypy distribution to work well. You should do:

    $ /path/to/pypy/bin/pypy /path/to/jitviewer/setup.py develop

this way, setuptools creates a link and the jitviewer package is installed in pypy even if it physically lives in the repo (which is convenient for developing).

ciao,
Anto
Re: [pypy-dev] [pypy-commit] pypy refactor-translator: Remove fork_before option (unused).
Hi Manuel,

did you actually kill support for this feature? I find it occasionally useful: e.g. when working on the JIT you can use --fork-before=pyjitpl and avoid annotating/rtyping the whole pypy interpreter every time you change something.

ciao,
Anto

On 04/09/13 12:13, Manuel Jacob wrote:
> Author: Manuel Jacob
> Branch: refactor-translator
> Changeset: r66781:57dd91f9ccd9
> Date: 2013-09-02 18:08 +0100
> http://bitbucket.org/pypy/pypy/changeset/57dd91f9ccd9/
>
> Log: Remove fork_before option (unused).
>
> diff --git a/rpython/config/translationoption.py b/rpython/config/translationoption.py
> --- a/rpython/config/translationoption.py
> +++ b/rpython/config/translationoption.py
> @@ -127,11 +127,6 @@
>                 default=False, cmdline=None),
>      BoolOption("countmallocs", "Count mallocs and frees", default=False,
>                 cmdline=None),
> -    ChoiceOption("fork_before",
> -                 "(UNIX) Create restartable checkpoint before step",
> -                 ["annotate", "rtype", "backendopt", "database", "source",
> -                  "pyjitpl"],
> -                 default=None, cmdline="--fork-before"),
>      BoolOption("dont_write_c_files",
>                 "Make the C backend write everyting to /dev/null. " +
>                 "Useful for benchmarking, so you don't actually involve the disk",
Re: [pypy-dev] [pypy-commit] pypy default: Enable inlining into the thread module so that Lock.acquire/release have a sane calling convention
Hi Alex,

I think that this commit should come with a test_pypy_c test as well.

ciao,
Anto

On 04/09/13 00:03, alex_gaynor wrote:
> Author: Alex Gaynor
> Branch:
> Changeset: r66780:a3e9a5394648
> Date: 2013-09-03 16:02 -0700
> http://bitbucket.org/pypy/pypy/changeset/a3e9a5394648/
>
> Log: Enable inlining into the thread module so that Lock.acquire/release
>      have a sane calling convention
>
> diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
> --- a/pypy/module/pypyjit/policy.py
> +++ b/pypy/module/pypyjit/policy.py
> @@ -109,7 +109,8 @@
>                         'posix', '_socket', '_sre', '_lsprof', '_weakref',
>                         '__pypy__', 'cStringIO', '_collections', 'struct',
>                         'mmap', 'marshal', '_codecs', 'rctime', 'cppyy',
> -                       '_cffi_backend', 'pyexpat', '_continuation', '_io']:
> +                       '_cffi_backend', 'pyexpat', '_continuation', '_io',
> +                       'thread']:
>          if modname == 'pypyjit' and 'interp_resop' in rest:
>              return False
>          return True
> diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py
> --- a/pypy/module/pypyjit/test/test_policy.py
> +++ b/pypy/module/pypyjit/test/test_policy.py
> @@ -45,6 +45,10 @@
>      from pypy.module._io.interp_bytesio import W_BytesIO
>      assert pypypolicy.look_inside_function(W_BytesIO.seek_w.im_func)
>
> +def test_thread():
> +    from pypy.module.thread.os_lock import Lock
> +    assert pypypolicy.look_inside_function(Lock.descr_lock_acquire.im_func)
> +
>  def test_pypy_module():
>      from pypy.module._collections.interp_deque import W_Deque
>      from pypy.module._random.interp_random import W_Random
[pypy-dev] pypy-pyarray branch
Hi Matti,

Romain and I reviewed the pypy-pyarray branch: we think there are a couple of issues to be solved before it can be merged, and we added some comments to your TODO list. Otherwise, the branch looks fine and useful :)

ciao,
Anto & Romain
Re: [pypy-dev] PyPy 2x slower using cpickle
Hello Eleytherios,

On 07/04/2013 08:12 AM, Antonio Cuni wrote:
> Il giorno 03/lug/2013 18:17, "Amaury Forgeot d'Arc" ha scritto:
>> This is because of I/O.
>> If I replace the file with a custom class which has an empty write()
>> method, pypy is twice faster than CPython.
>
> A few days ago I discovered that there is an easy optimization for this.
> If you look at how str2charp & friends are implemented, you see that we
> do an RPython loop and copy char by char. By contrast, things like string
> concatenation are implemented using memcpy and are much faster (like 3-4
> times, iirc). Sorry if I don't give a more precise pointer, but I'm on my
> mobile phone :-)

could you try to rerun your benchmark on the improve-str2charp branch please? The benchmarks on speed.pypy.org show some important speedups in e.g. twisted_tcp or raytrace_simple, which seem to contain a lot of write I/O, so it might help your case as well:

http://speed.pypy.org/comparison/?exe=1%2BL%2Bdefault%2C1%2BL%2Bimprove-str2charp&ben=1%2C34%2C27%2C2%2C25%2C3%2C46%2C4%2C5%2C41%2C42%2C22%2C44%2C6%2C39%2C7%2C8%2C45%2C23%2C24%2C9%2C10%2C47%2C48%2C49%2C50%2C51%2C11%2C12%2C13%2C40%2C14%2C15%2C35%2C36%2C37%2C38%2C16%2C52%2C54%2C55%2C53%2C56%2C28%2C30%2C32%2C29%2C33%2C17%2C18%2C19%2C20%2C43&env=1&hor=true&bas=1%2BL%2Bdefault&chart=normal+bars

ciao,
Anto
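The char-by-char vs. memcpy distinction can be sketched at the Python level (the real RPython versions operate on raw char buffers; this only illustrates the two copy styles being compared):

```python
def copy_bytewise(src, dst):
    # what the old str2charp did: one assignment per character,
    # an interpreted loop with per-item overhead
    for i, ch in enumerate(src):
        dst[i] = ch

def copy_bulk(src, dst):
    # what string concatenation does: a single memcpy-like slice copy
    dst[:len(src)] = src
```

Both fill a pre-sized buffer with the same bytes; the bulk variant is the style the improve-str2charp branch switches to.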
Re: [pypy-dev] PyPy 2x slower using cpickle
Il giorno 03/lug/2013 18:17, "Amaury Forgeot d'Arc" ha scritto:
> This is because of I/O.
> If I replace the file with a custom class which has an empty write()
> method, pypy is twice faster than CPython.

A few days ago I discovered that there is an easy optimization for this. If you look at how str2charp & friends are implemented, you see that we do an RPython loop and copy char by char. By contrast, things like string concatenation are implemented using memcpy and are much faster (like 3-4 times, iirc). Sorry if I don't give a more precise pointer, but I'm on my mobile phone :-)
Re: [pypy-dev] PYTHONPATH handling doesn't seem to match Python
On 06/29/2013 11:24 PM, Skip Montanaro wrote:
> [cut]
> that version of Python was executed. Accordingly,
> /opt/local/lib/python2.7/site-packages was in sys.path, as it should have
> been. It appears that the generated pypy-c wound up with that directory
> in its sys.path as well. I wasn't executing it from an installed
> location. I just set up an alias to execute it from the goal directory.
> OTOH, perhaps I should build it using /usr/bin/python:

Do you by chance see this warning message when you start your pypy?

    debug: WARNING: Library path not found, using compiled-in sys.path.
    debug: WARNING: 'sys.prefix' will not be set.
    debug: WARNING: Make sure the pypy binary is kept inside its tree of files.
    debug: WARNING: It is ok to create a symlink to it from somewhere else.
    'import site' failed

If so, it means that somehow pypy does not find its stdlib and it just uses the builtin sys.path. Note that if you run pypy from within the hg checkout, this should not happen.
Re: [pypy-dev] adding numpy test target to buildbot
On 06/28/2013 11:13 AM, Matti Picus wrote:
> On 06/28/2013 12:07 PM, Maciej Fijalkowski wrote:
>>> - buildbot has no support for git, I wrote a git_update function but it
>>> needs test
>> i don't believe you http://docs.buildbot.net/0.8.1/Git.html
> cool, thanks! I guess we have our own update_hg for historical reasons?

IIRC, the hg support in buildbot had some strange features and it was just easier to write our own function instead of trying to convince buildbot to do what we meant. The situation might have changed since then, I don't know.

ciao,
Anto
Re: [pypy-dev] fastjson module
Hi,

On 06/05/2013 07:26 AM, Maciej Fijalkowski wrote:
> Hi anto
>
> if this is for speeding up json, call the module _json, as CPython does
> (I don't care if the API is similar or not, as long as it's used by the
> json lib)

yes, it is for speeding up json, but I called it differently precisely because the API is different: basically, my _fastjson offers a drop-in replacement for "json.loads", while cpython's _json offers only some of the functions used to implement it.

I think it's better to name it differently because I can imagine there are programs around which do things like "from _json import scanstring" for their own purposes, and they would be broken by our _json-which-is-not-really-_json module.

What do the others think?
Re: [pypy-dev] Killing OOType? (was Re: Translating pypy on FreeBSD with CLI backend)
On 05/08/2013 11:27 PM, Alex Gaynor wrote:
> I agree with this, the abstraction doesn't really work well right now,
> there's way too much code duplication. If we seriously want to have an
> lltype/ootype distinction this should be redone from scratch (IMO).

Although I have an emotional attachment to that piece of code, I think that Alex is right.
Re: [pypy-dev] Pypy is slower than Python
On 05/01/2013 07:22 PM, Alex Gaynor wrote:
> Yes, we have a specialized map for 2 arguments, a specialized zip makes
> sense. (Or figuring out how to specialize that loop for N arguments,
> where N is ~smallish, so the inner loop is unrolled at app level; that's
> harder, but probably worthwhile in the long run.)

In general, it'd be very useful to have a way to say the equivalent of @unroll_safe at applevel, although then it could be misused very badly if you don't know exactly what you are doing.

I think that cfbolz once started a branch to give hints from applevel, but then he never finished. Is that correct?

ciao,
Anto
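A specialized two-argument zip is simply the generic loop with the argument count fixed, which gives the JIT a constant shape to trace; a plain-Python sketch (zip2 is an illustrative name, not an actual PyPy function):

```python
def zip2(a, b):
    # Two-argument fast path: the tuple shape and argument count are fixed,
    # so a tracing JIT can compile this loop directly, whereas the generic
    # zip() must iterate over an arbitrary sequence of sequences.
    result = []
    for i in range(min(len(a), len(b))):
        result.append((a[i], b[i]))
    return result
```

The N-argument version Alex mentions would generate (or unroll) one such loop per smallish N, which is the harder part.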
Re: [pypy-dev] [pypy-commit] pypy default: make looking up encodings free with the JIT
On 10/04/13 15:51, Alex Gaynor wrote:
> Hi guys,
> I just wrote a JIT test for this, I haven't actually run it since I
> don't have a local translation, however I'll kick the buildbot and
> review the results.

well, my point is that you should not commit a JIT optimization if you are not sure it actually improves the code. It's too easy to make a small mistake that renders it useless. I agree it's inconvenient, but what I usually do is a translation on tannit, check the generated code, and then commit both the change and the test_pypy_c test.

ciao,
Anto
Re: [pypy-dev] [pypy-commit] pypy default: make looking up encodings free with the JIT
Hi Alex,

could we have a test_pypy_c test for this please?

On 10/04/13 03:53, alex_gaynor wrote:
> Author: Alex Gaynor
> Branch:
> Changeset: r63186:c514bbc4c086
> Date: 2013-04-09 19:53 -0700
> http://bitbucket.org/pypy/pypy/changeset/c514bbc4c086/
>
> Log: make looking up encodings free with the JIT
>
> diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py
> --- a/pypy/module/_codecs/interp_codecs.py
> +++ b/pypy/module/_codecs/interp_codecs.py
> @@ -1,10 +1,18 @@
> +from rpython.rlib import jit
> +from rpython.rlib.objectmodel import we_are_translated
> +from rpython.rlib.rstring import UnicodeBuilder
> +
>  from pypy.interpreter.error import OperationError, operationerrfmt
>  from pypy.interpreter.gateway import interp2app, unwrap_spec, WrappedDefault
> -from rpython.rlib.rstring import UnicodeBuilder
> -from rpython.rlib.objectmodel import we_are_translated
> +
> +
> +class VersionTag(object):
> +    pass
>
>  class CodecState(object):
> +    _immutable_fields_ = ["version?"]
> +
>      def __init__(self, space):
>          self.codec_search_path = []
>          self.codec_search_cache = {}
> @@ -14,6 +22,7 @@
>          self.encode_error_handler = self.make_encode_errorhandler(space)
>          self.unicodedata_handler = None
> +        self.modified()
>
>      def _make_errorhandler(self, space, decode):
>          def call_errorhandler(errors, encoding, reason, input, startpos,
> @@ -86,9 +95,20 @@
>          self.unicodedata_handler = UnicodeData_Handler(space, w_getcode)
>          return self.unicodedata_handler
>
> +    def modified(self):
> +        self.version = VersionTag()
> +
> +    def get_codec_from_cache(self, key):
> +        return self._get_codec_with_version(key, self.version)
> +
> +    @jit.elidable
> +    def _get_codec_with_version(self, key, version):
> +        return self.codec_search_cache.get(key, None)
> +
>      def _cleanup_(self):
>          assert not self.codec_search_path
>
> +
>  def register_codec(space, w_search_function):
>      """register(search_function)
>
> @@ -115,11 +135,12 @@
>      "lookup_codec() should not be called during translation"
>      state = space.fromcache(CodecState)
>      normalized_encoding = encoding.replace(" ", "-").lower()
> -    w_result = state.codec_search_cache.get(normalized_encoding, None)
> +    w_result = state.get_codec_from_cache(normalized_encoding)
>      if w_result is not None:
>          return w_result
>      return _lookup_codec_loop(space, encoding, normalized_encoding)
>
> +
>  def _lookup_codec_loop(space, encoding, normalized_encoding):
>      state = space.fromcache(CodecState)
>      if state.codec_need_encodings:
> @@ -143,6 +164,7 @@
>              space.wrap("codec search functions must return 4-tuples"))
>      else:
>          state.codec_search_cache[normalized_encoding] = w_result
> +        state.modified()
>          return w_result
>      raise operationerrfmt(
>          space.w_LookupError,
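The trick in the diff is a standard version-tag pattern: the cache dict is mutable, but a lookup keyed on (key, version) is pure as long as the version object is replaced on every mutation, so the JIT may constant-fold it (that is what @jit.elidable declares in RPython). A plain-Python sketch of the pattern (VersionedCache is an illustrative stand-in, not PyPy code):

```python
class VersionTag(object):
    """A fresh object identity marks each generation of the cache."""

class VersionedCache(object):
    def __init__(self):
        self._cache = {}
        self._version = VersionTag()

    def lookup(self, key):
        # The indirection through (key, version) is what lets _lookup be
        # @jit.elidable in RPython: for a fixed pair the result can never
        # change, because any mutation swaps in a new version tag.
        return self._lookup(key, self._version)

    def _lookup(self, key, version):
        return self._cache.get(key, None)

    def store(self, key, value):
        self._cache[key] = value
        self._version = VersionTag()   # invalidate any folded lookups
```

Since codecs are registered rarely and looked up constantly, the version changes almost never and lookups become effectively free in JITted code.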
Re: [pypy-dev] 2.x -> 2 lib rename
On 04/03/2013 09:27 AM, Maciej Fijalkowski wrote:
> hello everyone
>
> I'm incredibly unhappy about the 2.7 -> 2 lib rename. It broke
> virtualenv, there is no recent virtualenv available and there is
> absolutely no good reason why we did that. I'm now spending tons of time
> trying to think how to update virtualenv to a TOT version a bit
> everywhere. I'm inclined to do a simple revert (or re-rename it)

+1, although to jump to the defence of whoever did the commit, it's not obvious that such a change breaks virtualenv, because there are no tests. Maybe we should set up a buildbot to run virtualenv's tests with the nightly pypy? I didn't do that when I added virtualenv support for the simple reason that at that time virtualenv had no tests, but maybe the situation has improved nowadays?

ciao,
Anto
Re: [pypy-dev] why doesn't buildbot master sort (after I told it to)
On 03/21/2013 11:38 PM, Matti Picus wrote:
> fijal did restart the buildbot, it didn't help.

then I fear that the best is to do some good old debugging with print statements and pdb.set_trace(). If you login to cobra, you can do the following:

    $ ssh buildmas...@cobra.cs.uni-duesseldorf.de
    $ cd pypy-buildbot
    $ less README  # :-)
    $ cd master
    $ make stop
    $ make debug

the nice thing of "make debug" instead of "make start" is that it does not redirect stdout to a logfile, which means that you can use pdb and prints to understand what's going wrong. Once you have fixed it, remember to commit/push your changes and to "make start" the master again.

ciao,
Anto
Re: [pypy-dev] why doesn't buildbot master sort (after I told it to)
On 03/21/2013 05:01 PM, matti picus wrote:
> I know this is not all that important, but... It annoyed me that there
> is so much stuff on this page http://buildbot.pypy.org/nightly so I
> changed our buildbot code to sort by filesystem mtime, and put trunk on
> top. I tried it out by writing tests, and even installing a debug
> buildbot, and creating some directories in my local ~/nightly. It works
> locally but not on buildbot. Any ideas why?

did you restart the buildmaster on cobra?
Re: [pypy-dev] Slow int code
On 03/04/2013 09:42 AM, Roger Flores wrote:
> On March 3, 2013 2:20 AM, Carl Friedrich Bolz wrote:
>> Are you *sure* you are running on a 64 bit machine?
>
> Sure? No. I assumed it's 64bit pypy because it was generating x86_64
> instructions. How would you check for sure? uname reports x86_64 on the
> machine I built pypy on.
>
>     $ pypy --version
>     Python 2.7.3 (42c0d1650cf4, Feb 23 2013, 01:53:42)
>     [PyPy 2.0.0-beta1 with GCC 4.6.3]
>
> That doesn't show the machine size. pypy --info is interesting but
> doesn't help either

just a wild guess: is it possible that you generated pyc files with a 32bit version of pypy and then imported them on a 64bit one? For example, suppose you have this foo.py:

    def foo():
        return 2147483648

    print type(foo())

if you import it on 32bit, it prints 'long' and generates a pyc file. If you then import 'foo' on 64bit, it still prints 'long', but if you remove the pyc and import again, it prints 'int'. (This happens because 2147483648 is stored as a long inside the marshalled pyc file.)

ciao,
Anto
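The mechanism behind this is that marshal (the .pyc serializer) records the concrete type of each constant, so whatever type the compiling interpreter chose is exactly what comes back on load, regardless of the word size of the loading interpreter. A small demonstration (Python 2 is where the int/long split matters; on Python 3 there is a single int type, so only the value round-trip is visible):

```python
import marshal

# On 32-bit Python 2, 2147483648 overflows a machine int and is compiled
# as a 'long'; marshal stores that type tag in the .pyc, so even a 64-bit
# interpreter loading the same .pyc still sees a 'long'.
value = 2147483648
roundtripped = marshal.loads(marshal.dumps(value))
assert roundtripped == value
assert type(roundtripped) is type(value)   # the concrete type survives, too
```

Deleting the .pyc forces recompilation, which is why the 64-bit interpreter then prints 'int'.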
Re: [pypy-dev] pypy services hosting / action needed!
Hi,

On 12/06/2012 05:43 PM, Armin Rigo wrote:
> Right now, we have extra servers sitting around not doing much. I'm
> for moving all three services there. Fijal and me have access, and
> I'm sure anyone else that needs it would have access too.

which servers?
Re: [pypy-dev] Splitting RPython and PyPy
On 10/21/2012 09:28 PM, Ronan Lamy wrote:
>>> * testrunner and dotviewer can become independent packages
> +1 (well, I don't really have an opinion on testrunner). IMO, this
> implies that FunctionGraph should lose its view/show method.

nothing stops us from having a .show() method which tries to lazily import dotviewer and complains in case it's not there. .show() is too damn useful to kill it, IMHO :-)

ciao,
Anto
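The lazy-import idea can be sketched like this (lazy_show, the render callback, and the module-name parameter are hypothetical stand-ins; the real FunctionGraph.show would call into dotviewer's own entry points):

```python
import importlib

def lazy_show(render, module_name="dotviewer"):
    # Import the viewer only when .show() is actually called, so the core
    # package keeps no hard dependency on it; complain clearly otherwise.
    try:
        viewer = importlib.import_module(module_name)
    except ImportError:
        raise RuntimeError(
            "%s is not installed; install it to use .show()" % module_name)
    return render(viewer)
```

This keeps dotviewer fully optional while preserving the convenience of calling .show() directly on a graph.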