[pypy-dev] Re: Help me understand why PyPy is so slow in my benchmark

2024-09-18 Thread Armin Rigo via pypy-dev
Hi,

On Wed, 18 Sept 2024 at 23:06, CF Bolz-Tereick via pypy-dev wrote:
> Can you share how you are running this function? I tried a few variants, and 
> pypy is often faster than CPython on my attempts, so the rest of your code is 
> necessary to find out what's wrong.

I would call that code THE example of where an inlining, tracing JIT
doesn't work.  It's recursive, with a dozen unpredictable places where
the recursion can occur.  Unless the heuristics are good enough to
stop *all* inlinings of this function inside itself, we just get an
explosion of traced paths, right?
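For illustration, here is a hypothetical sketch (not the benchmark from the thread) of the shape of code being described: a recursive function with several distinct call sites from which the recursion can restart, so a tracing JIT that inlines the function into itself quickly multiplies the number of paths to trace.

```python
# Hypothetical example: a recursive tree evaluator.  Each arithmetic
# case contains one or two further recursive calls, so inlining ev()
# into itself produces a different trace for every combination of
# paths taken through the branches.
def ev(node):
    op = node[0]
    if op == "num":
        return node[1]
    if op == "add":
        return ev(node[1]) + ev(node[2])   # two recursion sites
    if op == "mul":
        return ev(node[1]) * ev(node[2])   # two more
    if op == "neg":
        return -ev(node[1])                # and another one
    raise ValueError("unknown op: %r" % (op,))

tree = ("add", ("num", 1), ("mul", ("num", 2), ("neg", ("num", 3))))
print(ev(tree))  # 1 + 2 * (-3) = -5
```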

Armin
___
pypy-dev mailing list -- pypy-dev@python.org
To unsubscribe send an email to pypy-dev-le...@python.org
https://mail.python.org/mailman3/lists/pypy-dev.python.org/
Member address: arch...@mail-archive.com


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-02-29 Thread Armin Rigo
Hi Logan,

On Thu, 29 Feb 2024 at 08:37, Logan Chien  wrote:
> IIUC, the difference is that guard_not_invalidated is at a different location.
>
> But I don't understand why the backend can affect the logs in the 
> 'jit-log-opt-' tag.

There are a few ways to influence the front-end: for example, the
"support_*" class-level flags.  Maybe the front-end either did or
skipped a specific optimization compared with x86, and the result is
only that a 'guard_not_invalidated' is present or not (and once it is
emitted, it's not emitted again a few instructions below).  Or there
are other reasons.  But as a general rule, we can mostly ignore the
position of 'guard_not_invalidated'.  It should have no effect except
in corner cases.

> Also, I found that reduce_logical_and (failed) and reduce_logical_xor 
> (passed) are very different.
>
> Is there more information on the details of this test?  Any ideas to debug 
> this test case are very welcomed!  Thanks.

A possibility is that some of these tests are flaky in the sense of
passing half by chance on x86---I vaguely remember having some
troubles sometimes.  It's sometimes hard to write tests without
testing too many details.  Others may have better comments about them.
Generally, it's OK to look at what you got and compare it with what
the test expects.  If you can come up with a reason for why what you
got is correct too, and free of real performance issues, then that's
good enough.

A bientôt,
Armin


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-02-19 Thread Armin Rigo
Hi Logan,

On Tue, 20 Feb 2024 at 05:08, Logan Chien  wrote:
> > This should just be #defined to do nothing with Boehm, maybe in 
> > rpython/translator/c/src/mem.h
>
> With this change and a few RISC-V backend fixes (related to 
> self.cpu.vtable_offset), I can build and run a JIT+BoehmGC PyPy.

Cool!  I also got a pull request merged into the main branch with this
change, and it does indeed fix Boehm builds.

> This configuration (JIT+BoehmGC) can pass test_tokenize and test_zipfile64 
> (from lib_python_tests.py).
>
> Thus, my next step will focus on the differences between JIT+BoehmGC and 
> JIT+IncminimarkGC.

A problem that came up a lot in other backends is a specific
instruction that the backend emits with the wrong registers.  When you
run into the bad case, the emitted code reuses a register *before*
reading it, even though later code assumes that it still contains its
old value.  It's entirely dependent on register allocation, and if you
run with Boehm then the sequence of instructions is slightly
different, which might be why the bug doesn't show up there.  If you
get two failures with incminimark and none with Boehm, then it sounds
more likely that the case involves one of the incminimark-only
constructions---but it's also possible that the bug is somewhere
unrelated and it's purely bad luck...


Armin


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-02-18 Thread Armin Rigo
Hi Logan,

On Mon, 19 Feb 2024 at 05:02, Logan Chien  wrote:
>  2890 | OP_GC_INCREASE_ROOT_STACK_DEPTH(l_v498959, /* nothing 
> */);

Ah, yet another missing macro.  This should just be #defined to do
nothing with Boehm, maybe in rpython/translator/c/src/mem.h in the
section "dummy version of these operations, e.g. with Boehm".

> Just to be sure, is the following command correct?
>
> python2.7 ./pytest.py rpython/jit/backend/test/test_zll_stress_0.py -s -v

Yes, that's correct.


A bientôt,
Armin


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-02-16 Thread Armin Rigo
Hi Logan,

On Fri, 16 Feb 2024 at 07:46, Logan Chien  wrote:
> pypy_module_cpyext.c:125333:80: error: expected expression before ')' 
> token
> 125333 | OP_GC_RAWREFCOUNT_CREATE_LINK_PYOBJ(l_v451927, 
> l_v451928, /* nothing */);

Ah, I guess that we are missing a dependency.  To compile with Boehm,
you need to avoid this particular GC-related feature that is used by
the cpyext module.  Try to translate with the option
``--withoutmod-cpyext``.

Does the equivalent of
`pypysrc\rpython\jit\backend\x86\test\test_zrpy_gc.py` pass on your
backend?  I guess it does, and so does a long-running
`test_zll_stress_*.py`---but maybe try to run `test_zll_stress_*.py`
for even longer, it can often eventually find bugs if they are really
in the JIT backend.

If it doesn't help, then "congratulations", you are in the land of
debugging the very rare crash with gdb.  For this case, it's a matter
of going backward in time from the crash.  If the crash is nicely
reproducible, then you have a chance of doing this by setting the
right breakpoints (hardware breakpoints on memory change, for example)
and restarting the program and repeating.  If it is too random, then that
doesn't work; maybe look for what reverse debuggers are available
nowadays.  Last I looked, on x86-64, gdb had a built-in but useless
one (only goes backward a little bit), but there was lldb which
worked, and udb was still called undodb---but my guess would be that
none of that works on RISC-V.  If all else fails, I remember once
hacking around to dump a huge amount of data (at least every single
memory write into GC structures) (but that was outside the JIT;
logging from generated assembly is made harder by the fact that the
log calls must not cause the generated code to change apart from the
calls).  It would let me know exactly what happened---that was for one
bug that took me 10 days of hard work, my personal best :-/


A bientôt,

Armin


[pypy-dev] Re: Contribute a RISC-V 64 JIT backend

2024-01-09 Thread Armin Rigo
Hi Logan,

On Tue, 9 Jan 2024 at 04:01, Logan Chien  wrote:
> Currently, I only target RV64 IMAD:
>
> I - Base instruction set
> M - Integer multiplication
> A - Atomic (used by call_release_gil)
> D - Double precision floating point arithmetic
>
> I don't use the C (compress) extension for now because it may complicate the 
> branch offset calculation and register allocation.
>
> I plan to support the V (vector) extension after I finish the basic JIT 
> support.  But there are some unknowns.  I am not sure whether (a) I want to 
> detect the availability of the V extension dynamically (thus sharing the same 
> pypy executable) or (b) build different executables for different 
> combinations of extensions.  Also, I don't have a development board that 
> supports the V extension.  I am searching for one.
>
> Another remote goal is to support RV32IMAF (singlefloats) or RV32IMAD.  In 
> RISC-V, 32-bit and 64-bit ISAs are quite similar.  The only difference is on 
> LW/SW (32-bit) vs. LD/SD (64-bit) and some special instructions for 64-bit 
> (e.g. ADDW).  I isolated many of them into load_int/store_int helper 
> functions so that it will be easy to swap implementations.  However, I am not 
> sure if we have to change the object alignment in `malloc_nursery*` (to 
> ensure we align to multiples of `double`).  Also, I am not sure whether it is 
> common for RV32 cores to include the D extension.  But, anyway, RV32 will be 
> a lower priority for me because I will have to figure out how to build a RV32 
> root filesystem first (p.s. Debian doesn't (officially) support RV32 as of 
> writing).

Cool!  Here are a few thoughts I had when I looked at some RISC-V
early documents long ago (warning, it may be outdated):

Yes, not using the "compress" extension is probably a good approach.
It looks like something a compiler might do, but it's quite a bit of
implementation work, and it's unclear whether it would help here anyway.

About the V extension, I'm not sure it would be helpful; do you plan
to use it in the same way as our x86-64 vector extension support?  As
far as I know this has been experimental all along and isn't normally
enabled in a standard PyPy.  (I may be wrong about that.)

Singlefloats: we don't do any arithmetic on singlefloats with the JIT,
but it has got a few instructions to pack/unpack double floats into
single floats or to call a C-compiled function with singlefloat
arguments.  That's not optional, though I admit I don't know how a C
compiler compiles these operations if floats are not supported by the
hardware.  But as usual, you can just write a tiny C program and see.
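At Python level, the lossy double-to-singlefloat round-trip that those pack/unpack instructions implement can be illustrated with the struct module (an illustration only, not the JIT's actual mechanism):

```python
import struct

x = 1.1  # not exactly representable in 32-bit precision
# Pack as a C 'float' (4 bytes), then read it back as a double:
single = struct.unpack("f", struct.pack("f", x))[0]
print(single == x)             # False: precision was lost in the cast
print(abs(single - x) < 1e-6)  # True: the rounding error is tiny
```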

I agree that RV32 can be a more remote goal for now.  It should
simplify a lot of stuff if you can just assume a 64-bit environment.
Plus all the other points you mention: the hardware may not support
doubles, and may not be supported by Debian...


A bientôt,

Armin Rigo


[pypy-dev] Re: Moving to github

2023-12-28 Thread Armin Rigo
Hi Matti,

On Thu, 28 Dec 2023 at 09:21, Matti Picus  wrote:
> Now that 7.3.14 has been released, I would like to move the canonical
> repo for pypy and rpython to github. Reasons:

+1 from me for the reasons you describe!


A bientôt,

Armin


[pypy-dev] Re: How do I compile pypy with debug symbols?

2023-12-02 Thread Armin Rigo
Hi,

On Sat, 2 Dec 2023 at 04:14,  wrote:
> in pypy_g_ArenaCollection_rehash_arena_lists. Unfortunately, I can't
> see the rest of the call stack, so I am trying to compile a debug
> version of pypy.
>
> I followed the instructions here: https://doc.pypy.org/en/latest/build.html
>
> I'm at the step where it tells you to run:
>
> make lldebug or make lldebug0, and I'm realizing there is no such target. 
> Please help!

This paragraph in the instructions is not talking about the top-level
Makefile.  It is talking about the Makefile generated along all the
generated C files in the temporary directory, in
/tmp/usession-*/testing_1/.


Armin Rigo


[pypy-dev] Re: Question about pypy rpython and jitlogs

2022-09-24 Thread Armin Rigo
Hi,

On Sat, 24 Sept 2022 at 20:31, Matti Picus  wrote:
> I think you are looking for the JIT "threshold" option [1], which can be
> specified as
>
>
> pypy --jit threshold=200
>
>
> to get the JIT to consider a loop "hot" after it has been hit 200 times.
> The default is 1000.

Note that doing so will change the performance characteristics of your
program.  If you just want to understand why short-running programs
don't produce any jit log at all, the answer is simply that
short-running programs never reach the threshold at which the JIT
kicks in and are instead executed fully on the interpreter.  This is
expected, and it's how JITs work in general.  PyPy's JIT has a
threshold of 1000, which means that after a loop has run 1000 times,
it spends a relatively large amount of time JITting and optimizing
that loop in the hope that it will pay off in future execution time.
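A toy model of this counter-based behaviour (hypothetical names, not PyPy's real internals): each loop header carries a counter, and compilation is triggered only once the counter crosses the threshold, which a short-running program never reaches.

```python
THRESHOLD = 1000          # PyPy's default for loops
counters = {}
compiled = set()

def hit(loop_id):
    # Called each time execution passes a loop header in the interpreter.
    counters[loop_id] = counters.get(loop_id, 0) + 1
    if counters[loop_id] >= THRESHOLD and loop_id not in compiled:
        compiled.add(loop_id)  # here the real JIT would trace and compile

for _ in range(1500):
    hit("hot-loop")       # crosses the threshold: gets compiled
for _ in range(200):
    hit("short-loop")     # a short-running program: never compiled

print(sorted(compiled))   # ['hot-loop']
```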


A bientôt,

Armin


[pypy-dev] Re: CFFI docs

2022-07-21 Thread Armin Rigo
Hi Alex!

On Thu, 21 Jul 2022 at 02:08, Alex Martelli via pypy-dev wrote:
> credible examples, esp. one setting CFFI head-to-head against ctypes (but 
> comparisons with cython and the API would be fine too -- IF I could figure 
> out how to define completely new Python types in CFFI, which so far escapes 
> me).

CFFI is different from ctypes, and also different from the CPython C
API, and also different from Cython.  You can't always write a
straightforward line-by-line comparison for any two of these four
things.

The main purpose of CFFI is to expose an existing C API to Python, a
bit like ctypes but focusing on the level of C instead of on the
details of the system's ABI.  You can also write some extra C code and
expose that to Python as well.  That's it: you can't do things like
define new Python types in C, or even write a C function that accepts
an arbitrary Python object.  Instead, you typically write a pythonic
layer and this layer internally uses CFFI but doesn't expose it
further to the rest of the program.  This layer may do things like
define its own Python classes to give a more object-oriented feeling
to the existing C API; or it can wrap and unwrap arbitrary Python
objects inside "handles" which can be passed to C and back as black
boxes.  You can also write regular Python functions and expose them
for the C code to call.  (This is generally useful, but is also the
basis for how embedding works with CFFI.)
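The "handles" technique mentioned above is directly supported by CFFI's `ffi.new_handle()` / `ffi.from_handle()`; a minimal sketch (requires the `cffi` package to be installed):

```python
import cffi

ffi = cffi.FFI()
obj = {"answer": 42}          # an arbitrary Python object
handle = ffi.new_handle(obj)  # an opaque 'void *' cdata; keeps obj alive
# C code could store this void* as a black box and later hand it back
# unchanged; from_handle() then recovers the exact same Python object:
back = ffi.from_handle(handle)
print(back is obj)  # True
```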

CFFI is not really suited for *all* use cases.  For example, writing a
custom data structure that can be used to store a large number of
arbitrary Python objects: while this is in theory possible, in
practice you get large overheads.  Or if all you want is a very
compact Python class that can store two fixed-size integers.

> If asking for your collaboration is too much, is there at least a way to get 
> your OK about reusing, with small adaptations, some of the excellent examples 
> you give on CFFI's docs?

Sure.


A bientôt,

Armin


[pypy-dev] Re: module pyaesni on pypy

2022-07-01 Thread Armin Rigo
Hi,

On Fri, 1 Jul 2022 at 01:32, Dan Stromberg  wrote:
> It's probably easiest to try it and see.
>
> But it appears to have assembly language in it, so likely not.

Using assembly language doesn't make it less likely to work, as
long as the interactions with the CPython C API are written in the
usual C style.



Armin Rigo


Re: [pypy-dev] PyCodeObject incompatibility

2022-02-10 Thread Armin Rigo
Hi,

On Thu, 10 Feb 2022 at 14:20, Carl Friedrich Bolz-Tereick  wrote:
> are doing with them? But yes, as Armin wrote, accessing the .co_*
> attributes with PyObject_GetAttrString is an approach!

Oops, sorry.  Python 3 renamed various attributes to use the
double-underscore convention, like on function objects, but skipped
code objects for some reason.  So I meant
`PyObject_GetAttrString(code, "co_consts")`.
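From C, `PyObject_GetAttrString` is the same operation as `getattr` at Python level, so the attribute names involved can be checked quickly from Python itself:

```python
# Code objects keep the historical co_* names (no double underscores):
code = compile("x = 3", "<demo>", "exec")
print(code.co_firstlineno)     # 1
print(3 in code.co_consts)     # True: the constant 3 is in co_consts
print(code.co_stacksize >= 1)  # True
```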



Armin
___
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev


Re: [pypy-dev] PyCodeObject incompatibility

2022-02-10 Thread Armin Rigo
Hi,

On Thu, 10 Feb 2022 at 14:03, Dmitry Kovner  wrote:
> Hello! I'm trying to build a low-level C API extension of cPython to be used 
> in PyPy. The extension extensively uses some fields of PyCodeObject 
> (https://github.com/python/cpython/blob/f87e616af038ee8963185e11b96841c81e8ef15a/Include/code.h#L23):
>  co_firstlineno, co_stacksize, co_consts and so on. However, it seems these 
> fields are not defined in the similar structure of the PyPy  implementation: 
> https://foss.heptapod.net/pypy/pypy/-/blob/branch/py3.8/pypy/module/cpyext/include/code.h#L7.
>  Is it possible to get values of these fields using PyPy C API somehow?

Yes, it's a limitation of PyPy not to expose these fields.  If you
want to write portable code that always works, the simplest is to call
`PyObject_GetAttrString(code, "__consts__")` etc.  Remember that you
need to call `Py_DECREF()` on the result at some point.


A bientôt,

Armin.


Re: [pypy-dev] Running PyPy on top of CPython

2022-01-10 Thread Armin Rigo
Hi,

On Mon, 10 Jan 2022 at 15:56, M A  wrote:
> I think you are confused. I just read in PyPy's documentation that PyPy could 
> be run from CPython. This sounds like something that could help save me time 
> by seeing if my changes work. I am not sure why you think I am ignoring the 
> tests. Yes, I have tried them. I am seeing if there is a more efficient way of 
> trying out my code without having to wait a long time for PyPy to recompile.

Sorry for not explaining myself clearly.  I've been trying all along
to tell you that you don't need to recompile PyPy, ever.  As long as
the tests I listed are not all passing, there is basically no point
(except that it feels good to see PyPy half-working instead of not
working at all, of course---but don't try to debug that).

The documentation says "PyPy can be run on top of CPython": that's
almost what all the tests are doing.  They run not the whole PyPy, but
instead some small example RPython code (sometimes written as RPython
in the test, sometimes at a lower level).  The point is that they test
the JIT backend by running it as normal Python code, on top of
CPython.  When working on the JIT backend, you don't want to run the
whole PyPy on top of CPython running with the JIT; while possible in
theory, it is far too slow to be practical.  Instead, run the tests,
which test the same thing but with a small bit of RPython code instead
of the many-thousand-of-lines-of-code of PyPy.

Let me try to explain myself in a different way.
What we're trying to do here is fix a JIT compiler emitting machine
code, at the level of RPython instead of Python.  That means that any
error is likely to cause not just a problem for some specific Python
function which you can find by running Python code; instead, bugs
cause random hard-to-debug crashes that show up at half-random
points, with no obvious one-to-one correspondence to what Python code
is running.  This could be caused for example by the wrong registers
being used or accidentally overwritten in some cases, leading to a
nonsense value propagating further, until the point of crash.  Most
tests listed above check all basic cases and problems we encountered
so far.  The last test produces and runs random examples, in an
attempt to find the rare random issues.  It has been the case
that this last test found a remaining bug after running for hours, but
I think that after half a day we can conclude that there is no bug
left to be found.  In the past, there were very rare bugs that went
uncaught by random testing too; these were a real pain to debug---I
once spent more than two weeks straight on a pair of them, running
"gdb" inside a translated PyPy, hacking at the generated C code to
dump more information to try to figure it out.  But these bugs were
more than just in the JIT backend (e.g. some interaction between the
JIT and the GC), and they are fixed now.  I'm telling this story to
explain why I do not recommend going down that path for the simpler
bugs that are already caught by the tests!

So, once more, I recommend working on this by running the tests, and
fixing them, until all tests pass.  Once all tests pass (and not
before) you can try to translate the whole PyPy, and at this point it
will likely work on the first try.

Sorry for not managing to get this message across to you in the past.
I hope I have done so now.


A bientôt,

Armin.


Re: [pypy-dev] Running PyPy on top of CPython

2022-01-10 Thread Armin Rigo
Hi,

On Mon, 10 Jan 2022, 3:31 AM M A  wrote:

> Well I am developing for PyPy and was hoping to try my code out without
> having to build PyPy first.
>

I tried twice to tell you how we develop PyPy in the issue: by running
tests. You seem to be ignoring that. Sorry but I will ignore further
requests from you until you acknowledge you have at least tried what I
recommend first.

Armin



Re: [pypy-dev] Installation layout of the PyPy 3.8 Fedora package

2021-12-02 Thread Armin Rigo
Hi,

On Thu, 2 Dec 2021 at 21:10, Michał Górny  wrote:

> > 4) The /usr/bin/libpypy3-c.so file is *not* namespaced and seems misplaced
>
> TBH I've never really understood the purpose of this file.
> We've stopped using it at some point and nothing really changed for us.

It is needed for "embedding" situations, which nowadays means
https://cffi.readthedocs.io/en/latest/embedding.html.  If we want to
go the full CPython way, we need to rename and move
"$localdir/libpypy3-c.so" to something like "/usr/lib/libpypy38.so"
and have /usr/bin/pypy3.8 be a program that is linked to
"libpypy38.so" instead of "$localdir/libpypy3-c.so".  This would have
consequences for my own habits---e.g. the executable would no longer
find its .so if it lives simply in the same directory.  Arguably not a
big issue :-)  A freshly translated pypy would not work without
LD_LIBRARY_PATH tricks, just like a freshly compiled CPython does not.
The CPython solution to that annoyance is to compile statically by
default, if no option is passed to ./configure; you need to give the
--enable-shared option explicitly, and all distributions do.  So if we
want to copy all of that for PyPy, we could.

(Note that the CPython explanation is based on my 2.7 knowledge and
may be outdated.)


A bientôt,

Armin.


Re: [pypy-dev] Instruction 320

2021-10-27 Thread Armin Rigo
Hi,

On Wed, 27 Oct 2021 at 20:09, M A  wrote:
> Hi, would anyone know what instruction/opcode 320 does? I'm in the file 
> pyopcode.py tracing a problem to  dispatch_bytecode(). The problem I have 
> encountered happens when next_instr and self.last_instr are both equal to 
> 320. I have tried looking at the file opcode.py. There was no mention of 320 
> anywhere. Any hints or help would be great.

`next_instr` is not the instruction itself, it is the offset inside
your bytecode string...

Armin


Re: [pypy-dev] Instruction 320

2021-10-27 Thread Armin Rigo
Hi,

On Thu, 28 Oct 2021 at 00:28, M A  wrote:
> That could be it. If that is the case how would I take apart the argument 
> from the opcode?

On recent Python 3 versions each instruction is two bytes in length,
the lower byte being the opcode and the higher byte being a (possibly
always zero) argument.  The value 320 is equal to opcode 64 and higher
byte 1.

Armin


Re: [pypy-dev] Help with asmmemmgr.py:_copy_to_raw_memory()

2021-10-20 Thread Armin Rigo
Hi,

On Tue, 19 Oct 2021 at 20:47, M A  wrote:
> [translation:ERROR] Exception: A function calling locals() is not RPython.  
> Note that if you're translating code outside the PyPy repository, a likely 
> cause is that py.test's --assert=rewrite mode is getting in the way.  You 
> should copy the file pytest.ini from the root of the PyPy repository into 
> your own project.

Hard to know for sure, but maybe you've put these lines inside a
function that is getting translated?  Put them at module level
instead.

Also note that rffi.llexternal() probably needs an argument to tell it
not to release the GIL.  This is necessary in the backend because
there are some assumptions that this code generation and patching is
atomic.  If you forget, you should get another clean error, though.
In this case the safest is to say ``rffi.llexternal(...,
_nowrapper=True)``.


A bientôt,

Armin.


Re: [pypy-dev] Help with asmmemmgr.py:_copy_to_raw_memory()

2021-10-12 Thread Armin Rigo
Hi,

On Wed, 13 Oct 2021 at 04:08, M A  wrote:
> Hi guys. I am having a problem with a function called _copy_to_raw_memory() 
> in the file rpython/jit/backend/llsupport/asmmemmgr.py. After building PyPy 
> there is always a crash in this function. Here is a partial backtrace I 
> captured:
>
> * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS 
> (code=2, address=0x10410)
> frame #0: 0x000100c3a8b4 
> libpypy-c.dylib`pypy_g_InstrBuilder__copy_to_raw_memory + 268
>   * frame #1: 0x000100bfec98 
> libpypy-c.dylib`pypy_g_InstrBuilder_copy_to_raw_memory + 40
> frame #2: 0x000100c3efd4 
> libpypy-c.dylib`pypy_g_InstrBuilder_materialize + 232
> frame #3: 0x000100c047e4 
> libpypy-c.dylib`pypy_g_AssemblerARM64__build_failure_recovery + 1208
> frame #4: 0x000100c3b5ec 
> libpypy-c.dylib`pypy_g_BaseAssembler_setup_once + 80
> frame #5: 0x000100d31668 
> libpypy-c.dylib`pypy_g_compile_and_run_once___rpython_jit_metainterp_ji_29 + 
> 364
> frame #6: 0x000100cd3ed4 
> libpypy-c.dylib`pypy_g_bound_reached__star_4_8 + 740
> frame #7: 0x000100b66510 libpypy-c.dylib`pypy_g_portal_29 + 104
> frame #8: 0x000100ca7804 
> libpypy-c.dylib`pypy_g_ll_portal_runner__pypy_objspace_std_typeobject_W_1 + 
> 344
>
>
> With the help of print() statements I was able to find the exact line that 
> was causing the problem. It is this:
>
> dst[j] = block.data[j]
>
> I have tried printing the values for these variables but that doesn't 
> compile. I can't be sure the two variables are set correctly. Using type() on 
> these two variables also doesn't compile. Even comparing the two variables to 
> None doesn't work. How can I figure out what is wrong with this code? Please 
> let me know if you need more information. Thank you.

This is the function that copies the generated machine code into the
special memory area that was allocated to be read-write-executable.
What is likely occurring here is that on your particular machine, when
we ask the OS to get a read-write-executable piece of memory, it
doesn't do that.  But instead of returning an error, it returns a
valid page of memory that is apparently not writeable.  The process
then crashes, as you have found, when trying to write to it.

You need to look up if your OS imposes strange non-POSIX requirements
on the process.  Maybe
https://developer.apple.com/documentation/apple-silicon/porting-just-in-time-compilers-to-apple-silicon
is a good starting point.
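What the backend asks the OS for can be reproduced in a few lines of Python: an anonymous mapping that is readable, writable and executable at the same time.  On stock Linux this sketch succeeds; on Apple Silicon macOS a plain RWX mapping is refused or left unwritable unless the MAP_JIT machinery described in that document is used.

```python
import mmap

# Request a read-write-executable anonymous page (Unix-only flags):
page = mmap.mmap(-1, 4096,
                 prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
page[:4] = b"\xc3\x90\x90\x90"  # write a few bytes of "machine code"
first = bytes(page[:1])
page.close()
print(first == b"\xc3")  # True if the page was really writable
```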


A bientôt,

Armin Rigo


Re: [pypy-dev] ftbs pypy3 on alpine/musl

2021-10-12 Thread Armin Rigo
Hi Thomas,

On Tue, 12 Oct 2021 at 23:08, Thomas Liske  wrote:

> Thanks for pointing me in the right direction! Although I don't know how
> it works at all, I've created a small patch to suppress the
> std(in|err|out) declaration and got pypy3 to compile :-)

Thanks for the patch!  We can simply pass the "declare_as_extern" flag
from rfile.py and get the same effect cleanly.  I've checked it in (so
far it's only in the "default" branch, but it is merged to "py3.8"
regularly).

> Should I put the patches required to make pypy musl compatible at some
> issue tracker or will ${someone} consider to take them from the mailing
> list?

In this case it's not necessary any more, but in general, yes, we like
patches to live on https://foss.heptapod.net/pypy/pypy/-/issues/ .


A bientôt,

Armin Rigo


Re: [pypy-dev] ftbs pypy3 on alpine/musl

2021-10-12 Thread Armin Rigo
Hi Thomas,

On Tue, 12 Oct 2021 at 00:33, Thomas Liske  wrote:
> [translation:ERROR] CompilationError: CompilationError(err="""
> data_pypy_module_cpyext.c:25256:3: warning: initialization of 'void
> (*)(Signed)' {aka 'void (*)(long int)'} from incompatible pointer type
> 'void (*)(int)' [-Wincompatible-pointer-types]
> 25256 |   set_Py_DebugFlag, /* 0.value */
>   |   ^~~~
> data_pypy_module_cpyext.c:25256:3: note: (near initialization for
> 'pypy_g_array_25832.a.items[0].o_value')

This is a warning (one which we should look into, and which probably
shows up everywhere).  The log contains only one error:

[platform:Error] ../module_cache/module_15.c:558:14: error:
conflicting type qualifiers for 'stdout'
[platform:Error]   558 | extern FILE* stdout;
[platform:Error]   |  ^~
[platform:Error] /usr/include/stdio.h:61:20: note: previous
declaration of 'stdout' was here
[platform:Error]61 | extern FILE *const stdout;
[platform:Error]   |^~

Responsible for that is rpython/rlib/rfile.py, whose create_stdio()
function is called on pypy3 but not on pypy2.  Maybe we can add some
hack to avoid entirely the "extern" above, which is not needed anyway
because we #include stdio.h.


A bientôt,
Armin


Re: [pypy-dev] _init_posix() not ever called?

2021-09-21 Thread Armin Rigo
Hi,

On Mon, 20 Sept 2021 at 22:33, M A  wrote:
> Hi I was working in the file lib-python/2.7/distutils/sysconfig_pypy.py, on 
> the function _init_posix(). I placed print() statements in the function to 
> indicate when this function is called. After fully building pypy the print() 
> statements were never called. I then used grep to try to find out where this 
> function is called in the source code. I ran this command: grep 
> "_init_posix()" -r *. After looking at the results it looks like this 
> function is never called by anything. Why do we have it? More importantly can 
> we delete it?

It is called from the same file by these lines:

    func = globals().get("_init_" + os.name)
    if func:
        func()

If you don't see print statements at runtime, then it might be the
case that it's called at translation time instead.  The module's state
with _init_posix() already called would then get frozen inside the
translated pypy.
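The dispatch pattern itself is easy to reproduce in isolation (a standalone sketch, not the actual sysconfig_pypy.py code):

```python
import os

calls = []

def _init_posix():
    calls.append("posix")

def _init_nt():
    calls.append("nt")

# Same lookup as in sysconfig_pypy.py: pick _init_<os.name> if defined.
func = globals().get("_init_" + os.name)
if func:
    func()
print(calls)  # e.g. ['posix'] on Linux or macOS
```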


A bientôt,
Armin


Re: [pypy-dev] [PATCH] Correctly set MACOSX_DEPLOYMENT_TARGET on arm, x86, and ppc

2021-09-19 Thread Armin Rigo
Hi,

On Sun, 19 Sept 2021 at 22:30, M A  wrote:
> > This would change MACOSX_DEPLOYMENT_TARGET in our existing builds for
> > x86_64 machines from '10.7' to '10.5'.  Can you explain why you
> > propose such a change?
>
> So people with Mac OS 10.5 can use PyPy on their x86_64 systems.

This might need some more care.  The value was bumped in the past to
10.7 because we're using important features in that version of OS X,
if I remember correctly.  It's not a random value we had since the
start of the project, because PyPy is much older than that.  You or
someone else would need to dig into the hg history to figure out why
it was the case, and if it's still needed.

Armin

On Sun, 19 Sept 2021 at 22:30, M A  wrote:
>
>
>
> > On Sep 19, 2021, at 3:50 PM, Armin Rigo  wrote:
> >
> > Hi,
> >
> > On Sun, 19 Sept 2021 at 21:13, M A  wrote:
> >> +elif "x86_64" in arch:
> >> +    arch = "x86_64"
> >> +    g['MACOSX_DEPLOYMENT_TARGET'] = '10.5'
> >
> > This would change MACOSX_DEPLOYMENT_TARGET in our existing builds for
> > x86_64 machines from '10.7' to '10.5'.  Can you explain why you
> > propose such a change?
> >
> >
> > A bientôt,
> >
> > Armin
>
> So people with Mac OS 10.5 can use PyPy on their x86_64 systems.
>
> Thank you.


Re: [pypy-dev] [PATCH] Correctly set MACOSX_DEPLOYMENT_TARGET on arm, x86, and ppc

2021-09-19 Thread Armin Rigo
Hi,

On Sun, 19 Sept 2021 at 21:13, M A  wrote:
> +elif "x86_64" in arch:
> +    arch = "x86_64"
> +    g['MACOSX_DEPLOYMENT_TARGET'] = '10.5'

This would change MACOSX_DEPLOYMENT_TARGET in our existing builds for
x86_64 machines from '10.7' to '10.5'.  Can you explain why you
propose such a change?


A bientôt,

Armin


Re: [pypy-dev] Subscribed to mailing list, but don't see how to submit a question

2021-07-06 Thread Armin Rigo
Hi,

You write to the mailing list by writing to pypy-dev@python.org.

Your question: How do you break into running pypy3 code, to stop execution?

Like CPython, it's supposed to be Ctrl-C or Ctrl-Break. Maybe it didn't
work in some older versions on Windows. If you are using Windows, try
upgrading to the latest version.

A bientôt,
Armin



Re: [pypy-dev] seeking pypy gc configuring guideline

2021-05-31 Thread Armin Rigo
Hi Raihan,

On Thu, 27 May 2021 at 17:44, Raihan Rasheed Apurbo  wrote:
> I have read the test codes of test_llinterp.py and llinterp.py. I am writing 
> a summary below. I just want you to point out my mistakes if I am wrong. 
> Please help me to understand this.

There are many tests doing various checks at various levels.  I think
you should focus on the "final" tests that try to translate the GC to
C and run it: they are in rpython/translator/c/test/test_newgc.py.
The reason I'm suggesting to focus on these tests instead of looking
at all intermediate levels is that, if I'm understanding it correctly,
you mostly want to plug in an existing library to do the GC.  You're
thus going to fight problems of integration more than problems of
algorithmically correct code.  It's OK if you ignore the other tests
because they are likely to not work out of the box in your case.  For
example they're half-emulating the actual memory layout of the
objects, but your external library cannot work with such emulation.
So I'd recommend looking at test_newgc instead of, say,
rpython/rtyper/llinterp.py, which is here only to make this emulation
work.

test_newgc contains tests of the form of a top-level RPython function
(typically called f()) that does random RPython things that need
memory allocation (which is almost anything that makes lists, tuples,
instances of classes, etc.).  These RPython things are automatically
translated to calls to lltype.malloc().  Some of the tests in
test_newgc are directly calling lltype.malloc() too.  For the purpose
of this test, the two kinds of approaches are equivalent.
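As a rough illustration, such a test function has the following shape (a hypothetical sketch, not actual test_newgc source; in the real tests the function is translated to C with the GC under test before being run):

```python
def f():
    # Plain RPython-level code: each list literal below turns into
    # lltype.malloc() calls after translation, which exercises the GC.
    total = 0
    for i in range(1000):
        lst = [i, i + 1, i + 2]
        total += lst[2]
    return total

# Here we simply run it on top of a Python interpreter; the test framework
# would instead compare this value with the one returned by the translated
# binary.
assert f() == sum(i + 2 for i in range(1000))
```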

Note also that in the past, during development of the branch
"stmgc-c8", we made a different GC in C directly and integrated it a
bit differently, in a way that is simpler and makes more sense for
external GCs.  Maybe you want to look at this old branch instead.  But
it can also be confusing because this STM GC added many kinds of
unusual barriers (rpython/memory/gctransform/stmframework.py in this
branch).


A bientôt,

Armin.


Re: [pypy-dev] seeking pypy gc configuring guideline

2021-05-22 Thread Armin Rigo
Hi Raihan,

On Sat, 22 May 2021 at 14:40, Raihan Rasheed Apurbo  wrote:
> Thanks for helping me out. I was actually looking for a generalized solution. 
> The last paragraph of your answer covers that. What I understand from that 
> is, if I want to understand how pypy gc works and if i want to write my own 
> version of GC at first I have to understand all the tests written on 
> rpython/memory. I will now look extensively into that.
>
> I have tried to understand some of the test codes earlier but there is a 
> problem that I faced. Suppose gc_test_base.py written in rpython/memory/test 
> not only uses modules written in rpython.memory but also uses modules like 
> llinterp from rpython.rtyper. As I don't know how those modules work how do I 
> figure out their function written in the test codes? Do you have any 
> suggestions for someone who is facing this issue?

Modules from rpython.rtyper.lltypesystem are essential.  You need to
learn about them to the extent that they are used.  They have test
files in "test" subdirectories, like almost every file in PyPy and
RPython.

Armin


Re: [pypy-dev] seeking pypy gc configuring guideline

2021-05-22 Thread Armin Rigo
Hi Raihan,

On Sat, 22 May 2021 at 09:05, Raihan Rasheed Apurbo 
wrote:

> I was trying to run pypy using semispace gc. But as it stands currently I
> can only compile using incminimark and boehm. What changes I have to make
> so that I would be able to test pypy with any existing gc implementation?
> Moreover I want to write my own version of gc and test with pypy. In
> future. Can anyone point me where do I have to make changes to make my
> implementation of any arbitrary gc work? So far I was able to compile an
> edited version of incminimark gc.
> Any suggestions or link to any perticular documentation would be
> appreciated.
>
> p.s. I ran pypy with the following command and got this error. I can run
> same command just replacing incminimark in place of semispace.
>

This screenshot contains the whole explanation in bold red text: some
components of PyPy have evolved over time to depend on special features of
the GC.  These features have been implemented only in the 'incminimark' and
the 'boehm' GC.  For experimentation with other GCs, you can disable these
components---like the 'cpyext' module of PyPy---by following the
instructions in the bold red text.

Note also that if you want to play with the GC, it makes little sense to
translate a complete PyPy every time.  You should at least have a small
RPython program, much simpler than the whole PyPy, and use that for
testing.  But ideally you should look at the way we write various tests in
rpython/memory/.


A bientôt,

Armin


Re: [pypy-dev] moving blog posts to pypy.org

2021-03-09 Thread Armin Rigo
Hi Matti,

On Tue, 9 Mar 2021 at 13:53, Matti Picus  wrote:
> Last chance to stop the move is now. Comments or fixes/typos to the blog
> post(s) are also welcome.

Thank you for your continued work!

Armin


Re: [pypy-dev] Contributing Polyhedral Optimisations in PyPy

2020-12-18 Thread Armin Rigo
Hi,

On Fri, 18 Dec 2020 at 19:15, muke101  wrote:
> Thanks both of you for getting back to me, these definitely seem like 
> problems worth thinking about first. Looking into it, there has actually been 
> some research already on implementing Polyhedral optimisations in a JIT 
> optimiser, specifically in JavaScript. It's paper 
> (http://impact.gforge.inria.fr/impact2018/papers/polyhedral-javascript.pdf) 
> seems to point out the same problems you both bring up, like SCoP detection 
> and aliasing, and how it worked around them.
>
> For now then I'll try and consider how ambitious replicating these solutions 
> would be and if they would map into PyPy from JS cleanly - please let me know 
> if any other hurdles come to mind in the meantime though.

I assume that by "JavaScript" you mean JavaScript with a method-based
JIT compiler.  At this level, that's the main difference with PyPy,
which contains RPython's tracing JIT compiler instead.  The fact that
they are about the JavaScript or Python language is not that
important.

Here's another idea about how to do more advanced optimizations in a
tracing JIT a la PyPy.  The idea would be to keep enough metadata for
the various pieces of machine code that the current backend produces,
and add logic to detect when this machine code runs for long enough.
At that point, we would involve a (completely new) second level
backend, which would consolidate the pieces into a better-optimized
whole.  This is an idea that exists in method JITs but that should
also work in tracing JITs: the second level backend can see all the
common paths at once, instead of one after the other.  The second
level can be slower (within reason), and it can even know how common
each path is, which might give it an edge over ahead-of-time
compilers.


A bientôt,

Armin.


Re: [pypy-dev] Contributing Polyhedral Optimisations in PyPy

2020-12-18 Thread Armin Rigo
Hi,

On Thu, 17 Dec 2020 at 23:48, William ML Leslie
 wrote:
> The challenge with implementing this in the pypy JIT at this point is
> that the JIT only sees one control flow path.  That is, one loop, and
> the branches taken within that loop.  It does not find out about the
> outer loop usually until later, and may not ever find out about the
> content of other control flow paths if they aren't taken.

Note that strictly speaking, the problem is not that you haven't seen
yet other code paths.  It's Python, so you never know what may happen
in the future---maybe another code path will be taken, or maybe
someone will do crazy things with `sys._getframe()` or with the
debugger `pdb`.  So merely seeing all paths in a function doesn't
really buy you a lot.  No, the problem is that emitting machine code
is incremental at the granularity of code paths.  At the point where
we see a new code path, all previously-seen code paths have already
been completely optimized and turned into machine code, and we don't
keep much information about them.

To go beyond this simple model, what we have so far is that we can
"invalidate" previous code paths at any point, when we figure out that
they were compiled using assumptions that no longer hold.  So using
it, it would be possible in theory to do any amount of global
optimizations: save enough additional information as you see each code
path; use it later in the optimization of additional code paths;
invalidate some of the old code paths if you figure out that its
optimizations are no longer valid (but invalidate only, not write a
new version yet); and when you later see the old code path being
generated again, optimize it differently.  It's all doable, but
theoretical so far: I don't know of any JIT compiler that seriously
does things like that.  It's certainly worth a research paper IMHO.
It also looks like quite some work.  It's certainly not just "take
some ideas from [ahead-of-time or full-method] compiler X and apply
them to PyPy".


A bientôt,

Armin.


Re: [pypy-dev] Support for OS X arm64

2020-11-22 Thread Armin Rigo
Hi Janusz,

On Sun, 22 Nov 2020 at 16:51, Janusz Marecki  wrote:
> Hi PyPy dev team -- Are there any plans for adding native support for the 
> newly released OSX arm64 (M1) platform?

We can make such plans provided we get some support.  It requires some
work because we need to adapt the JIT (we support arm64 but of course
OS X comes with its own calling convention and likely other
differences).

So far, two people in our group might each be willing, on some
condition.  My conditions are getting ssh access and a small-ish
bounty of $3000. Fijal said "I think if someone sends me an arm64 OS X
laptop, I'm willing to do it too", which may or may not be a cheaper
option.  In both cases, we need a machine to add to our buildbots; one
solution would be to buy us an additional Mac Mini; another would be
to guarantee 24/7 ssh access for the foreseeable future (i.e. not
dropping out after 3 months, like occurred too often with our Mac
buildbots).


A bientôt,

Armin.


Re: [pypy-dev] Pypy on aarch64 (rhe7) has issues with bzip2-libs

2020-10-08 Thread Armin Rigo
Hi,

On Thu, 8 Oct 2020 at 10:22, Srinivasa Kempapura Padmanabha
 wrote:
> I think it's looking for /usr/lib64/libbz2.so.1.0.0, which I am not sure
> is compatible

It should be, if it differs only in the last digit.

> I have created the symbolic link and yet it didn't work

Sorry, I can't help you more.  Maybe if you described with more
details what "didn't work"?  What are you running and what error
message are you getting?


Armin


Re: [pypy-dev] Pypy on aarch64 (rhe7) has issues with bzip2-libs

2020-10-08 Thread Armin Rigo
...

ah, also the command given by matti has the arguments in the wrong
order (I think).  It should be

sudo ln -s /usr/lib64/libbz2.so.1.0.6 /usr/lib64/libbz2.so.1.0


Armin


Re: [pypy-dev] Pypy on aarch64 (rhe7) has issues with bzip2-libs

2020-10-08 Thread Armin Rigo
Hi,

On Thu, 8 Oct 2020 at 07:42, Srinivasa Kempapura Padmanabha
 wrote:
> I am running on rhe7 aarch64; I tried but it didn't work.

You have to run `ldconfig` (without arguments, as root) for the system
to pick up the new symlink.

A bientôt,
Armin


Re: [pypy-dev] Repository unavailable

2020-08-19 Thread Armin Rigo
Hi Will,

On Wed, 19 Aug 2020 at 23:42, Will Snellen  wrote:
> Trying to download various versions of Pypy3, I get the following message:
>
> >>>Repository unavailable
> Bitbucket no longer supports Mercurial repositories.<<<

The page https://www.pypy.org/download.html contains the updated
links.  If you're getting this error from some package manager on some
OS, you could also inform whoever is maintaining that package that
this package needs to be updated.


A bientôt,

Armin.


Re: [pypy-dev] Changing the PyPy download page

2020-06-28 Thread Armin Rigo
Hi Ram,

On Sun, 28 Jun 2020 at 21:35, Ram Rachum  wrote:
> We discussed that maybe I should make that change and open a PR for it.

I'm +1 on the idea.

Armin


[pypy-dev] Fwd: pypy | Proposed changes to make pypy3 work when embedded within a virtualenv (!728)

2020-06-15 Thread Armin Rigo
Hi,

I'm very reluctant to make changes relating to virtualenv and installation
in general, because I have very limited experience with that topic
(particularly on Python 3).  Ronan or someone else, could you please double
check it? Thanks!

Armin


-- Forwarded message -
From: Armin Rigo 
Date: Sun, 14 Jun 2020, 10:20 PM
Subject: pypy | Proposed changes to make pypy3 work when embedded within a
virtualenv (!728)
To: 


Armin Rigo <https://foss.heptapod.net/arigo> created a merge request:

Branches: topic/py3.6/embedded-in-virtualenv to branch/py3.6
Author: Armin Rigo
Assignees:

This fixes cffi#460 <https://foss.heptapod.net/pypy/cffi/issues/460>.
Unsure it is the correct fix, though, so I'll seek review by making a merge
request.

—
Reply to this email directly or view it on GitLab
<https://foss.heptapod.net/pypy/pypy/merge_requests/728>.
You're receiving this email because of your activity on foss.heptapod.net.
If you'd like to receive fewer emails, you can unsubscribe
<https://foss.heptapod.net/sent_notifications/c51e0ea364d4bbe059ddb8297af71da2/unsubscribe>
from this thread or adjust your notification settings.


Re: [pypy-dev] Dough about prevision on updating pypy to python 3.8

2020-06-14 Thread Armin Rigo
Hi,

On Wed, 10 Jun 2020 at 19:30, João Victor Guazzelli via pypy-dev
 wrote:
> I'm a Brazilian developer hoping to use pypy, or should I say, needing to 
> use pypy. I'm thinking about upgrading my code to python 3.8 but pypy is 
> mandatory. That being said, my doubt is whether pypy is compatible with 3.8 
> and, if it is not, whether there is a forecast for that.

PyPy supports Python 2.7 or 3.6 at the moment, with 3.7 being in
progress.  We can't give any time estimate for when 3.8 support might
be available.


A bientôt,

Armin.


Re: [pypy-dev] pypy 3.6 and WinXP

2020-05-31 Thread Armin Rigo
Hi,

On Fri, 29 May 2020 at 20:10, Joseph Jenne via pypy-dev
 wrote:
> I don't have much experience with the windows side of pypy, especially
> on XP, but I would suggest trying to compile for your system, as I do
> not know of any reasons for incompatibility. That said, perhaps using a
> somewhat older version might be preferable, depending on the specifics
> of your use case

Joseph, this sounds like very generic advice, of which Denis is likely
aware.  Denis, my offer to discuss things concretely on
pyp...@python.org still stands.  We are not aware of any WinXP-specific
issues because we have never tried to build PyPy there.  Likely, it
doesn't work out of the box.  My guess is that we need to remove some
WinAPI calls along with the corresponding Python-level functions, and
we'd end up with a separate build that misses some functions (likely
obscure and specific ones from the 'os' module, for example).  Ideally
we wouldn't rebuild ourselves every release but let you do it.
Whatever the result, we wouldn't do it for free, but if it's only a
matter of carefully removing a bit of functionality then it can
probably be done at a reasonable price.  If you want to give it a go
yourself, you're welcome too, and we'd be happy to merge a branch
which adds a translation flag to remove dependencies on non-WinXP
functionality, for example.


A bientôt,

Armin.


Re: [pypy-dev] pypy 3.6 and WinXP

2020-05-29 Thread Armin Rigo
Hi Denis,

On Thu, 28 May 2020 at 18:05, Denis Cardon  wrote:
> WinXP is actually still very common in industrial setups and it would be
> great if it would work with PyPy, as CPython has dropped WinXP support after
> 3.4.

I see the point.  If some parts of the industry want a modern version
of Python but are otherwise stuck on WinXP, then it can be problematic
for them.  I am rather sure that no-one among the core developers will
do it for free.  I suppose that a contract job could be arranged if
there is enough money, though.  The correct place to discuss this is
probably pyp...@python.org .


A bientôt,

Armin.


Re: [pypy-dev] Fwd: Python2.7 compatible PyPy 7.3.1 on Windows

2020-05-11 Thread Armin Rigo
Hi,

On Mon, 11 May 2020 at 19:24, Massimo Sala  wrote:
> I am trying to install the Postgresql module psycopg2
> The installation of the source module fails ... suggesting
> If you prefer to avoid building psycopg2 from source, please install the 
> PyPI
> 'psycopg2-binary' package instead.

The binary package can't work, because it was compiled for CPython.

The source package fails to install because of this error (see psycopg2.log):

Error: pg_config executable not found.

I suggest you check your installation.  My guess is that you need to
have a C compiler, and to have installed the headers necessary to
compile psycopg2.  I don't know if there is an issue
with the program pg_config on Windows.  Maybe look if there are
specific instructions about how to compile the psycopg2 Python package
from source on Windows (instructions written with CPython in mind, but
that probably work with PyPy too).

Alternatively, you should look for psycopg2cffi.


A bientôt,

Armin


Re: [pypy-dev] cffi embedding interface uwsgi plugin

2020-04-23 Thread Armin Rigo
Hi Daniel,

On Thu, 23 Apr 2020 at 09:08, Daniel Holth  wrote:
> Need to look up how to initialize virtualenv at the Python level. I had more 
> success with pypy 7.1.1 which seemed to be finding the virtualenv based on 
> the working directory. Currently pypy 7.3.1 is having trouble finding the os 
> module in pypy_init_home. And this patch 
> https://foss.heptapod.net/pypy/cffi/issues/450 is needed to get rid of a 
> warning.

Can you give a reproducer for the problem you see in finding the os
module in pypy_init_home?  Thanks!


A bientôt,

Armin.


Re: [pypy-dev] Error running Idle

2020-02-28 Thread Armin Rigo
Hi Jerry,

On Fri, 28 Feb 2020 at 16:56, Jerry Spicklemire  wrote:
> exec code in self.locals
>  ^
> SyntaxError: Missing parentheses in call to 'exec'

This error comes from Python 3; it's not a syntax error in Python 2.
When you're executing a Python file directly, it picks whatever Python
interpreter is associated with .py files in your Windows.  It doesn't
matter that the .py file is inside a specific directory like
PyPy27-32.  In your case it is picking up either CPython 3.x or
PyPy3.6, whichever is installed.  To run a .py file with a specific
version of Python, use a command like "c:\python\pypy27-32\pypy.exe
c:\path\to\file.py".
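For reference, the statement form that triggers this error exists only in Python 2; Python 3 spells it as a function call (a minimal sketch, unrelated to the idlelib code itself):

```python
# Python 2 statement:   exec code in namespace      (SyntaxError on Python 3)
# Python 3 equivalent:  exec(code, namespace)
namespace = {}
exec("x = 41 + 1", namespace)
assert namespace["x"] == 42
```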


A bientôt,

Armin.


Re: [pypy-dev] Making the most of internal UTF8

2020-02-26 Thread Armin Rigo
Hi Jerry,

On Wed, 26 Feb 2020 at 16:09, Jerry Spicklemire  wrote:
> Is there a tutorial about how to best take advantage of PyPy's internal UTF8?

For better or worse, this is only an internal feature.  It has no
effect for the end user.  In particular, Python programs written for
PyPy3.6 and for CPython3.6 should work identically.  The fact that it
uses internally utf-8 is not visible to the Python
program---otherwise, it would never be cross-compatible.
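For example, all of the following hold on both CPython 3.6 and PyPy3.6, even though PyPy stores the string as UTF-8 internally (a small sketch):

```python
s = "héllo"
assert len(s) == 5                            # length in code points, not bytes
assert s[1] == "é"                            # indexing is by code point too
assert s.encode("utf-8") == b"h\xc3\xa9llo"   # explicit encoding is 6 bytes
```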


A bientôt,

Armin.


Re: [pypy-dev] Help needed: are you running windows with a non-ascii interface?

2020-02-26 Thread Armin Rigo
Hi again,

On Wed, 26 Feb 2020 at 14:28, Armin Rigo  wrote:
> In particular the first escaped character \Uf44f really should be
> two characters, '\x92O', and there is similar mangling later.  Also
> the first of the two unicodes is much shorter on CPython3.  Finally,
> the very last character is rendered as '\x39' but I have no clue why
> it is even rendered as '\x39' instead of just the ascii '9'.
>
> So yes, we have more than one bug here.

Uh, in fact time.tzname[0] == time.tzname[1] and if I print separately
time.tzname[0] and time.tzname[1] I get twice the same thing:

'Europe de l\Uf44fuest (heure d\Uf4e9t\u79c0'

But if I print the tuple, or manually (time.tzname[0], time.tzname[1])
or even (time.tzname[0], time.tzname[0]), then I see the result above
where the second item is repr'ed differently from the first one.

Armin


Re: [pypy-dev] Help needed: are you running windows with a non-ascii interface?

2020-02-26 Thread Armin Rigo
Hi Matti,

On Wed, 26 Feb 2020 at 11:59, Matti Picus  wrote:
> - check what pypy3 returns for time.tzname? There is no code to decode
> it, so it is probably a sting of bytes. What encoding is it in?

On a french Windows I get, in CPython 3.6, a tuple of two unicodes
that seem correct; and on PyPy3 I get instead a tuple of two unicodes
that are very incorrect.  CPython3.6 (first line) versus PyPy3.6:

('Europe de l\x92Ouest', 'Europe de l\x92Ouest (heure d\x92été)')
('Europe de l\Uf44fuest (heure d\Uf4e9t\u79c0', 'Europe de
l\Uf44fuest (heure d\Uf4e9t\x39')

In particular the first escaped character \Uf44f really should be
two characters, '\x92O', and there is similar mangling later.  Also
the first of the two unicodes is much shorter on CPython3.  Finally,
the very last character is rendered as '\x39' but I have no clue why
it is even rendered as '\x39' instead of just the ascii '9'.

So yes, we have more than one bug here.


Armin


Re: [pypy-dev] Moving PyPy to https://foss.heptapod.net

2020-02-06 Thread Armin Rigo
Hi Matti,

On Thu, 6 Feb 2020 at 07:24, Matti Picus  wrote:
> >> - disallows personal forks on the instance
> > -1  (any reason why?)
>
> This is a Octobus decision, so maybe Georges can chime in. My
> understanding is that because the heptapod offering is on a trial basis,
> (...)

Yes, that makes sense for now (seen the discussion on IRC too).

> In order to block changes you need to navigate the UI and disallow each
> branch separately. It is simply too much work. This is all temporary, on
> May 31 those repos dissapear.

OK.

> I am personally done with bitbucket/atlassian. Hosting the downloads
> should be simple, if we had the bandwidth we could do it on the buildbot
> master.

OK too.  We certainly have the bandwidth to host all the past files on
the buildbot master.  For future releases it's unclear; there are 200
GB of downloads of just the py3.6-win32.zip for every release...


A bientôt,

Armin.


Re: [pypy-dev] Moving PyPy to https://foss.heptapod.net

2020-02-06 Thread Armin Rigo
Hi Michal, hi all,

I reworded the entry at
https://doc.pypy.org/en/latest/faq.html#why-doesn-t-pypy-use-git-and-move-to-github
to account for our decision to move to Heptapod.

A bientôt,
Armin


Re: [pypy-dev] Moving PyPy to https://foss.heptapod.net

2020-02-05 Thread Armin Rigo
Hi Matti,

Thank you for all the organizational work!

On Wed, 5 Feb 2020 at 10:59, Matti Picus  wrote:
>- changes our default workflow to "Publishing is restricted to
> project masters" (I think that means only project masters can push/merge
> to published branches, but am not sure about the terminology), however
> we could override that

-1  (who is a project master? do we need the distinction between two
levels of membership at all for anything technical?)

>- disallows personal forks on the instance

-1  (any reason why?)

> - decide which repos to abandon. On the issue above I proposed
> transferring only a subset of our bitbucket repos, please make sure your
> favorite is included

I suppose I'm +0.5 on dropping the old and unused repos.  We still
have them around anyway in the sense that any one of us with a copy
can re-publish them somewhere at any point.  (See also below.)

> - block changes to the active branches (default, py3.6, py3.7 of pypy,
> and the HEAD branch of the other repos); any new contributions will have
> to be done via the heptapod instance

So you mean, we would still be allowed to push into the various repos
on bitbucket as long as it is not on these main branches?  Unsure why.

> - what to do about downloads? It is not clear that the gitlab instance
> has a place for artifacts. Assume we find a solution, how far back do we
> want to keep versions?

Would it be possible to just keep this on bitbucket and point to it?
I understand the idea of stopping all mercurial services, but a priori
they won't delete everything else too?  If they do, maybe we just need
a hack like convert the pypy repo to git on bitbucket (and then never
use it).  Same for the wiki.  And for all our many-years-old dead
repos---we can convert them in-place to git if that means they can
stay there.

Of course all this is assuming we're fine with keeping a few
historical things on bitbucket.  If you decide you'd prefer to have
nothing more to do with bitbucket soon and they should die instead of
continuing to get however little publicity we'd continue to give them
by doing that, then I would understand too.


A bientôt,

Armin.


Re: [pypy-dev] Leysin Winter sprint 2020

2020-01-29 Thread Armin Rigo
Hi again,

On Tue, 14 Jan 2020 at 10:24, Armin Rigo  wrote:
> More details will be posted here, but for now, here is the early
> planning: it will occur for one week starting around the 27 or 28th of
> February.  It will be in Les Airelles

Here are the definite dates: from Saturday the 29th of February to the 8th of
March.  I have a big room with a nice view, for CHF 50 per night per
person.  (I'm not sure but it is probably in several sub-rooms.)
Simon and Antonio have their own arrangements.  If the other people
could aim for these dates it would be easier, but if you end up coming
one or two days earlier we can find some different arrangement for
these extra days.


A bientôt,

Armin.


[pypy-dev] Leysin Winter sprint 2020

2020-01-14 Thread Armin Rigo
Hi all,

We will do again this year a Winter sprint in Leysin, Switzerland.

The exact work topics are not precisely defined, but will certainly
involve HPy (https://github.com/pyhandle/hpy) as well as the Python
3.7 support in PyPy (the py3.7 branch in the pypy repo).

More details will be posted here, but for now, here is the early
planning: it will occur for one week starting around the 27 or 28th of
February.  It will be in Les Airelles, a different bed-and-breakfast
place from the traditional one in Leysin.  It is a nice old house at
the top of the village.

There are various rooms for 2, 4 or 5 people, costing 40 to 85 CHF per
person per night.  I'd recommend the spacious, 5 people room (divided
in two subrooms of 2 and 3), with a great balcony, at 50 CHF pp.

We'd like to get some idea soon about the number of people coming.
Please reply to this mail to me personally, or directly put your name
in 
https://bitbucket.org/pypy/extradoc/src/extradoc/sprintinfo/leysin-winter-2020/
.


A bientôt,

Armin.


Re: [pypy-dev] pypy specific code flags DONT_IMPLY_DEDENT and SOURCE_IS_UTF8

2019-12-16 Thread Armin Rigo
Hi Rocky,

On Mon, 16 Dec 2019 at 11:57, Rocky Bernstein  wrote:
>
> I did a little test, and for this program:
>
> # from 3.7 test_asyncgen.py
> def test_async_gen_iteration_01(self):
>     async def gen():
>         await awaitable()
>         a = yield 123
>
> the 3.6 ASYNC_GENERATOR flag is added to the code object created for "gen()",
> as 3.6 does. So I infer that flag 0x200 in a code object doesn't imply the
> PYPY_DONT_IMPLY_DEDENT property. Does that mean that in 3.6 the
> PYPY_DONT_IMPLY_DEDENT flag will never appear in a code object? Or just for
> 3.6 and above, or for all PyPy versions?
>
> I don't want to try guessing what the principle being used is looking at all 
> versions of Pypy across all the pypy-specific flags listed in 
> pypy/interpreter/astcompiler/consts.py if there is a principle being followed 
> and someone can just tell me what it is or point to a document that describes 
> what's up here.

I could probably dig the answer, but the general rule is that PyPy
should do the same thing as CPython.  These two PyCF_XXX flags are
passed as arguments to compile(), and should never end up inside the
co_flags object; the CO_XXX flags are what ends up there.

> On Mon, Dec 16, 2019 at 5:33 AM Armin Rigo  wrote:
>> Can you give us a concrete example of code where CPython 3.6 differs
>> from PyPy 3.6?
>
>
> CPython 3.6 bytecode differs from PyPy 3.6 bytecode and, so far as I know, 
> CPython bytecode version x.y differs from PyPy version x.y. Otherwise I 
> wouldn't have to go through the extra effort to try to add code to make PyPy 
> disassembly and decompilation possible in a cross-version way. I have found, 
> for example, that there are opcodes FORMAT_VALUE, BUILD_STRING and 
> JUMP_IF_NOT_DEBUG in PyPy3.6 that are not in CPython (among, I 
> think, other opcodes).
>
> But again I'd prefer not to guess this kind of thing if it is documented 
> somewhere or someone who knows this already can just let me know.

I meant, an example where the flags that are produced are different.
It's entirely possible that there is a problem somewhere, e.g. we
might stick the wrong kind of flags inside co_flags.  If you're asking
this question here in the first place, then maybe you already have
such an example?  If not, then you can assume that there is no
difference as far as the flags are concerned; the only difference is
that PyPy has got a very small number of extra opcodes.  (Note that
this includes JUMP_IF_NOT_DEBUG, but not FORMAT_VALUE and
BUILD_STRING, which are from CPython.)


A bientôt,

Armin.


Re: [pypy-dev] pypy specific code flags DONT_IMPLY_DEDENT and SOURCE_IS_UTF8

2019-12-16 Thread Armin Rigo
Hi,

On Mon, 16 Dec 2019 at 11:24, Rocky Bernstein  wrote:
> I have a cross-version Python bytecode disassembler xdis 
> (https://pypi.org/project/xdis/) and I notice that flags 0x0100 and 0x0200 
> (DONT_IMPLY_DEDENT and SOURCE_IS_UTF8 respectively) conflict in Pypy 3.6 with 
> Python 3.6's  ITERABLE_COROUTINE and ASYNC_GENERATOR.

Looking in the CPython 3.6 source code, I see that
PyCF_DONT_IMPLY_DEDENT = 0x0200 as well there, even though
``inspect.CO_ASYNC_GENERATOR`` is also 0x0200.  The same with
PyCF_SOURCE_IS_UTF8 = 0x0100 and ``inspect.CO_ITERABLE_COROUTINE``.
So the conflict you're talking about seems to exist in CPython 3.6
too.
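The overlap, and why it is harmless, can be checked directly on CPython 3.6+, where `codeop` happens to expose the compile-time flag (a quick sanity check, not part of the original thread):

```python
import codeop
import inspect

# The same numeric value lives in two unrelated namespaces: PyCF_* flags
# are arguments to compile(), while CO_* flags live in code objects' co_flags.
assert codeop.PyCF_DONT_IMPLY_DEDENT == 0x200
assert inspect.CO_ASYNC_GENERATOR == 0x200

# The compile-time flag never ends up in co_flags: the async generator's
# code object carries CO_ASYNC_GENERATOR for its own, unrelated reason.
mod = compile("async def g():\n    yield 1\n", "<test>", "exec")
gen_code = next(c for c in mod.co_consts if hasattr(c, "co_flags"))
assert gen_code.co_flags & inspect.CO_ASYNC_GENERATOR
```

Since the PyCF_ value is consumed by compile() and discarded, equal numbers are not a real conflict.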

Can you give us a concrete example of code where CPython 3.6 differs
from PyPy 3.6?


A bientôt,

Armin.


Re: [pypy-dev] PyPy3: is bytecode really incompatible between releases?

2019-10-29 Thread Armin Rigo
Hi Matti,

On Sun, 27 Oct 2019 at 10:33, Matti Picus  wrote:
> Would this be considered a major API breaking change or only a revision
> change? Would we need to change to pypy 8.0 (i.e.
> pypy36-pp80-x86_64-linux-gnu.so), or can we stay with pypy 7.3 (i.e.,
> pypy36-pp73-x86_64-linux-gnu.so)? In any case, wheels made for pypy
> before this change would not be compatible with ones after it.

I like to think that major versions of PyPy should also indicate that
we did some major work in other areas, like the JIT compiler, the json
decoder, etc. etc.  The question of whether the next version should be
called "7.3" or "8.0" should weight on that IMHO.  It should not
depend *only* on whether we broke the API inside cpyext.  That means
cpyext needs to have its own way to tell that the API broke; for
example, it could use file names "pypy36-pp#-x86_64-linux-gnu.so" with
the "#" being the API version number.  Something like that.  Maybe
just an increasing number starting at 42 (as the number following the
"pypy41" we use so far; unrelated to the meaning of life!)


A bientôt,

Armin.


Re: [pypy-dev] PyPy3: is bytecode really incompatible between releases?

2019-10-25 Thread Armin Rigo
Hi Matti,

On Fri, 25 Oct 2019 at 10:21, Matti Picus  wrote:
>
>
> On 25/10/19 10:49 am, Matti Picus wrote:
> > Yes, actually, thanks. They should be named {python tag}-{abi
> > tag}-{platform tag}, so pypy36-pp72-x86_64-linux-gnu.so. I will make
> > sure the python3.6/python3.7 so names do not overlap before we release
> > a python3.7 alpha.
>
> Needs more thought. The changes in the C-API are reflected in the
> platform tag: 71 is incompatible (perhaps only slightly) with 72. What
> breaking changes are there from the perspective of a C-API module
> between python 3.6 to 3.7, 3.8, 3.9?

The C module itself may contain "#if PY_VERSION_HEX >= 0x0307" or
similar, in order to compile some feature (or work around some issue)
that is only available on CPython 3.{N} but no 3.{N-1}.  So I think
it's a good idea to include both the CPython and the PyPy version in
the name.
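As a toy illustration of that naming idea (the exact tag layout here is an assumption, modeled on the `pypy36-pp73-x86_64-linux-gnu.so` example quoted earlier in the thread):

```python
def cext_so_suffix(cpython_version, pypy_abi_tag, platform="x86_64-linux-gnu"):
    # Encode both the CPython level the module was built for and the PyPy
    # cpyext ABI it was built against, so neither can silently mismatch.
    return "pypy%s-%s-%s.so" % (cpython_version.replace(".", ""),
                                pypy_abi_tag, platform)

assert cext_so_suffix("3.6", "pp73") == "pypy36-pp73-x86_64-linux-gnu.so"
assert cext_so_suffix("3.7", "pp73") != cext_so_suffix("3.6", "pp73")
```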


A bientôt,

Armin.


Re: [pypy-dev] PyPy3: is bytecode really incompatible between releases?

2019-10-23 Thread Armin Rigo
Hi again,

On Wed, 23 Oct 2019 at 16:32, Armin Rigo  wrote:
> (...)  In
> other words we should not update it for cffi modules (which is
> unlikely to ever change, and I can check but I think the same .so
> works for pypy2 and pypy3, so maybe a version number is not needed at
> all);

Yes, I think that's the case.  The .so for cffi should be almost
entirely the same on pypy2 and pypy3, with one minor difference that
turns out not to matter.  (The module exports a function
_cffi_pypyinit__foo() that is declared to return void on pypy2, but
"PyObject *" on pypy3---where it returns NULL and the actual return
value is never checked.  We do it that way because we're reusing the
convenient macro PyMODINIT_FUNC from Python.h.)


A bientôt,

Armin.


Re: [pypy-dev] PyPy3: is bytecode really incompatible between releases?

2019-10-23 Thread Armin Rigo
Hi Matti,

On Tue, 22 Oct 2019 at 15:34, Matti Picus  wrote:
> #DEFAULT_SOABI = 'pypy-%d%d' % PYPY_VERSION[:2]
> DEFAULT_SOABI = 'pypy-41'
>
> So do we update it across the board for each change in the cpyext ABI?

No, my point was that if we want to do that we should first split the
usages, and only update the version used for C extension modules.  In
other words we should not update it for cffi modules (which is
unlikely to ever change, and I can check but I think the same .so
works for pypy2 and pypy3, so maybe a version number is not needed at
all); and also not for .pyc files (which should just be "pypy-36" for
pypy3 implementing python 3.6, and if at some point we really want to
add a new bytecode, then well, we'll think again, I suppose).


A bientôt,

Armin.


Re: [pypy-dev] PyPy3: is bytecode really incompatible between releases?

2019-10-20 Thread Armin Rigo
Hi Matti,

On Sun, 20 Oct 2019 at 09:47, Matti Picus  wrote:

> I would like to confirm that in fact there is an issue: that the
> c-extension shared objects are incompatible. I am not completely
> convinced this is the case, at least my experimentation with NumPy
> proved it indeed is *not* the case for PyPy2. I am open to hearing
> opinions from others. Is there a concensus around whether we do need to
> change the ABI designation? I think this would also require
> recompilation of any CFFI shared objects on PyPy2.

In PyPy2 there are two different numbers: the version in the
".pypy-XY.so" extension, and the internal version in the ".pyc" files.
In PyPy3 the ".pyc" files have grown to ".pypy-XY.pyc".  (This is
confusing because if you translate PyPy3.6 and the in-progress PyPy3.7
then they'll try to use the same ".pypy-XY.pyc" extension, even though
the internal bytecode version in that file is different.)  If we want a
single number that changes mostly every release, then we're doing the
right thing.  If instead we prefer to keep several more precise
numbers, we should use different numbers for (a) the C extensions; (b)
the .pyc files; and even (c) the cffi modules.  As far as I
understand, the problem with doing that is that people used to (and
code used on) CPython are not really ready to handle this situation.

As for the precise question you're asking, "do we need to change the
ABI designation in PyPy2", the answer is yes, imho: we should change
it as soon as we break the ABI, even if only in a corner case that
doesn't concern most C extensions...


A bientôt,

Armin.


Re: [pypy-dev] Windows 10 error

2019-10-09 Thread Armin Rigo
Hi,

On Wed, 9 Oct 2019 at 16:45, Kristóf Horváth
 wrote:
> Thank you for your reply. The listed DLLs were already part of my system, 
> but installing the Visual C++ Redistributable solved the problem, so it works 
> now. Perhaps you could add a note about the dependencies to your 
> install page.

Thanks for the confirmation!  I've updated the page
http://pypy.org/download.html.


A bientôt,

Armin


Re: [pypy-dev] Windows 10 error

2019-10-09 Thread Armin Rigo
Hi again,

On Wed, 9 Oct 2019 at 16:28, Armin Rigo  wrote:
> 1. Install http://www.microsoft.com/en-us/download/details.aspx?id=5582

Oops, this link is now broken.  Thanks MS.  Here is another page that
mentions the right VCRuntime140.dll in the question title:

https://answers.microsoft.com/en-us/windows/forum/windows_10-update/vcruntime140dll/b1a15b0e-e389-41a3-bc49-f4fc85bac575
https://www.microsoft.com/en-us/download/details.aspx?id=52685

You need the 32-bit version.


Armin


Re: [pypy-dev] Windows 10 error

2019-10-09 Thread Armin Rigo
Hi,

On Wed, 9 Oct 2019 at 16:16, Kristóf Horváth
 wrote:
> I downloaded the Windows binary, but pypy3.exe fails on startup with error 
> code 0xc07b (I tried versions 3.6 and 3.5, too). I use Windows 10 x64; can 
> that cause the problem? I read in the building instructions that only a 32-bit 
> version is available on Windows at this time, but it should work on a 64-bit 
> system, right?

Yes.  Assuming (randomly) that this is because of a missing
dependencies, can you try:

1. Install http://www.microsoft.com/en-us/download/details.aspx?id=5582

2. If it doesn't help, check that you have the following files, e.g.
in C:\Windows\System32:

ADVAPI32.dll
WS2_32.dll
VCRUNTIME140.dll


Armin


Re: [pypy-dev] Sandbox 2

2019-08-15 Thread Armin Rigo
Hi Ryan,

On Wed, 14 Aug 2019, 4:58 AM Ryan Gonzalez  wrote:

> Just as a random side note, this reminds me a bit of https://gvisor.dev/
>

Thanks. Yes, that's similar. The main difference is that our approach is
slightly more portable and works at the slightly higher level of the C
library interface rather than really the system calls. I also guess that it
is easier to integrate the controller processing into any existing program
(not sure how easy that is with gvisor.dev).


A bientôt,

Armin


[pypy-dev] Sandbox 2

2019-08-13 Thread Armin Rigo
Hi all,

I've got an early but probably usable PyPy and PyPy3 with sandboxing.
Like the old sandboxing, these new versions are made out of a special
version of PyPy (or PyPy3, which is new), running in a mode where no
direct I/O should be possible; and a separate controller process.

The communication between these two processes has been rewritten.
Now, it actually looks similar to the communication between a process
and a OS kernel.  For example, when a regular process wants to write
data to a file, it calls the OS function "write(int fd, char *buf,
size_t count)" (actually it calls the standard C library, which is
itself a relatively thin wrapper around the OS, but we'll ignore
that).  Then the OS proceeds to read directly the part of the memory
owned by the process, from addresses "buf" to "buf + count".

In the sandboxed version of PyPy, this is replaced by the sandboxed
process saying to the controller process "I'm trying to call write()
with these three arguments: two numbers and one pointer".  Assuming
that the controller supports "write()", it contains logic that will
then ask the sandboxed process for the content of some of its memory,
from "buf" to "buf + count".

In other system calls like "read(fd, buf, count)", the controller
would write (instead of reading) data into the sandboxed process' raw
memory, between "buf" and "buf + count".

This approach means that a single sandboxed PyPy (or PyPy3) is needed,
and it supports out of the box all "system calls".  It's up to the
controller process to either support them or not, and when it does, to
do things like (if necessary) reading and writing raw memory from/to
the sandboxed subprocess.  It puts more responsibility into the hands
of the controller process, but is also far more flexible.  It is the
same security approach as for the OS, basically: the sandboxed process
can do whatever it wants but all its I/O are tightly controlled.
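A minimal, purely illustrative sketch of the controller side of such a `write()` exchange follows. The `Sandbox` class and its `read_memory()` method are hypothetical stand-ins for the real sandboxlib API, not its actual interface:

```python
import os

class Sandbox(object):
    """Hypothetical stand-in for a handle on the sandboxed subprocess."""
    def __init__(self, memory):
        self.memory = memory              # fake address space, as bytes

    def read_memory(self, addr, size):
        # The real controller would ask the subprocess for this memory range.
        return self.memory[addr:addr + size]

def handle_write(sandbox, fd, buf, count):
    # The sandboxed process announced: "I'm calling write(fd, buf, count)".
    # The controller fetches the data itself and applies its own policy.
    data = sandbox.read_memory(buf, count)
    if fd in (1, 2):                      # policy: allow only stdout/stderr
        os.write(fd, data)
        return count                      # return value the subprocess sees
    raise OSError("write() to fd %d denied by controller" % fd)
```

All the dangerous work happens inside the controller, under its policy; the sandboxed process only ever sees the return value.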


Sources:

* The sandboxed PyPy including the necessary RPython changes is in
PyPy's standard repo, branch "sandbox-2".  Translate with
``rpython/bin/rpython -O2 --sandbox``.

* The PyPy3 version is in the branch "py3.6-sandbox-2".  Same command
to translate.

* The controller process, or at least one possible version of it, has
been split into its own repo: http://bitbucket.org/pypy/sandboxlib .
This is CPython-or-PyPy-2-or-3 source code.  You need to run
``setup.py build_ext -f -i``.  To run the tests (with pytest), make a
symlink from ``test/pypy-c-sandbox`` to the sandboxed version built
above.  Try also ``interact.py``.

Right now there is no way to limit the amount of RAM of CPU time
consumed by the sandboxed process, but I think it should be done with
standard platform tools (e.g. ``ulimit``).


Please try it out and give me any feedback !


A bientôt,

Armin.


Re: [pypy-dev] Segfault observed while loading dynamic library in (pypy-7.0.0-linux_x86_64-portable)

2019-07-25 Thread Armin Rigo
Hi again,

On Thu, 25 Jul 2019 at 16:53, Nabil Memon  wrote:
> I did figure out the workaround by not importing this cpyext module at first.

Cool.  In the meantime I managed to sneak in a workaround for the
problem that shouldn't have a performance impact, in c89821342184.


A bientôt,

Armin.


Re: [pypy-dev] Segfault observed while loading dynamic library in (pypy-7.0.0-linux_x86_64-portable)

2019-07-25 Thread Armin Rigo
Hi Nabil,

On Wed, 24 Jul 2019 at 09:02, Nabil Memon  wrote:
>> I am currently using pypy(pypy-7.0.0-linux_x86_64-portable) and I am facing 
>> some issues using it.
>> I installed pip, cython and numpy using commands:
>>   $ ./pypy-xxx/bin/pypy -m ensurepip
>> $ ./pypy-xxx/bin/pip install cython numpy
>> After that, I installed one package "pysubnettree" thorough pypy's pip.
>> While importing SubnetTree class, I get segmentation fault.

This is because the module does something that is unexpected from us,
and probably in the gray area of "correctness".  The C++ module
SubnetTree.cc contains this global variable declaration (it couldn't
occur in C):

static PyObject* dummy = Py_BuildValue("s", "");

This is in the gray area because there is no particular guarantee
that, even on CPython, this is called while the GIL is held.  In the
case of PyPy, that crashes if it's the first cpyext module you import.
A quick workaround is to say "import numpy" before; this should fix
that particular problem.

We may figure out an acceptable workaround, but this is really a
"don't do that".  Ideally you should fix it in SubnetTree.cc and
submit a pull request.


A bientôt,

Armin.


Re: [pypy-dev] Portable PyPy builds

2019-05-17 Thread Armin Rigo
Hi Antonio,

On Fri, 17 May 2019 at 14:49, Antonio Cuni  wrote:
> Does anybody have opinions on this?

I agree that it's good if linking to them is done more prominently.  I
don't have a particular opinion about where they live.

A bientôt,
Armin


[pypy-dev] Fwd: Mercurial conference - Paris end of May

2019-05-07 Thread Armin Rigo
Hi Pierre-Yves, hi all,

Here's a forwarded announcement for the upcoming Mercurial sprint in
Paris.  None of us also involved in Baroque Software can be there
because the date conflicts with another conference we're taking part
in.  But other people could be interested.

A bientôt,
Armin.

-- Forwarded message -
From: Pierre-Yves David 
Date: Mon, 6 May 2019 at 16:28
Subject: Mercurial conference - Paris end of May
To: Armin Rigo , Maciej Fijalkowski
, georges Racinet



Hey Baroque Software,

I wanted to make sure you are aware of the upcoming Mercurial Conference
in Paris (France) at the end of May (28th). It will gather users from
diverse backgrounds to share experiences, exchange with core developers
and discover new features.
The first edition will focus in particular on modern tooling, workflows
and scaling. We would be very happy to have someone from Baroque
and the Pypy project with us.

Feel free to forward the information to the pypy project. I am not sure
about the best way for me to do it.


You can read more details in the original announcement :
https://www.mercurial-scm.org/pipermail/mercurial/2019-April/051196.html

And registration is available here:
https://www.weezevent.com/mercurial-conference-paris-2019.

Cheers,

--
Pierre-Yves David


Re: [pypy-dev] PyPy as micropython Replacement

2019-03-19 Thread Armin Rigo
Hi Mark,

On Tue, 19 Mar 2019 at 12:46, Mark Melvin  wrote:
> I was wondering if PyPy would be a good candidate to create an embedded 
> distribution of Python similar to micropython.

No, PyPy's minimal memory requirements are larger than CPython's.


A bientôt,

Armin.


Re: [pypy-dev] pip_pypy3 throws exception when updating

2019-02-18 Thread Armin Rigo
Hi Joseph,

On Mon, 18 Feb 2019 at 14:33, Joseph Reagle  wrote:
> Homebrew on MacOS Mojave just updated to pypy3 7.0.0 and reminded me to 
> update setuptools. Doing so, though, throws an exception.

We're unlikely to be able to help.  The problem might be anywhere from
homebrew to setuptools to the way we package pypy3, and my guess would
be that nobody can reproduce the problem now (even if we had OS X
machines to test with).  I assume that if you remove everything and
install pypy3 from scratch instead of via an update, everything works
fine?


A bientôt,

Armin.


Re: [pypy-dev] Why is pypy slower?

2019-02-13 Thread Armin Rigo
Hi Joseph,

On Wed, 13 Feb 2019 at 16:19, Maciej Fijalkowski  wrote:
> On Wed, Feb 13, 2019 at 3:57 PM Joseph Reagle  wrote:
> > Is it possible for pypy to remember optimizations across instantiations?
>
> It is not possible.

A more constructive answer: in some cases, you can change the overall
approach.  The problem is likely not that it takes 2s instead of 1s to
run the program, but that this difference is multiplied many times
because you run the same program many times on different input data.
In that case, you may try to convert the single-use script into a
local "server" that is only started once.  Then you change your
``fe.py`` script to connect to it and "download" the result locally.
The point is that the server runs in a single process that remains
alive.
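A bare-bones sketch of that approach (the function names and the line-based protocol are made up for illustration; the real script would do its actual work in place of `expensive_compute`):

```python
import socketserver
import threading

def expensive_compute(text):
    # Placeholder for the real work.  In a long-lived process, the
    # JIT-compiled version of this is reused for every later request.
    return text.upper()

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().strip().decode("utf-8")
        self.wfile.write((expensive_compute(line) + "\n").encode("utf-8"))

def start_server(port=0):
    # port=0 lets the OS pick a free port; the warm server keeps running
    # in a background thread while short-lived clients come and go.
    srv = socketserver.TCPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

The single-use script then shrinks to a thin client that connects, sends its input, and reads the result back.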


A bientôt,

Armin.


Re: [pypy-dev] PyPy 7.0.0 release candidate

2019-02-07 Thread Armin Rigo
Re-hi,

On Fri, 8 Feb 2019 at 00:33, Armin Rigo  wrote:
> getting "InvalidArgument: Invalid argument deadline" in a call to
> hypothesis.settings() (rpython/conftest.py:14) and upgrading hypothesis
> doesn't seem to help...

My mistake, an old "hypothesis" kept being used.  But the logic in
rpython/conftest.py to detect old versions of hypothesis was broken.
Fixed in default in 54492586de6f; maybe this should be cherry-picked
for the release branch (for source downloads).


A bientôt,

Armin.


Re: [pypy-dev] PyPy 7.0.0 release candidate

2019-02-07 Thread Armin Rigo
Hi Anto,

On Thu, 7 Feb 2019 at 11:46, Antonio Cuni  wrote:
> I have uploaded all the packages for PyPy 7.0.0; the release is not yet 
> official and we still need to write the release announcement, but the 
> packages are already available here, for various platforms:

It's unclear to me how you managed to translate the
release-pypy2.7-v7.0.0 tag.  I'm getting "InvalidArgument: Invalid
argument deadline" in a call to hypothesis.settings()
(rpython/conftest.py:14) and upgrading hypothesis doesn't seem to
help...


A bientôt,

Armin.


Re: [pypy-dev] Procedure for releasing PyPy

2019-01-24 Thread Armin Rigo
Hi Anto,

On Thu, 24 Jan 2019 at 18:40, Antonio Cuni  wrote:
> 1) hg up -r default
> 2) hg branch release-pypy2.7-7.x
> 3) bump the version number to 7.0.0-final (commit d47849ba8135)
> 4) hg up -r default
> 5) hg merge release-pypy2.7-7.x (commit c4dc91f2e037)
> 6) bump the version number (on default) to 7.1.0-alpha0 (commit f3cf624ab14c)
> 7) merge default into release-pypy2.7-7.x, editing the files before the 
> commit to avoid changing the version again (commit 7986159ef4d8)

I think you can in theory do it in less steps by doing only one merge
with more complicated edits, if you set things up properly (maybe make
the branch, commit the version number 7.0.0-final, and merge that back
to default but editing the version to 7.1.0-alpha0 in the merge
commit...).  Looks like even more of a hack than your 7 steps, though.


A bientôt,
Armin


Re: [pypy-dev] Review for blog post

2018-12-24 Thread Armin Rigo
Hi Anto,

On Mon, 24 Dec 2018 at 12:45, Carl Friedrich Bolz-Tereick  wrote:
>> https://bitbucket.org/pypy/extradoc/src/extradoc/blog/draft/2018-12-gc-disable/gc-disable.rst?at=extradoc&fileviewer=file-view-default

Any clue about why the "purple line" graph, after adding some
gc.disable() and gc.collect_step(), is actually 10% faster than the
baseline?  Is that because "purple" runs the GC when "yellow" would be
sleeping waiting for the next input, and you don't count that time in
the performance?  If so, maybe we could clarify that we don't expect
better overall performance by adding some gc.disable() and
gc.collect_step() in a program doing just computations---in this case
it works because it is reorganizing tasks in such a way that the GC
runs at a moment where it is "free".
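In code, the reorganization amounts to something like the sketch below. `gc.collect_step()` exists only on PyPy, so it is guarded with `hasattr` to keep the sketch runnable elsewhere:

```python
import gc

def serve(requests, process):
    # Run the latency-critical work with major collections disabled,
    # and pay for GC in the gaps where the loop would otherwise idle.
    gc.disable()
    results = []
    try:
        for req in requests:
            results.append(process(req))      # no major-GC pause in here
            if hasattr(gc, "collect_step"):   # PyPy-only incremental step
                gc.collect_step()             # do GC work while "waiting"
    finally:
        gc.enable()
    return results
```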


A bientôt,

Armin.


Re: [pypy-dev] no list() support for

2018-12-04 Thread Armin Rigo
Hi Timothy,

On Tue, 4 Dec 2018 at 02:31, Timothy Baldridge  wrote:
> I have the following code in my interpreter, and I'm getting an error message 
> I haven't seen before:
>
> @specialize.call_location()
> def assoc(self, *kws):
> (...)
> for idx in unrolling_iterable(range(0, len(kws), 2)):

The problem might be that the call to unrolling_iterable() should not
be seen in RPython code, as far as I know.  It should only occur
outside compiled functions.

It might be possible to rewrite your example to do that but it's not
completely obvious how.  Here's how I would do it instead.  Use the
fact that any function with ``*args`` is automatically specialized
based on the number of arguments (no @specialize needed):

def assoc(self, *kws):
new_dict = {}
shape = self.get_shape()
for k in shape.attr_list():
new_dict[k] = shape.getter_for(k).get_value(self)
_turn_to_dict(new_dict, kws)
return DictStruct(new_dict)

def _turn_to_dict(new_dict, *kws):
if not kws: return   # done
new_dict[kws[0]] = kws[1]
_turn_to_dict(new_dict, *kws[2:])

This looks like a recursive call, but it isn't: the function
"_turn_to_dict_6" calls "_turn_to_dict_4", which calls
"_turn_to_dict_2", etc.  And at each level the tuple "kws" is replaced
with individual arguments: nobody is actually making and inspecting
any tuple at runtime.  Most or all of the levels of calls are inlined
because each function is small enough.


A bientôt,

Armin.


Re: [pypy-dev] Slowdown on latest 3.5 nightly?

2018-11-20 Thread Armin Rigo
Hi Matti,

On 20/11/2018, Matti Picus  wrote:
> On 16/11/18 4:50 pm, Donald Stufft wrote:
> If you have extra time, maybe you could even try the unicode-utf8-py3
> nightly, http://buildbot.pypy.org/nightly/unicode-utf8-py3 which is a
> WIP to use utf8 internally everywhere without converting back and forth
> to unicode. It would be nice to know if that is any faster (or even works).

That's even assuming the slow-down is in any way related to unicodes.
I think we might as well assume that the slowdown is not related to
any change in the code and due to other random effects.  If we want to
know more precisely what is going on, the first step would be to get
the real code that is unexpectedly slower on one pypy than on the
other, and try to run it ourselves---e.g. inside gdb, to start with.


Armin


Re: [pypy-dev] Best way to inline a primitive array in RPython?

2018-10-30 Thread Armin Rigo
Hi Timothy,

On Tue, 30 Oct 2018 at 00:42, Timothy Baldridge  wrote:
> typedef struct foo_t {
> int a, b;
> int _data[0];
> } foo_t;
>
> foo_t *tmp = malloc(sizeof(foo_t) + 64);

You can do that if you use the lltype.GcStruct type directly, not
using "regular" RPython code.  See the main example in
rpython.rtyper.lltypesystem.rstr: the types STR and UNICODE.  They are
defined as lltype.GcStruct('name', ..some_regular_fields..,
inlined_array_field), and allocated with lltype.malloc(STR,
length_of_array).

Note that you also need to prevent the JIT from seeing any such type:
it has got special case for STR and UNICODE but doesn't handle the
general case.


A bientôt,

Armin.


Re: [pypy-dev] PyCXX and PyPy status

2018-10-27 Thread Armin Rigo
Hi Barry,

On Sat, 27 Oct 2018 at 16:22, Barry Scott  wrote:
> Where is your test code for the this feature? I could review/modify that
> to help find a difference that may explain the problem.

You are saying that your use case fails, but we don't know which part
of cpyext is causing the failure.  I can point to the whole tests of
cpyext (they are in pypy/module/cpyext/test/; new-style classes tests
are in test_typeobject).  But that seems not useful.

What I'm asking for is any way for us to reproduce the problem.  It
can be step-by-step instructions about building PyCXX (like you
suggested); or it can be a smaller .c use case, which might help
to motivate us to look at it.  It doesn't have to be specially prepared
or integrated with the cpyext tests---we first need to figure out what
the real problem is, before we can write a relevant unit test.


A bientôt,

Armin.


Re: [pypy-dev] Fwd: [#3588935] Re: PyPy 3 for Windows 64-bit systems

2018-10-25 Thread Armin Rigo
Hi,

On Thu, 25 Oct 2018 at 16:48,  wrote:
> The Wikipedia article seems to imply that using Cygwin as a compiler on 
> Windows 64 would resolve the problem?

It might, but then you would have to compile all the CPython C
extension modules using Cygwin as well, which is (1) unexpected and
(2) likely to break some
of them not expecting a different 64-bit model.  In addition, I have
no clue about which calling convention Cygwin-on-64bit-Windows is
using, which is important for our JIT.  So no, that's unlikely to be
the right approach to 64-bit Windows.


A bientôt,

Armin.


Re: [pypy-dev] Fwd: [#3588935] Re: PyPy 3 for Windows 64-bit systems - no download link available

2018-10-24 Thread Armin Rigo
Hi,

On Tue, 23 Oct 2018 at 10:22, Barry  wrote:
> Each time i email pypy i get a email from this hosting company.
>
> Can you unsubscribe them?

It's not so easy because we have no direct clue about which e-mail
address ends up on that random issue tracker.  But I found it anyway
and removed the offender.


A bientôt,

Armin.


Re: [pypy-dev] PyPy 3 for Windows 64-bit systems - no download link available

2018-10-21 Thread Armin Rigo
Hi,

On Sun, 21 Oct 2018 at 16:47, Barry Scott  wrote:
> > On 19 Oct 2018, at 11:26, Armin Rigo  wrote:
> > PyPy is not available for 64-bit Windows.
>
> How odd. sys.maxint on macOS and Fedora is (2**63)-1 not (2**31)-1.

That's because MacOS and Fedora are not Windows.


Armin


Re: [pypy-dev] PyPy 3 for Windows 64-bit systems - no download link available

2018-10-19 Thread Armin Rigo
Hi,

On Fri, 19 Oct 2018 at 12:23,  wrote:
> It will not run with PyPy https://pypy.org/download.html on Windows, because 
> there are only 32-bit PyPy installs for Windows available

PyPy is not available for 64-bit Windows.  See
http://doc.pypy.org/en/latest/windows.html#what-is-missing-for-a-full-64-bit-translation
for technical information.  It's not high priority for us, which means
we won't do it unless we're getting a sizeable grant or contract.


A bientôt,

Armin.


Re: [pypy-dev] Difference to CPython (Maybe a bug in PyPy)

2018-10-12 Thread Armin Rigo
Hi,

On Fri, 12 Oct 2018 at 17:00, Wiener, Markus
 wrote:
> I found (maybe) a bug in PyPy.

Your code is relying on an explicitly undefined behavior described for
dicts in https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects
.  The same applies to sets, although I'm not sure it's as clearly
spelled out as for dicts.  Specifically, you cannot mutate the set
``a`` in the middle of a loop ``for j in a``.
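A quick way to see the rule on CPython, where the mutation is detected and raises, along with the portable fix of iterating over a snapshot:

```python
a = {1, 2, 3}
try:
    for j in a:
        a.add(j + 10)        # mutating the set being iterated over
except RuntimeError:
    pass                     # CPython: "Set changed size during iteration"
else:
    raise AssertionError("expected RuntimeError")

# The portable version: iterate over a snapshot instead.
a = {1, 2, 3}
for j in list(a):
    a.add(j + 10)
assert a == {1, 2, 3, 11, 12, 13}
```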

In fact, I think that if your code was written using dicts instead of
sets, then its behavior would change in CPython 3.6 because dicts are
ordered there.  Sets are not---as far as I know (and I guess I should
say "not yet"), so CPython 3.6 didn't change for your exact code.  But
PyPy's sets are ordered, and gives different results.
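
For illustration (not code from the original report), here is a minimal sketch of the undefined behavior in question. On CPython, growing a set while iterating over it happens to raise a RuntimeError; code that merely relies on the iteration *order*, as in the reported bug, can instead differ silently between interpreters:

```python
def mutate_while_iterating():
    """Mutating a set inside ``for j in a`` is explicitly undefined."""
    a = {1, 2, 3}
    try:
        for j in a:
            a.add(j + 10)  # grows the set mid-iteration
        return "no error"
    except RuntimeError as exc:
        return str(exc)

print(mutate_while_iterating())  # CPython: "Set changed size during iteration"
```

Even when no exception is raised (for example if a removal balances each addition so the size stays constant), which elements the loop visits depends on the interpreter's set implementation, which is exactly the divergence described above.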


A bientôt,

Armin.


Re: [pypy-dev] PyCXX and PyPy status

2018-10-08 Thread Armin Rigo
Hi,

On Sun, 7 Oct 2018 at 18:05, Barry Scott  wrote:
> (gdb) p/x table->tp_flags
> $4 = 0x201eb
>
> But when the instance comes back to me they are:
>
> (gdb) p/x self->ob_type->tp_flags
> $11 = 0x1208
>
> Surely the flags should not have been changed?

Some flags change in CPython too.  The change you're seeing might be
correct, or it might not be---although I agree that the bits in the
two values are very different.  But it's hard for us to know at this
point without knowing what you are doing exactly.  You may be using
only the standard parts of the CPython C API, or instead be using
completely internal details.  Can you try to extract a small CPython
extension module as C code, containing the usual kind of PyTypeObject
definitions and calls to some PyXxx() functions, and which behaves
differently on CPython and on PyPy?  That would help us understand the
problem.


A bientôt,

Armin.


Re: [pypy-dev] PyCXX and PyPy status

2018-10-02 Thread Armin Rigo
Hi Barry,

On Tue, 2 Oct 2018 at 21:09, Barry Scott  wrote:
> Using PyPy 0.6.0 on macOS I managed to build a .so that crashed when imported.

I will assume that you mean PyPy2 6.0.0, and not PyPy3 nor the version 0.6.0.

> The reason is that the PyPy's C API does not allow tuples to be created.

You cannot change tuple objects after they "escaped" to become general
PyPy objects.  You can only call PyTuple_SetItem() on a tuple object
that exists as a "PyObject *" but not yet as a PyPy object.  That
means, mostly, you should only call PyTuple_SetItem() on the fresh
result of PyTuple_New() before the "PyObject *" is sent somewhere
else.  In your example code, maybe the set() function makes the tuple
object escape in this way.  Note that escaping tuples before they are
fully initialized can also sometimes create potential crashes on
CPython.
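
A pure-Python analogue of that rule (illustrative only; the actual constraint is on the C-API side): do all the mutation *before* the tuple exists, for example by accumulating into a list and freezing it with ``tuple()`` once fully initialized, rather than trying to patch a tuple that other code can already see:

```python
def build_frozen(values):
    items = []              # mutable while we are still "initializing"
    for v in values:
        items.append(v * 2)
    return tuple(items)     # freeze only once fully built

print(build_frozen([1, 2, 3]))  # (2, 4, 6)
```

This mirrors the C-API pattern: call PyTuple_SetItem only on the fresh result of PyTuple_New, and let the tuple escape only afterwards.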

>  if (PyMapping_DelItemString (ptr(), const_cast<char *>(s.c_str())) == -1)
>  if (PyMapping_DelItem (ptr(), *s) == -1)

I don't know how it ever worked, because PyMapping_DelItem and
PyMapping_DelItemString are not implemented, neither right now nor at
the time of release 6.0.0.

If you are using these two functions, please open an issue
(https://bitbucket.org/pypy/pypy/issues?status=new&status=open) and
we'll implement them.


A bientôt,

Armin.


Re: [pypy-dev] install pypy3 error

2018-09-28 Thread Armin Rigo
Hi Jie,

On Sat, 29 Sep 2018 at 07:18, Jie You  wrote:
> I can not install the pypy3 on my Mac. I try several times. It raised 
> NOTTY("Can not start the debugger when stdout is capture"). I have enclosed 
> the screenshot in the attachment. Please tell me how to fix it. Thank you so 
> much for your kindly support.

Sorry, this screenshot only shows the generic error "oops something
went wrong".  It doesn't print what went wrong.  The error message was
eaten; it may have been stored in some log file by the Homebrew
installer.  I would suggest that you first look on the Homebrew side
to figure out how to display the error message---you probably need to
start by reading the instructions at
"https://docs.brew.sh/Troubleshooting".


A bientôt,

Armin.


Re: [pypy-dev] CFFI: better performance when calling a function from address

2018-09-26 Thread Armin Rigo
Hi Carl Friedrich,

On Wed, 26 Sep 2018 at 22:28, Carl Friedrich Bolz-Tereick  wrote:
> Couldn't that slowness of getattr be fixed by making the lib objects eg use 
> module dicts or something?

If we use the out-of-line API mode then ``lib`` is an RPython object,
but if we use the in-line ABI mode then it's a pure Python object.
More precisely it's a singleton instance of a newly created Python
class, and the two extra instructions are reading and guard_value'ing
the map.

It might be possible to rewrite the whole section of pure-Python code
that builds the ``lib`` object for the in-line ABI mode, but that looks like it
would be even slower on CPython.  And I don't like the idea of
duplicating---or even making any non-necessary changes to---this
slightly-fragile-looking logic...

Anyway, I'm not sure to understand how just a guard_value on the map
of an object can cause a 250 ns slow-down.  I'd rather have thought it
would cause no measurable difference.  Maybe I missed another
difference.  Maybe the effect is limited to microbenchmarks.  Likely
another mystery of modern CPUs.


A bientôt,

Armin.


Re: [pypy-dev] CFFI: better performance when calling a function from address

2018-09-26 Thread Armin Rigo
Hi again,

On Wed, 26 Sep 2018 at 21:19, Dimitri Vorona via pypy-dev
 wrote:
> In my microbenchmarks it has pretty much the same call performance as when 
> using cffi ABI mode (dumping the functions to a shared library first) and is 
> around 250ns per call slower than when using API mode.
>
> I haven't looked at the generating assembly yet, but I guess pypy has to be 
> more careful, since not all information from API mode is available.

Just for completeness, the documentation of CFFI says indeed that the
API mode is faster than the ABI mode.  That's true mostly on CPython,
where the ABI mode always requires using libffi for the calls, which
is slow.  On PyPy, the JIT has got enough information to do mostly the
same thing in the end.  (And before the JIT, PyPy uses libffi both for
the ABI and the API mode for simplicity.)
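
For readers unfamiliar with the distinction: ABI mode opens a shared library at runtime and describes each signature by hand, with every call routed through libffi. A rough stdlib stand-in for that style (ctypes here instead of cffi, purely for illustration; it assumes a system libm is findable):

```python
import ctypes
import ctypes.util

# ABI-style: locate libm at runtime and declare sqrt's signature manually.
# No C compiler is involved, so on CPython each invocation goes through
# libffi rather than a compiled wrapper.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

API mode, by contrast, compiles a small C extension that calls the function directly, which is why it is faster on CPython; as noted above, PyPy's JIT largely erases that difference.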


A bientôt,

Armin.


Re: [pypy-dev] CFFI: better performance when calling a function from address

2018-09-26 Thread Armin Rigo
Hi Dimitri,

On Wed, 26 Sep 2018 at 21:19, Dimitri Vorona via pypy-dev
 wrote:
> In my microbenchmarks it has pretty much the same call performance as when 
> using cffi ABI mode (dumping the functions to a shared library first) and is 
> around 250ns per call slower than when using API mode.

I doubt that these microbenchmarks are relevant.  But just in case, I
found out that the JIT is producing two extra instructions in the ABI
case, if you call ``lib.foobar()``.  These two instructions are caused
by reading the ``foobar`` method on the ``lib`` object.  If you write
instead ``foobar()``, with either ``foobar = lib.foobar`` or ``from
_x_cffi.lib import foobar`` done earlier, then the speed is exactly
the same.
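
The suggestion above is ordinary attribute hoisting, sketched here with a plain Python stand-in (names are illustrative, not the actual cffi ``lib`` object):

```python
class FakeLib:
    """Stand-in for a cffi ``lib`` object."""
    def foobar(self):
        return 1

lib = FakeLib()

def call_via_attribute(n):
    total = 0
    for _ in range(n):
        total += lib.foobar()   # attribute lookup on every iteration
    return total

def call_via_local(n):
    foobar = lib.foobar         # look the method up once, outside the loop
    total = 0
    for _ in range(n):
        total += foobar()
    return total

assert call_via_attribute(1000) == call_via_local(1000) == 1000
```

On PyPy the JIT reduces the per-iteration lookup to the two extra traced instructions mentioned above; hoisting removes even those.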


A bientôt,

Armin.


Re: [pypy-dev] translation problems on unicode-utf8-py3

2018-08-31 Thread Armin Rigo
Hi,

On 31 August 2018 at 08:15, Matti Picus  wrote:
> I have two failures with translation that I need some help with on the
> unicode-utf8-py3 branch.
> Occasionally while translating the branch locally, I get an error

I've tried both on unicode-utf8-py3 and on unicode-utf8 (which seems
to have the same logic around these functions).  I didn't get the
error.  The error message is not enough to understand what is wrong,
I'll need a pdb prompt and look around.  Maybe if you manage to
reproduce in a "screen" on bencher4, I could connect.


Armin


Re: [pypy-dev] bencher4 failing own py3 builds

2018-08-27 Thread Armin Rigo
Hi Matti,

On 27 August 2018 at 18:38, Matti Picus  wrote:
> Own nightly builds on py3.5 are failing numerous cpyext tests. The problem
> seems to be related to the _cffi_backend module somehow hitting an assert at
> import, but I cannot reproduce the problem locally.

It reproduces if we run both test_buffer and test_translate together
(py.test test_buffer.py test_translate.py).

> Perhaps we should retire bencher4 and find another build machine, one
> that we can control a bit more?

It wouldn't be a bad idea.  Note that we should check again if it
would be OK to reinstall bencher4 in a way that we fully control
(including giving accounts to people at our discretion), which would
save us the trouble of looking elsewhere.  But maybe it still isn't
possible to do that...


A bientôt,

Armin.


Re: [pypy-dev] How to ptrace pypy stack

2018-07-21 Thread Armin Rigo
Hi,

On 21 July 2018 at 17:00, tao he  wrote:
> Yes, I have just read  the code of  https://eng.uber.com/pyflame/ .  And I
> try to do it with pypy, but  not work. So I have to ask for help.

This is more complicated to do in PyPy than in CPython.  PyPy has got
a JIT, and the JIT does not actually create the frame objects that are
found not to be needed.  This has been done, however, in "vmprof".

"vmprof" is similar to what you're trying to do, as far as I can tell,
but it is using an in-process timer instead of using "ptrace".  It
seems to me like it is simpler anyway: I don't understand why you
would use ptrace at all, honestly.  ptrace can be useful for profiling
a random external process, but if the goal is to profile exactly the
CPython (or PyPy) interpreter, then you may as well add code directly
inside instead of messing around with ptrace introspecting debugging
symbols to reconstruct the state.
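
A minimal in-process sampler in the spirit of vmprof, using a POSIX interval timer instead of ptrace (a sketch with made-up names; real vmprof samples the native stack and understands the JIT's elided frames, and this requires a Unix platform):

```python
import collections
import signal

samples = collections.Counter()

def _on_tick(signum, frame):
    # Record the name of the Python function running when the timer fired.
    samples[frame.f_code.co_name] += 1

def busy_work():
    x = 0
    for i in range(3_000_000):
        x += i * i
    return x

signal.signal(signal.SIGPROF, _on_tick)
signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)  # tick every 1ms of CPU time
busy_work()
signal.setitimer(signal.ITIMER_PROF, 0, 0)          # stop sampling

print(samples.most_common(1))  # busy_work dominates the samples
```

Because the handler runs inside the target process, no debugging symbols or external state reconstruction are needed, which is the simplification argued for above.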


A bientôt,

Armin.


Re: [pypy-dev] How to ptrace pypy stack

2018-07-20 Thread Armin Rigo
Hi again,

Ah, found out https://eng.uber.com/pyflame/ .  I guess this is what
you mean, is that correct?


Armin


Re: [pypy-dev] How to ptrace pypy stack

2018-07-20 Thread Armin Rigo
Hi,

On 21 July 2018 at 08:19, ht  wrote:
>I'm trying to use ptrace to profile pypy program, but i can't get the
> stack or virtual memory address more than file no.
>
>But python has a global variable named _PyThreadState_Current, that
> can help to extracting the thread state, so i wander if pypy has some method
> to do this. thanks!

It's not clear to me what you mean.

* How do you use ptrace for profiling?  Is it even giving timing information?

* Can you give an example of how you would use _PyThreadState_Current
inside CPython?

* What is the link between _PyThreadState_Current and ptrace?

Please describe more precisely what you are doing in CPython and why,
and then we'll try to think of a way to achieve the same goals.


A bientôt,

Armin.


Re: [pypy-dev] [PATCH] FreeBSD build fix

2018-07-18 Thread Armin Rigo
Hi David,

On 20 May 2018 at 18:39, David CARLIER  wrote:
> Here a new version of the patch.

Sorry for the delay.  Added your patch (minus the Makefile because I
don't think it's intended).  See also
https://bitbucket.org/pypy/pypy/issues/2853/build-fails-on-freebsd-11x-x64
.


A bientôt,

Armin.


Re: [pypy-dev] Why does elasticsearch raise an exception on pypy but not cpython?

2018-07-17 Thread Armin Rigo
Hi Sean,

On 16 July 2018 at 22:46, Sean Whalen  wrote:
> I have some code that saves data to Elasticsearch. It runs fine in Python
> 3.5.2 (cpython), but raises an exception when running on pypy 3 6.0.0
> (Python 3.5.3). Any ideas why?

Not out of the box.  If you provide some code that we can run and
which gives different results, then we can investigate.


A bientôt,

Armin.


Re: [pypy-dev] bug in PyPy3 context __exit__ handling

2018-07-01 Thread Armin Rigo
Hi Nathaniel,

On 1 July 2018 at 15:37, Nathaniel Pierce  wrote:
> Returning True from __exit__ when type is None can break expected program
> flow. So far, I've found that each of break/continue/return inside a with
> block is affected. Both with and without '--jit off'
>
> PyPy2 is unaffected.

Thanks!  This bug must have given very obscure behaviour.  Thanks for
identifying the cause.  Fixed in 0a4016e8a6bc!
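
For context, a minimal reproduction of the reported semantics (CPython behavior shown; illustrative, not the reporter's original code): a true return value from ``__exit__`` only matters for suppressing an in-flight exception, so break/continue/return through a ``with`` block must be unaffected when ``exc_type`` is None:

```python
class Suppressing:
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        # A true return value suppresses an exception, but must not
        # disturb break/continue/return when exc_type is None.
        return True

def first_even(values):
    for v in values:
        with Suppressing():
            if v % 2 == 0:
                break
    return v

def returns_through_with():
    with Suppressing():
        return "done"

assert first_even([1, 3, 4, 5]) == 4
assert returns_through_with() == "done"

# The suppression itself still works:
with Suppressing():
    raise ValueError("swallowed")
print("still running")  # reached because __exit__ returned True
```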


A bientôt,

Armin.


Re: [pypy-dev] Translating of FreeBSD fails for v6.0.0

2018-07-01 Thread Armin Rigo
Hi David,

Issue #2853 was just reported with the same error message.  See there:
https://bitbucket.org/pypy/pypy/issues/2853/build-fails-on-freebsd-112-x64


Armin

