On 9/1/2010 10:49 AM, sarvi wrote:

Is there a plan to adopt PyPy and RPython under the Python Software
Foundation, in an attempt to standardize both?

I have been watching PyPy and RPython evolve over the years.

PyPy seems to have momentum and is rapidly gaining followers and
performance.

PyPy's JIT and its performance would be a good thing for the Python
community, and it seems to be well ahead of Unladen Swallow in
performance, with room to improve quite a bit further.


Secondly, I have always fantasized about never having to write C code
yet getting its compiled performance.
With RPython (a strict subset of Python), I can actually compile it to
C/machine code.
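As a sketch of what that looks like in practice (a toy program of my own, not from the thread; the `target()` entry-point convention is what the RPython translation toolchain looks for, as I understand it, while the other names are mine):

```python
# A toy program written in the RPython subset of Python:
# statically inferable types, no dynamic tricks, so the
# translator can compile it down to C.

def fib(n):
    # Iterative Fibonacci; every variable stays a plain integer,
    # which is what makes the type inference trivial here.
    a, b = 0, 1
    i = 0
    while i < n:
        a, b = b, a + b
        i += 1
    return a

def entry_point(argv):
    # RPython entry points take argv and return an exit code.
    print(fib(30))
    return 0

def target(driver, args):
    # The RPython translator imports the module and calls target()
    # to find the program's entry point.
    return entry_point, None
```

Because RPython is a subset of Python, the same file also runs unmodified under plain CPython, which is convenient for testing before translation.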


These two seem like spectacular advantages for Python to pick up on.
And all it would take is the PyPy project and the Python foundation
showing their support and a direction to adopt them.


Yet I see this forum relatively quiet on PyPy and RPython. Any
reasons?

Sarvi

    The winner on performance, by a huge margin, is Shed Skin,
the optimizing type-inferring compiler for a restricted subset
of Python.  PyPy and Unladen Swallow have run into the problem
that if you want to keep some of the less useful dynamic semantics
of Python, the heavy-duty optimizations become extremely difficult.

    However, if we defined a High Performance Python language, with
some restrictions, the problem becomes much easier.  The necessary
restrictions are roughly this:

-- Functions, once defined, cannot be redefined.
   (Inlining and redefinition do not play well
   together.)

-- Variables are implicitly typed for the base types:
   integer, float, bool, and everything else.  The
   compiler figures this out automatically.
   (Shed Skin does this now.)

-- Unless a class uses a "setattr" function or has
   a __setattr__ method, its entire list of attributes is
   known at compile time.
   (In other words, you can't patch in new attributes
   from outside the class unless the class indicates
   it supports that.  You can subclass, of course.)

-- Mutable objects (other than some form of synchronized
   object) cannot be shared between threads.  This is the
   key step in getting rid of the Global Interpreter Lock.

-- "eval" must be restricted to the form that is given explicit
   namespaces listing the variables it can access.

-- Import after startup probably won't work.
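Present-day CPython already has a mechanism pointing in the direction
of the attribute restriction above: `__slots__`, which fixes a class's
attribute list at class-creation time. A minimal illustration (my own
example, not something proposed in the thread):

```python
class Point(object):
    # Declaring __slots__ fixes the attribute list up front;
    # instances get no __dict__, so new attributes cannot be
    # patched in from outside the class.
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1.0, 2.0)
try:
    p.color = 'red'      # not in __slots__
except AttributeError:
    print('rejected')
```

In other words, `__slots__` is one existing way a class can "indicate"
its full attribute set, which is exactly the compile-time knowledge
the restriction asks for.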

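The restricted form of eval already exists in today's Python: you can
pass explicit globals and locals dictionaries, so the names the
evaluated expression may touch are visible ahead of time. A small
illustration (note this is about making the name set statically known,
not a security sandbox; emptying `__builtins__` is known not to be a
real security boundary):

```python
# eval with explicit namespaces: the expression can only see
# the names we hand it, not the enclosing scope.
allowed = {'x': 3, 'y': 4}
result = eval('x * x + y * y', {'__builtins__': {}}, allowed)
print(result)  # -> 25

try:
    # 'open' is not among the allowed names, so the lookup fails.
    eval('open("/etc/passwd")', {'__builtins__': {}}, allowed)
except NameError:
    print('open is not reachable')
```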
Those are the essential restrictions.  With those, Python
could go 20x to 60x faster than CPython.  The failures
of PyPy and Unladen Swallow to get any significant
performance gains over CPython demonstrate the futility
of trying to make the current language go fast.

Reference counts aren't a huge issue.  With some static
analysis, most reference count updates can be optimized out.
(As for how this is done, the key issue is to determine whether
each function "keeps" a reference to each parameter.  For
any function which does not, that parameter doesn't have
to have reference count updates within the function.
Most math library functions have this property.
You do have to analyze the entire program globally, though.)

                                John Nagle
--
http://mail.python.org/mailman/listinfo/python-list
