Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Andrew Brown
Thanks for the info!
That's all the questions I have, for now at least. Feel free to reply with
any more tips if you think of any.

I read over the posts you linked, Carl. They were certainly informative and
helpful, thanks. I'll keep thinking of ways to improve the performance of my
test interpreter, but it's so simple that I don't think there's much more
that can be done. The shared attribute maps described by that link don't
really apply here.

In any case, I'm satisfied with the speed. It's still beaten by a BF to C
translator combined with gcc -O2 though, that'd be a tough case to beat. =)

-Andrew

On Tue, Mar 29, 2011 at 5:33 AM, Armin Rigo  wrote:

> Hi Andrew,
>
> On Mon, Mar 28, 2011 at 7:21 PM, Andrew Brown  wrote:
> > When the optimizer encounters a "pure" function, it must compare the
> > objects "promote - promote the argument from a variable into a
> > constant". Could this be an appropriate alternative to the
> > @purefunction solution? Or, I'm guessing, does it just mean the name
> > bracket_map won't change bindings, but does not impose a restriction
> > on mutating the dictionary?
>
> One point of view on 'promote' is to mean "this variable was red, but
> now turn it green (i.e. make it constant)".  It has no effect on a
> variable that is already green (= a constant).
>
> We have no support for considering that a dict is immutable, so it
> needs to be done with @purefunction.  But to be effective,
> @purefunction must receive constant arguments; so in one or two places
> in the source code of PyPy you will find a construction like:
>
>   x = hint(x, promote=True)   # turn x into a constant
>   some_pure_function(x) # call this pure function on x
>
> Indeed, Carl Friedrich's blog series explains it nicely, but it should
> also mention that when the hints described in the blog are applied not
> to integers but to pointers, they apply only to the pointers
> themselves, not to the fields of the objects they point to.
>
>
> A bientôt,
>
> Armin.
>
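Putting those two pieces together for the bracket_map case, here is a minimal
sketch of the pattern, assuming the pypy.rlib.jit helpers Armin mentions (the
function name and surrounding code are illustrative, not the exact contents of
my repo):

    from pypy.rlib.jit import purefunction, hint

    @purefunction
    def get_matching_bracket(bracket_map, pc):
        # Only safe to declare pure because bracket_map is never mutated
        # after parsing.
        return bracket_map[pc]

    def jump_target(bracket_map, pc):
        # Promote bracket_map so the pure call receives constant arguments;
        # the JIT can then constant-fold the dict lookup out of the trace.
        bracket_map = hint(bracket_map, promote=True)
        return get_matching_bracket(bracket_map, pc)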
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Antonio Cuni
On 31/03/11 14:28, Andrew Brown wrote:
> In any case, I'm satisfied with the speed. It's still beaten by a BF to C
> translator combined with gcc -O2 though, that'd be a tough case to beat. =)

what happens if you combine the BF to C with gcc -O0 or -O1?

Anyway, I think that if you feel like writing a post explaining your
experience with using pypy and its jit for writing an interpreter, we could
publish it on our blog.  I suppose it would be useful/interesting for other
people as well.

What do the others think?
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Laura Creighton
In a message of Thu, 31 Mar 2011 14:33:40 +0200, Antonio Cuni writes:
>On 31/03/11 14:28, Andrew Brown wrote:
>> In any case, I'm satisfied with the speed. It's still beaten by a BF to C
>> translator combined with gcc -O2 though, that'd be a tough case to beat. =)
>
>what happens if you combine the BF to C with gcc -O0 or -O1?
>
>Anyway, I think that if you feel like writing a post explaining your
>experience with using pypy and its jit for writing an interpreter, we could
>publish it on our blog.  I suppose it would be useful/interesting for other
>people as well.
>
>What do the others think?

I'd look forward to reading it.

Laura

___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Andrew Brown
Compiling with -O0 is really quick, but the runtime is fairly slow. I haven't
tried -O1. -O2 takes a few seconds to compile, but compile plus runtime is
still faster than the pypy version with the jit, though not by too much (I'm
recalling the tests I did with the mandelbrot program specifically). I can
get some actual numbers later today.

Sure, I'll write up a post. This was a lot of fun, and I think it's a great
way to teach people how pypy works.
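
To give a flavor of what the post would cover: the JIT-specific part of the
interpreter is essentially just a JitDriver declaration wrapped around the
dispatch loop. A rough sketch (tape is assumed to be a small class with
advance/devance/inc/dec/get methods, names are illustrative rather than the
exact code in my repo, and ',' input handling is elided):

    import os
    from pypy.rlib.jit import JitDriver

    # greens identify a position in the interpreted program;
    # reds are everything else that changes while it runs.
    jitdriver = JitDriver(greens=['pc', 'program', 'bracket_map'],
                          reds=['tape'])

    def mainloop(program, bracket_map, tape):
        pc = 0
        while pc < len(program):
            jitdriver.jit_merge_point(pc=pc, program=program,
                                      bracket_map=bracket_map, tape=tape)
            code = program[pc]
            if code == '>':
                tape.advance()
            elif code == '<':
                tape.devance()
            elif code == '+':
                tape.inc()
            elif code == '-':
                tape.dec()
            elif code == '.':
                os.write(1, chr(tape.get()))
            elif code == '[' and tape.get() == 0:
                # skip the loop: land on the matching ']'; the pc += 1
                # below then moves past it
                pc = bracket_map[pc]
            elif code == ']' and tape.get() != 0:
                # repeat the loop: land on the matching '['; the pc += 1
                # below then re-enters the body
                pc = bracket_map[pc]
            pc += 1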

-Andrew

On Thu, Mar 31, 2011 at 8:33 AM, Antonio Cuni  wrote:

> On 31/03/11 14:28, Andrew Brown wrote:
> > In any case, I'm satisfied with the speed. It's still beaten by a BF to C
> > translator combined with gcc -O2 though, that'd be a tough case to beat. =)
>
> what happens if you combine the BF to C with gcc -O0 or -O1?
>
> Anyway, I think that if you feel like writing a post explaining your
> experience with using pypy and its jit for writing an interpreter, we could
> publish it on our blog.  I suppose it would be useful/interesting for other
> people as well.
>
> What do the others think?
>
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Timothy Baldridge
> Sure I'll write up a post. This was a lot of fun, and I think it's a great
> way to teach people how pypy works.

I'd love to read a post on this. Perhaps I'll get a few pointers that
I can use in my Clojure-pypy port.

Timothy
-- 
“One of the main causes of the fall of the Roman Empire was
that–lacking zero–they had no way to indicate successful termination
of their C programs.”
(Robert Firth)
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Andrew Brown
Here are three emails that accidentally didn't get sent to the list.

On Thu, Mar 31, 2011 at 10:41 AM, Dan Roberts  wrote:

> Hi Andrew,
> Did you ever try your interpreter with the 99 bottles program? I got my
> interpreter faster than the beef interpreter (~4s vs ~15s) on mandelbrot;
> however, even at that speed, both 'bf' and 'beef' trounced my interpreter
> on 99 bottles by an absurd amount. It seems like it probably was a problem
> with my code base. When I first saw you were working on this I meant to ask
> you to try 99bottles.bf and see if you had similar problems. I haven't had
> a chance to examine where the problem is coming from, though.
>
> Cheers,
> Dan
>

On Thu, Mar 31, 2011 at 11:02 AM, Andrew Brown  wrote:

> Hi Dan,
> Did you mean to send this to the list as well? I only ask because it's easy
> to hit "Reply" instead of "Reply All".
>
> Regardless, I have a 99 bottles program; in the comments it says it's
> written by Andrew Paczkowski. I haven't mentioned it just because it runs
> absurdly fast: 0.02 seconds or so (compared with 0.2s for running the py
> code on cpython), so I didn't consider it a good test. I wanted something
> that took a bit longer.
>
> I just searched for another and found one by Raphael Bois, but that runs in
> 0.04 seconds.
>
> Perhaps you're using a different version of this program that's less
> efficient and runs slower? (Or maybe this really is just that fast?)
>
> Also, the mandelbrot program that I included in my repo takes 8.4 seconds
> to run on my computer, not quite the 4 second time you're getting. (Have you
> published your interpreter anywhere? I'd like to look at it.) I have a
> feeling I've taken this interpreter as far as it will go without doing any
> more intelligent inspection of the bf code directly.
>
> -Andrew
>

On Thu, Mar 31, 2011 at 11:20 AM, Dan Roberts  wrote:

> Hey,
> Yeah, that's the second or third time I didn't reply to all lately :-/
> And my interpreter is on paste.pocoo.org somewhere; I can paste it again
> when I go home today. I suspect there's something wrong with it though:
> considering you're getting proper performance on 99bottles, it takes about 3
> minutes here! (On the same system where it wins on mandelbrot by >66%
> against 'beef' or bf, whichever one is faster.) One immediately obvious
> difference was your use of the bracket map, which I think is an awesome
> idea. I may adopt it; currently I calculate how far backwards/forwards to
> travel at "runtime" instead of preprocessing it. I made that calculation
> pure so that it would be constant folded by the JIT, but I suppose the 1000
> iterations before the JIT kicks in (per loop) could explain a large
> performance difference. I could probably combine both techniques and cache
> the results at runtime: by the second run it'll be a dict lookup, so it'll
> be jitted essentially the same, and I won't have to think about parsing to
> find matching braces :-)
>
> If you can think of a good way to bring this discussion back on the mailing
> list that'd be fine.
>
> Cheers,
> Dan
>
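For anyone following along: the bracket map being discussed is just a dict
built once from the program text, pairing each '[' with its matching ']' and
vice versa, so jumps become single lookups instead of runtime scans. A minimal
sketch (illustrative, not the exact function from my repo):

    def precompute_bracket_map(program):
        # Pair every '[' with its matching ']' so the interpreter can jump
        # in one step instead of scanning the program text at runtime.
        # Assumes the brackets in the program are well formed.
        bracket_map = {}
        stack = []
        for pc in range(len(program)):
            ch = program[pc]
            if ch == '[':
                stack.append(pc)
            elif ch == ']':
                start = stack.pop()
                bracket_map[start] = pc
                bracket_map[pc] = start
        return bracket_map

The cached-at-runtime variant Dan describes would fill the same kind of dict
lazily the first time each loop is taken; after that, every jump is a plain
dict lookup either way.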
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Dima Tisnek
On 31 March 2011 05:33, Antonio Cuni  wrote:
> On 31/03/11 14:28, Andrew Brown wrote:
>> In any case, I'm satisfied with the speed. It's still beaten by a BF to C
>> translator combined with gcc -O2 though, that'd be a tough case to beat. =)

What if bf code was really really large?

bf to c then gcc could take a hit, as it might thrash the cpu cache:
single-pass gcc doesn't know what a given program would actually do at
runtime.

The jit'd rpy version would only have one hotspot, always in cache, and might
be a little smarter too.

I suppose it's hard to beat two-pass (profile-guided optimization) compiled c
though.

>
> what happens if you combine the BF to C with gcc -O0 or -O1?
>
> Anyway, I think that if you feel like writing a post explaining your
> experience with using pypy and its jit for writing an interpreter, we could
> publish it on our blog.  I suppose it would be useful/interesting for other
> people as well.
>
> What do the others think?

I think it can be a great example. It's very educational ;-)

It could go into official docs/howto too.
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] The JVM backend and Jython

2011-03-31 Thread Maciej Fijalkowski
On Wed, Mar 30, 2011 at 12:11 PM, Antonio Cuni  wrote:
> On 30/03/11 19:37, fwierzbi...@gmail.com wrote:
>
>> My thoughts here are taking a very primitive step - that is run the
>> JVM translation and look at the generated Java - then see what needs
>> to be modified so that I could use the generated Java parser from
>> Jython. At this stage I would be using PyPy exactly the way I use
>> ANTLR now - as a parser generator. There wouldn't be any need at all
>> for calling into Java code (as far as I can think of).
>
> yes, I think it makes sense.
> Actually, as Leonardo says, we don't generate java code but assembler, which
> is converted to .class files by jasmin. However, it should not change
> anything.
>
>> I think if we
>> Jython developers get some experience with PyPY - we might be able to
>> help with the task of calling into Java from PyPy - since we know a
>> bit about that :)
>
> that would be extremely cool :-)
>
> Ok, so if Ademan tells me that he's not going to work on the ootype-virtualref
> branch, I'll try to finish the work so you can start playing with it.

Note to frank: this is kind of cool but only needed for the JIT,
otherwise it's a normal reference.

>
> ciao,
> Anto
> ___
> pypy-dev@codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev
>
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Andrew Brown
On Thu, Mar 31, 2011 at 2:09 PM, Dima Tisnek  wrote:

> What if bf code was really really large?
>

I've only been testing with the example programs included in my repo. The
mandelbrot and towers of hanoi examples are pretty big, though. If you can
find some larger examples, I'd like to try them.

> I think it can be a great example. It's very educational ;-)
>
> It could go into official docs/howto too.
>

Awesome! I'm working on writing up everything; it's turning out to be pretty
long. I'm assuming no prior PyPy knowledge in the readers though =)

Here are a few numbers from tests I just did.

python double-interpreted: > 78m (did not finish)
pypy-c (with jit) double-interpreted: 41m 34.528s
translated interpreter no jit: 45s
translated interpreter jit: 7.5s
translated direct to C, gcc -O0
  translate: 0.2s
  compile: 0.4s
  run: 18.5s
translated direct to C, gcc -O1
  translate: 0.2s
  compile: 0.85s
  run: 1.28s
translated direct to C, gcc -O2
  translate: 0.2s
  compile: 2.0s
  run: 1.34s

These were all running the mandelbrot program.

-Andrew
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Antonio Cuni
On 31/03/11 22:05, Andrew Brown wrote:

> python double-interpreted: > 78m (did not finish)
> pypy-c (with jit) double-interpreted: 41m 34.528s

this is interesting. We are beating cpython by more than 2x even in a "worst
case" scenario, because interpreters are, in theory, not a very good target
for tracing JITs.
However, it's not the first time we have seen this, so it might be that the
"interpreters are a bad target for tracing JITs" thing is just a legend :-)

> translated interpreter no jit: 45s
> translated interpreter jit: 7.5s
> translated direct to C, gcc -O0
>   translate: 0.2s
>   compile: 0.4s
>   run: 18.5s
> translated direct to C, gcc -O1
>   translate: 0.2s
>   compile: 0.85s
>   run: 1.28s
> translated direct to C, gcc -O2
>   translate: 0.2s
>   compile: 2.0s
>   run: 1.34s

these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than -O1
and -O2.  Pretty good, I'd say :-)

ciao,
anto
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] The JVM backend and Jython

2011-03-31 Thread Antonio Cuni
On 31/03/11 21:57, Maciej Fijalkowski wrote:

>> Ok, so if Ademan tells me that he's not going to work on the 
>> ootype-virtualref
>> branch, I'll try to finish the work so you can start playing with it.
> 
> Note to frank: this is kind of cool but only needed for the JIT,
> otherwise it's a normal reference.

well, no. Virtualrefs were introduced for the JIT, but they also need to be
supported by normal backends.  This is why translation is broken at the moment.

It is true that the implementation is straightforward, though (I suppose this
is what you meant originally :-))

ciao,
Anto
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Pypy custom interpreter JIT question

2011-03-31 Thread Alex Gaynor
On Thu, Mar 31, 2011 at 6:00 PM, Antonio Cuni  wrote:

> On 31/03/11 22:05, Andrew Brown wrote:
>
> > python double-interpreted: > 78m (did not finish)
> > pypy-c (with jit) double-interpreted: 41m 34.528s
>
> this is interesting. We are beating cpython by more than 2x even in a
> "worst case" scenario, because interpreters are, in theory, not a very
> good target for tracing JITs.
> However, it's not the first time we have seen this, so it might be that
> the "interpreters are a bad target for tracing JITs" thing is just a
> legend :-)
>
>
Well, the issue with tracing an interpreter is the large number of paths; a
brainfuck interpreter has relatively few paths compared to something like a
Python VM.




> > translated interpreter no jit: 45s
> > translated interpreter jit: 7.5s
> > translated direct to C, gcc -O0
> >   translate: 0.2s
> >   compile: 0.4s
> >   run: 18.5s
> > translated direct to C, gcc -O1
> >   translate: 0.2s
> >   compile: 0.85s
> >   run: 1.28s
> > translated direct to C, gcc -O2
> >   translate: 0.2s
> >   compile: 2.0s
> >   run: 1.34s
>
> these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than
> -O1 and -O2.  Pretty good, I'd say :-)
>
> ciao,
> anto
> ___
> pypy-dev@codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev
>

Alex

-- 
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

[pypy-dev] [PATCH] fix executing a file under sandbox by implementing fstat

2011-03-31 Thread Seth de l'Isle
I ran into a problem trying to follow the instructions to run a python
file under the sandbox version of pypy-c.  See the console log:
http://paste.pocoo.org/show/363348

I was following the instructions here:
http://codespeak.net/pypy/dist/pypy/doc/sandbox.html

I got some help from arigato, ronny and antocuni on IRC (see log
below) and they gave me the courage to hack on sandlib.py a little to
see if I could get things working.

http://www.tismer.com/pypy/irc-logs/pypy/%23pypy.log.20110331

The following patch tracks the virtual file system node that corresponds to
each virtual file descriptor, so that node.stat() can be used to implement
fstat the same way it is already used for stat.

Thanks!

diff -r 601862ed288e pypy/translator/sandbox/sandlib.py
--- a/pypy/translator/sandbox/sandlib.py    Mon Mar 28 13:12:49 2011 +0200
+++ b/pypy/translator/sandbox/sandlib.py    Thu Mar 31 16:13:41 2011 -0800
@@ -391,6 +391,7 @@
         super(VirtualizedSandboxedProc, self).__init__(*args, **kwds)
         self.virtual_root = self.build_virtual_root()
         self.open_fds = {}   # {virtual_fd: real_file_object}
+        self.fd_to_node = {}

     def build_virtual_root(self):
         raise NotImplementedError("must be overridden")
@@ -425,10 +426,17 @@
     def do_ll_os__ll_os_stat(self, vpathname):
         node = self.get_node(vpathname)
         return node.stat()
+
     do_ll_os__ll_os_stat.resulttype = s_StatResult

     do_ll_os__ll_os_lstat = do_ll_os__ll_os_stat

+    def do_ll_os__ll_os_fstat(self, fd):
+        node = self.fd_to_node[fd]
+        return node.stat()
+
+    do_ll_os__ll_os_fstat.resulttype = s_StatResult
+
     def do_ll_os__ll_os_isatty(self, fd):
         return self.virtual_console_isatty and fd in (0, 1, 2)

@@ -452,11 +460,14 @@
             raise OSError(errno.EPERM, "write access denied")
         # all other flags are ignored
         f = node.open()
-        return self.allocate_fd(f)
+        fd = self.allocate_fd(f)
+        self.fd_to_node[fd] = node
+        return fd

     def do_ll_os__ll_os_close(self, fd):
         f = self.get_file(fd)
         del self.open_fds[fd]
+        del self.fd_to_node[fd]
         f.close()

     def do_ll_os__ll_os_read(self, fd, size):
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] The JVM backend and Jython

2011-03-31 Thread Maciej Fijalkowski
On Thu, Mar 31, 2011 at 4:02 PM, Antonio Cuni  wrote:
> On 31/03/11 21:57, Maciej Fijalkowski wrote:
>
>>> Ok, so if Ademan tells me that he's not going to work on the 
>>> ootype-virtualref
>>> branch, I'll try to finish the work so you can start playing with it.
>>
>> Note to frank: this is kind of cool but only needed for the JIT,
>> otherwise it's a normal reference.
>
> well, no. Virtualrefs were introduced for the JIT, but they also need to be
> supported by normal backends.  This is why translation is broken at the 
> moment.
>
> It is true that the implementation is straightforward, though (I suppose this
> is what you meant originally :-))

Sure.

I was mostly saying "the complex part of the implementation for ootype
can be omitted if we skip the JIT part".

>
> ciao,
> Anto
>
___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev

Re: [pypy-dev] The JVM backend and Jython

2011-03-31 Thread Stefan Behnel
fwierzbi...@gmail.com, 30.03.2011 04:40:
> I've been thinking about the first steps towards collaboration between
> the Jython project and the PyPy project. It looks like it isn't going
> to be too long before we are all (CPython, PyPy, IronPython, Jython,
> etc) working on a single shared repository for all of our standard
> library .py code.

On a somewhat related note, the Cython project is pushing towards 
reimplementing parts of CPython's stdlib C modules in Cython. That would 
make it easier for other projects to use the implementation in one way or 
another, rather than having to reimplement and maintain it separately by 
following C code.

http://thread.gmane.org/gmane.comp.python.devel/122273/focus=122716

The advantage for other-than-CPython-Pythons obviously depends on the 
module. If it's just implemented in C for performance reasons (e.g. 
itertools etc.), it would likely end up as a Python module with additional 
static typing, which would make it easy to adapt. If it's using lots of 
stuff from libc and C I/O, or even from external libraries, the code itself 
would obviously be less useful, although it would likely still be easier to 
port changes/fixes.

Stefan

___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev