Hi.

I'm sitting in a room with Wichert right now and he told me that he
posted about my blog post in here, so that's where I think Michael
has seen it :)

Michael posted some comments on 
http://hannosch.blogspot.com/2008/07/project-messerschmidt-vs-nkotb.html
which I'd like to follow up on in here, since blog post comments are a
really bad place to have discussions. If there is a better place for
this discussion, please tell me.

On Jul 21, 9:52 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> is that test from the spitfire suite ?  I haven't looked at it, but
> their Mako numbers look a whole lot like Myghty, not Mako (Mako is
> roughly the same speed as Cheetah in reality, a tad slower usually).
>
> I haven't had the time to deal with spitfire, which will involve
> verifying that they are testing against Mako and not Myghty, and then
> spending the time to plug Psyco into Mako (should be a three liner) to
> see if that closes the gap (since they certainly aren't running pure
> python to get that kind of result).   Can you perhaps tell me if that
> suite is in fact using mako and not myghty ?

As I mentioned in the blog post, none of the tests are using Psyco.
Enabling it bumps up the results by another 30% to 50%. All tests
were run on a MacBook Pro 2.16 GHz Intel Core Duo with Python 2.4.
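
For completeness, enabling Psyco for such a run is just the generic
whole-program switch (assuming Psyco is installed; there is no
Mako-specific hook involved here):

# Enable Psyco for the whole process before running the benchmark.
import psyco
psyco.full()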

The code for the Mako test is from spitfire and looks like this:

from mako.template import Template

mako_tmpl = Template("""
<table>
  % for row in table:
    <tr>
      % for col in row.values():
        <td>${ col | h  }</td>
      % endfor
    </tr>
  % endfor
</table>
""")

The timeit measurement is applied to this call:

data = mako_tmpl.render(table=table)
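
For reference, here is a minimal sketch of how such a measurement can
be set up with timeit; the table shape (1000 rows of ten small
columns) and the iteration counts are my assumptions, not necessarily
the exact spitfire harness:

import timeit

setup = """
from mako.template import Template

mako_tmpl = Template('''
<table>
  % for row in table:
    <tr>
      % for col in row.values():
        <td>${ col | h }</td>
      % endfor
    </tr>
  % endfor
</table>
''')

# 1000 rows of ten integer columns each.
table = [dict(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, i=9, j=10)
         for _ in range(1000)]
"""

timer = timeit.Timer("mako_tmpl.render(table=table)", setup)
print min(timer.repeat(repeat=3, number=100))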

I installed mako via: "easy_install mako" and got version 0.2.2.

The generated code for z3c.pt is the following, with _out being a
list and _write the bound append method of that list.

def render(table, _context=None, target_language=None):
    global generation

    (_out, _write) = generation.initialize_stream()
    (_attributes, repeat) = generation.initialize_tal()
    (_domain, _negotiate, _translate) = generation.initialize_i18n()
    (_escape, _marker) = generation.initialize_helpers()
    _path = generation.initialize_traversal()

    _target_language = _negotiate(_context, target_language)
    _write('<table>\n')
    for row in table:
        _write('<tr>\n')
        for column in row.values():
            _write('<td>')
            _tmp1 = column
            _urf = _tmp1
            if isinstance(_urf, unicode):
                _write(_urf)
            elif _urf is not None:
                _write(_escape(_urf))
            _write('</td>')
        _write('</tr>')
    _write('</table>')

    return _out.getvalue()
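
For what it's worth, here is a sketch of what initialize_stream could
look like under that description; the class name is made up, but it
reconciles the list-with-bound-append description with the getvalue()
call at the end:

# Hypothetical sketch only; not the actual z3c.pt generation code.
class ListBuffer(list):
    """A plain list that can also hand back the joined result."""
    def getvalue(self):
        return ''.join(self)

def initialize_stream():
    out = ListBuffer()
    # returning the bound append avoids an attribute lookup per write
    return out, out.append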


spitfire's main trick is to first generate a Python abstract syntax
tree out of the template and then run multiple passes of various
optimizations over that tree, so it can optimize away even more.
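
Roughly the idea, with made-up node classes (this is not spitfire's
actual code, just an illustration of one such pass that merges
adjacent literal text nodes so the generated code does fewer writes):

class Text(object):
    def __init__(self, value):
        self.value = value

class Expression(object):
    def __init__(self, code):
        self.code = code

def merge_adjacent_text(nodes):
    # One optimization pass: collapse runs of literal text into a
    # single node.
    merged = []
    for node in nodes:
        if isinstance(node, Text) and merged and isinstance(merged[-1], Text):
            merged[-1] = Text(merged[-1].value + node.value)
        else:
            merged.append(node)
    return merged

def run_passes(nodes, passes=(merge_adjacent_text,)):
    # Re-run the passes until they stop shrinking the node list.
    while True:
        before = len(nodes)
        for optimization in passes:
            nodes = optimization(nodes)
        if len(nodes) == before:
            return nodes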

If someone is interested in it, I can probably try to dig up the
generated source code from spitfire as well.

Hanno
