Re: Could this expression parser be more 'Pythonic'?

2009-05-07 Thread Amr
Hi John,

Thanks for the tips, I will check them out.

--Amr


Could this expression parser be more 'Pythonic'?

2009-05-06 Thread Amr
Hello all,

I've been spending the last few weeks learning Python, and I've just
started to use it to write a simple BASIC compiler. I'm writing a
mathematical expression parser and wrote a function that would take a
string and split it into high level tokens.

The code can be found at http://pastebin.com/f757252ec (have a look at
the function tokenize).

I was interested to see if the tokenize function could be written in a
better way, so as to make better use of the Python language and
libraries. One thing that sprang to mind was whether it would be a
good idea to perhaps take advantage of regular expressions. I had a
look at a few compiler sites and regexps seem to come into that role
quite a bit. I've only done a little work on regexps and they seem
very powerful, but can easily get out of control!

So, any suggestions as to how to make better use of the language
writing something like this?

Thanks,

--Amr


Re: Could this expression parser be more 'Pythonic'?

2009-05-06 Thread John Machin
On May 7, 9:23 am, Amr <amrbek...@gmail.com> wrote:
> Hello all,
>
> I've been spending the last few weeks learning Python, and I've just
> started to use it to write a simple BASIC compiler. I'm writing a
> mathematical expression parser and wrote a function that would take a
> string and split it into high level tokens.
>
> The code can be found at http://pastebin.com/f757252ec (have a look at
> the function tokenize).

It's always a good idea to describe to yourself what you are actually
trying to achieve before you get too embroiled in the implementation
details.

Adding a few more tests like this:

if __name__ == '__main__':
    tests = """\
1 + 2 * 3
(1 + 2) * 3
1 + (2 * 3)
!...@#$%^
foo = bar + zot
foo=bar+zot
xyzzy(bar, zot + 2.34)

"""
    for test in tests.splitlines():
        test = test.strip()
        e = Expression(test)
        result = e.tokenize(e.expression)
        print "%r = %r" % (test, result)

produces this:
'1 + 2 * 3' = ['1', '+', '2', '*', '3']
'(1 + 2) * 3' = ['1 + 2', '*', '3']
'1 + (2 * 3)' = ['1', '+', '2 * 3']
'!...@#$%^' = ['!...@#$%^']
'foo = bar + zot' = ['foo', '=', 'bar', '+', 'zot']
'foo=bar+zot' = ['foo=bar', '+', 'zot']
'xyzzy(bar, zot + 2.34)' = ['xyzzy', 'bar, zot + 2.34']
'' = []

which indicates that your notions of what a token is and what you
might use them for must be rather interesting.

Most other folks' tokenisers would have no concept of bracket depth,
would not throw the parentheses away, and would return not only the
tokens but a classification as well e.g.

'xyzzy' ID
'(' LPAREN
'bar' ID
',' LISTSEP
'zot' ID
'+' OPER
'2.34' FLOAT_CONST
')' RPAREN
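For a concrete picture of that shape: the standard library's tokenize
module does this token-plus-classification job for Python source. It
won't lex your BASIC for you, so treat this purely as an illustration
of the output shape, assuming Python 3 for the io import:

import tokenize, token
from io import StringIO  # Python 3; on 2.x use StringIO.StringIO instead

# Print each token's classification name alongside its text.
expr = 'xyzzy(bar, zot + 2.34)'
for tok in tokenize.generate_tokens(StringIO(expr).readline):
    print("%s %r" % (token.tok_name[tok[0]], tok[1]))

which prints NAME 'xyzzy', OP '(', NAME 'bar', ... NUMBER '2.34',
OP ')' -- the same shape as the table above.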


> I was interested to see if the tokenize function could be written in a
> better way, so as to make better use of the Python language and
> libraries. One thing that sprang to mind was whether it would be a
> good idea to perhaps take advantage of regular expressions. I had a
> look at a few compiler sites and regexps seem to come into that role
> quite a bit. I've only done a little work on regexps and they seem
> very powerful, but can easily get out of control!
>
> So, any suggestions as to how to make better use of the language
> writing something like this?

There are various packages like pyparsing, plex, ply, ... which are
all worth looking at. However if you want to get stuck into the
details, you can lash up a quick and dirty lexer (tokeniser) using
regexps in no time flat, e.g.

an ID is matched by r"[A-Za-z_][A-Za-z0-9_]*"
an INT_CONST is matched by r"[0-9]+"
an OPER is matched by r"[-+/*]"
etc
so you set up a (longish) re containing all the alternatives
surrounded by capturing parentheses and separated by |.

r"([A-Za-z_][A-Za-z0-9_]*)|([0-9]+)|([-+/*])" etc

Hint: don't do it manually, use ")|(".join()
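For instance, a minimal sketch that just glues together the three
patterns above:

import re

parts = [r"[A-Za-z_][A-Za-z0-9_]*",   # ID
         r"[0-9]+",                   # INT_CONST
         r"[-+/*]"]                   # OPER
# Each alternative ends up in its own capturing group.
master_pat = re.compile("(" + ")|(".join(parts) + ")")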

and you just step through your input string:
1. skip over any leading whitespace
2. look for a match of your re starting at current offset
3. matchobj.lastindex gives you 1 if it's an ID, 2 if it's an
INT_CONST, etc (group numbering is 1-based)
4. save the slice that represents the token
5. adjust offset, loop around until you hit the end of the input

Hint: the order in which you list the alternatives can matter a lot
e.g. you better have <= before < otherwise it would regard <= as
comprising two tokens < and =
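
Putting those five steps together, a bare-bones sketch (the lex()
helper and the TOKEN_NAMES list are made up here for illustration,
and it only knows the three token classes above):

import re

TOKEN_NAMES = ["ID", "INT_CONST", "OPER"]   # parallel to the alternatives
parts = [r"[A-Za-z_][A-Za-z0-9_]*", r"[0-9]+", r"[-+/*]"]
master_pat = re.compile("(" + ")|(".join(parts) + ")")
whitespace = re.compile(r"\s*")

def lex(s):
    tokens = []
    pos = 0
    while pos < len(s):
        pos = whitespace.match(s, pos).end()   # 1. skip leading whitespace
        if pos == len(s):
            break
        m = master_pat.match(s, pos)           # 2. match at current offset
        if m is None:
            raise ValueError("lexical error at offset %d" % pos)
        kind = TOKEN_NAMES[m.lastindex - 1]    # 3. lastindex is 1-based
        tokens.append((kind, m.group()))       # 4. save the token slice
        pos = m.end()                          # 5. advance, loop to the end
    return tokens

print(lex("1 + 2 * 3"))
# [('INT_CONST', '1'), ('OPER', '+'), ('INT_CONST', '2'),
#  ('OPER', '*'), ('INT_CONST', '3')]

Growing it from there is mostly a matter of appending matching entries
to the two parallel lists, with the <=-before-< ordering caveat in mind.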

HTH,
John