I did try to see if I could get that to work, but I couldn't figure it
out. I'll see if I can play around more with that api.
So say I did investigate a little more to see how much work it would
take to adapt the re module to accept an iterator (while leaving the
current string api as another code path).
On Wed, 2 Feb 2005 22:22:27 -0500, rumours say that Daniel Bickett
<[EMAIL PROTECTED]> might have written:
>Erick wrote:
>> True, but it doesn't work with multiline regular expressions :(
>If your intent is for the expression to traverse multiple lines (and
>possibly match *across* multiple lines,) then, as far as I know, you
>have no choice but to load the whole file into memory.
Erick wrote:
Hello,
I've been looking for a while for an answer, but so far I haven't been
able to turn anything up yet. Basically, what I'd like to do is to use
re.finditer to search a large file (or a file stream), but I haven't
figured out how to get finditer to work without loading the entire file
into memory.
Erick wrote:
> True, but it doesn't work with multiline regular expressions :(
If your intent is for the expression to traverse multiple lines (and
possibly match *across* multiple lines,) then, as far as I know, you
have no choice but to load the whole file into memory.
--
Daniel Bickett
dbicke
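One middle ground the quoted reply doesn't mention: the `TypeError: buffer object expected` elsewhere in this thread hints that `re` will happily search any buffer-like object, and `mmap` provides one backed directly by the file, so the OS pages the contents in on demand instead of Python reading them all up front. A minimal sketch in current Python syntax (the filename is invented for illustration):

```python
import mmap
import re

# Write a small sample file (name is arbitrary for this sketch).
with open("blah.dat", "wb") as f:
    f.write(b"this\nis\nimportant\ndata")

# Map the file read-only. re searches the mapping like a string,
# including matches *across* line boundaries, without a full read().
with open("blah.dat", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Patterns must be bytes when searching a bytes-like buffer.
    words = [m.group().decode() for m in re.finditer(rb"\w+", mm)]
    mm.close()

print(words)  # ['this', 'is', 'important', 'data']
```

Note that virtual address space still has to cover the whole file, so this helps with huge files on 64-bit systems but is not a true streaming solution.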
Is it not possible to wrap your loop below within a loop doing
file.read([size]) (or readline() or readlines([size]),
reading the file a chunk at a time then running your re on a per-chunk
basis?
-ej
"Erick" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Ack, typo. What I meant was this:
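The per-chunk approach suggested above has one well-known caveat: a match can straddle a chunk boundary. A common fix is to hold back the tail of each chunk and retry it once more data arrives. A rough sketch (the function name, chunk size, and overlap are invented for illustration; patterns with lookbehind, anchors, or possible empty matches would need more care):

```python
import io
import re

def finditer_chunked(pattern, stream, chunk_size=1 << 16, overlap=256):
    """Yield matches of a compiled pattern from a file-like object,
    reading it chunk_size characters at a time.

    Matches that might continue into the next chunk are held back
    and retried once more data is read, so memory stays bounded by
    roughly the chunk size plus the longest match."""
    buf = stream.read(chunk_size)
    eof = not buf
    while not eof:
        chunk = stream.read(chunk_size)
        eof = not chunk
        buf += chunk
        # Don't trust matches in the last `overlap` characters yet --
        # they might straddle the boundary into the next chunk.
        limit = len(buf) if eof else max(len(buf) - overlap, 0)
        cut = limit
        for m in pattern.finditer(buf):
            if not eof and m.end() > limit:
                cut = m.start()  # retry this match with more data
                break
            yield m.group()
        buf = buf[cut:]  # keep only the unsearched tail

# Tiny chunks to exercise the boundary handling:
pat = re.compile(r"\w+")
stream = io.StringIO("this\nis\nimportant\ndata")
words = list(finditer_chunked(pat, stream, chunk_size=4, overlap=8))
print(words)  # ['this', 'is', 'important', 'data']
```

With a small overlap this stays in constant memory; the trade-off is that a single match longer than the overlap forces the buffer to grow until that match completes.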
True, but it doesn't work with multiline regular expressions :(
-e
--
http://mail.python.org/mailman/listinfo/python-list
Ack, typo. What I meant was this:
cat a b c > blah
>>> import re
>>> for m in re.finditer('\w+', file('blah')):
... print m.group()
...
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: buffer object expected
Of course, this works fine, but it loads the file completely into
memory.
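The working variant isn't quoted above, but it most likely means passing the file's *contents* rather than the file object. In current Python syntax (recreating the `cat a b c > blah` file with one word per line, an assumption about the elided shell setup):

```python
import re

# Stand-in for `cat a b c > blah` -- assume each of a, b, c held one word.
with open("blah", "w") as f:
    f.write("a\nb\nc\n")

# Passing a string (f.read()) instead of the file object satisfies
# re.finditer -- at the cost of reading the whole file into memory,
# which is exactly what the thread is trying to avoid.
with open("blah") as f:
    groups = [m.group() for m in re.finditer(r"\w+", f.read())]

print(groups)  # ['a', 'b', 'c']
```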
The following example loads the file into memory only one line at a
time, so it should suit your purposes:
>>> data = file( "important.dat" , "w" )
>>> data.write("this\nis\nimportant\ndata")
>>> data.close()
now read it
>>> import re
>>> data = file( "important.dat" , "r" )
>>> line = data.readline()
>>> while line:
...     for m in re.finditer( "\w+" , line ):
...         print m.group()
...     line = data.readline()
...
Hello,
I've been looking for a while for an answer, but so far I haven't been
able to turn anything up yet. Basically, what I'd like to do is to use
re.finditer to search a large file (or a file stream), but I haven't
figured out how to get finditer to work without loading the entire file
into memory.