On Tue, 24 Apr 2007 at 18:55 +0000, Pauli Virtanen wrote:
> On Tue, 24 Apr 2007 20:04:25 +0200, Francesc Altet wrote:
> 
> > On Tue, 24 Apr 2007 at 11:39 -0400, David Huard wrote:
> >> Hi, 
> >> 
> >> I manage a dynamic table where I need to flush the table each time a
> >> row is appended (because the table can be read at any time), and I
> >> spent a lot of time trying to understand why my code didn't work. It
> >> turns out that whenever table.flush() is called, a new instance of
> >> table.row must be created, or else you get some weird errors (which
> >> are difficult to track down, to boot...). I think the documentation
> >> should say something about this behavior. Even better, maybe, would
> >> be to raise some kind of error whenever a flushed row instance gets
> >> appended. Right now, what happens is that n rows of zeros are
> >> appended, where n is the number of times append has been called. So
> >> basically, nrows grows as 0, 1, 3, 6, 10, 15, ...
> > 
> > I don't quite understand what you mean. Please always try to include
> > code so that we can see better what you are trying to do.
> 
> Hi,
> 
> I have also hit this problem, but had forgotten about it. I have filed
> a bug report with an example program:
> 
>       http://www.pytables.org/trac/ticket/65

OK, we will look into this after PyTables 2.0 rc1 is out (most
probably tomorrow).
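
In the meantime, for anyone reading the archives, here is a minimal
sketch of the pattern that I understand David is describing, together
with the workaround of re-fetching table.row after every flush. The
file name, table name and column are made up for the example:

    import tables

    # A made-up one-column table description for the example.
    class Record(tables.IsDescription):
        value = tables.Int32Col()

    f = tables.openFile("example.h5", mode="w")
    table = f.createTable("/", "data", Record)

    row = table.row            # Row buffer, fetched once
    for i in range(5):
        row['value'] = i
        row.append()
        table.flush()          # the Row instance is stale after this
        row = table.row        # workaround: re-fetch the Row instance

    f.close()

Without the re-fetch in the last line of the loop, the stale Row
instance is (if I understand the report correctly) what ends up
appending the spurious rows of zeros.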

Thanks!

-- 
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works, 
www.carabos.com   |  I haven't tested it. -- Donald Knuth

