Hi,

I wrote a small program to parse a log file, generate some objects
from it, and then store these objects in an SQL database using the
ORM.

It takes 20 seconds to process a 1000-line file. I was wondering
whether that's intrinsic to this level of abstraction, or whether
something can be done about it, either in my code or in SQLAlchemy
itself.
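
In outline, the script does something like this (heavily simplified;
the schema and names here are just illustrative, the real code is in
the tarball linked below):

    from sqlalchemy import create_engine, MetaData, Table, Column, \
        Integer, String
    from sqlalchemy.orm import mapper, sessionmaker

    engine = create_engine('sqlite:///log.db')
    metadata = MetaData()

    # illustrative schema, not the real one
    entries = Table('entries', metadata,
        Column('id', Integer, primary_key=True),
        Column('message', String(200)))
    metadata.create_all(bind=engine)

    class Entry(object):
        def __init__(self, message):
            self.message = message

    mapper(Entry, entries)

    # 0.4-style transactional session
    Session = sessionmaker(bind=engine, transactional=True)
    session = Session()

    for line in open('sample.log'):
        session.save(Entry(line.rstrip()))  # one object per log line

    session.commit()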

A profile using SQLAlchemy 0.4 beta 6 reveals these guys to be the
culprits:

         20079901 function calls (19993507 primitive calls) in 105.520 CPU seconds

   Ordered by: internal time

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    692199   33.020    0.000   61.760    0.000 attributes.py:870(_is_modified)
  10726906   23.250    0.000   23.250    0.000 attributes.py:334(check_mutable_modified)
      2864    4.120    0.001    6.640    0.002 attributes.py:722(values)
    693127    3.770    0.000    5.410    0.000 attributes.py:841(managed_attributes)
    692199    3.210    0.000   64.970    0.000 attributes.py:867(is_modified)
   1580968    2.980    0.000    2.980    0.000 :0(append)
      1432    2.520    0.002   70.760    0.049 unitofwork.py:154(locate_dirty)
    740892    1.990    0.000    2.200    0.000 weakref.py:218(__getitem__)
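
If I'm reading this right, nearly all the time goes into flush-time
dirty checking: locate_dirty() calls is_modified() on every attached
instance, and the check_mutable_modified() calls dominate. Would
flushing in chunks and clearing the session, roughly like below, be
expected to help, or is that cost intrinsic? (This is just a guess on
my part.)

    # guess: flush in chunks and clear the session, so that each
    # locate_dirty() pass only has to scan a handful of instances
    batch_size = 100
    for i, line in enumerate(open('sample.log')):
        session.save(Entry(line.rstrip()))
        if (i + 1) % batch_size == 0:
            session.flush()
            session.clear()  # drop flushed instances from the session

    session.flush()
    session.commit()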

My scripts and a sample logfile can be found at:
http://users.ugent.be/~pbienst/pub/logparse.tgz

Thanks for any light you can shed on this!

Peter

