Hi,
 I have a Python script that uses SqlSoup to iterate over some rows of a
table and modify them. The script runs really slowly, so I ran cProfile to
see where the bottleneck is. This is what I got:

         20549870 function calls (20523056 primitive calls) in 34.800 CPU seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   34.810   34.810 <string>:1(<module>)
        1    0.002    0.002   34.810   34.810 {execfile}
        1    0.095    0.095   34.808   34.808 zimbra-archive.py:34(<module>)
       20    0.000    0.000   25.618    1.281 sqlsoup.py:504(commit)
       20    0.000    0.000   25.618    1.281 scoping.py:128(do)
       20    0.000    0.000   25.618    1.281 session.py:631(commit)
    40/20    0.000    0.000   25.617    1.281 session.py:361(commit)
        20    1.262    0.063   25.565    1.278 session.py:290(_remove_snapshot)
    636600   20.002    0.000   23.254    0.000 state.py:220(expire_attributes)
      100    0.021    0.000    5.035    0.050 query.py:1447(all)
    32048    0.042    0.000    3.180    0.000 query.py:1619(instances)
      504    0.001    0.000    3.139    0.006 sqlsoup.py:549(__getattr__)
      504    0.003    0.000    3.138    0.006 sqlsoup.py:535(entity)

The problem is commit: almost all of the script's time is spent in commit.
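
For reference, the loop looks roughly like this (a simplified sketch; the
import path, table and column names are placeholders, not the real schema):

    from sqlalchemy.ext.sqlsoup import SqlSoup

    db = SqlSoup('postgresql://user:pass@localhost/zimbra')

    # Load the rows, modify an attribute on each one, commit in batches.
    rows = db.messages.all()
    for i, row in enumerate(rows):
        row.archived = True
        if i % 1000 == 0:
            db.commit()
    db.commit()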

Is there a faster way to modify every row? Or should I use raw SQL?
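
By raw SQL I mean skipping the ORM entirely and issuing one bulk statement
through the engine, something like this (again just a sketch with made-up
names):

    from sqlalchemy import create_engine

    engine = create_engine('postgresql://user:pass@localhost/zimbra')
    # One UPDATE instead of loading and expiring every row in Python.
    engine.execute("UPDATE messages SET archived = true WHERE archived = false")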

Regards,
  Diego

-- 
Diego Woitasen
XTECH
