Using Postgres for the first time, I was dismayed to find it actually 
slower than SQLite.  Of course, the standard Postgres install is largely 
untuned, so I spent some time adjusting settings to improve it, but I 
still could not get the speed I needed from a rather large 
"update_or_insert" transaction.  I decided to approach it with standard 
DAL commands instead, and the result was about 2.5-3 times faster than 
the built-in update_or_insert method.
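
For reference, the built-in call I was replacing looks roughly like the 
line below (the table and field names are only illustrative, not my 
actual schema):

  # Built-in approach: Table.update_or_insert takes a query as the
  # first argument and the column values as keyword arguments.
  db.product.update_or_insert(db.product.sku == 'A100',
                              sku='A100', price=9.99, stock=5)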

My strategy is to attempt the update first and check the number of rows 
it updated.  If that count is zero, the key was not found and an insert 
is needed.  In the pseudo-code below, "expr" is a query expression that 
selects the row to be updated, "values" is a dictionary of column names 
and values, and "table" is the table being updated or inserted into:


  count = db(expr).update(**values)
  if count == 0:
      table.insert(**values)

  # Or if you prefer...

  if not db(expr).update(**values):
      table.insert(**values)
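
Here is a more complete, self-contained sketch of the same idea.  The 
table, field names, and connection string are illustrative only, and 
depending on your web2py version the import may come from gluon.dal or 
the standalone pydal package:

  # Standalone sketch: a "product" table keyed by a unique "sku" column.
  from gluon.dal import DAL, Field

  db = DAL('sqlite://storage.sqlite')
  db.define_table('product',
                  Field('sku', unique=True),
                  Field('price', 'double'),
                  Field('stock', 'integer'))

  values = dict(sku='A100', price=9.99, stock=5)
  expr = (db.product.sku == values['sku'])

  # Try the update first; insert only when no existing row matched.
  if not db(expr).update(**values):
      db.product.insert(**values)
  db.commit()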


Using Postgres, my massive update went from around 60 seconds with the 
built-in update_or_insert method down to about 22 seconds with the 
approach above.  The SQLite time also dropped from about 30 seconds to 
about 18 seconds.  I do not have enough experience with other databases 
and web2py to know whether this is a general result or whether it only 
applies to my schema and database choice.

-- Joe B.
