In a transaction I am updating values, then querying, then updating
some more, and so on, eventually committing the whole batch at once.

If a value being updated is a dictionary stored in a PickleType column,
then depending on the data in the dictionary I may find that it gets
flushed to the database again before every subsequent query, even though
the value is never modified again. So I will see something like this:

  query for A
  update A
  query for B (A gets flushed by SA before the query)
  update B
  query for C (A and B get flushed by SA before the query)
  update C
  query for D (A, B and C get flushed by SA before the query)
  etc.

I.e., O(n*n) run-time, which is a problem as my batch job has 1000
items in it!
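
In case it helps to picture it, the batch loop looks roughly like the
sketch below (Item, keys, recompute and engine are just placeholders,
not my real code); with autoflush on, each query flushes whatever is
already dirty in the session:

  from sqlalchemy.orm import sessionmaker

  Session = sessionmaker(bind=engine, autoflush=True)
  session = Session()

  for key in keys:                                  # ~1000 items in the real job
      item = session.query(Item).filter_by(id=key).one()   # autoflush fires here
      item.data = recompute(item.data)              # 'data' is the PickleType dict
  session.commit()                                  # the single flush/commit I want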

I think the problem may be that, for some dictionaries that are to be
pickled, PickleType.copy_value returns something that does not equal
the original value according to PickleType.compare_values.

E.g.
>>> from sqlalchemy.types import PickleType
>>> p = PickleType()
>>> from decimal import Decimal
>>> d1 = {'Amt': Decimal("6420.000000000000"), 'Broker': 'A', 'F': 'B'}
>>> d2 = {'Amt0': Decimal("6420.000000000000"), 'Broker': 'A', 'F': 'B'}
>>> p.compare_values(p.copy_value(d1), d1)
False
>>> p.compare_values(p.copy_value(d2), d2)
True

Yet:
>>> p.copy_value(d1) == d1
True
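
My guess (I haven't checked the rc4 source) is that compare_values may
be comparing pickled byte strings rather than the unpickled objects;
equal dicts don't necessarily pickle to identical byte strings, which
would explain getting False above even though the copy compares equal.
A rough check of that guess, using the same d1 as above:

  import pickle
  from decimal import Decimal

  d1 = {'Amt': Decimal("6420.000000000000"), 'Broker': 'A', 'F': 'B'}
  copy = pickle.loads(pickle.dumps(d1, pickle.HIGHEST_PROTOCOL))

  print copy == d1   # the objects themselves compare equal
  # If the next line prints False while the one above prints True, the
  # byte-string comparison would be where the discrepancy comes from.
  print (pickle.dumps(copy, pickle.HIGHEST_PROTOCOL) ==
         pickle.dumps(d1, pickle.HIGHEST_PROTOCOL))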

I can work around the issue by setting comparator=operator.eq on
PickleType; however, the current behaviour seems to be broken.
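
For reference, this is roughly how I'm applying the workaround (the
table and column names here are just illustrative):

  import operator
  from sqlalchemy import MetaData, Table, Column, Integer
  from sqlalchemy.types import PickleType

  metadata = MetaData()
  items = Table('items', metadata,
      Column('id', Integer, primary_key=True),
      # comparator=operator.eq so values are compared with plain ==
      Column('data', PickleType(comparator=operator.eq)),
  )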

I can post a test case that shows objects being flushed multiple times
if you like. I'm running SA 0.5.0rc4 on OS X 10.5.5 (using Apple's
Python 2.5.1 interpreter).

Ta,
Jon.


