hey oraclers -

turns out I was underestimating cx_Oracle in the last go-around with
storing BLOBs, and it actually is possible to store any number of
BLOBs in one row at any size (or at least I've tested into the tens
of thousands of bytes).  During an INSERT there was a glitch whereby
it wasn't pulling in cx_Oracle.BLOB for the setinputsizes() call and
was instead pulling in cx_Oracle.BINARY from the base type object.
the auto_setinputsizes logic was also broken for some types, so it
was largely useless.  and my silly workaround with the "RAWTOHEX"
conversion was only needed because it was stuck on BINARY and wasn't
using BLOB, so that's out.
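just to make the before/after concrete, here's a minimal sketch of the
bad bind-type lookup (names hypothetical, and the cx_Oracle type
constants are stubbed as plain classes so it runs without an Oracle
client installed):

```python
# Stand-ins for the cx_Oracle type constants, stubbed here so this
# sketch runs without cx_Oracle / an Oracle client installed.
class BINARY: pass   # raw bind -- values capped at roughly 4K
class BLOB: pass     # LOB bind -- effectively unlimited size
class CLOB: pass

# Hypothetical before/after of the bug: a Binary column's bind type
# fell through to the base type's BINARY instead of using BLOB.
def bind_type_before(column_type_name):
    return {"Binary": BINARY, "CLOB": CLOB}.get(column_type_name)

def bind_type_after(column_type_name):
    return {"Binary": BLOB, "CLOB": CLOB}.get(column_type_name)

print(bind_type_before("Binary") is BINARY)  # True -- the bug
print(bind_type_after("Binary") is BLOB)     # True -- the fix
```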

so as of rev 2402....it's fixed!  the oracle dialect will now have
auto_setinputsizes default to True (meaning it calls
cursor.setinputsizes() with the appropriate type for most columns
before executing), it will use cx_Oracle.BLOB for BLOBs and
cx_Oracle.CLOB for CLOBs....and then you can INSERT any number of
columns at once, each with a size well over 4K.
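roughly, the shape of what auto_setinputsizes does is this (a hedged
sketch, not the actual dialect code: the recording cursor and helper
function are made up, and the type constants are stubbed so it runs
without a database):

```python
# Stubs for the cx_Oracle LOB type constants.
class BLOB: pass
class CLOB: pass

# A recording cursor standing in for a real cx_Oracle cursor, which
# accepts keyword arguments to setinputsizes() for named binds.
class RecordingCursor:
    def __init__(self):
        self.input_sizes = None
        self.executed = None
    def setinputsizes(self, **kw):
        self.input_sizes = kw
    def execute(self, stmt, params):
        self.executed = (stmt, params)

def auto_set_and_execute(cursor, stmt, params, bind_types):
    # bind_types maps bind-param names to cx_Oracle types,
    # e.g. {"photo": BLOB}; declare them all, then execute.
    cursor.setinputsizes(**bind_types)
    cursor.execute(stmt, params)

cur = RecordingCursor()
big = b"\x00" * 50000          # well past the ~4K raw-bind limit
auto_set_and_execute(
    cur,
    "INSERT INTO t (photo, doc) VALUES (:photo, :doc)",
    {"photo": big, "doc": "x" * 50000},
    {"photo": BLOB, "doc": CLOB},
)
print(cur.input_sizes["photo"] is BLOB)   # True
```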

the fetch side was already working, since our special ResultProxy
works around the fetchall() issue.
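for anyone who missed that thread, the fetch-side trick amounts to
reading each LOB as its row comes off the cursor, since cx_Oracle's
LOB locators go stale once the cursor advances.  a toy sketch (the
FakeLOB class and helper are mine, standing in for cx_Oracle's LOB
object and the ResultProxy's row processing):

```python
# Stand-in for a cx_Oracle LOB locator, which exposes a read() method.
class FakeLOB:
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data

def eager_read_row(row):
    # Materialize any LOB locator in the row immediately, before the
    # next fetch can invalidate it -- the gist of the fetchall() fix.
    return tuple(col.read() if isinstance(col, FakeLOB) else col
                 for col in row)

row = (1, FakeLOB(b"big blob payload"))
print(eager_read_row(row))   # (1, b'big blob payload')
```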


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---
