Hi,

I'm wondering how I might get at PostgreSQL's large object type (lob).  
It's not to be confused with TOASTed bytea, which is currently limited to 
1 GiB and in practice much lower (for me, under 400 MiB) - the lob is a 
special table + API designed to handle very large binary objects, from a 
few hundred MiB up to, more recently, TiB. I don't see appropriate 
definitions anywhere and can't find any real mention of it in SQLAlchemy. 
psycopg2 supports it under the name lobject, and it provides a file-like 
interface to the lob, which is a good mapping, since with libpq you use 
lo_creat, lo_seek, lo_write and lo_read to work with these beasts.
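For context, roughly what the file-like interface buys you - a sketch only, 
the helper name is mine and I haven't polished this:

```python
def store_file_as_lob(conn, path, chunk_size=1 << 20):
    # Sketch: conn.lobject(0, "wb") asks the server for a fresh oid
    # (lo_creat under the hood) and returns a file-like handle.
    lob = conn.lobject(0, "wb")
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)   # stream 1 MiB at a time
            if not chunk:
                break
            lob.write(chunk)             # lo_write: no bytea-style ceiling
    lob.close()
    return lob.oid                       # the oid to store in a referencing row
```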

I took a look at UserDefinedType, but its bind_processor doesn't 
distinguish between inserts and updates.  On an insert, you'd use an oid 
allocated by lo_creat within the transaction; on an update, you'd use 
lo_trunc/lo_write.  As one more constraint, you must be inside a 
transaction before any of these functions are usable.  To reference large 
objects (they are explicitly out-of-table storage), the PostgreSQL-specific 
oid type is used, which allows garbage collection, referential integrity 
checks, etc.
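To make the insert-vs-update split concrete, here's the shape of what a 
bind step would need to do - a sketch under my own names (save_lob is not 
an API), assuming psycopg2's lobject and that a transaction is already open:

```python
def save_lob(conn, data, oid=None):
    # Must run inside a transaction - the large object API requires it.
    if oid is None:
        # insert path: lobject(0, ...) allocates a fresh oid (lo_creat)
        lob = conn.lobject(0, "wb")
    else:
        # update path: reopen the existing object, lo_trunc, then rewrite
        lob = conn.lobject(oid, "wb")
        lob.truncate()
    lob.write(data)
    lob.close()
    return lob.oid      # the value the referencing row's oid column stores
```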

I'll also mention that other tables reference these large objects via 
oids - something like smart pointers in Postgres.
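If it helps, the contrib "lo" module is what gives the smart-pointer 
behaviour on the server side: a domain over oid plus a lo_manage trigger 
that unlinks the orphaned large object when the referencing row's pointer 
goes away. A rough DDL sketch (table and trigger names are mine; 
EXECUTE FUNCTION needs PG 11+, older versions spell it EXECUTE PROCEDURE):

```python
# DDL one would feed to engine.execute()/conn.execute() - sketch only.
DDL = """\
CREATE EXTENSION IF NOT EXISTS lo;
CREATE TABLE document (
    id      serial PRIMARY KEY,
    title   text,
    content lo          -- an oid referencing a large object
);
CREATE TRIGGER t_content
    BEFORE UPDATE OR DELETE ON document
    FOR EACH ROW EXECUTE FUNCTION lo_manage(content);
"""
```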

It'd be great to plug large objects into sqlalchemy properly - but can it 
be done?

-Jason

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
