[sqlalchemy] Problems with typeadapter on HSTORE

2015-07-29 Thread Morgan McClure
 

I’ve been trying to build a simple type adapter based on the HSTORE data 
type in Postgres to do some value coercion, and I’ve been having some 
problems.


I want to coerce the individual values inside my dictionary before they are 
put into the Postgres HSTORE column, and when I select keys, I want to 
coerce them back.


What methods do I need to subclass to intercept subkey/indexing operations 
as well as full assignment operations?


Normally, for a simpler datatype, I’d just implement process_bind_param and 
process_result_value, but that hasn’t been working the way I expect with 
HSTORE.


If it makes the problem simpler I don’t need the MutableExtension to work 
so I only need to coerce data on the first assignment to the column.


I'll primarily be selecting data like
Query(MyObjectClass.MyHSTOREType['somekey'])

but occasionally, I may be doing

Query(MyObjectClass.MyHSTOREType)

and would like my postgres-python coercion to work in both scenarios.
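For the single-key case, one option is to coerce in SQL rather than in Python, since indexing an HSTORE column yields a text expression. A sketch (the model and column names are hypothetical; no database connection is needed to see the compiled SQL):

```python
from sqlalchemy import Column, Integer, cast
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class MyObjectClass(Base):  # hypothetical model
    __tablename__ = "myobject"
    id = Column(Integer, primary_key=True)
    data = Column(HSTORE)   # stands in for MyHSTOREType above

# Indexed access compiles to `data -> 'somekey'` and is plain text at the
# SQL level, so coerce it there with a CAST:
expr = cast(MyObjectClass.data['somekey'], Integer)
print(expr.compile(dialect=postgresql.dialect()))
```

This covers the per-key query; a full-column select still goes through whatever type the column is declared with.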


Thanks in advance

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Unique Many-to-Many

2014-03-18 Thread Morgan McClure
Yeah, I confirmed set collections don't actually fix it; I guess setting a 
rollback point is required.
Is there any chance this is a difference between the way MySQL and Postgres 
do table locking?
The collections in question have lazy=dynamic set, so I'm wondering if 
previously it wasn't a problem because the DB was locking the whole table.



On Monday, March 17, 2014 8:49:57 PM UTC-7, Michael Bayer wrote:


 On Mar 17, 2014, at 10:38 PM, Morgan McClure mcclure...@gmail.com wrote: 

  I'm trying to make a many-to-many relationship using sqlalchemy 0.9 and 
 postgres 
  
  If I put a unique constraint on the join table, and I add a duplicate, I 
 get an integrity error. 
  If I change the collection class to set, it won't double commit, however 
 a set prevents me from using things like order_by. 
  
  In my scenario, I'm massively multi-threaded and so the check-before 
 commit methodology won't work (proven it breaks with 24 processes). 
  Is it possible to get a nice elegant solution to this without defining a 
 custom collection_class? 
  
  I believe this is a regression (enhancement?) from version 0.8, but on 
 0.8 I was using mysql and now I'm using postgres. 

 I'm not seeing how this is any kind of change from 0.8 to 0.9, or even 
 from 0.7, 0.6, or 0.5; a list will certainly allow duplicates that will 
 give you integrity errors, and a set certainly won't. Using many 
 processes, of course, you can't coordinate those in memory with a set; 
 only the database knows the right answer. 

 The approach here, unfortunately, is to use traditional means of adding new 
 rows while checking for an existing one. That means either emitting a 
 SELECT first and ensuring adequate coordination between these 24 processes 
 using transaction isolation or locks, or using a simple optimistic approach 
 where you start a savepoint (begin_nested()), attempt the operation, catch 
 IntegrityError, and then continue with the row now known to be already 
 present.
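That optimistic savepoint approach can be sketched end to end. This uses an in-memory SQLite table with a unique constraint purely for illustration (the table and helper names are made up), plus the pysqlite transaction workaround from the SQLAlchemy docs so SAVEPOINT behaves reliably:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, event, select)
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")


# pysqlite workaround from the SQLAlchemy docs: let SQLAlchemy control
# transaction scope so SAVEPOINT works as expected.
@event.listens_for(engine, "connect")
def _no_autobegin(dbapi_conn, record):
    dbapi_conn.isolation_level = None


@event.listens_for(engine, "begin")
def _emit_begin(conn):
    conn.exec_driver_sql("BEGIN")


metadata = MetaData()
tags = Table(
    "tags", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String, unique=True),  # duplicates raise IntegrityError
)
metadata.create_all(engine)


def add_tag(session, name):
    """Attempt the insert inside a SAVEPOINT; a duplicate rolls back
    only to the savepoint, leaving the session usable."""
    try:
        with session.begin_nested():  # SAVEPOINT
            session.execute(tags.insert().values(name=name))
    except IntegrityError:
        pass  # row already present; carry on


session = Session(engine)
add_tag(session, "red")
add_tag(session, "red")   # duplicate is absorbed by the savepoint
session.commit()
rows = session.execute(select(tags.c.name)).fetchall()  # one row: ("red",)
```

With many processes the database still arbitrates: the second writer hits the unique constraint, the savepoint rollback absorbs it, and the transaction continues.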



[sqlalchemy] Unique Many-to-Many

2014-03-17 Thread Morgan McClure
I'm trying to make a many-to-many relationship using sqlalchemy 0.9 and 
postgres

If I put a unique constraint on the join table and I add a duplicate, I 
get an integrity error.
If I change the collection class to set, it won't double-commit; however, a 
set prevents me from using things like order_by.

In my scenario I'm massively multi-threaded, so the check-before-commit 
methodology won't work (proven: it breaks with 24 processes).
Is it possible to get a nice, elegant solution to this without defining a 
custom collection_class?

I believe this is a regression (enhancement?) from version 0.8, but on 0.8 
I was using MySQL and now I'm using Postgres.

