[sqlalchemy] RelationshipProperty.info in 0.7.x branch.

2012-10-17 Thread Stefano Fontanelli


Hi Michael,
do you plan to backport the RelationshipProperty.info attribute to SQLA 0.7?
I read the source code and found Column.info, but it seems there is no 
'info' attribute on RelationshipProperty.
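
For context, here is roughly what we do with Column.info today, and what I
would hope to do on the relationship side (the relationship-side access
below is only my guess at the API, pending the backport; the names are
made up):

from sqlalchemy import Column, Integer, ForeignKey

# Column.info already works in 0.7: arbitrary per-column metadata.
owner_id = Column('owner_id', Integer, ForeignKey('owners.id'),
                  info={'label': 'Owner'})

# Hoped-for equivalent on a relationship's property (the .info attribute
# here is exactly the feature being asked about):
#
#     SomeClass.owner.property.info.get('label')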


Best Regards,
Stefano.


--
Stefano Fontanelli
Asidev S.r.l.
Viale Rinaldo Piaggio, 32 - 56025 Pontedera (Pisa)
Tel. (+39) 333 36 53 294
Fax. (+39) 0587 97 01 20
E-mail: s.fontane...@asidev.com
Skype: stefanofontanelli
Twitter: @stefontanelli
LinkedIn: http://it.linkedin.com/in/stefanofontanelli




Re: [sqlalchemy] Computed Columns

2012-10-17 Thread ThereMichael
The distance function should be this:

# great-circle distance in km (spherical law of cosines); the origin is
# bound as the 'origin_lat' / 'origin_lng' parameters:
distance_function = (
    6371
    * func.acos(
        func.cos(func.radians(bindparam('origin_lat')))
        * func.cos(func.radians(places_table.c.latitude))
        * func.cos(func.radians(places_table.c.longitude)
                   - func.radians(bindparam('origin_lng')))
        + func.sin(func.radians(bindparam('origin_lat')))
        * func.sin(func.radians(places_table.c.latitude))
    )
)

(We're taking the acos of the whole shebang, not just the origin_lat)
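
A rough usage sketch, assuming the places_table and a checked-out
connection from earlier in the thread (the 'conn' name and the sample
coordinates are just placeholders):

from sqlalchemy import select

# Label the expression so we can order by it; the origin coordinates are
# bound at execution time through the bindparams above.
distance = distance_function.label('distance')
stmt = select([places_table, distance]).order_by(distance)

rows = conn.execute(stmt, origin_lat=45.4642, origin_lng=9.1900).fetchall()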




Re: [sqlalchemy] Pool_recycle in Oracle (aka "Oracle has gone away")

2012-10-17 Thread Ben Hitz
That's not entirely accurate.  The Flask-SQLAlchemy extension checks out a 
connection from the connection pool at request start and returns it at request 
end.  The pool is responsible for the lifecycle of database connections and 
can be configured to deal with this.  To deal with connections that time out 
after sitting idle for a certain period, use the pool_recycle option: 
http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html?highlight=pool_timeout#setting-pool-recycle.
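
A minimal sketch of that option (the Oracle URL below is only a placeholder):

from sqlalchemy import create_engine

# Connections older than 1800 seconds are invalidated at checkout and
# transparently replaced with a fresh connection.
engine = create_engine('oracle://scott:tiger@dsn', pool_recycle=1800)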
 

Oh thanks, I hadn't dug that deeply into FSQLA yet.  This is narrowing 
down the issue, though.  Our anecdotal experience is that pool_recycle is 
not doing anything with Oracle... hopefully I'll have a test case (or, 
ideally, a "never mind") follow-up in a few days.

Am I correct in assuming that the SQLA connection pool is wholly independent of 
Oracle's connection pooling (which we have disabled)?

Ben








[sqlalchemy] changes flushed for expunged relationships

2012-10-17 Thread Kent
The attached script fails with: sqlalchemy.exc.InvalidRequestError: 
Instance '<Bug at 0x1e6f3d10>' has been deleted.  Use the make_transient() 
function to send this object back to the transient state.

While this example is somewhat convoluted, I have a few questions about 
sqlalchemy behavior here:

1) At the session.flush(), even though the Rock and the bugs relationship 
have been expunged, the pending delete is still issued to the database.  
Would you expect/intend sqlalchemy to delete even after the expunge()?

2) After the flush(), shouldn't the history of the 'bugs' relationship have 
been updated to reflect the statement issued to the database?  (See print 
statement)

3) The InvalidRequestError is only raised if the 'bugs' relationship has a 
backref, otherwise it isn't raised.  Any idea why?

4) Don't hate me for asking: is there a workaround?  I'm trying to 
understand this scenario since, in a rare case, it does come up; the only 
avenue I can see so far is sketched below.
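
That sketch is based only on what the error message itself suggests, so
treat it as an untested guess rather than an answer ('deleted_bug' is a
stand-in for the instance the flush deleted):

from sqlalchemy.orm import make_transient

# Untested sketch: make_transient() strips the session/identity
# bookkeeping from the deleted instance so it can be treated as a
# brand-new object and added back to a session if that is what we want.
make_transient(deleted_bug)
session.add(deleted_bug)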

Thanks very much!
Kent


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm import attributes

engine = create_engine('sqlite:///', echo=True)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

rocks_table = Table('rocks', metadata,
    Column('id', Integer, primary_key=True),
)

bugs_table = Table('bugs', metadata,
    Column('id', Integer, primary_key=True),
    Column('rockid', Integer, ForeignKey('rocks.id')),
)

class Object(object):
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

class Rock(Object):
    def __repr__(self):
        return 'Rock: id=[%s]' % self.__dict__.get('id')

class Bug(Object):
    def __repr__(self):
        return 'Bug: id=[%s]' % self.__dict__.get('id')

mapper(Rock, rocks_table,
    properties={'bugs': relationship(Bug,
        cascade='all,delete-orphan',
        backref=backref('rock', cascade='refresh-expire,expunge'))
    })

mapper(Bug, bugs_table)

metadata.create_all()
try:
    session = Session()
    r = Rock(id=1)
    r.bugs = [Bug(id=1)]
    session.add(r)
    session.commit()

    session = Session()
    r = Rock(id=1)
    r.bugs = []
    merged = session.merge(r)
    session.expunge(merged)
    # if merged is now detached, should flush() still delete Bug?
    session.flush()
    # should history still have deleted Bug?
    print "\n\nadd: %r\nunchanged: %r\ndelete: %r\n" % attributes.get_history(merged, 'bugs')

    # this only fails if the backref 'rock' is present in relationship
    session.add(merged)

finally:
    metadata.drop_all()