Re: [sqlalchemy] NoSuchColumnError and _key_fallback

2013-10-11 Thread Michael Bayer
Redshift has all kinds of limitations and weird behaviors.   They developed it 
as a simplified cloud database that's only loosely based on Postgres. 

Sent from my iPhone

 On Oct 10, 2013, at 11:11 PM, Wu Jiang wujian...@gmail.com wrote:
 
 Yes. Thank you, Michael. Still wondering why Redshift is case insensitive.
 
 On Thursday, October 10, 2013 5:22:23 PM UTC-4, Michael Bayer wrote:
 so send case_sensitive=False to create_engine(), and all the result-row 
 targeting will be done case-insensitively - does that fix the issue? 
 
 
 On Oct 10, 2013, at 4:46 PM, Wu Jiang wuji...@gmail.com wrote:
 
 It seems to only happen when we connect to Amazon Redshift. According to 
 Redshift doc
 (http://docs.aws.amazon.com/redshift/latest/dg/r_names.html), delimited 
 identifiers are
 case insensitive.
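 
 As an aside, the kind of case-folding fallback under discussion can be
 sketched in plain Python - a simplified stand-in for illustration, not
 SQLAlchemy's actual _key_fallback implementation:

```python
# Simplified illustration of case-insensitive result-row key lookup.
# This is NOT SQLAlchemy's actual _key_fallback code, just the idea:
# try an exact match first, then fall back to a case-folded comparison,
# mirroring how Redshift treats delimited identifiers.
class CaseInsensitiveRow(dict):
    def __missing__(self, key):
        folded = key.lower()
        for existing, value in self.items():
            if existing.lower() == folded:
                return value
        raise KeyError(key)

# a row whose column was labeled "Client_ID"
row = CaseInsensitiveRow({"Client_ID": 42})
```

 With such a fallback, row["client_id"], row["Client_ID"] and
 row["CLIENT_ID"] all resolve to the same value.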
 
 On Thursday, October 10, 2013 2:06:09 PM UTC-4, Michael Bayer wrote:
 can you produce a test case?  can you get the short test I sent to produce 
 the failure condition ? 
 
 
 On Oct 10, 2013, at 1:24 PM, Ryan Kelly rpke...@gmail.com wrote: 
 
 On Thu, Oct 10, 2013 at 12:59:31 PM -0400, Michael Bayer wrote: 
  
  On Oct 10, 2013, at 8:34 AM, Ryan Kelly rpke...@gmail.com wrote: 
  
  Hi: 
  
  One of our applications is generating the following error: 
  
  NoSuchColumnError: Could not locate column in row for column 
  'client.claims.client_id' 
  
  Which is rather strange, because that column is aliased with .label() to 
  AS "Client_ID", so of course that name cannot be located in the row. 
  
  when the SQL is actually rendered, is the "Client_ID" name rendered 
  with quotes?  it should be, as that is a case-sensitive name. 
  Yes, the SQL is correctly quoted. 
  
  0.8 made a change regarding upper/lower case names which is that we no 
  longer call lower() on identifier names when producing lookup 
  dictionaries for result rows - see 
  http://docs.sqlalchemy.org/en/rel_0_8/changelog/migration_08.html#case-insensitive-result-row-names-will-be-disabled-in-most-cases
   
  
  or the failure here might have something to do with a conflicting 
  client_id name elsewhere in the query. 
  It's the only one. 
  
  you might want to see if sending case_sensitive=False to 
  create_engine() restores the working behavior you saw in 0.7. 
  it does sound like something originating within your query and how 
  SQLAlchemy handles it, psycopg2 is unlikely to be playing a role here. 
  I guess I will poke around some more. 
  
  -Ryan Kelly
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+...@googlegroups.com.
 To post to this group, send email to sqlal...@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.
 



Re: [sqlalchemy] Session creation without scoped session?

2013-10-11 Thread Michael Bayer

On Oct 10, 2013, at 8:57 PM, Iain Duncan iainduncanli...@gmail.com wrote:

 It seems like there is some variety of opinion on this on the Pyramid 
 list, so I thought I'd come straight to the source. In Pyramid, I can attach a 
 server wide shared object to the registry which is created on server startup, 
 and I can specify a factory for making requests that creates the request 
 object, available as a local argument (*not* a thread local request import) 
 to all code. In the request object I can also add an end-of-life callback.
 
 I've been using SQLAlchemy by:
 
 - creating the engine and sessionmaker at startup time, stashing in the ZCA 
 registry
 - calling the session maker at the beginning of the request lifecycle and 
 then creating the session for that request and attaching to the request.
 - doing a cleanup at end of request
 
 According to my (possibly flawed) understanding of the SA docs, this should 
 mean I do not need to use a scoped session maker.

that is correct.


 The session object never gets imported as a magic global thread local thing, 
 it's always explicitly passed in as an arg.

you can also link the two things together like this, not that you'd need to:

from pyramid.threadlocal import get_current_request
Session = scoped_session(sessionmaker(), scopefunc=get_current_request)

now you're only using Pyramid's thread local voodoo, none in SQLAlchemy.


 Am I correct in this? Is there any advantage in using a scoped session anyway?

I find plenty.   I'm usually concerned about code that needs to run outside the 
context of a web request, like background jobs and utilities - this code still 
makes full use of the web app's model and all, just not the request.So 
without Pyramid's hook for session creation, someone else has to establish the 
new Session.   ScopedSession is nice because nobody has to tag themselves as 
the creator of the Session, it just creates the new Session as soon as 
someone starts using it.
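
The create-on-first-use behavior can be sketched like this (illustrative
names, not SQLAlchemy's internals):

```python
# Minimal sketch of a scoped registry: the first caller within a scope
# triggers creation; subsequent callers in that scope get the same object.
class ScopedRegistry:
    def __init__(self, createfunc, scopefunc):
        self.createfunc = createfunc
        self.scopefunc = scopefunc
        self.registry = {}

    def __call__(self):
        key = self.scopefunc()
        if key not in self.registry:
            # nobody has to "tag themselves as the creator" --
            # creation happens implicitly on first use within the scope
            self.registry[key] = self.createfunc()
        return self.registry[key]

current_scope = ["job-1"]  # stand-in for a thread id or current request
Session = ScopedRegistry(createfunc=object, scopefunc=lambda: current_scope[0])
```

Background jobs and web requests alike just call Session(); whichever one
gets there first causes the Session for that scope to exist.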

Also having to pass a session around to all methods and functions that aren't 
web-request-aware can become burdensome.  My app over here actually has three 
different sessions for different use cases, so I guess if I weren't using 
scoped sessions I'd need to create some more substantial context object that I 
pass around to all these methods which has all three sessions, but then I'd 
need to reconcile that context object with Pyramid's request also.Or 
perhaps, you can just make Pyramid's request the context itself and when 
you're running in a non-web context just use DummyRequest or something, but 
doesn't that seem strange/weird also?   Model objects that have nothing to do 
with web requests all have to become hardcoded to the concept of a request, 
that seems no less distasteful than thread locals to me.

With a scoped session, none of these decisions even need to be made, you don't 
have to pass anything around to your model, the model has a place to get its 
context for database connectivity just like it has a place to get context for 
anything else it needs.  There's no downside to scoped sessions that I've 
observed and the Pyramid community's aversion to thread local voodoo IMHO is 
slightly misplaced.  Thread locals are, like so many other things, easy to 
misuse, particularly when random bits of code use them to communicate state and 
such.  But having a few well-located and globally accessible registries that 
are specific to your application, not a problem at all.  I see no evidence for 
Pyramid's assertion that they decrease testability, it's standard practice to 
patch well-known globals for testing, and in fact mock.patch is part of the 
Python standard library now.
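
For example, patching a well-known global with the stdlib's unittest.mock
is a one-liner; the model namespace and its Session attribute here are
hypothetical stand-ins for an application's registry module:

```python
import types
from unittest import mock

# stand-in for an application's model module holding a well-known
# global Session registry (names are hypothetical)
model = types.SimpleNamespace(Session=None)

def code_under_test():
    # application code reaches for the global registry
    return model.Session

fake_session = object()
with mock.patch.object(model, "Session", fake_session):
    inside = code_under_test()   # sees the patched global
outside = code_under_test()      # the patch is undone on exit
```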







Re: [sqlalchemy] Debug insights into QueuePool

2013-10-11 Thread Michael Bayer

On Oct 10, 2013, at 9:11 PM, John Anderson son...@gmail.com wrote:

 Hey, I was wondering if it would be possible to add some extra logging to the 
 QueuePool for when it's using overflow connections vs. a connection out of the 
 pool.

I'd just use a "checkout" event handler and then log out pool.status() to get 
at this; it shows the current overflow among other things.
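
A minimal sketch of that - an in-memory SQLite URL stands in for the real
database, and a real handler would log rather than collect into a list:

```python
from sqlalchemy import create_engine, event

engine = create_engine("sqlite://")  # stand-in URL for illustration

statuses = []  # collected here for demonstration; real code would log

@event.listens_for(engine, "checkout")
def on_checkout(dbapi_conn, connection_record, connection_proxy):
    # pool.status() reports the pool's size, checked-in/checked-out
    # counts and, for QueuePool, the current overflow
    statuses.append(engine.pool.status())

# each checkout of a DBAPI connection fires the handler
with engine.connect():
    pass
```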


 
 We currently are using a pool_size of 1 and a max overflow of 10.  These were 
 just random settings set in a library and no one really noticed at first,

those seem like awkward settings for a multithreaded app.


 For our app design we have paster running 10 worker threads and we are using 
 the threadlocal strategy.  

The threadlocal thing isn't really something needed with modern usage 
patterns (is this create_engine() threadlocal or pool_threadlocal?), with the 
exception of using SQLite :memory: databases.   Of course if you're using 
patterns that rely upon the threadlocal behavior then you'd need to address 
that.

 Should we have a pool_size for the amount of worker threads we have so that 
 they each get their own?

Seems reasonable...


 
 My understanding of how the QueuePool is working is it'll pull a connection 
 from the queue and it'll mark that connection with the thread that accessed 
 it so for our current setup of a pool_size of 1, we'll have a single worker 
 who gets the pooled connection and everyone else is going to get overflow 
 connections, right?

pretty much

 
 So, in our specific case where we don't use transactions at all, it probably 
 makes most sense to remove threadlocal strategy and to increase our pool_size 
 to the amount of workers we have?

yeah I don't think threadlocal is getting you anything here.




[sqlalchemy] Collections and outer joins with core

2013-10-11 Thread Gustavo Baratto
Hello there,

I'm looking into the cheapest way, in Core, of building collections into
each row on fairly large left and right tables (the collections themselves
are small, though). By collection I mean the same thing as in the SQLAlchemy
ORM: a left outer join where, to each row of the left table, I add an
extra column referencing a list of rows on the right table matching the
join criteria.

Right now I'm using a two-query + stitch approach:
1- a simple select on the left table.
2- an inner join on the right table using the same where clause as the
query above, but returning only columns of the right table.
3- iterate over the left table result to build a dict where the key is the
pk and the value is the row itself.
4- iterate over the right table result, and append each row to a new column
on the referenced left table row.

The above can be done programmatically and it works fine for two-table
joins. The only reason I'm doing it this way is that I could not figure
out how to programmatically find which columns in the result of a join
belong to each table without hardcoding column names.

But now I need a bunch of 3-table outer joins, and I believe using one big
join would be the way to go, but again I'd like to avoid hardcoding column
names if possible.

So here are my questions:
- is it possible to know which columns in the result of a join belong to
each of the tables?
- is there a more efficient way of building collections with core? It makes
me cringe to load large resultsets in memory and iterate over them several
times :)

Thank you very much :)
G.



Re: [sqlalchemy] Collections and outer joins with core

2013-10-11 Thread Michael Bayer

On Oct 11, 2013, at 1:25 PM, Gustavo Baratto gbara...@gmail.com wrote:

 - is it possible to know which columns in the result of a join belong to each 
 of the tables?
 
 

sure:

for row in result:
    cols_from_table_a = [row[col] for col in tablea.c]
    cols_from_table_b = [row[col] for col in tableb.c]

 - is there a more efficient way of building collections with core? It makes 
 me cringe to load large resultsets in memory and iterate over them several 
 times :)
 
 

the technique you describe is what the ORM calls subquery eager loading.   
There is another approach that was pointed out to me by a user, which may make 
for a nice new feature someday: IN clause eager loading.

Subq eager loading looks like this:

SELECT id, data FROM a

SELECT b.id, b.data, b.a_id FROM b JOIN (SELECT id FROM a) AS a_anon ON 
b.a_id=a_anon.id

IN clause is like this:

SELECT id, data FROM a

gather A.ids

SELECT b.id, b.data, b.a_id FROM b WHERE b.a_id IN (list of A ids)

the IN version is, for small parent result sets, much more efficient than the 
subq version, especially on a JOIN-hostile database like MySQL.  For very large 
parent result sets it's not, because the SQL statement grows too large; 
databases like Oracle have a hard limit on how many parameters can appear in 
the IN clause.
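
A rough Core sketch of the IN version against a toy two-table schema
(table and column names are illustrative; uses the 1.4-style select()):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

# toy schema: each row of "a" collects its matching rows of "b"
metadata = MetaData()
a = Table("a", metadata,
          Column("id", Integer, primary_key=True),
          Column("data", String))
b = Table("b", metadata,
          Column("id", Integer, primary_key=True),
          Column("a_id", Integer),
          Column("data", String))

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(a.insert(), [{"id": 1, "data": "x"}, {"id": 2, "data": "y"}])
    conn.execute(b.insert(), [{"id": 1, "a_id": 1, "data": "b1"},
                              {"id": 2, "a_id": 1, "data": "b2"},
                              {"id": 3, "a_id": 2, "data": "b3"}])

with engine.connect() as conn:
    # query 1: the parent result, keyed by primary key
    collected = {row.id: [] for row in conn.execute(select(a))}
    # query 2: children constrained to exactly the parent ids we saw
    for row in conn.execute(select(b).where(b.c.a_id.in_(list(collected)))):
        collected[row.a_id].append(row.data)
```

collected then maps each parent pk to its list of child values, built with
two queries and a single pass over each result.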








Re: [sqlalchemy] Session creation without scoped session?

2013-10-11 Thread Iain Duncan
Thanks Michael, that's very helpful.

Iain




[sqlalchemy] Trying to get a 'global' session to work for SQLAlchemy database library tests

2013-10-11 Thread Ken Lareau
In the process of trying to find an efficient way to manage a test database
for a large set of tests for a database library I'm writing that uses
SQLAlchemy,
I came across this page:


http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html

This is definitely what I want to do, with one catch: I already have a
session
management system in place for the library that seems to conflict with the
sample code given on the webpage, and I'm not having luck reconciling it.

Basically I have an 'init_session' I use that looks as follows:

def init_session(db_user, db_password, **kwargs):
    """Initialize database session"""

    if 'hostname' not in kwargs or 'db_name' not in kwargs:
        db_host, db_name = load_db_config()
        kwargs.setdefault('hostname', db_host)
        kwargs.setdefault('db_name', db_name)

    dbconn_string = create_dbconn_string(db_user, db_password, **kwargs)
    engine = create_engine(dbconn_string)

    # Ensure connection information is valid
    try:
        engine.execute('select 1').scalar()
    except sqlalchemy.exc.DBAPIError, e:
        raise PermissionsException(e)

    Session.configure(bind=engine)

Session is defined as follows (in another file at module-level):

Session = scoped_session(sessionmaker())

From all other modules, I simply do a:

from db-meta-module import Session

and I am able to use Session to manage all my access to the ORM.
My attempt to modify the code on the webpage currently looks like this:

import unittest2 as unittest

from tagopsdb.database import init_database, init_session
from tagopsdb.database.meta import Session


def setup_module():
    global transaction, connection, engine

    # Connect to the database and create the schema within a transaction
    engine = init_session('dbtest', 'dbtestpasswd', hostname='localhost',
                          db_name='TagOpsDB_Testing')
    init_database()

    # If you want to insert fixtures to the DB, do it here


def teardown_module():
    # Roll back the top level transaction and disconnect from the database
    Session.rollback()
    Session.close()
    engine.dispose()


class DatabaseTest(unittest.TestCase):
    """Base unittest class to manage nested transactions for testing"""

    def setup(self):
        self.__transaction = Session.begin_nested()

    def teardown(self):
        self.__transaction.rollback()

I modified my test classes to subclass DatabaseTest, but an attempt
to run the tests results in:

UnboundExecutionError: Could not locate a bind configured on mapper
Mapper|Environments|environments or this Session

This does not completely surprise me as my understanding about sessions
in SQLAlchemy is still not solid enough to always understand why my
current code works (and I have a full application using this without issue
at the moment).

So I guess the question is: am I missing something really obvious here,
or will I need to rethink how I deal with sessions in my library code?

Thanks in advance.

-- 
- Ken Lareau



Re: [sqlalchemy] Trying to get a 'global' session to work for SQLAlchemy database library tests

2013-10-11 Thread Michael Bayer

On Oct 11, 2013, at 7:14 PM, Ken Lareau klar...@tagged.com wrote:

 In the process of trying to find an efficient way to manage a test database
 for a large set of tests for a database library I'm writing that uses 
 SQLAlchemy,
 I came across this page:
 
 http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html
 
 This is definitely what I want to do, with one catch: I already have a session
 management system in place for the library that seems to conflict with the
 sample code given on the webpage, and I'm not having luck reconciling it.

I find the approach on that page a little awkward - it's using new globals for 
no good reason, and the begin_nested() seems strange.   The test itself then 
has a self.session, so the test itself is using a test-bound session; the 
choice of globals for "connection" and "engine" seems even more weird.

The way this works is:

1. test fixture gets at an engine - from the configurational system, locally, 
wherever.

2. test fixture gets a connection, holds onto it locally.

3. test fixture gets a transaction from connection - this is a top level 
transaction, using connection.begin()

4. test fixture then does whatever the test needs to get at a session.  if the 
code being tested relies upon a global registry, it injects the connection.  
below is using a traditional scoped session:

def setUp(self):
    self.conn = engine.connect()
    self.trans = self.conn.begin()

    from application.model import the_scoped_session
    self.session = the_scoped_session(bind=self.conn)

now above, the test fixture has a hold on self.session.  but - this is the 
*same* session that's in the registry (the registry here being 
application.model.the_scoped_session).   if some other code somewhere calls 
upon the_scoped_session(), they get the *same* session.   it's a registry, 
that's the point of it.

if you have some other kind of registration thing in place, you'd need to 
figure out how to load it up with a new Session bound to that local connection.


5. test fixture releases the session:

def tearDown(self):
    the_scoped_session.remove()
    self.trans.rollback()
    self.conn.close()

so note, we don't have to bother doing anything special with the Session at 
teardown time, we just dump it.   we roll back the transaction that we've 
created externally to it.

an example of running through this is in the docs at:

http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#joining-a-session-into-an-external-transaction
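
Put together as a self-contained sketch - SQLite and a toy mapped class
stand in for the real application, and the names are illustrative:

```python
import unittest

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

# stand-ins for the application's engine and session registry
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
the_scoped_session = scoped_session(sessionmaker())

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        # external connection + top-level transaction owned by the fixture
        self.conn = engine.connect()
        self.trans = self.conn.begin()
        self.session = the_scoped_session(bind=self.conn)

    def tearDown(self):
        # dump the session; roll back the external transaction
        the_scoped_session.remove()
        self.trans.rollback()
        self.conn.close()

class ThingTest(DatabaseTest):
    def test_insert_is_isolated(self):
        self.session.add(Thing(id=1))
        self.session.flush()
        self.assertEqual(self.session.query(Thing).count(), 1)
```

Everything the test writes lives inside the fixture's transaction, so the
database is back to its starting state after each tearDown.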

 
 and I am able to use Session to manage all my access to the ORM.
 My attempt to modify the code on the webpage currently looks like this:
 
 import unittest2 as unittest
 
 from tagopsdb.database import init_database, init_session
 from tagopsdb.database.meta import Session
 
 
 def setup_module():
     global transaction, connection, engine
 
     # Connect to the database and create the schema within a transaction
     engine = init_session('dbtest', 'dbtestpasswd', hostname='localhost',
                           db_name='TagOpsDB_Testing')
     init_database()
 
     # If you want to insert fixtures to the DB, do it here
 
 
 def teardown_module():
     # Roll back the top level transaction and disconnect from the database
     Session.rollback()
     Session.close()
     engine.dispose()

I tend to organize things such that the scope of this transaction is *per 
test*, not per module as you're doing.  The engine can be per module, or 
preferably per application.   But I'd be linking the lifespan of the Session to 
that of the transaction (and again, begin_nested() should be a begin()).

 
 I modified my test classes to subclass DatabaseTest, but an attempt
 to run the tests results in:
 
 UnboundExecutionError: Could not locate a bind configured on mapper 
 Mapper|Environments|environments or this Session

make sure the Session is created with an explicit bind to the connection.






Re: [sqlalchemy] Trying to get a 'global' session to work for SQLAlchemy database library tests

2013-10-11 Thread Ken Lareau
On Fri, Oct 11, 2013 at 5:53 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Oct 11, 2013, at 7:14 PM, Ken Lareau klar...@tagged.com wrote:

 In the process of trying to find an efficient way to manage a test database
 for a large set of tests for a database library I'm writing that uses
 SQLAlchemy,
 I came across this page:


 http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html

 This is definitely what I want to do, with one catch: I already have a
 session
 management system in place for the library that seems to conflict with the
 sample code given on the webpage, and I'm not having luck reconciling it.


 I find the approach on that page a little awkward - it's using new globals
 for no good reason and also the begin_nested() seems strange.   The test
 itself then has a self.session, so the test itself is using a test-bound
 session, the choice of globals for connection and engine seems even
 more weird.


Yeah, after looking at it a bit, it also seemed awkward to me, but the reason
I was looking is that it takes a good full 7 seconds to recreate an empty
database, and doing that per test or even per module might get a bit painful
(eventually I suspect this library will have quite a few test modules, to
match all the tables and uses of the library) - but maybe I'll just pursue
that for now (I mostly had that working, at least).


 The way this works is:

 1. test fixture gets at an engine, from configurational system, locally,
 whereever.

 2. test fixture gets a connection, holds onto it locally.

 3. test fixture gets a transaction from connection - this is a top level
 transaction, using connection.begin()

 4. test fixture then does whatever the test needs to get at a session.  if
 the code being tested relies upon a global registry, it injects the
 connection.  below is using a traditional scoped session:

 def setUp(self):
     self.conn = engine.connect()
     self.trans = self.conn.begin()

     from application.model import the_scoped_session
     self.session = the_scoped_session(bind=self.conn)

 now above, the test fixture has a hold on self.session.  but - this is
 the *same* session that's in the registry (the registry here being
 application.model.the_scoped_session).   if some other code somewhere
 calls upon the_scoped_session(), they get the *same* session.   it's a
 registry, that's the point of it.

 if you have some other kind of registration thing in place, you'd need to
 figure out how to load it up with a new Session bound to that local
 connection.


 5. test fixture releases the session:

 def tearDown(self):
     the_scoped_session.remove()
     self.trans.rollback()
     self.conn.close()

 so note, we don't have to bother doing anything special with the Session
 at teardown time, we just dump it.   we roll back the transaction that
 we've created externally to it.

 an example of running through this is in the docs at:


 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#joining-a-session-into-an-external-transaction


 and I am able to use Session to manage all my access to the ORM.
 My attempt to modify the code on the webpage currently looks like this:

 import unittest2 as unittest

 from tagopsdb.database import init_database, init_session
 from tagopsdb.database.meta import Session


 def setup_module():
     global transaction, connection, engine

     # Connect to the database and create the schema within a transaction
     engine = init_session('dbtest', 'dbtestpasswd', hostname='localhost',
                           db_name='TagOpsDB_Testing')
     init_database()

     # If you want to insert fixtures to the DB, do it here


 def teardown_module():
     # Roll back the top level transaction and disconnect from the database
     Session.rollback()
     Session.close()
     engine.dispose()


 I tend to organize things such that the scope of this transaction is *per
 test*, not per module as you're doing.  the engine, that can be per module,
 or preferably per-application.   But i'd be linking the lifespan of the
 Session to that of the transaction (which again begin_nested() should be a
 begin()).


My only concern with that is the enormous time to recreate/reset the
database for each test as mentioned above, but once again, I might
not be fully understanding if this is actually a concern or not.


I modified my test classes to subclass DatabaseTest, but an attempt
to run the tests results in:

UnboundExecutionError: Could not locate a bind configured on mapper
Mapper|Environments|environments or this Session


make sure the Session is created with an explicit bind to the connection.


I have this at the end of my init_session():

Session.configure(bind=engine)

but I'm assuming that's not enough?


-- 
- Ken Lareau

[sqlalchemy] Introspection: finding out more about relationships

2013-10-11 Thread Alexey Vihorev
Hi!

I'm trying to do some introspection on a class's one-to-many relationships.
I'm trying to find out which attribute in the "many" table points to the "one"
table. Is that possible? Thanks!



Re: [sqlalchemy] Introspection: finding out more about relationships

2013-10-11 Thread Michael Bayer
take a look at local_columns, remote_side, local_remote_pairs (all views of the 
same thing):

MyClass.attribute.property.local_remote_pairs

that gives you Column objects.   If your mappings have attributes that don't 
share the Column names, you can relate them together with 
mapper.get_property_by_column(), where mapper is obtained via inspect():

from sqlalchemy import inspect
mapper = inspect(MyRemoteClass)

for local, remote in MyLocalClass.attribute.property.local_remote_pairs:
    prop = mapper.get_property_by_column(remote)
    class_attr = prop.class_attribute   # same as MyRemoteClass.someattribute
    key = prop.key
    # etc
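
A small self-contained illustration with a toy one-to-many mapping (class
and attribute names are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, inspect
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

# local_remote_pairs here is [(parent.id, child.parent_id)]
pairs = Parent.children.property.local_remote_pairs

# map the remote Column back to the mapped attribute on the "many" class
child_mapper = inspect(Child)
remote_keys = [child_mapper.get_property_by_column(remote).key
               for local, remote in pairs]
```

remote_keys then names the attribute on the "many" class that points back
to the "one" table.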



On Oct 11, 2013, at 11:01 PM, Alexey Vihorev viho...@gmail.com wrote:

 Hi!
 I’m trying to do some introspection on a class’s one-to-many relationships. 
 I’m trying to find out which attribute in “many” table points to the “one” 
 table. Is that possible? Thanks!
 


