[sqlalchemy] is this a sqlalchemy memory leak?

2012-08-28 Thread hackie chain
When using SQLAlchemy 0.7.8 to work with the database, it looks like the 
query has a memory leak. I have googled this problem for a long time with 
no luck. I tried running this snippet in a while loop; the code eats about 
1 MB every 5 seconds. Environment: Python 2.7.3, MySQL 5.5.25a, Fedora 17.

The code is really simple, but the memory is not released at session.close().
Is this a misuse?

def query(g_engine):
    session = scoped_session(sessionmaker(bind=g_engine, autoflush=False))
    q = session.query(AgentScript)
    for j in q.yield_per(100):
        print j
    session.commit()
    session.close()

gc.collect()



Here is the same question on Stack Overflow:
http://stackoverflow.com/questions/12073375/is-this-sqlalchemy-memory-leak

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/BIH-1lmR6zgJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Session query on different server is not returning an object

2012-08-28 Thread Jakob D.
Shouldn't open transactions within the session close when I issue a 
session.remove()?
I've also tried combinations of close, rollback, and commit.
I don't use any caching in this service.

Does an open transaction mean I cannot issue any queries at all before 
closing it? Because when working on other projects I don't have to 
restart the service.

The service is basically a set of subprocesses doing the same thing. 
After each iteration of the subprocess, the session is removed, and any 
exceptions will issue a rollback. 

Besides remove and commit, what can I do to close open transactions? A 
commit should issue a flush?  

I thought I should explicitly create a new session, in case the remove 
fails and as you said, to make sure I'm using the session that's expected.
I also thought about doing this for each query and then removing the 
session right after, maybe working with detached objects.

I thought this way there should be no problems with the session being in an 
undesirable state from a previous iteration, since I'm always working on a 
fresh session.

Could this approach cause any other issues?
 
I'm working on reproducing the problem so that I can use the sql logs. 

Thanks,
J

On Monday, 27 August 2012 16:56:25 UTC+2, Michael Bayer wrote:


 On Aug 27, 2012, at 4:00 AM, Jakob D. wrote: 

  Hi! 
  
  I have a problem and I cannot figure out whats wrong. I thought might 
 get some ideas here. 
  
  On one server a Project instance is committed, and I know for sure it's 
 in the db. 
  
  But on another server querying that object returns None: 
  Project.query.filter_by(id=project_id).first() 
  
  The scoped session is removed right before this query, I've also tried 
 committing it. 
  
  If I restart the service issuing the query the project is found. 
  
  Any idea what might cause this behavior? 
  
  It says in the docs that I don't have to instantiate a new session after 
 removing it.   
  
  I thought I'd try instantiating a new scoped session anyway and working 
 towards that in each process just to make sure nothing is shared. 

 "the data comes back when the server is restarted" nearly always means 
 that there is either an open transaction in the previous process not being 
 closed, or there's some external caching system in use. 

 I'd check very carefully that the code in question is using the session 
 that's expected, and also turn on SQL echoing with the debug flag which 
 will illustrate the SELECT statement being emitted as well as the raw 
 results that come back. 









Re: [sqlalchemy] making bindparms required by default

2012-08-28 Thread alex bodnaru

thanks michael,

On 08/27/2012 07:49 PM, Michael Bayer wrote:
 Here's a patch:
 
 diff -r 594b0162e8a5 lib/sqlalchemy/sql/expression.py
 --- a/lib/sqlalchemy/sql/expression.py    Mon Aug 27 12:33:10 2012 -0400
 +++ b/lib/sqlalchemy/sql/expression.py    Mon Aug 27 12:43:48 2012 -0400
 @@ -3110,7 +3110,7 @@
              typemap[key] = sqltypes.to_instance(typemap[key])
 
          def repl(m):
 -            self.bindparams[m.group(1)] = bindparam(m.group(1))
 +            self.bindparams[m.group(1)] = bindparam(m.group(1), required=True)
              return ':%s' % m.group(1)
 
          # scan the string and search for bind parameter names, add them
 
That's what I did, indeed.
But it would break queries that are happy with NULL for a missing bind.
Thus a configurable option, false by default, would do the job.
 
 however, this is too easy.  There's lots of other places binds are generated. 
   A rule such that if value is not passed to bindparam(), then flip the 
 required flag on, would be the best behavior.  needs a few lines in the 
 bindparam() function, but also would need a lot of tests, in 
 test/sql/test_query.py.   ticket #2556 is added.
Thank you very much. The patch is very sensible, but it will certainly break
scripts that rely on the null-on-missing behaviour.
However, I don't have such scripts ;).
 
 for your immediate needs, you can make a function that returns a Text() 
 construct, and just supplies a dictionary of bindparam() objects to the 
 Text() pre-established, with the required=True flag turned on.
 
I thought of it too, but it won't do the job: I needed to catch :variables
missing from the given bind params.
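The check alex wants (fail loudly when a :name in the statement has no matching bind) can be sketched outside SQLAlchemy with a regex scan; `require_binds` below is a hypothetical helper, not a library API, and its pattern is a simplification of real bind-parameter parsing:

```python
import re

def require_binds(sql, params):
    # Hypothetical helper: collect every :name placeholder in the statement
    # and raise when one has no matching entry in the bind dictionary.
    names = set(re.findall(r"(?<!:):([A-Za-z_]\w*)", sql))
    missing = names - set(params)
    if missing:
        raise KeyError("missing bind parameters: %s" % ", ".join(sorted(missing)))
    return params

require_binds("select name from mytable where id=:id", dict(id=1))  # passes
```

With id=:idd in the statement and only id in the dict, the same call raises KeyError instead of silently binding NULL.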
 
 
 
thanks a lot,
alex
 
 
 On Aug 27, 2012, at 12:06 PM, alex bodnaru wrote:
 

 hello friends,

 for a migrated system, I'm using textual queries in the form:

 dbsession.execute("select name from table where id=:id", dict(id=1))

 To my surprise, "select ... id=:idd" would still work, assuming idd is null,
 even though 'idd' is not in the bind params.
 
 
 
 

 a 'required' argument to bindparam in _TextClause __init__() would indeed make
 the 'idd' parameter required, thus raising an exception when it is not found in
 dict(id=1).

 Is there an official way to accomplish this, or should I just hack it in
 hardcoded?

 thanks in advance,
 alex


 




Re: [sqlalchemy] making bindparms required by default

2012-08-28 Thread alex bodnaru
On 08/27/2012 10:13 PM, Michael Bayer wrote:
 I'm super-hot to get the first betas of 0.8 out the door, and as this is a 
 backwards-compatibility-sensitive change, I was in a super rush to get this 
 in, so it's in tip.
 
Seen it, thanks again :).
 
 
 On Aug 27, 2012, at 12:49 PM, Michael Bayer wrote:
 
 Here's a patch:

 diff -r 594b0162e8a5 lib/sqlalchemy/sql/expression.py
 --- a/lib/sqlalchemy/sql/expression.py    Mon Aug 27 12:33:10 2012 -0400
 +++ b/lib/sqlalchemy/sql/expression.py    Mon Aug 27 12:43:48 2012 -0400
 @@ -3110,7 +3110,7 @@
              typemap[key] = sqltypes.to_instance(typemap[key])
 
          def repl(m):
 -            self.bindparams[m.group(1)] = bindparam(m.group(1))
 +            self.bindparams[m.group(1)] = bindparam(m.group(1), required=True)
              return ':%s' % m.group(1)
 
          # scan the string and search for bind parameter names, add them


 however, this is too easy.  There's lots of other places binds are 
 generated.   A rule such that if value is not passed to bindparam(), then 
 flip the required flag on, would be the best behavior.  needs a few lines in 
 the bindparam() function, but also would need a lot of tests, in 
 test/sql/test_query.py.   ticket #2556 is added.

 for your immediate needs, you can make a function that returns a Text() 
 construct, and just supplies a dictionary of bindparam() objects to the 
 Text() pre-established, with the required=True flag turned on.






 On Aug 27, 2012, at 12:06 PM, alex bodnaru wrote:


 hello friends,

 for a migrated system, i'm using textual queries in the form:

 dbsession.execute("select name from table where id=:id", dict(id=1))

 To my surprise, "select ... id=:idd" would still work, assuming idd is null,
 even though 'idd' is not in the bind params.





 a 'required' argument to bindparam in _TextClause __init__() would indeed make
 the 'idd' parameter required, thus raising an exception when it is not found in
 dict(id=1).

 Is there an official way to accomplish this, or should I just hack it in
 hardcoded?

 thanks in advance,
 alex




 




[sqlalchemy] Re: is this a sqlalchemy memory leak?

2012-08-28 Thread hackie chain
MySQLdb version is 1.2.3c1, not 1.2.2.

http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#known-issues




[sqlalchemy] Re: is this a sqlalchemy memory leak?

2012-08-28 Thread hackie chain
It was a problem in my own code. Sorry for the noise.  -_-|||

http://docs.sqlalchemy.org/en/rel_0_6/orm/session.html#frequently-asked-questions
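For readers landing here later: the FAQ's point is that memory is only reclaimable once nothing in your own code holds a strong reference to the loaded objects. A stdlib illustration (the AgentScript stand-in is hypothetical, not a mapped class):

```python
import gc
import weakref

class AgentScript(object):
    """Hypothetical stand-in for a mapped object."""

# a list playing the role of application code holding loaded objects
loaded = [AgentScript() for _ in range(3)]
probe = weakref.ref(loaded[0])

assert probe() is not None   # alive while strongly referenced

loaded[:] = []               # drop the strong references
gc.collect()
assert probe() is None       # now collectable; the memory can be released
```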




Re: [sqlalchemy] Session query on different server is not returning an object

2012-08-28 Thread Michael Bayer

On Aug 28, 2012, at 2:54 AM, Jakob D. wrote:

 Shouldn't open transactions within the session close when I issue a 
 session.remove()

yes.


 
 Does an open transaction mean I cannot issue any queries at all before 
 closing them?

You can issue as many queries as you want at any time. The transaction 
accumulates state as these queries proceed. The isolation level of the 
transaction, which is usually configurable, determines how this transaction 
will interact with other concurrent transactions, to what degree 
concurrent data will be exposed, etc. A good overview is at: 
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29
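The same visibility rules can be demonstrated at the DBAPI level with stdlib sqlite3 standing in for the two servers (a sketch only; the original setup was MySQL, and the table name here is made up):

```python
import os
import sqlite3
import tempfile

# Two connections to one database file stand in for the two servers.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("create table project (id integer primary key)")
writer.commit()

writer.execute("insert into project (id) values (1)")  # still uncommitted

# The reader's transaction cannot see the row until the writer commits.
count_before = reader.execute("select count(*) from project").fetchall()[0][0]
writer.commit()
count_after = reader.execute("select count(*) from project").fetchall()[0][0]

assert (count_before, count_after) == (0, 1)
```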

 The service is basically a set of subprocesses doing the same thing. 
 After each iteration of the subprocess, the session is removed, and any 
 exceptions will issue a rollback. 

OK, this is something you need to be careful with if your program is using 
fork() or some variant (subprocess, multiprocessing, etc.). A DBAPI 
connection generally will not travel across process boundaries. A Session, 
when passed across a process boundary, naturally becomes a second 
Session, since that's how child processes work. If you are calling 
close()/remove() only in the parent or only in the child, you'd still have a 
second Session where this has not been done.

Inside of a new fork(), you need to be creating a brand new engine using 
create_engine(), which establishes connection state local to that fork.   Then 
you need to establish a brand new Session to go along with it.  If you're using 
scoped_session(), making sure the registry has no Session when the fork starts, 
by calling scoped_session.remove(), is a good idea.

Objects that are sent along the process boundary should be added to the 
subprocess Session using either merge() or add().


 Besides remove and commit, what can I do to close open transactions? A commit 
 should issue a flush?  

close() the session, or remove(), and it's done.

 
 I thought I should explicitly create a new session, in case the remove fails 
 and as you said, to make sure I'm using the session that's expected.
 I also thought about doing this for each query and then removing the session 
 right after, maybe working with detached objects.
 
 I thought this way, there should be no problems with the session being in a 
 undesirable state from a previous iteration since I'm always working on a 
 fresh session.

working with detached objects and many new sessions, such that you are leaving 
the objects detached as their normal state and creating sessions just for 
ad-hoc queries, is not a good way to approach the problem, and works against 
the design of SQLAlchemy as well as the database itself.

When an object is detached, it pretty much shouldn't be used at all except to 
store it in a cache, or to place it into another Session.   So if your detached 
object pattern is just so that you can transfer objects over subprocess 
boundaries, then its OK.  But the parent process and child process should have 
just a single Session at a time (per thread, but I assume you're not using 
threads).  When objects are in use, they should be associated with a Session.  
The lifespan of the Session should cover a regular series of operations against 
a field of objects.





[sqlalchemy] OrderingList weird reordering after session.commit()

2012-08-28 Thread Gilles Lenfant
Hi,

I extracted the code from a complex application and isolated the issue I'm 
facing in a short sample here: https://gist.github.com/3499127

To summarize: I need a parent entity that has ordered children. For this I 
use a sqlalchemy.ext.orderinglist.ordering_list, which is supposed to 
transparently manage the position attribute of each child to keep a 
persistent ordered list (line 20).
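Conceptually, ordering_list keeps each child's position attribute in step with its list index on every mutation; a toy stdlib version of the idea (not the SQLAlchemy implementation) looks like this:

```python
class ToyOrderingList(list):
    """Toy stand-in showing the idea: every mutation rewrites each
    child's .position to match its index in the list."""

    def _renumber(self):
        for index, child in enumerate(self):
            child.position = index

    def append(self, child):
        super().append(child)
        self._renumber()

    def insert(self, index, child):
        super().insert(index, child)
        self._renumber()

class Child:
    def __init__(self, name):
        self.name = name
        self.position = None

children = ToyOrderingList()
children.append(Child("a"))
children.append(Child("c"))
children.insert(1, Child("b"))   # positions renumbered to 0, 1, 2
```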

The real application logic appends or inserts the Child entities at the 
appropriate position using the OrderingList canonical API.
 
So far so good, my app manages the children's order perfectly (as in lines 
57-58), but when committing the changes, the last created child is moved 
to the end of its parent OrderingList (lines 61-66), whatever position I 
inserted that child at before committing.
 
I don't know whether I'm facing a real SQLAlchemy issue or missed 
something (sorry, I'm an SA noob). Googling around didn't yield valuable 
information regarding this specific issue.

Any help would be really appreciated. Kind regards.
-- 
Gilles Lenfant




Re: [sqlalchemy] Issues with scoped sessions and webtest.

2012-08-28 Thread Michael Bayer

On Aug 28, 2012, at 11:49 AM, Owen Nelson wrote:

 
 As a guess, I think this has to do with how scoped sessions work, but exactly 
 how they work is a bit of a mystery to me. Have you seen anything like this 
 behavior before?
 I'm not quite sure how to investigate further into this issue, so any 
 pointers would be appreciated.

this is how the Scoped Session works:

import threading

class JustLikeScopedSession(object):
    def __init__(self):
        self.sessions = threading.local()

    def __call__(self):
        """__call__ turns a JustLikeScopedSession object into a callable,
        that is, JustLikeScopedSession()()"""

        # return the Session for this thread.  if not present, create it.
        if hasattr(self.sessions, "the_session"):
            return self.sessions.the_session
        else:
            self.sessions.the_session = Session()
            return self.sessions.the_session

    def remove(self):
        # remove the Session for this thread.
        self.sessions.the_session.close()
        del self.sessions.the_session

    def __getattr__(self, key):
        """proxy method/attribute calls to the current thread's Session"""
        return getattr(self(), key)

that's it.   If you understand what a threading.local() is (and what the 
__call__() method does), you're golden.  If not, then don't use 
scoped_session(), do things explicitly so that they make sense.
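A quick way to see the per-thread behavior in action, with a stub in place of the real Session class (names here are illustrative only):

```python
import threading

class StubSession:
    """Stand-in for sqlalchemy.orm.Session, just for the demo."""
    def close(self):
        pass

class JustLikeScopedSession:
    def __init__(self, session_factory):
        self.sessions = threading.local()
        self.session_factory = session_factory

    def __call__(self):
        # Return this thread's Session, creating it on first use.
        if not hasattr(self.sessions, "the_session"):
            self.sessions.the_session = self.session_factory()
        return self.sessions.the_session

registry = JustLikeScopedSession(StubSession)

main_session = registry()
assert registry() is main_session          # same thread: same Session

seen = []
worker = threading.Thread(target=lambda: seen.append(registry()))
worker.start()
worker.join()
assert seen[0] is not main_session         # other thread: a different Session
```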

as far as state transferring along on tests, if you want to have some database 
state that carries forth on many tests, then your setup for the suite as a 
whole would create a Session, persist all the data you want, then commit() it. 
There's no issue using sqlite :memory: for tests assuming everything occurs 
in the same process/thread across all your tests. I often recommend a 
pattern of enclosing the Session in a transaction so that the data can be 
rolled back when all tests are complete (see the example at 
http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#joining-a-session-into-an-external-transaction),
but with sqlite :memory: this is not really critical.

I'm not sure if I have more insight into your specific issue at the moment 
without more specifics.
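The enclosing-transaction idea can be sketched at the DBAPI level with stdlib sqlite3: commit the schema up front, let the tests write inside one open transaction, then roll it all back (a simplification of the linked Session recipe):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table fixture (id integer primary key, name text)")
conn.commit()                       # the schema is committed for good

# The "tests" write inside one open transaction...
conn.execute("insert into fixture (name) values ('suite data')")
during = conn.execute("select count(*) from fixture").fetchall()[0][0]

conn.rollback()                     # ...and everything they did disappears
after = conn.execute("select count(*) from fixture").fetchall()[0][0]

assert (during, after) == (1, 0)    # the table itself survives the rollback
```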




Re: [sqlalchemy] Issues with scoped sessions and webtest.

2012-08-28 Thread Owen Nelson
I was under the impression that the query property was implemented as part 
of the scoped session (not the non-scoped variety) from reading the docs. 
Is this not the case? If this isn't the case, I wonder if my issues will 
just *go away* by switching the type of session I work with (while running 
the test suite).

Regarding transferring state along tests, I'm not talking about persisting 
changes from test to test. I'm really talking about making changes in setUp 
(or in the test method itself) then having them reflected within the 
context of the wsgi app (attached to my TestCase's self.app prop).

Given the implementation of scoped session, I'm willing to bet that webtest 
runs the wsgi app in a separate thread, which might explain some of this. 
I'm still not really sure why flushing and committing changes from my test 
method wouldn't be reflected inside the wsgi app however. Regardless of 
what session you're using in which thread - the session should still be 
querying the database for the requested data, no? Furthermore, what reason 
could there be for Session.close_all() making this all work (sort of)?

The issue I was seeing with sqlite :memory: was that after closing sessions 
the schema would go away. Seems like to get this to work, I'd have to make 
sure a session was kept alive throughout the run of the test (which is what 
I want anyway).
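That symptom is consistent with how :memory: works at the DBAPI level: each new connection is a brand-new empty database, so the schema only lives as long as some connection does. A stdlib sketch:

```python
import sqlite3

first = sqlite3.connect(":memory:")
first.execute("create table t (x integer)")

# A second :memory: connection is a completely separate, empty database.
second = sqlite3.connect(":memory:")
try:
    second.execute("select * from t")
    table_visible = True
except sqlite3.OperationalError:    # "no such table: t"
    table_visible = False
assert not table_visible

first.close()   # the schema (and data) vanish with the connection
```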


On Tuesday, August 28, 2012 12:59:10 PM UTC-4, Michael Bayer wrote:


 On Aug 28, 2012, at 11:49 AM, Owen Nelson wrote: 

  
  As a guess, I think this has to do with how scoped sessions work, but 
 exactly how they work is a bit of a mystery to me. Have you seen anything 
 like this behavior before? 
  I'm not quite sure how to investigate further into this issue, so any 
 pointers would be appreciated. 

 this is how the Scoped Session works: 

 import threading

 class JustLikeScopedSession(object):
     def __init__(self):
         self.sessions = threading.local()

     def __call__(self):
         """__call__ turns a JustLikeScopedSession object into a callable,
         that is, JustLikeScopedSession()()"""

         # return the Session for this thread.  if not present, create it.
         if hasattr(self.sessions, "the_session"):
             return self.sessions.the_session
         else:
             self.sessions.the_session = Session()
             return self.sessions.the_session

     def remove(self):
         # remove the Session for this thread.
         self.sessions.the_session.close()
         del self.sessions.the_session

     def __getattr__(self, key):
         """proxy method/attribute calls to the current thread's Session"""
         return getattr(self(), key)

 that's it.   If you understand what a threading.local() is (and what the 
 __call__() method does), you're golden.  If not, then don't use 
 scoped_session(), do things explicitly so that they make sense. 

 as far as state transferring along on tests, if you want to have some 
 database state that carries forth on many tests, then your setup for the 
 suite as a whole would create a Session, persist all the data you want, 
 then commit() it. There's no issue using sqlite :memory: for tests 
 assuming everything occurs in the same process/thread across all your 
 tests. I often recommend a pattern of enclosing the Session in a 
 transaction so that the data can be rolled back when all tests are complete 
 (see the example at 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#joining-a-session-into-an-external-transaction),
  
 but with sqlite :memory: this is not really critical. 

 I'm not sure if i have more insight into your specific issue at the moment 
 without more specifics. 






Re: [sqlalchemy] Issues with scoped sessions and webtest.

2012-08-28 Thread Owen Nelson


On Tuesday, August 28, 2012 1:45:55 PM UTC-4, Michael Bayer wrote:


 On Aug 28, 2012, at 1:15 PM, Owen Nelson wrote: 

  I was under the impression that the query property was implemented as 
 part of the scoped session (not the non-scoped variety) from reading the 
 docs. Is this not the case? If this isn't the case, I wonder if my issues 
 will just *go away* by switching the type of session I work with (while 
 running the test suite). 

 OK, using the query property too, then yes this has a link to the scoped 
 session.  It's roughly equivalent to this: 

 class MyMappedClass(object): 
 @classmethod 
 def query(cls): 
 return scoped_session().query(cls) 

 so it means, MyMappedClass.query()... is going to use your scoped session, 
 which is going to pull out the current Session() from the current 
 threading.local() in the current thread. 



  
  Regarding transferring state along tests, I'm not talking about 
 persisting changes from test to test. I'm really talking about making 
 changes in setUp (or in the test method itself) then having them reflected 
 within the context of the wsgi app (attached to my TestCase's self.app 
 prop). 
  
  Given the implementation of scoped session, I'm willing to bet that 
 webtest runs the wsgi app in a separate thread, which might explain some of 
 this. 

 I'm fairly certain it doesn't do that.  I have some pretty involved 
 testing scenarios using webtest where I swap out different engines/sessions 
 as it nests/un-nests into the remote connections, etc.   You stay in the 
 same thread.  However, if your WSGI app has its own session setup/teardown 
 going on, webtest hitting your WSGI app is going to trigger all of that, 
 which could be interfering with the enclosing session setup you have going 
 on.   So you need to make sure that your WSGI app plays nicely with tests. 
  I will often use a big global variable like TESTS_ENABLED=True just so 
 that the WSGI app knows to tread lightly. 


We don't have any special session bootstrapping in the app itself. In fact, 
I'd hope that the session maker that I create during the test setup is the 
same exact object used inside the app itself. Perhaps there's a reload() at 
work here messing things up.
 



  I'm still not really sure why flushing and committing changes from my 
 test method wouldn't be reflected inside the wsgi app however. Regardless 
 of what session you're using in which thread - the session should still be 
 querying the database for the requested data, no? Furthermore, what reason 
 could there be for Session.close_all() making this all work (sort of)? 

 mmm yeah I can't really say from this end.   You'd need to get in there 
 and see exactly at which point sessions are being set up and torn down.   
 But I'm pretty sure you don't have any extra threads going on unless you 
 have something in your app that specifically spawns a thread. 


It might not be webtest that's starting new threads, but rather the app 
engine nose plugin, or the various testbeds that come with it (task queue 
emulation, search service, datastore, etc.). For checking the setup/teardown 
of the sessions, what's the recommended tactic? I'm thinking: hook up some 
logging to some session events.
 


  
  The issue I was seeing with sqlite :memory: was that after closing 
 sessions the schema would go away. Seems like to get this to work, I'd have 
 to make sure a session was kept alive throughout the run of the test (which 
 is what I want anyway). 

 well this is also strange, because an engine that uses sqlite with a 
 :memory: connection will keep the connection persistent, using the 
 SingletonThreadPool.  Closing the session won't affect this connection 
 being open assuming default usages.   If your create_engine() is specifying 
 a specific pool like NullPool, then yes :memory: won't work with that. 

  
We specified a StaticPool -- I'll omit and see if that yields a better 
result. 




Re: [sqlalchemy] Issues with scoped sessions and webtest.

2012-08-28 Thread Michael Bayer

On Aug 28, 2012, at 2:06 PM, Owen Nelson wrote:

 mmm yeah I can't really say from this end.   You'd need to get in there and 
 see exactly at which point sessions are being set up and torn down.   But I'm 
 pretty sure you don't have any extra threads going on unless you have 
 something in your app that specifically spawns a thread. 
 
 It might not be webtest that's starting new threads, but rather the app 
 engine nose plugin, or the various testbeds that come with (task queue 
 emulation, search service, datastore, etc). For checking the setup/teardown 
 of the sessions, what's the recommended tactic? I'm thinking: hook up some 
 logging to some session events.

Is there any chance you can get your test suite to run in a regular environment 
first? App Engine supposedly has extra weird behaviors, like MySQL 
connections that go away randomly and stuff like that.

Right now the Session events will give you hooks into the transaction lifecycle, 
but not as much the session itself being closed totally. You might try just 
using a custom Session subclass with your scoped_session() that does what you 
need.

  
 
  
  The issue I was seeing with sqlite :memory: was that after closing sessions 
  the schema would go away. Seems like to get this to work, I'd have to make 
  sure a session was kept alive throughout the run of the test (which is what 
  I want anyway). 
 
 well this is also strange, because an engine that uses sqlite with a :memory: 
 connection will keep the connection persistent, using the 
 SingletonThreadPool.  Closing the session won't affect this connection being 
 open assuming default usages.   If your create_engine() is specifying a 
 specific pool like NullPool, then yes :memory: won't work with that. 
 
  
 We specified a StaticPool -- I'll omit and see if that yields a better 
 result. 

StaticPool should never be closing the DBAPI connection, so again this is 
strange.




Re: [sqlalchemy] Issues with scoped sessions and webtest.

2012-08-28 Thread Owen Nelson


On Tuesday, August 28, 2012 2:12:10 PM UTC-4, Michael Bayer wrote:


 Is there any chance you can get your test suite to run in a regular 
 environment first? App Engine supposedly has extra weird behaviors, like 
 MySQL connections that go away randomly and stuff like that.

 right now the Session events will give you hooks into transaction 
 lifecycle, but not as much the session itself being closed totally.   you 
 might try just using a custom Session subclass with your scoped_session() 
 that does what you need.

  
There's no clean way to turn off all the Google services. I'd have to work 
pretty hard to disable all the task queue, datastore, and full-text search 
stuff. I could probably work up a small independent test without that stuff 
(easier to build it from scratch). If I can set it up so it fails when 
running with ``--with-gae`` but not without, then I suppose we've learned 
something new.
For what it's worth, the test environment is running on the ultra-conventional 
MySQLdb package directly (rather than the Google rdbms that would proxy it 
for testing). We haven't seen any connection issues like you mentioned just 
now, either up on appspot or locally.




Re: [sqlalchemy] PickleType collections of rows?

2012-08-28 Thread Jacob Biesinger

 Yes, this is the pattern I'm trying to describe though I've been thinking
 of these as typed collections, smallish tables for handling a particular
 table's collections needs, which are all of a single type (so rows are
 either a collection or a FK to ONE other table).  It feels very doable,
 given the many examples I've seen that don't go quite this far.  If it's a
 common enough use case, why not have an example on the best way to do this?


 Uh, scroll up: I said you'll find it in the **examples** shipped with the
 distribution.  Start reading here:
 http://docs.sqlalchemy.org/en/rel_0_7/orm/examples.html#examples-generic-associations


I'm sorry to be a pest here.  I've read through all the examples and have
spent several days with the documentation (which is really excellent).  I
think what I'm describing is distinct from these patterns.  I'm looking for
a polymorphic collection type capable of reflecting on the column (at
run-time) and presenting a different ORM interface.  Extending the
polymorphic vertical column example at
http://www.sqlalchemy.org/trac/browser/examples/vertical shows what I'm
trying to do pretty well. Again, please forgive me if I'm just being dense.

# set up declarative tables
# from http://www.sqlalchemy.org/trac/browser/examples/vertical
from sqlalchemy import MetaData, Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import Session, relationship
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_examples.vertical import dictlike_polymorphic as dictlike

metadata = MetaData()
Base = declarative_base()
engine = create_engine('sqlite://', echo=True)
Base.metadata.bind = engine
session = Session(engine)


class AnimalFact(dictlike.PolymorphicVerticalProperty, Base):
    """key/value attribute whose value can be either a string or a list of
    strings"""
    __tablename__ = 'animalfacts'
    type_map = {str: ('string', 'str_value'),
                list: ('list', 'list_value')}
    id = Column(Integer, primary_key=True)
    animal_id = Column(Integer, ForeignKey('animal.id'), primary_key=True)
    key = Column(String, primary_key=True)
    type = Column(String)
    str_value = Column(String)
    list_value = relationship('StringEntry')


class Animal(Base, dictlike.VerticalPropertyDictMixin):
    __tablename__ = 'animal'
    _property_type = AnimalFact
    _property_mapping = 'facts'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    facts = relationship(AnimalFact, backref='animal',
                         collection_class=attribute_mapped_collection('key'))

    def __init__(self, name):
        self.name = name


class StringEntry(Base):
    __tablename__ = 'myvalue'
    id = Column(Integer, primary_key=True)
    animalfacts_id = Column(Integer, ForeignKey('animalfacts.id'))
    value = Column(String)

    def __init__(self, value):
        self.value = value

Base.metadata.create_all()


# create a new animal
a = Animal('aardvark')

# create a new AnimalFact.  This uses the list_value column, which is in
# turn a one-to-many relationship to StringEntry
a['eyes'] = [StringEntry('left side'), StringEntry('right side')]  # works great

# this should use the str_value column, which is a simple string
a['cute'] = 'sort of'  # fails with "TypeError: Incompatible collection
                       # type: None is not list-like"


Thanks as always for any help you can give!  I'll update the stackoverflow
question once we get this squared away.




Re: [sqlalchemy] PickleType collections of rows?

2012-08-28 Thread Michael Bayer
Well, you dug into a really esoteric and complex example there.
association_proxy is a much easier way to get around these cases where you want 
an object to act like a scalar, so here's that, without all the crazy 
boilerplate of the vertical example, which I'd avoid as it is really too 
complex.   Your example seemed undecided about primary key style, so I went with 
the composite version.    Surrogate + composite can't be mixed in a single 
table (well it can, but it's relationally incorrect: the key should be the 
smallest unit that identifies a row - http://en.wikipedia.org/wiki/Unique_key 
is a good top-level read into various subjects regarding this).

from sqlalchemy import Integer, String, Column, create_engine, ForeignKey, \
    ForeignKeyConstraint
from sqlalchemy.orm import relationship, Session
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class AnimalFact(Base):
    """key/value attribute whose value can be either a string or a list of
    strings"""
    __tablename__ = 'animalfacts'

    # use either surrogate PK id, or the composite animal_id/key - but
    # not both.   id/animal_id/key all together is not a proper key.
    # Personally I'd go for id here, but here's the composite version.

    animal_id = Column(Integer, ForeignKey('animal.id'), primary_key=True)
    key = Column(String, primary_key=True)

    # data
    str_value = Column(String)
    _list_value = relationship('StringEntry')

    # proxy list strings
    list_proxy = association_proxy('_list_value', 'value')

    def __init__(self, key, value):
        self.key = key
        self.value = value

    @property
    def value(self):
        if self.str_value is not None:
            return self.str_value
        else:
            return self.list_proxy

    @value.setter
    def value(self, value):
        if isinstance(value, basestring):
            self.str_value = value
        elif isinstance(value, list):
            self.list_proxy = value
        else:
            assert False

class Animal(Base):
    __tablename__ = 'animal'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    _facts = relationship(AnimalFact, backref='animal',
                          collection_class=attribute_mapped_collection('key'))
    facts = association_proxy('_facts', 'value')

    def __init__(self, name):
        self.name = name

    # dictionary interface around facts.
    # I'd just use animal.facts here, but here's how to skip that.
    def __getitem__(self, key):
        return self.facts.__getitem__(key)

    def __setitem__(self, key, value):
        self.facts.__setitem__(key, value)

    def __delitem__(self, key):
        self.facts.__delitem__(key)

    def __contains__(self, key):
        return self.facts.__contains__(key)

    def keys(self):
        return self.facts.keys()


class StringEntry(Base):
    __tablename__ = 'myvalue'
    id = Column(Integer, primary_key=True)
    animal_id = Column(Integer)
    key = Column(Integer)
    value = Column(String)

    # because AnimalFact has a composite PK, we need
    # a composite FK.
    __table_args__ = (ForeignKeyConstraint(
        ['key', 'animal_id'],
        ['animalfacts.key', 'animalfacts.animal_id']),
    )

    def __init__(self, value):
        self.value = value

engine = create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)

session = Session(engine)


# create a new animal
a = Animal('aardvark')

a['eyes'] = ['left side', 'right side']

a['cute'] = 'sort of'

session.add(a)
session.commit()
session.close()

for animal in session.query(Animal):
    print animal.name, ", ".join(["%s" % animal[key] for key in animal.keys()])




On Aug 28, 2012, at 2:43 PM, Jacob Biesinger wrote:

 Yes, this is the pattern I'm trying to describe though I've been thinking of 
 these as typed collections [...]

[sqlalchemy] getting unicode results from mssql with pyodbc (where mssql encoding is windows-1255) using turbogears scoped DBSession

2012-08-28 Thread alonn
some of my sqlalchemy 0.7.3 (with turbogears 2.1.4) models work with a 
mssql 2005 db using pyodbc.

(No, I can't change this, don't bother suggesting it; this is an enterprise 
financial system, and I can just read and write to certain tables there)

the query results come back encoded as windows-1255 instead of utf-8.
failing to return unicode causes various 'UnicodeDecodeError' errors in 
sprox and toscawidgets, which I can override manually by rewriting certain 
lines in the sprox/tw.forms source code, but that's not exactly an optimal 
solution.

is there a way to specify in the connection url to convert the data to 
standard unicode encoding?

currently using the following format:

sqlalchemy.second.url = mssql://user:password@SERVER\db

or maybe changing some parameter in the sqlalchemy engine would do the 
trick?



thanks for the help


-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/xTmE0yTs810J.



Re: [sqlalchemy] getting unicode results from mssql with pyodbc (where mssql encoding is windows-1255) using turbogears scoped DBSession

2012-08-28 Thread Michael Bayer
what ODBC driver?   Encoding issues are typically configured at the ODBC level.
It makes a huge difference whether you're on the Windows drivers, FreeTDS, or 
anything else.


also I use MSSQL 2005 in production financial applications as well.
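For reference, the 0.7-era engine-wide knobs for decoding non-unicode results look roughly like the following. This is a sketch, not a confirmed fix: cp1255 is Python's codec name for windows-1255, the DSN is illustrative, and whether SQLAlchemy ever gets to do the decoding at all depends on the ODBC driver in play (later SQLAlchemy versions removed these flags entirely).

```python
from sqlalchemy import create_engine

# Config sketch only (SQLAlchemy 0.7-era API; URL is illustrative).
engine = create_engine(
    "mssql+pyodbc://user:password@SERVER/db",
    encoding="cp1255",       # codec used when SQLAlchemy must decode bytes
    convert_unicode=True,    # coerce plain String results to unicode
)
```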


On Aug 28, 2012, at 4:43 PM, alonn wrote:

 some of my sqlalchemy 0.7.3 (with turbogears 2.1.4) models work with a mssql 
 2005 db using pyodbc. [...]




Re: [sqlalchemy] OrderingList weird reordering after session.commit()

2012-08-28 Thread Michael Bayer

On Aug 28, 2012, at 11:59 AM, Gilles Lenfant wrote:

 Hi,
 
 I extracted the code from a complex application and isolated the issue I'm 
 actually facing in a short sample here : https://gist.github.com/3499127
 
 To summarize: I need a parent entity that has ordered children. For this I 
 use a sqlalchemy.ext.orderinglist.ordering_list, which is supposed to 
 transparently manage the position attribute of each child to keep a 
 persistent ordered list (line 20).
 
 The real application logic appends or inserts the Child entities at the 
 appropriate position using the OrderingList canonical API.
  
 So far so good: my app manages the children's order perfectly (as in lines 57, 
 58), but when committing the changes, the last created child is moved to 
 the end of its parent's OrderingList (lines 61-66), whatever position 
 I inserted that child at before committing.
  
 I don't know if I'm facing a real SQLAlchemy issue or if I missed 
 something (sorry, I'm an SA noob). Googling around didn't yield valuable 
 information regarding this specific issue.
 Any help would be really appreciated. Kind regards.

the orderinglist maintains ordering during in-Python mutations.   It doesn't 
have any effect when rows are loaded from the database, which is a good thing 
as the database can send the rows back ordered much more efficiently than if 
the ordering list had to respond to every row loaded.

So your relationship needs an order by:

children = relationship('Child', backref='parent',
                        collection_class=ordering_list('position'),
                        order_by='Child.position')


I checked the docs, and it is there, though these are kind of old docs; they 
spend a lot of time repeating themselves, and we didn't have the benefit of 
Sphinx directives when they were written. If you read the doc for 
OrderingList.__init__ 
(http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/orderinglist.html#sqlalchemy.ext.orderinglist.OrderingList.__init__):
"This implementation relies on the list starting in the proper order, so be 
sure to put an order_by on your relationship."   But this should be more 
prominently mentioned at the top as a note box, and some of the redundancy 
should be chopped down here.
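A minimal runnable version of the fix (the model and column names are invented for the demo): with order_by present on the relationship, a child inserted mid-list keeps its position across a commit and a reload.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # order_by makes rows come back sorted, so the ordering_list
    # starts out in the proper order after a reload.
    children = relationship('Child',
                            collection_class=ordering_list('position'),
                            order_by='Child.position')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    position = Column(Integer)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

p = Parent()
p.children.append(Child(name='a'))
p.children.append(Child(name='c'))
p.children.insert(1, Child(name='b'))   # renumbers positions to 0, 1, 2
session.add(p)
session.commit()

session.expire_all()                    # force a reload from the database
print([c.name for c in p.children])     # ['a', 'b', 'c']
```

Without the order_by, the reloaded collection comes back in whatever order the database returns the rows, which is why the inserted child appeared to jump to the end.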
