[sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Michael Bayer
Hello lists -

As some of you know, I've been promising an update to the Beaker caching system 
for some months now, building on a pair of libraries, dogpile and dogpile.cache 
(see the original blog post at 
http://techspot.zzzeek.org/2011/10/01/thoughts-on-beaker/ for background).
Following the effort that started at about that time, I'm pleased to announce 
the initial alpha release of dogpile.cache, now available from PyPI.

dogpile.cache builds on the dogpile locking system, which implements, in the 
abstract, the idea of allowing one creator to write while others read. Overall, 
dogpile.cache is intended as a replacement for the Beaker caching system, the 
internals of which were written by the same author. All the ideas of Beaker 
that work are re-implemented in dogpile.cache in a more efficient and 
succinct manner, and all the cruft (Beaker's internals were first written in 
2005) is relegated to the trash heap.

Key, nifty features of dogpile.cache include a really straightforward API, 
significant performance improvements over Beaker, a pluggable, key-distributed 
locking system (including a memcached-based lock out of the box), and 
registration of new and/or modified cache backends as a matter of routine, 
directly or via setuptools entry points.

There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache and 
you can read all the docs at http://dogpilecache.readthedocs.org/.   I'm hoping 
to get some testers and initial feedback.
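
A minimal usage sketch along the lines of the README (the pylibmc backend and 
the memcached address here are placeholder assumptions):

from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.pylibmc",
    expiration_time=3600,
    arguments={"url": ["127.0.0.1"]},
)

@region.cache_on_arguments()
def load_user_info(user_id):
    # the body runs at most once per key until expiration; concurrent
    # callers wait on the dogpile lock instead of regenerating the value
    return {"user_id": user_id}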








Re: [sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Wichert Akkerman

On 2012-4-9 17:28, Michael Bayer wrote:

There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache and 
you can read all the docs at http://dogpilecache.readthedocs.org/.   I'm hoping 
to get some testers and initial feedback.


Can you shed some light on how this differs from retools 
(http://readthedocs.org/docs/retools/en/latest/), other than that 
dogpile does not support redis and retools only supports redis?


Wichert.

--
Wichert Akkerman wich...@wiggy.net   It is simple to make things.
http://www.wiggy.net/  It is hard to make things simple.




Re: [sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 11:49 AM, Wichert Akkerman wrote:

 On 2012-4-9 17:28, Michael Bayer wrote:
 There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache 
 and you can read all the docs at http://dogpilecache.readthedocs.org/.   I'm 
 hoping to get some testers and initial feedback.
 
 Can you shed some light on how this differs from retools 
 (http://readthedocs.org/docs/retools/en/latest/), other than that dogpile 
 does not support redis and retools only supports redis?
 

Basically, the caching and distributed locking features of retools should be a 
dogpile backend, and in fact they can be, if retools wants to publish a 
dogpile.cache backend.   Though looking at the source, he'd have to rip some 
more rudimentary functions out of CacheRegion.load(), which seems to be 
totally inlined right now.   The redis lock itself could be used, but needs a 
"wait" flag.

It seems like we'd almost be better off taking the stats logic wired into 
load() and just making that an optional feature of dogpile, so that all the 
backends could get at hit/miss stats equally.  The get/set/lock things I see in 
retools would only give us like a dozen lines of code that can actually be 
reused, and putting keys into redis and locking are obviously almost the same 
as in a memcached backend in any case.   It's just a question of which project 
wants to maintain the redis backend.

He appears to have a region-invalidate feature taking advantage of being able 
to query across a range in redis. That's not something we can generalize across 
backends, so I try not to rely on things like that, but dogpile can expose this 
feature via the backend directly.

The function decorators and the dogpile integration in retools are of course 
derived from Beaker the same way dogpile's are, and work similarly.  Dogpile's 
schemes are generalized and pluggable.

It seems like Ben stuck with the @decorate("short-term", namespace) model we 
first did in Beaker, whereas with dogpile.cache the API looks more like 
Flask-Cache (http://packages.python.org/Flask-Cache/), where you have a cache 
object that provides the decorator.

The queue/job/etc. features appear to be something else entirely; I'd ask how 
that compares to celery-redis and other redis-queue solutions.






Re: [sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Wichert Akkerman

On 2012-4-9 18:28, Michael Bayer wrote:


On Apr 9, 2012, at 11:49 AM, Wichert Akkerman wrote:


On 2012-4-9 17:28, Michael Bayer wrote:

There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache and 
you can read all the docs at http://dogpilecache.readthedocs.org/.   I'm hoping 
to get some testers and initial feedback.


Can you shed some light on how this differs from retools 
(http://readthedocs.org/docs/retools/en/latest/), other than that dogpile does 
not support redis and retools only supports redis?



Basically, the caching and distributed locking features of retools should be a dogpile 
backend, and in fact they can be, if retools wants to publish a dogpile.cache backend.
Though looking at the source, he'd have to rip some more rudimentary functions out of 
CacheRegion.load(), which seems to be totally inlined right now.   The redis lock itself 
could be used, but needs a "wait" flag.


Would you also be willing to accept a pull request (assuming you use 
git; otherwise a patch) that adds a redis backend to dogpile directly?



It seems like we'd almost be better off taking the stats logic wired into 
load() and just making that an optional feature of dogpile, so that all the backends 
could get at hit/miss stats equally.  The get/set/lock things I see in retools would only give us 
like a dozen lines of code that can actually be reused, and putting keys into redis and locking are 
obviously almost the same as in a memcached backend in any case.   It's just a question of which 
project wants to maintain the redis backend.


The statistics are certainly very useful.


He appears to have a region-invalidate feature taking advantage of being able 
to query across a range in redis. That's not something we can generalize across backends, 
so I try not to rely on things like that, but dogpile can expose this feature via the 
backend directly.


Region invalidation is very, very useful in my experience: it allows 
us to have a management system invalidate caches for a running website 
without having to make it aware of all the implementation details of the 
site. We can now simply say 'invalidate everything related to magazines' 
instead of invalidating 15 different functions separately (which would 
get out of sync as well).



The function decorators and the dogpile integration in retools are of course 
derived from Beaker the same way dogpile's are, and work similarly.  Dogpile's 
schemes are generalized and pluggable.


A problem I have with the Beaker and retools decorators is that they 
make it very hard to include context from a view in a cache key. For 
example, for complex views it is very common to want to cache a helper 
method on the view class, with the context and things like 
request.application_url as part of the cache key, but those are 
never passed to the method. That leads to code like this:


class MyView:
    @some_cache_decorator
    def _slow_task(self, context_id):
        # Do something
        ...

    def slow_task(self):
        return self._slow_task(self.context.id)

One approach I used to take was a decorator which could take a 
function parameter that returned extra cache keys. You could use that 
like this:


class MyView:
    def _cachekey(self, *a, **kw):
        return (self.request.application_url, self.context.id)

    @some_cache_decorator(extra_keys=_cachekey)
    def slow_task(self):
        # Do things here
        ...



It seems like Ben stuck with the @decorate("short-term", namespace) model we first did 
in Beaker, whereas with dogpile.cache the API looks more like Flask-Cache 
(http://packages.python.org/Flask-Cache/), where you have a cache object that provides the 
decorator.


That sounds like the dogpile approach does not support environments 
where you have multiple copies of the same application in the same 
process space but using different configurations? That's a rare 
situation, but to some people it appears to be important.




The queue/job/etc. features appear to be something else entirely; I'd ask how that compares 
to celery-redis and other redis-queue solutions.


I can already answer that one :)

Wichert.

--
Wichert Akkerman wich...@wiggy.net   It is simple to make things.
http://www.wiggy.net/  It is hard to make things simple.




[sqlalchemy] Computed by columns

2012-04-09 Thread Werner
Is there a way in SA 0.7.6 to define computed columns as part of other 
DeclarativeBase columns?


Currently I do manual DDL, but I would like to set things up to make it 
easier to move to another db backend.


Most of my computed columns are calculations like the one for 
AVAILCAPACITY below, but I also have a few simple selects, like in the 
USEDCAPACITY column.


CREATE TABLE WINERACKIT (
    WINERACKITID    PKEYS NOT NULL,
    SPLIT           INTEGER NOT NULL,
    CAPACITY        INTEGER DEFAULT 0,
    FK_WINERACKBID  PKEYS NOT NULL /* PKEYS = BIGINT */,
    USEDCAPACITY    COMPUTED BY ((SELECT COUNT(BOTTAG.TAGNO) FROM BOTTAG
                    WHERE BOTTAG.FK_WINERACKIT_ID = WINERACKIT.ID)),
    AVAILCAPACITY   COMPUTED BY (CAPACITY - USEDCAPACITY)
)

Thanks in advance for some pointers on how to do this.

Werner




Re: [sqlalchemy] Computed by columns

2012-04-09 Thread Michael Bayer
Even Google is not telling me which vendor has produced this expression, 
because "computed by" is the most generic phrase.

I see a brief mention in Firebird's docs?   No clue.

I'd probably use @compiles on top of CreateTable for this, and use a regular 
expression to tokenize out the columns that have "computed_by" in their info 
field.

To really support extensions like this fully, we'd have to break out the 
compiler to make use of a CreateColumn directive; then you'd be able to put 
@compiles on top of CreateColumn and generate the columns directly.   Wouldn't 
be too terrible, but 0.8 is quite backlogged (see 
http://www.sqlalchemy.org/trac/ticket/2463).

Or why not do it at the type level?   Can you just make a UserDefinedType 
here?   2463 might be nicer.
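
A rough sketch of the @compiles-on-CreateTable idea; the "computed_by" key in 
each Column's info dict is an invented convention, and the textual rewrite is 
deliberately naive:

import re

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import CreateTable

@compiles(CreateTable)
def create_table_with_computed(create, compiler, **kw):
    # render the CREATE TABLE normally first
    ddl = compiler.visit_create_table(create)
    for column in create.element.columns:
        expr = column.info.get("computed_by")
        if expr:
            # rewrite "<name> <TYPE>" as "<name> COMPUTED BY (<expr>)";
            # a real version would tokenize more carefully
            name = compiler.preparer.format_column(column)
            ddl = re.sub(
                r"%s\s+[A-Z0-9 ]+" % re.escape(name),
                "%s COMPUTED BY (%s)" % (name, expr),
                ddl,
            )
    return ddl

A column would then be declared like 
Column('availcapacity', Integer, info={'computed_by': 'capacity - usedcapacity'}).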



On Apr 9, 2012, at 1:36 PM, Werner wrote:

 Is there a way in SA 0.7.6 to define computed columns as part of other 
 DeclarativeBase columns?
 
 Currently I do manual DDL, but I would like to set things up to make it 
 easier to move to another db backend.
 
 Most of my computed columns are calculations like the one for AVAILCAPACITY 
 below, but I also have a few simple selects, like in the USEDCAPACITY column.
 
 CREATE TABLE WINERACKIT (
     WINERACKITID    PKEYS NOT NULL,
     SPLIT           INTEGER NOT NULL,
     CAPACITY        INTEGER DEFAULT 0,
     FK_WINERACKBID  PKEYS NOT NULL /* PKEYS = BIGINT */,
     USEDCAPACITY    COMPUTED BY ((SELECT COUNT(BOTTAG.TAGNO) FROM BOTTAG
                     WHERE BOTTAG.FK_WINERACKIT_ID = WINERACKIT.ID)),
     AVAILCAPACITY   COMPUTED BY (CAPACITY - USEDCAPACITY)
 )
 
 Thanks in advance for some pointers on how to do this.
 
 Werner
 




[sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Mitchell Hashimoto
Hi,

I am continually getting this sort of error after some amount of time: 
"QueuePool limit of size 30 overflow 10 reached, connection timed out, 
timeout 30"

We're using only the SQLAlchemy Core expression API, so we're not wrapping 
anything in sessions, and I'm not sure how this is happening.

Any pointers?

Thanks,
Mitchell




Re: [sqlalchemy] remove a filter expression from session.query

2012-04-09 Thread Michael Bayer

On Apr 7, 2012, at 9:44 AM, Jonathan MacCarthy wrote:

 Hi, all.  I'm trying to build queries that handle the presence of certain 
 database-dependent functions, like this:
 
 # 
 q = session.query(MyTable)
 
 if isCustomQuery:
     q = q.filter(func.custom_function(MyTable.distance > 0))
 
 try:
     results = q.all()
 except OperationalError:
     # database doesn't have custom_function:
     # remove the func.custom_function filter here and do something else
     results = q.all()
 
 # 
 
 How do I remove filters that, for whatever reason, don't work?  Am I going 
 about this the wrong way?  I'd also be happy with a way to test whether or 
 not func.custom_function will compile, but I haven't found an answer to 
 either of these options in my searching.

what could custom_function possibly be that you need to do it on certain 
backends but not others, in order to get the same result?

in any case, I'd reverse things:

q = myquery

try:
    results = q.filter(func.myfunc()).all()
except ...


and then after that, I'd use @compiles so that on unsupported backends, I'd get 
"1=1" or something universally true like that:

http://docs.sqlalchemy.org/en/latest/core/compiler.html#subclassing-guidelines
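
A small sketch of that idea, following the compiler extension docs linked 
above; the sqlite dialect is just an assumed stand-in for a backend that 
lacks the function:

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement

class custom_function(FunctionElement):
    name = "custom_function"

@compiles(custom_function)
def render_default(element, compiler, **kw):
    # default rendering: a normal function call
    return "custom_function(%s)" % compiler.process(element.clauses)

@compiles(custom_function, "sqlite")
def render_sqlite(element, compiler, **kw):
    # this backend doesn't have the function; render something
    # universally true instead
    return "1 = 1"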





Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer
close your connections after you are finished with them.


On Apr 9, 2012, at 2:43 PM, Mitchell Hashimoto wrote:

 Hi,
 
 I am continually getting this sort of error after some amount of time: 
 "QueuePool limit of size 30 overflow 10 reached, connection timed out, 
 timeout 30"
 
 We're using only the SQLAlchemy Core expression API, so we're not wrapping 
 anything in sessions, and I'm not sure how this is happening.
 
 Any pointers?
 
 Thanks,
 Mitchell
 




Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Claudio Freire
On Mon, Apr 9, 2012 at 4:03 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 close your connections after you are finished with them.

They should be automatically returned to the pool when unreferenced.

The OP may be storing stray references somewhere, or associating them
somehow with a reference cycle that takes time to be freed.

In any case, explicit closing may not be the greatest idea (that
connection won't go back to the pool, I think; not sure, please SA
gurus confirm); rather, they should be de-referenced thoroughly.




Re: [sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 12:57 PM, Wichert Akkerman wrote:

 
 Would you also be willing to accept a pull request (assuming you use git; 
 otherwise a patch) that adds a redis backend to dogpile directly?

sure

 
 It seems like we'd almost be better off taking the stats logic wired into 
 load() and just making that an optional feature of dogpile, so that all the 
 backends could get at hit/miss stats equally.  The get/set/lock things I see 
 in retools would only give us like a dozen lines of code that can actually 
 be reused, and putting keys into redis and locking are obviously almost the 
 same as in a memcached backend in any case.   It's just a question of which 
 project wants to maintain the redis backend.
 
 The statistics are certainly very useful.
 
 He appears to have a region-invalidate feature taking advantage of being 
 able to query across a range in redis. That's not something we can 
 generalize across backends, so I try not to rely on things like that, but 
 dogpile can expose this feature via the backend directly.
 
 Region invalidation is very, very useful in my experience: it allows us 
 to have a management system invalidate caches for a running website without 
 having to make it aware of all the implementation details of the site. We can 
 now simply say 'invalidate everything related to magazines' instead of 
 invalidating 15 different functions separately (which would get out of sync as 
 well).

great, just not something that's possible with memcached, for example.

 
 The function decorators and the dogpile integration in retools are of course 
 derived from Beaker the same way dogpile's are, and work similarly.  
 Dogpile's schemes are generalized and pluggable.
 
 A problem I have with the Beaker and retools decorators is that they make it 
 very hard to include context from a view in a cache key. For example, for 
 complex views it is very common to want to cache a helper method on the view 
 class, with the context and things like request.application_url as part of 
 the cache key, but those are never passed to the method. That leads to code 
 like this:
 
 class MyView:
     @some_cache_decorator
     def _slow_task(self, context_id):
         # Do something
         ...
 
     def slow_task(self):
         return self._slow_task(self.context.id)
 
 One approach I used to take was a decorator which could take a function 
 parameter that returned extra cache keys. You could use that like 
 this:
 
 class MyView:
     def _cachekey(self, *a, **kw):
         return (self.request.application_url, self.context.id)
 
     @some_cache_decorator(extra_keys=_cachekey)
     def slow_task(self):
         # Do things here
         ...


Well, I've made the function that comes up with the cache key pluggable.  So 
you can make your own that interprets the namespace parameter as a tuple or 
something else, rather than a plain string as is its usual role, and then 
snatches whatever you'd like from each function call.
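
A rough sketch of a custom key function via dogpile.cache's 
function_key_generator hook; the view attributes pulled into the key here are 
assumptions for illustration:

from dogpile.cache import make_region

def view_key_generator(namespace, fn):
    prefix = "%s:%s" % (fn.__module__, fn.__name__)

    def generate_key(*args):
        # for a method, args[0] is the view instance; pull request
        # context into the key (attribute names assumed)
        view = args[0]
        key = [prefix]
        if namespace is not None:
            key.append(str(namespace))
        key.append(view.request.application_url)
        key.append(str(view.context.id))
        return "|".join(key)

    return generate_key

region = make_region(function_key_generator=view_key_generator)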

 
 
 It seems like Ben stuck with the @decorate("short-term", namespace) model 
 we first did in Beaker, whereas with dogpile.cache the API looks more 
 like Flask-Cache (http://packages.python.org/Flask-Cache/), where you have a 
 cache object that provides the decorator.
 
 That sounds like the dogpile approach does not support environments where 
 you have multiple copies of the same application in the same process space 
 but using different configurations? That's a rare situation, but to some 
 people it appears to be important.

Assuming you're talking about Beaker's middleware, all it did was stick a 
CacheManager object in the wsgi environ.   You can certainly do that with 
dogpile.cache regions too.

After considering some approaches, I think you could just subclass CacheRegion 
and override "backend" and "configure" to pull from the wsgi environment; some 
system of making it available would need to be devised (Pylons made this easy 
with the thread-local registries, but I understand we don't do that with 
Pyramid).







Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 3:25 PM, Claudio Freire wrote:

 On Mon, Apr 9, 2012 at 4:03 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 close your connections after you are finished with them.
 
 They should be automatically returned to the pool when unreferenced.
 
 The OP may be storing stray references somewhere, or associating them
 somehow to a reference cycle that takes time to be freed.
 
 In any case, explicit closing may not be the greatest idea (that
 connection won't go back to the pool I think, not sure, please SA
 gurus confirm), rather, they should be de-referenced thoroughly.

Code that deals with Connection explicitly should definitely have an explicit 
plan in place to close them (where close() on Connection returns the DBAPI 
connection to the pool).    Relying on dereferencing is not very clean, and 
also doesn't work deterministically with environments like PyPy and Jython 
that don't use reference counting.

Using context managers, i.e. "with engine.connect() as conn", is the most 
straightforward.
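
A quick sketch of that pattern, with an in-memory sqlite database as an 
assumed stand-in:

from sqlalchemy import create_engine

engine = create_engine("sqlite://")

with engine.connect() as conn:
    rows = conn.execute("select 1").fetchall()
# leaving the block returns the DBAPI connection to the pool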





Re: [sqlalchemy] Computed by columns

2012-04-09 Thread Werner

Good evening Michael,

On 09/04/2012 20:05, Michael Bayer wrote:

Even Google is not telling me which vendor has produced this expression, 
because "computed by" is the most generic phrase.

I see a brief mention in Firebird's docs?   No clue.

Sorry, I should have mentioned that currently I use Firebird SQL 2.5.  
BTW, it is the first FB release which supports an ALTER COLUMN to 
change the COMPUTED BY clause.


I'd probably use @compiles on top of CreateTable for this, and use a regular expression 
to tokenize out the columns that have "computed_by" in their info field.

To really support extensions like this fully, we'd have to break out the compiler to 
make use of a CreateColumn directive; then you'd be able to put @compiles on 
top of CreateColumn and generate the columns directly.   Wouldn't be too 
terrible, but 0.8 is quite backlogged (see 
http://www.sqlalchemy.org/trac/ticket/2463).

Or why not do it at the type level?   Can you just make a UserDefinedType here?

Will look into these.

2463 might be nicer.

and I will keep watching this one in case it makes it into 0.8.

Thanks
Werner





On Apr 9, 2012, at 1:36 PM, Werner wrote:


Is there a way in SA 0.7.6 to define computed columns as part of other 
DeclarativeBase columns?

Currently I do manual DDL, but I would like to set things up to make it 
easier to move to another db backend.

Most of my computed columns are calculations like the one for AVAILCAPACITY 
below, but I also have a few simple selects, like in the USEDCAPACITY column.

CREATE TABLE WINERACKIT (
    WINERACKITID    PKEYS NOT NULL,
    SPLIT           INTEGER NOT NULL,
    CAPACITY        INTEGER DEFAULT 0,
    FK_WINERACKBID  PKEYS NOT NULL /* PKEYS = BIGINT */,
    USEDCAPACITY    COMPUTED BY ((SELECT COUNT(BOTTAG.TAGNO) FROM BOTTAG
                    WHERE BOTTAG.FK_WINERACKIT_ID = WINERACKIT.ID)),
    AVAILCAPACITY   COMPUTED BY (CAPACITY - USEDCAPACITY)
)

Thanks in advance for some pointers on how to do this.

Werner







Re: [sqlalchemy] dogpile.cache 0.1.0 released

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 3:33 PM, Michael Bayer wrote:

 
 assuming you're talking about Beaker's middleware, all it did was stick a 
 CacheManager object in the wsgi environ.   You can certainly do that with 
 dogpile.cache regions too.
 
 After considering some approaches, I think you could just subclass CacheRegion 
 and override "backend" and "configure" to pull from the wsgi environment; 
 some system of making it available would need to be devised (Pylons made this 
 easy with the thread-local registries, but I understand we don't do that with 
 Pyramid)

Here's what that could look like:

from dogpile.cache.region import CacheRegion

class ConfigInjectedRegion(CacheRegion):
    """A :class:`.CacheRegion` which accepts a runtime method
    of determining backend configuration.

    Supports ad-hoc backends per call, allowing storage of
    backend implementations inside of application-specific
    registries, such as wsgi environments.

    """

    def region_registry(self):
        """Return a dictionary where :class:`.ConfigInjectedRegion`
        will store backend implementations.

        This is typically stored in a place like the wsgi
        environment or other application configuration.

        """
        raise NotImplementedError()

    def configure(self, backend, **kw):
        self.region_registry()[self.name] = super(
            ConfigInjectedRegion, self).configure(backend, **kw)

    @property
    def backend(self):
        return self.region_registry()[self.name]

Suppose we were using Pylons' config stacked proxy.  Usage is then like:

from pylons import config

class MyRegions(ConfigInjectedRegion):
    def region_registry(self):
        return config['dogpile_registries']

def setup_my_app(config):
    config['dogpile_registries'] = {}


I haven't used Pyramid much yet, but whatever system it has of setting up 
these config/request things for the lifespan of a request, you'd inject onto 
the MyRegions object.

I could commit ConfigInjectedRegion as it is, but I'd want to check that this 
use case works out.





Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Claudio Freire
On Mon, Apr 9, 2012 at 4:36 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 Using context managers, i.e. with engine.connect() as conn, is the most 
 straightforward.

IIRC, context managers are new in SA, aren't they?




Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer
there are some newer ones, including that one; we've had a few for some years


On Apr 9, 2012, at 5:39 PM, Claudio Freire wrote:

 On Mon, Apr 9, 2012 at 4:36 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Using context managers, i.e. with engine.connect() as conn, is the most 
 straightforward.
 
 IIRC, context managers are new in SA, aren't they?
 




Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer
A better one to use from the engine is begin() (also new in 0.7.6):

with engine.begin() as conn:
...

That way, everything you do with conn is in the same transaction.
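
A short sketch of the difference, again with an in-memory sqlite database 
standing in:

from sqlalchemy import create_engine

engine = create_engine("sqlite://")
engine.execute("create table users (name varchar(20))")

# both inserts run in one transaction: committed when the block exits
# normally, rolled back if an exception is raised
with engine.begin() as conn:
    conn.execute("insert into users (name) values ('wendy')")
    conn.execute("insert into users (name) values ('mary')")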



On Apr 9, 2012, at 5:39 PM, Claudio Freire wrote:

 On Mon, Apr 9, 2012 at 4:36 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Using context managers, i.e. with engine.connect() as conn, is the most 
 straightforward.
 
 IIRC, context managers are new in SA, aren't they?
 




Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Claudio Freire
On Mon, Apr 9, 2012 at 6:50 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 better one to use from engine is begin() (also new in 0.7.6):

 with engine.begin() as conn:
    ...

 that way everything you do with conn is on the same transaction.

Yeah; that's because I'm using 0.5.8 (I couldn't switch to 0.6.x yet; the
app breaks with it).




Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Mitchell Hashimoto


On Monday, April 9, 2012 12:03:29 PM UTC-7, Michael Bayer wrote:

 close your connections after you are finished with them.


So I suppose my confusion is about where the connection is being made. I have a 
singleton engine instance running around, and when I query, I basically do 
something like this:

query = select([fields])
result = engine.execute(query).fetchall()

Therefore I'm using implicit connections. The reference to the ResultProxy 
is quickly gone, which I thought would implicitly close the connection as 
well.

What exactly am I supposed to do here?

Mitchell
 



 On Apr 9, 2012, at 2:43 PM, Mitchell Hashimoto wrote:

 Hi,

 I am continually getting this sort of error after some amount of time: 
 "QueuePool limit of size 30 overflow 10 reached, connection timed out, 
 timeout 30"

 We're using only the SQLAlchemy Core expression API, so we're not 
 wrapping anything in sessions, and I'm not sure how this is happening.

 Any pointers?

 Thanks,
 Mitchell








Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 8:24 PM, Mitchell Hashimoto wrote:

 
 
 On Monday, April 9, 2012 12:03:29 PM UTC-7, Michael Bayer wrote:
 close your connections after you are finished with them.
 
 So I suppose my confusion is about where the connection is being made. I have a 
 singleton engine instance running around, and when I query, I basically do 
 something like this:
 
 query = select([fields])
 result = engine.execute(query).fetchall()
 
 Therefore I'm using implicit connections. The reference to the ResultProxy is 
 quickly gone, which I thought would implicitly close the connection as well.
 
 What exactly am I supposed to do here?

That pattern will return the connection to the pool immediately after use - so 
for that to be the only pattern at play, you'd have to have some execute() or 
fetchall() calls that are taking so long to complete that concurrent executions 
are timing out.   You'd want to look at any processes/threads hanging or taking 
very long and causing other concurrent connections to time out.

If you aren't using threads or concurrency, and expect that only one connection 
should be in use at a time for a given engine, then I'd give the AssertionPool 
a quick try; it will ensure you're only checking out one connection at a 
time, illustrating with a stack trace where a second concurrent checkout would 
be occurring.
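
For reference, a one-line sketch of wiring that in (the sqlite URL is an 
assumed placeholder):

from sqlalchemy import create_engine
from sqlalchemy.pool import AssertionPool

# raises an error, pointing at the prior checkout, if a second
# connection is checked out at the same time
engine = create_engine("sqlite://", poolclass=AssertionPool)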





Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Mitchell Hashimoto
On Mon, Apr 9, 2012 at 5:32 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Apr 9, 2012, at 8:24 PM, Mitchell Hashimoto wrote:



 On Monday, April 9, 2012 12:03:29 PM UTC-7, Michael Bayer wrote:

 close your connections after you are finished with them.


 So I suppose my confusion is about where the connection is being made. I have a
 singleton engine instance running around, and when I query, I basically do
 something like this:

 query = select([fields])
 result = engine.execute(query).fetchall()

 Therefore I'm using implicit connections. The reference to the ResultProxy
 is quickly gone, which I thought would implicitly close the connection as
 well.

 What exactly am I supposed to do here?


 That pattern will return the connection to the pool immediately after use -
 so for that to be the only pattern at play, you'd have to have some
 execute() or fetchall() calls that are taking so long to complete that
 concurrent executions are timing out.   You'd want to look at any
 processes/threads hanging or taking very long and causing other concurrent
 connections to time out.

 If you aren't using threads or concurrency, and expect that only one
 connection should be in use at a time for a given engine, then I'd give the
 AssertionPool a quick try; it will ensure you're only checking out one
 connection at a time, illustrating with a stack trace where a second concurrent
 checkout would be occurring.


In addition to fetchall() there are some fetchone() calls as well. I'm
assuming in these cases I need to explicitly close the ResultProxy?
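
That is, something along these lines ("stmt" being an assumed statement):

result = engine.execute(stmt)
row = result.fetchone()
# the result isn't exhausted, so release the connection explicitly
result.close()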

Mitchell






Re: [sqlalchemy] QueuePool limit size reached, using expression API only

2012-04-09 Thread Michael Bayer

On Apr 9, 2012, at 8:32 PM, Michael Bayer wrote:

  You'd want to look at any processes/threads hanging 

Correction: threads.   Other processes wouldn't have any impact here.

