On Tuesday, October 18, 2016 at 9:29:37 AM UTC-4, Mike Bayer wrote:
> This approach can also be made extremely performant by making sure you
> load up that profile datafile just once, and cache it in memory in a
> dictionary.
>
What's also nice about the decorator approach in this respect is that
converting to JSON would look like this:
for row in rows:
    converted = json.dumps(row)
It is possible that your JSON encoder cannot handle a particular
SQLAlchemy object. If that is the case, you will need to build your own
`json.dumps` wrapper or encoder that can adapt your data.
Alfred-
I'm sorry you felt offended and misinterpreted my comment. I did not think
you were unskilled in the slightest. Quite the opposite (i did look you
up).
Your concerns and solutions were in line with how every highly
experienced lower-level engineer I've worked with has approached
Your query's underlying sql returns multiple rows. You can cast those
results into a dict in Python.
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable
Example. See
On Wednesday, October 12, 2016 at 5:58:41 PM UTC-4, Alfred Perlstein wrote:
>
> Oh nice!! Will this be called in the context of the same thread
> executing the sql? This would allow me to debug without formatting all
> debugs I believe...
>
It depends on how you're using threads and
On Tuesday, October 11, 2016 at 9:06:33 AM UTC-4, Ludovic Beliveau wrote:
>
> Note that the code is specifying "expire_on_commit=False".
>>
>
This suggests that you have a `commit` in your code. (If you can run the
same block with `expire_on_commit=True`, then you probably just have a
On Monday, October 3, 2016 at 7:41:09 PM UTC-4, Andrew Kowalik wrote:
>
> I have a query that I am baking that uses in_ with a python list. For
> example
> baked_query += lambda q: q.filter(Model.attr1.in_([bindparam('attr1')]))
>
>
Try using a tuple instead of a list. Some Python drivers
In terms of troubleshooting... try running an initialization hook in the
main process (before forking) that does a DB setup and performs this
query. That has helped me out a lot in similar situations.
--
You received this message because you are subscribed to the Google Groups
Ok. I'll put a note on the code and leave it as is.
The cast/bit/op is just too hard to read for maintenance. BUT it might be
okay in one spot as a compiles function. I think I may do that!
Thanks, Mike!
Let's say that I have a table such as this:
CREATE TABLE foo (
    id SERIAL PRIMARY KEY,
    attribute_toggles INT DEFAULT NULL
);
CREATE INDEX idx_attribute_toggles
    ON foo (CAST(attribute_toggles AS BIT(32)))
    WHERE attribute_toggles <> 0;
represented by such a
There is a super-old ticket about this
- https://bitbucket.org/zzzeek/sqlalchemy/issues/959 (yes, I remembered
that!)
And there is a workaround
here: https://groups.google.com/forum/#!topic/sqlalchemy/cIvgH2y01_o using
the compiles system as suggested
here
On Thursday, September 8, 2016 at 12:32:41 PM UTC-4, Iain Duncan wrote:
>
> Hi Jonathan, sure can, here it is below. I think maybe this has something
> to do with the fact that we are creating a new engine and dropping the
> schema on each pass maybe? Because it seems to *only* happen when we
can you share the test harness?
You're doing it wrong, but it's not too hard to correct - it will mostly be
shifting your existing code-blocks around as-is.
Check out the current version of Pyramid's scaffold for SqlAlchemy (I think
the template change happened in Pyramid 1.6), which uses a more
streamlined approach that
On Wednesday, August 17, 2016 at 1:25:17 PM UTC-4, Kent wrote:
>
> Generally, echo=True for log level of INFO is very helpful for log files.
>
If you just have a few of these, you can wrap the flush:
dbSession.add(bigObject)
try:
    # wrap `flush` in a disabled block, so we don't go crazy
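Expanded into something runnable (the model and the unique constraint are invented for illustration), the pattern is: flush inside a try, catch IntegrityError, roll back:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class BigObject(Base):
    __tablename__ = 'big_object'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
dbSession = Session(engine)

dbSession.add(BigObject(name='first'))
dbSession.commit()

flush_failed = False
dbSession.add(BigObject(name='first'))  # violates the UNIQUE constraint
try:
    dbSession.flush()
except IntegrityError:
    dbSession.rollback()  # the session is usable again after the rollback
    flush_failed = True
```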
You may be able to handle this at the database level using triggers.
There are some widely used techniques for emulating a "trigger on commit"
for postgres and oracle (which don't have that), and that can be used to
toggle a flag on the table.
But on most databases, you could also track onto the row
FWIW, instead of creating one big table you may be able to partition the
data at the database level. I know postgres does this; not sure about
others.
If you're unfamiliar with it, you would create 1 master table that has an
id field for the stock. Then you create a partition for each stock.
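A rough DDL sketch of that layout on the Postgres of this era (inheritance-based partitioning; table and stock names are invented):

```sql
CREATE TABLE stock_prices (
    stock_id    INT NOT NULL,
    price       NUMERIC NOT NULL,
    recorded_at TIMESTAMP NOT NULL
);

-- one child table per stock; the CHECK constraint lets the planner skip partitions
CREATE TABLE stock_prices_1 (
    CHECK (stock_id = 1)
) INHERITS (stock_prices);
```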
On Thursday, July 21, 2016 at 12:15:40 PM UTC-4, Simon King wrote:
>
> There are some hints for keeping references to objects at:
>
>
> http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#session-referencing-behavior
>
Thanks. I never knew that.
I do this often.
All you need to do is stash the object into something that is in the scope
you wish the object to survive. An array or dict works perfectly:
stashed = []

def f(s):
    ...
    parents = s.query(Parent).all()  # load all parent objects into the identity map
    stashed.append(parents)
In
On Wednesday, July 13, 2016 at 11:01:08 AM UTC-4, Jonathan Underwood wrote:
Thanks - you're right. That's weird though, as it contradicts the
> documentation. Anyway, I've found it all works as expected, simply by not
> bothering to create the aliases:
>
I think I've run into this problem
On Thursday, June 30, 2016 at 3:38:08 PM UTC-4, Mike Bayer wrote:
>
>
> that's already there, in the form of the Python warnings filter.
> people keep asking about this and it should be in the FAQ.
>
>
Oh that's right - you can set warnings to raise.
postgresql allows for comparing result columns via `LEAST`
sqlite allows for comparing result columns via `min`
so the correct query is...
postgresql: ... ORDER BY least(col_a, col_b)
sqlite:     ... ORDER BY min(col_a, col_b)
I tossed together a custom compiler that works for my
On Thursday, June 30, 2016 at 10:17:34 AM UTC-4, Mike Bayer wrote:
> So it seems clear that probably for 1.2 we will need to add a
> comprehensive layer of being able to change what "IN" does, the choices
> being "raise", "simple-false", "column-false", and probably some others,
> and for
I suggest a github/bitbucket repo (and possibly a PyPI release), then link
off the wiki.
Using those other systems gives a sense of version control/timeliness and
is much easier to copy from than wiki text.
for what it's worth...
I handle a similar situation with:
sql:    begin;
sql:    select Foo for update;
python: if Foo.revision_id != expected: raise IntegrityError()
sql:    update Foo set column = value;
sql:    commit;
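An ORM sketch of the same flow (the Foo model and revision column are illustrative; `with_for_update()` emits the FOR UPDATE, though sqlite, used here so the sketch runs, ignores it):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    revision_id = Column(Integer, nullable=False)
    value = Column(String)

engine = create_engine('sqlite://')  # sqlite ignores FOR UPDATE; pattern shown anyway
Base.metadata.create_all(engine)
s = Session(engine)
s.add(Foo(id=1, revision_id=1, value='old'))
s.commit()

expected_revision = 1
# begin; select ... for update;
foo = s.query(Foo).filter_by(id=1).with_for_update().one()
if foo.revision_id != expected_revision:
    raise RuntimeError("concurrent edit detected")  # the post raised IntegrityError here
foo.value = 'new'
foo.revision_id += 1
s.commit()  # update ...; commit;
```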
At a bare minimum, you want to do something like this:
results = session.query(Container).options(sqlalchemy.orm.load('children')).all()
Where:
* load is either "joinedload" or "subqueryload". "joinedload" issues 1
sql statement with the child table joined. "subqueryload" issues
On Friday, June 17, 2016 at 1:14:05 PM UTC-4, Mike Bayer wrote:
>
> seems appropriate for a basic validator:
>
>
> http://docs.sqlalchemy.org/en/rel_1_0/orm/mapped_attributes.html?highlight=validates#simple-validators
>
>
>
There's also the 'descriptor' pattern (on the same page as the
When this hangs, open up a terminal window for postgres and list all the
queries/connections. Hangs like this often happen because of locking.
It's not technically a deadlock, so it doesn't get reported as one -- but
dropping a table that another connection used/is-using will just stall
On Wednesday, June 8, 2016 at 10:15:37 AM UTC-4, Krishnakant wrote:
>
> So essentially opening a connection at the start of a class method/
> function, doing number of queries and then closing it just before return
> would be the right strategy I think?
>
No.
The scope should last for one
On Friday, June 3, 2016 at 7:49:23 AM UTC-4, Krishnakant wrote:
>
> So it will have no performance difference is it?
>
> If both do same thing then how and why will explicit connection help me
> better?
>
the `engine.execute()` will be slower, because you will be
creating/checking-out a
as the docs state, `engine.execute(foo)` is shorthand for "connection =
engine.connect()" + "connection.execute(foo)". you can verify this in the
source.
for what you describe, it's usually best to grab an explicit connection and
re-use it.
On Sunday, May 22, 2016 at 11:45:23 PM UTC-4, Mike Bayer wrote:
>
>
> Anyway I approve the message and the poster and it should show up in the
> group, I do get the released message emailed to me at least.
>
For whatever reason, it's only going to you. All the "user's first post"
threads
On Saturday, May 21, 2016 at 12:30:26 PM UTC-4, Andrew Pashkin wrote:
>
> It's strange, why I don't see the first post in the thread?
>
this has happened a few times this week. any chance this is from a
moderation feature, mike?
I rarely think in terms of hits anymore, just free memory and CPU. Apache is
still a bit too greedy for me, so I prefer to offload onto a CDN and/or use
nginx to free up resources.
The bigger issues with a lot of the date operations are the internal db
representations, the different functions in each db, and the nonstandardness of
driver behavior. SqlAlchemy does a lot, even when it looks like it doesn't.
Have you thought about using cloudflare to cache the docs?
On Monday, May 16, 2016 at 5:31:25 PM UTC-4, Mike Bayer wrote:
>
> Apparently, readthedocs has been not building for over a week, so at the
> moment the CHANGES link below is very stale. We are trying to get RTD
> to respond for help.
>
Last month they had a big change and migrated
In my experience, doing anything that involves Dates or Times on more than
one database will require a custom compiler -- especially when Sqlite is
involved. This only appears to be a problem with SqlAlchemy, because it is
standardizing all these completely non-standard database behaviors.
Just posting this to the group for SEO equity
The docs for inserting many are all built around a table class:
conn.execute(addresses.insert(), [
    {'user_id': 1, 'email_address': 'j...@yahoo.com'},
    {'user_id': 1, 'email_address': 'j...@msn.com'},
])
I needed to quickly create some
On Wednesday, April 27, 2016 at 5:46:11 PM UTC-4, kevin...@chownow.com
wrote:
>
> Thank you again for all your hard work on SQLAlchemy. It's incredibly
> useful. I'm pleasantly surprised by how few things broke when we moved from
> SQLAlchemy 0.7.9 to the 1.0.12.
>
FYI, something that WILL
There are a lot of ways to handle versioning in SqlAlchemy. This is just
one example. The docs show another approach as well
(http://docs.sqlalchemy.org/en/latest/orm/examples.html#versioning-objects)
It's not included, because it's just an example. It won't suit all cases
and many people
You have to install the python packages for each environment with their
version of pip or easy_install.
Michael helped me with a few similar threads recently.
One of them is here:
https://groups.google.com/forum/#!topic/sqlalchemy/Vw1iBXSLibI
That uses a "last 5" correlated subquery, but you could modify that to be
max-1 and uselist=false
On Wednesday, April 13, 2016 at 7:25:16 PM UTC-4, Mike Bayer wrote:
>
> Well scopedsession.remove throws away the session, so yeah either don't
> call that , or set up the connection immediately on the next session.
I thought "this is obvious, the session is closed on `remove`", but then
On Wednesday, April 6, 2016 at 12:00:22 AM UTC-4, Mike Bayer wrote:
>
> does your test case show that? I guess I can run it.
>
Thanks. I think I finally got it. (!)
The confusion stemmed from how the `join()` required the start point on the
first bit of the "secondary" but not on the latter
resurfacing this, as I'm facing something similar with pyramid.tm.
most of my problems on a new project stem from ZopeTransactionExtension
clearing everything out on a commit for a certain route (only one route
explicitly calls `transaction.commit`)
I'm fine with the additional selects being
On Monday, April 4, 2016 at 11:16:56 PM UTC-4, Mike Bayer wrote:
>
> i think the problem is that if you aren't nesting the JOIN and instead
> doing an implicit join, then things like joined eager loading don't work
> out as well. There's no subqueries in your primaryjoin I doubt there's
> a
I've been battling with a particular relationship and eventually managed to
get it working based off the docs when I realized I had a similar way to
short-circuit the chain, however I'm not happy with the result.
(https://gist.github.com/jvanasco/6a98b339771bdee38e5d77b0da6ffd98)
My
Thanks! This works like a charm.
For the next person...
`year_week(column)` turns a timestamp into "{Year}.{week of year}". I
should probably zero-pad the week, but not now.
--
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
import sqlalchemy.types
I forgot to add... in the interim I checked the
session.connection().dialect and used a different filter for sqlite
I need to extract the week of the year across multiple databases (this
needs to work on at least sqlite and postgres, though mysql would be good
too.)
the problem? this operation isn't standard:
sqlite:
    select strftime('%W', cast(timestamp_event as text)) from table_a;
postgres:
if you can put together a self-contained repeatable example that triggers
this, i'd be happy to take a look.
On Thursday, March 31, 2016 at 6:46:10 PM UTC-4, Robert Smith wrote:
>
> Thank you. That's a good idea but in this case, I'm really wondering if
> sqlalchemy should use that small change to improve performance quite a bit
> in this type of queries.
>
I've been in your place.
1. It sounds like
On Thursday, March 31, 2016 at 9:43:11 AM UTC-4, Mike Bayer wrote:
>
> this is more of a Postgresql optimization question so I dont have any
> insight on that.
>
FWIW, I generally handle these types of "optimize postgres" queries using a
function to apply the filter - like this:
def
Thanks a ton. This was really quick to implement and solved a bunch of
issues!
On Wednesday, March 30, 2016 at 12:46:01 AM UTC-4, Mike Bayer wrote:
why can't you build a relationship() which has its primaryjoin set up to
> do this? You can do it w/ a correlated subquery.
>
Can you reference an example ?
> That is, unless you want this relationship() to load
I'm in the process of open-sourcing a mini-project and need to support
sqlite -- so I'd like to be a bit easier on DB operations.
The following use-case is puzzling me:
I want to view a record on TableA, which will show the last 5 related
objects in TableB and a 'see all' link.
It's
You could also implement a two-phase commit using `zope.sqlalchemy` and the
`transaction` package.
>
> subquery().as_scalar()
>
BINGO! thanks!
I created a testcase here
- https://gist.github.com/jvanasco/a2ff04614a23e192ac7f
this has been stumping me this morning.
the sql I want is:
update table_a set id_b__latest = ( :other_query )
generated from something like:
s.execute(TableA.__table__
    .update()
Yes, the approach would be to use the exact same session:
dbSession = SQLSession.sql_session()
functionA()
functionB()
dbSession.close()
It looks like functionA and functionB each call `SQLSession.sql_session()`,
which will cause problems. I believe that will create
There might be a way to do it off the relationships API entirely. I do it
the following way all the time:
Join the child table, order by it, then use `contains_eager(childfield)` so
sqlalchemy won't do another query for the child data
return self.session.query(Reference) \
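A filled-out sketch of that pattern (Parent/Child are placeholder models, not from the thread): join the child table, order by it, and mark it with `contains_eager` so the collection is populated from the same statement.

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session, contains_eager

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
s = Session(engine)
s.add(Parent(id=1, children=[Child(name='b'), Child(name='a')]))
s.commit()

# one SELECT: parent joined to child, child-ordered, collection pre-filled
q = (
    s.query(Parent)
    .join(Parent.children)
    .order_by(Child.name)
    .options(contains_eager(Parent.children))
)
parents = q.all()
```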
We all inherit less-than-ideal situations.
If this is running once a day and isn't impacting performance or other
work, I wouldn't really worry about the huge join matrix. It sounds like
the current solution is "good enough". In a few weeks or months you'll be
better acquainted with
GENERALLY SPEAKING
if you have relationships declared on the ORM and just want to iterate on
them, then you don't have to join anything.
The relationships will just iterate based on the join conditions on the orm.
if you end up joining or filtering, you should read up on contains_eager
To add to Simon's response: SqlAlchemy has some postgres-specific
extensions to the dialect.
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSONB
On Friday, March 18, 2016 at 8:48:48 AM UTC-4, Simon King wrote:
I think you should be able to use something like:
>
I find it easier to run raw sql to grab data like that.
The database design you have is less than perfect.
The goal of having to reformat the relational DB into a CSV is less than
perfect.
If I were you, I would think about 3 questions:
1. How often do you have to run this?
2. Does it take too long?
3. Does it use up too much DB/Python memory?
If
The ORM has an `outerjoin` method on queries:
http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.outerjoin
You can also pass "isouter=True" to `join`
http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.join
The core supports an
If you're not doing it already, it may help to set up a playground/testing
environment as you get familiar with SqlAlchemy.
You can create a directory that has a copy-of (or can import) your model.
Then create a series of scripts that create a new engine with echo turned
on.
This way you can
>
> .filter(t1.c1=='hello', and_(t3.c1=='world'))
>
The `and_` is wrong in this context. Everything in `filter` is joined by
"and" by default. You just want:
.filter(t1.c1=='hello', t3.c1=='world')
`and_` is usually used in a nested condition, often under an `or_`.
What might work for you (if the database is structured as such) would be
something like this (in pseudocode)
class ProductAttribute(base):
    id = column-int, primary key
    name = column-unicode

class Product2ProductAttribute(base):
    product_id = column-int, part of
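Spelled out as real declarative classes (a sketch; the association-table columns beyond the snippet are my guesses):

```python
from sqlalchemy import create_engine, Column, ForeignKey, Integer, Unicode
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = 'product'
    id = Column(Integer, primary_key=True)

class ProductAttribute(Base):
    __tablename__ = 'product_attribute'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode)

class Product2ProductAttribute(Base):
    __tablename__ = 'product_2_product_attribute'
    # composite primary key across the two foreign keys
    product_id = Column(Integer, ForeignKey('product.id'), primary_key=True)
    product_attribute_id = Column(Integer, ForeignKey('product_attribute.id'), primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
```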
2 comments:
1. Go through the SqlAlchemy ORM tutorial. What you're describing right
now is dancing around some very basic relationship definition and
loading/query techniques.
2. You haven't described what you actually want to accomplish, only some
general ideas of how you think you could
from all
> this. Thanks again for the help, guys.
>
> On 3/9/16, Jonathan Vanasco <jona...@findmeon.com > wrote:
> >
> > On Wednesday, March 9, 2016 at 3:02:05 PM UTC-5, Alex Hall wrote:
> >>
> >> Fair enough, thanks. I didn't realize it was such
On Wednesday, March 9, 2016 at 3:02:05 PM UTC-5, Alex Hall wrote:
>
> Fair enough, thanks. I didn't realize it was such a complex task; I
> figured it was just a matter of passing an argument to distinct() or
> something equally easy.
>
Yeah PostgreSQL is the only db that supports "DISTINCT
On Wednesday, March 9, 2016 at 3:17:26 PM UTC-5,
jbrow...@clearcosthealth.com wrote:
>
> Like I said I am not terribly network savvy but what I meant was that the
> particular vpn client that I am using says that my "connection speed" is
> ~540Mpbs.
>
That's most-likely a speed to your
Those aren't the same inserts as SqlAlchemy
The SQL you noted above (DECLARE / BEGIN/ END) is one packet of text that
is sent over the wire to the database server and executed. (a few hundred
bytes)
A more appropriate comparison would be sending 1000 separate "INSERT INTO
" statements
It would probably be best for you to figure out the correct raw sql you
want, then convert it to SqlAlchemy.
Postgres is the only DB I know of that offers "DISTINCT ON (columns)" --
and even that works a bit awkwardly.
The query that you want to do isn't actually simple -- there are concerns
1. You're serializing the SqlAlchemy object, not the data.
In order to get the underlying data from an object, I PERSONALLY use a
mixin class for my declarative base that offers `columns_as_dict`:
def columns_as_dict(self):
    return dict((col.name, getattr(self, col.name)) for col in self.__table__.columns)
Just FYI, there is a small edge-case in that approach if the db stores time
with microseconds.
If that's the case, you may be better off comparing to a 'floored' date
column, or using a search where you grab values ">=" the start and "<" the
end. Otherwise you'll miss values that happen
On Friday, February 26, 2016 at 4:54:01 AM UTC-5, Simon King wrote:
>
>
> I'm not aware of any way to do this.
>
>
I'm not either, and I'm scared of the SQL that would be generated and the
wire traffic/memory if there were. That would be subselects within
subqueryloads within... this also
On Thursday, February 25, 2016 at 5:05:25 AM UTC-5, Simon King wrote:
> I think it's a matter of personal preference. Some people like to see all
> the attributes of a class defined as part of the class definition itself,
> in which case they'll need to use 2 relationship definitions with
So in your example (simplified below)
class User(Base):
    ...

class Address(Base):
    ...
    user = relationship('User', backref='addresses')
when you create `Address.user` the `backref` command automatically creates
the "other" side of the relationship --
I'm not sure it's doable either... but I wanted to point out something:
class WeatherStation(MainBase):
class EvaporationData(Base):
They are inheriting from different bases, and `Base` is not associated with
the same metadata as `MainBase`
This may or may not help...
Whenever I have issues with db drivers, I clear out my environment and
start from scratch (before that, I'll remove all the .pyc files too).
1. Sometimes there's a weird import going on and the wrong versions are
loading (due to packaging issues; this happened a lot
Something similar was asked on the list within the past 6 months, but I
couldn't find it.
We found a certain bit of code where one or more relationships were not
properly eager loaded. This results in 300 postgres queries on a page
instead of 30.
The easiest way I could find the problem,
On Wednesday, February 17, 2016 at 4:34:35 AM UTC-5, Simon King wrote:
>
>
> (expr1 == expr2) & (expr3 == expr4)
>
you usually won't need `&` or `and_`, though.
`filter()` automatically "and"s a list:
.filter(expr1 == expr2, expr3 == expr4)
the only time you need to use `and_` is when doing
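e.g. nesting boolean groups, typically under an `or_` (columns invented):

```python
from sqlalchemy import and_, column, or_

a, b, c = column('a'), column('b'), column('c')

# "(a = 1 AND b = 2) OR c = 3" -- here and_ actually changes the meaning
expr = or_(and_(a == 1, b == 2), c == 3)
```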
On Tuesday, February 9, 2016 at 6:20:30 PM UTC-5, Michael Bayer wrote:
>
> I'm not seeing the word "preferred" in that document? It is actually
> simpler from a database introspection perspective to create a UNIQUE
> INDEX alone, the UNIQUE CONSTRAINT is redundant on the Postgresql
>
For validation, I use FormEncode, which is super old.
Just to chime in on the above:
1. Phone Numbers should be strings. They can have a leading 0. Then you
have to worry about extensions, etc.
2. Most people I know will do validation on a form, that is used to
populate the sqlalchemy model
I've run into a few cases where I need to abandon the ORM and run
`session.execute()`
Usually... I'm doing a bunch of nested queries and only pull out a `count`
or a few rows of 1-2 id columns. Writing in pure sql is faster (for me),
gives me more control, and avoids having to do a
On Friday, February 5, 2016 at 6:03:11 PM UTC-5, Michael Bayer wrote:
horrors
>
i know :/
why not call result.scalar() ?
>
ignorance.
result.scalar() I think is what you're looking for
yep!
>
On Thursday, February 4, 2016 at 9:36:43 AM UTC-5, Michael Bayer wrote:
You might find it a better approach overall to use a separate Session
> for persistence, have your users deal with entirely detached objects and
> then just copy their state to the Session when you're ready to persist
>
just implemented the following code:
_saEngine is just the sqlalchemy engine I use.
logging some vars to the terminal window in my routine and this listener,
and I see that this routine is called in the worker after forking.
from celery.signals import worker_process_init
I use SqlAlchemy and Celery and haven't run into this issue yet, but it
appears to be dumb-luck from the size of my worker pool.
I did a bit of searching; check out Celery.signals
Reading through some closed tickets on their github, it looks like
worker_process_init was introduced to
On Wednesday, January 20, 2016 at 7:05:35 PM UTC-5, Amy Smith wrote:
>
> I put "sharded" in quotes because it is almost certainly not the correct
> technical term for what I'm trying to do.
>
That strategy is commonly called "partitioning", and more specifically
"table partitioning".
I'm auditing some code, and noticed that we've often flipped between the
two.
has anyone run benchmarks on the two regarding performance / memory size /
etc ?
On Wednesday, January 20, 2016 at 1:39:05 PM UTC-5, Michael Bayer wrote:
>
> filter_by creates a SQL construct and calls filter(), so the difference
> should be negligible.
>
rad!
Couldn't one also use the "PreFilteredQuery" recipe
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery
to create a marker?
I don't think that filters relationships, while the "GlobalFilter" recipe
does
the postgres dialect for sqlalchemy supports JSON/JSONB column types,
which have some methods. the docs have examples for all.
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON
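For instance, keyed access on a JSONB column renders postgres's `->>` operator (table and key names invented):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql

metadata = MetaData()
docs = Table(
    'docs', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', postgresql.JSONB),
)

# renders the ->> operator, e.g. data ->> 'status'
expr = docs.c.data['status'].astext == 'active'
compiled = str(expr.compile(dialect=postgresql.dialect()))
```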
"Does no such maintained object caching project exist for SQLAlchemy?"
FWIW, I don't think any Python ORM has one (or has a good one). I've seen
some projects experiment with them. Zope has ZEO which sort of emulates
it, but really doesn't.
I've been working on the fundamentals of a dogpile
The sizes for the connection pool are for each instance of your
application. If you have a 10-connection pool and you are running 10
instances of your application on the server, you'll easily have 100
connections. If you're running 1 instance that forks, each fork will have
its own pool (if