There was one more reply in this thread, but it seems it was moderated and
removed. Can someone explain what happened?
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example
Hi,
I'm trying to use get() with a composite primary key, like this:

class GroupPermissionService(BaseService):
    @classmethod
    def get(cls, group_id, perm_name, db_session=None):
        db_session = get_db_session(db_session)
        return db_session.query(cls.model).get([group_id, perm_name])
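For reference, here is a minimal runnable sketch of the same idea (model and column names are illustrative, not the poster's actual schema): the identity-map lookup accepts the full composite key as a tuple or list, in primary-key column order.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class GroupPermission(Base):
    __tablename__ = "group_permissions"
    # Composite primary key: both columns together identify a row.
    group_id = Column(Integer, primary_key=True)
    perm_name = Column(String(64), primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(GroupPermission(group_id=1, perm_name="read"))
session.commit()

# The composite key is passed as a tuple covering every PK column,
# in primary-key column order.
gp = session.get(GroupPermission, (1, "read"))
missing = session.get(GroupPermission, (2, "read"))
```

On modern SQLAlchemy `Session.get()` is the spelling of the old `Query.get()`; both accept the composite key the same way.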
Hi Pravin,
The problem you are seeing here is probably related to the fact that you have
some other foreign keys that rely on the user_id column. MySQL will do
everything to make your life miserable if you change the name or the type
size of a column that has other constraints depending on it - and the …
Unless we are talking millions of rows and a table 500+ MB in size, nothing
happens - it will be very fast.
Normally when you start running ALTERs on your table it will get locked for
a while, but unless the table is really big you won't notice anything.
The performance of those operations depends …
ConcurrentModificationError: Deleted rowcount 0 does not match number
of objects deleted...
I create my schema from models using create_all() - now, the problem
is I get this error only using mysql or mysql+oursql. It happens every
time in one place of my code.
But when I use postgresql …
Hi,
Sorry Michael, I was unclear on this - I tested on five drivers:
psycopg2,
pg8000,
mysql-python (1.2.3c1),
oursql (0.9.2),
mysql-connector (0.1.5)
On the pg drivers everything works fine, on both of them; every single
MySQL driver fails.
the problem is solvable by passing passive_deletes=True to the relationship()
Yeah, a friend suggested to me that it could have been about the fact
that I forgot to specify InnoDB in the model definitions, but unfortunately
that didn't help at all.
I guess I'll set passive_deletes to True, but it would still be good to
investigate further why this happens - since it seems a …
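To illustrate the fix being discussed, here is a minimal sketch of passive_deletes=True (model names are illustrative). With this flag the ORM does not load child rows and delete them one by one; it emits a single DELETE for the parent and lets the database's ON DELETE CASCADE remove the children, which is also why the ORM's deleted-rowcount check stops firing.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

engine = create_engine("sqlite://")

# SQLite only enforces foreign keys (and thus ON DELETE CASCADE)
# when this pragma is enabled on each connection.
@event.listens_for(engine, "connect")
def _enable_fks(dbapi_conn, connection_record):
    dbapi_conn.execute("PRAGMA foreign_keys=ON")

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # passive_deletes=True: don't load children at flush time;
    # trust the database-side ON DELETE CASCADE to remove them.
    children = relationship(
        "Child",
        cascade="all, delete-orphan",
        passive_deletes=True,
    )

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(
        Integer, ForeignKey("parent.id", ondelete="CASCADE"), nullable=False
    )

Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Parent(children=[Child(), Child()])
session.add(p)
session.commit()

session.delete(p)
session.commit()  # one DELETE on parent; the DB cascades to child
```

Note that ondelete="CASCADE" on the ForeignKey (the database-side rule) and passive_deletes=True on the relationship (the ORM-side flag) have to be used together.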
Hello,
I'm looking for a way to execute a recursive query and get ORM
instances with it.
http://www.postgresql.org/docs/8.4/interactive/queries-with.html
I know we don't have any direct support for recursive syntax in SA,
but is there a way to execute an arbitrary query that would return all
the …
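One way to answer the question above (a sketch, with an illustrative self-referential model): write the recursive CTE by hand as textual SQL and hand it to the ORM via from_statement(), which maps each returned row onto the entity so full ORM instances come back.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("node.id"))
    name = Column(String(32))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([
    Node(id=1, parent_id=None, name="root"),
    Node(id=2, parent_id=1, name="child"),
    Node(id=3, parent_id=2, name="grandchild"),
])
session.commit()

# Hand-written recursive CTE; the SELECT list must cover the
# mapped columns so the ORM can construct Node instances.
stmt = text("""
    WITH RECURSIVE tree(id, parent_id, name) AS (
        SELECT id, parent_id, name FROM node WHERE id = :root
        UNION ALL
        SELECT n.id, n.parent_id, n.name
        FROM node AS n JOIN tree AS t ON n.parent_id = t.id
    )
    SELECT id, parent_id, name FROM tree
""")
nodes = session.query(Node).from_statement(stmt).params(root=1).all()
```

The same idea works against the PostgreSQL 8.4 WITH RECURSIVE syntax linked above; later SQLAlchemy versions also grew native CTE support via `select().cte(recursive=True)`.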
Hello,
I'm having a strange problem with CachingQuery.
I have a model that looks like this:
class User(object):
    ...
    @classmethod
    def by_id(cls, id, cache=FromCache("default", "by_id"),
              invalidate=False):
        q = meta.Session.query(User).filter(User.id == id)
        if cache:
            q = q.options(cache)
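To make the caching pattern above concrete, here is a dict-backed sketch of the same idea: `_cache`, the key format, and `fake_query` are illustrative stand-ins for the Beaker/FromCache machinery and the real database query. The point is that a cached result is served unchanged until someone explicitly invalidates it.

```python
# Illustrative stand-in for the FromCache/Beaker cache region.
_cache = {}

def by_id(query_fn, id, invalidate=False):
    key = ("by_id", id)
    if invalidate:
        _cache.pop(key, None)        # drop the cached result
        return None
    if key not in _cache:
        _cache[key] = query_fn(id)   # only here do we hit the database
    return _cache[key]               # may be stale until invalidated

calls = []

def fake_query(id):
    # Stand-in for the real meta.Session.query(User) round trip.
    calls.append(id)
    return {"id": id, "username": "alice"}

first = by_id(fake_query, 1)    # miss: queries the "database"
second = by_id(fake_query, 1)   # hit: served from the cache, no query
by_id(fake_query, 1, invalidate=True)
third = by_id(fake_query, 1)    # miss again after invalidation
```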
Hi,
I do something like this to invalidate:
--code--
User.by_id(id, invalidate=True)
--code--
and on next request
c.user = User.by_id(id)
and no - a regular query DOES NOT return the right contents for me.
c.users = meta.Session.query(User).order_by(User.username).limit(30)
- this will only return …
On May 7, 18:52, Michael Bayer mike...@zzzcomputing.com wrote:
expire_all() your session. That is why you are seeing your stale cache data
with a query that does not specify cache.
Was that added recently? This happens on subsequent requests in a
Pylons application that in the end calls
This is only happening when one uses CachingQuery?
I'm completely lost now, because back in 0.5.x I never saw a single
problem of this kind.
Every new request would use new, correct data.
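The expire_all() advice above can be seen in isolation with a small sketch (illustrative model, no caching layer): once an object sits in the identity map with loaded attributes, a raw SQL change to the row is invisible until the session's state is expired.

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String(64))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(User(id=1, username="alice"))
session.commit()

u = session.get(User, 1)   # loads "alice" into the identity map
# Change the row with raw SQL; the ORM object is not refreshed.
session.execute(text("UPDATE users SET username = 'bob' WHERE id = 1"))
stale = u.username         # still "alice" - served from the identity map
session.expire_all()       # mark every loaded attribute as stale
fresh = u.username         # attribute access re-emits SELECT: now "bob"
```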
--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To post to this
I'm lost and clueless; maybe I'm completely dumb :(
I'm trying really hard but I can't understand the problem.
Every request calls meta.Session.remove() at the end of the request - I
checked, it does that.
Now even when I do:
meta.Session.query(User).order_by(User.username).limit(30)
I see the query …
OK, I THINK I understand what's going on... I reproduced it, and I was
able to track the problem down by restarting memcached and noticing that
it then gives me the right values.
It's kinda deceiving at first. I do a request in the application:
let's assume we have a users …
Yes, I understand that, but to my understanding, if I have a setup of 4
pasters and I use expire_on_commit=True, then if paster1 does a commit()
while paster4 is in the middle of a request somewhere, paster4 will
not be aware that a commit was issued and will not re-fetch the data
mid-request.
Basically, I'm wondering whether, with 4 separate app instances, I'm just
wasting resources re-fetching the data here.
In a multi-server setup this doesn't seem to be a good idea to have
turned on? Isn't it transaction isolation that should decide this?
I wanted to confirm that if we run multiple instances of a web
application, then the sessions in those applications are not aware of
commits issued by other instances' sessions, right?
So expire_on_commit=True does not ensure coherency of data and just
adds overhead, am I correct?
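What expire_on_commit actually buys can be shown in a small sketch (illustrative model; the raw UPDATE stands in for another process writing to the database): with True, the session re-fetches the row on the first attribute access after its own next commit, so it eventually sees the outside write; with False, it keeps serving the in-memory value. Neither setting makes a session aware of another process's commit mid-transaction.

```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String(64))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

seed = sessionmaker(bind=engine)()
seed.add(User(id=1, username="alice"))
seed.commit()
seed.close()

def read_after_commit(expire_on_commit, new_name):
    """Load the user, commit, change the row behind the ORM's back,
    commit again, then see what the session reports."""
    s = sessionmaker(bind=engine, expire_on_commit=expire_on_commit)()
    u = s.get(User, 1)
    s.commit()  # with True, u's attributes are expired at this point
    # Simulate another app instance writing to the same row.
    s.execute(text("UPDATE users SET username = :n WHERE id = 1"),
              {"n": new_name})
    s.commit()
    seen = u.username  # True: re-fetched from the DB; False: stale value
    s.close()
    return seen
```

With expire_on_commit=True the first call sees the new name; with False the second call still reports the value it loaded, even though the database already holds the newer one.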