Try doing:
    level in row
instead of recompiling the query with __str__() each time, which is very expensive and also not very accurate.
Thanks!
How can I compile a statement with these two objects:
st = SELECT user.id AS user_id, user.username AS user_username, user.email AS user_email,
user.password AS user_password, user.bd AS user_bd, user.first_name AS user_first_name,
user.last_name AS user_last_name, user.middle_name AS
On Oct 21, 2011, at 10:30 AM, lestat wrote:
If I do
result = st % params
don't do that (where "that" means using the % operator). Use bound parameters.
Your DBAPI (I'm assuming psycopg2 here) will then format the value correctly.
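A hedged sketch of that advice (connection URL, table, and value are placeholders, not from the thread):

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://user:pw@localhost/dbname")
    conn = engine.connect()
    username = "lestat"

    # Avoid: Python-level string interpolation of values into the SQL text
    bad = conn.execute("SELECT * FROM users WHERE username = '%s'" % username)

    # Prefer: bound parameters -- the DBAPI (psycopg2) quotes the value itself
    good = conn.execute(text("SELECT * FROM users WHERE username = :name"),
                        name=username)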
Maybe the compiler from sqlalchemy can help me?
from sqlalchemy.sql import compiler
from sqlalchemy.sql.expression import text
t = text(st)
c = compiler.SQLCompiler(db.engine.dialect, t)
and now I don't know how to pass the params dict to the compiler.
Thanks.
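Not from the thread, but a hedged sketch of inspecting the compiled form without constructing SQLCompiler by hand; "db.engine" is taken from the snippet above and the statement text is a placeholder:

    from sqlalchemy.sql.expression import text

    stmt = text("SELECT * FROM users WHERE users.id = :user_id")
    compiled = stmt.compile(dialect=db.engine.dialect)

    print(str(compiled))    # SQL rendered with the dialect's placeholder style
    print(compiled.params)  # the parameter dictionary the compiler collected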
On Fri, Oct 21, 2011 at 6:40 PM, Michael
It occurs upon execution. Ultimately it's the cursor.execute() method described here:
http://www.python.org/dev/peps/pep-0249/
Within SQLAlchemy, the general idea is here:
http://www.sqlalchemy.org/docs/core/tutorial.html#using-text
So you'd add your params as second arguments to text().
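A minimal, hedged sketch of that flow (connection URL and values are assumptions): the params dict handed to execute() is what ultimately reaches the DBAPI's cursor.execute().

    from sqlalchemy import create_engine
    from sqlalchemy.sql.expression import text

    engine = create_engine("postgresql+psycopg2://user:pw@localhost/dbname")
    conn = engine.connect()

    stmt = text("SELECT * FROM users WHERE users.id = :user_id")
    result = conn.execute(stmt, {"user_id": 7})   # bound at execution time
    for row in result:
        print(row)
    conn.close()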
they were flushed! The primary key is always fetched when the ORM does
an INSERT.
I know it *does* happen, just not *how*.
It seems that some of my confusion was based entirely on a peculiarity
of the sqlite dialect. When I switch to Postgres it is perfectly clear
that the ids
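A self-contained sketch of the behavior described above (the model, names, and the in-memory database are assumptions, not from the thread):

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine("sqlite://")   # in-memory; Postgres behaves the same way
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    user = User(name="example")
    session.add(user)
    print(user.id)    # None -- no INSERT has been emitted yet
    session.flush()   # the ORM INSERTs the row and fetches the generated key
    print(user.id)    # populated now, before any commit()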
On Oct 21, 2011, at 11:05 AM, Russell Warren wrote:
they were flushed! The primary key is always fetched when the ORM does an
INSERT.
I know it does happen, just not how.
It seems that some of my confusion was based entirely on a peculiarity
of the sqlite dialect. When I
Ok, thanks. I actually feel better knowing I'm up against underlying
limitations.
Sorry for being a few steps behind you... I didn't initially understand your
comments about why you can't use bulk INSERT calls in a backend-neutral way,
but with hindsight they are perfectly clear.
I'm developing a Plone module with sqlalchemy 0.7.3 and postgresql
9.1. I'm making use of Postgres' new 'True Serialization' feature to
ensure that concurrent writes do not lead to conflicts.
Since true serialization is quite expensive, I started to use two
connections for each server thread:
-
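Not from the post, but a hedged sketch of that two-connection setup using a separate engine configured for SERIALIZABLE transactions (the psycopg2 dialect accepts an isolation_level argument to create_engine; the URL is a placeholder):

    from sqlalchemy import create_engine

    default_engine = create_engine("postgresql+psycopg2://user:pw@localhost/plone")
    serializable_engine = create_engine(
        "postgresql+psycopg2://user:pw@localhost/plone",
        isolation_level="SERIALIZABLE",
    )

    read_conn = default_engine.connect()        # cheap reads at the default level
    write_conn = serializable_engine.connect()  # truly serializable writes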
Good day,
I am having trouble using sqlalchemy with a third-party Sybase 9
database with read-only permissions. I believe this is based on the
way (certain versions of) Sybase handle prepared statements[1].
Using pyodbc, this works:
results = cursor.execute(select name from table where
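A hedged sketch of the contrast being described (DSN, table, and column names are placeholders): a literal statement is sent as-is, while a parameterized one makes pyodbc prepare the statement first, which some Sybase versions reject on a read-only connection.

    import pyodbc

    conn = pyodbc.connect("DSN=sybase_dsn;UID=readonly;PWD=secret")
    cursor = conn.cursor()

    # Works: no placeholders, so no prepared statement is needed
    rows = cursor.execute("select name from some_table where id = 1").fetchall()

    # May fail on the affected Sybase versions: ? placeholders trigger a prepare
    rows = cursor.execute("select name from some_table where id = ?", 1).fetchall()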
On Oct 21, 2011, at 4:02 PM, Firass Asad wrote:
Good day,
I am having trouble using sqlalchemy with a third-party Sybase 9
database with read-only permissions. I believe this is based on the
way (certain versions of) Sybase handle prepared statements[1].
Using pyodbc, this works:
Hi all,
the subject pretty much explains it all. Here's a complete example of the
issue. Any tips will be appreciated.
Regards,
Mariano
Python 2.7.2 (default, Oct 14 2011, 23:34:02)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlalchemy
On Oct 21, 2011, at 3:54 PM, Denis wrote:
Is there any way for me to share the pending objects of the serialized
session with the read session? I would like to ensure that the read
session used in the same request as the serialized session sees the
same data.
A single object can only be
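The reply above is cut off; not from the thread, but one possible approach, stated purely as an assumption, is to bind both Sessions to the same Connection so they share a single transaction and the read session sees flushed-but-uncommitted rows (SomeModel and the URL are placeholders):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine("postgresql+psycopg2://user:pw@localhost/dbname")
    connection = engine.connect()
    Session = sessionmaker()

    write_session = Session(bind=connection)
    read_session = Session(bind=connection)

    write_session.add(SomeModel(name="pending"))
    write_session.flush()                        # INSERT goes to the shared connection

    print(read_session.query(SomeModel).all())   # sees the uncommitted row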
On 21.10.11 20:09, Michael Bayer wrote:
Because not every DB supports this (such as MySQL), we have not yet
implemented the feature of named column-level constraints across the board.
We'd have to either implement it only for those DBs which support it,
or add exceptions to those which
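Not part of the reply, but a minimal sketch (names illustrative) of what can already be done today: give the constraint an explicit name by declaring it at the table level with CheckConstraint, rather than relying on column-level naming.

    from sqlalchemy import MetaData, Table, Column, Integer, CheckConstraint

    metadata = MetaData()

    accounts = Table(
        "accounts", metadata,
        Column("id", Integer, primary_key=True),
        Column("balance", Integer),
        CheckConstraint("balance >= 0", name="ck_accounts_balance_nonnegative"),
    )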