[sqlalchemy] Postgres' arrays in subqueries

2015-10-01 Thread artee
Mike and others :)

I have the following sql:
select
    models.code,
    array(
        select (clients.id, clients.code) from clients
        group by clients.id, clients.code
    ) as envs
from models

I ended with something like this:
items = DBSession.query(Client.id, Client.code). \
    group_by(Client.id, Client.code). \
    subquery()

return DBSession.query(
    Model.code,
    array(items)). \
    order_by(Model.code)

but the exception is thrown:
TypeError: __init__() argument after * must be a sequence, not Alias

The question is: how do I return an array in the query (select 
(clients.id, clients.code)), and how do I pass the subquery to the array?
Could you provide some ideas?

I've tested it on SA 1.0.8.

Thanks
Artur

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] unregister Tables from Metadata

2015-10-01 Thread Pavel S
Hi,

we use application-level partitioning (but no real partitioning in mysqld), 
where new tables are created daily with the same structure, but different 
name (suffix).

   - mysqld is shared for these components:
      - daemon written in C++
         - creates new tables every day and fills them with records
      - other daemon written in Python:
         - periodically scans information_schema.tables for new tables and
           creates Table objects, which haven't been initialized before
         - the same daemon then processes records from all tables and
           computes some statistics on top of them
   - daily cronjob drops tables older than n days

The problem:

The Pythonic daemon has to somehow notice that some tables were dropped and 
unload them from the MetaData object. Otherwise, it will fail with 
ProgrammingError: table does not exist

We don't want to establish any messaging between daemons, just to keep the 
setup as it is.

The daemon should compare what's in information_schema and what's in 
MetaData and remove stale Table objects. How to do that?



Re: [sqlalchemy] Please advice with new oracle 'OFFSET x ROWS FETCH NEXT x ROWS' approach in 12c

2015-10-01 Thread Ralph Heinkel
On Tuesday, September 29, 2015 at 5:54:03 PM UTC+2, Michael Bayer wrote:
>
>
> On 9/29/15 9:50 AM, Ralph Heinkel wrote:
>
> In short this looks like:
>
> SELECT val
> FROM   some_table
> ORDER  BY val
> OFFSET 4 ROWS FETCH NEXT 4 ROWS ONLY;
>
> it's like OK we can add a LIMIT/OFFSET feature, but first!  Let's send it 
> off to the Oracle department of "make this syntax as awkward and obtuse as 
> possible - OraTuse! (tm)"
>
>
+1 ;-)  You are right, the syntax is really awkward - they should have 
just used something like Postgresql does, very concise, very easy to read. 
But hey - it's Oracle, it must be complicated to justify the price.

> and is so much faster than the nested approach that has been used so far (at 
> least on my system).
>
>
> I am sure.
>
OK, that was a bit too optimistic; the speed increase depends very much on 
the table/view I'm applying it to. There is some increase in speed, never a 
decrease. The real difference in speed that I believed (hoped) I saw seems 
to be related to our db load at the time when I measured it. So I think we 
have to collect experiences from different sides as to whether or not it 
really makes a difference.

But anyway, even though the syntax is slightly strange, I much prefer it 
over the original approach, which required nesting two select statements.

Ralph 
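For what it's worth, later SQLAlchemy versions (1.4+) can render this clause directly via Select.fetch(); here is a small sketch with illustrative table/column names (note SQLAlchemy emits the equivalent FETCH FIRST wording rather than FETCH NEXT):

```python
from sqlalchemy import column, select, table
from sqlalchemy.dialects import oracle

t = table("some_table", column("val"))

# offset()/fetch() render the 12c-style
# OFFSET ... ROWS FETCH FIRST ... ROWS ONLY clause on Oracle
stmt = select(t.c.val).order_by(t.c.val).offset(4).fetch(4)
sql = str(stmt.compile(dialect=oracle.dialect()))
print(sql)
```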



[sqlalchemy] one-to-one: lost on track

2015-10-01 Thread Richard Gerd Kuesters

hey all!

i think i got lost on track about relationships, especially one-to-one. 
i'll not go into (code) details because all my one-to-one relationships 
are failing with the following exception:

   python2.7/site-packages/sqlalchemy/orm/query.pyc in
   _no_criterion_assertion(self, meth, order_by, distinct)
       361             raise sa_exc.InvalidRequestError(
       362                 "Query.%s() being called on a "
   --> 363                 "Query with existing criterion. " % meth)
       364
       365     def _no_criterion_condition(self, meth, order_by=True,
               distinct=True):

   InvalidRequestError: Query.get() being called on a Query with
   existing criterion.


*i think* this happens because i have a custom session that applies a 
filter to all objects queried, even `session.query(A).first()`. perhaps 
someone got stuck on the same problem? faq? anything? :)


best regards and sorry for my possibly irresponsible laziness,

richard.


Re: [sqlalchemy] one-to-one: lost on track

2015-10-01 Thread Richard Gerd Kuesters
ok, just for the record, my "laziness" led me to use `lazy='joined'` in 
a backref and now things work fine :)


richard.
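For the record, a minimal sketch of that arrangement (the model names here are made up, not from Richard's code):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, backref, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey("parent.id"), unique=True)
    # uselist=False makes the reverse side one-to-one; lazy="joined"
    # loads it eagerly via a JOIN when the Parent is queried, so no
    # later Query.get() is needed against the filtered session
    parent = relationship(
        "Parent", backref=backref("child", uselist=False, lazy="joined"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Child(parent=Parent()))
session.commit()

p = session.query(Parent).first()
child = p.child  # populated by the eager JOIN
```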


On 10/01/2015 09:03 AM, Richard Gerd Kuesters wrote:

hey all!

i think i got lost on track about relationships, especially one-to-one. 
i'll not go into (code) details because all my one-to-one relationships 
are failing with the following exception:

   python2.7/site-packages/sqlalchemy/orm/query.pyc in
   _no_criterion_assertion(self, meth, order_by, distinct)
       361             raise sa_exc.InvalidRequestError(
       362                 "Query.%s() being called on a "
   --> 363                 "Query with existing criterion. " % meth)
       364
       365     def _no_criterion_condition(self, meth, order_by=True,
               distinct=True):

   InvalidRequestError: Query.get() being called on a Query with
   existing criterion.


*i think* this happens because i have a custom session that applies a 
filter to all objects queried, even `session.query(A).first()`. 
perhaps someone got stuck on the same problem? faq? anything? :)


best regards and sorry for my possible irresponsible lazyness,

richard.

Re: [sqlalchemy] funky session usage to add join conditions and where clauses

2015-10-01 Thread Chris Withers

On 26/09/2015 21:15, Mike Bayer wrote:



> On 9/25/15 12:24 PM, Chris Withers wrote:
>
>> On 25/09/2015 13:58, Mike Bayer wrote:
>>
>>>> session.query(A).filter(A.id>10).as_at(now))
>>>
>>> you'd need to subclass Query and dig into Query.column_descriptions
>>> to get at the existing entities, then add all that criterion.
>>
>> remind me where the docs are for plugging in a subclassed Query into
>> a session?
>
> it's just straight up subclassing.   there's some examples such as at
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery.


...which points to this:

https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/GlobalFilter

...which I like the feel of but can't quite get my head around how to 
make the TemporalOption apply to every table that uses a Temporal mixin 
that is involved in the query.


Chris
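A rough sketch of the direction that recipe points in, with a hypothetical Temporal marker mixin and a placeholder criterion standing in for the real temporal filter (none of these names are from the thread):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Query, declarative_base, sessionmaker

Base = declarative_base()

class Temporal:
    """Illustrative marker mixin; stands in for the real Temporal mixin."""

class A(Base, Temporal):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class FilteredQuery(Query):
    def as_at(self, when=None):
        """Attach extra criteria for every Temporal entity in the query."""
        q = self
        for desc in self.column_descriptions:
            entity = desc["entity"]
            if isinstance(entity, type) and issubclass(entity, Temporal):
                # placeholder criterion; a real version would filter on
                # validity-period columns using `when`
                q = q.filter(entity.id > 0)
        return q

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
# plug the subclassed Query into the session via query_cls
session = sessionmaker(engine, query_cls=FilteredQuery)()
session.add_all([A(), A()])
session.commit()
results = session.query(A).as_at().all()
```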



Re: [sqlalchemy] Tips for schema based db traversal and building

2015-10-01 Thread Mike Bayer



On 10/1/15 12:33 AM, Steve Murphy wrote:



On Wednesday, September 30, 2015 at 1:43:46 PM UTC-6, Michael Bayer 
wrote:


there's no known bugs in fetching unique constraints.  PG 8.4 is a
pretty old version but should be working.

note that a unique index and a unique constraint aren't listed as
the same thing, however.you might just have unique indexes.


This was quite informative!

Just one last item:  I'm having trouble getting at the Check 
Constraints...


You advised:

 there's no listing of CHECK constraints right now.

they are also available on the Table as fully constructed constraint 
objects:


table = Table('mytable', somemetadata, autoload=True)
for constraint in table.constraints:
    # ...

I am assuming when you said "they are also available" that the "they" was 
referring to foreign and primary key constraints, etc., and that CHECK 
constraints were not available at all, neither by reflection nor inspection.

Am I correct?

yes



murf




Re: [sqlalchemy] Bulk inserting rows into a dynamically created table.

2015-10-01 Thread Mike Bayer
I would definitely not rely upon new Oracle features like GENERATED AS 
IDENTITY unless compatibility with older Oracle versions is truly a non-issue.


The SQL you may emit for using NEXTVAL is along the lines of:

INSERT INTO table (id, col1, col2, ...)
VALUES (my_sequence.nextval, 'data1', 'data2', ...)


See the example at 
http://docs.oracle.com/cd/A84870_01/doc/server.816/a76989/ch26.htm.


SQLAlchemy supports emitting SQL of this form against the Oracle 
backend, and there are many ways to achieve this result depending on 
what you are starting with.
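One such way can be sketched with Sequence.next_value(), which renders as sequence_name.nextval on the Oracle backend (table and sequence names here are illustrative):

```python
from sqlalchemy import Sequence, column, table
from sqlalchemy.dialects import oracle

t = table("mytable", column("id"), column("col1"))

# Sequence.next_value() renders as my_sequence.nextval on Oracle
stmt = t.insert().values(
    id=Sequence("my_sequence").next_value(), col1="data1")
sql = str(stmt.compile(dialect=oracle.dialect()))
print(sql)
```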


What is not clear is what we are inserting.  Is this just a list of 
dictionaries you'd like to bulk insert into a table?   What kind of 
Python structure do you have that represents these dynamically created 
tables?  (choices are:  nothing, a Table, or a mapped class).




On 10/1/15 1:11 AM, Nana Okyere wrote:
In my application, atables is dynamically created on the schema. I 
need to insert rows onto that table. I tried to use some of the 
methods suggested here 
 but 
then I realized that they are passing the mapped class or the table of 
the class to the bulk_insert_mappings method or the 
engine.execute(Customer.__table__.insert() method. In both approaches, 
Customer is a mapped class. For what I'm doing, the table I'm doing 
the insert on is dynamically created. So I can't do models.TableName. 
Is there a way to do bulk inserts on such a table in sqlalchey (orm or 
core) ?


I tried the "INSERT ALL ... SELECT * FROM dual" construct in Oracle, 
but that doesn't work with sequence_name.nextval for the id column. So 
I'm looking for another approach. I read that since Oracle 12c I can 
create a unique id column using the GENERATED AS IDENTITY clause. Is 
that supported in sqlalchemy? Then I could just insert rows without 
worrying about how the id is generated. Thanks for sqlalchemy and your 
continued support.



Re: [sqlalchemy] Postgres' arrays in subqueries

2015-10-01 Thread Mike Bayer



On 10/1/15 4:35 AM, artee wrote:

Mike and others :)

I have the following sql:
select
    models.code,
    array(
        select (clients.id, clients.code) from clients
        group by clients.id, clients.code
    ) as envs
from models

I ended with something like this:
items = DBSession.query(Client.id, Client.code). \
    group_by(Client.id, Client.code). \
    subquery()

return DBSession.query(
    Model.code,
    array(items)). \
    order_by(Model.code)

but the exception is thrown:
TypeError: __init__() argument after * must be a sequence, not Alias

The question is how to use an array to be returned in query (select 
(clients.id, clients.code)) and how to pass subquery to the array?

Could you provide some ideas?

I've tested it on SA 1.0.8.


the "array()" construct currently represents only an array literal of 
column expressions, and it produces the PG expression ARRAY[], which is 
not what you want here.   For a vanilla array() function, use func.array():


from sqlalchemy import Column, Integer, create_engine, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    data = Column(Integer)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.drop_all(e)
Base.metadata.create_all(e)


s = Session(e)
s.add_all([A(data=1), A(data=2), A(data=3)])
s.commit()

subq = s.query(A.data).subquery().as_scalar()

q = s.query(A, func.array(subq))

print q.all()


output:

SELECT a.id AS a_id, a.data AS a_data, array((SELECT a.data
FROM a)) AS array_1
FROM a







Thanks
Artur


Re: [sqlalchemy] unregister Tables from Metadata

2015-10-01 Thread Mike Bayer



On 10/1/15 5:56 AM, Pavel S wrote:

Hi,

we use application-level partitioning (but no real partitioning in 
mysqld), where new tables are created daily with the same structure, 
but different name (suffix).


  * mysqld is shared for these components:
      o daemon written in C++: creates new tables every day and fills
        them with records
      o other daemon written in Python:
          + periodically scans information_schema.tables for new tables
            and creates Table objects, which haven't been initialized before
          + the same daemon then processes records from all tables and
            computes some statistics on top of them
  * daily cronjob drops tables older than n days


The problem:


The Pythonic daemon has to somehow notice that some tables were dropped 
and unload them from the MetaData object. Otherwise, it will fail with 
ProgrammingError: table does not exist


removal of individual tables is not a supported API feature of the 
MetaData object because the problem of resolving foreign key references 
to these tables is complicated and error prone.


However, you can just whack an individual Table that is guaranteed to 
not be referenced by any other Table objects by removing it from the 
dictionary (which requires special tricks as we've had to add checks in 
place to prevent people from doing this):


dict.__delitem__(my_metadata.tables, "mytable")


Also, if you are using any ORM functionality here, ORM classes that 
refer to these tables would also have to be dealt with (e.g. class 
should be dereferenced and there cannot be any other mappings which 
refer to these classes using relationship()).



A better option is likely to use a separate MetaData object for *each* 
table.There's no requirement that tables are all present in just one 
MetaData collection.
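A minimal sketch of that suggestion (table names are illustrative, and sqlite stands in for mysqld here): each daily table lives in its own MetaData, so forgetting a dropped table is just discarding that pair of objects.

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

engine = create_engine("sqlite://")
registry = {}  # table name -> (MetaData, Table)

for name in ("log_20150930", "log_20151001"):
    md = MetaData()  # one MetaData per daily table
    t = Table(name, md, Column("id", Integer, primary_key=True))
    md.create_all(engine)
    registry[name] = (md, t)

# the daily cronjob dropped yesterday's table; just forget its objects
with engine.begin() as conn:
    conn.exec_driver_sql("DROP TABLE log_20151001")
del registry["log_20151001"]
```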




We don't want to establish any messaging between daemons, just to keep 
the setup as it is.


Daemon should compare what's in /information_schema/ and what's in 
Metadata and remove Table objects. How to do that?
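As a sketch of the comparison the question asks for (not from the thread; note that later SQLAlchemy versions expose MetaData.remove(), which sidesteps the dict trick but carries the same foreign-key caveats discussed above):

```python
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        inspect)

engine = create_engine("sqlite://")  # sqlite stands in for mysqld
metadata = MetaData()
for name in ("log_20150930", "log_20151001"):
    Table(name, metadata, Column("id", Integer, primary_key=True))
metadata.create_all(engine)

# simulate the cronjob dropping yesterday's table behind our back
with engine.begin() as conn:
    conn.exec_driver_sql("DROP TABLE log_20151001")

# names the database still has vs. names MetaData remembers
live = set(inspect(engine).get_table_names())
for stale in [t for t in metadata.tables.values() if t.name not in live]:
    metadata.remove(stale)
```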




[sqlalchemy] Is there a better and more idiomatic way to accomplish this?

2015-10-01 Thread Massimiliano della Rovere
Greetings,
I have two questions about the following code, which works but looks dirty 
to me.
Being new to SQLAlchemy, I'd like to ask for advice.

The database is PostgreSQL.
The following check_xyz function must check whether the index 
"table_xyz_idx" exists on table "table".
The table itself surely already exists, but the index might not.

The index uses some features unique to PostgreSQL:
* it is a gin index
* it uses the special operator class gin_trgm_ops,
* it is composed of a concatenation of two fields and a string.

(The index definition is in the docstring below)


First question:
I use copy.copy to check whether the index exists in the table.indexes 
set.
I need to copy the table.indexes set because as soon as the index is 
defined (yes, it refers to the table columns), it's added to "table.indexes", 
even though the index hasn't been created yet.
So: is there a way to avoid the copy.copy?


Second question:
The "postgresql_ops" parameter of the Index class [1] requires that the 
keys of the dict equal the .key attributes of the columns/expressions 
used to define the index.
In my case the column is an expression, so it has no ".key" attribute.
So: is there a SQLAlchemy class/construct like:

X(name, expression)

so that I can write:

index = Index(
    "go_anagrafica_nominativo_idx",
    X("xyz", table.c.field1.concat(text("' '")).concat(table.c.field2)),
    postgresql_ops={"xyz": "gin_trgm_ops"},
    postgresql_using="gin")


[1] 
http://docs.sqlalchemy.org/en/rel_1_0/dialects/postgresql.html?highlight=postgresql_ops#operator-classes


--- the code ---



[...]

def check_xyz(self):
    """
    CREATE EXTENSION pg_trgm;
    CREATE INDEX CONCURRENTLY table_xyz_idx
        ON s.table
        USING gin((field1 || ' ' || field2) gin_trgm_ops)
    """
    # self.execute is a method of the class that executes multiple
    # raw commands in a single transaction.
    self.execute(("CREATE EXTENSION IF NOT EXISTS pg_trgm",))
    table = Table(
        "table",
        self.metadata,
        schema="s",
        autoload=True)
    # FIRST DOUBT
    indexes = copy.copy(table.indexes)
    # SECOND DOUBT
    expression = table.c.field1.concat(text("' '")).concat(table.c.field2)
    expression.key = "xyz"
    index = Index(
        "table_xyz_idx",
        expression,
        postgresql_ops={"xyz": "gin_trgm_ops"},
        postgresql_using="gin")
    if index not in indexes:
        index.create()



Re: [sqlalchemy] Is there a better and more idiomatic way to accomplish this?

2015-10-01 Thread Mike Bayer



On 10/1/15 11:41 AM, Massimiliano della Rovere wrote:

Greetings,
I have two questions to the following code that works, but looks dirty 
to me.

Being new to SQLAlchemy I'd like to ask for advices.

The database is PostgreSQL.
The following check_xyz function must check whether the index 
"table_xyz_idx" exists in table "table".

The table surely already exist, but the index could not.

The index uses some features unique to PostgreSQL:
* it is a gin index
* it uses the special operator class gin_trgm_ops,
* it is composed by a concatenation of two fields and a string.

(The index definition is in the docstring below)


First question:
I use copy.copy to check whether the index exists in the 
table.indexes set.
I need to copy the table.indexes set because as soon as the index is 
defined (yes, it refers to the table columns), it's added to 
"table.indexes", even though the index hasn't been created yet.

So: is there a way to avoid the copy.copy?


What is happening there with copy.copy and "if index not in indexes" 
does not do anything.  You are creating a new Index object, it is 
guaranteed to not be in the table.indexes collection before you 
construct it, even if its name and definition matches exactly that of 
another Index object that is already in the collection; you'd have two 
of the same index and the CREATE will then fail due to name conflict.  
The Index object is not hashed on its definition or anything like that.


if you'd like to see if an index of a particular name already exists, 
you can iterate through the names:


if "table_xyz_idx" in [idx.name for idx in table.indexes]:
   # ...
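Alternatively (a sketch, not from the thread), the Inspector can ask the database which indexes actually exist, independent of the in-Python collection; sqlite and the names below are purely illustrative:

```python
from sqlalchemy import (Column, Index, Integer, MetaData, String, Table,
                        create_engine, inspect)

engine = create_engine("sqlite://")
metadata = MetaData()
t = Table(
    "t", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
Index("t_name_idx", t.c.name)  # attaches itself to the Table
metadata.create_all(engine)

# reflect the index names the database reports for table "t"
existing = {ix["name"] for ix in inspect(engine).get_indexes("t")}
```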






Second question:
The "postgresql_ops" parameter of the Index class [1] requires that 
the keys of the dict must equal the .key attributes of the 
columns/expressions used to define the index.


give your expression a simple label and this should be recognized:

expr = table.c.foo.concat(table.c.bar).label('my_expr')

postgresql_ops={"my_expr": "gin_trgm_ops"}



In my case the column is an expression, so it has no ".key" attribute.
So: is there a SQLAlchemy class/construct like:

X(name, expression)

so that I can write:

index = Index(
    "go_anagrafica_nominativo_idx",
    X("xyz", table.c.field1.concat(text("' '")).concat(table.c.field2)),
    postgresql_ops={"xyz": "gin_trgm_ops"},
    postgresql_using="gin")


[1] 
http://docs.sqlalchemy.org/en/rel_1_0/dialects/postgresql.html?highlight=postgresql_ops#operator-classes


--- the code ---



[...]

def check_xyz(self):
    """
    CREATE EXTENSION pg_trgm;
    CREATE INDEX CONCURRENTLY table_xyz_idx
        ON s.table
        USING gin((field1 || ' ' || field2) gin_trgm_ops)
    """
    # self.execute is a method of the class that executes multiple
    # raw commands in a single transaction.
    self.execute(("CREATE EXTENSION IF NOT EXISTS pg_trgm",))
    table = Table(
        "table",
        self.metadata,
        schema="s",
        autoload=True)
    # FIRST DOUBT
    indexes = copy.copy(table.indexes)
    # SECOND DOUBT
    expression = table.c.field1.concat(text("' '")).concat(table.c.field2)
    expression.key = "xyz"
    index = Index(
        "table_xyz_idx",
        expression,
        postgresql_ops={"xyz": "gin_trgm_ops"},
        postgresql_using="gin")
    if index not in indexes:
        index.create()



Re: [sqlalchemy] SQLTap 0.3.10 released

2015-10-01 Thread George Reilly
On Mon, Sep 28, 2015 at 1:04 AM, David Allouche  wrote:

> On 28 Sep 2015, at 05:21, George V. Reilly 
> wrote:
>
> SQLTap is a very useful library that helps you in profiling SQLAlchemy
> queries. It helps you understand where and when SQLAlchemy issues queries,
> how often they are issued, how many rows are returned, and if you are
> issuing queries with duplicate parameters. The last two are new in
> tonight's 0.3.10 release, which incorporates improved reporting from me.
>
> https://github.com/inconshreveable/sqltap
>
>
> What are the downsides of issuing queries with duplicate parameters? Aside
> from "it makes it harder to read the logs".
>
> I mean, obviously, not "dozens of duplicate parameters": any query with
> dozens of parameters probably has other problems.
>
> I could not find any rationale on the linked page.
>

I was investigating the performance of a single, pathologically slow API
request to one of our web services. SQLTap told me that there were nearly
12,000 database queries over several minutes. Digging further, I found that
about half of these requests could be eliminated by adding lazy="joined" to
some relationships. There was one relationship where adding lazy="joined"
eliminated some queries but the result rowsets were so large that the
overall API request became noticeably slower. I updated SQLTap's report to
show the number of rows returned by each query, which helped identify such
problems.

I also found that certain objects were being requested again and again;
i.e., I was issuing queries with identical parameters. SQLAlchemy issues
each query to the database, gets back an identical result (since they
weren't being modified), and returns the existing object from its identity
map. See
http://docs.sqlalchemy.org/en/rel_1_0/orm/session_basics.html#is-the-session-a-cache.
Making my app smarter about not asking again for objects that it had
already loaded eliminated thousands more queries. This is why I added the
duplicate query reporting, so that I could pinpoint opportunities for
caching.

/George
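None of SQLTap's internals appear here; this is just a stdlib sketch of the request-local caching idea described above, with all names hypothetical: remember objects already fetched during one request so duplicate lookups by key don't hit the database again.

```python
queries_issued = 0

def fetch_from_db(pk):
    """Stand-in for a real database query."""
    global queries_issued
    queries_issued += 1
    return {"id": pk}

class RequestCache:
    """Remember objects already fetched during one request."""
    def __init__(self):
        self._seen = {}

    def get(self, pk):
        # only the first lookup per key reaches the database
        if pk not in self._seen:
            self._seen[pk] = fetch_from_db(pk)
        return self._seen[pk]

cache = RequestCache()
for pk in (1, 2, 1, 1, 2):  # duplicate lookups collapse to two fetches
    row = cache.get(pk)
```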
