Re: [sqlalchemy] Multiple JOINs of superclass table and potential discrepancy in chained vs on-clause

2015-08-10 Thread Douglas Russell
Thanks.

It makes sense to me that you would alias in the JOIN as well as in the 
WHERE, as you say in the issue. I'm tracking that and making use of 
from_joinpoint for now.



On Monday, 3 August 2015 13:12:16 UTC-4, Michael Bayer wrote:



 On 8/3/15 1:01 PM, Mike Bayer wrote:

 all of those syntaxes are supposed to be exactly equivalent so this is a 
 major issue: 
 https://bitbucket.org/zzzeek/sqlalchemy/issues/3505/join-targeting-broken-for-joined-inh
  


 OK, this is not as much of a bug as I thought, though I might be able to 
 do something.

 The join here has no choice but to auto-alias the A target when it joins 
 on B.a_list.   When you then join with a second call to .join(), it assumes 
 you want to again join from B.   You need to be using from_joinpoint here:

 q = q.join(A.x_list, from_joinpoint=True)


 it then knows you want to join from A and not B.
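Putting Mike's suggestion together, a rough sketch of the full chained query under the mapping from the gist (the session and the 'x1' filter value are taken from the original post; this is illustrative, not a verified snippet):

 q = (session.query(B)
      .join(B.a_list)                          # A is auto-aliased here due to joined inheritance
      .join(A.x_list, from_joinpoint=True)     # continue the join from the aliased A, not from B
      .filter(X.name == 'x1'))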






 On 8/3/15 11:13 AM, Douglas Russell wrote:

 Hi again, 

 Full code: https://gist.github.com/dpwrussell/8ecca88f642cca003999

 I have a structure linked together like so. A-B is a Many-To-Many and 
 uses an association table. A and B are both subclasses of a common base 
 Object.

 A
 └── B

 I also have an object X that can be linked to any type of object: A or B.

 I can easily run a query that returns all objects that have a certain X 
 object linked to it.

 I also need to be able to run a query which gets all the B objects where 
 the A parent has a certain X object linked to it.

 Chained:

 SELECT object.type AS object_type, b.id AS b_id, object.id AS object_id, 
 object.name AS object_name
 FROM object JOIN b ON object.id = b.id JOIN a_b_association AS 
 a_b_association_1 ON b.id = a_b_association_1.b_id JOIN (object AS 
 object_1 JOIN a AS a_1 ON object_1.id = a_1.id) ON a_1.id = 
 a_b_association_1.a_id JOIN x ON object.id = x.obj_id
 WHERE x.name = %(name_1)s
 2015-08-03 10:53:03,474 INFO sqlalchemy.engine.base.Engine {'name_1': 'x1'
 }

 Multiple on-clause:

 SELECT object.type AS object_type, b.id AS b_id, object.id AS object_id, 
 object.name AS object_name
 FROM object JOIN b ON object.id = b.id JOIN a_b_association AS 
 a_b_association_1 ON b.id = a_b_association_1.b_id JOIN (object AS 
 object_1 JOIN a AS a_1 ON object_1.id = a_1.id) ON a_1.id = 
 a_b_association_1.a_id JOIN x ON object_1.id = x.obj_id
 WHERE x.name = %(name_1)s
 2015-08-03 10:53:03,480 INFO sqlalchemy.engine.base.Engine {'name_1': 'x1'
 }
 Object(id='2', name='b1')


 The difference is subtle. In the multiple on-clause case the JOIN to the x 
 table is conducted using the alias (object_1) created during the previous 
 JOIN. This is the behaviour that I would expect and gives the correct 
 result. In the chained case, the original object reference is used, giving 
 incorrect results (none in this case).

 The SQLAlchemy manual seems to suggest that these should be equivalent so 
 I'm wondering if there is a bug there?

 If I'm reading the manual correctly, I can ordinarily use JOIN aliases to 
 explicitly avoid this kind of thing, but in this case, I am not specifying 
 this join myself, it is being built from the joined table inheritance.

 I am going to use the on-clause technique for now to get around this, but 
 it would be good to know (especially if this is not a bug) whether I should 
 be handling this differently in general.

 Thanks a lot,

 Douglas


Re: [sqlalchemy] using inheritance with automap_base

2015-08-10 Thread Mike Bayer



On 8/10/15 10:55 AM, yoch.me...@gmail.com wrote:

Hello,

I use reflection with automap_base to load my database schema.

# my code currently looks
Base = automap_base()
Base.prepare(engine, reflect=True)

And for now, I want to configure the Base to take into account some 
inheritance relationships.


In the database, inheritance is modeled by:

-- base table
CREATE TABLE `hardware` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `type` VARCHAR(8) NOT NULL,  -- indicates the type of children used
  PRIMARY KEY (`id`));


-- children tables (use same id of hardware)
CREATE TABLE `hdw_xxx` (
  `id` INT UNSIGNED NOT NULL,
  ...
  PRIMARY KEY (`id`));


CREATE TABLE `hdw_yyy` (
  `id` INT UNSIGNED NOT NULL,
  ...
  PRIMARY KEY (`id`));


Is there a proper way to do this?
You need to build out the structure of classes which inherit up front. 
You can still use reflection for pulling in the columns. See the example at 
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#using-automap-with-explicit-declarations.


Here's also a test from the test suite, doing the same thing:

Base = automap_base()

class Joined(Base):
    __tablename__ = 'joined_base'

    type = Column(String)

    __mapper_args__ = {
        "polymorphic_identity": "u0",
        "polymorphic_on": type}

class SubJoined(Joined):
    __tablename__ = 'joined_inh'
    __mapper_args__ = {"polymorphic_identity": "u1"}

Base.prepare(engine=testing.db, reflect=True)
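A minimal usage sketch under the same assumptions (the reflected database contains joined_base and joined_inh, and Session is imported from sqlalchemy.orm):

from sqlalchemy.orm import Session

session = Session(testing.db)
# After prepare(), Joined and SubJoined have their remaining columns filled in
# from reflection, and queries against Joined are polymorphic across both tables.
print(session.query(Joined).all())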









Thank you


Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread Claudio Freire
On Fri, Aug 7, 2015 at 6:58 PM, kk krm...@gmail.com wrote:
 On Friday 07 August 2015 10:05 PM, Claudio Freire wrote:

 On Fri, Aug 7, 2015 at 12:51 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:

 On 8/7/15 11:05 AM, kk wrote:



 On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

 Hello.

 ORM is certainly slower. How much depends A LOT on your workload. For
 example
 bulk operations with ORM are an order of magnitude slower than raw SQL.
 On the
 other hand, SQLAlchemy Core lets you write generative SQL queries
 without ORM
 features which are as performant as raw SQL.


 So is there some kind of method to have prepared SQL statements
 in SQLAlchemy itself?
 I have seen that prepared statements in other languages like Java do a
 great job.


 That's totally a myth and you can see me ranting on this whole topic of
 explicit prepared statements == SPEED here:
 https://mail.python.org/pipermail/db-sig/2014-December/006147.html See
 the
 benchmark there.  Whatever performance we get with prepared statements
 is
 vanishingly small and utterly dwarfed by the order-of-magnitude-greater
 latencies we get from Python. The DBAPI already has a great speed
 optimization in this area and it is known as executemany() - it applies
 only
 to CRUD statements, not SELECT, but SQLAlchemy uses executemany() very
 heavily and to great effect - the speed gains here are not so much due to
 prepared statements, as psycopg2 does not use them in any way, but due to
 the fact that we roll up lots of data into a single call that psycopg2
 can
 run from pure compiled C code.

 It may be a myth most of the time, but there are cases where it is not.

 I had one case (in a whole decade of programming, so it is indeed very
 rare) in which a very complex query ran very fast, and planning time
 was the dominant cost (think 150ms for planning and 15ms for
 execution). For that query, preparing it explicitly saved a lot of
 runtime. Again, executemany helps when it's implemented with prepared
 statements. But I was using psycopg2 and it doesn't use prepared
 statements, so I had to prepare them explicitly myself. This was with
 SQLAlchemy 0.3 and I managed to do it just fine. Some black magic was
 needed of course (had to explicitly compile the query to SQL, bind
 parameters, and generate a PREPARE statement from it), but nothing too
 complex.

 Now, pg8000 does use prepared statements, so it may be as simple as
 using that driver if the need for prepared statements is there. You
 can even have two engines pointing to the same database and use pg8000
 only for the queries that really need prepared statements. So
 SQLAlchemy (and DBAPI) has come a long way since I found that case I
 mentioned above. I'm sure today handling that case would have been
 even easier.
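A minimal sketch of that two-engine idea (the connection URLs are placeholders):

from sqlalchemy import create_engine

# psycopg2 for the general workload; pg8000 only where server-side prepared
# statements are actually worth it.
default_engine  = create_engine("postgresql+psycopg2://user:pass@host/dbname")
prepared_engine = create_engine("postgresql+pg8000://user:pass@host/dbname")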


 Thanks for the detailed reply.
 Now I understand it better.
 So you mean to say executemany will give me the needed performance gain, in a 
 nutshell.
 Is that correct?

In a nutshell, yes.

 Secondly, is executemany good at only Insert, or Update or both?

Both. More precisely, anything that doesn't produce results.
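A small self-contained sketch of what that looks like through SQLAlchemy Core: passing a list of parameter dictionaries to a single execute() call is what makes the driver's executemany() kick in (the table and rows are made up, and an in-memory SQLite database is used only so the snippet runs standalone):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("sqlite://")
metadata = MetaData()
accounts = Table("accounts", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("name", String(50)))
metadata.create_all(engine)

# A list of dicts is sent to cursor.executemany(): one call into the driver
# for all three rows instead of three separate execute() round trips.
with engine.begin() as conn:
    conn.execute(accounts.insert(), [
        {"name": "alice"},
        {"name": "bob"},
        {"name": "carol"},
    ])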

 And lastly if I have a big resultset through a select statement, more so
 from a view, what is the best approach to use if I decide not to use stored
 procedures?

Well, the bigger your result sets are, the less you care about stored procedures.

Assuming stored procedures speed up queries (which is a big IF that is
very often false), they only speed up the planning phase, not the
execution. There's no execution optimization you cannot accomplish
with raw SQL, so the bigger the result set, the less you care about
planning time, and thus the less the relative benefit from using
stored procedures is.

Honestly, the benefits of stored procedures are so small, and their
maintenance cost so high, that I would suggest never using them
unless you find a case you've thoroughly analyzed and profiled, and
that you find they'd be a huge help (which won't happen for a big
while).



Re: [sqlalchemy] using inheritance with automap_base

2015-08-10 Thread yoch . melka
Wow, thank you. I'm going to try.

On Monday, 10 August 2015 at 18:01:37 UTC+3, Michael Bayer wrote:



 On 8/10/15 10:55 AM, yoch@gmail.com wrote:

 Hello, 

 I use reflection with automap_base to load my database schema.

 # my code currently looks
 Base = automap_base()
 Base.prepare(engine, reflect=True)

 And for now, I want to configure the Base to take into account some 
 inheritance relationships.

 In the database, inheritance is modeled by:

 -- base table
 CREATE TABLE `hardware` (
   `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
   `type` VARCHAR(8) NOT NULL,  -- indicates the type of children used
   PRIMARY KEY (`id`));


 -- children tables (use same id of hardware)
 CREATE TABLE `hdw_xxx` (
   `id` INT UNSIGNED NOT NULL,
   ...
   PRIMARY KEY (`id`));


 CREATE TABLE `hdw_yyy` (
   `id` INT UNSIGNED NOT NULL,
   ...
   PRIMARY KEY (`id`));


 Is there a proper way to do this?

 You need to build out the structure of classes which inherit up front. 
 You can still use reflection for pulling in the columns. See the example at 
 http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#using-automap-with-explicit-declarations.

 Here's also a test from the test suite, doing the same thing:

 Base = automap_base()

 class Joined(Base):
     __tablename__ = 'joined_base'

     type = Column(String)

     __mapper_args__ = {
         "polymorphic_identity": "u0",
         "polymorphic_on": type}

 class SubJoined(Joined):
     __tablename__ = 'joined_inh'
     __mapper_args__ = {"polymorphic_identity": "u1"}

 Base.prepare(engine=testing.db, reflect=True)








 Thank you


[sqlalchemy] using inheritance with automap_base

2015-08-10 Thread yoch . melka
Hello,

I use reflection with automap_base to load my database schema.

# my code currently looks
Base = automap_base()
Base.prepare(engine, reflect=True)

And for now, I want to configure the Base to take into account some 
inheritance relationships.

In the database, inheritance is modeled by:

-- base table
CREATE TABLE `hardware` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `type` VARCHAR(8) NOT NULL,  -- indicates the type of children used
  PRIMARY KEY (`id`));


-- children tables (use same id of hardware)
CREATE TABLE `hdw_xxx` (
  `id` INT UNSIGNED NOT NULL,
  ...
  PRIMARY KEY (`id`));


CREATE TABLE `hdw_yyy` (
  `id` INT UNSIGNED NOT NULL,
  ...
  PRIMARY KEY (`id`));


Is there a proper way to do this?

Thank you



Re: [sqlalchemy] How to sort list based off of a join to another table

2015-08-10 Thread Mike Bayer



On 8/10/15 3:42 PM, Eric Miller wrote:


I have a table that needs to be sorted alphabetically by names that 
aren't actually contained in the table *in sqlalchemy*. Only codes 
representing the names are in the table. Like so...



Table 1:

row  code  month   value
1    A     201501  50
2    Z     201501  100
3    CO    201501  200
4    VA    201502  300
5    C     201502  300



Table 2:

row  code  name
1    A     Apple
2    C     Cascade
3    CO    Colorado
4    VA    Virginia


I need to sort *Table 1* by the following:

  * month
  * name (found in Table 2)

What is the best technique to achieve these sorting results when the 
sorting occurs on a value not inherent in the table? I can't send the 
joined product of Table 1 and Table 2, although I can join them 
temporarily and remove the 'name' column if needed.




you need to produce rows that are a composite of table1 and table2, so 
that a single row contains both the columns you want to receive as well 
as the columns you intend to sort by.  You do this using JOIN.   SQL 
would be:


SELECT table1.* FROM table1 JOIN table2 ON table1.code=table2.code ORDER 
BY table2.name








Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread kk




Hello,
Thanks to you and Mike for the detailed insight. My questions follow in-line.
On Friday 07 August 2015 08:48 PM, Claudio Freire wrote:

On Fri, Aug 7, 2015 at 12:05 PM, kk krm...@gmail.com wrote:

On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

Hello.

ORM is certainly slower. How much depends A LOT on your workload. For
example
bulk operations with ORM are an order of magnitude slower than raw SQL. On
the
other hand, SQLAlchemy Core lets you write generative SQL queries without
ORM features which are as performant as raw SQL.


I am going to be mostly on PostgreSQL for now, so I don't need 
database-independent queries.  Perhaps we will shift to NoSQL in the near 
future (more on that in some time).
So if I were to write Core queries then I could just as well do it directly 
with psycopg2.  What advantage will I then get by using SQLAlchemy?




So is there some kind of method to have prepared SQL statements in
SQLAlchemy itself?
I have seen that prepared statements in other languages like Java do a great
job.

Depending on the dialect and driver, you can use executemany.
Not all drivers implement executemany with prepared statements though,
check your case.
Ok, so is this good for some kind of bulk inserts and updates?  Or does 
it have any other not-so-obvious performance benefit?



Overall SQLAlchemy is an excellent
library to work with!


So you mean performance will really get hit when pure ORM is used.
So should I use a mixture?
For inserts let's say ORM, and for bulk select queries some 
prepared-statement-like thing (if it exists)?

It really depends on each use case.

If your concern is whether ORM queries will be efficient, don't worry,
SQLAlchemy is powerful enough that you can make almost any kind of
query with the ORM. Almost all the optimizations you could do to plain
SQL are doable at the ORM level.
I see. So can you give me some example of this kind of optimization 
which I should particularly look at?  I have seen docs on lazy joins 
etc.  Are there some other efficiency and performance tricks I can do?
I have many queries which get data from views and sometimes have to do 
joins; such data is really bulk, in the magnitude of 5 records, and 
is repeated 2 or 3 times at a go.
Inserts are not very frequent, and even when they happen, the records in 
one insert would be hardly one in the master and 2 in the detail table from 
a single user.

And not more than 25 or 30 users are inserting at one time.



If your concern is CPU overhead on the application side, yes, the ORM
does induce quite an overhead, but whether it's a problem or not
greatly depends on your use case, the number of objects your
transactions will be handling, the complexity of the mapping, your
latency and thoughput constraints, etc. I've convinced myself over
time that a little overhead is fine in exchange for the benefits the
ORM gives you, in ease of coding mostly, but also robustness (the ORM
solves some issues that are hard to handle correctly and robustly with
raw SQL), and SQLAlchemy is flexible enough that you can usually
escape to raw sql if/when you need to. You shouldn't optimize
prematurely, the ORM won't be a death trap as it happens with other
ORMs.
We have a good server and I don't think CPU overhead is too much of a 
concern given the use case which I have already mentioned.
There is a fair bit of complicated calculation going on over big result sets, 
but not as complicated as scientific ones.  These are mostly financial 
calculations, such as those in preparing a ledger statement in bookkeeping 
software.



So, we're saying we need more information if we're to give a meaningful answer.
So I have provided enough information, I guess.
By the way, we are thinking of moving to either CouchDB or PostgreSQL's 
jsonb datatype for most of our work in the near future.
Is there anything you can specially advise as far as using SQLAlchemy on such 
data?



Happy hacking.
Krishnakant.



Re: [sqlalchemy] Re: Reflect from one engine, use reflected tables in different engine

2015-08-10 Thread Luis Guzman
Right! Thanks for that.


On Monday, August 10, 2015 at 2:15:40 PM UTC-7, Michael Bayer wrote:



 On 8/10/15 4:14 PM, Luis Guzman wrote:

 Right, this is possible. 

 In my newbie status I did not know you can post-bind the metadata to a 
 different engine. 

 A_metadata.bind = B_engine

 And then:

 A_metadata.create_all() does the magic.


 please just pass the Engine object to create_all() directly and don't 
 confuse things by modifying the MetaData object in place:

 metadata.create_all(any_engine)
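A compact sketch of that pattern, assuming A_engine and B_engine are created as in the original post (schema translation between the two databases is left out here; the reflected Table objects keep whatever schema they were reflected with):

from sqlalchemy import MetaData

A_metadata = MetaData()
A_metadata.reflect(bind=A_engine)    # load table definitions from the source database
A_metadata.create_all(B_engine)      # emit CREATE TABLE statements on the target engine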







 On Friday, August 7, 2015 at 4:43:07 PM UTC-7, Luis Guzman wrote: 

 Hi, 

 Is it possible to reflect from one engine and then use some of the tables 
 reflected as base to create tables in a second, different engine?

 A possible way:

 A_engine = create_engine("postgresql+psycopg2://...A...")
 B_engine = create_engine("postgresql+psycopg2://...B...")

 A_metadata = MetaData(schema='A', bind=A_engine)
 B_metadata = MetaData(schema='B', bind=B_engine)

 A_metadata.reflect(bind=A_engine)
 A_table = A_metadata.tables['A_table']

 A_table.metadata = B_metadata   
 A_table.schema = 'B'

 But now, I'm not sure how to tell A_metadata that it has a A_table to 
 care for.

 I've tried also with similar success using the 'autoload' and with the 
 'automap_base',  instead of reflect.

 Any ideas? Is this too hacky/crazy/stupid? If not, is there a better way I 
 haven't found?

 The use case is that I'd like to reflect the tables from the 
 production database and use them as 'templates' to create a new database 
 for testing.
 That is, I want to be able to create my test database from the actual 
 reflection of what is in production, and use SQLAlchemy to control the 
 creation of fake data in it.

 Thanks in advance!

 Luis




Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread kk


Hi,
I have gone through the documentation for baked queries.
It sounds interesting and I guess the performance boost is surely going 
to be worth noticing.
I just wished to ask one thing: if there is a series of queries firing, 
with each one of them bringing back bulk recordsets, will this concept 
still be helpful?
I am asking because then the most important bottleneck will be the time 
the RDBMS takes to parse, compile and execute these queries.

So it kind of becomes a task in itself.
I am not a big ORM expert and don't really know the details under the 
hood so asking this.

Happy hacking.
Krishnakant.

On Friday 07 August 2015 09:21 PM, Mike Bayer wrote:



On 8/7/15 11:05 AM, kk wrote:



On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

Hello.

ORM is certainly slower. How much depends A LOT on your workload. 
For example
bulk operations with ORM are an order of magnitude slower than raw 
SQL. On the
other hand, SQLAlchemy Core lets you write generative SQL queries 
without ORM

features which are as performant as raw SQL.


So is there some kind of method to have prepared SQL 
statements in SQLAlchemy itself?
I have seen that prepared statements in other languages like Java do 
a great job.


That's totally a myth and you can see me ranting on this whole topic 
of explicit prepared statements == SPEED here: 
https://mail.python.org/pipermail/db-sig/2014-December/006147.html See 
the benchmark there.  Whatever performance we get with prepared 
statements is vanishingly small and utterly dwarfed by the 
order-of-magnitude-greater latencies we get from Python. The DBAPI 
already has a great speed optimization in this area and it is known as 
executemany() - it applies only to CRUD statements, not SELECT, but 
SQLAlchemy uses executemany() very heavily and to great effect - the 
speed gains here are not so much due to prepared statements, as 
psycopg2 does not use them in any way, but due to the fact that we 
roll up lots of data into a single call that psycopg2 can run from 
pure compiled C code.


Reading that thread overall, you'll learn at the very least that the 
Python DBAPI does not expose prepared statements.  As you'll note, 
I'm entirely against the idea of them being made explicit, for this 
exact reason; now everyone's going to want the concept expressed 
explicitly in SQLAlchemy, involving that multiple resource-holding 
cursors be held onto which then open the doors to all kinds of new 
concurrency / memory / connection pool / locking issues that will all 
be reported as new bugs that I have to worry about, all for absolutely 
no good reason as explicit PS does just about nothing to help 
performance in any real way. Yet another chronically misunderstood 
concept that everyone is going to demand everywhere even if you show 
them that it's pointless (see: 
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/ 
for the reigning king of this phenomenon).


Now, there is something in SQLAlchemy that will give you an *enormous* 
boost of speed that is basically doing what everyone thinks a 
prepared statement will do, which is a Python-side prepare of 
everything.  Because compared to the database's time to set up a 
statement handle, the time it takes for SQLAlchemy to set up a core 
Select from a Query object as well as the time to build the Query 
itself is very significant.   That feature is known as Baked Queries 
and it is documented here: 
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/baked.html. A lot 
of work went into this very unique feature and it is also benchmarked 
in the example suite, which I would strongly recommend you read and 
run fully.
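For reference, a small sketch of the baked query pattern from that documentation page (the User class, its name column, and the session are assumed to exist):

from sqlalchemy import bindparam
from sqlalchemy.ext import baked

bakery = baked.bakery()

def lookup_user(session, name):
    # The lambdas act as cache keys: building the Query and compiling it to a
    # Core SELECT happens only once per bakery entry, then the string is reused.
    baked_query = bakery(lambda s: s.query(User))
    baked_query += lambda q: q.filter(User.name == bindparam('name'))
    return baked_query(session).params(name=name).all()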








Overall SQLAlchemy is an excellent
library to work with!


So you mean performance will really get hit when pure ORM is used.
So should I use a mixture?
For inserts let's say ORM, and for bulk select queries some 
prepared-statement-like thing (if it exists)?
The suite in 
http://docs.sqlalchemy.org/en/rel_1_0/orm/examples.html#module-examples.performance 
was built in order to provide the answers to these questions. That's 
where you need to be.






Happy hacking.
Krishnakant.








Re: [sqlalchemy] How to sort list based off of a join to another table

2015-08-10 Thread Mike Bayer



On 8/10/15 5:30 PM, Eric Miller wrote:

Well, I meant that I need to know how to do this in sqlalchemy.


assuming ORM, joins are first introduced at 
http://docs.sqlalchemy.org/en/rel_1_0/orm/tutorial.html#querying-with-joins, 
it would look like:


session.query(Table1).join(Table2).order_by(Table2.name)
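Expanding that slightly to cover both sort keys from the original question (month first, then the joined name); Table1 and Table2 are assumed to be mapped classes with a usable join condition on code:

(session.query(Table1)
        .join(Table2, Table1.code == Table2.code)
        .order_by(Table1.month, Table2.name)
        .all())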






On Monday, August 10, 2015 at 5:17:34 PM UTC-4, Michael Bayer wrote:



On 8/10/15 3:42 PM, Eric Miller wrote:


I have a table that needs to be sorted alphabetically by names
that aren't actually contained in the table *in sqlalchemy*. Only
codes representing the names are in the table. Like so...


Table 1:

row  code  month   value
1    A     201501  50
2    Z     201501  100
3    CO    201501  200
4    VA    201502  300
5    C     201502  300


Table 2:

row  code  name
1    A     Apple
2    C     Cascade
3    CO    Colorado
4    VA    Virginia


 I need to sort *Table 1* by the following:

  * month
  * name (found in Table 2)

What is the best technique to achieve these sorting results when
the sorting occurs on a value not inherent in the table? I can't
send the joined product of Table 1 and Table 2, although I can
join them temporarily and remove the 'name' column if needed.



you need to produce rows that are a composite of table1 and
table2, so that a single row contains both the columns you want to
receive as well as the columns you intend to sort by.  You do this
using JOIN.   SQL would be:

SELECT table1.* FROM table1 JOIN table2 ON table1.code=table2.code
ORDER BY table2.name






Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread Claudio Freire
On Mon, Aug 10, 2015 at 6:42 PM, kk krm...@gmail.com wrote:
 Hello,
 Thanks to you and Mike for the detailed insight. My questions follow in-line. On
 Friday 07 August 2015 08:48 PM, Claudio Freire wrote:

 On Fri, Aug 7, 2015 at 12:05 PM, kk krm...@gmail.com wrote:

 On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

 Hello.

 ORM is certainly slower. How much depends A LOT on your workload. For
 example
 bulk operations with ORM are an order of magnitude slower than raw SQL.
 On
 the
 other hand, SQLAlchemy Core lets you write generative SQL queries
 without
 ORM features which are as performant as raw SQL.


 I am going to be mostly on PostgreSQL for now, so I don't need
 database-independent queries.  Perhaps we will shift to NoSQL in the near
 future (more on that in some time).
 So if I were to write Core queries then I could just as well do it directly
 with psycopg2.  What advantage will I then get by using SQLAlchemy?

For one benefit, building complex queries programmatically is much
easier with Core than with SQL strings, and less error-prone.
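As an illustration of that point (not taken from the thread), this is the kind of conditional query assembly that is awkward with string SQL but straightforward with Core; the users table and the filters are hypothetical:

from sqlalchemy import MetaData, Table, Column, Integer, String, select, and_

metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)),
              Column("age", Integer))

def build_search(name=None, min_age=None):
    # Each filter is appended only when the caller supplied it; no string
    # concatenation or placeholder bookkeeping involved.
    conditions = []
    if name is not None:
        conditions.append(users.c.name == name)
    if min_age is not None:
        conditions.append(users.c.age >= min_age)
    stmt = select([users])
    if conditions:
        stmt = stmt.where(and_(*conditions))
    return stmt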



Re: [sqlalchemy] Retrieving a row/cursor's description object from a ResultSetProxy

2015-08-10 Thread Mike Bayer



On 8/10/15 4:27 PM, Van Klaveren, Brian N. wrote:

Hi,

I want to get extra type information from a given column after performing a 
query like the following:

results = engine.execute(text("SELECT a, b FROM Attributes"))

It seems the only way to really do this is to use the cursor, via 
results.cursor.description.

Is this the preferred method, or is there a better alternative?
if you are invoking SQL as strings to the DBAPI, that's the only 
authoritative source of typing information. This is pretty limited, so 
SQLAlchemy's architecture is oriented towards that of bundling the 
column typing information with the statement ahead of time, which is 
carried along and matched up when the result is returned, such as:


execute(text("select a, b from table").columns(a=Numeric, b=Integer)).

That would ensure Numeric and Integer are applied to the result rows as 
they are returned.  To see those types, you'd probably want to look at 
the original statement:


stmt = text("select a, b from table").columns(a=Numeric, b=Integer)
stmt.c.a.type
stmt.c.b.type



I ask because my database may have decimals larger than double precision, and 
integers larger than 64 bits, and I'm confused as to what the call:
type(row[0]).

...would return in this case.


you're going to get back a symbol that is specific to the DBAPI in use, 
such as psycopg2.NUMERIC.  These aren't the SQLAlchemy type objects.
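Putting the two pieces together in one hypothetical round trip (table and column names invented, engine assumed to exist):

from sqlalchemy import text, Numeric, Integer

stmt = text("SELECT a, b FROM attributes").columns(a=Numeric, b=Integer)
result = engine.execute(stmt)
print(stmt.c.a.type, stmt.c.b.type)   # SQLAlchemy-side types attached to the statement
print(result.cursor.description)      # raw DBAPI metadata, driver-specific symbols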







Brian





Re: [sqlalchemy] Re: Reflect from one engine, use reflected tables in different engine

2015-08-10 Thread Mike Bayer



On 8/10/15 4:14 PM, Luis Guzman wrote:

Right, this is possible.

In my newbie status I did not know you can post-bind the metadata to a 
different engine.


A_metadata.bind = B_engine

And then:

A_metadata.create_all() does the magic.


please just pass the Engine object to create_all() directly and don't 
confuse things by modifying the MetaData object in place:


metadata.create_all(any_engine)








On Friday, August 7, 2015 at 4:43:07 PM UTC-7, Luis Guzman wrote:

Hi,

Is it possible to reflect from one engine and then use some of the
tables reflected as base to create tables in a second, different
engine?

A possible way:

A_engine = create_engine("postgresql+psycopg2://...A...")
B_engine = create_engine("postgresql+psycopg2://...B...")

A_metadata = MetaData(schema='A', bind=A_engine)
B_metadata = MetaData(schema='B', bind=B_engine)

A_metadata.reflect(bind=A_engine)
A_table = A_metadata.tables['A_table']

A_table.metadata = B_metadata
A_table.schema = 'B'

But now, I'm not sure how to tell A_metadata that it has a A_table
to care for.

I've tried also with similar success using the 'autoload' and with
the 'automap_base',  instead of reflect.

Any ideas? Is this too hacky/crazy/stupid? If not, is there a better
way I haven't found?

The use case is that I'd like to reflect the tables from the
production database and use them as 'templates' to create a new
database for testing.
That is, I want to be able to create my test database from the
actual reflection of what is in production, and use SQLAlchemy
to control the creation of fake data in it.

Thanks in advance!

Luis




Re: [sqlalchemy] How to sort list based off of a join to another table

2015-08-10 Thread Eric Miller
Well, I meant that I need to know how to do this in sqlalchemy.

On Monday, August 10, 2015 at 5:17:34 PM UTC-4, Michael Bayer wrote:



 On 8/10/15 3:42 PM, Eric Miller wrote:

 I have a table that needs to be sorted alphabetically by names that aren't 
 actually contained in the table *in sqlalchemy*. Only codes representing 
 the names are in the table. Like so...


 Table 1:

 row  code  month   value
 1    A     201501  50
 2    Z     201501  100
 3    CO    201501  200
 4    VA    201502  300
 5    C     201502  300


 Table 2:

 row  code  name
 1    A     Apple
 2    C     Cascade
 3    CO    Colorado
 4    VA    Virginia


 I need to sort *Table 1* by the following:

- month
- name (found in Table 2)

 What is the best technique to achieve these sorting results when the 
 sorting occurs on a value not inherent in the table? I can't send the joined 
 product of Table 1 and Table 2, although I can join them temporarily and 
 remove the 'name' column if needed.


 you need to produce rows that are a composite of table1 and table2, so 
 that a single row contains both the columns you want to receive as well as 
 the columns you intend to sort by.  You do this using JOIN.   SQL would be:

 SELECT table1.* FROM table1 JOIN table2 ON table1.code=table2.code ORDER 
 BY table2.name






Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread kk



On Tuesday 11 August 2015 03:44 AM, Claudio Freire wrote:

On Mon, Aug 10, 2015 at 6:42 PM, kk krm...@gmail.com wrote:

Hello,
Thanks to you and Mike for the detailed insight. My questions follow in-line. On
Friday 07 August 2015 08:48 PM, Claudio Freire wrote:

On Fri, Aug 7, 2015 at 12:05 PM, kk krm...@gmail.com wrote:

On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

Hello.

ORM is certainly slower. How much depends A LOT on your workload. For
example
bulk operations with ORM are an order of magnitude slower than raw SQL.
On
the
other hand, SQLAlchemy Core lets you write generative SQL queries
without
ORM features which are as performant as raw SQL.


I am going to be mostly on PostgreSQL for now, so I don't need
database-independent queries.  Perhaps we will shift to NoSQL in the near
future (more on that in some time).
So if I were to write Core queries then I could just as well do it directly
with psycopg2.  What advantage will I then get by using SQLAlchemy?

For one benefit, building complex queries programmatically is much
easier with Core than with SQL strings, and less error-prone.


I see. I am sorry, I had misunderstood that with Core I would have to 
actually write the select, insert and update statements with all the 
concatenation as-is.

If that is the case then definitely I will try experimenting with Core.
Happy hacking.
Krishnakant.



Re: [sqlalchemy] how fast can pure sqlalchemy perform?

2015-08-10 Thread kk



On Monday 10 August 2015 10:36 PM, Claudio Freire wrote:

On Fri, Aug 7, 2015 at 6:58 PM, kk krm...@gmail.com wrote:

On Friday 07 August 2015 10:05 PM, Claudio Freire wrote:

On Fri, Aug 7, 2015 at 12:51 PM, Mike Bayer mike...@zzzcomputing.com
wrote:

On 8/7/15 11:05 AM, kk wrote:



On Friday 07 August 2015 03:03 PM, Ladislav Lenart wrote:

Hello.

ORM is certainly slower. How much depends A LOT on your workload. For
example
bulk operations with ORM are an order of magnitude slower than raw SQL.
On the
other hand, SQLAlchemy Core lets you write generative SQL queries
without ORM
features which are as performant as raw SQL.


So is there some kind of method to have prepared SQL statements
in SQLAlchemy itself?
I have seen that prepared statements in other languages like Java do a
great job.


That's totally a myth and you can see me ranting on this whole topic of
explicit prepared statements == SPEED here:
https://mail.python.org/pipermail/db-sig/2014-December/006147.html See
the
benchmark there.  Whatever performance we get with prepared statements
is
vanishingly small and utterly dwarfed by the order-of-magnitude-greater
latencies we get from Python. The DBAPI already has a great speed
optimization in this area and it is known as executemany() - it applies
only
to CRUD statements, not SELECT, but SQLAlchemy uses executemany() very
heavily and to great effect - the speed gains here are not so much due to
prepared statements, as psycopg2 does not use them in any way, but due to
the fact that we roll up lots of data into a single call that psycopg2
can
run from pure compiled C code.


It may be a myth most of the time, but there are cases where it is not.

I had one case (in a whole decade of programming, so it is indeed very
rare) in which a very complex query ran very fast, and planning time
was the dominant cost (think 150ms for planning and 15ms for
execution). For that query, preparing it explicitly saved a lot of
runtime. Again, executemany helps when it's implemented with prepared
statements. But I was using psycopg2 and it doesn't use prepared
statements, so I had to prepare them explicitly myself. This was with
SQLAlchemy 0.3 and I managed to do it just fine. Some black magic was
needed of course (had to explicitly compile the query to SQL, bind
parameters, and generate a PREPARE statement from it), but nothing too
complex.

Now, pg8000 does use prepared statements, so it may be as simple as
using that driver if the need for prepared statements is there. You
can even have two engines pointing to the same database and use pg8000
only for the queries that really need prepared statements. So
SQLAlchemy (and DBAPI) has come a long way since I found that case I
mentioned above. I'm sure today handling that case would have been
even easier.


Thanks for the detailed reply.
Now I understand it better.
So you mean to say executemany will give me the needed performance gain, in a
nutshell.
Is that correct?

In a nutshell, yes.


Secondly, is executemany good at only Insert, or Update or both?

Both. More precisely, anything that doesn't produce results.


And lastly if I have a big resultset through a select statement, more so
from a view, what is the best approach to use if I decide not to use stored
procedures?

Well, the bigger your result sets are, the less you care about stored procedures.

Assuming stored procedures speed up queries (which is a big IF that is
very often false), they only speed up the planning phase, not the
execution. There's no execution optimization you cannot accomplish
with raw SQL, so the bigger the result set, the less you care about
planning time, and thus the less the relative benefit from using
stored procedures is.

Honestly, the benefits of stored procedures are so small, and their
maintenance cost so high, that I would suggest never using them
unless you find a case you've thoroughly analyzed and profiled, and
that you find they'd be a huge help (which won't happen for a big




while).
So essentially baked statements, as Mike was pointing out, seem to be a great 
solution for bigger resultsets.

Is that correct?
happy hacking.
Krishnakant.




[sqlalchemy] Retrieving a row/cursor's description object from a ResultSetProxy

2015-08-10 Thread Van Klaveren, Brian N.
Hi,

I want to get extra type information from a given column after performing a 
query like the following:

results = engine.execute(text("SELECT a, b FROM Attributes"))

It seems the only way to really do this is to use the cursor, via 
results.cursor.description.

Is this the preferred method, or is there a better alternative?

I ask because my database may have decimals larger than double precision, and 
integers larger than 64 bits, and I'm confused as to what the call:
type(row[0]).

...would return in this case.

Brian



[sqlalchemy] Alter a column from an Integer to a many-to-many relationship

2015-08-10 Thread c.buhtz
I want to alter a column in a SQLite database from an Integer to a
many-to-many relationship (maybe with Alembic).

I tried around a little bit and created some example code for that. But
it didn't work. And I am not sure if Alembic is the right solution for
my case.

Please see the details at stackoverflow.
http://stackoverflow.com/questions/31910185/alter-a-column-in-a-sqlite-database-from-an-integer-to-a-many-to-many-relationsh
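For what it's worth, a hypothetical Alembic sketch of the usual shape of such a change (every table and column name below is invented for illustration, not taken from the question): create the association table, copy the existing integer foreign-key values into it, then drop the old column with batch mode, since SQLite cannot drop a column in place:

from alembic import op
import sqlalchemy as sa

def upgrade():
    assoc = op.create_table(
        'parent_child',
        sa.Column('parent_id', sa.Integer, sa.ForeignKey('parent.id')),
        sa.Column('child_id', sa.Integer, sa.ForeignKey('child.id')),
    )
    # copy the existing single-valued links into the new association table
    conn = op.get_bind()
    rows = conn.execute(
        sa.text("SELECT id, child_id FROM parent WHERE child_id IS NOT NULL"))
    op.bulk_insert(assoc, [{'parent_id': r[0], 'child_id': r[1]} for r in rows])
    # SQLite cannot ALTER TABLE ... DROP COLUMN directly; batch mode rebuilds the table
    with op.batch_alter_table('parent') as batch_op:
        batch_op.drop_column('child_id')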



[sqlalchemy] How to sort list based off of a join to another table

2015-08-10 Thread Eric Miller


I have a table that needs to be sorted alphabetically by names that aren't 
actually contained in the table *in sqlalchemy*. Only codes representing 
the names are in the table. Like so...


Table 1:

row  code  month   value
1    A     201501  50
2    Z     201501  100
3    CO    201501  200
4    VA    201502  300
5    C     201502  300


Table 2:

row  code  name
1    A     Apple
2    C     Cascade
3    CO    Colorado
4    VA    Virginia


I need to sort *Table 1* by the following:

   - month
   - name (found in Table 2)
   
What is the best technique to achieve these sorting results when the 
sorting occurs on a value not inherent in the table? I can't send the joined 
product of Table 1 and Table 2, although I can join them temporarily and 
remove the 'name' column if needed.



[sqlalchemy] Re: Reflect from one engine, use reflected tables in different engine

2015-08-10 Thread Luis Guzman
Right, this is possible.

In my newbie status I did not know you can post-bind the metadata to a 
different engine.

A_metadata.bind = B_engine

And then:

A_metadata.create_all() does the magic.




On Friday, August 7, 2015 at 4:43:07 PM UTC-7, Luis Guzman wrote:

 Hi, 

 Is it possible to reflect from one engine and then use some of the tables 
 reflected as base to create tables in a second, different engine?

 A possible way:

 A_engine = create_engine("postgresql+psycopg2://...A...")
 B_engine = create_engine("postgresql+psycopg2://...B...")

 A_metadata = MetaData(schema='A', bind=A_engine)
 B_metadata = MetaData(schema='B', bind=B_engine)

 A_metadata.reflect(bind=A_engine)
 A_table = A_metadata.tables['A_table']

 A_table.metadata = B_metadata   
 A_table.schema = 'B'

 But now, I'm not sure how to tell A_metadata that it has a A_table to care 
 for.

 I've tried also with similar success using the 'autoload' and with the 
 'automap_base',  instead of reflect.

 Any ideas? Is this too hacky/crazy/stupid? If not, is there a better way I 
 haven't found?

 The use case is that I'd like to reflect the tables from the 
 production database and use them as 'templates' to create a new database 
 for testing.
 That is, I want to be able to create my test database from the actual 
 reflection of what is in production, and use SQLAlchemy to control the 
 creation of fake data in it.

 Thanks in advance!

 Luis



