Re: [sqlalchemy] Create Table scripts failing randomly - scripts work from sqlite3 driver but not sqlalchemy

2013-09-23 Thread Simon King
A quick google for "psycopg multiple statements" doesn't turn up
anything useful, so I suspect you are going to be out of luck. And
unless there are more database drivers than sqlite that support an
executescript method, it doesn't seem likely that it'll get added to
SQLAlchemy either...

For batch loading of data, you could look at the examples at
http://docs.sqlalchemy.org/en/rel_0_7/core/tutorial.html#executing-multiple-statements

Simon
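[Editor's note: the linked tutorial section boils down to the DBAPI "executemany" style: one statement, many parameter sets. A rough stdlib-only sketch of the same idea (table and column names invented; with SQLAlchemy Core you would pass the list of dicts to conn.execute(table.insert(), rows)):]

```python
import sqlite3

# many parameter sets, one statement -- the "executemany" style that
# SQLAlchemy Core uses when you pass a list of dicts to execute()
rows = [{"id": i, "name": "item%d" % i} for i in range(1000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (id, name) VALUES (:id, :name)", rows)

n = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(n)  # -> 1000
```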

On Mon, Sep 23, 2013 at 6:54 AM,  monosij.for...@gmail.com wrote:
 Hi Simon -

 Great! The executescript from sqlite worked great. I had not seen that. I
 was able to execute indices and fks as well.

 Is the same not possible from SQLAlchemy then? Would version 0.9 have it? I
 don't know if Michael is planning on having this feature.

 Meaning even for PostgreSQL or MySQL? Would I then have to find another
 driver (such as for SQLite) that would do this?

 For bulk loading of data, then, would SQLAlchemy's ORM capabilities be
 better than executing SQL in bulk load operations? 'Bulk' is a relative term,
 but 10K - 100K records, and I am trying not to resort to ETL tools.

 Look forward to your reply and thanks for your help.

 Mono

 On Sunday, September 22, 2013 5:30:37 PM UTC-4, Simon King wrote:

 The documentation for the Python sqlite driver specifically says:

 "execute() will only execute a single SQL statement. If you try to execute
 more than one statement with it, it will raise a Warning. Use
 executescript() if you want to execute multiple SQL statements with one
 call."

   (http://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.execute)

 ("raise a Warning" is a slightly vague statement, but it appears that it
 is actually an exception)
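[Editor's note: for illustration, the restriction and the executescript() workaround can be seen with the stdlib driver alone; table names here are made up. Older Pythons raise sqlite3.Warning for the multi-statement case, newer ones sqlite3.ProgrammingError:]

```python
import sqlite3

ddl = """
CREATE TABLE IF NOT EXISTS a (id INTEGER);
CREATE TABLE IF NOT EXISTS b (id INTEGER);
"""

conn = sqlite3.connect(":memory:")
try:
    # Cursor.execute() allows only ONE statement per call
    conn.execute(ddl)
    failed = False
except (sqlite3.Warning, sqlite3.ProgrammingError):
    failed = True

# executescript() runs the whole multi-statement script in one call
conn.executescript(ddl)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```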

 SQLAlchemy's conn.execute eventually calls sqlite's Cursor.execute, so has
 the same limitations. I don't know, but I suspect that postgres will be the
 same.

 I think you need to find a way to split your scripts up into individual
 statements that you can run one at a time.

 Hope that helps,

 Simon
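[Editor's note: a minimal sketch of "split your scripts up into individual statements". The naive split below is an assumption that works for plain DDL, but is not safe if a ';' can appear inside a string literal or a trigger body:]

```python
import sqlite3

script = """
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE pet (id INTEGER PRIMARY KEY, owner_id INTEGER);
"""

# naive split on ';' -- good enough for simple DDL only
statements = [s.strip() for s in script.split(";") if s.strip()]

conn = sqlite3.connect(":memory:")
for stmt in statements:
    conn.execute(stmt)  # one statement per call
```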

 On 21 Sep 2013, at 16:14, monosij...@gmail.com wrote:

  Hi Michael and Simon - Thank you for your responses and help. Sorry I
  should have provided more details.
 
  Putting in the raise gave the error trace below.
 
  It says it cannot execute more than one statement at a time. It says the same
  even if I do not have it in a transaction block, but in this case I do.
 
  Meaning if I just do a: conn.execute(scriptFile) - it says it cannot execute
  more than 1 statement at a time, which is the same error I am getting with
  the transaction.
 
  ...
  In this case I do a try catch as in:
  conn = engine.connect()
  transact = conn.begin()
  try:
      conn.execute(scriptFile)
      transact.commit()
  except:
      transact.rollback()
      raise
 
  With scriptFile being just a string with 4 CREATE TABLE statements with
  3 - 10 fields each.
  The scriptFile does have '\n' and '\t' (newlines/tabs) in each line of
  fields in the CREATE TABLE statements.
 
  And as I said before, I can execute the same statements (with the \n and
  \t) 1 at a time with either the sqlalchemy or sqlite3 drivers, no problem.
  Eventually I do want to run these on PostgreSQL, so I assume it is not a
  SQLite3 issue. I do need to run on SQLite3 as well.
 
  Btw, in my use case, while I can run the CREATE TABLEs 1 at a time,
  eventually I do have to run CREATE INDEXES / FKs / INITIAL DATA LOAD
  multiple at a time, so I would like to get the multiple statements working
  right.
  ...
  This is the error raise generates - at the end it says it can only
  execute 1 statement at a time, but it's a warning.
 
  2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine BEGIN
  (implicit)
  2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine
 
  ... then shows the CREATE TABLE statements ...
 
  2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine ()
  Traceback (most recent call last):
    File "dbms.model/python3/Controller.py", line 54, in <module>
      if __name__ == "__main__": main()
    File "dbms.model/python3/Controller.py", line 50, in main
      controller.initiateEnvironment()
 
  ... lines from traceback in my code
  ... end of traceback from SQLAlchemy driver below ...
 
    File "/space/development/python/dbms.model/python3/DBMS/ExecutorSQL.py", line 47, in executeCreateOnSQLite2
      conn.execute(scriptFile)
    File "/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py", line 662, in execute
      params)
    File "/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py", line 805, in _execute_text
      statement, parameters
    File "/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
      context)
    File "/usr/local/lib/python3.3/dist-packages/sqlalchemy/engine/base.py", line 1027, in _handle_dbapi_exception
      util.reraise(*exc_info)
    File "/usr/local/lib/python3.3/dist-packages/sqlalchemy/util/compat.py", line 183, in reraise
      raise value
File
  

Re: [sqlalchemy] load events with polymorphic_on

2013-09-23 Thread Philip Scott
 this is normal, loading for the base class only hits those columns which
 are defined for that base class - it does not automatically fan out to all
 columns mapped by all subclasses.

to do so, you can specify with_polymorphic:


Ahh, thank you very much Michael that does do exactly what I want. So many
times in SQLAlchemy I have been rummaging and hacking for days and then
there's a simple one liner that does exactly what I was after all along :)

One problem remains though. I use Query.from_self(), which seems to cause
SA to forget about the with_polymorphic setting I have given in
__mapper_args__. If I try to remind it by explicitly saying
my_query.with_polymorphic("*"), I get errors like this:

 Query.with_polymorphic() being called on a Query with existing criterion.

Which seems to be related in some way to having a .distinct or .order_by
clause in the query. Is this expected/understood?

Thank you so much for your help!

All the best,

Philip

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] What is declarative_base() exactly?

2013-09-23 Thread Edward Kim
Hi all,

I have only a really short experience with Python, so this may be a really stupid
question. I am trying to understand declarative_base().

Example below:

Base = declarative_base()

class Bus(Base):
    __tablename__ = 'bus'




In my understanding, that Python code looks like a function or class, so it
will return some value or instance.

In SQLAlchemy, declarative_base() returns something and then the Bus class
inherits that Base. I looked at the code in SQLAlchemy, but I can't understand
what exactly it is supposed to be.

>>> Base = declarative_base()
>>> Base
<class 'sqlalchemy.ext.declarative.api.Base'>


How does this function return a class, not an instance? Is it some kind of design
pattern?

I know it is not a big deal for just using SQLAlchemy, but I can't explain
what it is and how it can return a class.

Please let me know what I need to know about this pattern or style.


Thanks,

Edward.



Re: [sqlalchemy] What is declarative_base() exactly?

2013-09-23 Thread Claudio Freire
On Mon, Sep 23, 2013 at 10:22 AM, Edward Kim onward.ed...@gmail.com wrote:
  >>> Base = declarative_base()
  >>> Base
  <class 'sqlalchemy.ext.declarative.api.Base'>


 How does this function return a class, not an instance? Is it some kind of
 design pattern?

 I know it is not a big deal for just using SQLAlchemy, but I can't explain
 what it is and how it can return a class.


A class is an object like any other.

You can pass around references to classes like any other reference,
and you can create them just as well:

>>> def make_me_a_class():
...     class A(object):
...         def f(self):
...             print "I'm a class"
...     return A
...
>>> B = make_me_a_class()
>>> c = B()
>>> c.f()
I'm a class




Re: [sqlalchemy] What is declarative_base() exactly?

2013-09-23 Thread Edward Kim
Oh, I see! It really does return a class, literally. Thanks for your code.



Re: [sqlalchemy] What is declarative_base() exactly?

2013-09-23 Thread Philip Scott
declarative_base is just a function that returns a class. In Python, a
class is a first-class object just like any other. You can do things like
this:

class MyClass(object):
    pass

def foo():
    return MyClass

my_class_instance = foo()()

In normal use of SQLAlchemy you don't need to think too hard about what
actually goes on inside declarative_base; it's part of the magical alchemy
that takes a class full of Column() objects and lets you build queries and
look at data in instances of your mapped classes. For more information on
that sort of design pattern, punch 'python metaclass' into your favourite
search engine and allow your mind to be boggled.
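[Editor's note: as a taste of what such a search turns up, the three-argument form of type() manufactures a class at runtime, which is roughly the trick declarative_base() relies on. The names below are illustrative, not SQLAlchemy's actual internals:]

```python
def make_base(name="Base"):
    # build a class dynamically: type(name, bases, namespace)
    # declarative_base() does something similar, plus a metaclass
    # that inspects Column attributes and a shared MetaData registry
    namespace = {"registry": {}}
    return type(name, (object,), namespace)

Base = make_base()          # a *class*, created at runtime
assert isinstance(Base, type)

class Bus(Base):            # and it can be subclassed like any class
    tablename = "bus"       # illustrative attribute, not __tablename__

Base.registry[Bus.__name__] = Bus
print(sorted(Base.registry))  # -> ['Bus']
```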

- Phil




Re: [sqlalchemy] another postgresql distinct on question

2013-09-23 Thread Philip Scott
I went through the exact same process of discovery that you did, Jonathan :)
It does work perfectly but does not get rendered properly when printing out
the queries (possibly even when I set echo=True on the connection, if I
remember correctly)


On Sun, Sep 22, 2013 at 5:44 PM, Jonathan Vanasco jonat...@findmeon.comwrote:

 ah ha! yes!

 that is right.

  query = select( (these,columns,) ).distinct( this.column )

 this was an even trickier problem...  and I might have been executing
 correct queries last night without realizing it.

 i just noticed that i was getting a correct query in my database, while I
 was seeing the wrong query on screen.

 sqlalchemy didn't know that the 'core' commands I was using were for
 postgresql, so it rendered the query not using that dialect.

 when i actually did query the database, it was compiling with the right
 dialect :

 _query_EXT = dbSession.query( model.Table.data )
 _query_INT = dbSession.query( model.Table.data )
 _slurped = sqlalchemy.union( _query_EXT , _query_INT )
 _slurped = sqlalchemy.sql.expression.alias( _slurped , name='slurped')
 _deduped = sqlalchemy.select(
     (
         _slurped.c.object_id.label('object_id'),
         _slurped.c.event_timestamp.label('event_timestamp')
     ),
 ).distinct(
     _slurped.c.object_id
 ).order_by(
     _slurped.c.object_id.desc(),
     _slurped.c.event_timestamp.desc()
 )
 _deduped = sqlalchemy.sql.expression.alias( _deduped , name='deduped')
 _culled = sqlalchemy.select(
     (_deduped.c.object_id,)
 ).order_by(
     _deduped.c.event_timestamp.desc()
 )
 _query = _culled


 # this executes a DISTINCT ON ( slurped.object_id ) slurped.object_id
 , event_timestamp
 yay = dbSession.execute( _query )


 # this renders a DISTINCT slurped.object_id , event_timestamp
 nay = str( _query )








Re: [sqlalchemy] Mutable column_properties

2013-09-23 Thread Philip Scott
Yes, obviously :) But I meant in general, for any Python type to native
PostgreSQL type conversion; I guess there are not that many really, so I could
just handle all the cases I want to use.


On Fri, Sep 20, 2013 at 5:05 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Sep 20, 2013, at 10:35 AM, Philip Scott safetyfirstp...@gmail.com
 wrote:

 
 Without actually querying the DB or enumerating the types and their
 conversions myself, which seems a bit naff; psycopg2.extensions.adapt
 almost does it but not quite (e.g. you get '2013-09-10'::date when CAST()
 gives you 2013-09-10). Any ideas? It's not vital; I don't do much
 updating of these guys really, so I have it doing a supplemental SQL query
 like the one above for every update/insert :)

 date to string without SQL access... how about strftime()?


 http://docs.python.org/2/library/datetime.html#strftime-strptime-behavior
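[Editor's note: a quick sketch of that suggestion; the format string is an assumption chosen to match the text a PostgreSQL CAST typically produces for a date:]

```python
from datetime import date

d = date(2013, 9, 10)

# strftime renders the date without any SQL round trip
print(d.strftime("%Y-%m-%d"))  # -> 2013-09-10

# for ISO-style dates, isoformat() gives the same result
print(d.isoformat())  # -> 2013-09-10
```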






Re: [sqlalchemy] load events with polymorphic_on

2013-09-23 Thread Michael Bayer
On Sep 23, 2013, at 8:35 AM, Philip Scott safetyfirstp...@gmail.com wrote:

 this is normal, loading for the base class only hits those columns which are defined for that base class - it does not automatically fan out to all columns mapped by all subclasses.
 to do so, you can specify with_polymorphic:

 Ahh, thank you very much Michael that does do exactly what I want. So many times in SQLAlchemy I have been rummaging and hacking for days and then there's a simple one-liner that does exactly what I was after all along :)

 One problem remains though. I use a Query.from_self() which seems to cause SA to forget about the with_polymorphic setting I have given in __mapper_args__.

this sounds like a bug but I can't reproduce.  Attached is a full series of tests for both mapper-level and entity-level with_polymorphic; if you can illustrate your usage within this series of tests, where mapper-level with_polymorphic is erased by from_self(), we can identify it.

 If I try to remind it, by explicitly saying my_query.with_polymorphic("*"), I get errors like this:

  Query.with_polymorphic() being called on a Query with existing criterion.

 Which seems to be related in some way to having a .distinct or .order_by clause in the query. Is this expected/understood?

yes, you should use the freestanding with_polymorphic() for greater flexibility.

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
import unittest

class WPFromSelfTest(unittest.TestCase):
    def _fixture(self, wp=None):
        Base = declarative_base()

        class A(Base):
            __tablename__ = 'a'

            id = Column(Integer, primary_key=True)
            type = Column(String)

            __mapper_args__ = {
                'polymorphic_on': type,
                'with_polymorphic': wp}

        class B(A):
            __mapper_args__ = {'polymorphic_identity': 'b'}
            x = Column(String)

        class C(A):
            __mapper_args__ = {'polymorphic_identity': 'c'}
            y = Column(String)

        self.engine = create_engine('sqlite://', echo=True)
        Base.metadata.create_all(self.engine)
        s = Session(self.engine)
        s.add_all([B(x='b1'), C(y='c1')])
        s.commit()

        return A, B, C

    def test_control(self):
        A, B, C = self._fixture()
        s = Session(self.engine)
        q = s.query(A).order_by(A.id)
        r = q.all()
        assert 'x' not in r[0].__dict__
        assert 'y' not in r[1].__dict__

    def test_control_from_self(self):
        A, B, C = self._fixture()
        s = Session(self.engine)
        q = s.query(A).order_by(A.id)
        q = q.from_self()
        r = q.all()
        assert 'x' not in r[0].__dict__
        assert 'y' not in r[1].__dict__

    def test_per_q_wp(self):
        A, B, C = self._fixture()

        s = Session(self.engine)
        wp = with_polymorphic(A, [B, C])
        q = s.query(wp).order_by(wp.id)
        r = q.all()
        assert 'x' in r[0].__dict__
        assert 'y' in r[1].__dict__

    def test_per_q_wp_from_self(self):
        A, B, C = self._fixture()

        s = Session(self.engine)
        wp = with_polymorphic(A, [B, C])
        q = s.query(wp).order_by(wp.id)
        q = q.from_self()
        r = q.all()
        assert 'x' in r[0].__dict__
        assert 'y' in r[1].__dict__

    def test_mapper_wp(self):
        A, B, C = self._fixture(wp='*')
        s = Session(self.engine)
        q = s.query(A).order_by(A.id)
        r = q.all()
        assert 'x' in r[0].__dict__
        assert 'y' in r[1].__dict__

    def test_mapper_wp_from_self(self):
        A, B, C = self._fixture(wp='*')
        s = Session(self.engine)
        q = s.query(A).order_by(A.id)
        q = q.from_self()
        r = q.all()
        assert 'x' in r[0].__dict__
        assert 'y' in r[1].__dict__

if __name__ == '__main__':
    unittest.main()



Re: [sqlalchemy] another postgresql distinct on question

2013-09-23 Thread Michael Bayer

On Sep 23, 2013, at 9:40 AM, Philip Scott safetyfirstp...@gmail.com wrote:

 I went though the exact same process of discovery that you did Jonathan :) It 
 does work perfectly but does not get rendered properly when printing out the 
 queries (possibly even when I set echo=True on the connection, if I remember 
 correctly)

it will definitely show the right thing for echo=True, that's what's being sent 
to the database.

DISTINCT ON is postgresql-specific, so a string repr won't show it unless you 
pass a PG dialect:

from sqlalchemy import select, literal

s = select([literal(1)]).distinct(literal(2))
print s

from sqlalchemy.dialects import postgresql
print s.compile(dialect=postgresql.dialect())









Re: [sqlalchemy] Create Table without Metadata and pass it in later?

2013-09-23 Thread Michael Bayer

On Sep 22, 2013, at 11:47 PM, Donald Stufft donald.stu...@gmail.com wrote:

 Mostly I'm trying to avoid global state like metadata.

the irony is that the purpose of MetaData is to *avoid* global state.   If any 
Table could refer to ForeignKey("someothertable.id"), then without MetaData 
SQLAlchemy would need to have a truly global registry of all tables everywhere 
(which would also be disastrous as far as naming in large and/or multi-tenancy 
style apps).   It would also make create_all() etc. very painful.


 The Tables themselves are global, but I feel like the metadata maybe 
 shouldn't be?

MetaData and Table are both objects.   Feel free to put them in some context 
that you pass around everywhere.  MetaData itself has all the Table objects 
inside of metadata.tables too.

I never really write Core-only apps in the first place (since it sounds like we 
aren't talking about any mapped classes) so I don't know that there's really an 
official pattern here.Of course historically, having Table objects as 
global seems fairly natural but I'm not as allergic to global state as other 
folks.


 I'm not a SQLAlchemy expert, so maybe what I'm trying to do is crazy and the 
 answer is "don't do that". Mostly I'm trying to figure out what patterns are 
 best for testable apps written with SQLAlchemy. 

yeah, pretty soon I'm going to have to go after this global == "not testable" 
thing full on.   I see no evidence for it at all, and the "globals aren't 
testable" orthodoxy makes the incredibly common and useful registry pattern 
(http://martinfowler.com/eaaCatalog/registry.html) not really possible.    My 
Pycon talk in 2014 (assuming it's accepted) will illustrate some simple test 
patterns where global Session registries are involved.
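[Editor's note: the registry pattern Fowler describes is tiny; a minimal sketch (nothing SQLAlchemy-specific, names invented) shows why a global access point and testability are not at odds -- a test can swap the registered object and restore it afterwards:]

```python
class Registry(object):
    """A well-known access point for shared objects (Fowler's registry)."""
    _objects = {}

    @classmethod
    def register(cls, name, obj):
        cls._objects[name] = obj

    @classmethod
    def get(cls, name):
        return cls._objects[name]


# the application registers its real factory once, at startup
Registry.register("session_factory", lambda: "real-session")

# a test swaps in a fake, runs its assertions, then restores the original
original = Registry.get("session_factory")
Registry.register("session_factory", lambda: "fake-session")
assert Registry.get("session_factory")() == "fake-session"
Registry.register("session_factory", original)
```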







Re: [sqlalchemy] another postgresql distinct on question

2013-09-23 Thread Jonathan Vanasco


On Monday, September 23, 2013 10:31:16 AM UTC-4, Michael Bayer wrote:

 it will definitely show the right thing for echo=True, that's what's being 
 sent to the database.


yeah, the `echo` in my debug log is what showed me that postgres was 
getting the right data.

i was doing this to audit (pseudocode):

   query = build_query()
   results = session( query )
   raise ValueError( str(query) , results )

I naively thought that sqlalchemy was aware that the intended dialect was 
postgresql. That was wrong.

drove me crazy for a day, but I learned a lot about the expression syntax 
and understood a bit more of the source.  no complaints, I'm better from 
this.



Re: [sqlalchemy] another postgresql distinct on question

2013-09-23 Thread Michael Bayer

On Sep 23, 2013, at 10:59 AM, Jonathan Vanasco jonat...@findmeon.com wrote:

 I naively thought that sqlalchemy was aware that the intended dialect was 
 postgresql. That was wrong.

yeah it does that only if the MetaData is associated with the engine, which as 
you know I'm not a fan of doing.   I don't have another nice way to make this 
automatic...




Re: [sqlalchemy] Create Table without Metadata and pass it in later?

2013-09-23 Thread Donald Stufft

On Sep 23, 2013, at 11:52 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Sep 23, 2013, at 11:15 AM, Donald Stufft donald.stu...@gmail.com wrote:
 
 
 
 Well mostly I've isolated other pieces of global state and while metadata 
 itself probably isn't a problem I was hesitant to add it as a piece of 
 global state since I didn't have anything else like that. I did come up with 
 a solution though that I think works Ok. It's basically allowing me to 
 declare my tables at the top level (but not use them from there) so that I 
 can organize my code better, but then the real tables exist inside of my 
 application object where I can use them.
 
 Examples here:
 
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/packaging/models.py
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/models.py
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/application.py#L37-L46
 
 I haven't settled on this approach (notably I need to figure out how it's 
 going to interact with alembic) but so far It seems to work alright.
 
 
 In this pattern, warehouse.packaging.models has a dependency on `from 
 warehouse import models` in order to get at the Table stub class - then the 
 application module reaches down into a set of warehouse.* modules to get at 
 the tables it wants.   warehouse.packaging.models is reluctant here to 
 declare itself as a part of something - it would rather that external actors 
 reach down to get at it.
 
 If the warehouse.Table class were an actual SQLA Table object, and the 
 original question, "can I attach .metadata after the fact", were being used 
 here, this pattern would be a little broken - because 
 warehouse.packaging.models is advertising itself as "please reach down and 
 use me!", but then, "oh, only *one* of you can use me!" - basically, 
 warehouse.application would be *setting global state* into a module that has 
 gone through great lengths to avoid knowing about any global state.   I find 
 declaration of state ownership preferable to runtime injection of that 
 ownership.
 
 if you just changed line 17 of warehouse.packaging.models from `from 
 warehouse import models` to `from warehouse.models import metadata` and then 
 used sqlalchemy.Table directly, you'd trim out a lot of quasi-reinvention 
 here and make things a lot more idiomatic to outsiders IMHO.


Yea, I eventually removed most of that. The database is pretty much a global 
either way (being a shared external resource), so trying to pretend it isn't 
wasn't very helpful. I ended up doing:

https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/application.py#L38-L42
https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/application.py#L50-L52
https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/packaging/models.py

Which I think is pretty much the same as what you suggested.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [sqlalchemy] Create Table scripts failing randomly - scripts work from sqlite3 driver but not sqlalchemy

2013-09-23 Thread monosij . forums
Sounds good. I will try concatenating strings for Postgres and see if that 
works. It seems like it should, judging from a few posts I read.

I tried out the insert on SQLite by zipping the dict keys to values - works 
great!
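[Editor's note: for reference, the zip trick amounts to this (invented table and data):]

```python
import sqlite3

keys = ("id", "name")
value_rows = [(1, "ada"), (2, "grace")]

# zip the column names to each row of values to get named parameters
rows = [dict(zip(keys, row)) for row in value_rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO people (id, name) VALUES (:id, :name)", rows)

n = conn.execute("SELECT COUNT(*) FROM people").fetchone()[0]
print(n)  # -> 2
```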

I tried out triple-quoting the strings through SQLAlchemy, but that did not 
work either. It seems that it may have a chance with the psycopg2 driver.

Thanks much.

Mono

On Monday, September 23, 2013 6:39:33 AM UTC-4, Simon King wrote:

 A quick google for "psycopg multiple statements" doesn't turn up
 anything useful, so I suspect you are going to be out of luck. And
 unless there are more database drivers than sqlite that support an
 executescript method, it doesn't seem likely that it'll get added to
 SQLAlchemy either...

 For batch loading of data, you could look at the examples at
 http://docs.sqlalchemy.org/en/rel_0_7/core/tutorial.html#executing-multiple-statements

 Simon

 On Mon, Sep 23, 2013 at 6:54 AM, monosij...@gmail.com wrote:
  Hi Simon -

  Great! The executescript from sqlite worked great. I had not seen that. I
  was able to execute indices and fks as well.

  Is the same not possible from SQLAlchemy then? Would version 0.9 have it? I
  don't know if Michael is planning on having this feature.

  Meaning even for PostgreSQL or MySQL? Would I then have to find another
  driver (such as for SQLite) that would do this?

  For bulk loading of data, would SQLAlchemy's ORM capabilities then be better
  than executing SQL in bulk load operations? 'Bulk' is a relative term,
  but 10K - 100K records and trying not to resort to ETL tools.

  Look forward to your reply and thanks for your help.

  Mono

  On Sunday, September 22, 2013 5:30:37 PM UTC-4, Simon King wrote:

   The documentation for the Python sqlite driver specifically says:

   """
   execute() will only execute a single SQL statement. If you try to execute
   more than one statement with it, it will raise a Warning. Use
   executescript() if you want to execute multiple SQL statements with one
   call.
   """

   (http://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.execute)

   ("raise a Warning" is a slightly vague statement, but it appears that it
   is actually an exception)

   SQLAlchemy's conn.execute eventually calls sqlite's Cursor.execute, so has
   the same limitations. I don't know, but I suspect that postgres will be the
   same.

   I think you need to find a way to split your scripts up into individual
   statements that you can run one at a time.

   Hope that helps,

   Simon

   On 21 Sep 2013, at 16:14, monosij...@gmail.com wrote:

    Hi Michael and Simon - Thank you for your responses and help. Sorry I
    should have provided more details.

    Putting in the raise gave the error trace below.

    It says it cannot execute more than one statement at a time. It says the
    same even if I do not have it in a transaction block, but in this case I do.

    Meaning if I just do a conn.execute(scriptFile), it says it cannot execute
    more than 1 statement at a time, which is the same error I am getting with
    the transaction.

    ...
    In this case I do a try/except as in:

    conn = engine.connect()
    transact = conn.begin()
    try:
        conn.execute(scriptFile)
        transact.commit()
    except:
        raise
        transact.rollback()

    With scriptFile being just a string with 4 CREATE TABLE statements with
    3 - 10 fields each.
    The scriptFile does have '\n' and '\t' (newlines/tabs) in each line of
    fields in the CREATE TABLE statements.

    And as I said before I can execute the same statements (with the \n and
    \t) 1 at a time with either the sqlalchemy or sqlite3 drivers, no problem.
    Eventually I do want to run these in PostgreSQL so I assume it is not a
    SQLite3 issue. I do need to run on SQLite3 as well.

    Btw, in my use case, while I can run the CREATE TABLEs 1 at a time,
    eventually I do have to run CREATE INDEXES / FKs / INITIAL DATA LOAD
    multiple at a time, so I would like to get the multiple statements
    working right.
    ...
    This is the error the raise generates - at the end it says it can only
    execute 1 statement at a time, but it is a warning.

    2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine BEGIN
    (implicit)
    2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine

    ... then shows the CREATE TABLE statements ...

    2013-09-21 10:47:35,543 INFO sqlalchemy.engine.base.Engine ()
    Traceback (most recent call last):
      File "dbms.model/python3/Controller.py", line 54, in <module>
        if __name__ == "__main__": main()
      File "dbms.model/python3/Controller.py", line 50, in main
        controller.initiateEnvironment()

    ... lines from traceback in my code
    ... end of traceback from SQLAlchemy driver below ...

      File
    "/space/development/python/dbms.model/python3/DBMS/ExecutorSQL.py", line

Re: [sqlalchemy] Create Table without Metadata and pass it in later?

2013-09-23 Thread Michael Bayer

On Sep 23, 2013, at 11:15 AM, Donald Stufft donald.stu...@gmail.com wrote:

 
 
 Well mostly I've isolated other pieces of global state and while metadata 
 itself probably isn't a problem I was hesitant to add it as a piece of global 
 state since I didn't have anything else like that. I did come up with a 
 solution though that I think works Ok. It's basically allowing me to declare 
 my tables at the top level (but not use them from there) so that I can 
 organize my code better, but then the real tables exist inside of my 
 application object where I can use them.
 
 Examples here:
 
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/packaging/models.py
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/models.py
 https://github.com/dstufft/warehouse/blob/werkzeug/warehouse/application.py#L37-L46
 
 I haven't settled on this approach (notably I need to figure out how it's 
 going to interact with alembic) but so far it seems to work alright.


In this pattern, warehouse.packaging.models has a dependency on "from warehouse 
import models" in order to get at the Table stub class - then the application 
module reaches down into a set of warehouse.* modules to get at the tables it 
wants.  warehouse.packaging.models is reluctant here to declare itself as a 
part of something - it would rather that external actors reach down to get at 
it.

If the warehouse.Table class were an actual SQLA Table object, and the original 
question, "can I attach .metadata after the fact", were being used here, this 
pattern would be a little broken - because warehouse.packaging.models is 
advertising itself as "please reach down and use me!", but then, "oh, only *one* 
of you can use me!" - basically, warehouse.application would be *setting global 
state* into a module that has gone to great lengths to avoid knowing about 
any global state.  I find declaration of state ownership preferable to 
runtime injection of that ownership.

if you just changed line 17 of warehouse.packaging.models from "from warehouse 
import models" to "from warehouse.models import metadata" and then used 
sqlalchemy.Table directly, you'd trim out a lot of quasi-reinvention here and 
make things a lot more idiomatic to outsiders IMHO.
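Concretely, that suggestion amounts to something like this sketch (both modules collapsed into one runnable snippet; the table and column names are invented, not taken from warehouse):

```python
# warehouse/models.py would hold the one shared MetaData object:
from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

# warehouse/packaging/models.py would then do
# "from warehouse.models import metadata" and define plain Tables against it:
packages = Table(
    'packages', metadata,
    Column('name', String, primary_key=True),
)

releases = Table(
    'releases', metadata,
    Column('name', String, primary_key=True),
    Column('version', String, primary_key=True),
)

# the application creates/drops everything through the single metadata
print(sorted(metadata.tables))  # ['packages', 'releases']
```

Because each module declares its tables against the imported metadata at import time, the application never has to inject ownership after the fact.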







[sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-23 Thread Ken Lareau
Hopefully this will make sense...

I have a database which is in need of some normalization of the column
naming in various tables.  In an effort to minimize disruption (since this
is a live database used by many applications), I'm trying to use a two-step
approach:

1) Add a new column that will 'mirror' the current primary key column,
   then update code to utilize that column once in place
2) Once all code is converted, remove the original primary key column
   and make the new column the primary key instead.

In an effort to minimize change to the code I have currently using my
SQLAlchemy model, I'm trying to find a way to deal with this from within
the declarative classes themselves.  So the questions are:

 - Is this even possible, and if so, how can it be done?
 - If not possible, is there a good yet minimally intrusive external
   change that can be done?

My searching through the docs so far hasn't turned up anything useful,
I'm afraid...

Thanks in advance.
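One minimally intrusive possibility for the code-transition half of step 1 (a sketch, not from this thread; the class and column names are invented) is to skip the mirror column entirely during the transition and map both names onto the same column with a synonym, so old and new call sites work at once:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, synonym

Base = declarative_base()

class App(Base):
    __tablename__ = 'app'
    # the existing primary-key column keeps its old name for now
    AppID = Column('AppID', Integer, primary_key=True)
    # transition step: the new-style name maps to the very same column
    app_id = synonym('AppID')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(App(app_id=1))          # the new name already works on insert
    s.commit()
    obj = s.get(App, 1)
    print(obj.AppID, obj.app_id)  # both names read the same value
```

Once all code uses the new name, the column itself can be renamed in one migration and the synonym dropped. A genuine second mirror column, if other applications need it at the SQL level, would instead need an after-insert hook or database trigger to copy the generated key.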

-- 
- Ken Lareau

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Query manipulation when using joinedload

2013-09-23 Thread Mick Heywood
Hi,

I'm attempting to do some universal filtering using a custom Query class. 
 In this case, I'm trying to filter out all items marked as archived in two 
related classes.  I'm having some trouble adding the required filters to 
the query at all the right levels.

I'm using Flask 0.9, SQLAlchemy 0.8 and PostgreSQL 9.1.5

Let's call the two classes Parent and Child, which are inheriting from 
Archivable:

class Archivable(object):
  @declared_attr
  def is_archived(cls):
return Column('is_archived', types.Boolean, nullable=False, 
default=False, index=True)

class Parent(base, Archivable):
  __tablename__ = 'parent'
  id = Column('id', types.BigInteger, primary_key=True, nullable=False)
  is_archived = 

class Child(base, Archivable):
  __tablename__ = 'child'
  id = Column('id', types.BigInteger, primary_key=True, nullable=False)
  parent_id = Column('parent_id', types.BigInteger, ForeignKey('parent.id'), 
nullable=False)
  parent = relationship('Parent', primaryjoin='Child.parent_id==Parent.id',
backref='children')

Somewhere in my code I am calling:
  parent = db.session.query(Parent).filter(Parent.id == 
1234).options(joinedload('children')).first()

This is resulting in a query of the form:

SELECT anon_1.*, child.*
FROM (
  SELECT parent.*
  FROM parent
  WHERE parent.id = 1234
  LIMIT 1) AS anon_1
LEFT OUTER JOIN child ON child.parent_id = parent.id

which is fine.

When I try and use a custom query class to access the query and filter it 
however, I only seem to be able to access elements of the inner subquery. 
 self._entities for instance only shows a single _MapperEntity 
Mapper|Parent|parent, self.whereclause is a BooleanClauseList of parent.id 
= 1234.  If I try and inject my filters at this stage using the following:

class NoArchivesQuery(Query):
  def __iter__(self):
return Query.__iter__(self._add_archive_filter())

  def from_self(self, *ent):
return Query.from_self(self._add_archive_filter(), *ent)

  def _add_archive_filter(self):
    entities = self._entities
    tables_involved_in_the_query = set()

    for entity in entities:
      if entity.entity_zero and hasattr(entity.entity_zero, 'class_'):
        tables_involved_in_the_query.add(entity.entity_zero.class_.__table__)

    filter_crits = []
    for table in tables_involved_in_the_query:
      if hasattr(table.c, 'is_archived'):
        filter_crits.append(or_(table.c.is_archived == None,
                                table.c.is_archived == False))

    if filter_crits:
      return self.enable_assertions(False).filter(*filter_crits)
    return self

I can get as far as:

SELECT anon_1.*, child.*
FROM (
  SELECT parent.*
  FROM parent
  WHERE parent.id = 1234 AND (parent.is_archived IS NULL OR 
parent.is_archived = false)
  LIMIT 1) AS anon_1
LEFT OUTER JOIN child ON child.parent_id = parent.id

But this does not filter the children, and so I get all archived children 
back.  What I would like to get back is more along the lines of:

SELECT anon_1.*, child.*
FROM (
  SELECT parent.*
  FROM parent
  WHERE parent.id = 1234 AND (parent.is_archived IS NULL OR 
parent.is_archived = false)
  LIMIT 1) AS anon_1
LEFT OUTER JOIN child ON child.parent_id = parent.id AND (child.is_archived 
IS NULL OR child.is_archived = false)

Is that sort of manipulation of a joinedload possible at this level?  Do I 
need to look somewhere else in the query processing pipeline?  Or is it 
really not feasible?
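One way to get the extra condition into the LEFT OUTER JOIN's ON clause, rather than into the WHERE clause, is to build the archive filter into the relationship's primaryjoin itself - a sketch with simplified types, assuming a read-only (viewonly) relationship is acceptable for this purpose:

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # the archive filter lives in the relationship's join condition, so
    # joinedload() renders it inside the LEFT OUTER JOIN's ON clause
    children = relationship(
        'Child',
        primaryjoin='and_(Child.parent_id == Parent.id, '
                    'or_(Child.is_archived == None, Child.is_archived == False))',
        viewonly=True,  # read-only; keep a plain relationship for persistence
    )

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'), nullable=False)
    is_archived = Column(Boolean, nullable=False, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([Parent(id=1),
               Child(id=1, parent_id=1, is_archived=False),
               Child(id=2, parent_id=1, is_archived=True)])
    s.commit()

with Session(engine) as s:
    p = (s.query(Parent)
          .options(joinedload(Parent.children))
          .filter(Parent.id == 1)
          .first())
    print(len(p.children))  # 1 - the archived child never joins in
```

This fixes the ON clause per relationship instead of patching the outer query from a custom Query class; the trade-off is that every loader of that relationship sees the filtered view.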

Thanks,

Mick

