Re: [sqlalchemy] Update from an identical joined table with SQLAlchemy

2013-07-19 Thread Ed Singleton
On 15 Jul 2013, at 18:11, Michael Bayer mike...@zzzcomputing.com wrote:

 On Jul 15, 2013, at 11:06 AM, Ed Singleton singleto...@gmail.com wrote:
 
 I have two tables in SQLAlchemy which are identical and I want to update one 
 from the other where the rows have the same primary key, and I want to do it 
 in an efficient way.
 
 I tried joining the tables on the primary key, but SQLAlchemy doesn't appear 
 to support updates on joined tables at the moment (except using a subquery 
 for every column, which was too inefficient).
 
 SQLAlchemy supports this for those backends which also do; SQL Server is 
 included.  You just wouldn't use the JOIN keyword; instead, use an implicit 
 join.  Example:
 
    addresses.update().\
        values(email_address=users.c.name).\
        where(users.c.id == addresses.c.user_id).\
        where(users.c.name == 'ed')
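
(For reference, a self-contained sketch of this pattern; the users/addresses
table definitions are invented here, and the compile call just shows the
UPDATE..FROM rendering on a dialect that supports it:)

```
from sqlalchemy import MetaData, Table, Column, Integer, String
from sqlalchemy.dialects.mssql.base import MSDialect

metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50)))
addresses = Table('addresses', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('user_id', Integer),
                  Column('email_address', String(50)))

# No JOIN keyword: referencing a second table in WHERE/VALUES makes an
# implicit join, rendered as UPDATE..FROM on backends that support it.
stmt = addresses.update().\
    values(email_address=users.c.name).\
    where(users.c.id == addresses.c.user_id).\
    where(users.c.name == 'ed')

# roughly: UPDATE addresses SET email_address=users.name FROM users
#          WHERE users.id = addresses.user_id AND users.name = :name_1
print(stmt.compile(dialect=MSDialect()))
```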

Thanks for this.  I've got this down to the following:

def do_update(basetable, temptable):
    key_names = basetable.get_primary_key_names()
    cols = temptable.columns
    where_clause = sa.and_(
        *[getattr(basetable.c, key) == getattr(temptable.c, key)
          for key in key_names])
    update_values = dict(
        [(col, temptable.get_column(col)) for col in cols])
    query = basetable.update().values(update_values)
    query = query.where(where_clause)
    return query

(where get_primary_key_names is a custom function that just returns the primary 
key names)
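
(As an aside, Core can produce these names without a custom helper; a one-line
sketch, assuming basetable is a Core Table:)

```
# equivalent without a custom helper: Table.primary_key iterates the
# primary-key Column objects
key_names = [col.name for col in basetable.primary_key]
```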

Unfortunately, I get the following error:

CompileError: bindparam() name 'lastmodifieddate' is reserved for automatic 
usage in the VALUES or SET clause of this insert/update statement.   Please use 
a name other than column name when using bindparam() with insert() or update() 
(for example, 'b_lastmodifieddate').

I can't quite work out what to do with the bindparams as I'm not really using 
any.  Any clues as to what I'm missing would be gratefully received.
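
(For readers hitting the same CompileError: it appears whenever an explicit
bindparam() shares its name with a column that is also being SET. A minimal
sketch of the renaming the message asks for, using an invented table:)

```
import datetime

from sqlalchemy import (MetaData, Table, Column, Integer, DateTime,
                        bindparam)

metadata = MetaData()
t = Table('t', metadata,
          Column('id', Integer, primary_key=True),
          Column('lastmodifieddate', DateTime,
                 onupdate=datetime.datetime.now))

# bindparam('lastmodifieddate') here would collide with the parameter
# SQLAlchemy reserves for the SET clause; a prefixed name avoids that
stmt = t.update().where(
    t.c.lastmodifieddate < bindparam('b_lastmodifieddate'))
print(stmt)
```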

Thanks again

Ed





Re: [sqlalchemy] Update from an identical joined table with SQLAlchemy

2013-07-19 Thread Ed Singleton

On 19 Jul 2013, at 16:01, Michael Bayer mike...@zzzcomputing.com wrote:
 [earlier messages quoted in full above; snipped]
 
 weird.   that seems like a bug; I'd have to work out a self-contained test 
 script on that.  do you think you could send me something short I could run 
 that does that?  just some small mappings and a way to generate that query.

In creating it for you I realised I had made two stupid mistakes in my script. 
`temptable.get_column(col)` was a custom function and I was passing in a column 
instead of a column name.  It was then returning `None`.  Also I was passing in 
the wrong column to the update values.

It also only seemed to happen with our `lastmodified` column, which has an 
`onupdate` param.  If I removed that column it also worked.  Fixing any one of 
those three things made it work.

A sample script is below (bear in mind that the script contains the two errors 
and I wouldn't now expect it to work anyway).

Thanks for your help.

Ed

```
import datetime

import sqlalchemy as sa
from sqlalchemy import (String, Unicode, Integer,
                        DateTime, ForeignKey, Table, Column)

metadata = sa.MetaData()

email_table = Table("Email", metadata,
    Column("id", Integer, primary_key=True),
    Column("to_addr", Unicode(256), primary_key=True),
    Column("subject", Unicode(256), nullable=False),
    Column("lastmodifieddate", DateTime(), default=datetime.datetime.now,
           onupdate=datetime.datetime.now)
)

email_archive_table = Table("EmailArchive", metadata,
    Column("id", Integer, primary_key=True),
    Column("to_addr", Unicode(256), primary_key=True),
    Column("subject", Unicode(256), nullable=False),
    Column("lastmodifieddate", DateTime(), default=datetime.datetime.now,
           onupdate=datetime.datetime.now)
)

where_clause = sa.and_(
    email_table.c.id == email_archive_table.c.id,
    email_table.c.to_addr == email_archive_table.c.to_addr)

update_values = dict(
    [(col, None) for col in email_archive_table.columns])

query = email_table.update().values(update_values)
query = query.where(where_clause)

print query
```





Re: [sqlalchemy] Update from an identical joined table with SQLAlchemy

2013-07-19 Thread Michael Bayer

On Jul 19, 2013, at 12:14 PM, Ed Singleton singleto...@gmail.com wrote:

 
 
 A sample script is below (bear in mind that the script contains the two 
 errors and I wouldn't now expect it to work anyway).


OK, this example is calling UPDATE towards email_table as the target, but then 
specifies the columns in email_archive_table in the SET clause, so that's why 
this doesn't work.   The table referred to by UPDATE and by SET needs to be the 
same table.
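
(A hedged sketch of the fix being described, reusing the Email/EmailArchive
tables from the script above: key the values dict by the target table's
columns, with the archive table's columns as the source expressions:)

```
# The UPDATE target is email_table, so the SET keys must be email_table's
# columns; the matching email_archive_table columns supply the values.
# (You may want to skip the primary key columns used in the WHERE clause.)
update_values = dict(
    (col, email_archive_table.c[col.name])
    for col in email_table.columns)

query = email_table.update().values(update_values).where(where_clause)
```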


 
 [quoted message and sample script snipped; see above]





Re: [sqlalchemy] Update from an identical joined table with SQLAlchemy

2013-07-19 Thread Michael Bayer

On Jul 19, 2013, at 9:53 AM, Ed Singleton singleto...@gmail.com wrote:

 [Ed's message, quoted in full above, snipped]

weird.   that seems like a bug; I'd have to work out a self-contained test 
script on that.  do you think you could send me something short I could run 
that does that?  just some small mappings and a way to generate that query.







 
 
 





RE: [sqlalchemy] The cost of defer()

2013-07-19 Thread Gombas, Gabor
Thanks, I was just able to test the patch. With the patch, my query with the 
extra defer() options added is just a bit faster than without those options. So 
while using defer() is not a huge win in my case, it at least no longer causes 
an unexpected slowdown.

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: 13 July 2013 22:46
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] The cost of defer()

anyway, with some profiling of loading 1000 rows with six deferred cols, the 
function count with the defer was over 50K, and without the defer around 37K; 
the patch is now committed, and with the defer it's at 32K, a bit less than 
that of loading the data, as it should be (more would be better, sure, but this 
is a decent improvement).  The change to get that much wasn't that big a deal, 
so this is in 0.8.


On Jul 11, 2013, at 2:04 PM, Michael Bayer mike...@zzzcomputing.com wrote:


please try out this patch:

http://www.sqlalchemy.org/trac/attachment/ticket/2778/2778.patch


which refactors this particular system to not require the production of a new 
object per instance, which is the slowest part of this, and also inlines the 
work of assembling the callable.  This should give you 50% or more method call 
improvement.  if this is enough, this might be OK for 0.8.





On Jul 11, 2013, at 1:40 PM, Michael Bayer mike...@zzzcomputing.com wrote:


well what kind of data are we talking about?  defer()'s use case was for binary 
large objects and such, fields that are many K/Megs in size.  if you're 
deferring a bunch of ints, then yes it's not optimized very well for that.
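
(For context, a minimal self-contained illustration of that use case; the model
and values are invented:)

```
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, defer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Document(Base):
    __tablename__ = 'document'
    id = Column(Integer, primary_key=True)
    title = Column(String(100))
    payload = Column(String)   # imagine a multi-megabyte value here

e = create_engine('sqlite://')
Base.metadata.create_all(e)
s = Session(e)
s.add(Document(title='t1', payload='x' * 100000))
s.commit()

# 'payload' stays unloaded until touched, even as other attributes load
doc = s.query(Document).options(defer('payload')).first()
assert 'payload' not in doc.__dict__
doc.payload                    # emits a second SELECT for just this column
assert 'payload' in doc.__dict__
```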

Half of the overhead could be easily fixed here: creating those 
LoadDeferredColumns objects could be offloaded to a later point.   The other 
half, setting up that callable, I'd have to spend some time reviewing the use 
cases here.   The difference between an attribute that is "deferred" vs. one 
that is "expired" is that if you access some other expired attribute, the 
deferred attribute will still not load - because the use case is, you really 
don't want this BLOB column to load unless you touch it specifically.   So 
getting that instruction into the state - "don't load these keys even on an 
unexpire" - uses some kind of method call on every state.
InstanceState._set_callable could be inlined more here to do less work, maybe 
reduced to instructions up at the loader process just to populate a key in a 
dictionary, though these reorganizations can destabilize the code.   it's not 
something I'd be comfortable doing in 0.8; the ticket I created 
(http://www.sqlalchemy.org/trac/ticket/2778) has any potential work here for 
0.9.

The other way to go here is to provide a query option that explicitly delivers 
the attribute as "expired" as opposed to "deferred".  Looking at how that works 
right now, I can give you the recipe below, but it still involves a function 
call per column so that the InstanceState knows the attribute is expired.


from sqlalchemy.orm.strategies import DeferredOption, DeferredColumnLoader


class DontLoadColumnOption(DeferredOption):
    def get_strategy_class(self):
        return NoColumnLoader


class NoColumnLoader(DeferredColumnLoader):
    def create_row_processor(self, context, path, mapper, row, adapter):
        if not self.is_class_level:
            def set_deferred_for_local_state(state, dict_, row):
                state.callables[self.key] = state
            return set_deferred_for_local_state, None, None
        else:
            return super(NoColumnLoader, self).create_row_processor(
                context, path, mapper, row, adapter)

if __name__ == '__main__':
    from sqlalchemy import *
    from sqlalchemy.orm import *
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class A(Base):
        __tablename__ = 'a'

        id = Column(Integer, primary_key=True)

        x = Column(Integer)
        y = Column(Integer)
        z = Column(Integer)
        q = Column(Integer)

    e = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(e)
    s = Session(e)
    s.add_all([
        A(x="x%d" % i, y="y%d" % i, z="z%d" % i, q="q%d" % i)
        for i in xrange(1000)
    ])
    s.commit()
    s.close()

    loaded = s.query(A).options(DontLoadColumnOption("y"),
                                DontLoadColumnOption("z")).order_by(A.id).all()

    for a in loaded:
        assert 'y' not in a.__dict__
        assert 'z' not in a.__dict__
        assert 'x' in a.__dict__
        assert 'q' in a.__dict__

        assert a.z == "z%d" % (a.id - 1), a.z













On Jul 11, 2013, at 10:23 AM, Gombas, Gabor gabor.gom...@morganstanley.com wrote:



I did need the objects, not just the raw data; otherwise I'd have had to 
duplicate a bunch of existing code which expected full-blown objects to operate 
on. 

Re: [sqlalchemy] Is defining a commit method good design?

2013-07-19 Thread Victor Reichert
Thank you for the advice.  I see that exclusive reliance on this approach 
would prohibit transactions.
 
The use case at hand is an object storing some information about itself.  
As only one object will be saved at a time, I'm thinking this approach is 
OK.  However, I will make the method private.
 
Please let me know if you think I should do otherwise.
 
Thank you again,
 
~Victor

On Thursday, July 18, 2013 5:02:22 PM UTC-7, Mauricio de Abreu Antunes 
wrote:

 meaning that a single commit() should address all the objects that 
 are related to a particular operation. 

 i committed it to my mind. :) 

 2013/7/18 Michael Bayer mik...@zzzcomputing.com: 
  
  On Jul 18, 2013, at 6:52 PM, Victor Reichert vfr...@gmail.com wrote: 
  
  Hi All, 
  
  I'm working on my first SQLAlchemy project.  I'm thinking I'm going to 
 define a commit method for all the objects I want to persist, something 
 like: 
  
  def commit(self): 
      session = Session()  # Session is a global sessionmaker 
      session.add(self) 
      session.commit() 
      session.close() 
  
  Is that a good practice? 
  
  This is an antipattern.   You should keep the means of persistence 
 separate from the objects that you are persisting, and you should be 
 thinking in terms of use cases as far as how to structure transactions, 
 meaning that a single commit() should address all the objects that are 
 related to a particular operation. 
  
  A bad metaphor might be, suppose we wish to write a collection of 
 sentences to a file.   The antipattern approach to me looks like this: 
  
  class Sentence(object): 
      def __init__(self, text): 
          self.text = text 
  
      def write(self): 
          handle = open(self.file, "a") 
          handle.write(self.text) 
          handle.close() 
  
  file = "myfile.txt" 
  for sentence in sentences: 
      sentence.write() 
  
  While thinking in terms of operations instead of objects looks like 
 this: 
  
  class Sentence(object): 
      def __init__(self, text): 
          self.text = text 
  
  handle = open(file, "w") 
  for sentence in sentences: 
      handle.write(sentence.text) 
  handle.close() 
  
  besides the obvious efficiency, we don't force each sentence to deal 
 with the target file in isolation of all the other sentences.  Dealing with 
 the file's lifespan is outside of the scope of the thing we're putting in 
 the file. 
  
  By use case, this depends a lot on what kind of application this is.   
 If it's a web application, you want a transaction per request.  If it's a 
 short console script, you want a transaction for the life of the script 
 itself.  There's a lot more written about this here:   
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#session-frequently-asked-questions
  
  
  
  



 -- 
 Mauricio de Abreu Antunes 
 Mobile: (51)930-74-525 
 Skype: mauricio.abreua 






[sqlalchemy] Alembic 0.6.0 released

2013-07-19 Thread Michael Bayer
Hey gang - 

Alembic 0.6.0 has been sitting in the hopper for a while, so I figured I'd put 
it out.  The two changes here I'm most excited about are no longer needing 2to3 
for Python 3, and a much nicer display of history; I use that a lot and needed 
something easier to read, and it also allows for ranges now too.

There are some other nifty changes that have more potential than they might 
initially seem to, including that you can now pass any number of custom 
arguments to any command using the new -x option, then parse them in your 
env.py using get_x_argument(); there's also a new feature that allows you to 
limit autogenerate at the column level, in addition to the table and schema 
level as before.
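
(A sketch of the -x mechanism as consumed in env.py; the tenant argument is
invented for illustration:)

```
# invoked as:  alembic -x tenant=acme upgrade head
# (inside env.py)
from alembic import context

x_args = context.get_x_argument(as_dictionary=True)
tenant = x_args.get('tenant', 'default')
```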

There's a bunch of issues that have piled up in Alembic; the vast majority 
concern autogenerate, and there's another thread that refers to how Alembic's 
versioning model works.  There are some big ideas in place to rework both of 
these systems in a fairly dramatic way.

For autogenerate, the idea is that we would in most cases not rely upon 
database reflection anymore; instead, we would try to compare schemas based on 
different versions of the code directly.  This would require that the current 
state of the incoming MetaData() is written out to some serialized format, 
which I'm thinking might be nice as JSON - though it can start as pickle.
Creating a MetaData-JSON serializer/deserializer would be great but also a 
very big job.   The advantage to comparing Python-generated metadata objects is 
that we are guaranteed to catch all changes in the schema perfectly - defaults, 
types, arguments on types, all of it - without any reliance on the quirks of 
database reflection, since the purpose of autogenerate is really just to 
detect what Tables/Columns have been added, removed or changed in the code 
versus what the code was before.   Decisions have to be made here as to what 
role reflection will continue to play, what the upgrade path will be for 
existing deployments, whether the behavior kicks in automatically or requires 
some configuration, etc.   
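
(A toy sketch of the serialization idea, nowhere near the real job; the table
and the chosen fields are invented:)

```
import json

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()
t = Table('email', metadata,
          Column('id', Integer, primary_key=True),
          Column('subject', String(256), nullable=False))

# snapshot just enough structure to diff two versions of the code's schema
snapshot = {
    t.name: [
        {'name': c.name, 'type': str(c.type),
         'nullable': c.nullable, 'primary_key': c.primary_key}
        for c in t.columns
    ]
}
print(json.dumps(snapshot, indent=2, sort_keys=True))
```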

The other change to the versioning model is just as dramatic and involves 
reworking the versioning to work based on an open-ended dependency graph, 
meaning any particular revision is dependent on zero or more previous 
revisions, rather than all revisions being lined up in a straight line.   The 
upgrade process would then find itself at a certain target by navigating the 
topological sort of this dependency graph, pretty much the same idea as how 
SQLAlchemy's unit of work functions.   With this model, the issue of merging 
branches mostly goes away, as two unrelated migrations from different branches 
can both be pulled in and just be siblings of each other without issue.
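
(Purely illustrative of the graph idea, not Alembic's code: a
dependency-ordered walk where a revision may depend on several parents:)

```
def topo_order(parents):
    # parents: {revision: set of revisions it depends on}
    # (cycle detection omitted for brevity)
    order, seen = [], set()

    def visit(rev):
        if rev in seen:
            return
        seen.add(rev)
        for parent in parents[rev]:
            visit(parent)
        order.append(rev)

    for rev in parents:
        visit(rev)
    return order

# sibling branches 'b' and 'c' coexist and merge at 'd' without conflict
print(topo_order({'a': set(), 'b': {'a'}, 'c': {'a'}, 'd': {'b', 'c'}}))
```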

Neither of these two changes is something I personally need in any urgent way 
- I generally treat database migrations as a mostly manual process in any case, 
and I'm happy to edit the mistakes in my autogenerate migrations by hand and to 
deal with the very occasional merge manually.    I do get a lot of complaints 
about the many edge cases present in autogenerate, at least, so at some point 
it would be nice to get to this issue.

Anyway, 0.6.0 is ready to go, happy downloading:

https://pypi.python.org/pypi/alembic

https://alembic.readthedocs.org/en/latest/changelog.html#change-0.6.0






Re: [sqlalchemy] Is defining a commit method good design?

2013-07-19 Thread Michael Bayer
if it's how you want to do it, then go with it for now.   If this particular 
pattern has trouble in store for you, you'll learn all about that by going 
through with it anyway :).



On Jul 19, 2013, at 5:29 PM, Victor Reichert vfr...@gmail.com wrote:

 Thank you for the advice.  I see that exclusive reliance on this approach 
 would prohibit transactions.
  
 The use case at hand is an object storing some information about itself.  As 
 only one object will be saved at a time, I'm thinking this approach is OK.  
 However, I will make the method private.
  
 Please let me know if you think I should do otherwise.
  
 Thank you again,
  
 ~Victor
 
 On Thursday, July 18, 2013 5:02:22 PM UTC-7, Mauricio de Abreu Antunes wrote:
 [earlier exchange quoted in full above; snipped]
 





Re: [sqlalchemy] PostGIS/SQLite DateTime

2013-07-19 Thread Basil Veerman
Sounds great, thanks.

On Wednesday, 17 July 2013 19:35:48 UTC-7, Michael Bayer wrote:

 well we try to get releases out every 4-6 weeks, but sometimes it takes 
 longer... though this issue was a surprise and does lean things towards 
 releasing sooner.



 On Jul 17, 2013, at 12:38 PM, Basil Veerman bvee...@uvic.ca wrote:

 Hi Michael,

 Thanks for your help.  Using 0.8.3 and the type variant passes our tests.

 Do you have any (even rough) estimate as to when 0.8.3 will be released to 
 PyPI?

 Thanks,
 Basil

 On Friday, 12 July 2013 18:56:14 UTC-7, Michael Bayer wrote:

 Right, DateTime + with_variant() + sqlite.DATETIME with a custom storage 
 format and regexp.  *However*.  There's an unfortunate case that the 
 storage format/regexp arguments, introduced in 0.8.0, are not actually 
 working fully, and I've just committed the fix.  So you'll have to use 
 0.8.3 for now, which is not released you can get it via 
 https://bitbucket.org/zzzeek/sqlalchemy/get/rel_0_8.tar.gz .

 Example:

  from sqlalchemy import Column, BigInteger, Float, String, DateTime, Integer
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.dialects import sqlite
  import re

  Base = declarative_base()

  # needs SQLAlchemy 0.8.3 to work correctly
  sqlite_date = DateTime(timezone=False).with_variant(
      sqlite.DATETIME(
          storage_format="%(year)04d-%(month)02d-%(day)02dT%(hour)02d:%(minute)02d:%(second)02d",
          regexp=r"(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)",
      ), "sqlite")


  class Obs(Base):
      __tablename__ = 'obs_raw'
      id = Column('obs_raw_id', BigInteger, primary_key=True)
      time = Column('obs_time', sqlite_date)
      datum = Column(Float)

 from sqlalchemy.orm import sessionmaker
 from sqlalchemy import create_engine

 engine = create_engine('sqlite:///test.db', echo=True)

 Session = sessionmaker(bind=engine)
 session = Session()

  for ob in session.query(Obs.time):
      print ob



 On Jul 12, 2013, at 8:56 PM, Basil Veerman basilv...@gmail.com wrote:

 Here is a short example that illustrates the original error:

 *Create Test Database:*

  $ sqlite3 testing.sqlite
  SQLite version 3.7.13 2012-06-11 02:05:22
  Enter ".help" for instructions
  Enter SQL statements terminated with a ";"
  sqlite> BEGIN TRANSACTION;
  sqlite> CREATE TABLE 'obs_raw' ('obs_raw_id' INTEGER PRIMARY KEY, 'obs_time' TIMESTAMP, 'mod_time' TIMESTAMP, 'datum' FLOAT);
  sqlite> INSERT INTO obs_raw VALUES(32568805,'2004-05-18T00:00:00','2011-08-29T12:13:18',21.0);
  sqlite> INSERT INTO obs_raw VALUES(32568806,'2004-05-19T00:00:00','2011-08-29T12:13:18',19.3);
  sqlite> INSERT INTO obs_raw VALUES(32568807,'2004-05-20T00:00:00','2011-08-29T12:13:18',20.8);
  sqlite> INSERT INTO obs_raw VALUES(32568808,'2004-05-21T00:00:00','2011-08-29T12:13:18',17.8);
  sqlite> INSERT INTO obs_raw VALUES(32568809,'2004-05-22T00:00:00','2011-08-29T12:13:18',19.4);
  sqlite> COMMIT;
  sqlite> .exit

 *Basic python test:*

 from sqlalchemy import Column, BigInteger, Float, String, DateTime
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

  class Obs(Base):
      __tablename__ = 'obs_raw'
      id = Column('obs_raw_id', BigInteger, primary_key=True)
      time = Column('obs_time', DateTime(timezone=True))
      datum = Column(Float)

 from sqlalchemy.orm import sessionmaker
 from sqlalchemy import create_engine

 engine = create_engine('sqlite+pysqlite:///testing.sqlite')
 Session = sessionmaker(bind=engine)
 session = Session()

  for ob in session.query(Obs.time):
      print ob

 *Results when run:*

    File "test_datetime.py", line 19, in <module>
      for ob in session.query(Obs.time):
    File "/home/bveerman/.local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 75, in instances
      labels) for row in fetch]
    File "/home/bveerman/.local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 3157, in proc
      return row[column]
  ValueError: Couldn't parse datetime string: u'2004-05-18T00:00:00'



  On Fri, Jul 12, 2013 at 4:50 PM, Michael Bayer mik...@zzzcomputing.com wrote:


 On Jul 12, 2013, at 5:53 PM, Basil Veerman bvee...@uvic.ca wrote:

 Hi,

 I've been struggling for a while trying to create a mapping that works 
 with both PostGIS 


 what's a PostGIS database?  do you mean a Postgresql database with 
 spatial extensions installed?


 Background: Production PostGIS database has been reduced and converted 
 to a spatialite database with the same schema for offline testing purposes.

  Problem: PostGIS TIMESTAMP is now stored as SQLite TIMESTAMP, but 
  effectively as a string.  I think the main problem is that the default 
  SQLite DateTime dialect storage_format includes milliseconds, and our data 
  does not.


  OK, the DATETIME object that's in the SQLite dialect supports 
  customization of this, but if your data doesn't have milliseconds, it just 
  stores them as zero.  I'm not sure what the problem is exactly.
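
(A minimal sketch of the mismatch being described, using the SQLite dialect's
type directly; the date value is invented:)

```
import datetime

from sqlalchemy.dialects import sqlite

# the default SQLite DATETIME storage format writes microseconds and a
# space separator, e.g. '2004-05-18 00:00:00.000000'; data stored as
# '2004-05-18T00:00:00' won't match the default parsing regexp
process = sqlite.DATETIME().bind_processor(sqlite.dialect())
print(process(datetime.datetime(2004, 5, 18)))
```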



 A solution which seems