Re: [sqlalchemy] Automatic created and modified timestamp columns (best practice?!)

2013-04-03 Thread Julien Cigar

On 03/27/2013 14:26, Moritz Schlarb wrote:

Hi there everyone,

I am kind of looking for a best practice on how to implement
automatically setting and updating columns for created and modified
timestamps in SQLAlchemy, preferably database-agnostic.

First of all, is DateTime the appropriate column type, or should it be
TIMESTAMP instead? Both render to datetime on the Python side, so the
arguments of generic SQL discussions on that topic aren't so relevant here.

Now for the automatic updating, I have two variants:

1)
 created = Column(DateTime, nullable=False, server_default=func.now())
 modified = Column(DateTime, nullable=False,
                   server_default=func.now(), server_onupdate=func.now())

This only works if the database supports an ON UPDATE clause, which
e.g. SQLite doesn't seem to.

2)
 created = Column(DateTime, nullable=False, server_default=func.now())
 modified = Column(DateTime, nullable=False,
                   server_default=func.now(), onupdate=func.now())

This would account for that - or are there databases that don't even
support a DEFAULT value?

But the second solution isn't really elegant, since the modified
timestamp will then always be computed and sent by SQLAlchemy itself.

Isn't there a way to make SQLAlchemy decide whether to omit data for
modified or not based on the actual database dialect used?
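One way to approximate that - sketched here as an assumption rather than an established SQLAlchemy feature, and the helper name is made up - is to pick the column arguments from the engine's dialect name:

```python
from sqlalchemy import Column, DateTime, create_engine, func

# Hypothetical helper (not part of SQLAlchemy): use a server-side
# ON UPDATE where the backend supports it, otherwise fall back to a
# client-side onupdate that SQLAlchemy evaluates itself.
def modified_column(dialect_name):
    if dialect_name in ('mysql', 'mariadb'):
        return Column('modified', DateTime, nullable=False,
                      server_default=func.now(),
                      server_onupdate=func.now())
    return Column('modified', DateTime, nullable=False,
                  server_default=func.now(),
                  onupdate=func.now())

engine = create_engine('sqlite://')
modified = modified_column(engine.dialect.name)
```

The downside is that the table definition now depends on knowing the target engine up front.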

Hope my questions came out clear and maybe someone can help me!

Cheers


If you use the ORM part of SQLAlchemy then I would use a 'before_update' 
event for that. It has the advantage that the event can be propagated 
(thanks to propagate=True), which can be really useful if you use 
inheritance. For example:


from datetime import datetime
from sqlalchemy import event

def update_updated_listener(mapper, connection, target):
    target.updated = datetime.now()

event.listen(YourClass, 'before_update', update_updated_listener,
             propagate=True)

I usually create a 'last_update' column on the mapped class, something like:

'last_update': orm.column_property(
    sql.func.coalesce(table['content'].c.updated,
                      table['content'].c.added)
)
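A complete runnable sketch tying those two pieces together (class and column names are illustrative, not from the original post; imports assume SQLAlchemy 1.4+):

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine, event, func
from sqlalchemy.orm import column_property, declarative_base, sessionmaker

Base = declarative_base()

class Content(Base):
    __tablename__ = 'content'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    added = Column(DateTime, nullable=False, default=datetime.now)
    updated = Column(DateTime, nullable=True)
    # Reads as 'updated' once the row has been modified, else 'added'.
    last_update = column_property(func.coalesce(updated, added))

def update_updated_listener(mapper, connection, target):
    target.updated = datetime.now()

event.listen(Content, 'before_update', update_updated_listener,
             propagate=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

c = Content(name='first draft')
session.add(c)
session.commit()   # 'updated' is still NULL; last_update falls back to 'added'

c.name = 'second draft'
session.commit()   # before_update fires and stamps 'updated'
```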


--
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] how to prepare get by primary key query?

2013-04-03 Thread Michael Bayer

On Apr 3, 2013, at 1:01 AM, Roman Yakovenko roman.yakove...@gmail.com wrote:

 Hello.
 
 I am just starting with SQLAlchemy and have a small problem: my get query 
 is compiled every time I use it.
 According to cProfile: 
 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/sql/expression.py:1864(compile)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/sql/expression.py:1908(_compiler)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/sql/compiler.py:231(__init__)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/engine/interfaces.py:764(__init__)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/engine/interfaces.py:805(process)
  90/10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/sql/visitors.py:73(_compiler_dispatch)
     10    0.000    0.000    0.002    0.000 
 /usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.8.1dev-py2.7-linux-x86_64.egg/sqlalchemy/sql/compiler.py:1082(visit_select)
 
 
 With a native API I would use a prepared command/statement, but it looks 
 like I am missing something obvious and cannot find similar functionality 
 in the package.

the Python DBAPI doesn't have an explicit concept of a prepared statement - 
the idea of preparing a statement involves that a statement handle is 
established on the database server, which can then be reused.   The closest the 
DBAPI has is the executemany() call, which gives the DBAPI itself the option 
of using a prepared statement behind the scenes; but this option is not 
appropriate for SELECT statements since executemany() doesn't support the 
return of results.
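To illustrate the executemany() form - a sketch with made-up table and names, not from the original mail:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine)

metadata = MetaData()
employee = Table(
    'employee', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100), nullable=False),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

# A list of parameter dicts makes SQLAlchemy call the DBAPI's
# executemany(), which may use a prepared statement internally --
# but it cannot return result rows, so it is of no help for SELECTs.
with engine.begin() as conn:
    conn.execute(employee.insert(),
                 [{'id': i, 'name': 'name%d' % i} for i in range(10)])
```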

On the Python SQL construction side, SQLAlchemy deals with expression 
constructs that are converted to strings given a database dialect as well as 
compilation options.   The amount of time in Python spent for this compilation 
is relatively small and has been optimized for many years to be as fast as 
possible.  Nevertheless, in some cases we do try to squeeze more performance 
out by caching the compiled form of these queries; the ORM in particular will 
cache the compiled form of insert/update/delete statements that are used by the 
unit of work.   

Right now, the string form of SELECT queries generated by the ORM is not 
automatically cached.   It's only worthwhile to try to cache queries that are 
fixed, such as the get() query we're referring to here as well as some of the 
queries used by lazyloading of relationships.  The overhead of this compilation 
however is fairly minimal; reddit.com uses SQLAlchemy expression constructs for 
all database queries as well, and they serve well over two billion page views a 
month without any caching of the expression string.

There's a recipe to make use of the compiled_cache in conjunction with the 
Query object right now, which is at 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/BakedQuery .
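As a rough sketch of the compiled_cache execution option (the table here is made up, and the exact caching behavior varies across SQLAlchemy versions):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()
employee = Table(
    'employee', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

# A plain dict acts as the cache: compiled statement forms are stored
# in it on first execution and looked up on subsequent executions.
cache = {}
with engine.connect() as conn:
    cached = conn.execution_options(compiled_cache=cache)
    for i in range(5):
        cached.execute(select(employee).where(employee.c.id == i))
```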



 
 Below you can find complete source code, which reproduces the issue.
 
 Thank you!
 
 
 import sqlalchemy
 import sqlalchemy.orm
 import sqlalchemy.ext.declarative
 
 Base = sqlalchemy.ext.declarative.declarative_base()
 
 class Employee(Base):
     __tablename__ = 'employee'
     id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
     name = sqlalchemy.Column(sqlalchemy.String(100), nullable=False)
 
 def run_query(session):
     employee_by_id = session.query(Employee)
     for i in range(10):
         x = employee_by_id.get(i)
         print x.name
     session.rollback()
 
 if __name__ == '__main__':
     engine = sqlalchemy.create_engine('sqlite://')
     Base.metadata.create_all(engine)
     Session = sqlalchemy.orm.sessionmaker(bind=engine)
 
     session = Session()
     for i in range(10):
         session.add(Employee(id=i, name='name' + str(i)))
     session.commit()
 
     import cProfile
     cProfile.run('run_query(Session())', 'get.stats')
 
 
 

Re: [sqlalchemy] discriminator_on_association example and association proxies

2013-04-03 Thread bruk.habtu
Hey Michael,

Thanks for the speedy response. I went ahead and changed 
my implementation to use the table_per_association example and everything 
works great.


On Tuesday, 2 April 2013 16:49:14 UTC-4, Michael Bayer wrote:

 the Customer.comments attribute proxies through comment_association, 
 which is configured as a scalar.  This should probably be improved in 
 this particular recipe, but for starting assignment you need to assign a 
 collection first:

 c1 = Customer()
 c1.comments = []


 I don't ever use discriminator_on_association; it's the most awkward and 
 least efficient recipe, and it's only there to show an improvement over what 
 Django and Rails do.  In practice, I typically advise table_per_association; 
 the use case of being able to trace everything that's tagged back to 
 the parents polymorphically is not needed much in practice.
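For reference, the core idea of table_per_association can be sketched roughly like this (a condensed, made-up version of the pattern, not the actual recipe; imports assume SQLAlchemy 1.4+):

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Table,
                        create_engine)
from sqlalchemy.orm import (declarative_base, declared_attr, relationship,
                            sessionmaker)

Base = declarative_base()

class Comment(Base):
    __tablename__ = 'comment'
    id = Column(Integer, primary_key=True)
    text = Column(String)

class HasComments(object):
    """Mixin: each subclass gets its own <parent>_comment link table."""
    @declared_attr
    def comments(cls):
        name = cls.__tablename__
        assoc = Table(
            '%s_comment' % name, Base.metadata,
            Column('%s_id' % name, ForeignKey('%s.id' % name),
                   primary_key=True),
            Column('comment_id', ForeignKey('comment.id'),
                   primary_key=True),
        )
        return relationship(Comment, secondary=assoc)

class Customer(HasComments, Base):
    __tablename__ = 'customer'
    id = Column(Integer, primary_key=True)
    name = Column(String)
```

Here Customer.comments is a plain collection, so `customer.comments.append(comment)` works without first assigning a list.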






 On Apr 2, 2013, at 3:45 PM, bruk.habtu bruk@gmail.com wrote:

 I was attempting to implement the example shown in 
 https://github.com/ContextLogic/sqlalchemy/blob/master/examples/generic_associations/discriminator_on_association.py

 trace: https://gist.github.com/brukhabtu/2294f0873830243845db
 my code: https://gist.github.com/brukhabtu/1423c9825252ddcf85a5

 Customer model inherits CommentMixin. 

 Error occurs when appending to the customer's comments

 customer.comments.append(comment)


 The trace shows that the error happens in associationproxy.py

 return self._scalar_get(getattr(obj, self.target_collection))

 obj is a Customer model object and self.target_collection is a string with 
 the value 'comment_association'. 

 The attribute exists but is set to None. 









[sqlalchemy] implementing one-to-many relationship?

2013-04-03 Thread James Hartley
I have implemented a (simplified) one-to-many relationship which works, but
I suspect I am reimplementing functionality in a suboptimal fashion which
is already done by SQLAlchemy.  The following short example:

8---
#!/usr/bin/env python

import datetime
from sqlalchemy import create_engine, Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref
from sqlalchemy.schema import UniqueConstraint
from sqlalchemy.dialects.sqlite import INTEGER, TEXT, DATETIME

Base = declarative_base()
Session = sessionmaker()
engine = create_engine('sqlite:///test.db', echo=True)

class Subordinate(Base):
    __tablename__ = 'subordinate'

    id = Column(INTEGER, primary_key=True)
    name = Column(TEXT, unique=True, nullable=False)
    discovered = Column(DATETIME, nullable=False)
    discontinued = Column(DATETIME, nullable=True)

    def __init__(self, name):
        """constructor"""
        self.name = name
        self.discovered = datetime.datetime.now()

    def __repr__(self):
        """string representation overload"""
        return 'Subordinate(%d,%s,%s)' % (self.id, self.discovered,
                                          self.discontinued)

class Record(Base):
    __tablename__ = 'record'

    id = Column(INTEGER, primary_key=True)
    subordinate_id = Column(INTEGER, ForeignKey('subordinate.id'),
                            nullable=False)
    timestamp = Column(DATETIME, nullable=False)
    __table_args__ = (UniqueConstraint('subordinate_id', 'timestamp',
                                       name='occurrence'),)

    def __init__(self, subordinate):
        """constructor"""
        self.subordinate_id = subordinate.id
        self.timestamp = datetime.datetime.now()

    def __repr__(self):
        """string representation overload"""
        return 'Record(%s,%s,%s)' % (self.id,
                                     self.subordinate_id, self.timestamp)

if __name__ == '__main__':
    Session.configure(bind=engine)
    session = Session()

    Base.metadata.create_all(engine)

    d = {'subordinates': {}, 'records': {}}

    lst = []
    for p in ('abc', 'ijk', 'xyz'):
        d['subordinates'][p] = Subordinate(p)
        lst.append(d['subordinates'][p])
    session.add_all(lst)
    session.commit()

    lst = []
    for p in ('abc', 'ijk', 'xyz'):
        d['records'][p] = Record(d['subordinates'][p])
        lst.append(d['records'][p])
    session.add_all(lst)
    session.commit()
8---
I find it curious in the following output that once the subordinate
rows are committed, SQLAlchemy queries the database again to
retrieve the primary keys for the second table's inserts.  Am I performing
too much work in client code?

Any insight shared would be appreciated.

8---
2013-04-03 13:35:38,291 INFO sqlalchemy.engine.base.Engine ()
2013-04-03 13:35:38,293 INFO sqlalchemy.engine.base.Engine COMMIT
2013-04-03 13:35:38,297 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2013-04-03 13:35:38,299 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,300 INFO sqlalchemy.engine.base.Engine ('abc',
'2013-04-03 13:35:38.296111', None)
2013-04-03 13:35:38,301 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,301 INFO sqlalchemy.engine.base.Engine ('ijk',
'2013-04-03 13:35:38.296223', None)
2013-04-03 13:35:38,302 INFO sqlalchemy.engine.base.Engine INSERT INTO
subordinate (name, discovered, discontinued) VALUES (?, ?, ?)
2013-04-03 13:35:38,302 INFO sqlalchemy.engine.base.Engine ('xyz',
'2013-04-03 13:35:38.296309', None)
2013-04-03 13:35:38,303 INFO sqlalchemy.engine.base.Engine COMMIT
2013-04-03 13:35:38,305 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2013-04-03 13:35:38,306 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,306 INFO sqlalchemy.engine.base.Engine (1,)
2013-04-03 13:35:38,309 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,309 INFO sqlalchemy.engine.base.Engine (2,)
2013-04-03 13:35:38,311 INFO sqlalchemy.engine.base.Engine SELECT
subordinate.id AS subordinate_id, subordinate.name AS subordinate_name,
subordinate.discovered AS subordinate_discovered, subordinate.discontinued
AS subordinate_discontinued
FROM subordinate
WHERE subordinate.id = ?
2013-04-03 13:35:38,311 INFO sqlalchemy.engine.base.Engine (3,)
2013-04-03 13:35:38,313 INFO sqlalchemy.engine.base.Engine INSERT INTO
record (subordinate_id, timestamp) VALUES (?, ?)
2013-04-03 13:35:38,313 INFO sqlalchemy.engine.base.Engine (1, '2013-04-03
13:35:38.308225')
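Those SELECTs happen because committed instances are expired, so reading subordinate.id inside Record's constructor forces a refresh. A sketch of the more idiomatic approach (not from the original mail; simplified columns, imports assume SQLAlchemy 1.4+) is to declare a relationship() and assign objects instead of ids:

```python
import datetime

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Subordinate(Base):
    __tablename__ = 'subordinate'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)

class Record(Base):
    __tablename__ = 'record'
    id = Column(Integer, primary_key=True)
    subordinate_id = Column(Integer, ForeignKey('subordinate.id'),
                            nullable=False)
    timestamp = Column(DateTime, nullable=False,
                       default=datetime.datetime.now)
    # Assigning the object, not its id, means no refresh is needed:
    # SQLAlchemy fills in subordinate_id itself at flush time.
    subordinate = relationship(Subordinate, backref='records')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

subs = [Subordinate(name=n) for n in ('abc', 'ijk', 'xyz')]
session.add_all(subs)
# No intermediate commit required; the records reference objects, not ids.
session.add_all([Record(subordinate=s) for s in subs])
session.commit()
```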

[sqlalchemy] Re: Advice: Best practice for working with an existing database

2013-04-03 Thread Peter Herndon
On Friday, March 22, 2013 9:31:59 AM UTC-4, Jason wrote:



 Hi Peter,
 I think using the declarative reflected style is a great way to 
 integrate an existing database into your application. It sounds like you 
 are doing this already, but in case you are not the reflection setup is 
 documented at 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/extensions/declarative.html#using-reflection-with-declarative.
  
 It's very easy to have it populate all of your models with the correct 
 table attributes using the DeferredReflection base class. Then you just 
 have to make (almost empty) class definitions for each model. In theory you 
 could even make it so that the table name is inferred from the name of the 
 Model class, but in practice I found this to be annoying (I would 
 constantly forget how the table names compared to the class names).
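A rough sketch of the DeferredReflection setup (table and class names made up; imports assume SQLAlchemy 1.4+):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Reflected(DeferredReflection, Base):
    __abstract__ = True

# An almost-empty model: columns are filled in from the database
# when Reflected.prepare() is called.
class Employee(Reflected):
    __tablename__ = 'employee'

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO employee (name) VALUES ('alice')"))

Reflected.prepare(engine)  # reflects all Reflected subclasses at once

session = sessionmaker(bind=engine)()
```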

 --
 Jason


Thanks, Jason! 





Re: [sqlalchemy] python: sqlalchemy - how do I ensure connection not stale using new event system

2013-04-03 Thread algotr8der
I think you nailed it. This is the problem. I will follow the approach you 
indicated. Thank you Michael. I appreciate it. 

On Tuesday, April 2, 2013 1:35:07 PM UTC-7, Michael Bayer wrote:

 if you're doing a query that is causing the database connection to simply 
 fail, then you'd need to address that issue primarily.   You'd probably 
 want to perform multiple insert statements, chunking about 5-10K records at 
 a time.
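The chunking idea can be sketched like this (helper and table names are made up):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

metadata = MetaData()
items = Table(
    'items', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

def insert_in_chunks(conn, table, rows, chunk_size=5000):
    """Issue one executemany per chunk instead of a single huge one."""
    for start in range(0, len(rows), chunk_size):
        conn.execute(table.insert(), rows[start:start + chunk_size])

rows = [{'id': i, 'name': 'row%d' % i} for i in range(12000)]
with engine.begin() as conn:
    insert_in_chunks(conn, items, rows)
```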


 On Apr 2, 2013, at 1:26 PM, algot...@gmail.com javascript: wrote:

 To clarify my environment. I have a VM (Linux Redhat) which has MySQL 
 server running on it. My script runs locally on the same machine. It is a 
 simple script that does an insert-many after processing/parsing a csv 
 file. I don't have any web apps or anything of that nature. 

 On Monday, April 1, 2013 7:04:48 PM UTC-7, algot...@gmail.com wrote:

 I am using the sqlalchemy package in python. I have an operation that 
 takes some time to execute after I perform an autoload on an existing 
 table. This causes the following error when I attempt to use the connection:

 sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has 
 gone away')

 I have a simple utility function that performs an insert many:

 def insert_data(data_2_insert, table_name):
     engine = create_engine('mysql://blah:blah123@localhost/dbname')
     # MetaData is a Table catalog.
     metadata = MetaData()
     mytable = Table(table_name, metadata, autoload=True,
                     autoload_with=engine)
     for c in mytable.c:
         print c
     column_names = tuple(c.name for c in mytable.c)
     final_data = [dict(zip(column_names, x)) for x in data_2_insert]
     ins = mytable.insert()
     conn = engine.connect()
     conn.execute(ins, final_data)
     conn.close()


 It is the following line that takes a long time to execute, since 
 'data_2_insert' has 677,161 rows. 

 final_data = [dict(zip(column_names, x)) for x in data_2_insert]

 I came across the post below, which refers to a similar problem; however, 
 I am not sure how to implement the suggested connection management 
 as I am quite a newbie. 

 http://stackoverflow.com/questions/3033234/handle-mysql-restart-in-sqlalchemy

 Note for SQLAlchemy 0.7 - PoolListener is deprecated, but the same solution 
 can be implemented using the new event system. – robots.jpg

 Here is the link to the new event system described by one of the users:

 http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
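The gist of that recipe is a pool 'checkout' listener that pings the connection before handing it out - sketched here against a SQLite engine for simplicity; the engine URL and details would differ for MySQL:

```python
from sqlalchemy import create_engine, event, exc

engine = create_engine('sqlite://')

@event.listens_for(engine, 'checkout')
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    """Pessimistic check: ping the connection on every pool checkout."""
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        # Tell the pool this connection is dead; it will discard it
        # and retry the checkout with a fresh connection.
        raise exc.DisconnectionError()
    finally:
        cursor.close()
```

In later SQLAlchemy versions the same pessimistic check is built in as create_engine(..., pool_pre_ping=True).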

 If someone can please show me a couple of pointers on how I could go 
 about integrating the suggestions into the way I use sqlalchemy I would be 
 very appreciative. Thank you. 











