[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-30 Thread Russ
I discovered the sqlalchemy.orm.validates decorator, which executes
when I want it to (on attribute assignment) so I got out my hacksaw
and tried to make it work in one fell swoop... implementing automatic
UTC assignment for all UTCEnforcedDateTime columns.

I'm not comfortable with it yet... but a mostly-inclusive demo code
snippet is here:
http://pastebin.com/wB4BLzax

I intend to do that last bit of hackery (the utcConversionList and
__sa_validators__ work) by introspecting a large number of classes (all
having the TableDefMixin) and looking for UTCEnforcedDateTime Column
definitions that should be added to the utcConversionList for the class.
Method TBD there, but not important.

Although it is doing what I want it to do (hands off UTC assignment
where expected), I'm really not comfortable with it.  It seems
*highly* sketchy, quite indirect,  and it seems like there must be a
better way to simply set a default validator for a custom Column
type.  Is there?

It would be much tidier to put the validator into the
UTCEnforcedDateTime class.  If not... can I rely on the direct setting
of __sa_validators__ working in the future?  Using the orm.validates
decorator in each and every class is obviously the better choice for
future compatibility, but if I want one-shot assignment as I do above,
I can't use the orm.validates because it is too late to set
__sa_validators__ directly as orm.validates does, and I needed to be
brutal and go to the __dict__ directly.
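For reference, the per-class orm.validates approach mentioned above looks roughly like this. This is a minimal sketch against a modern SQLAlchemy; the model is a placeholder, the stdlib timezone.utc stands in for pytz.UTC, and the "naive means UTC" rule is just an illustrative assumption:

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base, validates

Base = declarative_base()

class Event(Base):
    __tablename__ = 'events'
    id = Column(Integer, primary_key=True)
    created = Column(DateTime)  # stands in for UTCEnforcedDateTime

    @validates('created')
    def _coerce_utc(self, key, value):
        # Runs on every assignment to .created, including in __init__.
        if value is None:
            return None
        if value.tzinfo is None:
            # Naive datetimes are assumed to already be UTC here.
            return value.replace(tzinfo=timezone.utc)
        return value.astimezone(timezone.utc)
```

The drawback, as noted, is that the decorator has to be repeated in every mapped class that uses the type.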

I expect there is a simple answer and all my hacking/exploration
related to my posts above was pointless... except it has been highly
educational to root through the SQLAlchemy implementation here.
Tidbits learned include now understanding descriptors (never had cause
to before), and learning about attrgetter and attrsetter.  Stepping
into the attribute assignment (InstrumentedAttribute.__set__) was
highly confusing until reading up on those bits!!  instance_state()
and instance_dict() instantly returning was somewhat mysterious for a
while!

Thanks,
Russ

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] mortar_rdb 1.2.0 released!

2011-06-30 Thread Chris Withers

Hi All,

I'm pleased to announce a new release of mortar_rdb.

This package ties together SQLAlchemy, sqlalchemy-migrate and
the component architecture to make it easy to develop projects
using SQLAlchemy through their complete lifecycle.

Changes in this release were:

- Pass None as the default for echo rather than False on the
  advice of Daniel Holth.

- When using mortar_rdb.registerSession, allow explicit
  disabling of two-phase commit.

- No longer log passwords during session registration.

- Specify sqlalchemy 0.6 as a requirement; until
  zope.sqlalchemy is ported, mortar_rdb shouldn't be
  used with sqlalchemy 0.7.

If you'd like to see what mortar_rdb can do for you, please have a read 
of the narrative usage docs, which give a quick run-through of the 
lifespan of a project developed using mortar_rdb:


http://packages.python.org/mortar_rdb/use.html

Full package details, including mailing list, IRC and bug tracker details, 
can be found here:


http://www.simplistix.co.uk/software/python/mortar_rdb

cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
   - http://www.simplistix.co.uk




Re: [sqlalchemy] mortar_rdb 1.2.0 released!

2011-06-30 Thread Wichert Akkerman

On 06/30/2011 10:14 AM, Chris Withers wrote:

- Specify sqlalchemy 0.6 as a requirement, until
  zope.sqlalchemy is ported, mortar_rdb shouldn't be
  used with :mod:`sqlalchemy` 0.7.


What is missing? zope.sqlalchemy seems to work fine with sqlalchemy 0.7 
as far as I've seen.


Wichert.




[sqlalchemy] Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-06-30 Thread Adrian
SQLAlchemy 0.7.1 / psycopg2 2.2.1 / PostgreSQL 9.0
---

I have a weird problem with orm queries that contain custom functions,
in this case from postgres contrib modules. When I do a query like
this

session.query(Entity, func.similarity(Entity.string,
'querystring')).all() # postgres pg_trgm extension

I will get the error below. However, when I specify one or all the
columns of the Entity individually it works. It also works if the
function is in the .order_by() clause.

Any ideas where the problem could come from?

-

/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
all(self)
   1675
   1676 
-> 1677         return list(self)
   1678
   1679     @_generative(_no_clauseelement_condition)

/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.pyc in
instances(self, cursor, _Query__context)
   1916
   1917         if filter:
-> 1918             rows = filter(rows)
   1919
   1920         if context.refresh_state and self._only_load_props \

/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/_collections.pyc
in unique_list(seq, hashfunc)
    594     if not hashfunc:
    595         return [x for x in seq
--> 596                 if x not in seen
    597                 and not seen.__setitem__(x, True)]
    598     else:

TypeError: an integer is required




Re: [sqlalchemy] Using custom function expression throws 'TypeError: an integer is required' in orm query

2011-06-30 Thread Michael Bayer

On Jun 30, 2011, at 9:23 AM, Adrian wrote:

this seems like it has to do with the type of object being returned from 
psycopg2, as the Query runs the rows through a uniquing function that uses 
sets, maybe a comparison is emitting that TypeError.   The stack trace doesn't 
quite make it clear.   It would be interesting to see what 
session.execute(myquery.statement) sends back in the result rows.   (I haven't 
looked up the SIMILARITY function in the PG docs yet to see what it returns).
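A self-contained version of that check (SQLite standing in for PostgreSQL, and lower() standing in for the pg_trgm similarity() function, since neither pg_trgm nor psycopg2 is assumed here) might look like:

```python
from sqlalchemy import Column, Integer, String, create_engine, func
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Entity(Base):
    __tablename__ = 'entities'
    id = Column(Integer, primary_key=True)
    string = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Entity(string='QueryString'))
session.commit()

# Execute the Query's statement directly, bypassing the ORM's row
# uniquing, and inspect the raw values the driver hands back.
q = session.query(Entity, func.lower(Entity.string))
for row in session.execute(q.statement):
    print(tuple(row), [type(v).__name__ for v in row])
```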



 



Re: [sqlalchemy] Full-fledged objects as mapped attributes?

2011-06-30 Thread Michael Bayer
Your question has arrived at almost the same time as an essentially
identical question from another user, Russ, so I've added an example of how
to use attribute events in conjunction with a TypeDecorator, so that the
data is coerced both at the database level and at the attribute setter
level - this is at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrencesOfType.


On Jun 29, 2011, at 9:40 PM, Jason Denning wrote:

 Hi All,
 I am building a Pyramid app using SQLAlchemy for the model, and I
 would like to be able to use the attributes as full-fledged objects
 (i.e., I would like to be able to define methods attached to the
 attributes), ideal example usage (although this is a somewhat
 contrived example):
 
 class Phone(Base):
id = Column(Integer, primary_key=True)
phone = Column(MyPhoneType)
 
 >>> p1 = Phone(phone='9995551212')
 >>> session.add(p1)
 >>> p1.phone
 '9995551212'
 >>> p1.phone.prettyPrint()
 '(999) 555-1212'
 >>> p1.phone.foo()
 Now doing foo...
 
 I have made some attempts towards this end using TypeDecorator and by
 trying to extend UserDefinedType, but it seems that the instantiated
 objects always have their mapped attributes converted into regular
 python types (string, unicode, etc..) losing whatever methods/class
 level stuff I try to define.
 
 I am not trying to affect any aspect of the descriptor protocol,
 object management, database interaction or any other ORM type stuff -
 I just want some helper methods, and maybe some instance-level
 attributes to be available for certain types of mapped attributes.
 
 Is this possible?  Should I be doing this using TypeDecorator /
 UserDefinedType / other ?  Is anyone else attempting this type of
 thing, or am I totally crazy?
 
 Thanks,
 Jason
 
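A sketch of the TypeDecorator half of what Jason describes (PhoneNumber and MyPhoneType are illustrative names, not part of any real API): subclassing str keeps the value usable everywhere a string is expected while adding methods, and process_result_value wraps database values in the rich type on load. As the thread discusses, this alone does not coerce values on plain attribute assignment; that is what the attribute-event recipe adds.

```python
from sqlalchemy import types

class PhoneNumber(str):
    """A str subclass: still binds/compares as a string, but has methods."""
    def prettyPrint(self):
        return '(%s) %s-%s' % (self[:3], self[3:6], self[6:])

class MyPhoneType(types.TypeDecorator):
    impl = types.String
    cache_ok = True  # silences a cache warning on modern SQLAlchemy

    def process_bind_param(self, value, dialect):
        # Persist the plain string form.
        return str(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # Wrap the database string in the rich type on the way out.
        return PhoneNumber(value) if value is not None else None
```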



Re: [sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-30 Thread Michael Bayer
You're close. I've added a note to the validates documentation at 
http://www.sqlalchemy.org/docs/orm/mapper_config.html#simple-validators that 
will point a reader on to the more comprehensive solution.

I've added an example for you, as well as for Jason, who asked almost the 
same question earlier. It illustrates the TypeDecorator in conjunction with 
an attribute listener that is applied to all occurrences of the target type, 
an approach also used by the mutable attributes extension, and one we may 
look into adding as more of a built-in feature in the future, although the 
recipe should be straightforward.

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrencesOfType


On Jun 30, 2011, at 2:42 AM, Russ wrote:




[sqlalchemy] millions of expires and types mutability.

2011-06-30 Thread Moch Ramis
SQLAlchemy 0.7.1 / psycopg2 2.2.4 / PostgreSQL 8.4
--

Hello everyone.

I recently profiled a task that was making heavy use of inserts and
queries through sqlalchemy, after having requested 1094 objects from the
database and kept them around the whole time.
I was surprised to see these results:

ncalls   tottime  percall  cumtimepercall
 filename:lineno(function)
 53606000  3000.0890.000 3351.6170.000
 /usr/local/lib/python2.5/site-packages/sqlalchemy/orm/state.py:220(expire)
 49000 1299.9560.027 1299.9560.027 {method 'commit' of
 'psycopg2._psycopg.connection' objects}
 105981671.593 0.006  671.5930.006 {method 'read' of 'file'
 objects}
 49000 324.784 0.007 3843.1510.078
 /usr/local/lib/python2.5/site-packages/sqlalchemy/orm/session.py:221(_remove_snapshot)
 [...]


The most time-consuming operation was not the request to the database but a
dummy expire method call.
Looking a little further behind the numbers, we can see that the number of
calls (53606000) is exactly 49000*1094. So each time we issue a begin/commit
transaction (there was one select count and one insert in each), the expire
function is called... and I could guess that it is called from the
_remove_snapshot method, as the cumtime field shows.

Googling about it, i found this:

The usage of mutable types has significant performance implications when
 using the ORM. In order to detect changes, the ORM must create a copy of
 the value when it is first accessed, so that changes to the current value can
 be compared against the “clean” database-loaded value. Additionally,
 when the ORM checks to see if any data requires flushing, it must scan
 through all instances in the session which are known to have “mutable”
 attributes
 and compare the current value of each one to its “clean” value. So for
 example, if the Session contains 6000 objects (a fairly large amount) and
 autoflush is enabled, every individual execution of Query will require a
 full scan of that subset of the 6000 objects that have mutable attributes,
 possibly resulting in tens of thousands of additional method calls for
 every query.


Ok, so this seems to explain the issue... except that, as far as I know, I
don't use any mutable type (or the is_mutable call does not always return
the same thing): the data structure on which the operations are performed
looks like this:

uploaded_files = Table('uploaded_files', Metadata,
  Column('id', Integer, unique=True, primary_key=True),
  Column('hash', String(32), index=True),
  Column('relative_file_path', String),
  Column('lookup_directory', String(255), index=True),
  Column('job_id', String(36), default=None),
  Column('image_id', String(36), default=None, unique=True, index=True),
  Column('state', String(16), default=tasks_states.OperationStates.DETECTED,
index=True),
  Column('operation', String(16)),
  Column('retries', Integer, default=0),
  Column('last_update', DateTime, default=datetime.now,
onupdate=datetime.now),

  UniqueConstraint('relative_file_path', 'lookup_directory')
  )

class UploadedFiles(object):

  def __init__(self, Hash, Directory, RelativePath, Operation):
self.hash = Hash
self.lookup_directory = Directory
self.relative_file_path = RelativePath
self.operation = Operation


mapper(UploadedFiles, uploaded_files)


Now, I don't have any clue how to avoid this... is this normal? Is
there any way to avoid this behaviour?

Thanks !




Re: [sqlalchemy] millions of expires and types mutability.

2011-06-30 Thread Michael Bayer

On Jun 30, 2011, at 12:26 PM, Moch Ramis wrote:

This looks like you have 49000 calls to session.commit(), so, depending on what 
you're doing, I'd reduce the number of commit calls down to one, after the 
entire series of insert operations is complete.  Transactions should be written 
to enclose a full series of operations.

Otherwise, if you're really into calling commit() tens of thousands of times, 
you can set expire_on_commit=False, which will skip that whole step.  But 
better to use commit() appropriately.
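The expire_on_commit=False suggestion, sketched end to end (the model and names are placeholders, and SQLite stands in for PostgreSQL):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# expire_on_commit=False skips the per-instance expire() step that
# dominates the profile above; instances keep their loaded state after
# commit(), at the cost of possibly seeing stale data later.
Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()
item = Item(name='widget')
session.add(item)
session.commit()
print(item.name)  # served from memory; no re-SELECT is emitted
```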





Re: [sqlalchemy] millions of expires and types mutability.

2011-06-30 Thread Moch Ramis
Thanks for your fast answer.

2011/6/30 Michael Bayer mike...@zzzcomputing.com

 This looks like you have 49000 calls to session.commit(), so, depending on
 what you're doing, I'd reduce the number of commit calls down to one, after
 the entire series of insert operations is complete.  Transactions should be
 written to enclose a full series of operations.


Hmm, I guess so, but this is a particular concurrent context where I need
those commits.


 Otherwise if you're really into calling commit() tens of thousands of
 times, you can turn expire_on_commit=False which will skip that whole step.
But better to use commit() appropriately.


The expiry only affects query results when a request is done using primary
keys (or through already-loaded ORM objects), isn't it? Well, those 1094
objects are only used as a snapshot of a state of the database when they
are requested, but are not updated through merge or anything like that, so
I guess using this argument looks OK.

I'm also looking to group some of those calls, but I'm not sure it will be
possible.

Thanks a lot !




Re: [sqlalchemy] millions of expires and types mutability.

2011-06-30 Thread Michael Bayer

On Jun 30, 2011, at 1:01 PM, Moch Ramis wrote:

 Hmm i guess so but this is a particular concurrent context where i need 
 those commits.

that sounds curious.   Do you have multiple processes communicating via 
information passed in the database ?




 



[sqlalchemy] Re: Slightly confusing error when session set up incorrectly

2011-06-30 Thread Ben Sizer
On Jun 29, 7:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 29, 2011, at 12:24 PM, Ben Sizer wrote:

OK, this happens a lot with me, if anyone can please recommend what university 
I should go to in order to learn to speak english correctly... Here's the 
sentence:

It's not incorrect, just ambiguous. I expect most people would have
read it the way you intended it.

But this wasn't my point and it's not directly relevant to my issue:
whether it maintains an open connection for each engine or each
request doesn't matter - what is important is that it won't make any
attempt to connect at all until this point. It's arguably implied, but
not explicitly stated.

  There is a similar implication
  in the 'What does the Session do?' paragraph also.

 Not seeing the implication in that one, please add some details to ticket 
 2204 as to the verbiage you find confusing (log in as guest/guest).

The part I meant is: "The Session begins in an essentially stateless
form. Once queries are issued or other objects are persisted with it,
it requests a connection resource from an Engine that is associated
either with the Session itself or with the mapped Table objects being
operated upon." This part isn't confusing; it's just that, again, it
does not explicitly state that no connection happens at the start.
Saying "When X occurs, Y also occurs" isn't logically the same as
"Before X occurs, Y never occurs". So I've not added this to the
ticket, because I don't think anything about this needs to change. I
have just been suggesting something extra to be added to the session
config/creation docs, e.g.: "Note, this does not actually connect to
the database yet. Only when your first query is issued will a physical
connection be made."
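The behaviour that note would describe can be demonstrated in a few lines; the deliberately bad URL below (the path is made up) makes the lazy connection visible:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# The URL points at a directory that does not exist, yet none of these
# three lines touches the database, so none of them fails:
engine = create_engine('sqlite:////no_such_dir_12345/db.sqlite')
Session = sessionmaker(bind=engine)
session = Session()

# Only the first real statement acquires a connection -- and errors out.
try:
    session.execute(text('SELECT 1'))
except Exception as exc:
    print('connection error surfaced only now:', type(exc).__name__)
```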

  I expect I am not unusual
  in wanting an error in session configuration to fail as soon as
  possible,

  Nobody has ever raised this issue before to my recollection.

  The problem with show-stopper bugs is that people often just give up
  and switch to something else rather than tell you about it. ;)

 "show stopper bug" is quite an exaggeration over what is essentially a small 
 documentation insufficiency.

To you (and now to me), it's a small documentation insufficiency,
because we know what the problem was. But to me 24 hours ago, this was
a problem that stopped my app running entirely.

 So there is a class of users, and a large one at that, that experience bugs 
 both large and small and don't immediately abandon the project. This class of 
 users continues to expand as the project has grown much stronger over the 
 course of many years, responding to user requests, competing products, etc.   
  There are of course users who abandon the project within 5 minutes, and I 
 would say if their level of tenacity is that low then they're probably better 
 off with whatever tool they end up using.

Yes, of course. sqlalchemy is the best product of its kind, which is
why I am here! I was just pointing out that sometimes it's the stuff
that -isn't- reported that is actually really important for user
retention. It just happens to be one of the things we tracked on the
last product I worked on, and I got an eye-opening view into how most
users stop using a product for reasons completely different to the
ones people complain about. :)

My level of tenacity is pretty low, I'll admit! But, I already fixed
my bug, and with your help also understand the fix. I just came here
to try and help others who might hit the same thing. If you don't
think my suggestions help, that's fine. I do accept your point about
not being able to adjust the docs to suit every individual user.

  [...] that means there would be 4 ways
  of approaching this, which seems a little much.

 I'm not sure what the four ways are here.

As a user who has no real restriction on configuration, whose use case
is simply "connect to a database and get a session for it", it appears
there are 4 different ways I can do that, and the docs tell me how to
do all of these but don't really explain why, meaning I was not sure
whether I was using the wrong approach or a sub-optimal one.

 1) import Session directly from sqlalchemy.orm.session and bind that
to the engine with the keyword argument on creation.  --  "Session is
a regular Python class which can be directly instantiated."
 2) call sessionmaker() to get a factory for unbound Session
subclasses, and call .configure(bind=whatever) to bind it.  --  "You
can also associate an Engine with an existing sessionmaker() using the
sessionmaker.configure() method"
 3) call sessionmaker() to get a factory for unbound Session
subclasses, and bind each to the engine with the keyword argument on
creation.  --  "you can also associate individual Session objects with
an Engine on each invocation"
 4) call sessionmaker(bind=some_engine) to get a factory for bound
Sessions and create an already-bound instance.  --  "However, to
standardize how sessions are configured and acquired, the
sessionmaker() function is 
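The four variants enumerated above, sketched side by side against a modern SQLAlchemy; all four end with an equivalent bound session:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine('sqlite://')

# 1) Instantiate Session directly, bound via keyword argument.
s1 = Session(bind=engine)

# 2) Unbound factory, bound afterwards via configure().
Factory2 = sessionmaker()
Factory2.configure(bind=engine)
s2 = Factory2()

# 3) Unbound factory, bound per-instance at call time.
Factory3 = sessionmaker()
s3 = Factory3(bind=engine)

# 4) Factory bound at creation -- the form the docs standardize on.
Factory4 = sessionmaker(bind=engine)
s4 = Factory4()
```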

[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-30 Thread Russ
Thanks!!  I would *never* have associated Jason's question with mine,
but no matter... the end result is perfect.

I had actually gotten it completely working with my messy
__sa_validators__ hackery (and a heck of a lot of questionable
introspection), but I now have it working with the event system as you
have suggested.  It is MUCH cleaner and I like it a lot.

Here is my final implementation:
http://pastebin.com/33Zkfz1h

The only thing I'm still unsure of in the code is why mapper.columns
is a collection and it required checking columns[0], but I can either
just live with that or look into it later.

Also - prior to your suggestion I was still on SQLAlchemy 0.6.6 and
this prompted me to make the leap to 0.7.1 ... all code now working
fine after that transition with only minor hiccups.

That was an excellent introduction to the new event system as well...
thanks again!

Russ

--
Code is reproduced below as well, in case the pastebin ever fails:

from pytz import UTC
import sqlalchemy as sa
import sqlalchemy.orm as orm
import globalenv

class UTCEnforcedDateTime(sa.types.TypeDecorator):
    """DateTime type that ensures datetime objects are offset-aware UTC."""
    impl = sa.types.DateTime

    def process_bind_param(self, value, engine):
        if (value is not None) and (value.tzinfo != UTC):
            raise Exception("Data MUST be offset-aware UTC!")
        return value

    def process_result_value(self, value, engine):
        if value is not None:
            return value.replace(tzinfo=UTC)
        return value

def _EnsureUTC(target, value, oldvalue, initiator):
    """'set' event handler for all UTCEnforcedDateTime columns.

    Ensures that the provided 'value' is an offset-aware UTC datetime.
    Using this handler converts times to UTC on assignment (so that the
    UTCEnforcedDateTime implementation doesn't throw an exception on
    commit).
    """
    dt = value
    if dt is None:
        return dt
    if dt.tzinfo == UTC:
        return dt
    tz = globalenv.LocalTZ  # pytz timezone that naive datetimes are in
    # Convert naive time to local time...
    #  - normalize is needed to deal with DST funkiness
    dt_tz = tz.normalize(tz.localize(dt))
    return dt_tz.astimezone(UTC)

@sa.event.listens_for(orm.mapper, "mapper_configured")
def _Configure_UTCEnforcedDateTime_Setter(mapper, class_):
    """A "mapper_configured" listener that is triggered every time an ORM
    class mapper is registered (once per class).

    Makes sure that any defined UTCEnforcedDateTime columns always receive
    properly determined UTC offset-aware values (with the use of the
    _EnsureUTC handler).

    Adapted from sample code here:
    http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrencesOfType
    """
    for prop in mapper.iterate_properties:
        if hasattr(prop, 'columns'):  # it is a column (not a relation)
            if isinstance(prop.columns[0].type, UTCEnforcedDateTime):
                # Set up a listener for datetime setting events...
                classAttr = getattr(class_, prop.key)  # attr of the mapped class
                sa.event.listen(
                    classAttr,    # We want to listen for when classAttr...
                    "set",        # has a "set" event (like a property setter)...
                    _EnsureUTC,   # and use _EnsureUTC as the handler...
                    retval=True)  # and let _EnsureUTC replace the value with its return
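The core mechanism in the code above, reduced to a self-contained illustration: an attribute "set" listener registered with retval=True may rewrite the value being assigned. (The model and the naive-means-UTC rule here are illustrative only, and the stdlib timezone.utc stands in for pytz.)

```python
from datetime import datetime, timezone

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Sample(Base):
    __tablename__ = 'samples'
    id = sa.Column(sa.Integer, primary_key=True)
    when = sa.Column(sa.DateTime)

def ensure_utc(target, value, oldvalue, initiator):
    # For this illustration, treat naive datetimes as already-UTC.
    if value is not None and value.tzinfo is None:
        return value.replace(tzinfo=timezone.utc)
    return value

# retval=True lets the listener's return value replace the assigned value.
sa.event.listen(Sample.when, 'set', ensure_utc, retval=True)

s = Sample(when=datetime(2011, 6, 30, 12, 0))
print(s.when.tzinfo)  # the naive value was coerced on assignment
```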



