[sqlalchemy] Oracle-Tg2 Combination,sqlalchemy error

2010-04-30 Thread dhanil anupurath
Hi,
   I need help setting up Oracle in a TG2/SQLAlchemy environment.
   I believe the issue is more of a SQLAlchemy issue, which is why I
am posting here.

Here's what i have done.

 1. Installed TG2 and SQLAlchemy.
 2. Installed Oracle Express 10g.
 3. Confirmed that I am able to access Oracle using SQL*Plus.
 4. Installed cx_Oracle.
 5. Wrote a short test program in Python to access my database using
cx_Oracle. The program worked fine.
 Here's the test program:

 import pkg_resources
 pkg_resources.require("cx-Oracle>=5.0.3")
 import cx_Oracle

 connstr = 'username/password@localhost'
 conn = cx_Oracle.connect(connstr)
 curs = conn.cursor()

 curs.execute('select * from tabs')
 print curs.description
 print curs.fetchone()

 conn.close()
6. Started my application after configuring it for Oracle Express.
7. Here's the error I got:
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 1229, in contextual_connect
    return self.Connection(self, self.pool.connect(), close_with_result=close_with_result, **kwargs)
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 142, in connect
    return _ConnectionFairy(self).checkout()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 304, in __init__
    rec = self._connection_record = pool.get()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 161, in get
    return self.do_get()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 639, in do_get
    con = self.create_connection()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 122, in create_connection
    return _ConnectionRecord(self)
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 198, in __init__
    self.connection = self.__connect()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/pool.py", line 261, in __connect
    connection = self.__pool._creator()
  File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/strategies.py", line 80, in connect
    raise exc.DBAPIError.instance(None, None, e)
sqlalchemy.exc.DatabaseError: (DatabaseError) ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
 None None

8. Here are the .ORA files.

TNSNAMES.ora
  # tnsnames.ora Network Configuration File:

  XE =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
      (CONNECT_DATA = (SID = XE))
    )

  sqlnet.authentication_services = (NONE)

LISTENER.ORA
  # listener.ora Network Configuration File:

  SID_LIST_LISTENER =
    (SID_LIST =
      (SID_DESC =
        (SID_NAME = PLSExtProc)
        (ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server)
        (PROGRAM = extproc)
      )
      (SID_DESC =
        (SID_NAME = m...@localhost)
        (ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server)
      )
    )

  LISTENER =
    (DESCRIPTION_LIST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
      )
    )

  DEFAULT_SERVICE_LISTENER = (XE)

SQLNET.ORA
  The only uncommented line in sqlnet.ora is the
sqlnet.authentication_services = (NONE) line shown above.


9. On googling, I saw someone indicate that sqlnet.authentication_services
should be set to NONE.
   I did that, but to no avail.

 I would really appreciate any help in resolving this problem.
 
Thanks.







Re: [sqlalchemy] Oracle-Tg2 Combination,sqlalchemy error

2010-04-30 Thread Michael Bayer

On Apr 30, 2010, at 3:31 AM, dhanil anupurath wrote:

 
  import pkg_resources
  pkg_resources.require("cx-Oracle>=5.0.3")
  import cx_Oracle
 

 connstr='username/password@localhost'
 conn = cx_Oracle.connect(connstr)


I just want to go over how this is documented.  From the cx_Oracle 
documentation at 
http://cx-oracle.sourceforge.net/html/module.html#cx_Oracle.connect , we see 
that the above is taken as the format user/p...@dsn, where dsn is the TNS 
name from your tnsnames.ora file - ideally it would be the string 'xe', but it 
seems the hostname 'localhost' is accepted as well.

On the SQLAlchemy side, the documentation at 
http://www.sqlalchemy.org/docs/reference/dialects/oracle.html?highlight=cx_oracle#connecting
 states:

"Connecting with create_engine() uses the standard URL approach of 
oracle://user:p...@host:port/dbname[?key=value&key=value...]. If dbname is 
present, the host, port, and dbname tokens are converted to a TNS name using 
the cx_Oracle makedsn() function. Otherwise, the host token is taken directly 
as a TNS name."

So the equivalent connect string is:

create_engine('oracle://user:password@localhost')

meaning, '/dbname' is not present, so the 'host' portion, in this case 
'localhost', is sent to cx_Oracle.connect as the TNS name.

The usual way to connect looks like:

create_engine('oracle://user:password@localhost/xe')

the advantage to this approach is that SQLAlchemy uses cx_Oracle.makedsn() so 
that no TNS entry is required.

Here are all the ways to connect to a localhost XE server:

import cx_Oracle

print cx_Oracle.connect('scott/ti...@localhost')
print cx_Oracle.connect('scott/ti...@xe')
print cx_Oracle.connect('scott', 'tiger', 'localhost')
print cx_Oracle.connect('scott', 'tiger', 'xe')
print cx_Oracle.connect('scott', 'tiger', cx_Oracle.makedsn('localhost', 1521, 'xe'))

from sqlalchemy import create_engine

print create_engine('oracle://scott:ti...@localhost').connect().connection.connection
print create_engine('oracle://scott:ti...@xe').connect().connection.connection
print create_engine('oracle://scott:ti...@localhost/xe').connect().connection.connection

output of this looks like:

cx_Oracle.Connection to sc...@localhost
cx_Oracle.Connection to sc...@xe
cx_Oracle.Connection to sc...@localhost
cx_Oracle.Connection to sc...@xe
cx_Oracle.Connection to scott@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SID=xe)))
cx_Oracle.Connection to sc...@localhost
cx_Oracle.Connection to sc...@xe
cx_Oracle.Connection to scott@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SID=xe)))

Note that the two forms of connection that use makedsn() have a locally 
generated TNS entry present.

Also note that SQLAlchemy does not use pkg_resources to import cx_Oracle; it 
just calls "import cx_Oracle".  If the above pkg_resources call is required 
(extremely unlikely), you need to call it before create_engine().
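
(A minimal ordering sketch, for illustration only - the engine URL here is 
assumed, and the version pin is reused from the post above:)

import pkg_resources
pkg_resources.require("cx-Oracle>=5.0.3")   # only if this pin is really needed

from sqlalchemy import create_engine        # create_engine() itself just does "import cx_Oracle"
engine = create_engine('oracle://user:password@localhost/xe')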

hope this helps.







Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Kent Bower
Thanks in advance for the discussion; I hope I can continue without you 
feeling I'm taking too much of your time...


You said:

"If you are using merge(), and you're trying to use it to figure out 
what's changed, that implies that you are starting with a business 
object containing the old data and are populating it with the new data."


This isn't quite the case.  Rather, I am handed an object (actually a JSON 
text representation of an object) describing how it should look *now*.  I parse 
that object and create a transient SQLA object with the exact same 
properties and relations - recursively, very similar to what merge() is 
doing.


Then I hand that transient object to merge(), which nicely loads the 
original from the database - if it existed - or creates it if it didn't 
exist - and attaches it to the session for me.  What I am missing is 
what it looked like before.  Certainly, I could call 
session.query.get() before merge() so I would know what it looks like, 
but the main problem with that is matching up all the related objects 
with the post-merged ones, especially for list relations.
Plus, merge() is already looking up these objects for me, so it seemed 
smarter to harness what merge is already doing.
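
(A rough sketch of the workflow just described - the Order class, the 
"payload" variable and the flat attribute loop are illustrative assumptions, 
not code from this thread:)

import simplejson as json         # stdlib "json" on newer Pythons

data = json.loads(payload)        # "payload" is the incoming JSON text
order = Order()                   # transient instance, never add()ed to the session
for key, value in data.items():
    setattr(order, key, value)    # recursing into related objects is elided here

merged = session.merge(order)     # loads the existing row (or creates one) and attaches it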


I've looked into the examples you provided.  Waiting for before_flush() 
won't work for our use case, since we need the old _original object 
available *before* the flush as well as after.  I could, before the 
flush, use attributes.get_history(), but it quickly becomes 
problematic if the programmer needs to look in two completely different 
places, both attributes.get_history() for post-flush changes and 
_original for pre-flush changes... which somewhat defeats the purpose.
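
(For what it's worth, a minimal sketch of the pre-flush inspection mentioned 
above, assuming the get_history() form that accepts the instance directly - 
the exact signature has varied across SQLAlchemy versions, and 
"customer"/"email" are purely illustrative:)

from sqlalchemy.orm import attributes

hist = attributes.get_history(customer, 'email')
if hist.added or hist.deleted:
    # hist is an (added, unchanged, deleted) structure; "deleted" holds the old value(s)
    print "email changed from %r to %r" % (hist.deleted, hist.added)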


I was considering that a SessionExtension "def before_merge(session, 
merged)" hook would be handy for me... one that is called before the 
properties are updated on the merged instance.  Inside that hook, I could 
potentially clone "merged", set it as an _original on itself, and 
check whether it was newly created by checking session.new (or a new_instance 
boolean could be passed to the hook).


However, in the end, I would *still* have the problem that, yes, each 
post-merge object has a detached reference to _original, but these 
cloned objects' relations would not be populated correctly among each 
other, so I couldn't call order._original.calc_points() and expect 
the relations to be populated correctly.  That's what I'm trying to 
figure out how to accomplish; that would be ideal.  Almost like an 
alternate-universe snapshot in time of the object (now detached) 
before the changes, with all relations still populated, and where each 
*current* object has a reference to its old version.


Could I look it up in a separate session and then merge it into my main 
session?  Then, after that, merge the transient objects into my main 
session?  Would that get me closer?  Ideally, I wouldn't want it to need 
2 trips to the database.



On 4/29/2010 4:48 PM, Michael Bayer wrote:

Kent Bower wrote:
   

It is helpful to know what SQLA was designed for.

Also, you may be interested to know of our project as we are apparently
stretching SQLA's use case/design.  We are implementing a RESTful web
app (using TurboGears) on an already existent legacy database.  Since
our webservice calls (and TurboGears' in general, I believe) are
completely state-less, I think that precludes my being able to harness
descriptors, custom collections, and/or AttributeListeners, and to
trigger the desired calculations before things change because I'm just
handed the 'new' version of the object.
This is precisely why so much of my attention has been on merge()
because it, for the most part, works that out magically.
 

If you are using merge(), and you're trying to use it to figure out
what's changed, that implies that you are starting with a business
object containing the old data and are populating it with the new
data.   Custom collection classes configured on relationship() and
AttributeListeners on any attribute are both invoked during the merge()
process (but external descriptors notably are not).

merge() is nothing more than a hierarchical attribute-copying procedure
which can be implemented externally.   The only advantages merge() itself
offers are the optimized load=False option which you aren't using here,
support for objects that aren't hashable on identity (a rare use case),
and overall a slightly more inlined approach that removes a small amount
of public method overhead.

You also should be able to even recreate the _original object from
attribute history before a flush occurs.   This is what the versioned
example is doing for scalar attributes.


   


[sqlalchemy] Filtering based on Composite() value

2010-04-30 Thread Peter Erickson
Is it possible to use like() on a composite object attribute to filter  
a particular query? I'm trying to pull records from a Postgres DB  
based on the value of a single attribute within a composite object.  
For example, with the setup below, I want to search for records that  
have a url_domain like '%google.com' regardless of the other values.  
Looking through the docs and Google, I'm guessing that I need to use  
the comparator factory, but I wasn't quite sure how to accomplish my  
goal.


Thanks in advance.

(Since I had to re-type all this instead of cut-and-paste, I left out  
a lot of the other column names, etc.)


from urlparse import urlparse

class URL(object):
    def __init__(self, url):
        url = urlparse(url)

        self.scheme = url.scheme
        self.domain = url.netloc   # urlparse exposes the host portion as netloc
        self.path   = url.path
        self.query  = url.query

    def __composite_values__(self):
        return [self.scheme, self.domain, self.path, self.query]

class Product(Base):
    ...
    url = composite(URL,
                    Column('url_scheme', String),
                    Column('url_domain', String),
                    Column('url_path', String),
                    Column('url_query', String))

# this is what I'd like to express, but it doesn't work:
recs = session.query(Product).filter(Product.url.url_domain.like('%google.com'))
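
(Not an answer from this thread, just a possible workaround sketch: if the 
url_* Column objects end up on Product's table - depending on the SQLAlchemy 
version they may need to be declared as separate class attributes first - the 
filter can target the underlying column directly rather than the composite 
attribute:)

recs = session.query(Product).filter(
    Product.__table__.c.url_domain.like('%google.com')).all()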





[sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Kent
I think I've got a strategy that will work and doesn't involve hacking
merge()...

Thanks for your input; if you have inspiration, I'll still gladly hear
it.



Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Michael Bayer
Kent Bower wrote:

 Could I look it up in a separate session and then merge it into my main
 session?  Then, after that, merge the transient objects into my main
 session?  Would that get me closer?  Ideally, I wouldn't want it to need
 2 trips to the database.

there's this implication here that you are getting these objects back from
the DB and doing some kind of compare operation.  It's true this isn't
much different from doing a get() and then running your compare.  The
only thing that's not there for that is the traversal, but that still
confuses me - if merge() is deep inside of a traversal, and it's on
order.orderitems.foo.bar.bat and then you see a difference between the
existing and new bat objects, aren't you then traversing all the way up
to order to set some kind of flag?   i.e. it seems like no matter what,
your system has some explicit traversal implemented, in this case moving
upwards, but if you were to implement your own merge()-like function, it
would be downward moving.

if the model of how you see this happening is really that you need to
traverse your object deeply and record things in some central point as you
do it, this is best accomplished by using a traversal that is separate
from the internals of merge().

If there were a feature to be added to SQLAlchemy here, it would be a
generic traversal algorithm that anyone can use to create their own
traverse/merge and/or copy functions, since traversal along an
object graph of mapped attributes is a standard thing (probably
depth-first and breadth-first would be options).   There are components
of this traversal (the methods are called cascade_iterator) already
present, but they haven't been packaged into a simple public-facing
presentation.   Ideally the internal prop.merge() methods would ride on
top of this system too.   Traversals like these I most prefer to present
as iterators with some system of linking attribute-specific callables
(i.e. visitors).

That said, I don't think implementing your own merge()-like
load-and-traverse is very difficult - which I say only because I don't
have time to create an example for this right now.
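
(Purely as a rough illustration of such a mapper-driven traversal - this
helper is a sketch and is not something that ships with SQLAlchemy:)

from sqlalchemy.orm import object_mapper

def walk(obj, visit, _seen=None):
    """Depth-first visit of obj and every mapped object reachable through its
    relation properties (note that lazy loaders fire as attributes are read)."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return
    _seen.add(id(obj))
    visit(obj)
    for prop in object_mapper(obj).iterate_properties:
        if not hasattr(prop, 'mapper'):     # skip plain column/composite properties
            continue
        related = getattr(obj, prop.key)
        if related is None:
            continue
        if prop.uselist:
            for child in related:
                walk(child, visit, _seen)
        else:
            walk(related, visit, _seen)

# usage: walk(order, some_callable)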












[sqlalchemy] Re: Implementing placeholders / get_or_insert / get_or_create over SA

2010-04-30 Thread moranski
Okay, after using synonym combinations like get_or_create and
find_or_create to search this list, it seems that searching for
find_or_create yields the most useful answers.

P.S. Added for future searches.




Re: [sqlalchemy] Re: making a transient copy of instances recursively at merge() time

2010-04-30 Thread Kent Bower



On 4/30/2010 11:17 AM, Michael Bayer wrote:

Kent Bower wrote:
   

Could I look it up in a separate session and then merge it into my main
session?  Then, after that, merge the transient objects into my main
session?  Would that get me closer?  Ideally, I wouldn't want it to need
2 trips to the database.
 

there's this implication here that you are getting these objects back from
the DB and doing some kind of compare operation.Its true this isn't
much different from doing a get() and then running your compare.  The
only thing that's not there for that is the traversal, but that still
confuses me - if merge() is deep inside of a traversal, and its on
order.orderitems.foo.bar.bat and then you see a difference between the
existing and new bat objects, aren't you then traversing all the way up
to order to set some kind of flag ?   i.e. it seems like no matter what,
your system has some explicit traversal implemented, in this case moving
upwards, but if you were to implement your own merge()-like function, it
would be downwards moving.

   


We are writing software that allows hooks for a system administrator to 
write sqlalchemy-python code, which we compile and save as a BLOB in the 
database.  These are called "rules"; they are loaded depending on the type 
of DB entity and executed inline.  This is partly what is motivating 
this.  I want a very easy mechanism for these rules to determine: how 
has this object changed? what did it look like before these 
changes?  The system administrators can then make their own changes or 
disallow certain transactions by throwing errors back from 
the webserver to the client.  It is very flexible - and probably too 
powerful, but we will eventually do something about that, hopefully.


This means I can't state all the reasons we might want to compare 
the _original to the current object.  I also want to allow calls to 
calculated methods such as calc_points or calc_volume or whatever.


It also means I am not traversing back up the chain, because I don't 
know how far up to go.  The same code should kick in if I am saving a 
customer alone versus a customer that is embedded within an order which 
is being saved.  I may want to compare customer.email to 
customer._original.email and do something like unsubscribe the old email 
address.  Therefore it is important, in my mind, that each object 
maintains its own _original reference.  You don't know whether the 
object that was originally passed to merge() was the customer or the order 
that contains the customer, as an example.


I can tell you I've traversed sqlalchemy objects numerous times already 
using the object's mapper and recursion, so I completely agree about 
the benefit of a traversal algorithm, although now that I've done it 
so many times, it isn't too difficult anymore.  One thing I've noticed 
is that the cascade/load options that live on the mapper aren't 
always universal to our application.  That is to say, if I am querying 
order headers, I want fewer relations eagerly loaded than if I am in an 
order "get" service call, where I want to pass the client a very deeply 
loaded object.  If I later expire an object, it might be nice to 
optionally follow the same cascading scheme as what originally loaded 
it, instead of what lives on the mapper.
So if I were doing the traversal algorithm, I'd consider two or maybe 
four optional parameters to pass: eagerloads, lazyloads (maybe also defers 
and undefers).  Maybe they'd be renamed, as they apply more to cascading 
than actual loading.  That way the cascading of the traversal can be dynamic.


Thanks again.



[sqlalchemy] upgrade to SQLA 0.6

2010-04-30 Thread Kent
I did read the 0.6 Migration document.

I was using the contains_column() method of ForeignKeyConstraint.
Apparently it was removed?

AttributeError: 'ForeignKeyConstraint' object has no attribute
'contains_column'

Is there an easy workaround or replacement call?




Re: [sqlalchemy] upgrade to SQLA 0.6

2010-04-30 Thread Michael Bayer
Kent wrote:
 I did read 0.6 Migration document.

 I was using the contains_column method of ForeignKeyConstraint.
 Apparently removed?

 AttributeError: 'ForeignKeyConstraint' object has no attribute
 'contains_column'

 Easy workaround or replacement call?

it has a columns collection where you can say col in const.columns.





 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.