[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Werner F. Bruhin

Mike,

Mike wrote:
 Hi,

 I am working on a wxPython application that saves data to various
 tables in our MS SQL Server 2000. I connect to one table and get data
 using a session. This works great. I then do a session.close() and
 then a conn.close() where conn = engine.connect().  This seems to work
 as expected as well. The problem happens when I connect to the second
 table. I get the data from the second table just fine, but when I try
 to update it using my 2nd session object, I get the following error:

 Traceback (most recent call last):
   File \\debianis\loginscript$\PythonPackages\Development\Timesheet_sa
 \ts_worksheet_MT.py, line 689, in onClose
 session.flush()
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\session.py, line 789, in flush
 self.uow.flush(self, objects)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 233, in flush
 flush_context.execute()
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 445, in execute
 UOWExecutor().execute(self, tasks)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 930, in execute
 self.execute_save_steps(trans, task)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 945, in execute_save_steps
 self.save_objects(trans, task)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 936, in save_objects
 task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\orm\mapper.py, line 1144, in _save_obj
 c = connection.execute(statement.values(value_params), params)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 844, in execute
 return Connection.executors[c](self, object, multiparams, params)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 895, in execute_clauseelement
 return self._execute_compiled(elem.compile(dialect=self.dialect,
 column_keys=keys, inline=len(params) > 1), distilled_params=params)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 907, in _execute_compiled
 self.__execute_raw(context)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 916, in __execute_raw
 self._cursor_execute(context.cursor, context.statement,
 context.parameters[0], context=context)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 960, in _cursor_execute
 self._handle_dbapi_exception(e, statement, parameters, cursor)
   File c:\python25\lib\site-packages\sqlalchemy-0.4.7-py2.5.egg
 \sqlalchemy\engine\base.py, line 942, in _handle_dbapi_exception
 raise exceptions.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 sqlalchemy.exceptions.DatabaseError: (DatabaseError) internal error:
 None 'UPDATE [tbl_Acct_Prefs] SET pref_value=%(pref_value)s WHERE
 [tbl_Acct_Prefs].[empID] = %(tbl_Acct_Prefs_empID)s AND
 [tbl_Acct_Prefs].pref_name = %
 (tbl_Acct_Prefs_pref_name)s' {'pref_value': u'4',
 'tbl_Acct_Prefs_pref_name': 'last_payPeriod', 'tbl_Acct_Prefs_empID':
 258}


 To get the data, I do the following:

 pref = session.query(Acct_Prefs).filter_by(empID=self.emp_id,
 pref_name='last_payPeriod').first()

 Then I change a value like this:

 pref.pref_value = someValue

 Finally, I do a session.flush() and session.commit(). I've tried using
 just a commit(), but I get the same error. Since I can run this second
 session by itself in IDLE, I think the 1st session or connection or
 something is somehow messing up the second one. Any ideas on what is
 going on would be great. I'm still pretty green with SA, by the way.

 I am developing on Windows XP, Python 2.5.2 and using SqlAlchemy
 0.4.7. I can upgrade if you guys think that's the issue.
   
I am no expert in SA, but I am converting my application over to it.

Why are you closing the session?

You can work with many tables and do all your queries in the one 
session.

self.session = Session()
pref = self.session.query(db.Preferences).get(1)
# change pref here
self.session.commit()

cellar = self.session.query(db.Cellar).get(1)
# change cellar here
self.session.commit()

etc etc

I think in most wxPython applications you would create the session in 
wx.App.OnInit and close it in wx.App.OnExit.

Should you package your application with py2exe you should probably 
include this in your OnExit method:

self.session.close_all()
self.engine.dispose()
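The OnInit/OnExit pattern above might look something like this (a minimal sketch, not the poster's actual code; sqlite is used so it runs anywhere, and the two helpers stand in for what you would call from wx.App.OnInit and wx.App.OnExit):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def open_db(url="sqlite:///:memory:"):
    # Called once, e.g. from wx.App.OnInit: one engine and one
    # session for the whole application lifetime.
    engine = create_engine(url)
    Session = sessionmaker(bind=engine)
    return engine, Session()

def close_db(engine, session):
    # Called once, e.g. from wx.App.OnExit: release all connections
    # before the GUI tears down.
    session.close()
    engine.dispose()

engine, session = open_db()
# ... use this one session for all tables for the app's lifetime ...
close_db(engine, session)
```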


 Thanks,

 Mike

 


   


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---

[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Werner F. Bruhin

Mike,

Sent this a bit too quickly.

Mike wrote:
 Hi,

 I am working on a wxPython application that saves data to various
 tables in our MS SQL Server 2000. I connect to one table and get data
 using a session. This works great. I then do a session.close() and
 then a conn.close() where conn = engine.connect().  This seems to work
 as expected as well. The problem happens when I connect to the second
 table. I get the data from the second table just fine, but when I try
 to update it using my 2nd session object, I get the following error:

 traceback snipped


 To get the data, I do the following:

 pref = session.query(Acct_Prefs).filter_by(empID=self.emp_id,
 pref_name='last_payPeriod').first()

 Then I change a value like this:

 pref.pref_value = someValue

 Finally, I do a session.flush() and session.commit(). I've tried using
 just a commit(), but I get the same error. Since I can run this second
 session by itself in IDLE, I think the 1st session or connection or
 something is somehow messing up the second one. Any ideas on what is
 going on would be great. I'm still pretty green with SA, by the way.

 I am developing on Windows XP, Python 2.5.2 and using SqlAlchemy
 0.4.7. I can upgrade if you guys think that's the issue.
   
I am no expert in SA, but I am converting my application over to it.

Why are you closing the session?

You can work with many tables and do all your queries in the one
session.

self.session = Session()
pref = self.session.query(db.Preferences).get(1)
# change pref here
self.session.commit()

cellar = self.session.query(db.Cellar).get(1)
# change cellar here
self.session.commit()

etc etc

I think in most wxPython applications you would create the session in
wx.App.OnInit and close it in wx.App.OnExit.

Should you package your application with py2exe you should probably
include this in your OnExit method:

self.session.close_all()
self.engine.dispose()

I was seeing that my application would hang on closing if I did not do 
this. I am using Firebird SQL; it might be something to do with that db, 
as I heard from others using SA that they did not see this problem.

Werner

[sqlalchemy] Re: error when running query.count()

2008-08-28 Thread ml

I don't know how TG works. You have to ask TG users.

D


Mohammed Khan napsal(a):
 I think turbogears is maintaining the mapper config... how do I get this 
 information out?
 
 Thanks
 
 mfk
 
 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of ml
 Sent: Wednesday, August 27, 2008 2:09 AM
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: error when running query.count()
 
 
 Hi!
 
 There is more than one mapper for an object. Send your mapper configuration.
 
 David
 
 
 mkhan napsal(a):
 Hi, I am trying to get a count for a query but it errors out with the
 following message:

 Page handler: <bound method Root.index of <harvest.controllers.Root
 object at 0x018444B0>>
 Traceback (most recent call last):
   File c:\python25\lib\site-packages\cherrypy-2.3.0-py2.5.egg\cherrypy
 \_cphttptools.py, line 121, in _run
 self.main()
   File c:\python25\lib\site-packages\cherrypy-2.3.0-py2.5.egg\cherrypy
 \_cphttptools.py, line 264, in main
 body = page_handler(*virtual_path, **self.params)
   File "<string>", line 3, in index
   File c:\python25\lib\site-packages\TurboGears-1.0.5-py2.5.egg
 \turbogears\controllers.py, line 359, in expose
 *args, **kw)
   File "<string>", line 5, in run_with_transaction
   File c:\python25\lib\site-packages\TurboGears-1.0.5-py2.5.egg
 \turbogears\database.py, line 407, in sa_rwt
 retval = func(*args, **kw)
   File "<string>", line 5, in _expose
   File c:\python25\lib\site-packages\TurboGears-1.0.5-py2.5.egg
 \turbogears\controllers.py, line 372, in <lambda>
 mapping, fragment, args, kw)))
   File c:\python25\lib\site-packages\TurboGears-1.0.5-py2.5.egg
 \turbogears\controllers.py, line 401, in _execute_func
 output = errorhandling.try_call(func, *args, **kw)
   File c:\python25\lib\site-packages\TurboGears-1.0.5-py2.5.egg
 \turbogears\errorhandling.py, line 77, in try_call
 return func(self, *args, **kw)
   File C:\Documents and Settings\mkhan\workspace\TestAutomation
 \Reports\Harvest\harvest\controllers.py, line 31, in index
 print results2.count()
   File c:\python25\lib\site-packages\sqlalchemy-0.5.0beta3-py2.5.egg
 \sqlalchemy\orm\query.py, line 1193, in count
 return self._col_aggregate(sql.literal_column('1'),
 sql.func.count,
 nested_cols=list(self._only_mapper_zero().primary_key))
   File c:\python25\lib\site-packages\sqlalchemy-0.5.0beta3-py2.5.egg
 \sqlalchemy\orm\query.py, line 240, in _only_mapper_zero
 raise sa_exc.InvalidRequestError("This operation requires a Query
 against a single mapper.")
 InvalidRequestError: This operation requires a Query against a single
 mapper.

 I would appreciate some comments on what I am doing wrong.

 thanks,

 mfk

 
 
 
  
 




[sqlalchemy] Re: Orphans not deleted using cascade parameter

2008-08-28 Thread Bob Farrell
On Wed, Aug 27, 2008 at 12:53:38PM -0700, Alex Mathieu wrote:
 
 Thanks Michael, I'll have a look over this !!
 
 Bob, thanks also for your help, however, I'm not able to use the
 code... maybe the indentation is wrong here or I don't know... I was
 able to execute the function, but even by putting a print as the first
 line of the function, nothing got printed out, weird... (maybe my lack
 of skills using python, yet :P)
 
Alex,

Weird. I've attached it here so maybe that will make things work.
-- 
--
Bob Farrell
pH, an Experian Company
www.phgroup.com
Office Line: 020 7598 0310
Fax: 020 7598 0311
--


from sqlalchemy import select, delete
from sqlalchemy.orm import class_mapper


def delete_cascade(orm_obj):
    """Perform a cascading delete on any ORM object and its children."""
    # Since we take an ORM _object_, we need to discover its table:
    obj_table = class_mapper(type(orm_obj)).mapped_table

    def get_child_tables(parent_table, children=[]):
        """Recursively find all child tables."""
        new_children = []
        # Use SQLAlchemy's table_iterator reversed to give us the tables in
        # the correct order to ensure that we can delete without breaking
        # any constraints (i.e. we will not delete a parent before its
        # child):
        for table in obj_table.metadata.table_iterator(reverse=True):
            for fk in table.foreign_keys:
                if fk.references(parent_table) and \
                        (table, fk, parent_table) not in children:
                    new_children.append((table, fk, parent_table))
                    break
        # If no new children are found we have reached the top of the
        # recursion so we fall back down the stack:
        if not new_children:
            return []
        else:
            for child in new_children:
                # Here is the recursive call:
                children.extend(get_child_tables(child[0]))
            children.extend(new_children)
            return children

    _children = get_child_tables(obj_table)
    children = []
    # This loop filters out any tables which have more than one foreign key
    # where one of the foreign keys references the root node, so we have no
    # duplicates. The result is a list of tables that reference either the
    # root node or their parent:
    for child in _children:
        if child[0] not in [x[0] for x in children]:
            children.append(child)
        elif child[1].references(obj_table):
            for i, _child in enumerate(children):
                if _child[0] == child[0]:
                    children[i] = child
                    break
    # This is a rare-case optimisation that sees if any of the tables
    # reference the root node indirectly by having a foreign key whose
    # counterpart is a direct reference to the root node:
    for child in children:
        table, fk, parent_table = child
        if not fk.references(obj_table):
            parent_fk = fk.column.foreign_key
            while parent_fk is not None:
                if parent_fk.references(obj_table):
                    obj_column = parent_fk.column.key
                    break
                parent_fk = parent_fk.column.foreign_key
            # Finally build a select for grandchildren or later to
            # establish which records need to be removed by seeing which of
            # their parent's records are ancestors of the root node:
            if parent_fk is None:
                sel = select([fk.parent])
                parent_fk = fk.column.foreign_key
                while parent_fk is not None:
                    sel.append_whereclause(
                        parent_fk.parent == parent_fk.column
                    )
                    tmp = parent_fk.column.foreign_key
                    if tmp is not None:
                        parent_fk = tmp
                    else:
                        break
                obj_column = parent_fk.column.key
                sel.append_whereclause(
                    parent_fk.column == getattr(orm_obj, obj_column)
                )
                in_column = fk.column.key
                yield delete(
                    fk.parent.table,
                    fk.parent.in_(sel)
                )
                continue
        # Otherwise simply yield a delete statement to delete the
        # first-generation child of the root node:
        else:
            obj_column = fk.column.key
            yield delete(
                table,
                fk.parent == getattr(orm_obj, obj_column)
            )
    # Build the delete statement for the root node itself by
    # introspectively discovering the primary keys of 
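A self-contained sketch of the core trick the function above relies on: iterating the tables in reverse dependency order so child rows are deleted before their parents. The two-table schema is hypothetical, and `metadata.sorted_tables` is the modern spelling of 0.4's `table_iterator(reverse=True)`:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column, Integer,
                        ForeignKey, delete)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
parent = Table("parent", metadata, Column("id", Integer, primary_key=True))
child = Table("child", metadata,
              Column("id", Integer, primary_key=True),
              Column("parent_id", Integer, ForeignKey("parent.id")))
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(parent.insert().values(id=1))
    conn.execute(child.insert().values(id=1, parent_id=1))
    # sorted_tables lists parents before children, so reversed() deletes
    # the child rows first and no foreign key constraint is violated:
    for table in reversed(metadata.sorted_tables):
        conn.execute(delete(table))
```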

[sqlalchemy] Storing UTC Dates

2008-08-28 Thread Heston James - Cold Beans
Hello Guys,

 

This might seem like a bit of a naive question but I'm looking for your
advice. Being from the UK we operate on Daylight Savings Time which gives us
a one hour offset on times for a few months of the year.

 

I currently have a DateTime column which is declared like so:

 

created = Column(DateTime, default=func.now())

modified = Column(DateTime, default=func.now(), onupdate=func.now())

 

Which generally works very well: when I create a record it inserts the
current local time into the column; however, it stores the datetime with
DST applied to it. As I use the datetime at a later point for posting over
web services I really need to store the UTC version of now() in the
database, without DST applied to it.

 

How can I modify the above column definition to do this? Can I simply use
something instead of func.now()? I was given the advice to use func.now()
by someone, but I'm not really sure what it returns; is it a
datetime.datetime object, or a time tuple?

 

Or is there a parameter I can pass to Column() or DateTime() which will
ensure it uses the UTC format of the date when creating and modifying
records?

 

Many thanks guys,

 

Heston





[sqlalchemy] Re: Storing UTC Dates

2008-08-28 Thread Werner F. Bruhin

Heston,

Heston James - Cold Beans wrote:

 Hello Guys,

 This might seem like a bit of a naive question but I’m looking for 
 your advice. Being from the UK we operate on Daylight Savings Time 
 which gives us a one hour offset on times for a few months of the year.

 I currently have a DateTime column which is declared like so:

 created = Column(DateTime, default=func.now())

 modified = Column(DateTime, default=func.now(), onupdate=func.now())

 Which generally works very well: when I create a record it inserts the 
 current local time into the column; however, it stores the datetime 
 with DST applied to it. As I use the datetime at a later point for 
 posting over web services I really need to store the UTC version of 
 now() in the database, without DST applied to it.

 How can I modify the above column definition to do this? Can I simply 
 use something instead of func.now()? I was given the advice to use 
 func.now() by someone, but I'm not really sure what it returns; is it a 
 datetime.datetime object, or a time tuple?

 Or is there a parameter I can pass to Column() or DateTime() which 
 will ensure it uses the UTC format of the date when creating and 
 modifying records?

IIUC func.now is a database function.

You should be able to use datetime instead i.e.:

created = Column(DateTime, default=datetime.datetime.utcnow)

modified = Column(DateTime, default=datetime.datetime.utcnow, 
onupdate=datetime.datetime.utcnow)
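The defaults above can be exercised like this (a sketch with sqlite and the declarative extension; the `Pref` model is hypothetical, and note the callables are passed without parentheses so SQLAlchemy calls them at insert/update time):

```python
import datetime
from sqlalchemy import create_engine, Column, Integer, DateTime
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Pref(Base):
    __tablename__ = "prefs"
    id = Column(Integer, primary_key=True)
    # Python-side defaults run in the client, so no server-side
    # local-time/DST setting affects the stored value:
    created = Column(DateTime, default=datetime.datetime.utcnow)
    modified = Column(DateTime, default=datetime.datetime.utcnow,
                      onupdate=datetime.datetime.utcnow)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Pref(id=1))
    session.commit()
    row = session.get(Pref, 1)
```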


Werner




[sqlalchemy] Is there a simple way to let records have the same groups as it parents

2008-08-28 Thread Cecil Westerhof

I was just wondering if the following is possible.

A record has several groups connected through an N:M relation.
But it also has a parent. What I would like is that all the groups
from the parent (and its parent, etc.) are also seen as groups for the
record.
When the parent has a group added or removed, that should then be
'visible' at the child.
Not a very good description, I am afraid, so I'll try to give an example.

Say I have the groups G1, G2, G3, G4 and G5.
I have the records R1 and R2. R1 is the parent of R2 and does not have
a parent itself.

If R1 has the groups G1 and G2 and R2 has the groups G2, G4 and G5,
then asking for the list of groups of R2 should give G1, G2, G4 and
G5.
When querying for records that have group G1, both R1 and R2 should be returned.

When adding G3 to R1 the list for R2 should be: G1, G2, G3, G4 and G5.
When removing G1 from R1 the list for R2 should be: G2, G3, G4 and G5.
When removing G2 from R1 the list for R2 should be: G2, G3, G4 and G5.
(R2 has itself also group G2.)

Is this possible with sqlalchemy, or does this have to be done by hand?

-- 
Cecil Westerhof




[sqlalchemy] Re: Storing UTC Dates

2008-08-28 Thread Heston James - Cold Beans

Hi Werner,

 IIUC func.now is a database function.

Ah, ok, that makes fair sense.

 You should be able to use datetime instead i.e.:

 created = Column(DateTime, default=datetime.datetime.utcnow)

 modified = Column(DateTime, default=datetime.datetime.utcnow, 
 onupdate=datetime.datetime.utcnow)

Yes, this worked just great, Werner. I've used that and it seems to have done
the job! I hoped it would be that simple :-)

Thanks again,

Heston





[sqlalchemy] Re: Is there a simple way to let records have the same groups as it parents

2008-08-28 Thread az

this is sort of inheritance of data, right? 
the best I've made so far with this is to get (somehow) all the Rs, 
each with its groups, and then do the inheritance (a union over the 
path towards the root, in your case) over the result rows by hand. if 
you find a better way, let me know...
traversing a (recursive) tree isn't an easy thing either, unless you 
save it as nonrecursive (keeping extra links etc.) in one way or 
another.
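The by-hand union over the path towards the root might look like this (a sketch using plain stand-in objects rather than mapped classes; `groups` and `parent` are the hypothetical relation names from the question):

```python
class Node(object):
    # Stand-in for a mapped Record: a set of group names and an
    # optional parent Node.
    def __init__(self, groups, parent=None):
        self.groups = set(groups)
        self.parent = parent

def inherited_groups(record):
    """Union the record's own groups with every ancestor's groups."""
    groups = set()
    node = record
    while node is not None:
        groups |= node.groups
        node = node.parent
    return groups

# The example from the post: R1 has G1 and G2; R2 (child of R1) has
# G2, G4 and G5.
r1 = Node({"G1", "G2"})
r2 = Node({"G2", "G4", "G5"}, parent=r1)
```

Adding or removing a group on `r1` is immediately 'visible' through `inherited_groups(r2)`, since the union is computed on demand.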

On Thursday 28 August 2008 15:11:42 Cecil Westerhof wrote:
 I was just wondering if the following is possible.

 A record has several groups connected through an N:M relation.
 But it also has a parent. What I would like is that all the groups
 from the parent (and its parent, etc.) also are seen as group for
 the record.
 When the parent has a group added or removed that should then be
 'visible' at the child.
 Not a very good description I am afraid, so I'll try to give an
 example.

 Say I have the groups G1, G2, G3, G4 and G5.
 I have the records R1 and R2. R1 is the parent of R2 and does not
 have a parent itself.

 If R1 has the groups G1 and G2 and R2 has the groups G2, G4 and G5,
 then when asking the list of groups of R2 should give, G1, G2, G4
 and G5.
 When querying for records that have group G1, both R1 and R2 should
 be returned.

 When adding G3 to R1 the list for R2 should be: G1, G2, G3, G4 and
 G5. When removing G1 from R1 the list for R2 should be: G2, G3, G4
 and G5. When removing G2 from R1 the list for R2 should be: G2, G3,
 G4 and G5. (R2 has itself also group G2.)

 Is this possible with sqlalchemy or has this to be done by hand?






[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Mike

Hi Werner,

On Aug 28, 1:39 am, Werner F. Bruhin [EMAIL PROTECTED] wrote:
 Mike,

 Sent this a bit too quickly.



 Mike wrote:
  Hi,

  I am working on a wxPython application that saves data to various
  tables in our MS SQL Server 2000. I connect to one table and get data
  using a session. This works great. I then do a session.close() and
  then a conn.close() where conn = engine.connect().  This seems to work
  as expected as well. The problem happens when I connect to the second
  table. I get the data from the second table just fine, but when I try
  to update it using my 2nd session object, I get the following error:


traceback snipped

  To get the data, I do the following:

  pref = session.query(Acct_Prefs).filter_by(empID=self.emp_id,
  pref_name='last_payPeriod').first()

  Then I change a value like this:

  pref.pref_value = someValue

  Finally, I do a session.flush() and session.commit(). I've tried using
  just a commit(), but I get the same error. Since I can run this second
  session by itself in IDLE, I think the 1st session or connection or
  something is somehow messing up the second one. Any ideas on what is
  going on would be great. I'm still pretty green with SA, by the way.

  I am developing on Windows XP, Python 2.5.2 and using SqlAlchemy
  0.4.7. I can upgrade if you guys think that's the issue.

 I am no expert in SA, but I am converting my application over to it.

 Why are you closing the session?



As I walked in this morning, I realized that closing one session and
opening a second one was probably stupid. I was trying to avoid
mapping tables unless I needed them, but I think I'll just map all of
them.



 You can work with many tables and do all your queries in the one
 session.

 self.session = Session()
 pref = self.session.query(db.Preferences).get(1)
 # change pref here
 self.session.commit()

 cellar = self.session.query(db.Cellar).get(1)
 # change cellar here
 self.session.commit()

 etc etc


Does this work for multiple databases? This particular program I am
working on will be connecting to 2 or 3 databases and a table or three
in each of those. I'm pretty sure I have to create separate engines
for each db and probably bind separate sessions for those.
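One way this can work is a single session routed to several engines with the `binds` map (a sketch; the model classes and the sqlite URLs are hypothetical stand-ins for the real databases):

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

BaseA = declarative_base()   # models living in database A
BaseB = declarative_base()   # models living in database B

class Employee(BaseA):
    __tablename__ = "employees"
    id = Column(Integer, primary_key=True)

class Order(BaseB):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)

engine_a = create_engine("sqlite:///:memory:")
engine_b = create_engine("sqlite:///:memory:")
BaseA.metadata.create_all(engine_a)
BaseB.metadata.create_all(engine_b)

# One session, two engines: each mapped class is routed to its own
# database by the binds map.
Session = sessionmaker(binds={Employee: engine_a, Order: engine_b})
session = Session()
session.add(Employee(id=1))
session.add(Order(id=1))
session.commit()
```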




 I think in most wxPython applications you would create the session in
 wx.App.OnInit and close it in wx.App.OnExit.

 Should you package your application with py2exe you should probably
 include this in your OnExit method:

             self.session.close_all()
             self.engine.dispose()

 I was seeing that my application would hang on closing if I did not do
 this.  I am using Firebird SQL, it might be something to do with that db
 as I heard from others using SA that they did not see this problem.

 Werner

Thanks for the quick reply.

Mike



[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Cecil Westerhof

2008/8/28 Mike [EMAIL PROTECTED]:
 Does this work for multiple databases? This particular program I am
 working on will be connecting to 2 or 3 databases and a table or three
 in each of those. I'm pretty sure I have to create separate engines
 for each db and probably bind separate sessions for those.

Nope. I use two databases, no engine and only one session. Maybe not
optimal (I just started learning sqlalchemy and wxpython), but it is
possible.

-- 
Cecil Westerhof




[sqlalchemy] Re: Is there a simple way to let records have the same groups as it parents

2008-08-28 Thread Cecil Westerhof

2008/8/28  [EMAIL PROTECTED]:

 this is sort of inheritance of data, right?

You could call it that, I suppose.


 the best i've made so far about this is to get (somehow) all the Rs
 each with it's groups, and then do the inheritance (union over the
 path towards root in your case) over the result rows by hand.

I was afraid of that.

 if you find a better way, let me know...

Of course, but I am afraid it is not likely. :-{


 traversing a (recursive) tree isn't an easy thing either, unless you
 save it as nonrecursive (keeping extra links etc.) in one way or
 another.

I'll have to think about that. But probably this is only going to work
with tables that are not too big, I am afraid.
Maybe I want things that are too fancy.

-- 
Cecil Westerhof




[sqlalchemy] Re: loading tables is very slow

2008-08-28 Thread vkuznet

Hi,
so I modified the code to use the hotshot module; basically I added

profiler = hotshot.Profile("profile.dat")
profiler.run("loadTables(tableNames)")
profiler.close()

to my code instead of just calling the loadTables function. Results are
below. I played with the code a little bit and it seems to me that most
of the time is spent in databases/oracle.py, in the reflecttable method
of the OracleDialect class.
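(For reference, hotshot no longer exists in modern Pythons; the same measurement can be taken with cProfile. A sketch, where load_tables is a stand-in for the real reflection call:)

```python
import cProfile
import io
import pstats

def load_tables(table_names):
    # Stand-in for the real table-reflection work being profiled.
    return [name.lower() for name in table_names]

profiler = cProfile.Profile()
profiler.enable()
load_tables(["TriggerPathDescription"])
profiler.disable()

# Sort by cumulative time to see where reflection spends its time,
# just like the hotshot output below.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
```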

Anyway, let me know if more profile is necessary.
Thank you,
Valentin.


create engine 0.11220407486
engine.connect 1.89971590042
SELECT table_name FROM all_tables WHERE owner='ZZZ'
get tables 0.155882120132
Loading 'triggerpathdescription' table
5.56913805008
Load tables manually ['triggerpathdescription']
load table 5.57045388222
         1790 function calls (1717 primitive calls) in 5.570 CPU seconds

   Ordered by: internal time, call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       25    2.933    0.117    2.933    0.117 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1607(_fetchone_impl)
        1    1.809    1.809    1.809    1.809 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/strategies.py:76(connect)
        4    0.662    0.166    0.662    0.166 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/default.py:128(do_execute)
        1    0.146    0.146    0.147    0.147 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:276(_finalize_fairy)
      2/1    0.002    0.001    3.611    3.611 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:439(reflecttable)
  147/111    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1590(_get_col)
       14    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/schema.py:406(__init__)
       35    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/sql/compiler.py:1001(_requires_quotes)
      147    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/util.py:121(__getitem__)
       38    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:366(_normalize_name)
        4    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1423(_init_metadata)
       25    0.001    0.000    2.934    0.117 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1641(fetchone)
       14    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/schema.py:583(_set_parent)
       90    0.001    0.000    0.001    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/encodings/utf_8.py:15(decode)
       41    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:442(__getattr__)
       16    0.000    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/sql/expression.py:2651(__init__)
      111    0.000    0.000    0.003    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1338(__getitem__)
     10/1    0.000    0.000    0.001    0.001 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_parse.py:374(_parse)
     11/1    0.000    0.000    0.000    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_compile.py:27(_compile)
        4    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:429(close)
       21    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1299(__init__)
     11/1    0.000    0.000    0.000    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_parse.py:140(getwidth)
        4    0.000    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:216(get_result_proxy)
       72    0.000    0.000    0.000    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_parse.py:182(__next)
       20    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/util.py:472(__setitem__)
        1    0.000    0.000    5.570    5.570 test.py:14(loadTables)
      2/1    0.000    0.000    5.567    5.567 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1267(reflecttable)
      2/1    0.000    0.000    5.569    5.569 /opt/local/lib/python2.4/site-packages/sqlalchemy/schema.py:124(__init__)
       20    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/util.py:568(__setitem__)
       34    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/sql/expression.py:1574(__contains__)
        4    0.000    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/default.py:136(__init__)
        5    0.000    0.000    0.002    0.000 

[sqlalchemy] running average

2008-08-28 Thread dirjoe

Hi,

is there a way to query on running averages in sqlalchemy?

E.g. for a baseball database, how could I query for games where
a team scored 2 more runs than their year-to-date average?
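For concreteness, here is one hedged sketch of how that comparison might be phrased with a correlated subquery in SQLAlchemy Core (modern API; the games table and its columns are made up for illustration, not from any real schema):

```python
from sqlalchemy import (create_engine, MetaData, Table, Column, Integer,
                        String, select, func)

meta = MetaData()
# hypothetical games table: one row per game played by a team
games = Table("games", meta,
              Column("id", Integer, primary_key=True),
              Column("team", String),
              Column("game_no", Integer),
              Column("runs", Integer))

engine = create_engine("sqlite://")
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(games.insert(), [
        {"team": "A", "game_no": 1, "runs": 3},
        {"team": "A", "game_no": 2, "runs": 3},
        {"team": "A", "game_no": 3, "runs": 6},  # 6 runs vs. average of 3
    ])

# correlated scalar subquery: the team's average runs over all earlier games
g2 = games.alias()
ytd_avg = (select(func.avg(g2.c.runs))
           .where(g2.c.team == games.c.team)
           .where(g2.c.game_no < games.c.game_no)
           .scalar_subquery())

# games where the team scored at least 2 runs above its year-to-date average
stmt = select(games.c.game_no, games.c.runs).where(games.c.runs >= ytd_avg + 2)
with engine.connect() as conn:
    rows = conn.execute(stmt).all()
print(rows)
```

Only game 3 qualifies here: games with no earlier history produce a NULL average and are excluded by the comparison.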

Thanks for any clues.

joe




[sqlalchemy] Result set slice (session.query)

2008-08-28 Thread PyDevler

I am using the ORM query strategy via session.query. I vaguely
remember seeing in the past that doing a slice on the result set, i.e.
result[:10], used to perform a Limit/Offset query, but it may not have
been so. Nevertheless, with SA 0.5beta3 this is definitely not the
case. It seems that limit/offset queries are only issued when I index
the result set, e.g. result[0], which does a limit 1 offset 0.

The question is, how would I be able to extend the slicing to use
Limit Offset. e.g:

result[:10] = Limit 10 offset 0

I have a large database, and I quickly run out of memory with queries
that do not utilize the Limit Offset.

Anything better than [result[i] for i in range(0,10)] would be greatly
appreciated (since that would run 10 different queries).

Thanks,

--
Hatem Nassrat





[sqlalchemy] Re: Is there a simple way to let records have the same groups as its parents

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 8:11 AM, Cecil Westerhof wrote:


 I was just wondering if the following is possible.
 
 A record has several groups connected through an N:M relation.
 But it also has a parent. What I would like is that all the groups
 from the parent (and its parent, etc.) also are seen as group for the
 record.
 When the parent has a group added or removed that should then be
 'visible' at the child.
 Not a very good description I am afraid, so I'll try to give an  
 example.

 Say I have the groups G1, G2, G3, G4 and G5.
 I have the records R1 and R2. R1 is the parent of R2 and does not have
 a parent itself.

 If R1 has the groups G1 and G2 and R2 has the groups G2, G4 and G5,
 then asking for the list of groups of R2 should give G1, G2, G4 and
 G5.
 When querying for records that have group G1, both R1 and R2 should
 be returned.

 When adding G3 to R1 the list for R2 should be: G1, G2, G3, G4 and G5.
 When removing G1 from R1 the list for R2 should be: G2, G3, G4 and G5.
 When removing G2 from R1 the list for R2 should be: G2, G3, G4 and G5.
 (R2 has itself also group G2.)

 Is this possible with sqlalchemy or has this to be done by hand?

I wouldn't say by hand.  An element's groups are basically its own
groups unioned with the groups of its parent.  So a simple recursion
is the simplest way to form this group, but this only handles the
object graph side of the equation, not the Query side.

Suppose a mapping like:

mapper(MyObject, mytable, properties={
    'groups': relation(Group, collection_class=set),
    'parent': relation(MyObject, remote_side=mytable.c.id)
})

MyObject could return the full groups using:

class MyObject(object):
    @property
    def full_groups(self):
        if self.parent:
            return self.groups.union(self.parent.full_groups)
        else:
            return self.groups

On the Query side, the basic job is to formulate joins to the parent,  
which has the inherent issue that each level of depth corresponds to  
another join.  For the R1/R2 example it looks like:

parent_alias = aliased(MyObject)
sess.query(MyObject).outerjoin(parent_alias.parent).filter(
    or_(MyObject.groups.contains(somegroup),
        parent_alias.groups.contains(somegroup)))

You can attempt to create a generalized function for the query above
which is given N, the number of levels to be queried, and then creates
the joins N times as needed.  An example of this technique is in
examples/elementtree/adjacency_list.py.
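A rough, hedged sketch of such a generalized function, written against a modern SQLAlchemy API rather than the 0.4/0.5 `mapper()`/`relation()` API used above (the declarative setup and all names below are illustrative, not from the original thread):

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Table,
                        create_engine, or_)
from sqlalchemy.orm import (aliased, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

# association table for the N:M record <-> group relation
obj_groups = Table(
    "obj_groups", Base.metadata,
    Column("obj_id", ForeignKey("objs.id"), primary_key=True),
    Column("group_id", ForeignKey("groups.id"), primary_key=True),
)

class Group(Base):
    __tablename__ = "groups"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class MyObject(Base):
    __tablename__ = "objs"
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey("objs.id"))
    groups = relationship(Group, secondary=obj_groups, collection_class=set)
    parent = relationship("MyObject", remote_side=[id])

def query_in_group(sess, somegroup, depth):
    """Records whose own groups, or any ancestor's groups up to `depth`
    levels, contain `somegroup`; the outer join is repeated `depth` times."""
    q = sess.query(MyObject)
    criteria = [MyObject.groups.contains(somegroup)]
    entity = MyObject
    for _ in range(depth):
        parent = aliased(MyObject)
        q = q.outerjoin(parent, entity.parent)
        criteria.append(parent.groups.contains(somegroup))
        entity = parent
    return q.filter(or_(*criteria))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
sess = sessionmaker(bind=engine)()

g1, g2 = Group(name="G1"), Group(name="G2")
r1 = MyObject(groups={g1})             # R1 has G1 directly
r2 = MyObject(parent=r1, groups={g2})  # R2 inherits G1 via its parent
sess.add_all([r1, r2])
sess.commit()

# querying for G1 at depth 1 returns both R1 (direct) and R2 (via parent)
ids = sorted(o.id for o in query_in_group(sess, g1, depth=1))
print(ids)
```

At depth 0 the function degrades to a plain membership test, matching only R1.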







[sqlalchemy] Re: loading tables is very slow

2008-08-28 Thread Michael Bayer

Below, you can see that establishing the database connection itself to  
oracle is taking 1.8 seconds (connect() on the SQLA side).  And a huge  
2.9 seconds to fetch just 25 rows from your Oracle database  
(_fetchone_impl on the SQLA side, which calls cursor.fetchone(), a  
cx_oracle native function, directly).   The total time spent waiting  
for results from cx_oracle, for a total of four statement executions  
(do_execute()), is .662 seconds.

So that is a total of 5.3 seconds spent entirely on the cx_oracle side  
of the equation, which is the time it's taking your oracle database to  
connect, execute four select statements, and fetch a total of 25  
rows.   0.2 seconds remain which is the time actually spent within  
SQLA's reflection implementation.

It seems you have an extremely slow network connection or an  
overloaded Oracle database.


On Aug 28, 2008, at 10:26 AM, vkuznet wrote:


 Hi,
 so I modified the code to use the hotshot module; basically I added
 
 profiler = hotshot.Profile("profile.dat")
 profiler.run("loadTables(tableNames)")
 profiler.close()
 
 to my code instead of just calling the loadTables function. Results are
 below. I played with the code a little bit and it seems to me that most of
 the time is spent in databases/oracle.py, in the reflecttable method of
 the OracleDialect class.
 
 Anyway, let me know if more profiling is necessary.
 Thank you,
 Valentin.


 create engine 0.11220407486
 engine.connect 1.89971590042
 SELECT table_name FROM all_tables WHERE owner='ZZZ'
 get tables 0.155882120132
 Loading 'triggerpathdescription' table
 5.56913805008
 Load tables manually ['triggerpathdescription']
 load table 5.57045388222
          1790 function calls (1717 primitive calls) in 5.570 CPU seconds
 
    Ordered by: internal time, call count
 
    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        25    2.933    0.117    2.933    0.117 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1607(_fetchone_impl)
         1    1.809    1.809    1.809    1.809 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/strategies.py:76(connect)
         4    0.662    0.166    0.662    0.166 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/default.py:128(do_execute)
         1    0.146    0.146    0.147    0.147 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:276(_finalize_fairy)
       2/1    0.002    0.001    3.611    3.611 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:439(reflecttable)
   147/111    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1590(_get_col)
        14    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/schema.py:406(__init__)
        35    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/sql/compiler.py:1001(_requires_quotes)
       147    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/util.py:121(__getitem__)
        38    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:366(_normalize_name)
         4    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1423(_init_metadata)
        25    0.001    0.000    2.934    0.117 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1641(fetchone)
        14    0.001    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/schema.py:583(_set_parent)
        90    0.001    0.000    0.001    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/encodings/utf_8.py:15(decode)
        41    0.001    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:442(__getattr__)
        16    0.000    0.000    0.001    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/sql/expression.py:2651(__init__)
       111    0.000    0.000    0.003    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1338(__getitem__)
      10/1    0.000    0.000    0.001    0.001 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_parse.py:374(_parse)
      11/1    0.000    0.000    0.000    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_compile.py:27(_compile)
         4    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/pool.py:429(close)
        21    0.000    0.000    0.000    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/engine/base.py:1299(__init__)
      11/1    0.000    0.000    0.000    0.000 /opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/sre_parse.py:140(getwidth)
         4    0.000    0.000    0.002    0.000 /opt/local/lib/python2.4/site-packages/sqlalchemy/databases/oracle.py:216(get_result_proxy)
        72    0.000    0.000    0.000    0.000 /opt/local/Library/
 

[sqlalchemy] Re: Is there a simple way to let records have the same groups as its parents

2008-08-28 Thread Cecil Westerhof

2008/8/28 Michael Bayer [EMAIL PROTECTED]:


 On Aug 28, 2008, at 8:11 AM, Cecil Westerhof wrote:


 I was just wondering if the following is possible.
 
 A record has several groups connected through an N:M relation.
 But it also has a parent. What I would like is that all the groups
 from the parent (and its parent, etc.) also are seen as group for the
 record.
 When the parent has a group added or removed that should then be
 'visible' at the child.
 Not a very good description I am afraid, so I'll try to give an
 example.

 Say I have the groups G1, G2, G3, G4 and G5.
 I have the records R1 and R2. R1 is the parent of R2 and does not have
 a parent itself.

 If R1 has the groups G1 and G2 and R2 has the groups G2, G4 and G5,
 then when asking the list of groups of R2 should give, G1, G2, G4 and
 G5.
 When querying for records that have group G1, both R1 and R2 should
 be returned.

 When adding G3 to R1 the list for R2 should be: G1, G2, G3, G4 and G5.
 When removing G1 from R1 the list for R2 should be: G2, G3, G4 and G5.
 When removing G2 from R1 the list for R2 should be: G2, G3, G4 and G5.
 (R2 has itself also group G2.)

 Is this possible with sqlalchemy or has this to be done by hand?

 I wouldn't say by hand.  An element's groups are basically its own
 groups unioned with the groups of its parent.  So a simple recursion
 is the simplest way to form this group, but this only handles the
 object graph side of the equation, not the Query side.

Thanks for the info. I'll have to digest it, because I am just
starting with sqlalchemy. But it is nice to know that I could
implement this functionality.


 On the Query side, the basic job is to formulate joins to the parent,

Would it not be better to join to the child? Otherwise you need to
traverse all records, which would be inefficient -I think- when for
example only 1% of the records are in the group. Or am I overlooking
something?

-- 
Cecil Westerhof




[sqlalchemy] Re: Result set slice (session.query)

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 12:18 AM, PyDevler wrote:


 I am using the ORM query strategy using session.query. I vaguely
 remember seeing in the past that doing a slice on the result set, i.e.
 result[:10] used to perform a Limit/Offset query, it may not have been
 so. Nevertheless, with SA 0.5beta3 this is definitely not the case. It
 seems that limit offset queries are only issued when I index the
 result set, e.g. result[0] which does a limit 1 offset 0.

 The question is, how would I be able to extend the slicing to use
 Limit Offset. e.g:

result[:10] = Limit 10 offset 0

 I have a large database, and I quickly run out of memory with queries
 that do not utilize the Limit Offset.

 Anything better than [result[i] for i in range(0,10)] would be greatly
 appreciated (since that would run 10 different queries).


query[start:end] works just great, I've been using every 0.5 version  
for a soon-to-be production app here.   There may be some edge cases  
that are not supported, such as result[-5:-10], and we also now have  
checks which prevent subsequent filtering applied to an already  
limited query, since this is an ambiguous use case (i.e., do you want  
to filter from the limited results, or apply the filter first?  SQLA  
would not like to guess.  If you were expecting the latter, then I'm  
glad I decided to implement it this way since the former is more  
correct).

Additionally limit/offset can be applied directly using limit() and  
offset().Do you have a test case illustrating your specific issue ?
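To illustrate the point about where the slice has to be applied, here is a minimal, self-contained sketch (modern declarative API against an in-memory SQLite database; the Item mapping is hypothetical). Slicing must happen on the Query object itself, before it is iterated, for LIMIT/OFFSET to be emitted; slicing an already-fetched list cannot help:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Item() for _ in range(50)])
session.commit()

q = session.query(Item).order_by(Item.id)

# slicing the *Query* (not a materialized result list) becomes LIMIT/OFFSET
page = q[10:20]                       # SELECT ... LIMIT 10 OFFSET 10
# the explicit spelling of the same query
same = q.limit(10).offset(10).all()

print([item.id for item in page])
```

Both spellings fetch only the ten requested rows from the database; the identity map guarantees they are the same objects.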





[sqlalchemy] Re: Is there a simple way to let records have the same groups as its parents

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 10:54 AM, Cecil Westerhof wrote:


 On the Query side, the basic job is to formulate joins to the parent,

 Would it not be better to join to the child? Otherwise you need to
 traverse all records, which would be inefficient -I think- when for
 example only 1% of the records are in the group. Or am I overlooking
 something?

It's all joins between X and Y, so at that level it's the same thing.
The optimizations you'd be looking for here would involve additional
tables that store information such as a full path for each node, or a
materialized path view of some kind (or even nested sets, something I
should eventually create an ORM example for since it's tricky).
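The "full path for each node" idea can be sketched in plain Python (toy, hypothetical data; the dict-based storage stands in for the extra path column a real schema would add):

```python
def build_path(node_id, parent_of):
    """Return the materialized path 'root/.../node' for node_id, e.g. '1/2'."""
    path = []
    cur = node_id
    while cur is not None:
        path.append(cur)
        cur = parent_of[cur]
    return "/".join(str(n) for n in reversed(path))

def full_groups(node_id, parent_of, groups_of):
    """Union of a node's groups with every ancestor's groups: with a stored
    path this needs no recursive joins, just a split of the path string."""
    result = set()
    for n in build_path(node_id, parent_of).split("/"):
        result |= groups_of[int(n)]
    return result

# R1 (id 1) is the parent of R2 (id 2), as in the example upthread
parent_of = {1: None, 2: 1}
groups_of = {1: {"G1", "G2"}, 2: {"G2", "G4", "G5"}}
print(sorted(full_groups(2, parent_of, groups_of)))  # ['G1', 'G2', 'G4', 'G5']
```

In SQL terms, storing `build_path()`'s output per row turns "this node or any ancestor has group G" into a single `IN` query over the path's ids instead of N self-joins.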






[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Werner F. Bruhin

Mike,

Mike wrote:

...

 Does this work for multiple databases? This particular program I am
 working on will be connecting to 2 or 3 databases and a table or three
 in each of those. I'm pretty sure I have to create separate engines
 for each db and probably bind separate sessions for those.
   
I don't think so; with sessions and engines I believe you have to
have one per database.

You probably need to explain a bit more what you are doing with these 
databases, i.e. are you moving data from one to the other, or are they 
independent databases or ..

Werner




[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Mike

Hi,

Here's what I'm doing. I have a timesheet application written in
wxPython. It works, but I think it would work better if I changed the
database calls into SA calls. There are 3 databases. I created one and
the other two are pre-existing. The one I created is the one I store
all the user entered data into. I read from the official accounting
database and I read from a couple of views in the third database. The
two databases I read from are for checking purposes to make sure that
I am doing the calculations correctly in my program and also for
authentication and certain bits of user data, such as employee number,
name and such.

Anyway, I dumped the close calls I had and put them in the OnExit
method, which is something I've never used before. I'm not sure that I
have that set up right, but at this point it doesn't matter. I am
still receiving the same error.

If I take all of my SA setup out of the wxPython code and stick it in
my own module, it works. Here's what that looks like:


code

import ts_info
from db_tables import Acct_Prefs, TimeEntries
from sqlalchemy import Table
from sqlalchemy.orm import mapper, sessionmaker

# Connect to the database
print 'connecting to MCISAccounting DB...'
conn, engine, meta = ts_info.setupDB('acct')

# Load the tables
print 'loading tables...'
entry_table = Table('tbl_TimeEntries', meta, autoload=True)
prefs_table = Table('tbl_Acct_Prefs', meta, autoload=True)

# Map the tables
print 'mapping tables...'
mapper(TimeEntries, entry_table)
mapper(Acct_Prefs, prefs_table)

# Create a session object
print 'creating session...'
Session = sessionmaker(bind=engine)
session = Session()

pref = self.session.query(Acct_Prefs).filter_by(empID=self.emp_id,
pref_name='last_payPeriod').first()
pref.pref_value = SomeValue
self.session.commit()

/code

For some weird reason, if I do those last three lines in one of my
wxPython methods, I get an error. I know it has to be something really
stupid, but I'm just not seeing it...

Mike




On Aug 28, 10:29 am, Werner F. Bruhin [EMAIL PROTECTED] wrote:
 Mike,

 Mike wrote:

 ...

  Does this work for multiple databases? This particular program I am
  working on will be connecting to 2 or 3 databases and a table or three
  in each of those. I'm pretty sure I have to create separate engines
  for each db and probably bind separate sessions for those.

 I don't think so, using sessions and engines I would think you have to
 have one per database.

 You probably need to explain a bit more what you are doing with these
 databases, i.e. are you moving data from one to the other, or are they
 independent databases or ..

 Werner



[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Werner F. Bruhin

Mike,

Mike wrote:
 Hi,

 Here's what I'm doing. I have a timesheet application written in
 wxPython. It works, but I think it would work better if I changed the
 database calls into SA calls. There are 3 databases. I created one and
 the other two are pre-existing. The one I created is the one I store
 all the user entered data into. I read from the official accounting
 database and I read from a couple of views in the third database. The
 two databases I read from are for checking purposes to make sure that
 I am doing the calculations correctly in my program and also for
 authentication and certain bits of user data, such as employee number,
 name and such.

 Anyway, I dumped the close calls I had and put them in the OnExit
 method, which is something I've never used before. I'm not sure that I
 have that set up right, but at this point it doesn't matter. I am
 still receiving the same error.

 If I take all of my SA setup out of the wxPython code and stick it in
 my own module, it works. Here's what that looks like:


 code

 import ts_info
 from db_tables import Acct_Prefs, TimeEntries
 from sqlalchemy import Table
 from sqlalchemy.orm import mapper, sessionmaker

 # Connect to the database
 print 'connecting to MCISAccounting DB...'
 conn, engine, meta = ts_info.setupDB('acct')

 # Load the tables
 print 'loading tables...'
 entry_table = Table('tbl_TimeEntries', meta, autoload=True)
 prefs_table = Table('tbl_Acct_Prefs', meta, autoload=True)

 # Map the tables
 print 'mapping tables...'
 mapper(TimeEntries, entry_table)
 mapper(Acct_Prefs, prefs_table)

 # Create a session object
 print 'creating session...'
 Session = sessionmaker(bind=engine)
 session = Session()

 pref = self.session.query(Acct_Prefs).filter_by(empID=self.emp_id,
 pref_name='last_payPeriod').first()
 pref.pref_value = SomeValue
 self.session.commit()

 /code

 For some weird reason, if I do those last three lines in one of my
 wxPython methods, I get an error. I know it has to be something really
 stupid, but I'm just not seeing it...

 Mike

   
I did a small test based on the test/demo project I am working on.

Just duplicated the database and changed the application to read a
list of countries from one database and to read/update from a second
database.  I can view both without problem and update the 2nd db
without getting an error.

The way I set them up is:
class BoaApp(wx.App):
    def OnInit(self):
        self.ConnectDb()

        self.main = demoFrame2db.create(None)
        self.main.Show()
        self.SetTopWindow(self.main)
        return True

    def OnExit(self):
        self.session.close_all()
        self.engine.dispose()

    def ConnectDb(self):
        # db 1
        database = u'C:/Dev/BoaTest04/dbsampleSAnew/database.sqldb'
        dburl = sa.engine.url.URL('sqlite', username=None, password=None,
                                  host=None, port=None, database=database)
        self.engine = sa.create_engine(dburl, encoding='utf8', echo=False)
        Session = sao.sessionmaker()
        Session.configure(bind=self.engine)
        self.session = Session()

        # db 2
        database2 = u'C:/Dev/BoaTest04/dbsampleSAnew/database2.sqldb'
        dburl = sa.engine.url.URL('sqlite', username=None, password=None,
                                  host=None, port=None, database=database2)
        self.engine2 = sa.create_engine(dburl, encoding='utf8', echo=False)
        Session2 = sao.sessionmaker()
        Session2.configure(bind=self.engine2)
        self.session2 = Session2()

    def Getds(self):
        return self.session

    def Getds2(self):
        return self.session2

In my primary frame I then do:

    self.theList.SetSession(wx.GetApp().Getds())
    self.theList.InitObjectListView()
    self.theList.Bind(wx.EVT_LIST_ITEM_SELECTED, self.OnItemSelected)

    # a list from the 2nd db
    self.theList2.SetSession(wx.GetApp().Getds2())
    self.theList2.InitObjectListView()
when selecting items from theList I display some details and I can 
update these details and commit.

Maybe this helps, but maybe it just causes more confusion?

Werner



 On Aug 28, 10:29 am, Werner F. Bruhin [EMAIL PROTECTED] wrote:
   
 Mike,

 Mike wrote:

 ...

 
 Does this work for multiple databases? This particular program I am
 working on will be connecting to 2 or 3 databases and a table or three
 in each of those. I'm pretty sure I have to create separate engines
 for each db and probably bind separate sessions for those.
   
 I don't think so, using sessions and engines I would think you have to
 have one per database.

 You probably need to explain a bit more what you are doing with these
 databases, i.e. are you moving data from one to the other, or are they
 independent databases or ..

 Werner
 
 


   



[sqlalchemy] Re: Result set slice (session.query)

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 12:40 PM, PyDevler wrote:


 Hi Michael,

 (Sorry for the repost at sqlalchemy-devel; I was not able to find this
 post, so I thought it didn't go through)

 I can write up a small sample later. When I turn echoing on the DB
 engine, I can see that doing:

query[start:end]

 issues the full query, not using limit, offset. Unlike:

query[start]

 which does issue a limit 1 offset `start` query

 I forgot about using limit and offset directly, which is what I need
 to do. But I would have thought that it was more usable to do a
 limit/offset if a user is requesting a slice of the result set.
 
 I understand that edge slices may not be supported, especially in
 reverse, or when they overlap a boundary.

I can't reproduce query[start:end] not producing LIMIT/OFFSET.





[sqlalchemy] Re: Result set slice (session.query)

2008-08-28 Thread PyDevler

Hi Michael,

(Sorry for the repost at sqlalchemy-devel; I was not able to find this
post, so I thought it didn't go through)

I can write up a small sample later. When I turn echoing on the DB
engine, I can see that doing:

query[start:end]

issues the full query, not using limit, offset. Unlike:

query[start]

which does issue a limit 1 offset `start` query

I forgot about using limit and offset directly, which is what I need
to do. But I would have thought that it was more usable to do a
limit/offset if a user is requesting a slice of the result set.

I understand that edge slices may not be supported, especially in
reverse, or when they overlap a boundary.

Thanks,

--
Hatem

On Aug 28, 8:57 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Aug 28, 2008, at 12:18 AM, PyDevler wrote:





  I am using the ORM query strategy using session.query. I vaguely
  remember seeing in the past that doing a slice on the result set, i.e.
  result[:10] used to perform a Limit/Offset query, it may not have been
  so. Nevertheless, with SA 0.5beta3 this is definitely not the case. It
  seems that limit offset queries are only issued when I index the
  result set, e.g. result[0] which does a limit 1 offset 0.

  The question is, how would I be able to extend the slicing to use
  Limit Offset. e.g:

 result[:10] = Limit 10 offset 0

  I have a large database, and I quickly run out of memory with queries
  that do not utilize the Limit Offset.

  Anything better than [result[i] for i in range(0,10)] would be greatly
  appreciated (since that would run 10 different queries).

 query[start:end] works just great, I've been using every 0.5 version
 for a soon-to-be production app here.   There may be some edge cases
 that are not supported, such as result[-5:-10], and we also now have
 checks which prevent subsequent filtering applied to an already
 limited query, since this is an ambiguous use case (i.e., do you want
 to filter from the limited results, or apply the filter first?  SQLA
 would not like to guess.  If you were expecting the latter, then I'm
 glad I decided to implement it this way since the former is more
 correct).

 Additionally limit/offset can be applied directly using limit() and
 offset().Do you have a test case illustrating your specific issue ?




[sqlalchemy] Re: wx and SqlAlchemy

2008-08-28 Thread Mike

Werner,

On Aug 28, 11:24 am, Werner F. Bruhin [EMAIL PROTECTED] wrote:
 Mike,



 Mike wrote:
  Hi,

  Here's what I'm doing. I have a timesheet application written in
  wxPython. It works, but I think it would work better if I changed the
  database calls into SA calls. There are 3 databases. I created one and
  the other two are pre-existing. The one I created is the one I store
  all the user entered data into. I read from the official accounting
  database and I read from a couple of views in the third database. The
  two databases I read from are for checking purposes to make sure that
  I am doing the calculations correctly in my program and also for
  authentication and certain bits of user data, such as employee number,
  name and such.

  Anyway, I dumped the close calls I had and put them in the OnExit
  method, which is something I've never used before. I'm not sure that I
  have that set up right, but at this point it doesn't matter. I am
  still receiving the same error.

  If I take all of my SA setup out of the wxPython code and stick it in
  my own module, it works. Here's what that looks like:

  code

  import ts_info
  from db_tables import Acct_Prefs, TimeEntries
  from sqlalchemy import Table
  from sqlalchemy.orm import mapper, sessionmaker

  # Connect to the database
  print 'connecting to MCISAccounting DB...'
  conn, engine, meta = ts_info.setupDB('acct')

  # Load the tables
  print 'loading tables...'
  entry_table = Table('tbl_TimeEntries', meta, autoload=True)
  prefs_table = Table('tbl_Acct_Prefs', meta, autoload=True)

  # Map the tables
  print 'mapping tables...'
  mapper(TimeEntries, entry_table)
  mapper(Acct_Prefs, prefs_table)

  # Create a session object
  print 'creating session...'
  Session = sessionmaker(bind=engine)
  session = Session()

  pref = self.session.query(Acct_Prefs).filter_by(empID=self.emp_id,
  pref_name='last_payPeriod').first()
  pref.pref_value = SomeValue
  self.session.commit()

  /code

  For some weird reason, if I do those last three lines in one of my
  wxPython methods, I get an error. I know it has to be something really
  stupid, but I'm just not seeing it...

  Mike

 I did a small test based on the test/demo project I am working on.

 Just duplicated the database and changed the application to read from
 one database and show a list of countries and to read/update from a
 second database.  I can view both without problem and update the 2nd db
 without getting an error.

 The way I set them up is:
 class BoaApp(wx.App):
     def OnInit(self):

         self.ConnectDb()

         self.main = demoFrame2db.create(None)
         self.main.Show()
         self.SetTopWindow(self.main)
         return True

     def OnExit(self):
         self.session.close_all()
         self.engine.dispose()

     def ConnectDb(self):
         # db 1
         database = u'C:/Dev/BoaTest04/dbsampleSAnew/database.sqldb'
         dburl = sa.engine.url.URL('sqlite', username=None,
 password=None, host=None, port=None, database=database)
         self.engine = sa.create_engine(dburl, encoding='utf8', echo=False)
         Session = sao.sessionmaker()
         Session.configure(bind=self.engine)
         self.session = Session()

         # db 2
         database2 = u'C:/Dev/BoaTest04/dbsampleSAnew/database2.sqldb'
         dburl = sa.engine.url.URL('sqlite', username=None,
 password=None, host=None, port=None, database=database2)
         self.engine2 = sa.create_engine(dburl, encoding='utf8', echo=False)
         Session2 = sao.sessionmaker()
         Session2.configure(bind=self.engine2)
         self.session2 = Session2()

     def Getds(self):
         return self.session

     def Getds2(self):
         return self.session2

 In my primary frame I then do:
        self.theList.SetSession(wx.GetApp().Getds())
         self.theList.InitObjectListView()
         self.theList.Bind(wx.EVT_LIST_ITEM_SELECTED, self.OnItemSelected)

         # a list from the 2nd db
         self.theList2.SetSession(wx.GetApp().Getds2())
         self.theList2.InitObjectListView()

 when selecting items from theList I display some details and I can
 update these details and commit.

 Maybe this helps, but maybe it just causes more confusion?

 Werner


Your code makes sense. I tried to create a small runnable example, but
my sample works and my original does not. I'm not seeing the
difference, but obviously something is wrong with my original code. I
may have to just re-write it from scratch. I have a feeling that the
end of the traceback has a clue, but I don't know how to read it:

raise exceptions.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exceptions.DatabaseError: (DatabaseError) internal error:
None 'UPDATE [tbl_Acct_Prefs] SET pref_value=%(pref_value)s WHERE
[tbl_Acct_Prefs].[empID] = %(tbl_Acct_Prefs_empID)s AND
[tbl_Acct_Prefs].pref_name = %(tbl_Acct_Prefs_pref_name)s' {'pref_value': u'4',
'tbl_Acct_Prefs_pref_name': 
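For reference, the statement in the traceback is an ordinary parameterized UPDATE. A standalone sketch of the same operation, rewritten here against stdlib sqlite3 (qmark parameter style) purely for illustration; the table name and values mirror the traceback, not the real MS SQL schema, and the empID 'e1' is made up:

```python
import sqlite3

# Hypothetical in-memory stand-in for the tbl_Acct_Prefs table from the traceback.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl_Acct_Prefs (empID TEXT, pref_name TEXT, pref_value TEXT)")
conn.execute(
    "INSERT INTO tbl_Acct_Prefs VALUES ('e1', 'last_payPeriod', '3')")

# The UPDATE the ORM was emitting, as plain DB-API code.
conn.execute(
    "UPDATE tbl_Acct_Prefs SET pref_value = ? "
    "WHERE empID = ? AND pref_name = ?",
    ('4', 'e1', 'last_payPeriod'))

value = conn.execute(
    "SELECT pref_value FROM tbl_Acct_Prefs WHERE empID = 'e1'").fetchone()[0]
# value == '4'
```

If this plain version works where the ORM flush fails, the problem is likely in the session/connection handling rather than the statement itself.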

[sqlalchemy] Re: Is there a simple way to let records have the same groups as it parents

2008-08-28 Thread az

On Thursday 28 August 2008 18:00:08 Michael Bayer wrote:
 On Aug 28, 2008, at 10:54 AM, Cecil Westerhof wrote:
  On the Query side, the basic job is to formulate joins to the
  parent,
 
  Would it no be better to the child? Otherwise you need to
  traverse all records, which would be inefficient -I think- when
  for example only 1% of the records are in the group. Or am I
  overlooking something?

 its all joins between X and Y so at that level its the same thing.
 The optimizations you'd be looking for here would involve
 additional tables that store information such as a full path for
 each node, or a materialized path view of some kind (or even,
 nested sets, something I should eventually create an ORM example
 for since it's tricky).

cecil, think well about which point of view is more important in 
terms of speed vs. number of records vs. queries needed, and only then 
apply specific techniques to denormalize the tree.
e.g. nested sets (where an item knows all its children recursively, 
like someclass.__subclasses__()) might be good for answering which Rs 
belong to a group, while path-to-the-root (like someclass.mro()) 
might be better in other cases, e.g. which groups some R belongs to 
(these are just guesses, don't rely on them being 100% correct).
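As a concrete illustration of the path-to-the-root idea, here is a minimal materialized-path sketch using stdlib sqlite3. The schema, paths and ids are hypothetical, not from the thread; the point is that storing each node's full path as text turns "all descendants of X" into a single LIKE query instead of a recursive walk:

```python
import sqlite3

# Hypothetical schema: each node stores its full path, e.g. '1/2/3/'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, path TEXT)")
conn.executemany(
    "INSERT INTO node VALUES (?, ?)",
    [(1, '1/'), (2, '1/2/'), (3, '1/2/3/'), (4, '4/')])

def descendants(conn, root_path):
    # every node whose path starts with the root's path, excluding the root itself
    cur = conn.execute(
        "SELECT id FROM node WHERE path LIKE ? || '%' AND path != ? ORDER BY id",
        (root_path, root_path))
    return [row[0] for row in cur]

# descendants(conn, '1/') -> [2, 3]
```

The trade-off is the one described above: reads become one indexed-ish query, but moving a subtree means rewriting the path of every node under it.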

as Mike said, there are two sides of the equation: one is the object 
graph as such, the other is how you represent it in the db/queries over 
that. the representation can be heavily twisted towards some sort of 
queries; one extreme may be to store/cache all inherited-result groups 
for each R, and refresh the cache on any change anywhere on the 
path-to-root (the cache here is actually some records inside the db).

in my case i've decided that getting all objects plus their associated 
values in one go/query and then doing the inheritance in python is 
better than everything else, like the above cache, or walking the tree 
in python and making many small queries to get the values, for two 
reasons: thousands of rows, and the order/priority of inheritance is 
changeable. i have not decided how to derecurse the tree yet; i'm 
making an N-level generated query. the results so far are that for a 
self-recursive query (A1-A2-A3), 4 levels are enough to kill postgres 
(2 for sqlite); for a non-self-recursive one (A1-B1-C3), 3, 4, 5 or 6 
levels make little difference.

also, if you need a count over the above, query.count() may not be 
correct; use this instead (i don't know why, but count(1) is not 
correct over a filtered cartesian product):

def exact_count(query):
    from sqlalchemy.sql import func
    m = getattr(query, 'mapper', None)   # 0.4
    if m is None:
        m = query._mapper_zero()         # 0.5
    # see Query.count()
    return query._col_aggregate(m.primary_key[0],
                                lambda x: func.count(func.distinct(x)))

mike, why not have some query._count_expr() method that defaults to 
count(1), but where one can easily substitute something else, e.g. the 
above count(distinct(x.dbid))?
right now i have to subclass Query and replace the whole .count() with 
the above, which is partially copy-paste from the original Query.count() 
(and probably touches too much internal stuff).

have fun
svil




[sqlalchemy] Re: Result set slice (session.query)

2008-08-28 Thread PyDevler

On Aug 28, 10:50 am, Michael Bayer [EMAIL PROTECTED] wrote:
 I can't reproduce query[start:end] not producing LIMIT/OFFSET.

Actually that is right:

query[start:end]

does produce limit/offset. What I was doing was:

query[:end]

Since a slice's start defaults to 0 (as in lists), that does not
produce LIMIT/OFFSET.
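A toy sketch of the slice-to-SQL mapping being discussed. This is pure Python and purely illustrative; it is not SQLAlchemy's actual Query.__getitem__ implementation:

```python
def slice_to_sql(start, stop):
    # Translate Python slice bounds into LIMIT/OFFSET clauses:
    # OFFSET comes from start, LIMIT from the width stop - start.
    clauses = []
    if stop is not None:
        clauses.append("LIMIT %d" % (stop - (start or 0)))
    if start:
        clauses.append("OFFSET %d" % start)
    return " ".join(clauses)

# slice_to_sql(10, 20)   -> "LIMIT 10 OFFSET 10"
# slice_to_sql(None, 20) -> "LIMIT 20"   (start defaults to 0, so no OFFSET)
# slice_to_sql(None, None) -> ""         (open slice: no clauses at all)
```

Under this model query[:end] still ought to yield a LIMIT, which is essentially the behavior being asked about in the thread.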

Thanks,

--
Hatem




[sqlalchemy] Re: Result set slice (session.query)

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 2:59 PM, PyDevler wrote:


 On Aug 28, 10:50 am, Michael Bayer [EMAIL PROTECTED] wrote:
 I can't reproduce query[start:end] not producing LIMIT/OFFSET.

 Actually that is right:

query[start:end]

 does produce limit/offset. What I was doing was:

query[:end]

 Since a slice's start defaults to 0 (as in lists), that does not
 produce LIMIT/OFFSET.


we'll try to look into slices like [:y] and [x:] producing something  
reasonable.




[sqlalchemy] Extension Mapper

2008-08-28 Thread GustaV

Hi all.
I'm currently working on a map (like in geography :) )

When a new tile is inserted in the DB, I'm using a mapper extension
to update some of the neighbors' properties (like the neighbor count).
The after_insert method helps a lot... but:
when, in the after_insert method, I modify an object other than the
one being inserted, the modification happens on the Python object,
but doesn't occur in the DB. The commit at the end does not seem to
have an effect.

What should I do?

Thanks

Guillaume



[sqlalchemy] Re: Extension Mapper

2008-08-28 Thread Michael Bayer


On Aug 28, 2008, at 6:57 PM, GustaV wrote:


 Hi all.
 I'm currently working on a map (like in geography :) )

 When a new tile is inserted in the DB, I'm using a mapper extension
 to update some of the neighbors' properties (like the neighbor count).
 The after_insert method helps a lot... but:
 when, in the after_insert method, I modify an object other than the
 one being inserted, the modification happens on the Python object,
 but doesn't occur in the DB. The commit at the end does not seem to
 have an effect.

 What should I do?

modifications to objects inside of flush() aren't going to propagate  
the same way as when they're outside of the flush().   Within  
MapperExtension you should generally just do things with the  
connection (i.e., issue SQL directly).

Otherwise, we have SessionExtension, which has a before_flush() hook,  
and you can poke around inside the Session and change things freely  
before anything flush()-related occurs.  The catch there is that you'd  
probably want to be using the latest 0.5 trunk for that (post beta3),  
since we've fixed it up a bit to work in a more useful way.   I find  
that using before_flush() and after_flush() is generally a better way  
to go for dependent changes/SQL, since you aren't doing  
things inside of the flush() itself, where it's hard to predict when  
things will actually happen.
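A toy model of the before_flush idea, in plain Python rather than SQLAlchemy's API: hooks run while objects are still pending, so mutations they make to other objects are picked up by the same flush. The Tile/neighbor names echo the original question, but the whole class is hypothetical:

```python
class Tile:
    def __init__(self, name):
        self.name = name
        self.neighbor_count = 0

class ToySession:
    # Minimal stand-in for a session with a before_flush hook.
    def __init__(self, before_flush=()):
        self.new = []                    # pending inserts
        self.stored = []                 # stand-in for rows already in the DB
        self.before_flush = list(before_flush)

    def add(self, obj):
        self.new.append(obj)

    def flush(self):
        for hook in self.before_flush:   # safe point: mutate other objects here
            hook(self)
        self.stored.extend(self.new)
        self.new = []

def bump_neighbors(session):
    # Hypothetical hook: each new tile bumps every stored tile's count
    # and records how many neighbors it sees itself.
    for tile in session.new:
        for other in session.stored:
            other.neighbor_count += 1
        tile.neighbor_count = len(session.stored)

s = ToySession(before_flush=[bump_neighbors])
s.add(Tile("a")); s.flush()
s.add(Tile("b")); s.flush()
# -> both "a" and "b" end with neighbor_count == 1
```

The design point is the one made above: because the hook runs before the write-out begins, its changes participate in the flush, whereas the same mutation made from inside the flush (as in after_insert) may never be written.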

