[sqlalchemy] Re: Problems/Bug in ordering_list (UNIQUE KEY violation)

2009-03-10 Thread oberger

Thanks for your answer.



On 27 Feb., 02:14, jason kirtland j...@discorporate.us wrote:
 Michael Bayer wrote:

  On Feb 19, 2009, at 4:33 PM, oberger wrote:

  Thank you Michael,

   but I am not able to get this to work, even with a flush and a
   commit after every statement.
   I understand the problem with dependent UPDATEs/DELETEs.

   But how is the ordering_list supposed to work?
   When I delete one entry with: del short_trip.trip_stops[1]

   and then flush() and commit(), the ordering_list has to do some
   work in the corresponding database table.

   I'm not sure; perhaps Jason can chime in on this

 For this constraint configuration you might try making the DB constraint
 initially deferred.  Given the ordering of statement execution in the
 unit of work, no other ideas are coming to mind.  The ordering_list
 itself is totally ignorant of the ORM.  It doesn't issue any flushes or
 deletions, though one could make an implementation that did embed that
 level of control over the unit of work.

 -j
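
[A minimal sketch of the "initially deferred" constraint Jason mentions.
The table and column names are invented to match the thread, and deferrable
constraints need a backend that supports them (e.g. PostgreSQL); this is
not the original poster's schema.]

from sqlalchemy import (MetaData, Table, Column, Integer, ForeignKey,
                        UniqueConstraint)

metadata = MetaData()

trips = Table('trips', metadata,
    Column('id', Integer, primary_key=True))

trip_stops = Table('trip_stops', metadata,
    Column('id', Integer, primary_key=True),
    Column('trip_id', Integer, ForeignKey('trips.id')),
    Column('position', Integer, nullable=False),
    # Checking the constraint only at COMMIT gives the flush room to
    # renumber positions without hitting the UNIQUE violation mid-flush.
    UniqueConstraint('trip_id', 'position',
                     deferrable=True, initially='DEFERRED'))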



[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-10 Thread 一首诗

Hi Jeff,

Thanks for your kind suggestion.

I first added some log decorators, but I found that they might end up
printing SQLAlchemy objects which have not been bound to any session.

And I am not quite sure how to make the decorator more general.

Actually, I think I must use model as the first parameter because, as an
instance method, _getObjectById requires the first parameter to be
self.
Can you write a few lines of code to show your suggestion?
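
[A minimal sketch of what Jeff seems to be describing -- a session per
call, a rollback on SQLAlchemy errors, and a wrapper that simply forwards
*args/**kwargs. The Database class and method names follow the code quoted
further down; this is illustrative, not Jeff's actual code.]

import functools
from sqlalchemy import create_engine
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker

def require_session(f):
    '''Open a session per call, roll back on SQLAlchemy errors, always close.'''
    @functools.wraps(f)
    def wrapper(self, *args, **kw):
        sess = self.Session()
        try:
            return f(self, sess, *args, **kw)
        except SQLAlchemyError:
            sess.rollback()   # don't leave a half-finished transaction behind
            raise             # let the caller (e.g. an errback) see the failure
        finally:
            sess.close()
    return wrapper

class Database(object):
    def __init__(self, conn_str):
        self.engine = create_engine(conn_str, echo=False)
        self.Session = sessionmaker(bind=self.engine, expire_on_commit=False)

    @require_session
    def _getObjectById(self, sess, klass, id):
        return sess.query(klass).get(id)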

On Mar 8, 5:06 am, Jeff FW jeff...@gmail.com wrote:
 That's pretty similar to what I do, actually, if a bit simpler (but
 that's good!)  One suggestion would be to add an except clause (maybe for
 the base SQLAlchemy exception class) in your try block, otherwise you
 run the risk of things dying in an ugly way.  I'm not familiar with
 pyamf, so I don't know how it would handle errors, but Twisted usually
 carries on as if nothing happened.

 Also, I'd make the decorator a bit more general--don't put the model
 argument in wrapper().  Put sess first, then take *args and **kwargs,
 and pass those right to the inner function f(). That way you can reuse
 it for anything that requires a DB session.

 Other things you could add (if so inclined) are decorators for logging
 and other types of error handling (like catching IntegrityErrors thrown
 by duplicates).  I do those things, but I might be a bit OCD :-)

 -Jeff

 On Mar 7, 1:41 am, 一首诗 newpt...@gmail.com wrote:



   Hi, thanks for your reply.  I'm using it much the same way as you.  The
   only difference is that I am using pyamf instead of PB.

   On every request, I delegate the required db operations to a class called
   Database, similar to the code below.

   I used to use scoped_session instead of creating and closing a session
   every time, but as I said in my earlier mails, that didn't work.

   The code below seems to work right now, but if you have more
   suggestions, I will be very thankful.

   #=========================================================
   def require_session(f):
       '''create and close a session for each synchronous method'''
       def wrapper(model, *args, **kw):
           sess = model.Session()
           try:
               return f(model, sess, *args, **kw)
           finally:
               sess.close()
       return wrapper

   class Database(object):
       def __init__(self, conn_str):
           self.conn_str = conn_str
           self.engine = create_engine(self.conn_str, echo=False)
           self.Session = sessionmaker(bind=self.engine,
                                       expire_on_commit=False)

       def getObjectById(self, klass, id):
           return threads.deferToThread(self._getObjectById, klass, id)

       @require_session
       def _getObjectById(self, sess, klass, id):
           return sess.query(klass).get(id)
   #=========================================================

  On Mar 6, 5:44 am, Jeff FW jeff...@gmail.com wrote:

   Don't use scoped_session--you'll run into problems no matter what you
   do.  I'm using Perspective Broker from Twisted with SQLAlchemy.  I
   make sure to create and commit/rollback a session for *every* PB
   request.  It works perfectly, and that's the only way I was really
   able to get it to work in all cases.

   Assuming you're using Twisted in a similar way, you could write a
   simple decorator to wrap any functions that need a database session in
   the begin/commit stuff as necessary.

   If you can give more details of how you're using Twisted, I might be
   able to offer some more insight.

   -Jeff

   On Mar 5, 12:33 am, 一首诗 newpt...@gmail.com wrote:

I'm not quite sure, but I think I'm pretty careful about sharing objects
between threads.

1st, I cache as few ORM objects as possible.  I tried to detach
them, but I found that if I detach them, I can't access any of their
fields any more.

2nd, I create new ORM objects based on client requests, pass them to
class Database, merge them into scoped sessions, change, commit,
and then discard these objects.

3rd, I switch to sqlite frequently to check if there is any database
operation outside Database, because sqlite doesn't allow multi-thread
access.

Actually it seemed to work until 2 or 3 days ago, when some cases
suddenly began to hang the server.

Ah, as I've already written lots of code in ORM, I think maybe I
should try to change Database to use a dedicated thread to handle all
database operations.

That might be a bottleneck for my application, but I really can't give
up the ORM, as these mapper classes are used everywhere in my application.

On Mar 4, 7:26 pm, 一首诗 newpt...@gmail.com wrote:

 Hi, all

 I am using SQLAlchemy with Twisted in my project in the way below:
 defer any database operation so that Twisted's main thread won't be
 blocked.

 And I use scoped_session, so that sessions won't have to be created
 again and again.

 ==
 class Database(object):
     def __init__(self, conn_str):
         self.conn_str = conn_str
 

[sqlalchemy] Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

Hi,
I have done a fair bit of work on a Sybase ASE dialect for Alchemy and it
is now in a primitive but usable condition for simple applications. My
employers are fine with contributing the code back to the project and
I intend to coordinate with Mike Bayer about this shortly. In the
meantime, we would like to deploy the driver locally and work out some
of the bugs. Ideally, we would like to do this separately from our
centralized SQLAlchemy installation, as the release cycles for
production s/w are much longer than the anticipated cycles for the
Sybase dialect. Is it possible to use a dialect located outside the
main installation by something as simple as the connection URI?

Have any of you encountered similar situations? Do you have any
suggestions on ways to address this issue?

pjjH






[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread Michael Bayer



you can install the dialect using a setuptools entry point.  SQLAlchemy
looks for dialect modules using the sqlalchemy.databases entry point
name, so in this case you might name it sqlalchemy.databases.sybase-ase.







[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

Thanks Mike. This sounds great, although I have to admit that I don't
follow it completely, as I have not authored anything via setuptools
before. If this is trivial for you, could you sketch out what this
would look like?

pjjH


On Mar 10, 11:25 am, Michael Bayer mike...@zzzcomputing.com wrote:
 you can install the dialect using a setuptools entry point.  SQLAlchemy
 looks for dialect modules using the sqlalchemy.databases entry point
 name, so in this case you might name it sqlalchemy.databases.sybase-ase.




[sqlalchemy] Inheritance and binds to several engines

2009-03-10 Thread Denis S. Otkidach

I use declarative to define the database schema, and the binds parameter
of the session is constructed from the table lists of several MetaData
collections. And I have a problem with inherited models, where the table
is represented as a Join object: the get_bind() method doesn't find an
engine. The quick-and-dirty solution I use is:

class Session(orm.session.Session):

    def get_bind(self, mapper, clause=None):
        from sqlalchemy.orm.util import _class_to_mapper
        if mapper is not None and clause is None:
            c_mapper = _class_to_mapper(mapper)
            if hasattr(c_mapper, 'mapped_table'):
                clause = c_mapper.mapped_table
        return orm.session.Session.get_bind(self, mapper, clause)

Is this a bug in SQLAlchemy, or am I using it inappropriately? What is
the correct solution?



[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread jason kirtland

It'd look like this:

http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/setup.py

Your dialect will be available to SA after you 'python setup.py install' 
or 'python setup.py develop' in your -ase distribution.
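
[A hedged sketch of what such a setup.py could look like. The project,
package, and module names below are placeholders -- the linked ibm_db_sa
setup.py is the authoritative example, and the exact entry-point target
(module vs. dialect class) depends on the SQLAlchemy version.]

from setuptools import setup, find_packages

setup(
    name='sqlalchemy-sybase-ase',
    version='0.1.0',
    description='Sybase ASE dialect for SQLAlchemy, distributed separately',
    packages=find_packages(),
    install_requires=['SQLAlchemy'],
    entry_points={
        # SQLAlchemy scans this entry-point group when it resolves the
        # dialect named in a connection URI such as 'sybase-ase://...'.
        'sqlalchemy.databases': [
            'sybase-ase = sybase_ase_sa.dialect:dialect',
        ],
    },
)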




[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread phrrn...@googlemail.com

The quality of the support in this group is remarkable. The answers
are starting to remind me of Guy Harris in terms of quality and
clarity! (If the name is not familiar to you, check out the Usenet
archives from the mid-to-late '80s.)

Thanks very much.
pjjH


On Mar 10, 11:34 am, jason kirtland j...@discorporate.us wrote:
 It'd look like this:

 http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/...

 Your dialect will be available to SA after you 'python setup.py install'
 or 'python setup.py develop' in your -ase distribution.




[sqlalchemy] Re: Inheritance and binds to several engines

2009-03-10 Thread Michael Bayer


If you're using bound metadata, the Session will use the binds associated
with each table automatically.  It doesn't matter if a mapper is mapped to
a join; a join object will find a bind based on the tables it joins.  If
you are mapped specifically to a select() object, I noticed that the
select is only going to locate the bind from the first selectable in the
JOIN list, which might be your problem.  This should be improved.

The best way to go is to just bind the session directly to the engine or
engines needed, using the bind or binds arguments.  I don't bind
metadata to engines in any case when using the ORM.  Bound metadata has
also been downplayed very much in the documentation in recent months.
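
[A minimal sketch of binding the Session directly to engines via the
"binds" argument, as suggested above. The engine URLs and mapped classes
are made up for illustration; binds also accepts Table or Mapper objects
as keys.]

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)

class ReportRow(Base):
    __tablename__ = 'report_rows'
    id = Column(Integer, primary_key=True)

widgets_engine = create_engine('sqlite:///widgets.db')
reports_engine = create_engine('sqlite:///reports.db')

# Each class (or its underlying Table) is routed to its own engine; no
# bound metadata is involved.
Session = sessionmaker(binds={
    Widget: widgets_engine,
    ReportRow: reports_engine,
})
session = Session()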


 






[sqlalchemy] Using order_by in an association many-to-many relationship with columns from the association object

2009-03-10 Thread Scott

Is there a way with the current iteration of SQLAlchemy to add a
column to the association table in a many-to-many relationship, with
that column used to order the join? I looked at the order_by attribute
of the ManyToMany() relationship definition, but it seems that this is
expecting a string naming the column in the related entity. I'm using
Elixir on top of SQLAlchemy, but here are my relevant class and table
definitions:

procedure_cpt_codes = Table('procedure_cpt_codes', metadata,
                            autoload=True)

class CptCode(Entity):
    using_options(tablename='cpt_codes', autosetup=True)

    name = Field(Unicode)
    code = Field(Unicode)
    description = Field(Unicode)

class Procedure(Entity):
    using_options(tablename='procedures', autosetup=True)

    complications = OneToMany('Complication')
    cpt_codes = ManyToMany(
        'CptCode',
        table=procedure_cpt_codes, lazy=False,
        foreign_keys=lambda: [procedure_cpt_codes.c.procedure_id,
                              procedure_cpt_codes.c.cpt_code_id],
        primaryjoin=lambda: Procedure.id ==
            procedure_cpt_codes.c.procedure_id,
        secondaryjoin=lambda: CptCode.id ==
            procedure_cpt_codes.c.cpt_code_id,
        order_by=procedure_cpt_codes.c.cpt_codes_idx
    )
    procedure_date = Field(Date)

I get the following exception when run as listed:

Traceback (most recent call last):
  File /System/Library/Frameworks/Python.framework/Versions/Current/
Extras/lib/python/PyObjC/PyObjCTools/AppHelper.py, line 235, in
runEventLoop
main(argv)
  File /Users/tswall/Documents/workspace/Cocoa/python/Epdb/build/
Debug/Epdb.app/Contents/Resources/MyController.py, line 15, in
buttonPushed_
for instance in Patient.query.all():
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/entity.py, line 641, in __get__
elixir.setup_all()
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/__init__.py, line 145, in setup_all
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/entity.py, line 816, in setup_entities
method()
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/entity.py, line 421, in setup_properties
self.call_builders('create_properties')
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/entity.py, line 433, in call_builders
getattr(builder, what)()
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/relationships.py, line 417, in create_properties
self.target._descriptor.translate_order_by(kwargs['order_by'])
  File /Library/Python/2.5/site-packages/Elixir-0.6.1-py2.5.egg/
elixir/entity.py, line 322, in translate_order_by
for colname in order_by:
TypeError: 'Column' object is not iterable

When I change the order_by above to
  order_by = 'procedure_cpt_codes.c.cpt_codes_idx' #or 'cpt_codes_idx'
I get an error that it can't find column 'cpt_codes_idx' on relation
table 'CptCode'.

Any advice would be appreciated!
Scott




[sqlalchemy] Re: Using order_by in an association many-to-many relationship with columns from the association object

2009-03-10 Thread Michael Bayer

order_by accepts a Column object, i.e. table.c.whatever, so pass that in.
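
[In plain SQLAlchemy terms (outside Elixir), that looks like the sketch
below; all table, class, and column names are invented to mirror the
post above.]

from sqlalchemy import MetaData, Table, Column, Integer, Unicode, ForeignKey
from sqlalchemy.orm import mapper, relation

metadata = MetaData()

cpt_codes = Table('cpt_codes', metadata,
    Column('id', Integer, primary_key=True),
    Column('code', Unicode(16)))

procedures = Table('procedures', metadata,
    Column('id', Integer, primary_key=True))

procedure_cpt_codes = Table('procedure_cpt_codes', metadata,
    Column('procedure_id', Integer, ForeignKey('procedures.id')),
    Column('cpt_code_id', Integer, ForeignKey('cpt_codes.id')),
    Column('cpt_codes_idx', Integer))

class CptCode(object):
    pass

class Procedure(object):
    pass

mapper(CptCode, cpt_codes)
mapper(Procedure, procedures, properties={
    'cpt_codes': relation(CptCode, secondary=procedure_cpt_codes,
                          # a Column object, not a string
                          order_by=procedure_cpt_codes.c.cpt_codes_idx),
})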




[sqlalchemy] GUID creation causing foreign key errors

2009-03-10 Thread Bryan

The primary keys in my db are GUIDs, char(36).  When I generate the
GUID in python using the uuid module, everything works fine.  But when
I allow the db to generate the GUIDs I get foreign key errors when
trying to save a new parent and child.

A look at the SQL generated shows that the parent is being saved
first, but when the child is saved, it does not have the parent's new
primary key in the related field.  Instead of the parent's new GUID in
the related field, it has 0L.

When using the first method below, what is stopping SQLAlchemy from
getting the newly created GUID so it can be referenced by the child's
SQL?

# This way does not work
# ---------------------------------------------------------------
def colId():
    return Column('id', types.CHAR(36), primary_key=True,
                  default=func.convert(literal_column('UUID() USING utf8')))

# This way works
# ---------------------------------------------------------------
from uuid import uuid4

def colId():
    return Column('id', types.CHAR(36), primary_key=True,
                  default=lambda: str(uuid4()))




[sqlalchemy] Using Alchemy across tens of thousands of tables, hundreds of databases and dozens of dataservers.

2009-03-10 Thread phrrn...@googlemail.com

Thanks to all the help from the group, I am now facing an issue much
earlier than anticipated: how to manage Alchemy in -- apologies for
using a dreadfully overused phrase -- an 'Enterprise Setting'. This
really boils down to matters of scale: tens of thousands of tables in
hundreds of databases across several dozen dataservers.  I am
interested in how to structure and organize the Python code-artifacts
containing the metadata collections, POPO class declarations and
mapper() invocations which associate the classes to the tables. I am
also interested in configuration and credential management i.e. how to
control which dataservers are queried and the credentials used to
connect to them. Various use-cases include: use a replica reporting
dataserver for queries; use a development system for the
foo.bar.bletch class/class-hierarchy but use 'production' for
everything else; use SQLite for high-performance querying of stable
(i.e. does not change much if ever over time) reference/lookup data
but use production systems for live, trading-related data.

Now how does one manage all of this at the kinds of scale described
above? How does one stitch the various mapped classes to the
appropriate database engines at runtime? What kind of namespaces -- if
any -- would you use to manage large numbers of metadata collections?

If your eyes have glazed over at this point, there are more details
below!

I am looking forward to hearing if anyone has used Alchemy 'in the
large' and what their experiences have been, either positive or
negative.

thanks,

pjjH


Starting at the lowest levels, we have a centralized time-series of
physical meta-data for a number of our dataservers (production,
development and QA) with a reasonably unified representation independent
of the underlying dataserver technology (thanks to the ODBC catalog
calls). It is reasonably easy to add in new dataserver platforms and
the system is linearly scalable. In conjunction with the Cheetah
templating system and the wonderful cog code-generation tool, we can
code-generate Alchemy meta-data collections for arbitrary subsets of
tables: this fragment will generate the Python code to populate a
MetaData collection with all tables from the 'pear' database on the
'BANANA' dataserver

metadata = MetaData()

#[[[cog
#m = Mongo(dataserver='BANANA', database='pear')
#cog.outl("# auto-generated SQLAlchemy stuff here")
#tables = m.lookup()
#for table in tables:
#   cog.outl("%s" % m.apply_template('sa.tmpl', table))
#]]]

Similarly, later on in the same file or in a completely different
file, we can have a cog fragment like this one that generates stub
POPO class declarations and mapper invocations that map the POPO class
to the given table.

#[[[cog
#def camelize(s):
#    return ''.join(word[0].upper() + word[1:] for word in s.split('_'))
#
#for table in tables:
#    cog.outl("%s" % m.apply_template('sa_mappers.tmpl', table,
#                                     {'camelize': camelize}))
#]]]


We also have hand-written mapper code that adds what I call 'cooked
accessors' to the POPO classes: this one adds a property called 'Type'
which returns a single OrgType object (OrgType being the wrapper class
around the foreign-key reference/lookup table 'org_type'):
class_mapper(Organization).add_properties({
    'Type': relation(OrgType, uselist=False),
})

This basic mechanism can be used to build up a collection of 'boring'
classes (which I sometimes hear referred to as Data Transfer Objects
or DTOs). cog allows us to mix hand-written and auto-generated code in
the same file so we can have a reasonably loose, 'build-time' coupling
between Python and the database schema and we get change management
and auditability (because now the interesting bits of the database
schema are serialized as Python code and get checked in, tagged etc
just like any other file). We also get documentation, thanks to Mike's
suggestion to use attributes.instrumentation_finders.

It seems obvious that related groups of objects on the same dataserver
should be grouped together in the same metadata collection as part of
the 'build'/code-generation process. It also seems obvious that each
metadata collection should have some form of default association with
a dataserver URI. However, we also want to be able to configure
metadata subsets to talk to development dataservers while everything
else talks to 'production' or, perhaps less controversially, have
reads go against a read-only replica while writes go to a master.




[sqlalchemy] Re: GUID creation causing foreign key errors

2009-03-10 Thread Michael Bayer

I would qualify this as a bug, since the compiler should be executing the
guid() function externally to the INSERT statement: its newly generated
value is required as a return value.  This is ticket 1335.

Note that pre-execution of the guid function is required here.  You can
force this manually as follows:

Column('id', types.CHAR(36), primary_key=True,
       default=lambda ctx: ctx.connection.scalar(
           func.convert(literal_column('UUID() USING utf8'))))






[sqlalchemy] Re: Supporting fully-qualified table names and cross-database references in Sybase and SQL Server?

2009-03-10 Thread phrrn...@googlemail.com

As it happens, this works on the Sybase dialect without fixing the
quoting at all!  Apparently SQL such as this is happily accepted by
Sybase:

SELECT [fdcommon.dbo].organization.org_id,
[fdcommon.dbo].organization.abbrev
FROM [fdcommon.dbo].organization JOIN [fdcommon.dbo].org_type
ON [fdcommon.dbo].org_type.org_type_id =
[fdcommon.dbo].organization.org_type

I resorted to some brute-force list operations rather than regular
expressions to parse out the component names (see diff below). I will
fix the quoting shortly (within the next day or so) and submit a
single diff.

thanks,

pjjH


Index: schema.py
===================================================================
--- schema.py   (revision 5816)
+++ schema.py   (working copy)
@@ -876,17 +876,22 @@
                 raise exc.ArgumentError(
                     "Parent column '%s' does not descend from a "
                     "table-attached Column" % str(self.parent))
-            m = re.match(r"^(.+?)(?:\.(.+?))?(?:\.(.+?))?$", self._colspec,
-                         re.UNICODE)
+            m = self._colspec.split('.')
             if m is None:
                 raise exc.ArgumentError(
                     "Invalid foreign key column specification: %s" %
                     self._colspec)
-            if m.group(3) is None:
-                (tname, colname) = m.group(1, 2)
+
+            m.reverse()
+            (colname, tname) = m[0:2]
+
+            if m[2] is None:
                 schema = None
             else:
-                (schema, tname, colname) = m.group(1, 2, 3)
+                m1 = m[2:]
+                m1.reverse()
+                schema = '.'.join(m1)
+


On Mar 5, 7:21 pm, phrrn...@googlemail.com phrrn...@googlemail.com
wrote:
 OK. If it might be as easy as that, I will have a go and see how well
 it works.

 pjjH

 On Mar 5, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  phrrn...@googlemail.com wrote:

   Sybase (and SQL Server) support cross-database JOINs (Sybase even
   supports cross-database foreign-key constraints). There are four
   components to an object identifier:

   1 = Object name
   2 = Schema name
   3 = Database name
   4 = Server name

   the dataserver, database and schema are assumed for one-part
   identifiers (e.g. 'foo'), dataserver and database assumed for two-part
   identifiers (e.g. 'dbo.foo') and, finally, dataserver assumed for
   three-part identifiers ('production.dbo.foo')

   e.g. SELECT foo.* FROM BANANA.production.dbo.foo
  SELECT foo.* FROM production.dbo.foo
  SELECT foo.* FROM production..foo -- same as the previous query
   if the caller's default schema is dbo
  SELECT foo.* FROM dbo.foo
  SELECT foo.* FROM foo
  SELECT foo.* FROM ..foo
   I am not so interested in supporting four-part identifiers in SA but I
   would like to figure out how to support three-part identifiers as very
   many of our databases have cross-database references.

   One natural (to me!) way of doing this is to add a 'database' property
   to the Table and ForeignKeyConstraint schema items and have the Sybase/
   SQL Server dialects always emit fully-qualified three-part identifiers
   for table names.

  we have an element on Table called "schema".  I had in mind that "schema"
  should accept dotted names, so SQLA generally doesn't need to get
  involved.  The only change needed is to the IdentifierPreparer, such that
  when quote_schema is called, it separates the name along the dot first so
  that it can quote each token separately.
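
[A hedged sketch of the IdentifierPreparer change Michael describes: split
the schema name on the dot and quote each token on its own. The class name
is made up, and exact method signatures vary between SQLAlchemy versions.]

from sqlalchemy.sql import compiler

class DottedSchemaPreparer(compiler.IdentifierPreparer):
    def quote_schema(self, schema, force=True):
        # 'fdcommon.dbo' becomes [fdcommon].[dbo] rather than [fdcommon.dbo]
        return '.'.join(self.quote(token, force)
                        for token in schema.split('.'))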