[sqlalchemy] Re: app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-25 Thread Allen Bierbaum

I hadn't described the details because there is not much to the workaround
and it is pretty application specific.

The summary is that I moved some application-level filtering that was
being done in Python code on the results into the query itself, so fewer
rows are returned.  This saves a great deal of memory in my case
and speeds up the system dramatically.

Once I understood that there wasn't really a memory leak, I just
optimized what was already there to use less memory.
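A sketch of the idea, with illustrative table and column names (not the
actual application's):

# Before: fetch everything, then filter in Python -- every row is
# materialized in the process before most of it is discarded.
rows = [r for r in table.select().execute() if r['state'] == 'IA']

# After: push the filter into the query so the database only returns
# the rows that are actually needed.
rows = table.select(table.c.state == 'IA').execute().fetchall()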

-Allen

On Wed, Feb 25, 2009 at 6:59 PM, Peter Hansen pe...@engcorp.com wrote:

 Allen Bierbaum wrote:
 On Tue, Feb 24, 2009 at 4:44 AM, Chris Miles miles.ch...@gmail.com wrote:
 On Feb 22, 6:08 am, Allen Bierbaum abierb...@gmail.com wrote:
 Python 2.5 and later will free up garbage collected memory, handing it
 back to the system.  Previous versions of Python would never free up
 memory (hence never shrink in size).

 Are you using Python 2.4?

 I am using Python 2.5.  But now that I understand the issue better I
 have come up with a workaround.  The biggest issue was that I didn't
 understand what I should be seeing as far as memory usage.

 Although your workaround may not be generally useful, it would still be
 nice for posterity (i.e. those searching through this thread in future)
 if you could summarize how you've actually addressed this issue to your
 satisfaction, however crude or unusual that might be.  Thanks. :)

 --
 Peter Hansen

 





[sqlalchemy] Re: app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-24 Thread Allen Bierbaum

On Tue, Feb 24, 2009 at 4:44 AM, Chris Miles miles.ch...@gmail.com wrote:
 On Feb 22, 6:08 am, Allen Bierbaum abierb...@gmail.com wrote:
 The python process.  The number of objects seems to remain fairly
 controlled.  But the amount of resident memory used by the python
 process does not decrease.  I had expected that by calling
 gc.collect(2) python would reclaim any objects that could be freed and
 free all memory associated with them, thus decreasing the consumed
 memory.  Maybe this is an invalid assumption.  Do you know any way to
 ask python to shrink its process size (i.e. clear unused memory that
 has been freed but evidently not given back to the OS)?

 Python 2.5 and later will free up garbage collected memory, handing it
 back to the system.  Previous versions of Python would never free up
 memory (hence never shrink in size).

 Are you using Python 2.4?

I am using Python 2.5.  But now that I understand the issue better I
have come up with a workaround.  The biggest issue was that I didn't
understand what I should be seeing as far as memory usage.

Thanks to everyone for the help.

-Allen




[sqlalchemy] app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-21 Thread Allen Bierbaum

Hello all:

We are running into a very strange memory leak issue.  Here are the details:

- The code executes select statements directly on a table using SA
0.5.2, Postgres 8.3, Psycopg2, all running on Linux (Ubuntu 8.04)
- Querying a table with approximately 7 records
- When we query the table, convert the results to a list, and then
free both the list and the results proxy, memory seems to stay used
- If we query the table with ever increasing size limits (limit
clause), the amount of memory consumed increases
- If we query with a limit of say 5 and then perform smaller
queries, the smaller queries do not increase the amount of memory
consumed overall
- In other words, the amount of memory caps out in a way that looks
like it is linear with the maximum number of table rows ever returned

It looks like something in SA, Psycopg2, or PG is holding references
to the returned rows but we have not been able to track it down.  I am
working on a completely standalone application to demonstrate the
issue.  Before I went much further though I wanted to post this
question and code sample (not standalone, but shows the commands we
are using) in case anyone sees something fishy.

Any ideas?

Thanks,
Allen

--
import os, sys, gc
import psycopg2
import sqlalchemy as sa
import sqlalchemy.exc

class DataProvider(object):
   def __init__(self, dbUri, tableName):
      self._dbUri     = dbUri
      self._dbEngine  = None
      self._metadata  = None
      self._table     = None
      self._tableName = tableName

   def initialize(self):
      self._dbEngine = sa.create_engine(self._dbUri)
      self._metadata = sa.MetaData()
      self._metadata.bind = self._dbEngine

      # Build up the table metadata reflectively
      name_parts = self._tableName.split('.')

      if len(name_parts) > 1:
         self._table = sa.Table(name_parts[1], self._metadata, autoload = True,
                                schema = name_parts[0])
      else:
         self._table = sa.Table(name_parts[0], self._metadata, autoload = True)

   def query(self, limit = None):
      s = self._table.select()

      if limit is not None:
         s = s.limit(limit)

      print "objs before: ", len(gc.get_objects())
      print "garbage: ", len(gc.garbage)
      result_proxy = s.execute()
      result_objects = [x for x in result_proxy]
      #print "num results: ", len(result_objects)
      result_proxy.close()
      result_objects = []
      results = []
      gc.collect(2)
      print "objs after: ", len(gc.get_objects())
      print "garbage: ", len(gc.garbage)
      return results

db_uri          = sys.argv[1]
collection_name = sys.argv[2]

prov = DataProvider(db_uri, collection_name)
prov.initialize()

limits = [1, 2, 3, 4, 5, 6,
          1, 2, 3, 4, 5, 6]
# Query
for x in range(10):
   limit = limits[x]
   print "[ %s ] limit: %s" % (x, limit)
   gc.collect(2)
   printMemoryInformation()   # defined in the attached helper script
   features = prov.query(limit = limit)
   features = None
   gc.collect(2)
   printMemoryInformation()

--




[sqlalchemy] Re: app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-21 Thread Allen Bierbaum
I spent some more time and came up with a completely standalone test
application. (attached) (requires SA 0.5.2 and some database backend).
 I have tested this with postgres and with sqlite and both of them
exhibit the behavior I was talking about.  Namely the memory keeps
increasing and stays topped out even though all handles to the results
have been cleared in python.

I am sure that I must be doing something very wrong, but I can't
figure it out.  Can anyone point out my error?

-Allen

On Sat, Feb 21, 2009 at 7:52 AM, Allen Bierbaum abierb...@gmail.com wrote:
 Hello all:

 We are running into a very strange memory leak issue.  Here are the details:

 - The code executes select statements directly on a table using SA
 0.5.2, Postgres 8.3, Psycopg2, all running on Linux (Ubuntu 8.04)
 - Querying a table with approximately 7 records
 - When we query the table, convert the results to a list, and then
 free both the list and the results proxy, memory seems to stay used
 - If we query the table with ever increasing size limits (limit
 clause), the amount of memory consumed increases
 - If we query with a limit of say 5 and then perform smaller
 queries, the smaller queries do not increase the amount of memory
 consumed overall
 - In other words, the amount of memory caps out in a way that looks
 like it is linear with the maximum number of table rows ever returned

 It looks like something in SA, Psycopg2, or PG is holding references
 to the returned rows but we have not been able to track it down.  I am
 working on a completely standalone application to demonstrate the
 issue.  Before I went much further though I wanted to post this
 question and code sample (not standalone, but shows the commands we
 are using) in case anyone sees something fishy.

 Any ideas?

 Thanks,
 Allen

 --
 import os, sys, gc
 import psycopg2
 import sqlalchemy as sa
 import sqlalchemy.exc

 class DataProvider(object):
    def __init__(self, dbUri, tableName):
       self._dbUri     = dbUri
       self._dbEngine  = None
       self._metadata  = None
       self._table     = None
       self._tableName = tableName

    def initialize(self):
       self._dbEngine = sa.create_engine(self._dbUri)
       self._metadata = sa.MetaData()
       self._metadata.bind = self._dbEngine

       # Build up the table metadata reflectively
       name_parts = self._tableName.split('.')

       if len(name_parts) > 1:
          self._table = sa.Table(name_parts[1], self._metadata, autoload = True,
                                 schema = name_parts[0])
       else:
          self._table = sa.Table(name_parts[0], self._metadata, autoload = True)

    def query(self, limit = None):
       s = self._table.select()

       if limit is not None:
          s = s.limit(limit)

       print "objs before: ", len(gc.get_objects())
       print "garbage: ", len(gc.garbage)
       result_proxy = s.execute()
       result_objects = [x for x in result_proxy]
       #print "num results: ", len(result_objects)
       result_proxy.close()
       result_objects = []
       results = []
       gc.collect(2)
       print "objs after: ", len(gc.get_objects())
       print "garbage: ", len(gc.garbage)
       return results

 db_uri          = sys.argv[1]
 collection_name = sys.argv[2]

 prov = DataProvider(db_uri, collection_name)
 prov.initialize()

 limits = [1, 2, 3, 4, 5, 6,
           1, 2, 3, 4, 5, 6]
 # Query
 for x in range(10):
    limit = limits[x]
    print "[ %s ] limit: %s" % (x, limit)
    gc.collect(2)
    printMemoryInformation()   # defined in the attached helper script
    features = prov.query(limit = limit)
    features = None
    gc.collect(2)
    printMemoryInformation()

 --



import os, sys, gc
import psycopg2

import sqlalchemy as sa
import sqlalchemy.exc

def getMemoryUsage(VmKey):
   proc_status = '/proc/%d/status' % os.getpid()
   scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
            'KB': 1024.0, 'MB': 1024.0*1024.0}
   v = open(proc_status).read()

   # get the VmKey line, e.g. 'VmRSS:    kB\n ...'
   i = v.index(VmKey)
   v = v[i:].split(None, 3)  # split on whitespace
   if len(v) < 3:
      return 0.0  # invalid format?
   # convert Vm value to bytes
   return float(v[1]) * scale[v[2]]

def memory():
   return getMemoryUsage('VmSize:') / (1024*1024)

def resident():
   return getMemoryUsage('VmRSS:') / (1024*1024)

def printMemoryInformation():
   print "mem: ", memory(), " res: ", resident()

[sqlalchemy] Re: app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-21 Thread Allen Bierbaum

The python process.  The number of objects seems to remain fairly
controlled.  But the amount of resident memory used by the python
process does not decrease.  I had expected that by calling
gc.collect(2) python would reclaim any objects that could be freed and
free all memory associated with them, thus decreasing the consumed
memory.  Maybe this is an invalid assumption.  Do you know any way to
ask python to shrink its process size (i.e. clear unused memory that
has been freed but evidently not given back to the OS)?

-Allen

On Sat, Feb 21, 2009 at 12:15 PM, Michael Bayer
mike...@zzzcomputing.com wrote:

 are you referring to memory of the python process or number of
 unclaimed objects still present ?   psycopg2 will buffer the full
 results from the database in memory before fetchone() is ever called,
 which will force the python process to grow to accommodate it.  a
 python process generally stays large once it grows.
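(As an aside, psycopg2 can avoid that client-side buffering with a
server-side cursor, which streams rows in batches instead of materializing
the whole result up front.  A minimal sketch -- connection string and table
name are made up:)

import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor(name='streaming_cur')  # giving it a name makes it server-side
cur.execute("SELECT * FROM big_table")
while True:
    rows = cur.fetchmany(1000)           # pull rows in bounded batches
    if not rows:
        break
    # ... process rows ...
cur.close()
conn.close()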



 On Feb 21, 2009, at 11:16 AM, Allen Bierbaum wrote:

 I spent some more time and came up with a completely standalone test
 application. (attached) (requires SA 0.5.2 and some database backend).
 I have tested this with postgres and with sqlite and both of them
 exhibit the behavior I was talking about.  Namely the memory keeps
 increasing and stays topped out even though all handles to the results
 have been cleared in python.

 I am sure that I must be doing something very wrong, but I can't
 figure it out.  Can anyone point out my error?

 -Allen

 On Sat, Feb 21, 2009 at 7:52 AM, Allen Bierbaum
 abierb...@gmail.com wrote:
 Hello all:

 We are running into a very strange memory leak issue.  Here are the
 details:

  - The code executes select statements directly on a table using SA
 0.5.2, Postgres 8.3, Psycopg2, all running on Linux (Ubuntu 8.04)
 - Querying a table with approximately 7 records
 - When we query the table, convert the results to a list, and then
 free both the list and the results proxy, memory seems to stay used
 - If we query the table with ever increasing size limits (limit
 clause), the amount of memory consumed increases
 - If we query with a limit of say 5 and then perform smaller
 queries, the smaller queries do not increase the amount of memory
 consumed overall
 - In other words, the amount of memory caps out in a way that looks
 like it is linear with the maximum number of table rows ever returned

 It looks like something in SA, Psycopg2, or PG is holding references
 to the returned rows but we have not been able to track it down.  I
 am
 working on a completely standalone application to demonstrate the
 issue.  Before I went much further though I wanted to post this
 question and code sample (not standalone, but shows the commands we
 are using) in case anyone sees something fishy.

 Any ideas?

 Thanks,
 Allen

 --
 import os, sys, gc
 import psycopg2
 import sqlalchemy as sa
 import sqlalchemy.exc

 class DataProvider(object):
    def __init__(self, dbUri, tableName):
       self._dbUri     = dbUri
       self._dbEngine  = None
       self._metadata  = None
       self._table     = None
       self._tableName = tableName

    def initialize(self):
       self._dbEngine = sa.create_engine(self._dbUri)
       self._metadata = sa.MetaData()
       self._metadata.bind = self._dbEngine

       # Build up the table metadata reflectively
       name_parts = self._tableName.split('.')

       if len(name_parts) > 1:
          self._table = sa.Table(name_parts[1], self._metadata, autoload = True,
                                 schema = name_parts[0])
       else:
          self._table = sa.Table(name_parts[0], self._metadata, autoload = True)

    def query(self, limit = None):
       s = self._table.select()

       if limit is not None:
          s = s.limit(limit)

       print "objs before: ", len(gc.get_objects())
       print "garbage: ", len(gc.garbage)
       result_proxy = s.execute()
       result_objects = [x for x in result_proxy]
       #print "num results: ", len(result_objects)
       result_proxy.close()
       result_objects = []
       results = []
       gc.collect(2)
       print "objs after: ", len(gc.get_objects())
       print "garbage: ", len(gc.garbage)
       return results

 db_uri          = sys.argv[1]
 collection_name = sys.argv[2]

 prov = DataProvider(db_uri, collection_name)
 prov.initialize()

 limits = [1, 2, 3, 4, 5, 6,
           1, 2, 3, 4, 5, 6]
 # Query
 for x in range(10):
    limit = limits[x]
    print "[ %s ] limit: %s" % (x, limit)
    gc.collect(2)
    printMemoryInformation()   # defined in the attached helper script
    features = prov.query(limit = limit)
    features = None
    gc.collect(2)
    printMemoryInformation()

 --


 
 import os, sys, gc
 import psycopg2

 import sqlalchemy as sa
 import sqlalchemy.exc

 def getMemoryUsage(VmKey):
   proc_status = '/proc/%d/status' % os.getpid()
   scale = {'kB

[sqlalchemy] Re: Using deferred with declarative

2008-11-14 Thread Allen Bierbaum

Does anyone have any ideas on this?

Does declarative simply not support the deferred property?

-Allen

On Sat, Nov 8, 2008 at 11:32 AM, Allen Bierbaum [EMAIL PROTECTED] wrote:
 We have been using the declarative extension successfully in our codebase for a
 couple months now with 0.4.x, but we have just run into a problem.

 We have a table we want to map using declarative but we want to
 have one of the columns be deferred because it contains binary data.
 Unfortunately we can't figure out how to do this.

 We tried something like this:

 class Person(Base):
     __tablename__ = 'people'
     id = Column('id', Integer, primary_key=True)
     data = Column('data', Binary)
     __mapper_args__ = {'properties' : {'data' : sa.orm.deferred(data)}}

 The problem is that when the declarative metaclass assembles the
 arguments to pass to the mapper creation method, this causes two
 arguments of name 'properties' to be passed into the call.  (see
 DeclarativeMeta.__init__)

 - Is this a bug in the implementation of declarative?  (perhaps the
 code in DeclarativeMeta.__init__ should look for
 __mapper_args__['properties'] and merge it with the internally created
 properties)
 - Or is there some other way to use deferred columns with declarative?

 Thanks,
 Allen





[sqlalchemy] Re: Using deferred with declarative

2008-11-14 Thread Allen Bierbaum

Nope.  Very strange.  It didn't come through to my gmail account.

Oh well, thanks for the pointer.

-Allen


On Fri, Nov 14, 2008 at 10:55 AM, King Simon-NFHD78
[EMAIL PROTECTED] wrote:

 -Original Message-
 From: sqlalchemy@googlegroups.com
 [mailto:[EMAIL PROTECTED] On Behalf Of Allen Bierbaum
 Sent: 14 November 2008 16:40
 To: sqlalchemy
 Subject: [sqlalchemy] Re: Using deferred with declarative


 Does anyone have any ideas on this?

 Does declarative simply not support the deferred property?

 -Allen

 On Sat, Nov 8, 2008 at 11:32 AM, Allen Bierbaum
 [EMAIL PROTECTED] wrote:
   We have been using the declarative extension successfully in our
   codebase for a couple months now with 0.4.x, but we have just run
   into a problem.
  
   We have a table we want to map using declarative but we want to
   have one of the columns be deferred because it contains binary data.
   Unfortunately we can't figure out how to do this.
 
  We tried something like this:
 
   class Person(Base):
      __tablename__ = 'people'
      id = Column('id', Integer, primary_key=True)
      data = Column('data', Binary)
      __mapper_args__ = {'properties' : {'data' : sa.orm.deferred(data)}}
 
  The problem is that when the declarative metaclass assembles the
  arguments to pass to the mapper creation method, this causes two
  arguments of name 'properties' to be passed into the call.  (see
  DeclarativeMeta.__init__)
 
  - Is this a bug in the implementation of declarative?  (perhaps the
  code in DeclarativeMeta.__init__ should look for
  __mapper_args__['properties'] and merge it with the
 internally created
  properties)
  - Or is there some other way to use deferred columns with
 declarative?
 
  Thanks,
  Allen
 


 Did you see Michael's reply on the same day you sent your original
 message?

  http://groups.google.com/group/sqlalchemy/browse_thread/thread/d548138fed9903ef

 Simon

 





[sqlalchemy] Using deferred with declarative

2008-11-08 Thread Allen Bierbaum

We have been using the declarative extension successfully in our codebase for a
couple months now with 0.4.x, but we have just run into a problem.

We have a table we want to map using declarative but we want to
have one of the columns be deferred because it contains binary data.
Unfortunately we can't figure out how to do this.

We tried something like this:

class Person(Base):
    __tablename__ = 'people'
    id = Column('id', Integer, primary_key=True)
    data = Column('data', Binary)
    __mapper_args__ = {'properties' : {'data' : sa.orm.deferred(data)}}

The problem is that when the declarative metaclass assembles the
arguments to pass to the mapper creation method, this causes two
arguments of name 'properties' to be passed into the call.  (see
DeclarativeMeta.__init__)

- Is this a bug in the implementation of declarative?  (perhaps the
code in DeclarativeMeta.__init__ should look for
__mapper_args__['properties'] and merge it with the internally created
properties)
- Or is there some other way to use deferred columns with declarative?
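(On that last question: with later declarative code, the pattern that
sidesteps the __mapper_args__ collision is to wrap the Column in deferred()
directly at class level.  A sketch, assuming the usual declarative imports
and the Base class from this thread:)

from sqlalchemy import Column, Integer, Binary
from sqlalchemy.orm import deferred

class Person(Base):
    __tablename__ = 'people'
    id = Column('id', Integer, primary_key=True)
    data = deferred(Column('data', Binary))  # deferred column, no __mapper_args__ needed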

Thanks,
Allen




[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?

2008-05-17 Thread Allen Bierbaum

On Fri, May 16, 2008 at 4:54 PM, jason kirtland [EMAIL PROTECTED] wrote:
[..]

 Anyway, I think this is a bit non-intuitive.  What I propose instead
 is that SA could automatically set the 'keyword' attribute of the Note
 object as part of the process of assigning it to the mapped collection
 dictionary.  This way the insert could look more like:

 item.notes['not-color'] = Note(value='blue')

 and behind the scenes SA would call:  new Note.keyword = 'not-color'

 Any thoughts on this?  Has anyone tried this in the past?

 MappedCollection doesn't currently have a mismatch guard on __setitem__
 (d[key] = val) or setdefault(), but easily could.  There *is* a guard
 protecting against item.notes = {'not-color': Note('color', 'blue')}, so
 that machinery is available and applying it to the other setters is
 straightforward.

 Automatically setting the value for the attribute_ and column_mapped
 dict collections would be pretty convenient and DRY.  This is a great
 time to integrate that feature, if you want to try your hand at putting
 together a patch and tests.  If it's not too disruptive to existing
 users it could slide right in as a new feature of 0.5.

I would be more than happy to look into this (I already have), but I
think my skills aren't quite up to the challenge.  Could you point me
in the general direction?

On a related note, I think it would be good to make this behavior come
through a user customizable callback method that takes the index value
and the newly assigned class item as values.  This would allow users
to add more automatic behavior that may be needed.

For example, my current relationship is actually like this:

   'input_vars' : relation(Var,
       primaryjoin = and_(script_table.c.id == var_table.c.script_id,
                          var_table.c.input_output_type == 0),
       collection_class = column_mapped_collection(var_table.c.name)),

So I would want to not only set the name automatically based on the
key, but I would want to set the input_output_type to 0 in this case.
Something like this would be good.

def input_cb(key, item):
   item.name = key
   item.input_output_type = 0

'input_vars' : relation(Var,
    primaryjoin = and_(script_table.c.id == var_table.c.script_id,
                       var_table.c.input_output_type == 0),
    collection_class = column_mapped_collection(var_table.c.name,
                                                set_cb = input_cb)),

Any thoughts and/or pointers on how to implement this?
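(One possible shape, sketched against the collections API of later
SQLAlchemy releases.  CallbackMappedCollection is a made-up name, and
set_cb/input_cb are the hypothetical hooks from the proposal above; the
@collection.internally_instrumented decorator is what lets you override an
instrumented method like __setitem__:)

from sqlalchemy.orm.collections import MappedCollection, collection

class CallbackMappedCollection(MappedCollection):
    # Runs a user callback on every dict-style assignment, then delegates
    # to the instrumented MappedCollection.__setitem__.
    def __init__(self, keyfunc, set_cb=None):
        MappedCollection.__init__(self, keyfunc)
        self.set_cb = set_cb

    @collection.internally_instrumented
    def __setitem__(self, key, value, _sa_initiator=None):
        if self.set_cb is not None:
            self.set_cb(key, value)   # e.g. item.name = key, default flags, etc.
        MappedCollection.__setitem__(self, key, value, _sa_initiator)

# collection_class must be a zero-argument factory:
collection_class = lambda: CallbackMappedCollection(
    keyfunc=lambda var: var.name, set_cb=input_cb)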

-Allen




[sqlalchemy] Is inserting new data with column_mapped_collection inconsistent?

2008-05-16 Thread Allen Bierbaum

I have just started using column_mapped_collections.
(http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections_dictcollections)

I must say, these are very powerful and extremely nice when reading
data.  But I have run into one thing that seems confusing when it
comes to creating new objects in a session.  Namely, it is possible to
add data to the mapped dictionary in such a way that the data
structure is inconsistent and not what it would be when reading the
same data back.

Using the example from the documentation as a start:

mapper(Item, items_table, properties={
'notes': relation(Note,
collection_class=column_mapped_collection(notes_table.c.keyword)),
})

# ...
item = Item()
item.notes['color'] = Note('color', 'blue')   # Set keyword attribute to 'color'
print item.notes['color']

Everything is good here, but what if I did it this way instead

item.notes['not-color'] = Note('color', 'blue')

This last line is the problem because it has inserted a link to a new
Note that has a keyword of
'color' but is showing up in the dictionary as 'not-color'.  If we
flush all of this and reload from the database using a query, there
will be no 'not-color' entry in the database.

Anyway, I think this is a bit non-intuitive.  What I propose instead
is that SA could automatically set the 'keyword' attribute of the Note
object as part of the process of assigning it to the mapped collection
dictionary.  This way the insert could look more like:

item.notes['not-color'] = Note(value='blue')

and behind the scenes SA would call:  new Note.keyword = 'not-color'

Any thoughts on this?  Has anyone tried this in the past?

-Allen




[sqlalchemy] Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Allen Bierbaum

I am trying to setup a many-to-many relationship for two tables where
I would like to allow more natural access to the data using a
dictionary interface.  The exact usage is pretty complex to explain,
but I have come up with a simple example that demonstrates the same
concept. (there is a full code example at the end of the e-mail)

The tables are:
  - Script: holds a code script to run
  - VarTypes: Type details for variables that can be used as input and
output to the script

The relationship is:
  - For each Script, there is a set of named variables.
  - Each variable has a type associated with it
  - Each variable can be either an input variable or an output variable

What I want to allow in the end is something like this:

script = Script()
script.code = "test code"

var1 = VarType()
var2 = VarType()
var3 = VarType()
var4 = VarType()

script.input_vars["in1"] = var1
script.input_vars["in2"] = var2
script.output_vars["out1"] = var3
script.output_vars["out2"] = var4

Is there some way to setup a many-to-many relationship to do this?

Thanks,
Allen


Here is the more complete code example to play with and see what I am
thinking so far for table definitions.

#---
from sqlalchemy import (create_engine, Table, Column, Integer, String, Text,
MetaData, ForeignKey)
from sqlalchemy.orm import mapper, backref, relation, create_session

engine = create_engine('sqlite:///:memory:', echo=True)

metadata = MetaData()
metadata.bind = engine

script_table = Table('script', metadata,
Column('id', Integer, primary_key=True),
Column('code', Text)
)

types_table = Table('types', metadata,
Column('id', Integer, primary_key=True),
Column('desc', Text))

script_var_table = Table('script_var_table', metadata,
Column('script_id', Integer, ForeignKey('script.id')),
Column('var_name', Text),
Column('input_output_type', Integer),
Column('type_id', Integer, ForeignKey('types.id')))

print "Creating all tables"
metadata.create_all(engine)

INPUT_VAR  = 0
OUTPUT_VAR = 1

class Script(object):
   def __init__(self):
  pass

class VarType(object):
   def __init__(self):
  pass

mapper(VarType, types_table)
mapper(Script, script_table, properties = {
   'vars':relation(VarType, secondary=script_var_table)
   })

session = create_session(bind = engine)
script = Script()
script.code = "test code"
var1 = VarType()
var1.desc = "var type 1"
var2 = VarType()
var2.desc = "var type 2"

script.vars.append(var1)
script.vars.append(var2)

session.save(script)
session.flush()

#  WOULD LIKE - #
# Can this be done using
# - Custom join condition on input_output_type
# - column_mapped_collection
#
script = Script()
script.code = "test code"

var1 = VarType()
var2 = VarType()
var3 = VarType()
var4 = VarType()

script.input_vars["in1"] = var1
script.input_vars["in2"] = var2
script.output_vars["out1"] = var3
script.output_vars["out2"] = var4

print "Done"
#---




[sqlalchemy] Re: Challenging relationship: many-to-many, custom join, mapped dictionary, can it work???

2008-05-15 Thread Allen Bierbaum

On Thu, May 15, 2008 at 11:37 AM, Michael Bayer
[EMAIL PROTECTED] wrote:
 On May 15, 2008, at 11:23 AM, Allen Bierbaum wrote:
 #  WOULD LIKE - #
 # Can this be done using
 # - Custom join condition on input_output_type
 # - column_mapped_collection
 #

 it can be done.  Try working first with two separate relation()s using
  a secondary join that filters on input or output (there's an
 example of something similar in the docs); then add
 column_mapped_collection in.

Will do.

The thing that I am missing though is an example of using
column_mapped_collection with a many-to-many relationship.  Maybe I am
just a bit slow, but I can't wrap my head around how to make that work
or more specifically, how to specify it.
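(For the archive, the shape Michael describes would look roughly like the
sketch below, assuming a small association class -- call it Var -- mapped
directly to script_var_table so the extra columns are reachable; the
association table would then need a primary key.  Untested:)

from sqlalchemy import and_
from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm.collections import column_mapped_collection

class Var(object):
   # association object carrying var_name and input_output_type
   pass

mapper(Var, script_var_table)   # note: script_var_table needs a primary key

mapper(Script, script_table, properties = {
   'input_vars' : relation(Var,
      primaryjoin = and_(script_table.c.id == script_var_table.c.script_id,
                         script_var_table.c.input_output_type == INPUT_VAR),
      collection_class = column_mapped_collection(script_var_table.c.var_name)),
   'output_vars' : relation(Var,
      primaryjoin = and_(script_table.c.id == script_var_table.c.script_id,
                         script_var_table.c.input_output_type == OUTPUT_VAR),
      collection_class = column_mapped_collection(script_var_table.c.var_name)),
   })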

Thanks,
Allen




[sqlalchemy] Re: Using declarative when imports happen before create_engine() ??

2008-05-05 Thread Allen Bierbaum

Thanks for the help.  I almost have this up and running, but I have
found one perplexing issue.

My current code base has a very extensive test suite.  As part of this
test suite, I have my test fixtures set up and tear down databases,
mappers, and just about everything SA related.  This is meant to help keep
each test clean from anything remaining from the previous test.

As part of the setup and teardown for the tests, I explicitly call
sa.orm.clear_mappers() to clear out all mappers that were setup in the
previous tests.  This was not a problem before the declarative code
change-over because each time I setup the DB's for the tests I have
the system call an initialization method that sets up all metadata,
mappers, etc.  But now, with the mappers being created behind the
scenes I don't have any way to force the system to recreate the
metadata or mappers.

Is there some way to clear the declarative layer and have it
regenerate all automatically created metadata and mappers?

-Allen

On Mon, Apr 28, 2008 at 5:07 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 28, 2008, at 5:42 PM, Allen Bierbaum wrote:

  
   So, if I understand this right, I could import a base module that
   does a lazy creation of the Base class with a metadata object and then
   just use that base class everywhere I need it for the declarative
   class definitions.  Then at a later time (before I use the mapped
   classes), I could go and bind the metadata for the Base class to an
   engine for use.
  
   Correct?  (I apologize if I used the terms incorrectly).
  
   If this is true, then I think I see how I can solve my problem.

  thats right.  the whole idea of declarative_base is that its just a
  common base class from which all your other classes inherit.  then you
  can plug whatever engine configuration is needed at any point in time
  and it will become active for all base-inheriting classes.




  





[sqlalchemy] Re: Using declarative when imports happen before create_engine() ??

2008-05-05 Thread Allen Bierbaum

On Mon, May 5, 2008 at 4:29 PM, Michael Bayer [EMAIL PROTECTED] wrote:
  On May 5, 2008, at 5:04 PM, Allen Bierbaum wrote:
   Is there some way to clear the declarative layer and have it
   regenerate all automatically created metadata and mappers?

  not as of yet but this could be done.

  but if you are using declarative, that implies that for a class X
  there is only one mapping.  what's the need to call clear_mappers() ?
  are there other, non-declarative mappers which are interacting with
  the declarative ones and require changes on each test ?

Not really.  I was just trying to be very cautious and isolate each
unit being tested.  If that is not possible right now, that is fine.
I can still get the test suite up and running, I just have to change
the code that was trying to isolate each db test.

Thanks,
Allen


  





[sqlalchemy] Re: Using declarative when imports happen before create_engine() ??

2008-04-28 Thread Allen Bierbaum

So, if I understand this right, I could import a base module that
does a lazy creation of the Base class with a metadata object and then
just use that base class everywhere I need it for the declarative
class definitions.  Then at a later time (before I use the mapped
classes), I could go and bind the metadata for the Base class to an
engine for use.

Correct?  (I apologize if I used the terms incorrectly).

If this is true, then I think I see how I can solve my problem.
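(Concretely, the lazy base module can be as small as this -- a sketch,
module names made up:)

# base.py -- imported by every entity module; no engine needed at import time
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# startup.py -- runs after the configuration has been processed
from sqlalchemy import create_engine
from base import Base

engine = create_engine(db_uri)    # db_uri comes from your configuration
Base.metadata.bind = engine       # bind late; classes were declared long before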

-Allen


On Sun, Apr 27, 2008 at 6:28 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 27, 2008, at 8:25 AM, Allen Bierbaum wrote:

    The problem is that, as I understand it, to use declarative you can't
    import a module that defines a table-based object until after some
   initialization code has been run to connect to a database and create a
   'Base' class for the declarative layer.

  This is not true; the declarative extension serves as a holding zone
  for a MetaData object; like the MetaData object, it requires no
  association to any database engine at any time.   The initial example
  in the declarative documentation, which I am going to change right
  now, illustrates the engine being associated with the underlying
  MetaData just as an example.  The only time a database connection
  is needed is when you are ready to query the database.   You can
  create your Session and bind it to a newly created engine right before
  the first query is issued, if you like.


  





[sqlalchemy] Using declarative when imports happen before create_engine() ??

2008-04-27 Thread Allen Bierbaum

I am investigating the use of the declarative extension.  It looks
very attractive, but I have a problem...

I have a large project that is using the standard methods of
configuring tables and mappers.  More specifically, there is a single
method in my domain package that is called once the system has been
started up and this method creates all tables and configures the
mappers.

So, the order of initialization in my system looks something like this:

- main imports core classes
- main loads configuration files
- main imports all plugin modules and associated packages
- process the configuration
   - initialize databases and domain mapping
  - connect to databases and create engines and metadata
  - call mapper method to setup tables and mappers
   - process remaining configuration...
- Initialize all plugins and start the system running...


The problem is that, as I understand it, to use declarative you can't
import a module that defines a table-based object until after some
initialization code has been run to connect to a database and create a
'Base' class for the declarative layer.  In my case this is pretty
problematic because the DB entity modules are most likely imported
before the system has connected to a database.

Does anyone have a suggestion about how to handle this?  For example
is there a way to create a lazy base class that doesn't actually do
anything until it has been connected to a database?  Am I missing
something fundamental here?

-Allen




[sqlalchemy] Schema Display recipe

2008-01-17 Thread Allen Bierbaum

I was just taking a look at the recipes on the SA wiki and stumbled
across this one:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay

It is a pretty nice little piece of code to automatically create
schema diagrams from a DB engine and UML diagrams for a set of
mappers.  I am very interested in using this because I think it would
help documentation on our current project quite a bit.  It is very
easy for a diagram to get out of sync with the code, and this looks
like a great tool to generate the diagram directly from the code.
Very sweet. :)

The problem is that I can't seem to get the code to work with SA
0.3.11.  It looks like it expects some old naming conventions to get
mappers.

This seems to me like a very nice tool that could prove useful as an
addon to SA. Am I alone in thinking this or is anyone else
successfully using it?

-Allen




[sqlalchemy] Re: Schema Display recipe

2008-01-17 Thread Allen Bierbaum

Thanks, that worked great.

Have there been any new capabilities added to this code?

-Allen
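(For reference, the suggested fix below amounts to this -- User and Address
stand in for whatever classes your diagram covers:)

from sqlalchemy.orm import class_mapper

mappers = [class_mapper(cls) for cls in (User, Address)]  # instead of cls.mapper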

On Jan 17, 2008 12:21 PM,  [EMAIL PROTECTED] wrote:

 use sqlalchemy.orm.class_mapper(cls) instead of cls.mapper, and it should 
 work?


 Allen Bierbaum wrote:
  I was just taking a look at the recipes on the SA wiki and stumbled
  across this one:
  http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay
 
  It is a pretty nice little piece of code to automatically create
  schema diagrams from a DB engine and UML diagrams for a set of
  mappers.  I am very interested in using this because I think it would
  help documentation on our current project quite a bit.  It is very
  easy for a diagram to get out of sync with the code, and this looks
  like a great tool to generate the diagram directly from the code.
  Very sweet. :)
 
  The problem is that I can't seem to get the code to work with SA
  0.3.11.  It looks like it expects some old naming conventions to get
  mappers.
 
  This seems to me like a very nice tool that could prove useful as an
  addon to SA. Am I alone in thinking this or is anyone else
  successfully using it?
 
  -Allen
 
  


 





[sqlalchemy] Re: Sessions and threads, find in one thread, use in another

2007-12-15 Thread Allen Bierbaum

On Dec 13, 2007 12:29 PM, Allen Bierbaum [EMAIL PROTECTED] wrote:
 On Dec 13, 2007 10:47 AM, Michael Bayer [EMAIL PROTECTED] wrote:
  On Dec 13, 2007, at 9:55 AM, Allen Bierbaum wrote:
  
   In my current application I am running a rather expensive query in a
   background thread, but then I need to use the results in the
   foreground thread.  The object I find has a great deal of lazy
   evaluated properties that link it to several other mapped objects.  As
   it stands now, the application is only following the lazy properties
    when it uses the object in the primary thread.  The application has multiple
   engines connected to multiple databases.  I have a separate session
   for each database for each thread.  (Note: right now I am doing this
   manually but I am debating whether I should be using something like
   SessionContext.)
  
   What I am worried about is that by querying the initial parent object
   in the background thread and then using it's lazy props in the
   foreground thread, I think SA is probably using the background session
   to evaluate these links.
  
   Is there a recommended way to deal with a situation like this? In
   other words, what is the recommended practice for moving, reusing
   objects from a session across multiple threads.  Is there some way to
   remap the object and attach it to the foreground session?
  
 
  there's two general options here.   the most appropriate way to move
  the object between sessions is to use session.merge().  this will
  create a copy of the object in the target session, which is returned,
  leaving the old one unchanged, so that it can additionally be merged
  elsewhere.
 
  as of version 0.4.1 we added a flag to merge called dont_load which
  prevents merge() from reloading the instance from the database upon
  merge (the classical behavior of this method is that it loads the
  current data from the database which is merged with the given data).
  so setting dont_load=True will prevent these loads from happening.  we
  also have some fairly important fixes to dont_load=True in the current
  trunk which will be out in version 0.4.2, so if you are getting into
  heavy merge() usage and you need to use the dont_load flag (which is
  strictly for performance reasons), you might want to go on trunk for
  awhile.
 
  The other option here is to move the object completely from the
  background to the foreground session.  to do this, you would expunge()
  it from the background session, and then update() it into the target
  session.   this is a simpler operation than merge since nothing is
  being copied.  but you'd want to ensure the objects you're moving weren't
  part of some larger collection that's still sitting in the background
  session.

I think I am going to use this second method because I don't want to
force a database reload during the merge.  Instead I want to use the
detached instance for a while.  My foreground thread can get by using
the loaded version of a couple of the fields of the object in the
detached state.  Then if/when the code needs to see additional data
about the record (through relationship mappings), I will use update to
attach to the foreground session.

The one question I have here though is, how can I make the update()
call that reattaches it to the foreground session automatically reload
the entire record from the database (ie. ignore any local
modifications)?  Do I have to call update() followed by refresh()?

Thanks,
Allen




[sqlalchemy] Sessions and threads, find in one thread, use in another

2007-12-13 Thread Allen Bierbaum

In my current application I am running a rather expensive query in a
background thread, but then I need to use the results in the
foreground thread.  The object I find has a great deal of lazy
evaluated properties that link it to several other mapped objects.  As
it stands now, the application is only following the lazy properties
when it uses the object in the primary thread.  The has multiple
engines connected to multiple databases.  I have a separate session
for each database for each thread.  (Note: right now I am doing this
manually but I am debating whether I should be using something like
SessionContext.)

What I am worried about is that by querying the initial parent object
in the background thread and then using its lazy props in the
foreground thread, I think SA is probably using the background session
to evaluate these links.

Is there a recommended way to deal with a situation like this? In
other words, what is the recommended practice for moving, reusing
objects from a session across multiple threads.  Is there some way to
remap the object and attach it to the foreground session?

-Allen




[sqlalchemy] Re: Sessions and threads, find in one thread, use in another

2007-12-13 Thread Allen Bierbaum

On Dec 13, 2007 10:47 AM, Michael Bayer [EMAIL PROTECTED] wrote:



 On Dec 13, 2007, at 9:55 AM, Allen Bierbaum wrote:

 
  In my current application I am running a rather expensive query in a
  background thread, but then I need to use the results in the
  foreground thread.  The object I find has a great deal of lazy
  evaluated properties that link it to several other mapped objects.  As
  it stands now, the application is only following the lazy properties
  when it uses the object in the primary thread.  The application has multiple
  engines connected to multiple databases.  I have a separate session
  for each database for each thread.  (Note: right now I am doing this
  manually but I am debating whether I should be using something like
  SessionContext.)
 
  What I am worried about is that by querying the initial parent object
  in the background thread and then using it's lazy props in the
  foreground thread, I think SA is probably using the background session
  to evaluate these links.
 
  Is there a recommended way to deal with a situation like this? In
  other words, what is the recommended practice for moving, reusing
  objects from a session across multiple threads.  Is there some way to
  remap the object and attach it to the foreground session?
 

 there's two general options here.   the most appropriate way to move
 the object between sessions is to use session.merge().  this will
 create a copy of the object in the target session, which is returned,
 leaving the old one unchanged, so that it can additionally be merged
 elsewhere.

 as of version 0.4.1 we added a flag to merge called dont_load which
 prevents merge() from reloading the instance from the database upon
 merge (the classical behavior of this method is that it loads the
 current data from the database which is merged with the given data).
 so setting dont_load=True will prevent these loads from happening.  we
 also have some fairly important fixes to dont_load=True in the current
 trunk which will be out in version 0.4.2, so if you are getting into
 heavy merge() usage and you need to use the dont_load flag (which is
 strictly for performance reasons), you might want to go on trunk for
 awhile.

 The other option here is to move the object completely from the
 background to the foreground session.  to do this, you would expunge()
 it from the background session, and then update() it into the target
 session.   this is a simpler operation than merge since nothing is
 being copied.  but you'd want to ensure the objects you're moving weren't
 part of some larger collection that's still sitting in the background
 session.

 hope this helps...

That helps.  Thanks.

I will try to start using merge along with SessionContext to fix up my
threading issues.
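(For the archive, the merge-based variant described above is essentially a
one-liner -- sketch, 0.4.1+:)

# copy obj into the foreground session without a database round trip
merged = fg_session.merge(obj, dont_load=True)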

Thanks,
Allen




[sqlalchemy] Re: Possible to build a query object from a relation property?

2007-12-10 Thread Allen Bierbaum

Thanks.  This looks like it should work.  I will give it a try.

-Allen

On Dec 9, 2007 10:39 PM, Michael Bayer [EMAIL PROTECTED] wrote:


 On Dec 9, 2007, at 10:55 PM, Allen Bierbaum wrote:

 
  I am using SA 0.3.11 and I would like to know if there is a way to get
  a query object from a relation property.  I have several one-to-many
  relationships in my application.  These are all setup and work very
  well, but I find that I often want to perform further filtering of the
  objects in the relationship list property.  I could write python code
  to do it, but if I could get SA to do it on the server, then all the
  better.

 it is the dynamic relation that you want, but for 0.3 you can write
 your own read-only property via:

 class MyClass(object):
     def _get_prop(self):
         return object_session(self).query(ChildClass).with_parent(self,
                                                                   'attributename')
     attributename = property(_get_prop)




 





[sqlalchemy] Question about using python expressions to create a GIS query

2007-12-10 Thread Allen Bierbaum

I am trying to figure out how to best use SA to create a GIS query.
In my application I am actually using ORM objects and mappers, but to
keep my question focused on clauses and python expressions, I am just
trying to test this out without the ORM first.

The SQL query I would like to generate is this:

select AsText(the_geom), *
from pt
where SetSRID('BOX3D(-95.0 28.5, -95.8 28.8)'::box3d,4326) && the_geom and
      contains(SetSRID('BOX3D(-95.0 28.5, -95.8 28.8)'::box3d,4326), the_geom)
limit 100;

So far the best I have been able to come up with is this:

pt.select(
   sa.and_(
      pt.c.pos.op('&&')(func.SetSRID('BOX3D(-95.0 28.5, -95.8 28.8)'::box3d,4326)),
      func.contains(func.SetSRID('BOX3D(-95 28.5, -95.8 28.8)'::box3d,4326), pt.c.pos)
   )
)

Not the most readable way to represent it, but it seems to work.  I
have a couple questions though.

- I reuse func.SetSRID('BOX3D(-95 28.5, -95.8 28.8)'::box3d,4326)
twice.  Is there a way to split this out into something I can just
reuse?

- Is there any way to write an extension operator or something that
could generate this for me?  If I had my way, I would want the query
to look like this:

pt.select( smart_contains( ((-95 28.5, -95.8 28.82), 4326), pt.c.pos))

- Can anyone point out a better way I could construct this query?  Is
there anything I am missing?


Thanks,
Allen




[sqlalchemy] Possible to build a query object from a relation property?

2007-12-09 Thread Allen Bierbaum

I am using SA 0.3.11 and I would like to know if there is a way to get
a query object from a relation property.  I have several one-to-many
relationships in my application.  These are all setup and work very
well, but I find that I often want to perform further filtering of the
objects in the relationship list property.  I could write python code
to do it, but if I could get SA to do it on the server, then all the
better.

To be more specific, here is an example from the SA documentation:

mapper(Address, addresses_table)
mapper(User, users_table, properties = {
'addresses' : relation(Address)
}
  )

user_fred = session.query(User).filter_by(user_name="fred")

# Is it possible to do something like this?
fred_ny_addresses = getQuery(user_fred.addresses).filter_by(state="NY")


I know that SA 0.4 has support for dynamic_loader properties which
would be fairly similar to this, but I am stuck with 0.3.11 for now.
I think what I want is also a bit different than a dynamic_loader
because 95% of the time I want to use it as a standard relation
property and the performance is not such that I am worried about
loading the entire list of Address entities.  All I want is a shortcut
for creating a query object with the same settings as those used for
the query used to create the list for the relation property.

Is this possible in any way?

Thanks,
Allen




[sqlalchemy] Re: Question about an ORM query with an aggregate function

2007-12-07 Thread Allen Bierbaum

I forgot to mention, I am using SA 0.3.10.

Thanks,
Allen

On Dec 7, 2007 7:49 AM, Allen Bierbaum [EMAIL PROTECTED] wrote:
 I am trying to create two queries with some of my SA ORM objects that
 will use the sum of a field found through a relationship.  To be a bit
 more concrete, here is a simple setup similar to mine.

 # table objects
 users_table = Table('users', metadata,
     Column('user_id', Integer, primary_key=True),
     Column('user_name', String(16)),
     Column('state', String(2))
 )

 order_table = Table('orders', metadata,
     Column('user_id', Integer, ForeignKey("users.user_id")),
     Column('total', Integer),
 )

 mapper(Order, order_table)
 mapper(User, users_table, properties = {
     'orders' : relation(Order)})

 What I want to do is two different queries.

 1. Return a list of all User objects meeting a given criteria along
 with a field that is a sum of all their order totals.

 For example:

 users = session.query(User).filter(User.c.state == 'IA').X.all()

 for u in users:
   print u.rolled_up_order_total

 2. Find all users in a given state (User.c.state == "NY") with their
 sum of all order totals being greater than some number X.

 Very similar to above, but I don't need the total to come out as an
 attribute on the object.

 Is there some way to do this with SA and the ORM?  I know that with
 straight up SQL I could create a query for this information and store
 the results of an aggregate sum function in a column alias, but how do
 I do this with the ORM?  Is there a way to create a query that
 dynamically injects additional attributes into the loaded object on
 demand?  (if not, that sounds like a pretty nice feature to me :)

 Thanks,
 Allen





[sqlalchemy] Question about an ORM query with an aggregate function

2007-12-07 Thread Allen Bierbaum

I am trying to create two queries with some of my SA ORM objects that
will use the sum of a field found through a relationship.  To be a bit
more concrete, here is a simple setup similar to mine.

# table objects
users_table = Table('users', meta,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16)),
    Column('state', String(2))
)

order_table = Table('orders', meta,
    Column('user_id', Integer, ForeignKey('users.user_id')),
    Column('total', Integer),
)

mapper(Order, order_table)
mapper(User, users_table, properties = {
    'orders' : relation(Order)})

What I want to do is two different queries.

1. Return a list of all User objects meeting a given criterion, along
with a field that is the sum of all their order totals.

For example:

users = session.query(User).filter(User.c.state == 'IA').X.all()

for u in users:
  print u.rolled_up_order_total

2. Find all users in a given state (User.c.state == 'NY') whose
sum of all order totals is greater than some number X.

Very similar to above, but I don't need the total to come out as an
attribute on the object.

Is there some way to do this with SA and the ORM?  I know that with
straight up SQL I could create a query for this information and store
the results of an aggregate sum function in a column alias, but how do
I do this with the ORM?  Is there a way to create a query that
dynamically injects additional attributes into the loaded object on
demand?  (if not, that sounds like a pretty nice feature to me :)

Thanks,
Allen




[sqlalchemy] Re: Question about an ORM query with an aggregate function

2007-12-07 Thread Allen Bierbaum

Thanks for the feedback.  I hadn't thought of doing it this way.

I did comb through the documentation a second and third time today,
though, and found the add_column() method for queries.  It looks like
that may be another way to get what I want.  I am even thinking about
creating a little builder function that takes the results from a query
with add_column() and attaches them back to the primary object as
custom attributes.
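
For the archives, here is roughly what the mapped-select route looks
like with the tables from my earlier mail -- an untested sketch, and
the join and grouping details are assumptions on my part:

from sqlalchemy import select, func, mapper

order_total = func.sum(order_table.c.total).label('order_total')

# a selectable that rolls each user's orders up into one row
user_with_sum_select = select(
    [users_table, order_total],
    from_obj=[users_table.join(order_table)],
    group_by=[c for c in users_table.c],
).alias('user_with_sum')

class UserWithSum(object):
    pass

mapper(UserWithSum, user_with_sum_select)

# query 2 then becomes an ordinary filter on the mapped select:
# session.query(UserWithSum).filter(
#     user_with_sum_select.c.state == 'NY').filter(
#     user_with_sum_select.c.order_total > 100)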

Now that I have a couple options, I think I can get at least one of
them to work. :)

-Allen

On Dec 7, 2007 3:11 PM, Paul Johnston [EMAIL PROTECTED] wrote:

 Hi,

 1. Return a list of all User objects meeting a given criteria along
 with a field that is a sum of all their order totals.
 
 
 You want to create another class and mapper, UserWithSum (or a
 friendlier name) that maps to an arbitrary select.
 http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_selects

 2. Find all users in a given state (User.c.state == NY) with their
 sum of all order totals being greater then some number X.
 
 
 Once you've achieved 1, this is pretty straightforward.

 Paul

 





[sqlalchemy] Future of migrate project

2007-06-27 Thread Allen Bierbaum

Is anyone currently keeping the migrate project up to date or are
there any other efforts to provide similar functionality?

We have a rather large project where we started using migrate with SA
because we wanted a robust way to track database modifications and
apply them to production databases.  Everything had worked fairly
well, but we recently upgraded from SA 0.3.6 to 0.3.8 and found that
all of our migrate scripts are broken (same with 0.3.7).  I can
provide some details of the problems to help fix migrate, but first I
thought I would ask whether we are fighting a losing battle.

Should we be using migrate?  Are other people using migrate, some
other tool, or just rolling your own code for database migration?

This seems like a *very* valuable capability to have alongside SA, and
I am hoping that there is a way to keep it going.

-Allen




[sqlalchemy] Re: Spatial data (PostGIS/OGC) with SqlAlchemy

2007-02-27 Thread Allen Bierbaum

I just wanted to give an update.  I have found a solution that works
OK for now, but I had to hack a bit.  I ended up using psycopg2's
ability to override how python classes are converted to/from the
database.  psycopg2 seems to have more complete support for this than
SA, so it let me convert my type to an exact string representation
(i.e. GeomFromText('POINT(1 1)')).  I combined this with the GeoTypes
library to represent geometry types and to read them back from the
OGC binary format.

The relevant code looks like this:

import binascii
import logging

import sqlalchemy as sa
import psycopg2.extensions
pe = psycopg2.extensions

from GeoTypes import (OGGeoTypeFactory, WKBParser,
                      OGGeometry, OGPoint, OGPolygon, OGLineString)

class PostGisWKBFactory(object):
    def __init__(self):
        pass

    def __call__(self, s=None):
        factory = OGGeoTypeFactory()
        parser = WKBParser(factory)
        parser.parseGeometry(s)
        return factory.getGeometry()


class GeometryType(sa.types.TypeEngine):
    def __init__(self, SRID, typeName, dimension):
        super(GeometryType, self).__init__()
        self.mSrid = SRID
        self.mType = typeName.upper()
        self.mDim  = dimension
        self.bfact = PostGisWKBFactory()

    def __repr__(self):
        return "%s:%s-%s(%s)" % (self.__class__.__name__, self.mType,
                                 self.mDim, self.mSrid)

    def get_col_spec(self):
        return "GEOMETRY"

    def convert_bind_param(self, value, engine):
        # Could be used to make the type _conform_
        #return pe.AsIs(str(value))
        return value

    def convert_result_value(self, value, engine):
        geom_obj = self.bfact(binascii.a2b_hex(value))
        geom_obj._srid = self.mSrid   # set directly
        return geom_obj

class GeometryPOINT(GeometryType):
    def __init__(self, srid):
        super(GeometryPOINT, self).__init__(srid, "POINT", 2)

class GeometryLINESTRING(GeometryType):
    def __init__(self, srid):
        super(GeometryLINESTRING, self).__init__(srid, "LINESTRING", 2)


Then I added the following methods to the GeoTypes.Geometry class to
allow it to represent itself to psycopg2 correctly:

# Interface to allow psycopg2 to convert to the database automatically
def getquoted(self):
    return self.__str__()

def __conform__(self, proto):
    # Return ourselves since we have a getquoted method
    return self
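
For reference, the table definition that ties these types in looks
roughly like this (reconstructed from the insert statement quoted
below; the non-geometry column types are my guesses):

gis_entity = sa.Table('gis_entity', meta,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('name', sa.String(50)),
    sa.Column('pos', GeometryPOINT(-1)),
)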

I still don't see a way to handle this directly with SA, so if anyone
can tell me how to let SA know exactly how I want the object's value
to appear in the generated SQL statement, please let me know so I can
refine my code.

Thanks,
Allen


On 2/25/07, Allen Bierbaum [EMAIL PROTECTED] wrote:
 [snip]
   When I use this with my table and datamapper code, it looks like
  everything is working fine, but the generated SQL insert statement
  fails with an exception:
 
  sqlalchemy.exceptions.SQLError: (ProgrammingError) parse error -
  invalid geometry
'INSERT INTO gis_entity (id, name, pos) VALUES (%(mId)s, %(mName)s,
  %(mPos)s)' {'mName': 'New entity', 'mId': 1L, 'mPos':
  GeomFromText('POINT(100 100)',-1)}
 
  I know from using sql directly in pgadmin3 that this line works correctly:
 
  insert into gis_entity (id, name, pos) values (2, 'New entity',
  GeomFromText('POINT(100 100)', -1));
 
  Does anyone see how this varies from the sql statement issued by SA?

 By looking at the postgres log I figured out what was causing the
 error, but I still don't know how to fix it.

 The problem is that SA considers "GeomFromText('POINT(100 100)', -1)"
 to be a string, so it puts it in single quotes when creating the SQL
 command to execute.  This causes problems because then postgres
 doesn't know it should be calling a function instead.  I have tried
 returning an sqlalchemy.func object but this doesn't work either.

 Any ideas?

 -Allen





[sqlalchemy] Re: Spatial data (PostGIS/OGC) with SqlAlchemy

2007-02-25 Thread Allen Bierbaum

I have been pursuing this further on my own, and one of the issues I
have run into is how to cleanly add a geometry column to a database.
The posting referenced in the first e-mail [2] talks about doing this
directly with a psycopg2 cursor because the poster could not get it
working with SA.  I gave it another try to see if I could get it
working, and I think I have narrowed down the problem.  That said, I
still don't fully understand how to fix it with SA alone.

Here is my code example:


import psycopg2.extensions
import sqlalchemy as sa

db = sa.create_engine(dsn_str)

# Option 1: Try using a function on the database  (doesn't work)
print "Run with func"
db.func.AddGeometryColumn('','gis_entity','fpos',-1,'LINESTRING',2).execute()

# Option 2: Try directly with database statement execution (doesn't work)
print "Run on db"
r = db.execute("select AddGeometryColumn('','gis_entity','cpos',-1,'LINESTRING',2)")
r.close()

# Option 3: use psycopg2 to execute directly with autocommit (works)
print "Run with psycopg isolation level"
con = db.connect()
con.connection.connection.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
con.execute("select AddGeometryColumn('','gis_entity','p_pos',-1,'LINESTRING',2)")
con.close()



So option 3 works, but I don't fully understand why, or why I can't
use one of the other options.  Option 1 is definitely my preferred way
to do this because it makes the call look like a normal SA call to a
DB function.

Can anyone tell me why option 3 works and whether there is a way to do
this directly with SA only?

Thanks,
Allen


On 2/23/07, Allen [EMAIL PROTECTED] wrote:

 I would like to use SqlAlchemy with PostGIS to create, read, update,
 and query spatial data.  I have searched around a bit and found a few
 ideas for doing this [1][2], but I haven't seen a definitive best
 practice by any means.  It looks like all the solutions I can find
 have at least some limitations.

 Is anyone here using SqlAlchemy with spatial databases and if so how
 are you doing it right now?

 Specifically:
 - How do you handle table specification with geometry types?
 [1] tries to do this but it is incomplete

 - Do you use custom types and if so how are you doing this?
 [2] has some custom type code but it seems to be tied to binary
 formats and I don't see how to create/retrieve the geometry in text
 format.

 - How are you handling the object mapping?
 For example is there a way to map a POINT geometry to a python Point
 class or tuple?

 - Do you have any general recommendations for how to use spatial data
 successfully with SqlAlchemy?


 Thanks,
 Allen

 [1] 
 http://www.mail-archive.com/sqlalchemy-users@lists.sourceforge.net/msg03371.html
 [2] http://bycycle.org/2007/01/29/using-postgis-with-sqlalchemy/


 





[sqlalchemy] Re: Spatial data (PostGIS/OGC) with SqlAlchemy

2007-02-25 Thread Allen Bierbaum

[snip]
  When I use this with my table and datamapper code, it looks like
 everything is working fine, but the generated SQL insert statement
 fails with an exception:

 sqlalchemy.exceptions.SQLError: (ProgrammingError) parse error -
 invalid geometry
   'INSERT INTO gis_entity (id, name, pos) VALUES (%(mId)s, %(mName)s,
 %(mPos)s)' {'mName': 'New entity', 'mId': 1L, 'mPos':
 GeomFromText('POINT(100 100)',-1)}

 I know from using sql directly in pgadmin3 that this line works correctly:

 insert into gis_entity (id, name, pos) values (2, 'New entity',
 GeomFromText('POINT(100 100)', -1));

 Does anyone see how this varies from the sql statement issued by SA?

By looking at the postgres log I figured out what was causing the
error, but I still don't know how to fix it.

The problem is that SA considers "GeomFromText('POINT(100 100)', -1)"
to be a string, so it puts it in single quotes when creating the SQL
command to execute.  This causes problems because then postgres
doesn't know it should be calling a function instead.  I have tried
returning an sqlalchemy.func object but this doesn't work either.

Any ideas?

-Allen




[sqlalchemy] Tracking changes on mapped entities

2007-02-12 Thread Allen Bierbaum

Is it possible to examine the session and get a list of all mapped
instances that have been changed?

More details:

I would like to implement an observer pattern in my application.  I
would like to let the code make changes to mapped objects as normal,
but immediately before (or after) a session flush() I need to notify
any interested observers that the state of the mapped objects has
changed.  I know I could do this by adding my own attribute state
tracking to the mapped classes, but since SA is already doing this, I
was hoping there might be a way to just make use of SA's change
tracking.
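
To make it concrete, something like this sketch is what I am
imagining, built on the session's dirty/new/deleted collections (names
as I remember them from the docs; untested, and notify_observers is my
own placeholder):

def notify_observers(session):
    # peek at SA's own bookkeeping just before flushing
    for obj in session.dirty:     # persistent instances with changes
        print 'modified:', obj
    for obj in session.new:       # pending instances to be inserted
        print 'created:', obj
    for obj in session.deleted:   # instances marked for deletion
        print 'deleted:', obj

notify_observers(session)
session.flush()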

Has anyone done anything like this?

Thanks,
Allen




[sqlalchemy] Re: Unit testing with SA?

2007-02-07 Thread Allen Bierbaum

I have taken a quick look at this code.  But be warned, I have not
actually used it yet, so I have limited real experience with it...

Overall it looks very interesting.  I was expecting something much
more along the vein of DbUnit, where you do something like this:

def setUp(self):
    dataset = XmlDataset("dataset.xml")
    db.refresh(dataset)

def tearDown(self):
    db.tearDown()

def testMethod(self):
    # Do normal db queries here, relying upon the data to be in there;
    # some custom comparisons against loaded datasets are supported

You can get the idea of DbUnit by look at it's howto page:
http://dbunit.sourceforge.net/howto.html

So far the fixture code looks a little magic to me, but that may be
because I do not use SessionContext or assign_mapper in my code.  I
think it would be helpful to see a full example of a simple test case
class (using unittest) that only uses the standard SA features found
in the SA tutorial.  That might help separate the SA plugin magic from
the fixture magic.

Anyway, see below for more detailed comments.

On 2/6/07, Kumar McMillan [EMAIL PROTECTED] wrote:

 On 2/3/07, Allen Bierbaum [EMAIL PROTECTED] wrote:
 
  This works for creating the table structure, and that is important,
  but for me I think the actual data in the database is just was
  important if not more so. What are people using to populate the
  database with test data to use in the tests?

 if you feel adventurous, I've been working on SQLAlchemy support in
 the fixture module:

 http://code.google.com/p/fixture/

 It's not too difficult to run insert statements at the top of your
 tests, or inject an SQL file.  But when you want to make assertions
 using that data things get tricky, which is why I created the fixture
 module.  To give you a very rough example of how this works (since the
 docs are very lacking at the moment), you have 1) DataSet objects to
 define data and 2) a Fixture object that knows how to load that data.

 In fact, special for you--!--and because I've been meaning to do so, I
 just checked in this example, to illustrate the point (if formatting
 goes funny, it is here:
 http://fixture.googlecode.com/svn/trunk/fixture/examples/db/sqlalchemy_examples.py
 ).  Before I get to it, I want to quickly address the point you made
 about having a gigabyte database but only needing a sample of it for
 testing.  I actually find the simplest way to work with fixtures is to
 use existing data.  So the module also includes a command line that
 can generate fixture code from a data source, given an object that
 knows how to select.  I.E.

 $ fixture path.to.a.table.widgets --dsn=postgres://[EMAIL PROTECTED]/db
 --query=widget_id=2

 will build fixture code (including all foreign key refs) by creating
 a select statement against the widgets table.  This command isn't
 fully implemented and may be rough around the edges, but it works for
 something like the above.

This command looks interesting and I need to try it on some of my
real data.  One question though, it looks like this generates python
code correct?  Have you thought about supporting the option of
generating some metadata xml file that can be loaded directly.  This
may make maintenance a little easier and the datasets less verbose.
It could also allow for hand editing of the xml datasources when that
makes sense.

 Now ... that example:


 SequencedSet Fixtures with SQLAlchemy and nose
 --

 Create your tables::

  >>> from sqlalchemy import *
  >>> from sqlalchemy.ext.sessioncontext import SessionContext
  >>> from sqlalchemy.ext.assignmapper import assign_mapper

  >>> meta = BoundMetaData("sqlite:///:memory:")
  >>> session_context = SessionContext(
  ...     lambda: create_session(bind_to=meta.engine))
  ...
  >>> affiliates = Table('affiliates', meta,
  ...     Column('id', INT, primary_key=True),
  ...     Column('name', String),)
  ...
  >>> class Affiliate(object): pass
  >>> m = assign_mapper(session_context, Affiliate, affiliates)
  >>> events = Table('events', meta,
  ...     Column('id', INT, primary_key=True),
  ...     Column('type', String),
  ...     Column('affiliate_id', INT,
  ...         ForeignKey('affiliates.id')),)
  ...
  >>> class Event(object): pass
  >>> m = assign_mapper(session_context, Event, events, properties = {
  ...     'affiliate': relation(Affiliate), })

 Note that using mappers above is not necessary.  The fixture module also
 supports interacting with mapper classes, however.

 Next you build the DataSet objects that you want to load.  In this
 case they inherit from SequencedSet, an optional DataSet enhancement
 that simulates auto-incrementing IDs.  The ID values can be
 overridden for any row, and the column name is configurable, but
 defaults to 'id'::

Why is this needed?  If the id is already a sequence, shouldn't the
standard inserts into the database handle this automatically?  How can
it be disabled?


  from fixture

[sqlalchemy] Re: Unit testing with SA?

2007-02-07 Thread Allen Bierbaum

I am going to try to integrate this into my testing framework this
afternoon, so I am sure I will have more questions after that.  In the
meantime, see below...

On 2/7/07, Kumar McMillan [EMAIL PROTECTED] wrote:

 Thanks for taking a close look Allen.  Here are some answers...

 On 2/7/07, Allen Bierbaum [EMAIL PROTECTED] wrote:
  Overall it looks very interesting.  I was expecting something much
  more along the vein of DbUnit where you do something like this:
 
  setUp():
 dataset = XmlDataset(dataset.xml)
 db.refresh(dataset)
 
  tearDown():
 db.tearDown()
 
  testMethod():
 # Do normal db queries here relying upon the data to be in there
 # some custom comparisons against loaded datasets are supported

 given the first example datasets, the equivalent with fixture is:

 class TestMyCodeWithData(unittest.TestCase):
 def setUp(self):
 self.data = db.data(events_data)
 self.data.setup()

 def tearDown(self):
 self.data.teardown()

 def testSomething(self):
 joe = Affiliate.get(self.data.affiliates_data.joe.id)
 click = Event.get(self.data.events_data.joes_click.id)
 assert click.affiliate is joe
 assert click.type == self.data.events_data.joes_click.type


 And you're right, I should add an example of this, since people
 coming from DbUnit or rails, etc, will be more familiar with this
 approach.  I started with the decorator example since I believe most
 of the time it is easier and faster to write small test functions
 over classes.  But classes scale better, and often it's cumbersome to
 convert test functions into classes when it comes to that -- hence,
 like the print problem [1], there is a good argument to always use
 test classes.

Agreed.  This is probably the simplest example that people will want
to start with.  You could make it even simpler by using only a single
table, but that would prevent you from showing some of the advanced
features.

 [1] http://www.python.org/dev/peps/pep-3105/

 Having said that, this is a good idea.  I've committed
 fixture.DataTestCase, a mixin for use with unittest.TestCase
 derivatives.

I will give this a try and see how it works.

 I should also point out here that in python 2.5 you can write tests like:

 with db.data(events_data) as data:
  joe = Affiliate.get(data.affiliates_data.joe.id)
  click = Event.get(data.events_data.joes_click.id)
  assert click.affiliate is joe
  assert click.type == data.events_data.joes_click.type

I have no idea how this works. I guess that means I need to learn a
bit of python 2.5 soon. :)

Until then, it is nice to know that it can be even easier once I
understand python 2.5.
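
From a quick skim of PEP 343 after writing the above, my rough
understanding is that the with form is sugar for a try/finally around
setup/teardown.  An untested sketch of what I take the equivalent to
be:

data = db.data(events_data)
data.setup()                 # what __enter__ presumably does here
try:
    pass                     # ... test body using data ...
finally:
    data.teardown()          # what __exit__ presumably does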

 As far as the XML approach -- I dislike this because I think fixtures
 are meant to be hand-coded and editable (in most cases).  XML is hard
 to edit; the rails approach of using YAML is good, but python code
 is as good as YAML if you ask me.  Someone who feels strongly can
 submit a patch to me for loading fixtures in XML or YAML.

I understand your hesitation.  Maybe I could suggest that you just
think about adding some sort of loader plugin support, similar to the
IDataSet interfaces in DbUnit
(http://dbunit.sourceforge.net/components.html).  This could provide a
point of extension for people who may want to load XML, YAML, or even
excel files in the future. :)

  So far the fixture code looks a little magic to me but that may be
  because I do not use SessionContext or assign_mapper in my code.  I
  think it would be helpful to see a full example of a simple test case
  class (using unittest) that only uses the standard SA features found
  in the SA tutorial.  This may help separate the SA plugin magic from
  the fixture magic.

 I admit the discovery of table objects is magical, so yes, I should
 come up with a better example.  Also, good point about not using
 extensions.  I just committed an update so that you can create a
 fixture purely from a session:

 db = SQLAlchemyFixture(session=sqlalchemy.create_session())

 keep in mind that if you need to init the session elsewhere, you can ...

 db = SQLAlchemyFixture()
 # anytime before db.data().setup() ...
 db.session = my_session

In my current system I have a single global session that is used for
everything.  Is there any reason you can see that I could not just
reuse this session in all the test cases, or should I be creating a
new one each time?

  This command looks interesting and I need to try it on some of my
  real data.  One question though, it looks like this generates python
  code correct?  Have you thought about supporting the option of
  generating some metadata xml file that can be loaded directly.  This
  may make maintenance a little easier and the datasets less verbose.
  It could also allow for hand editing of the xml datasources when that
  makes sense.

 I think I addressed this above.  If you are talking about tons and
 tons of data, then yes I can see how python code might not make

[sqlalchemy] Re: Anyone using migrate?

2007-02-01 Thread Allen Bierbaum

On 2/1/07, Evan Rosson [EMAIL PROTECTED] wrote:

  I was considering the use of migrate (http://erosson.com/migrate/) for
  a new project using SA and I just wondered if anyone else is using
  it?

 I'm using it, but then, I wrote it. It's worked well for me, though
 I'm not sure how widely used it is... the mailing list is pretty
 quiet.

I noticed the list was quiet.  It looks like there have only been a
few posts.  Is that still the best forum for migrate questions, or
would it be better to post them to this list to reach a wider
audience?

  Are there any plans to integrate this support into a future
  version of SA?

 Nope. Discussed this a while back, and the decision was to keep things
 separate; the SA core has plenty to do already.

Sounds good.  Migrate looks like a great tool.  Hopefully more people
will start using it to manage their database migrations with SA.

-Allen


 





[sqlalchemy] Re: Persistence Layer best practices with SQLAlchemy

2007-01-27 Thread Allen Bierbaum

On 1/26/07, Michael Bayer [EMAIL PROTECTED] wrote:

 On Jan 25, 7:28 pm, Allen [EMAIL PROTECTED] wrote:
  The basic idea of a persistence layer (as I understand it) is that you
  attempt to isolate applications from the database to the point that the
  application and the data model can vary independently without requiring
  major code changes.  As the paper above puts it, a well-built
  persistence layer for an application should allow moving tables,
  renaming tables, renaming columns, and reorganizing tables without
  affecting the applications that access them.

 yeahto be honest, one thing that makes SA unique from most other
 ORMs ive seen is that its specifically *not* trying to accomplish that.
  as you said, youre not so optimistic about it either.  my experience
 is that those who argue about the object-relational impedance
 mismatch, for which you can find endless blog/academic articles about,
 seem to be obsessed with a completely clean separation of objects and
 database.  and that implies that theyd like their application to scurry
 along doing everything it does without nary a shadow of a relational
 concept being revealedso...why use a database at all then ?

I agree that a complete separation is probably either a) impossible or
b) going to lead to code that is so complex and difficult to maintain
that it removes any advantage gained.

I think there may be a happy medium somewhere in the middle, though,
and I am hopeful that SA can help out there.  I think the object and
data models need to be fairly close, but I would hope that the
software could allow for small variations so both the data model and
the object model can be refactored and improved independently.

For example, SA's ability to map attribute names to columns that are
named differently allows some of this.  It also appears that being
able to map a class to a joined relationship could allow for some
very interesting capabilities as well.
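
A quick sketch of the kind of renaming I mean, reusing the users table
from my other threads (untested, attribute name chosen arbitrarily):

mapper(User, users_table, properties = {
    # expose the user_name column under a friendlier attribute name
    'name' : users_table.c.user_name,
})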

What I am looking for is guidance about how people use SA to try to
reach this middle ground, or references to any
modules/plugins/extensions that could assist in this regard.  I would
rather not reinvent the wheel, so to speak, and I am sure that other
people out there have already created better wheels than I could
dream of. :)

 SA's goal is absolutely not to create a programming paradigm or
 framework which allows pure separation of application/persistence
 concerns...such a framework could certainly build on top of SA, but
 within SA we are providing more primitive operations than that.  The
 goal of SA is strictly to provide a rich framework for issuing SQL
 statements to a database.  the ORM part of it, which is totally
 optional, then takes a straightforward approach to mapping Python
 classes to database tables and other selectables, and provides maybe
 the *beginning* of an abstractional layer, but thats about it.  it
 assumes Python is a flexible enough language that its up to the larger
 framework to figure out where to hide the session.query() calls, if
 thats whats desired.

Agreed.  Python does allow a degree of flexibility that makes a pure
separation less necessary.

As a side note, I am trying to see where ActiveMapper would fit into
all of this.  At first I thought that ActiveMapper would tie the
classes more closely to the data model, but after looking at it
further I am starting to think that it still has all the power and
flexibility of the other methods of using SA and simply encapsulates
the Table and Mapper creation in a single unit.  Can anyone tell me if
this is a correct assessment?


 personally I dont really believe in a pure abstractional layer and i
 think the more time one spends being obsessed with it, the less one
 ends up getting done.  lump it in there with all those other silver
 bullets.

Agreed.  The pursuit of silver bullets is a fool's errand.  I just
hope to avoid picking up SA and shooting myself in the foot because of
a lack of understanding. :)

Thank you for your comments.

-Allen



 

