[sqlalchemy] Sqlautocode 0.7 released

2011-08-20 Thread percious
I released sqlautocode 0.7 today.  This release was based on feedback
from a few users, and adds support for sqlalchemy 0.7 and python 2.7.
Here's a rundown of the feature changes since the last release:

0.7 (8-20-2011)

  SA 0.7 Support
  Python 2.7 Support
  Declarative supports tables with compound foreign keys.
  Declarative supports tables with no primary keys.
  Declarative supports limiting generation to specific tables. (-t option)
  Declarative is tolerant of bad relations. (throws warnings)

As you can see, the declarative part of the application has been
beefed up significantly.

In case you are not familiar with sqlautocode, it allows the user to
create an SQLAlchemy model (table or declarative definitions) from a
target database, which provides a simpler path for project startup.
sqlautocode is also intended to be an API library, meaning you can use
it programmatically to create models dynamically without outputting
source code; admittedly, the documentation for this API is lacking.

To use:

$ easy_install sqlautocode

$ sqlautocode --help

Usage: autocode.py database_url [options, ]
Generates Python source code for a given database schema.

Example: ./autocode.py postgres://user:password@myhost/database -o out.py

Options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output=OUTPUT
                        Write to file (default is stdout)
  --force               Overwrite the output file if it already exists
  -s SCHEMA, --schema=SCHEMA
                        Optional, reflect a non-default schema
  -t TABLES, --tables=TABLES
                        Optional, only reflect this comma-separated list of
                        tables. Wildcarding with '*' is supported, e.g.:
                        --tables account_*,orders,order_items,*_audit
  -b TABLE_PREFIX, --table-prefix=TABLE_PREFIX
                        Prefix for generated SQLAlchemy Table object names
  -a TABLE_SUFFIX, --table-suffix=TABLE_SUFFIX
                        Suffix for generated SQLAlchemy Table object names
  -i, --noindexes, --noindex
                        Do not emit index information
  -g, --generic-types   Emit generic ANSI column types instead of
                        database-specific ones
  --encoding=ENCODING   Encoding for output, default utf8
  -e, --example         Generate code with examples of how to access data
  -3, --z3c             Generate code for use with z3c.sqlalchemy
  -d, --declarative     Generate declarative SA code
  -n, --interactive     Generate an interactive example in your code
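The '*' wildcarding in --tables is shell-style glob matching. A minimal
sketch of that kind of filter, using Python's stdlib fnmatch (illustrative
only; sqlautocode's actual matching code may differ, and the table names
here are made up):

```python
from fnmatch import fnmatchcase

# Hypothetical table names to filter; the patterns mirror the --tables example above.
tables = ["account_users", "account_roles", "orders", "order_items",
          "payments_audit", "sessions"]
patterns = ["account_*", "orders", "order_items", "*_audit"]

# Keep a table if it matches any pattern, preserving the original order.
selected = [t for t in tables if any(fnmatchcase(t, p) for p in patterns)]
print(selected)  # ['account_users', 'account_roles', 'orders', 'order_items', 'payments_audit']
```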

sqlautocode can be found on pypi at:

http://pypi.python.org/pypi/sqlautocode/0.7

cheers.
-chris

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Sprox 0.6.6 Released

2009-09-24 Thread percious

I lied, we found a few more ways to tweak sprox 0.6.  (www.sprox.org)

Thanks to
Alessandro Molina (amol) for providing a speed enhancement when
rendering dictionaries.

Thanks to Temmu Yli-Elsila for his help in debugging the preventCache
problem with DojoGrid in IE.  Sprox will now work properly with IE 7
for the grid, but you will need tw/tw.dojo 0.9.8, which I expect to
release tomorrow.

http://pypi.python.org/pypi/sprox/0.6.6

cheers.
-chris




[sqlalchemy] Postgres, reflection, and search_path

2008-06-20 Thread percious

Hey guys,

I have a postgres database which requires me to set search_path to
'my_db' before I can get a proper table listing.

I have written a schema for this database, but what I would like to do
is compare my schema against the existing database, and make sure that
all my tables and columns jive.  I have tried something like this:


from sqlalchemy import MetaData, create_engine

metadata = MetaData()
engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
engine.execute("set search_path to 'my_db'")
metadata.bind = engine
metadata.reflect()

print metadata.tables.keys()

which never returns the tables I desire.

Does anyone have any pointers?

Thanks,
-chris



[sqlalchemy] Re: Postgres, reflection, and search_path

2008-06-20 Thread percious

like this?

from sqlalchemy import MetaData, create_engine
metadata = MetaData()
engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
connect = engine.connect()
connect.execute("set search_path to 'my_db'")
metadata.reflect(bind=connect)

(does not work)

On Jun 20, 2:47 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 its likely a connection specific thing.  do it on a Connection, then  
 send that as bind to metadata.reflect().

 On Jun 20, 2008, at 4:45 PM, percious wrote:



  Hey guys,

  I have a postgres database which requires me to set search_path to
  'my_db' before I can get a proper table listing.

  I have written a schema for this database, but what I would like to do
  is compare my schema against the existing database, and make sure that
  all my tables and columns jive.  I have tried something like this:

     from sqlalchemy import MetaData, create_engine
     metadata = MetaData()
     engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
     engine.execute("set search_path to 'my_db'")
     metadata.bind = engine
     metadata.reflect()

    print metadata.tables.keys()

  which never returns the tables I desire.

  Does anyone have any pointers?

  Thanks,
  -chris



[sqlalchemy] Re: Postgres, reflection, and search_path

2008-06-20 Thread percious

Never mind, that got it.  Thanks, Mike, for your customarily punctual
responses.

BTW, anyone have a need for a db schema comparison tool?  I thought
there was one out there, but I was dubious about the source.

cheers.
-chris

On Jun 20, 3:06 pm, percious [EMAIL PROTECTED] wrote:
 like this?

     from sqlalchemy import MetaData, create_engine
     metadata = MetaData()
     engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
     connect = engine.connect()
     connect.execute("set search_path to 'my_db'")
     metadata.reflect(bind=connect)

 (does not work)

 On Jun 20, 2:47 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  its likely a connection specific thing.  do it on a Connection, then  
  send that as bind to metadata.reflect().

  On Jun 20, 2008, at 4:45 PM, percious wrote:

   Hey guys,

    I have a postgres database which requires me to set search_path to
   'my_db' before I can get a proper table listing.

   I have written a schema for this database, but what I would like to do
   is compare my schema against the existing database, and make sure that
   all my tables and columns jive.  I have tried something like this:

      from sqlalchemy import MetaData, create_engine
      metadata = MetaData()
      engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
      engine.execute("set search_path to 'my_db'")
      metadata.bind = engine
      metadata.reflect()

     print metadata.tables.keys()

   which never returns the tables I desire.

   Does anyone have any pointers?

   Thanks,
   -chris



[sqlalchemy] Re: Postgres, reflection, and search_path

2008-06-20 Thread percious

that also works.  Thanks mike.

On Jun 20, 3:13 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 20, 2008, at 5:06 PM, percious wrote:



  like this?

     from sqlalchemy import MetaData, create_engine
     metadata = MetaData()
     engine = create_engine('postgres://[EMAIL PROTECTED]/Target')
     connect = engine.connect()
     connect.execute("set search_path to 'my_db'")
     metadata.reflect(bind=connect)

  (does not work)

 that would be it.  but search_path shouldn't affect anything SQLA does
 regarding reflection.  don't you just want to send schema='my_db' to
 metadata.reflect()?
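Michael's point, that you can qualify reflection by schema name instead of
mutating session state like search_path, generalizes across backends. A
stdlib sqlite3 sketch of the same idea, using ATTACH to create a second
named schema (an illustrative analogue only, not Postgres):

```python
import sqlite3

# Two in-memory "schemas": the default 'main' plus an attached one named my_db.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS my_db")
conn.execute("CREATE TABLE main.users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE my_db.orders (id INTEGER PRIMARY KEY)")

# Listing tables per schema is explicit -- no search-path state involved.
main_tables = [r[0] for r in conn.execute(
    "SELECT name FROM main.sqlite_master WHERE type='table'")]
my_db_tables = [r[0] for r in conn.execute(
    "SELECT name FROM my_db.sqlite_master WHERE type='table'")]
print(main_tables, my_db_tables)  # ['users'] ['orders']
```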



[sqlalchemy] Summer Python Internship - National Renewable Energy Laboratory

2008-04-24 Thread percious

Student Intern – Scientific Computing Group 5900 5900-7259

A student internship is available in the National Renewable Energy
Laboratory's (NREL) Scientific Computing Group.  NREL is the nation's
primary laboratory for research, development and deployment of
renewable energy and energy efficiency technologies. The intern will
be supporting work concerning management of scientific and technical
data.  Our data group is cutting-edge with respect to capturing
rapidly changing scientific metadata and allowing the scientists to
relate different kinds of data in a meaningful way.
We have an immediate opening for a summer student internship with
possible extension to one year in our Golden, Colorado office.  The
position would be part-time (15 - 25 hours per week) during the school
year and/or full time during the summer.


DUTIES: Will include working with researchers on techniques to enable
the capture and storage of technical data in a scientific setting.
Your role in our development team would be to support data harvesting
using existing software, and develop new visualization techniques for
existing data sets.

DESIRED QUALIFICATIONS: Undergraduate or graduate student in computer
science or a related field, with demonstrated experience in programming,
databases, and software development.  Experience using agile techniques
and test-driven development.  Demonstrated use of unit testing.
Experience with major dynamic languages like Python, Ruby, or C#.


PREFERRED: Demonstrated good writing skills and computer skills,
specifically including programming in python and database use.
Experience with systems related to management of scientific data.

Candidate must be a US citizen.

Qualified candidates should e-mail their resume to:
Laura Davis
NREL, Human Resources Office
Reference:  Req. #5900-7259
E-Mail:  [EMAIL PROTECTED]

=

feel free to email me with any questions.

cheers.
-chris



[sqlalchemy] sqlalchemy.org looks down

2008-03-24 Thread percious

Any reason for this?

-chris



[sqlalchemy] Re: sqlalchemy.org looks down

2008-03-24 Thread percious

yep, and fast as ever.

On Mar 24, 12:29 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 its up ?

 On Mar 24, 2008, at 2:25 PM, Rick Morrison wrote:

  It's still down for me

  On Mon, Mar 24, 2008 at 2:09 PM, Michael Bayer [EMAIL PROTECTED]
   wrote:

  not sure why it was down, apache became unresponsive.  restart fixed
  it.  looking at the logs now

  On Mar 24, 2008, at 12:48 PM, percious wrote:

   Any reason for this?

   -chris



[sqlalchemy] Re: Dynamically adding MapperExtension

2007-03-13 Thread percious

I think I came up with a decent solution for this:  I do something
like:

extension = MyMapperExtension()
for mapper in mapper_registry.values():
    mapper.extension = extension
    mapper._compile_extensions()

if you want all classes currently defined in the metadata to pick up
your new extension.  This is actually working quite well.

cheers.
chris

On Feb 9, 11:33 am, Michael Bayer [EMAIL PROTECTED] wrote:
 I should add that you *can* be more creative, and construct each mapper
 with a MapperExtension at the beginning which later can be enabled with
 a "hello world" callable.  i.e.

 class MyExt(MapperExtension):
     def __init__(self):
         self.func = None
     def after_insert(self, ...):
         if self.func:
             self.func()

 extension = MyExt()
 mapper(foo, bar, extension=extension)
 mapper(foo2, bar2, extension=extension)
 mapper(foo3, bar3, extension=extension)

 ... do stuff ...

 def helloworld():
     print "hello world"
 extension.func = helloworld

 ... do stuff ...

 percious wrote:

  question.

  Lets say I have a series of table definitions, and a series of objects
  linked to the tables with a bunch of mappers.

  First question:  Is there a way to get from the table definitions in
  the metadata to the Mapper?

  Second question:  If I create a MapperExtension, can I then link it to
  the mapper associated with a table?

  What I want to do is create a simple application that goes through the
  tables defined within a metadata, and create an extension so that
  every time a table entry is added it prints 'hello world' to the
  screen.

  TIA
  -chris
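The pattern in Michael's reply, registering one shared extension everywhere
and then enabling its behavior later by assigning a callable, can be
sketched in plain Python with no SQLAlchemy (all names here are
illustrative stand-ins, not SA objects):

```python
class LateBoundHook(object):
    """Shared hook object; its behavior can be swapped in after registration."""
    def __init__(self):
        self.func = None

    def after_insert(self, row):
        # No-op until a callable is attached.
        if self.func:
            self.func(row)

hook = LateBoundHook()
registry = [hook, hook, hook]  # stand-in for three mappers sharing one extension

seen = []
hook.func = seen.append  # enable the behavior after the fact

registry[0].after_insert("row1")
print(seen)  # ['row1']
```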





[sqlalchemy] Re: Ping

2007-03-06 Thread percious

Problem is, using turbogears I don't really have access to the
pool_recycle without some horrible monkey patch.

FYI, my ping seems to work, so I'm going with that for now.

cheers.
-percious

On Mar 5, 1:00 pm, Sébastien LELONG [EMAIL PROTECTED]
securities.fr wrote:
  I need to ping the Mysql database to keep it alive.

  Is there a simple way to do this with SA?

 Maybe you'd want to have a look at the pool_recycle argument of
 create_engine. It'll make the connection pool be checked if active, and
 potentially recycle the connections if they are too old. Search for "MySQL
 has gone away"; you'll find different threads talking about that.

 Hope it helps.

 Seb
 --
 Sébastien LELONG
 sebastien.lelong[at]sirloon.net
 http://www.sirloon.net





[sqlalchemy] Ping

2007-03-05 Thread percious

I need to ping the Mysql database to keep it alive.

Is there a simple way to do this with SA?

-chris
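One generic way to keep a long-lived connection usable is to "ping" it with
a trivial query before each use and reconnect on failure. A minimal stdlib
sketch of that pattern (sqlite3 stands in for MySQL here, and
get_live_connection is a hypothetical helper, not an SA or TurboGears API):

```python
import sqlite3

def get_live_connection(conn, reconnect):
    """Return conn if it answers a ping query, else a fresh connection.

    `reconnect` is a zero-argument callable producing a new connection;
    this mirrors the "ping before use" idea, not any specific SA API.
    """
    try:
        conn.execute("SELECT 1")
        return conn
    except sqlite3.Error:
        return reconnect()

conn = sqlite3.connect(":memory:")
conn.close()  # simulate a connection the server has dropped

conn = get_live_connection(conn, lambda: sqlite3.connect(":memory:"))
print(conn.execute("SELECT 1").fetchone())  # (1,)
```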





[sqlalchemy] SQA failing on table creation

2007-03-01 Thread percious

Here is the dump:

[EMAIL PROTECTED] percious]$ tg-admin sql create
Creating tables at mysql://percious:[EMAIL PROTECTED]:3306/percious
Traceback (most recent call last):
  File "/home2/percious/bin/tg-admin", line 7, in ?
    sys.exit(
  File "/home2/percious/lib/python2.4/TurboGears-1.0-py2.4.egg/turbogears/command/base.py", line 389, in main
    command.run()
  File "/home2/percious/lib/python2.4/TurboGears-1.0-py2.4.egg/turbogears/command/base.py", line 115, in run
    sacommand(command, sys.argv)
  File "<string>", line 5, in sacommand
  File "/home2/percious/lib/python2.4/TurboGears-1.0-py2.4.egg/turbogears/command/base.py", line 70, in sacreate
    metadata.create_all()
  File "build/bdist.linux-i686/egg/sqlalchemy/schema.py", line 891, in create_all
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 434, in create
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 458, in _run_visitor
  File "build/bdist.linux-i686/egg/sqlalchemy/schema.py", line 911, in accept_schema_visitor
  File "build/bdist.linux-i686/egg/sqlalchemy/ansisql.py", line 682, in visit_metadata
  File "build/bdist.linux-i686/egg/sqlalchemy/schema.py", line 266, in accept_schema_visitor
  File "build/bdist.linux-i686/egg/sqlalchemy/ansisql.py", line 717, in visit_table
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 854, in execute
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 386, in proxy
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 350, in _execute_raw
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 369, in _execute
sqlalchemy.exceptions.SQLError: (OperationalError) (1071, 'Specified
key was too long; max key length is 999 bytes') '\nCREATE TABLE
`Album` (\n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tname
VARCHAR(128), \n\tdirectory VARCHAR(512), \n\t`imageOrder`
VARCHAR(512), \n\t`coverImage` INTEGER, \n\tPRIMARY KEY (id), \n\t
UNIQUE (directory), \n\t FOREIGN KEY(`coverImage`) REFERENCES `Image`
(id)\n)\n\n' ()

Here is the table code:
AlbumTable = Table('Album', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(128)),
    Column('directory', Unicode(512), unique=True),
    Column('imageOrder', Unicode(512)),
    Column('coverImage', Integer, ForeignKey('Image.id')),
)

MySQL version 5.0.27

TIA
-chris
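For what it's worth, error 1071 follows from index byte limits: with
MySQL's utf8 charset each character can take up to 3 bytes when sizing
index keys, so a unique index on a VARCHAR(512) column needs 512 x 3 =
1536 bytes, over the 999-byte cap this server reports. The arithmetic:

```python
# MySQL utf8 reserves up to 3 bytes per character when sizing index keys.
BYTES_PER_UTF8_CHAR = 3
MAX_KEY_BYTES = 999  # key length limit reported in the error above

index_bytes = 512 * BYTES_PER_UTF8_CHAR
print(index_bytes, index_bytes > MAX_KEY_BYTES)  # 1536 True

# A column short enough to index uniquely under this limit:
print(333 * BYTES_PER_UTF8_CHAR)  # 999
```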





[sqlalchemy] drop_all not working for me

2007-02-12 Thread percious

See test case:

from turbogears import config
from turbogears.database import metadata
from turbogears import database

from sqlalchemy import Table, Column, Integer, Unicode
import sqlalchemy.orm

config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
database.bind_meta_data()

Table('t_table', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

Table('t_table_history', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

assert  metadata.tables.keys() == ['t_table', 't_table_history']

metadata.drop_all(tables=['t_table_history',])

#fails
assert  metadata.tables.keys() == ['t_table']





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread percious



On Feb 12, 3:49 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 drop_all() doesnt remove Table instances from the metadata.  the Table
 object is a python object, it only represents your real database
 table.  you may well want to call create_all() again using that same
 Table.

 On Feb 12, 3:20 pm, percious [EMAIL PROTECTED] wrote:

  See test case:

  from turbogears import config
  from turbogears.database import metadata
  from turbogears import database

  from sqlalchemy import Table, Column, Integer, Unicode
  import sqlalchemy.orm

  config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
  database.bind_meta_data()

  Table('t_table', metadata,
Column('id', Integer, primary_key=True),
Column('data', Unicode(255)),
).create()

  Table('t_table_history', metadata,
Column('id', Integer, primary_key=True),
Column('data', Unicode(255)),
).create()

  assert  metadata.tables.keys() == ['t_table', 't_table_history']

  metadata.drop_all(tables=['t_table_history',])

  #fails
  assert  metadata.tables.keys() == ['t_table']

Would it make sense to add the following code at line 905 of your
schema.py?

if tables is None:
    self.tables.clear()
else:
    for table in tables:
        if isinstance(table, str):
            del self.tables[table]
        else:
            for k, t in self.tables.items():
                if t is table:
                    del self.tables[k]
                    break
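A runnable plain-dict sketch of what that patch would do (metadata.tables
is essentially a name-to-Table mapping; Table and remove_tables here are
dummy stand-ins for this sketch, not SQLAlchemy objects):

```python
class Table(object):
    """Minimal stand-in for sqlalchemy.Table, just for this sketch."""
    def __init__(self, name):
        self.name = name

tables = {"t_table": Table("t_table"),
          "t_table_history": Table("t_table_history")}

def remove_tables(registry, to_drop):
    """Remove entries by name (str) or by object identity, as the patch proposes."""
    for table in to_drop:
        if isinstance(table, str):
            del registry[table]
        else:
            # Collect matching keys first so we never delete while iterating.
            for k in [k for k, t in registry.items() if t is table]:
                del registry[k]

remove_tables(tables, ["t_table_history"])
print(sorted(tables))  # ['t_table']
```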





[sqlalchemy] Dynamically adding MapperExtension

2007-02-09 Thread percious

question.

Let's say I have a series of table definitions, and a series of objects
linked to the tables with a bunch of mappers.

First question:  Is there a way to get from the table definitions in
the metadata to the Mapper?

Second question:  If I create a MapperExtension, can I then link it to
the mapper associated with a table?

What I want to do is create a simple application that goes through the
tables defined within a metadata, and create an extension so that
every time a table entry is added it prints 'hello world' to the
screen.

TIA
-chris





[sqlalchemy] Multiple joins and cascading delete

2007-01-18 Thread percious


Consider the following model:
class Comment(object):
    def __init__(self, text):
        self.text = text


CommentTable = Table('Comment', metadata,
    Column('id', Integer, primary_key=True),
    Column('text', Unicode(512)),
)

CommentMapper = mapper(Comment, CommentTable)

class Image(object):
    def __init__(self, name):
        self.name = name


ImageTable = Table('Image', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(128)),
)

#relationship between Images and Comments
ImageCommentTable = Table('ImageComment', metadata,
    Column('imageID', Integer,
           ForeignKey('Image.id'),
           primary_key=True),
    Column('commentID', Integer,
           ForeignKey('Comment.id'),
           primary_key=True),
)

class ImageComment(object): pass

mapper(ImageComment, ImageCommentTable, properties={
    'comments': relation(Comment, lazy=False, cascade="all")
})

imageMapper = mapper(Image, ImageTable,
    properties={'comments': relation(Comment,
        secondary=ImageCommentTable, lazy=False)}
)

and the following code:

i = Image("new")
session.save(i)
session.flush()
c = Comment("new comment")
session.save(c)
session.flush()
i.comments.append(c)
session.save(i)
session.flush()

OK, so that should make an entry in all three tables.

Now, I want to remove the comment:

c = session.query(Comment).get(c.id)

session.delete(c)
session.flush()
session.clear()

Now, if you run the SQL command "select * from ImageComment" you still
see the relationship to the comment that is no longer there.

Perhaps I do not understand cascade properly, but "delete-orphan"
causes a problem on object creation.

help?

-chris
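The lingering association row is exactly what a database-level ON DELETE
CASCADE on the join table prevents. A stdlib sqlite3 sketch of the same
three-table shape (illustrative only; wiring this up through SA relation
cascades is a separate question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE Image   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Comment (id INTEGER PRIMARY KEY, text TEXT);
CREATE TABLE ImageComment (
    imageID   INTEGER REFERENCES Image(id)   ON DELETE CASCADE,
    commentID INTEGER REFERENCES Comment(id) ON DELETE CASCADE,
    PRIMARY KEY (imageID, commentID)
);
INSERT INTO Image   VALUES (1, 'new');
INSERT INTO Comment VALUES (1, 'new comment');
INSERT INTO ImageComment VALUES (1, 1);
""")

# Deleting the comment also removes its row in the join table.
conn.execute("DELETE FROM Comment WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM ImageComment").fetchone()[0])  # 0
```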





[sqlalchemy] Re: Multiple mappers for Many to Many Relationships

2007-01-14 Thread percious


Thanks Mike.  Works fine.  I'll buy you a beer at Pycon if I see you
there... Please let me know if we can fix my original post like I
emailed you about.

-chris

