[sqlalchemy] Re: sqlite unicode/datetime issue in 0.4

2007-08-16 Thread JP


  Is there a different way to do this that's compatible with both
  versions, or is there a new 0.4 way of doing this, or have I just been
  doing something wrong all along and only 0.4 is catching it?

 DateTime objects in sqlite currently expect a datetime.datetime
 object, and not a string.  previous versions accepted both.  a lot of
 people are getting this issue so i think im going to see what
 psycopg2 and mysqldb do, and emulate them.  historically ive been
 against hardcoding a particular string format.

What's odd here is that I'm not setting the value myself -- it's
getting set in pre_exec by the column default, which (I believe) is
running the func in sqlite and returning the results, which I'd think
should wind up with a datetime.datetime. I'll try to put together a
minimal test case that shows what's happening.

JP


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: sqlite unicode/datetime issue in 0.4

2007-08-16 Thread JP

Yup, that fixed things. Thanks!

On Aug 16, 10:49 am, Michael Bayer [EMAIL PROTECTED] wrote:
 ok thats in r3322





[sqlalchemy] sqlite unicode/datetime issue in 0.4

2007-08-15 Thread JP

I'm (finally) working on 0.4 compatibility in a couple of small
applications at work. One issue that is cropping up is that when using
sqlite, columns that are defined like this:

Column('established', TIMESTAMP(timezone=True), nullable=False,
  default=func.current_timestamp(type=TIMESTAMP)),

which work correctly in 0.3.10, cause this error in 0.4:

  File "./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py", line 245, in pre_execution
    self.pre_exec()
  File "./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py", line 258, in pre_exec
    self.parameters = self.__convert_compiled_params(self.compiled_parameters)
  File "./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py", line 228, in __convert_compiled_params
    parameters = parameters.get_raw_list(processors)
  File "./sqlalchemy_trunk/lib/sqlalchemy/sql.py", line 867, in get_raw_list
    return [
  File "./sqlalchemy_trunk/lib/sqlalchemy/databases/sqlite.py", line 41, in process
    return value.strftime(self.__format__)
AttributeError: 'unicode' object has no attribute 'strftime'

Is there a different way to do this that's compatible with both
versions, or is there a new 0.4 way of doing this, or have I just been
doing something wrong all along and only 0.4 is catching it?

JP
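For what it's worth, the failure mode can be reproduced with the stdlib sqlite3 driver alone; a sketch (not SQLAlchemy code) of why the 0.4 DateTime processor chokes:

```python
# Sketch using only the stdlib sqlite3 driver (not SQLAlchemy itself):
# SQLite has no native datetime type, so CURRENT_TIMESTAMP comes back
# as a string, and a result processor that assumes a datetime.datetime
# object blows up exactly as in the traceback above.
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
value = conn.execute("SELECT CURRENT_TIMESTAMP").fetchone()[0]
print(type(value))  # <class 'str'>, not datetime.datetime

# Strings have no strftime(), hence
# "'unicode' object has no attribute 'strftime'" on Python 2.
assert not hasattr(value, "strftime")

# Parsing the string first yields the object the processor expects.
parsed = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
assert hasattr(parsed, "strftime")
```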





[sqlalchemy] Re: a renaming proposal

2007-07-27 Thread JP


 So here's what I was thinking:

 datasource = create_datasource('postgresql:///test')
 connection = datasource.connect()

+0 -- I think it's moderately better than engine but for me, not
enough to want to change, though I wouldn't mind seeing it changed.

 catalog = Catalog()
 Table('foo', catalog, autoload=True)
 catalog.bind = datasource

 catalog.bind = 'sqlite:///'

+1

Catalog is, I think, the name we were all trying to come up with back
when metadata first appeared. Really makes it clear what the function
of the object is. Very nice.

JP





[sqlalchemy] Re: DynamicMetaData question

2007-03-14 Thread JP


 can I have some feedback from the list who is using the thread-local
 capability of DynamicMetaData, and/or are using global_connect() ?
 (another feature i didnt like but it was me bowing to early pressure
 from TG users).

Probably this will be no surprise, since I contributed most of the
original ProxyEngine. ;)

I use DynamicMetaData exactly as you described the main pylons/TG use
case: I define tables at the module level, using a DynamicMetaData
instance, and then call meta.connect(uri) at the start of each
request.  For me, this is the most sensible way to handle things in a
WSGI-friendly web app. Minimal overhead, maximal flexibility.
meta.connect(uri, threadlocal=True) or meta.connect_threadlocal(uri)
would be ok, but I think worse as an API since they put too much
responsibility on the caller and cause too much repetition.

JP





[sqlalchemy] Re: fetchmany() vs sqlite

2007-03-07 Thread JP

I filed a ticket with a patch:

http://www.sqlalchemy.org/trac/ticket/505

JP

On Mar 7, 11:17 am, Michael Bayer [EMAIL PROTECTED] wrote:
 OK its looking like having it as positional might just be what they
 all commonly had in mind...

 On Mar 7, 2007, at 5:15 AM, Tim Golden wrote:



  On Mar 6, 11:37 pm, Michael Bayer [EMAIL PROTECTED] wrote:
  OK who can try Mysql, MS-SQL, Oracle, Firebird for me ?  while it
  seems like it should work positionally for all of them, it sort of
  seems it should work keyword-wise as well if the DBAPI authors
  actually read the spec they were developing for.

  MSSQL with pyodbc gives:

  Traceback (most recent call last):
    File "<stdin>", line 1, in ?
    File "c:\work_in_progress\sqlalchemy\lib\sqlalchemy\engine\base.py", line 982, in fetchmany
      rows = self.cursor.fetchmany(size=size)
  TypeError: fetchmany() takes no keyword arguments

  MSSQL with adodbapi works ok

  MSSQL with pymssql works ok

  The pyodbc cursor.fetchmany *does* allow the
  rowcount as a positional argument:

  <code>
  from sqlalchemy import *
  db = create_engine ("MSSQL://VODEV1/DEV")
  q = db.raw_connection ().cursor ()
  q.execute ("SELECT * FROM wb_parcels")
  q.fetchmany (2)
  # [<pyodbc.Row object at 0x00A7A728>, <pyodbc.Row object at 0x00A7A368>]
  </code>

  TJG
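The portable pattern the patch settles on can be sketched with the stdlib sqlite3 driver (an illustration, not the actual SQLAlchemy change): pass the row count positionally, since drivers like pyodbc reject the size= keyword.

```python
# Sketch with the stdlib sqlite3 driver: per DBAPI, fetchmany takes the
# row count as its first argument, and the positional call is the form
# every driver accepts (pyodbc rejects the size= keyword).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
cur.execute("SELECT n FROM t ORDER BY n")

rows = cur.fetchmany(2)  # positional row count: portable across drivers
print(rows)  # [(0,), (1,)]
```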





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

Yeah, I think I explained badly.

What I was trying to show with the sqlalchemy vs postgres query logs
is the extra BEGIN that psycopg2 sends before the SELECT that
sqlalchemy executes. The BEGIN is implicitly added by psycopg2 because
it's in its default transaction isolation state of read committed --
which I didn't know was the DBAPI default, so strike what I said
about changing any default behavior.

But it would be nice to have total control over the transactional
state of the connection, so that when I know I'm just doing a select
or two I don't incur the overhead of a BEGIN I know is useless. As
things stand I can't do that, because do_begin isn't implemented for
any of the dialects. I guess that's because sqlalchemy depends on the
underlying DBAPI implementation to send BEGIN before the first
query that falls outside of a transaction?

I noticed a TODO about implementing true autocommit behavior, which is
what I need -- a way to tell the dbapi module *not* to start those
implicit transactions. If the engine/connection could be put into true
autocommit mode, and issue connection.begin() from do_begin when in
that mode, then I could do everything I want to do and I don't think
anything would break, since the orm calls do_begin before each flush.
I know how to change the isolation level in the sqlite and postgres
dialects -- would you be interested in a patch or work on a branch
where I added an isolation_level property to the engine, implemented
it for those two, and for those two dialects had do_begin issue a
connection.begin() when the isolation_level was set to autocommit? I
could probably tackle mysql as well, but I have no access to mssql or
oracle, so I can't do those.

Hopefully I'm making some more sense this time...

JP


On Feb 12, 12:42 am, Michael Bayer [EMAIL PROTECTED] wrote:
 im confused.  the django thread seems to be saying that it is setting
 psycopg2 *into* autocommit mode, which causes the problem; the default
 setting of autocommit for DBAPI (and maintained in SA) is False.  When
 i wrote frameworks years ago I always thought autocommit mode was
 the way to go but i since learned that issues like this arise  so SA
 *never* uses autocommit mode on the connection; instead it implements
 its own autocommitting behavior in a manner similar to hibernate -
 it looks at the statement being executed, determines if its a CRUD/
 CREATE/DROP expression, and then explicitly calls COMMIT if no SA
 transaction is already in progress.

 the log you have above doesnt prove that anything unusual is going on,
 since you are illustrating a transactional operation, then a SELECT,
 then another transactional operation, then another SELECT. SA does an
 explicit COMMIT for the CREATE statements since they are required to
 complete the table creation operation.

 this test confirms that psycopg2 defaults to autocommit as false and
 doesnt do anything with transaction modes after the connection is
 opened:

 import psycopg2 as psycopg

 conn = psycopg.connect(user='scott', password='tiger',
 host='127.0.0.1', database='test')

 for x in range(0, 5):
     curs = conn.cursor()
     curs.execute("SELECT 1")
     curs.close()

 log output:

 LOG:  statement: SET DATESTYLE TO 'ISO'
 LOG:  statement: SHOW client_encoding
 LOG:  statement: SHOW default_transaction_isolation
 LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: ABORT

 one connection, five new cursors, only one set of setup
 corresponding to the connection.

 On Feb 9, 3:52 pm, JP [EMAIL PROTECTED] wrote:

  I noticed this thread on django-devs:

 http://groups.google.com/group/django-developers/browse_frm/thread/52...

  Which notes that psycopg2 by default starts transactions behind the
  scenes, even for select statements. If you happen to be running a web
  app where each hit starts a new cursor, and you only do a few selects,
  this adds a *lot* of overhead relative to autocommit.

  I wondered if sqlalchemy suffered from the same problem with psycopg2,
  and it does. Take a look at what sqlalchemy thinks it was doing for a
  short example, vs what the postgres query log contains.

  The code:

   import sqlalchemy as sa
   meta = sa.DynamicMetaData()
   users = sa.Table('users', meta, sa.Column('id', sa.Integer, 
   primary_key=True, autoincrement=True), sa.Column('name', sa.String(50)))
   meta.connect('postgres://snip', debug=True)
   meta.create_all()
   users.select().execute()

  sqlalchemy query log:
  select relname from pg_class where lower(relname) = %(name)s
  CREATE TABLE users (
  id SERIAL NOT NULL,
  name VARCHAR(50),
  PRIMARY KEY (id)
  )
  COMMIT
  SELECT users.id, users.name FROM users

  postgres query log:
  LOG:  statement: SET DATESTYLE TO 'ISO'
  LOG
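The driver-level behavior being debated here can be demonstrated without postgres at all; the stdlib sqlite3 module does the same thing (a sketch, not psycopg2 itself):

```python
# Sketch with the stdlib sqlite3 driver (psycopg2 behaves analogously,
# with BEGIN): in the default non-autocommit DBAPI mode the driver
# opens an implicit transaction before your statement; in true
# autocommit mode (isolation_level=None) it does not.
import sqlite3

# Default mode: an implicit BEGIN precedes the INSERT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
print(conn.in_transaction)  # True

# True autocommit: no implicit BEGIN; statements take effect at once.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (n INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
print(auto.in_transaction)  # False
```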

[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

 like the example illustrates, there is no
 BEGIN being issued for every SELECT statement when using psycopg2 in
 non-autocommit mode, which applies to SQLAlchemy as well.   therefore
 there is no performance bottleneck; this is a django issue only.

I guess we're reading the example differently. I see this BEGIN:

LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT users.id, users.name FROM users

in the postgres log, but as far as I can tell sqlalchemy is not
sending it; it's being implicitly inserted by psycopg2 because the
isolation level of the connection is read-committed and it sees a
statement outside of a transaction. If the connection were in the
autocommit isolation level, doing the same thing in sqlalchemy would
not result in that BEGIN, only the SELECT would be issued.

  I noticed a TODO about implementing true autocommit behavior, which is
  what I need -- a way to tell the dbapi module *not* to start those
  implicit transactions.

 it doesnt.  run the example - no implicit transaction beyond the
 initial connection (which is required, since it must implement
 rollback() and commit()).

Either I'm very confused or I'm having lots of trouble getting my
point across. In my experience, it is absolutely the case that the
dbapi drivers are starting implicit transactions when they aren't in
autocommit mode -- otherwise I don't see how any transactions would
work correctly, since do_begin is a no-op everywhere -- so if the
dbapi modules aren't inserting the BEGIN to start transactions, I
don't see where it could be coming from. What am I missing? If I'm
wrong about what's going on here, how does sqlalchemy start
transactions when it needs to, and where is that BEGIN before the
select in my example coming from?

I have an example that I think illustrates what I'm talking about
pretty well, so I'm going to file a ticket and attach the example and
the postgres query log I see when I run it. Hopefully that will clear
things up one way or another.

JP
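As for where the BEGIN originates, the driver itself can be caught emitting it; here is a sketch using sqlite3's trace hook in place of the postgres query log (illustrative only, not psycopg2):

```python
# Sketch: catch the driver itself emitting BEGIN, with sqlite3's trace
# hook standing in for the postgres query log. No BEGIN is ever passed
# to execute(); the DBAPI layer injects it, which is why do_begin can
# afford to be a no-op in the dialects.
import sqlite3

statements = []
conn = sqlite3.connect(":memory:")
conn.set_trace_callback(statements.append)

conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

print(statements)  # includes a BEGIN we never issued ourselves
```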





[sqlalchemy] psycopg2 default transaction inefficiency

2007-02-11 Thread JP

I noticed this thread on django-devs:

http://groups.google.com/group/django-developers/browse_frm/thread/521a03a726d526e1/b1bacc5628341129?lnk=gstq=psycopg2rnum=1#b1bacc5628341129

Which notes that psycopg2 by default starts transactions behind the
scenes, even for select statements. If you happen to be running a web
app where each hit starts a new cursor, and you only do a few selects,
this adds a *lot* of overhead relative to autocommit.

I wondered if sqlalchemy suffered from the same problem with psycopg2,
and it does. Take a look at what sqlalchemy thinks it was doing for a
short example, vs what the postgres query log contains.

The code:

 import sqlalchemy as sa
 meta = sa.DynamicMetaData()
 users = sa.Table('users', meta, sa.Column('id', sa.Integer, 
 primary_key=True, autoincrement=True), sa.Column('name', sa.String(50)))
 meta.connect('postgres://snip', debug=True)
 meta.create_all()
 users.select().execute()

sqlalchemy query log:
select relname from pg_class where lower(relname) = %(name)s
CREATE TABLE users (
id SERIAL NOT NULL,
name VARCHAR(50),
PRIMARY KEY (id)
)
COMMIT
SELECT users.id, users.name FROM users

postgres query log:
LOG:  statement: SET DATESTYLE TO 'ISO'
LOG:  statement: SHOW client_encoding
LOG:  statement: SHOW default_transaction_isolation
LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: select relname from pg_class where lower(relname) =
'users'
LOG:  statement:
CREATE TABLE users (
id SERIAL NOT NULL,
name VARCHAR(50),
PRIMARY KEY (id)
)
LOG:  statement: END
LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT users.id, users.name FROM users

I think it would be better for sqlalchemy to set psycopg2 to
autocommit by default, and implement do_begin in the dialect so that
transactions are only started when desired, rather than implicitly on
the first statement a cursor sees when there's no current
transaction, as seems to be the case now.

Mike, would you be interested in a patch that implemented that
behavior? Does anyone disagree that that would be better?

JP
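A toy version of the proposal, with invented names (this is not SQLAlchemy's actual dialect API), modeled on sqlite3's autocommit mode since psycopg2 isn't needed to show the shape of it:

```python
# Hypothetical sketch of the proposal (class and method names invented,
# not SQLAlchemy's real dialect interface): keep the driver in true
# autocommit and have do_begin issue BEGIN explicitly, so SELECT-only
# work never pays for a transaction it doesn't use.
import sqlite3

class AutocommitDialect:
    def connect(self):
        # isolation_level=None: the driver issues no implicit BEGIN
        return sqlite3.connect(":memory:", isolation_level=None)

    def do_begin(self, conn):
        conn.execute("BEGIN")  # a transaction starts only on demand

dialect = AutocommitDialect()
conn = dialect.connect()
conn.execute("CREATE TABLE t (n INTEGER)")

# A bare SELECT runs with no transaction overhead.
conn.execute("SELECT n FROM t")
assert not conn.in_transaction

# An ORM-style flush opens one explicitly via do_begin.
dialect.do_begin(conn)
conn.execute("INSERT INTO t VALUES (1)")
assert conn.in_transaction
conn.commit()
```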





[sqlalchemy] Re: SA 0.3.4 released

2007-01-24 Thread JP


 I cant keep up with all thats going on at this point, and the trunk
 was just burstingso I need to get out what we have thus far and
 see how far it flies.

Not very far around here, I'm afraid. Just updated and ran my test
suite (which passes with 0.3.3) and I'm getting batches of errors like:

  File (redacted), line 162, in test_offer_interfaces
    offer_resp.options.append(offer_opt)
  File "/home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py", line 418, in append
    self._data_appender(item)
  File "/home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py", line 390, in __getattr__
    return getattr(self.data, attr)
  File "/home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/query.py", line 301, in __getattr__
    raise AttributeError(key)
AttributeError: _data_appender

The relation is a normal one-many, set up using assign_mapper on the
many side:

assign_mapper(CTX, OfferOption, offer_option, properties={
'offer': relation(OfferResponse, backref='options')
})

I've also seen errors with inherited mappers seeming to no longer
populate foreign keys properly on flush, but those are a little harder
to boil down to a simple, non-NDS-violating example.

Should I file a ticket for the _data_appender error? What else can I do
to help track it down?

JP





[sqlalchemy] Re: SA 0.3.4 released

2007-01-24 Thread JP

I filed #438 for the _data_appender thing:
http://www.sqlalchemy.org/trac/ticket/438.

For the other one, I'll try to put together a simple test case outside
of the code that belongs to my employer. I'll also see what I can do
about setting up a buildbot in our system to test our stuff against SA
trunk. Both of those are likely to take a few days; time is a bit tight
this week.

JP

On Jan 24, 11:41 am, Michael Bayer [EMAIL PROTECTED] wrote:
 see, i wish folks would use the trunk more.  yeah, file tickets, and
 every bug fixed becomes a unit test.  this one seems like something
 trivial.  the inheritance thing, try to track that down...we have *so
 many* inheritance tests at this point i cant believe theres still
 basic issues coming up.

 On Jan 24, 2007, at 11:26 AM, JP wrote:



  I cant keep up with all thats going on at this point, and the trunk
  was just burstingso I need to get out what we have thus far and
  see how far it flies.

  Not very far around here, I'm afraid. Just updated and ran my test
  suite (which passes with 0.3.3) and I'm getting batches of errors
  like:

File (redacted), line 162, in test_offer_interfaces
  offer_resp.options.append(offer_opt)
File
  /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py,
  line 418, in append
  self._data_appender(item)
File
  /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py,
  line 390, in __getattr__
  return getattr(self.data, attr)
File /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/query.py,
  line 301, in __getattr__
  raise AttributeError(key)
  AttributeError: _data_appender

  The relation is a normal one-many, set up using assign_mapper on the
  many side:

  assign_mapper(CTX, OfferOption, offer_option, properties={
  'offer': relation(OfferResponse, backref='options')
  })

  I've also seen errors with inherited mappers seeming to no longer
  populate foreign keys properly on flush, but those are a little harder
  to boil down to a simple, non-NDS-violating example.

  Should I file a ticket for the _data_appender error? What else can
  I do
  to help track it down?
 
  JP





[sqlalchemy] Re: SA 0.3.4 released

2007-01-24 Thread JP

I added some info from pdb and from the patch you attached to the
ticket. Both agree that for some reason, the attribute is a Query
instance instead of an InstrumentedList instance, which is more than a
little confusing. I'm trying to come up with a simplified test case,
but so far I haven't been able to reproduce the error outside of the
full context.

JP

On Jan 24, 12:44 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 OK we have a lot of tests that pass with assign_mapper...can you give
 me a little more of a test case for 438 ?  its a weird error since
 not much has changed with attributes...

 On Jan 24, 2007, at 12:12 PM, JP wrote:



  I filed #438 for the _data_appender thing:
 http://www.sqlalchemy.org/trac/ticket/438.

  For the other one, I'll try to put together a simple test case outside
  of the code that belongs to my employer. I'll also see what I can do
  about setting up a buildbot in our system to test our stuff against SA
  trunk. Both of those are likely to take a few days, time is a bit
  tight
  this week.

  JP

  On Jan 24, 11:41 am, Michael Bayer [EMAIL PROTECTED] wrote:
  see, i wish folks would use the trunk more.  yeah, file tickets, and
  every bug fixed becomes a unit test.  this one seems like something
  trivial.  the inheritance thing, try to track that down...we have *so
  many* inheritance tests at this point i cant believe theres still
  basic issues coming up.

  On Jan 24, 2007, at 11:26 AM, JP wrote:

  I cant keep up with all thats going on at this point, and the trunk
  was just burstingso I need to get out what we have thus far and
  see how far it flies.

  Not very far around here, I'm afraid. Just updated and ran my test
  suite (which passes with 0.3.3) and I'm getting batches of errors
  like:

File (redacted), line 162, in test_offer_interfaces
  offer_resp.options.append(offer_opt)
File
  /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py,
  line 418, in append
  self._data_appender(item)
File
  /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/attributes.py,
  line 390, in __getattr__
  return getattr(self.data, attr)
File /home/jhp/work/sqlalchemy_0_3_4/lib/sqlalchemy/orm/
  query.py,
  line 301, in __getattr__
  raise AttributeError(key)
  AttributeError: _data_appender

  The relation is a normal one-many, set up using assign_mapper on the
  many side:

  assign_mapper(CTX, OfferOption, offer_option, properties={
  'offer': relation(OfferResponse, backref='options')
  })

  I've also seen errors with inherited mappers seeming to no longer
  populate foreign keys properly on flush, but those are a little
  harder
  to boil down to a simple, non-NDS-violating example.

  Should I file a ticket for the _data_appender error? What else can
  I do
  to help track it down?
 
  JP





[sqlalchemy] Re: SA 0.3.4 released

2007-01-24 Thread JP

(repeating my last ticket comment)

Ok, I think I've tracked this down. The error is occurring because the
name of the backref is 'options', which as of r2205 is being
overwritten by assign_mapper.

I'm not sure what's the right thing to do here. I don't think
assign_mapper can avoid stepping on backrefs like this one, since (I
think) the backref doesn't exist when assign_mapper is monkeypatching
the domain model class.

JP
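A toy illustration in plain Python (no SQLAlchemy, names invented) of the ordering problem: the monkeypatch claims the name first, and anything wired up later under the same name loses.

```python
# Toy illustration (plain Python, not SQLAlchemy): an assign_mapper-style
# monkeypatch installs an attribute named 'options' on the class, so a
# backref wired up afterwards under the same name is shadowed.
class OfferResponse:
    pass

# Stand-in for assign_mapper adding query helpers to the class.
OfferResponse.options = "query helper"

def install_backref(cls, name):
    # Stand-in for backref wiring: refuses to clobber an existing name.
    if hasattr(cls, name):
        return False
    setattr(cls, name, [])
    return True

# The backref named 'options' can no longer be installed cleanly.
assert install_backref(OfferResponse, "options") is False
```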





[sqlalchemy] Re: New Plugin - AssociationProxy

2006-11-03 Thread JP

This is really awesome. It would be even awesomer if append()/create()
could optionally take keyword arguments that would be passed to the
creator, so that the creator would be able to populate other
association fields if it knows how. I'm thinking of something like:

post.tags.append('awesomer', user=me)

If that seems interesting to others, I can probably whip up a patch
this weekend (though it's been a while since I got my hands dirty on
the insides of sqlalchemy).

JP
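For concreteness, a sketch of the append() shape being proposed -- the names and signature here are hypothetical, not the plugin's actual API:

```python
# Hypothetical sketch of the proposal (not the actual AssociationProxy
# API): append() forwards extra keyword arguments to the creator so it
# can populate other association fields.
class Tag:
    def __init__(self, name, user=None):
        self.name, self.user = name, user

class TagProxy:
    def __init__(self, creator):
        self._creator, self._items = creator, []

    def append(self, value, **kw):
        # extra kwargs pass straight through to the creator function
        self._items.append(self._creator(value, **kw))

post_tags = TagProxy(lambda name, **kw: Tag(name, **kw))
post_tags.append("awesomer", user="me")
print(post_tags._items[0].user)  # me
```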

