[sqlalchemy] Re: Changing lazy property for primary mapper on the fly

2007-03-26 Thread Koen Bok

Ok, thanks!

On Mar 26, 4:26 am, Michael Bayer [EMAIL PROTECTED] wrote:
 no.  do a clear_mappers() and build your mappers again if you need to  
 change the base configuration.
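 A minimal sketch of that rebuild, assuming the 0.3-era top-level
 imports and hypothetical User/Address classes and tables:

 from sqlalchemy import clear_mappers, mapper, relation

 def rebuild_mappers(lazy=True):
     # drop all existing mapper configuration, then recompile it
     clear_mappers()
     mapper(Address, address_table)
     mapper(User, user_table, properties={
         # lazy=True -> lazy loading; lazy=False -> eager loading,
         # globally, for every query from here on
         'addresses': relation(Address, lazy=lazy),
     })

 A debug window like the one described could then call
 rebuild_mappers(lazy=False) to flip the whole app to eager loading.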

 On Mar 25, 2007, at 3:35 PM, Koen Bok wrote:



  I get this, but that's only on a particular query object. That makes
  sense, but is there no way to 'globally' set this in the mapper?

  Koen

  On Mar 25, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 http://www.sqlalchemy.org/docs/datamapping.html#datamapping_selectrelations_options

  On Mar 25, 2007, at 6:50 AM, Koen Bok wrote:

  Is it possible to change the lazy property of a primary mapper of an
  object's relation on the fly, so that from then on it will change the
  eager loading of that relation?

  I want to use this for a small debug window in my app where I can
  change these on the fly to test which settings are optimal for the
  connection.

  Koen





[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-26 Thread Gaetan de Menten

On 3/23/07, Michael Bayer [EMAIL PROTECTED] wrote:

 OK, this is actually something people have asked for a lot.  in the
 beginning, recently, etc.  also for different reasons...i.e.
 convenience, or performance, etc.  So, first off let me start by
 illustrating how this use case is done right now.  Assuming your
 Address mapper has a backref "user" to the User mapper, its just:

 for address in session.query(Address).filter_by(user=someuser).filter(
         address_table.c.postcode == 5000):
     print address.street

Once again, I discover a better way to do something I did the hard way.

 But, the join conditions and bind params which
 have been calculated by LazyLoader are just sitting there; they can
 currently be pulled out with a little non-API attribute access, but
 Ive no problem with adding some API-level accessors to get at the
 Query object calculated for a particular property (i.e. what you are
 using in your patch internally).

Yes, please do, that'd probably solve the problem nicely (see below
how I see things).


 now lets look at the way your patch does it.

 addresses = user.addresses.filter(address_table.c.postcode == 5000)

 seems easy.  right ?  remember that user is now in the session.
 anyone else that queries for user will get that same User
 instance.  but the rest of the app is going to assume normal
 relationship semantics on that collection...which means:

The way I envisioned things, this wouldn't be a problem, because
user.addresses.filter(xxx) would return a new, independent list, which
doesn't affect user.addresses, and is not affected by whether user.addresses
has already been accessed or not. This is not what my patch does, I
know. Sorry for not explaining this in my first mail.

 print someaddress in user.addresses  # <-- FAIL - the address is not present
 user.addresses.remove(someaddress)  # <-- ERROR - the address is not present

 user.addresses.insert(5, someotheraddress)  # <-- FAIL - the list is
 incomplete, ordering will be incorrect

This is only a matter of getattr and __contains__ triggering init,
right? (At least if we exclude the other problems pointed out above.)

 session.flush()  # <-- FAIL - we have to figure out what items were
 added/removed/unchanged from the collection...but the data's
 incomplete!

I haven't mastered SQLAlchemy's internals, but I don't see how that is
different from when the collection is complete?

 so as long as we can agree on the "its a read-only thing" aspect of
 this, we're good to go.  otherwise you have to define for me how all
 those mutating operations are going to work (and even then, its
 additional core complexity im not sure i can add to my support-load).

I'm fine with the read-only aspect of it. What I don't like is the fact
that you have to create a read-only relation (lazyloader/whatever/...) in
advance (i.e. in your mapper), which is IMHO just a dupe of the normal
relation and pollutes the mapper. You'd end up with mappers like this:

 mapper(SomeClass, table, properties={
     'addresses': relation(Address),
     'addresses2': lazyloader(Address),
 })

which is pretty much as ugly as you can get.

On the other hand, I think that combined with a quick way to have
predefined filters it might be a nice addition anyway:

 mapper(SomeClass, table, properties={
     'addresses': relation(Address),
     'local_addresses': lazyloader(Address,
         filter=address_table.c.postcode == 5000),
 })

But in no case does it replace the dynamic, non-polluting use case
I'd like to have. What I had in mind is to reuse normal relations to
get a query. It feels much more natural and cleaner to me. And I think
the best compromise would be something along the lines of:

user.addresses  # use standard relation => read/write
user.addresses.filter(XXX)  # returns a query => read only

the code would probably be cleaner if we did something more explicit like:

user.addresses.query # returns the query object that you can filter, etc...

though, as a user, I'd prefer the first solution.

Wouldn't that be possible? I think it should be. You only need to keep
the deferred approach of the InstrumentedList that I demonstrated in
my patch, so that the whole list is not fetched before we get the
query object, which would ruin the whole idea. Of course it was only a
proof-of-concept patch, but I think it should be fixable.
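
For illustration, a hedged sketch of how the read-only query could be
had today without touching the mapper (the User/Address/address_table
names are assumed, following the filter_by() example quoted above):

def addresses_query(session, user):
    # a fresh Query each time: read-only by construction, and it never
    # touches the user.addresses collection itself
    return session.query(Address).filter_by(user=user)

# usage, mirroring the proposed user.addresses.filter(...):
for address in addresses_query(session, someuser).filter(
        address_table.c.postcode == 5000):
    print address.street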

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-26 Thread Gaetan de Menten

And by the way, if you agree with that direction of things, I'd
happily work on a patch for the query-on-relation thing.

-- 
Gaëtan de Menten
http://openhex.org


[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-26 Thread Michael Bayer


On Mar 26, 2007, at 6:15 AM, Gaetan de Menten wrote:


 though, as a user, I'd prefer the first solution.

 Wouldn't that be possible? I think it should be. You only need to keep
 the deferred approach of the InstrumentedList that I demonstrated in
 my patch, so that the whole list is not fetched before we get the
 query object, which would ruin the whole idea. Of course it was only a
 proof-of-concept patch, but I think it should be fixable.

my various issues with the deferred thing are as follows.   note that
im not putting these out there as "this is why we arent doing it", im
putting them out there as "this is why it makes me uncomfortable".

the current use case of "has my lazy list been loaded?" is:

"addresses" in myobject.__dict__

with the deferred list, we now have to have a list element actually
present there (though maybe not yet loaded).  so detecting when an
attribute requires its callable fired off or not gets thorny...it also
breaks code for those who do it the above way, and we'd need to
add some new way to accomplish the above, which will probably have to
be some messy function call like attribute_manager.is_loaded(myobject,
'addresses').

generally theres all sorts of places where we want to get at the
attribute and fire it off, or not fire it off (internally known as
"passive"), etc., and the awareness of the deferred list there would
have to be more deeply embedded throughout the attributes module for
everything to keep working.  so my discomfort grows that we are
changing the APIs of attributes.py, probably including its public
API, just to suit something that IMHO is strictly for visual appeal.

also, while it looks cleaner, there is still a semantic co-mingling  
that im not very comfortable with.   i.e. that a collection of  
persisted objects on a parent class doubles as a database query  
object. to me the meaning of those two things is entirely different,  
and i think the decline of an API starts with minor conflation of  
concepts.





[sqlalchemy] Re: sqlalchemy orm doesn't create an instance of certain rows

2007-03-26 Thread Michael Bayer

i notice that neither your table DDL nor your mappers have any notion  
of a primary key, so thats not the complete application...whats below  
will throw an error immediately.

but the most likely cause for what youre seeing is that if any  
element of the primary key in a result row is None, no row will be  
loaded.  this behavior can be changed using the allow_null_pks  
option on your mapper.
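
A minimal sketch of that option, with the Node class and node table
names assumed from the thread (0.3-era mapper signature):

mapper(Node, node_table,
       # load a row even when part of a composite primary key is NULL
       allow_null_pks=True)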


On Mar 26, 2007, at 7:17 AM, Karthik Krishnamurthy wrote:


 Hi,
 I find that sqlalchemy isn't creating objects for certain rows.

 For example the following code gives me back an OpsDB.Node instance
 query.select_by(name='konsole12.xx.xx.com')[0]
 whereas
 query.select_by(name='konsole23.xx.xxx.com')[0]

 I have provided below the debugging output gotten by setting
 sqlalchemy.engine and sqlalchemy.orm at DEBUG, the table schema and
 the code that defines the table and the mapper for the said table.

 /kk

 DEBUG info
 ===
 INFO:sqlalchemy.engine.base.Engine.0x..74:SELECT node.cport AS
 node_cport, node.type_id AS node_type_id, node.locshelf AS
 node_locshelf, node.s_time AS node_s_time, node.site_id AS
 node_site_id, node.console_id AS node_console_id, node.locside AS
 node_locside, node.locarea AS node_locarea, node.netswitch_id AS
 node_netswitch_id, node.bmodule2 AS node_bmodule2, node.sport AS
 node_sport, node.node_id AS node_node_id, node.bplug AS node_bplug,
 node.bmodule AS node_bmodule, node.parent_id AS node_parent_id,
 node.os_id AS node_os_id, node.bport2 AS node_bport2, node.backplane
 AS node_backplane, node.prop_id AS node_prop_id, node.status AS
 node_status, node.model_id AS node_model_id, node.locrow AS
 node_locrow, node.serialno AS node_serialno, node.bootbox2_id AS
 node_bootbox2_id, node.m_time AS node_m_time, node.ytag AS node_ytag,
 node.bplug2 AS node_bplug2, node.loccage AS node_loccage,
 node.ponumber AS node_ponumber, node.name AS node_name, node.racksize
 AS node_racksize, node.bootbox_id AS node_bootbox_id, node.locroom AS
 node_locroom, node.notes AS node_notes, node.bport AS node_bport,
 node.locrack AS node_locrack, node.c_time AS node_c_time, node.pdu AS
 node_pdu
 FROM node
 WHERE node.name = ? ORDER BY node.oid
 INFO:sqlalchemy.engine.base.Engine.0x..74:['konsole19.xx..com']
 DEBUG:sqlalchemy.orm.query.Query:instances()
 DEBUG:sqlalchemy.engine.base.Engine.0x..74:Row (30021, 106, 2, 2006,
 91, 106822, u'a', u'E', None, 0, 0, 106818, 0, 0, None, None, 0, u'',
 73, u'active', 179, u'19', u'TES113830', None, u'2006-11-30 19:31:28',
 u'126785', 0, 0, u'', u'konsole19.xx..com', 1, None, 0, u'', 0, 1,
 1135015620, u'')
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 DEBUG:sqlalchemy.orm.strategies.LazyLoader:set class-level lazy loader
 on [EMAIL PROTECTED]
 INFO:sqlalchemy.engine.base.Engine.0x..74:SELECT node.cport AS
 node_cport, node.type_id AS node_type_id, node.locshelf AS
 node_locshelf, node.s_time AS node_s_time, node.site_id AS
 node_site_id, node.console_id AS node_console_id, node.locside AS
 node_locside, node.locarea AS node_locarea, node.netswitch_id AS
 node_netswitch_id, node.bmodule2 AS node_bmodule2, node.sport AS
 node_sport, node.node_id AS node_node_id, node.bplug AS node_bplug,
 node.bmodule AS node_bmodule, node.parent_id AS node_parent_id,
 node.os_id AS node_os_id, node.bport2 AS node_bport2, node.backplane
 AS node_backplane, node.prop_id AS node_prop_id, node.status AS
 node_status, node.model_id AS node_model_id, node.locrow AS
 node_locrow, node.serialno AS node_serialno, node.bootbox2_id AS
 node_bootbox2_id, node.m_time AS node_m_time, node.ytag AS node_ytag,
 node.bplug2 AS node_bplug2, node.loccage AS node_loccage,
 node.ponumber AS 

[sqlalchemy] Re: ResultProxy lowercasing column names

2007-03-26 Thread Mel Collins

On Mar 25, 3:54 pm, Michael Bayer [EMAIL PROTECTED] wrote:
   I've modified my SA again, so ResultProxy.keys uses the name as it
  comes out of the DB, and everything else uses the lower-cased version.
  Again, my stuff still works, but the same test fails as before
  (orm.inheritance5.RelationTest3).

 can I see a patch for how youre doing that ?  just changing the  
 casing of keys() breaks no tests on this end.

 After running some more tests, it seems the extra test failure has
nothing to do with my modification - just running alltests repeatedly
on the default install of SA results in, seemingly randomly, 1-3
failures:
  FAIL: test_check_constraint (sql.constraints.ConstraintTest)
  FAIL: testrelationonbaseclass_j2_data
(orm.inheritance5.RelationTest3)
  FAIL: testrelationonbaseclass_j2_nodata
(orm.inheritance5.RelationTest3)
 The first always fails; each of the other two seems to be 50-50 as to
whether it succeeds or not...
 Arf, now there's another two:
FAIL: testrelationonbaseclass_j1_data (orm.inheritance5.RelationTest3)
FAIL: testrelationonbaseclass_j1_nodata
(orm.inheritance5.RelationTest3)

 I guess it's just inheritance5.RelationTest3 not playing nice. :/

 thats how column names are stored in those databases unless they are  
 quoted.

 Is there any reason not to always quote column names then? Wouldn't
this 'fix' the uppercasing of Oracle/Firebird columns, or am I
misunderstanding the situation (again)...?

 Takk,
 - Mel C





[sqlalchemy] Re: Possible use of pkg_resources plugins?

2007-03-26 Thread Michael Bayer

i think using entry points to load in external database dialects is a  
great idea.

though the current six core dialects i think i still want to load via  
__import__ though since im a big fan of running SA straight out of  
the source directory (and therefore thered be no entry points for  
those in that case).

so probably a check via __import__('sqlalchemy.databases') first,  
then an entry point lookup.  does that work ?
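
For a third-party dialect, registration would then look roughly like
this hypothetical setup.py (package, module and driver names are made
up for illustration):

from setuptools import setup, find_packages

setup(
    name='sqlalchemy-ndbapi',   # hypothetical out-of-tree dialect
    version='0.1',
    packages=find_packages(),
    entry_points={
        'sqlalchemy.databases': [
            # URL driver name on the left, dialect class to load on the right
            'ndbapi = sqlalchemy_ndbapi.base:dialect',
        ],
    },
)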


On Mar 26, 2007, at 11:45 AM, Monty Taylor wrote:


 Hey all,

 I wanted to check and see if a patch would be considered (before I  
 spend
 any time on it) to replace this:

 return getattr(__import__('sqlalchemy.databases.%s' %
 self.drivername).databases, self.drivername)

 from sqlalchemy.engine.url

 with something using the pkg_resources plugin stuff from setuptools?

 I ask, because I'm trying to write a new database engine that's a  
 fairly
 heavy write. (this is the NDB API thing that doesn't use SQL) I'm not
 touching any code so far that isn't in a single file in the databases
 dir, but there are a couple of us who are trying to work on the  
 project
 together. I'd really like to just version control that one file so we
 don't have to branch the whole sqlalchemy source. I also think it  
 might
 be nice to be able to distribute a sqlalchemy database engine without
 having to get it committed to the trunk.

 HOWEVER - I recognize that no one else might care about either of  
 these
 things. I don't think it will be a hard patch or one that will be
 disruptive to the current way of doing things, but I wanted to  
 check if
 it would be rejected out of hand before I bothered?

 Thanks!
 Monty

 





[sqlalchemy] Re: Possible use of pkg_resources plugins?

2007-03-26 Thread Monty Taylor

Michael Bayer wrote:
 i think using entry points to load in external database dialects is a  
 great idea.
 
 though the current six core dialects i think i still want to load via  
 __import__ though since im a big fan of running SA straight out of  
 the source directory (and therefore thered be no entry points for  
 those in that case).
 
 so probably a check via __import__('sqlalchemy.databases') first,  
 then an entry point lookup.  does that work ?

Yes. And I think that's the simplest case anyway - no need to load the
pkg_resources stuff if you don't need it.

I'll see if I can hack that together today.

Thanks!
Monty

 





[sqlalchemy] Re: alias not used with relationship to arbitrary select

2007-03-26 Thread HD Mail


 I think the issue is you cant put a task_status ordering in your  
 Task mapper since that table is not part of its mapping.

 http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN

I'm a bit confused. So with a mapping as follows:

db.mapper(TaskStatus, db.sys_task_status)
db.mapper(Task, db.task,
    properties = {
        'status': relation(TaskStatus, lazy=False),
    }
)

is the only way for me to order by a column in sys_task_status with an
explicit join, like in this example?

query = db.query(model.Task).select_from(
    db.task.join(db.sys_task_status)
).order_by(db.sys_task_status.c.seq_no)

which results in the following SQL:

SELECT sys_task_status_cf27.*, task.*
FROM task JOIN sys_task_status ON sys_task_status.status_code = 
task.status_code
LEFT OUTER JOIN sys_task_status AS sys_task_status_cf27 ON 
sys_task_status_cf27.status_code = task.status_code
ORDER BY sys_task_status.seq_no, sys_task_status_cf27.status_code

I'm trying to get to the following query. It takes half the time of the 
first query.

SELECT sys_task_status_cf27.*, task.*
FROM task LEFT OUTER JOIN sys_task_status AS sys_task_status_cf27 ON 
sys_task_status_cf27.status_code = task.status_code
ORDER BY sys_task_status_cf27.seq_no, sys_task_status_cf27.status_code
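
A hedged sketch of one way to get the single-join query: drive the
eager load from the explicit join instead of letting the mapper add its
own OUTER JOIN. This assumes a contains_eager()-style query option,
which may not be available in the SQLAlchemy version this thread is
about:

query = db.query(model.Task).select_from(
    db.task.join(db.sys_task_status)
).options(contains_eager('status')).order_by(db.sys_task_status.c.seq_no)

# contains_eager tells the mapper to populate Task.status from the
# sys_task_status columns already present in the joined result rows.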





[sqlalchemy] Re: Possible use of pkg_resources plugins?

2007-03-26 Thread Gaetan de Menten

On 3/26/07, Michael Bayer [EMAIL PROTECTED] wrote:

 i think using entry points to load in external database dialects is a
 great idea.

 though the current six core dialects i think i still want to load via
 __import__ though since im a big fan of running SA straight out of
 the source directory (and therefore thered be no entry points for
 those in that case).

For what it's worth, it's perfectly possible to use entry points when
running the project from its own source directory. I do it all the
time, but I'm not sure what the exact condition for that to work is.
Simply having in your source directory a directory named
anything.egg-info containing an entry_points.txt file seems to be
enough.
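
If that observation holds, the file would look roughly like this
(project and module names hypothetical):

# contents of myproject.egg-info/entry_points.txt
[sqlalchemy.databases]
ndbapi = sqlalchemy_ndbapi.base:dialect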


-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] [PATCH] Using entry points to load database dialects

2007-03-26 Thread Monty Taylor

Michael Bayer wrote:
 i think using entry points to load in external database dialects is a  
 great idea.
 
 though the current six core dialects i think i still want to load via  
 __import__ though since im a big fan of running SA straight out of  
 the source directory (and therefore thered be no entry points for  
 those in that case).
 
 so probably a check via __import__('sqlalchemy.databases') first,  
 then an entry point lookup.  does that work ?

Here is a patch that implements use of entry points to load dialects.
The largest change is actually adding a get_dialect to replace the
functionality of get_module, since entry points really want to return
classes, and we only ever use the dialect class from the returned module
anyway...

This does not break code that I have that loads the mysql dialect, and
it does work with my new code that adds a new dialect - although I
suppose it's possible it could have broken something I didn't find.

As a side note, I agree with Gaetan - you can run entry points and stuff
out of the current directory, especially if you use setup.py develop ...
but this code does the entry points second, after a check for the module
the old way.

Monty



=== modified file 'lib/sqlalchemy/engine/strategies.py'
--- lib/sqlalchemy/engine/strategies.py	2007-02-25 22:44:52 +
+++ lib/sqlalchemy/engine/strategies.py	2007-03-26 17:03:13 +
@@ -42,16 +42,16 @@
         u = url.make_url(name_or_url)
 
         # get module from sqlalchemy.databases
-        module = u.get_module()
+        dialect_cls = u.get_dialect()
 
         dialect_args = {}
         # consume dialect arguments from kwargs
-        for k in util.get_cls_kwargs(module.dialect):
+        for k in util.get_cls_kwargs(dialect_cls):
             if k in kwargs:
                 dialect_args[k] = kwargs.pop(k)
 
         # create dialect
-        dialect = module.dialect(**dialect_args)
+        dialect = dialect_cls(**dialect_args)
 
         # assemble connection arguments
         (cargs, cparams) = dialect.create_connect_args(u)
@@ -71,7 +71,7 @@
                 raise exceptions.DBAPIError("Connection failed", e)
         creator = kwargs.pop('creator', connect)
 
-        poolclass = kwargs.pop('poolclass', getattr(module, 'poolclass', poollib.QueuePool))
+        poolclass = kwargs.pop('poolclass', getattr(dialect_cls, 'poolclass', poollib.QueuePool))
         pool_args = {}
         # consume pool arguments from kwargs, translating a few of the arguments
         for k in util.get_cls_kwargs(poolclass):

=== modified file 'lib/sqlalchemy/engine/url.py'
--- lib/sqlalchemy/engine/url.py	2007-03-18 22:35:19 +
+++ lib/sqlalchemy/engine/url.py	2007-03-26 16:47:01 +
@@ -2,6 +2,7 @@
 import cgi
 import sys
 import urllib
+import pkg_resources
 from sqlalchemy import exceptions
 
 """Provide the URL object as well as the make_url parsing function."""
@@ -69,6 +70,23 @@
         s += '?' + "&".join(["%s=%s" % (k, self.query[k]) for k in keys])
         return s
 
+    def get_dialect(self):
+        """Return the SQLAlchemy database dialect class corresponding to this URL's driver name."""
+        dialect = None
+        try:
+            module = getattr(__import__('sqlalchemy.databases.%s' % self.drivername).databases, self.drivername)
+            dialect = module.dialect
+        except ImportError:
+            if sys.exc_info()[2].tb_next is None:
+                for res in pkg_resources.iter_entry_points('sqlalchemy.databases'):
+                    if res.name == self.drivername:
+                        dialect = res.load()
+            else:
+                raise
+        if dialect is not None:
+            return dialect
+        raise exceptions.ArgumentError('unknown database %r' % self.drivername)
+
     def get_module(self):
         """Return the SQLAlchemy database module corresponding to this URL's driver name."""
         try:

=== modified file 'setup.py'
--- setup.py	2007-03-23 21:33:24 +
+++ setup.py	2007-03-26 17:01:51 +
@@ -10,6 +10,10 @@
     url = "http://www.sqlalchemy.org",
     packages = find_packages('lib'),
     package_dir = {'':'lib'},
+    entry_points = {
+        'sqlalchemy.databases': [
+            '%s = sqlalchemy.databases.%s:dialect' % (f, f) for f in
+            ['sqlite', 'postgres', 'mysql', 'oracle', 'mssql', 'firebird']]},
     license = "MIT License",
     long_description = """\
 SQLAlchemy is:



[sqlalchemy] Inserting many, with missing values

2007-03-26 Thread Mel Collins

 When you do a multiple-row insert, such as:
users.insert().execute(
{'user_id':6, 'user_name':'jack', 'password':'asdfasdf'},
{'user_id':7, 'user_name':'ed'},
{'user_id':8, 'user_name':'scott', 'password':'fadsfads'},
{'user_id':9, 'user_name':'bob'},
)
 ...the 'password' field for the second and fourth rows, rather than
being set to None or raising an error, takes the first row's password
value.
 While it is somewhat slack not to provide all the columns for each
row, SA shouldn't really duplicate the first row's values, nei?
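
 Until that changes, a hedged workaround is to normalize the rows
yourself so every dict carries the full key set, with explicit Nones
(same users table as above):

rows = [
    {'user_id': 6, 'user_name': 'jack', 'password': 'asdfasdf'},
    {'user_id': 7, 'user_name': 'ed'},
    {'user_id': 8, 'user_name': 'scott', 'password': 'fadsfads'},
    {'user_id': 9, 'user_name': 'bob'},
]
keys = set()
for row in rows:
    keys.update(row)
# missing columns become an explicit None instead of silently
# inheriting the first row's value
users.insert().execute(*[dict((k, row.get(k)) for k in keys)
                         for row in rows])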

 I'm this | | far from submitting a bug report this time. ;)

 Takk,
 - Mel C





[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-26 Thread Rick Morrison


 i vote Query().


No surprise there ;-)

Me too, +1




[sqlalchemy] Re: Legacy DB Migration Strategy - How to question?

2007-03-26 Thread Jorge Godoy

BrendanC [EMAIL PROTECTED] writes:

 New user here trying to get started - I'd like to create an ORM map from an
 existing legacy (SQL Server) database - although I may convert this to
 Postgres or MySQL.

 So what is the least painful approach to creating the ORM (I plan to
 use TurboGears to create some Web Front End screens to this database.)

The least painful approach depends on when. 

If you're talking about getting it running *now*, then making the ORM load the
definitions from the database is the fastest way.

If you're talking about migrating, then describing the database correctly in
your model is the least painful approach: you'll have all the work to do
*now*, but your application will already be changed and your model will be
able to recreate the tables as needed, so you won't have to worry about it
later.

 Most of the ORM docs I've read seem to require that the schema be
 defined in the ORM tool - however for a large legacy db that seems
 like a lot of work. I'd like to reverse engineer the db and create/
 derive the maps/relationships from the SQL catalog - Is this possible?
 Am I missing something obvious here?

 More specifically, can I reverse migrate (??terminology??) from an existing
 production database and have all the foreign keys/database constraints/table
 relationships recognized and get models and controllers generated correctly?

The term is the one you used in the first paragraph above: reverse engineer.

Even though the ORM can reverse engineer your database, some complex
constructions might need some help from you.  So you don't need to declare
every table, but you might need to declare some of them.
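
As a hedged sketch (in today's spelling with autoload_with; the
2007-era equivalent used BoundMetaData and autoload=True, and every
name below is a placeholder):

from sqlalchemy import create_engine, Column, Integer, MetaData, Table

engine = create_engine('mssql+pyodbc://user:pass@host/legacydb')  # hypothetical URL
meta = MetaData()

# most tables can be reverse engineered wholesale from the catalog
orders = Table('orders', meta, autoload_with=engine)

# a problem table can be reflected with a manual override, e.g. declaring
# a primary key the legacy schema never defined
legacy_log = Table('legacy_log', meta,
                   Column('log_id', Integer, primary_key=True),
                   autoload_with=engine)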

There's nothing that I know that will generate controllers from a database
schema.  So you'll have to write those by hand.  There are CRUD tools, though,
that might help a lot with basic functionality.

 I'd like to avoid any hand coding (e.g. tweaking FKs/relationships) as part
 of this initial migration.

It all depends on your database schema, how complex it is, how normalized it
is, etc.  If there was a good normalization project, things get easier for
reverse engineering tools (including SQLAlchemy).

 Any links to relevant ref materials/tutorials/etc would be appreciated here.

There are docs on the website.  They help a lot!



-- 
Jorge Godoy  [EMAIL PROTECTED]





[sqlalchemy] Re: [PATCH] Using entry points to load database dialects

2007-03-26 Thread Monty Taylor

Always one in every bunch. :)

I hear what you're saying about the import errors. But does it really
help to allow work to get done before throwing the error? I would think
you'd want to know right up front if you don't have a driver loaded,
rather than letting a program actually get started up thinking it can
write data (think fat client app), only to get a connection exception.

But I, of course, could be very wrong about this. I am about many things...

Monty

Michael Bayer wrote:
 
 yeah i dont like setup.py develop either :) but anyway, patch is
 good.  one thing i have to nail down though is ticket #480.   the  
 main point of that ticket is to cleanly isolate ImportErrors of  
 actual DBAPI modules apart from the containing dialect module  
 itself.   the dialects are catching all the DBAPI-related  
 ImportErrors though so its not necessarily blocking this patch (its  
 just they cant report them nicely).
 
 

[sqlalchemy] Re: sqlalchemy orm doesn't create an instance of certain rows

2007-03-26 Thread Karthik Krishnamurthy
It works after I specify primary_key in my mapper or allow_null_pks, but not
if I specify
the column as primary_key in the Table() constructor.

Thanks
/kk

On 3/26/07, Michael Bayer [EMAIL PROTECTED] wrote:


 i notice that neither your table DDL nor your mappers have any notion
 of a primary key, so thats not the complete application...whats below
 will throw an error immediately.

 but the most likely cause for what youre seeing is that if any
 element of the primary key in a result row is None, no row will be
 loaded.  this behavior can be changed using the allow_null_pks
 option on your mapper.



[sqlalchemy] deferred join against details table in polymorphic union for performance

2007-03-26 Thread chris e

Currently when sqlalchemy performs a polymorphic lookup, it queries
against all the detail tables and returns the correct Python object
represented by the polymorphic identity. Essentially you get a
subselect for each detail table that is included in your primary join,
even though only one of the detail tables contains the data that is
specific to the identity of the object. Is there currently a way, or a
plan to support, splitting the polymorphic query into two queries? The
first would get the base table, the second would retrieve the details
based on the discovered table. This way only two tables would be
queried instead of n, where n is the number of polymorphic identities.

Our DBAs have concerns that as our tables grow, possibly to the size
of 2.5 million rows, unioning against multiple tables, despite the
fact that we are unioning against a primary key, will become non-
performant. I know I could write a custom mapper to resolve this
issue, but I thought I would bring this up since it may affect
other users, and there may already be a way to solve this easily of
which I am not aware.
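
Purely as a conceptual sketch of the two-query load being proposed
(every table, column and discriminator name here is hypothetical, and
this bypasses the ORM rather than extending it; assumes tables bound to
an engine):

# query 1: hit only the base table, reading the discriminator column
row = node_table.select(node_table.c.node_id == some_id).execute().fetchone()

# query 2: hit only the one detail table named by the polymorphic identity
detail_table = {'server': server_table, 'switch': switch_table}[row['node_type']]
detail = detail_table.select(
    detail_table.c.node_id == row['node_id']).execute().fetchone()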





[sqlalchemy] Re: sqlalchemy orm doesn't create an instance of certain rows

2007-03-26 Thread Karthik Krishnamurthy
Another issue with the same setup:

node = query.select_by(name='konsole19.xx..com')[0]
print query.select_by(console=node)

The debugging output shows this:

INFO:sqlalchemy.engine.base.Engine.0x..74:SELECT node.cport AS node_cport,
node.type_id AS node_type_id, node.locshelf AS node_locshelf, node.s_time AS
node_s_time, node.site_id AS node_site_id, node.console_id AS
node_console_id, node.locside AS node_locside, node.locarea AS node_locarea,
node.netswitch_id AS node_netswitch_id, node.bmodule2 AS node_bmodule2,
node.sport AS node_sport, node.node_id AS node_node_id, node.bplug AS
node_bplug, node.bmodule AS node_bmodule, node.parent_id AS node_parent_id,
node.os_id AS node_os_id, node.bport2 AS node_bport2, node.backplane AS
node_backplane, node.prop_id AS node_prop_id, node.status AS node_status,
node.model_id AS node_model_id, node.locrow AS node_locrow, node.serialno AS
node_serialno, node.bootbox2_id AS node_bootbox2_id, node.m_time AS
node_m_time, node.ytag AS node_ytag, node.bplug2 AS node_bplug2,
node.loccage AS node_loccage, node.ponumber AS node_ponumber, node.name AS
node_name, node.racksize AS node_racksize, node.bootbox_id AS
node_bootbox_id, node.locroom AS node_locroom, node.notes AS node_notes,
node.bport AS node_bport, node.locrack AS node_locrack, node.c_time AS
node_c_time, node.pdu AS node_pdu
FROM node
WHERE (node.node_id = ?) AND node.console_id = node.node_id ORDER BY
node.oid

It works when I say instead
print query.select_by(console_id=node.id)

Shouldn't sqlalchemy be able to construct the right SQL from the primaryjoin
condition specified in the mapper for 'console'?

/kk

