[sqlalchemy] Re: Deep Eagerload

2007-03-12 Thread Dennis

 nope, options(eagerload('a'), eagerload('a.b')) works just fine, have
 tested it here to verify

You're quite correct.  Sorry for the noise.  I had tried it a while
back and it didn't work yet.  But between then and now, it has
magically been implemented by you!

Thanks
Dennis


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: MSSQL identity inserts [was: pyodbc and tables with triggers]

2007-03-12 Thread Rick Morrison
Hi Paul,

Could this be one of those situations where MSSQL returns multiple result
sets? If the select for @@identity / scope_identity() generates another
result set, MSSQL wants a full fetch of the previous result set before issuing a
new one. The fix would be client-side cursors for adodbapi; pyodbc would need to
implement nextset().

But it's hard for me to believe the problem is as simple as a select
@@identity inside a transaction - surely someone (or one of the tests)
would have hit that one before.
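For reference, the fetch-before-nextset requirement can be sketched in plain Python with a toy mock cursor (hypothetical classes for illustration, not the pyodbc API itself):

```python
class MockCursor:
    """Toy cursor mimicking a driver that returns multiple result sets,
    as MSSQL does for an INSERT followed by SELECT @@IDENTITY."""

    def __init__(self, result_sets):
        self._sets = result_sets   # list of result sets (lists of rows)
        self._index = 0

    def fetchall(self):
        # Fetch every row of the *current* result set.
        return list(self._sets[self._index])

    def nextset(self):
        # Advance to the next result set; return True if one exists.
        # A driver that never implements this (early pyodbc) leaves
        # the later sets -- including the identity value -- unreachable.
        if self._index + 1 < len(self._sets):
            self._index += 1
            return True
        return None


cursor = MockCursor([[("inserted",)], [(543,)]])  # rows, then identity
rows = cursor.fetchall()           # must fully fetch the first set...
assert cursor.nextset()            # ...before moving to the identity set
identity = cursor.fetchall()[0][0]
```

The point of the sketch: until the first set is drained and nextset() is called, the second result set (the @@identity value) simply isn't reachable.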

Rick



On 3/12/07, Paul Johnston [EMAIL PROTECTED] wrote:


 Hi,

 I've had a go at implementing scope_identity, with mixed results:

 - It works fine with pymssql.
 - With adodbapi, scope_identity() always returns None.
 - Same problem with pyodbc, where I also uncovered a worse problem lurking
   (it exists with @@identity too): if you do an insert inside a transaction
   (i.e. autocommit doesn't happen), then when it tries to extract the ID,
   you get the dreaded Invalid cursor state error.

 So, for the time being I think we should hold fire on scope_identity. I
 will see if I can figure out what's up with adodbapi/pyodbc.
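The insert-then-fetch-the-id-inside-a-transaction pattern can be illustrated with the stdlib SQLite driver (an analogy only: SQLite's last_insert_rowid() is per-connection and trigger-safe, unlike MSSQL's @@IDENTITY):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so the explicit
# BEGIN/COMMIT below is ours to control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# Insert inside an explicit transaction and read the generated key back
# before committing -- the step that trips up some MSSQL drivers.
conn.execute("BEGIN")
conn.execute("INSERT INTO items (name) VALUES ('spork')")
new_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
conn.execute("COMMIT")
```

Here new_id is retrieved mid-transaction without complaint; the MSSQL failure Paul describes is the equivalent select raising "Invalid cursor state" at that same point.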

 Paul



 Tim Golden wrote:

 I've looked through the mailing list and can't see
 this issue raised there so...
 
 There is a known issue with retrieving the id of the
 last inserted row under MSSQL where IDENTITY cols are
 used and there are triggers involved. It's pretty easy
 to demonstrate. If I have this construction:
 
 


 





[sqlalchemy] Multi-column primary key

2007-03-12 Thread Mikkel Høgh

Hi there,

I'm trying to make a small (open source) inventory tracking system
with TurboGears and SQLAlchemy, and I have a little problem with how I
should implement the database part.

I think the best idea would be to have different sequences for each
type of item tracked, so spork number 543 would be SPRK-0543.

Now, one could of course create a different table for each type of
item stored, but that means that creation of new types includes
modifications to the database structure and (even worse) the database
model.

So I thought of just putting it all in a single table. The way I see
it, there's three different ways to do that, and I cannot discern
which is best:
1) Use a normal int primary key and have item_number and item_type as
indexed columns
2) Use a string primary key containing the full item
designation (SPRK-0543)
3) Use a multi-column primary key with item_type and item_number

The way I see it, #3 would be preferable, but is that possible with
SQLAlchemy? I can use UniqueConstraint to make sure the combination
is unique, but does that perform well, and can it be a primary key?

In any case, I'm not decided yet, so if you have any advice on how to
best accomplish my goal, I'd appreciate if you'd help me :)
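For what it's worth, option #3 is just a composite primary key at the database level; a plain-sqlite3 sketch of the idea (illustrative only, not the SQLAlchemy layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        item_type TEXT NOT NULL,
        item_number INTEGER NOT NULL,
        PRIMARY KEY (item_type, item_number)
    )
""")
conn.execute("INSERT INTO items VALUES ('SPRK', 543)")

# The composite key enforces uniqueness of the (type, number) pair...
try:
    conn.execute("INSERT INTO items VALUES ('SPRK', 543)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

# ...while the display form can be derived rather than stored.
row = conn.execute("SELECT item_type, item_number FROM items").fetchone()
designation = "%s-%04d" % row
```

A nice side effect of deriving the designation: the zero-padding width ("0543" vs "00543") stays a presentation decision instead of being frozen into the key.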





[sqlalchemy] Re: pyodbc and tables with triggers

2007-03-12 Thread polaar

FYI, specifying module=pyodbc didn't seem to help wrt the
ConcurrentModificationError. I didn't have much time, but I had a (very)
quick look at the code in mssql.py, and at first sight it would seem
that sane_rowcount is a global variable that is only set in the
use_pyodbc() (resp. adodbapi/pymssql) function, which in turn is only
called from use_default() - which would seem to mean only when you don't
specify a module... Either I'm completely wrong (which is very well
possible ;-) - as I said, I only took a quick look, and I'm not
familiar with the code), or this means that you must not have adodbapi
(or pymssql) installed in order for pyodbc to be set up correctly???

On 9 mrt, 23:29, polaar [EMAIL PROTECTED] wrote:
 Yes, but I'm starting to think I'm doing something wrong ;-) I suppose
 I should call create_engine with the module=pyodbc?
 I was just using the creator argument (as I was doing already because
 I needed to change the connectionstring to use integrated security
 anyway), and just switched that from adodbapi to pyodbc. So maybe it's
 still using the default adodbapi settings...
 Hmm, seems to make sense... oops... (well, it's not really clear from
 the docs that this is used for anything else than determining which
 module to use to create the connection, which seems unnecessary if you
 create it yourself)

 I'll try it on monday...

 On 9 mrt, 22:08, Rick Morrison [EMAIL PROTECTED] wrote:

  This is still with pyodbc?  The MSSQL module should already set
  sane_rowcount to False for that dialect, as per the pyodbc site, they don't
  implement rowcount.

  Rick





[sqlalchemy] DynamicMetaData question

2007-03-12 Thread Jonathan LaCour

Random question for the list, and an idea.  I have an application I am
working on that needs to be able to dynamically bind its metadata to
an engine based upon configuration.  Logically, it seems I should use
`DynamicMetaData` and just call metadata.connect(engine) after I have
loaded my configuration.

However, I had written all of my code depending upon a threadlocal
strategy as defined by using `strategy='threadlocal'` in my
`create_engine` call.  It turns out that DynamicMetaData has threadlocal
behavior by default as well, and somehow these two things conflict.  My
problem was solved by making sure to pass `threadlocal=False` to my
DynamicMetaData constructor.

Now, here is my question: why does DynamicMetaData have any threadlocal
behavior at all?  It seems like the primary reason one would use a
DynamicMetaData would be to delay the binding of your
engine to your metadata.  The fact that it's threadlocal is easy to miss,
and I don't see why it has any threadlocal behavior at all.

Am I missing something?  Wouldn't it be better to have two separate MetaData
types, one for dynamically binding your engine, and another for
threadlocal metadata?
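The surprise can be reproduced with nothing but threading.local: a binding stored thread-locally is invisible to other threads, while a binding stored as a plain attribute is visible everywhere. (A plain-Python sketch of the two behaviors, not SQLAlchemy's actual DynamicMetaData internals.)

```python
import threading

class ThreadLocalBinding:
    """Binding stored per-thread: connect() in one thread
    leaves every other thread unbound."""
    def __init__(self):
        self._local = threading.local()

    def connect(self, engine):
        self._local.engine = engine

    @property
    def engine(self):
        return getattr(self._local, "engine", None)


class PlainBinding:
    """Binding stored once: visible from any thread."""
    def __init__(self):
        self.engine = None

    def connect(self, engine):
        self.engine = engine


tl, plain = ThreadLocalBinding(), PlainBinding()
tl.connect("engine-A")      # bound in the main thread only
plain.connect("engine-A")

seen = {}
def worker():
    seen["threadlocal"] = tl.engine   # None in this thread
    seen["plain"] = plain.engine

t = threading.Thread(target=worker)
t.start(); t.join()
```

If the configuration is loaded once at startup, the plain variant is what one intuitively expects; the thread-local variant requires connect() to run in every thread that touches the metadata.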

--
Jonathan LaCour
http://cleverdevil.org







[sqlalchemy] Re: DynamicMetaData question

2007-03-12 Thread Michael Bayer

Er well the whole point of DynamicMetaData was to replace the old  
ProxyEngine, which was intended to emulate SQLObject's i dont know  
what its called object which is what TG was using for thread-local  
context.

Also, it doesnt entirely make sense that the threadlocal engine  
strategy would conflict with dynamicmetadata's thread local-ness.
if you have one engine per thread, and that engine is on the tlocal  
strategy, it should still be doing its thing within the one thread  
that its used within.  so feel free to try to reproduce that in a  
ticket or something.

but also, the threadlocal engine strategy is the thing here thats  
kind of like, you probably dont need to be using it...its basically  
SA 0.1's built in behavior kind of ripped out and put over to the  
side, where people that really want that sort of thing can use it.   
but ive downplayed it a lot since then, since its kind of a confusing  
feature if youre not the person who wrote it (or read/understood the  
entire source to it).

whereas DynamicMetaData i think is in pretty wide usage as a thread  
local construct and its pretty straightforward.  the non-threadlocal  
use case for it is not as obvious to me.


On Mar 12, 2007, at 4:58 PM, Jonathan LaCour wrote:


 Random question for the list, and an idea.  I have an application I am
 working on that needs to be able to dynamically bind its metadata to
 an engine based upon configuration.  Logically, it seems I should use
 `DynamicMetaData` and just call metadata.connect(engine) after I have
 loaded my configuration.

 However, I had written all of my code depending upon a threadlocal
 strategy as defined by using `strategy='threadlocal'` in my
 `create_engine` call.  It turns out that DynamicMetaData has threadlocal
 behavior by default as well, and somehow these two things conflict.  My
 problem was solved by making sure to pass `threadlocal=False` to my
 DynamicMetaData constructor.

 Now, here is my question: why does DynamicMetaData have any threadlocal
 behavior at all?  It seems like the primary reason one would use a
 DynamicMetaData would be to delay the binding of your
 engine to your metadata.  The fact that it's threadlocal is easy to miss,
 and I don't see why it has any threadlocal behavior at all.

 Am I missing something?  Wouldn't it be better to have two separate MetaData
 types, one for dynamically binding your engine, and another for
 threadlocal metadata?

 --
 Jonathan LaCour
 http://cleverdevil.org




 





[sqlalchemy] Re: Polymorphic collections / ticket #500

2007-03-12 Thread Rick Morrison
Yeah, disregard this, I found an easier way that doesn't fight SA.

For the record, the use case is a polymorphic collection with the occasional
exception. The exceptions were one-offs for which I didn't want to provide
derived mappers, because they were, well, rare one-offs. Those exceptions were
to be loaded with the base mapper as if each were the base class itself, but
with a different type discriminator for the app to recognize and use.
The workaround is to simply add a sub-type column and do a second-level
differentiation using that. It won't work if you can't refactor the db, but now
that Mike has put the override flag in there, even that can be gotten
around.



On 3/7/07, Michael Bayer  [EMAIL PROTECTED] wrote:

 im not sure if i understand the use case youre describing ?   I also stuck
 the previously mentioned flag in the trunk, the exception message will tell
 you about it.
 On Mar 6, 2007, at 11:08 PM, Rick Morrison wrote:

 Mike:

 I think I've seen a few requests here on the list over the past months for
 some kind of default or unspecified or other catch-all for the
 occasional one-off exception to what are otherwise polymorphic collections.
 I think it's actually a fairly common use-case.

 I'll try some of Simon's suggestions (thanks), but I think real support
 for something like this would make sense  -- could you think about the issue
 a bit?

 Thanks,
 Rick

 On 3/6/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
 
  yeah, i didnt like adding this particular error message, but since it
  leads to a bigger problem later i sort of had to.

  i am sure a lot of people are going to hit this one.  so i guess ill
  put a flag in for this one...i think people should be made aware that
  they are allowing a potentially error-causing behavior to occur.
 
  On Mar 5, 2007, at 8:12 PM, Rick Morrison wrote:
 
   The fix for ticket #500 breaks a pattern I've been using.
  
   It's most likely an anti-pattern, but I don't see a way to get what
   I want in SA otherwise.
  
   I've got a series of entities
  
   class Person(object):
       pass

   class Manager(Person):
       def __init__(self):
           # do manager stuff
           pass

   class Demigod(Person):
       def __init__(self):
           # do demigod stuff
           pass
  
   etc.
  
   there are mappers for each of these entities that inherit from
   Person(), so all of the normal Person() properties exist, but Person
   () itself is not polymorphic. That's on purpose, and because the
   class hierarchy of Manager(), etc, is not exhaustive, and I
   occasionally  want to save instances of Person() directly.
   If I make the Person() class polymorphic on a column of say typ,
   then SA clears whatever typ I may have tried to set directly, and
   seems to make me specify an exhaustive list of sub-types.
  
   And so I leave Person() as non-polymorphic. I also have a
   collection of Person() objects on a different mapper, which can
   load entity objects of any type.
  
   Before rev #2382, I could put a Manager() in a Person() collection,
   and  it would flush OK. Now it bitches that it wants a real
   polymorphic mapper. I don't want to use a polymorphic mapper,
   because I don't want to specify an exhaustive list of every class
   that I'm ever going to use.
  
   What to do?
  
   Thanks,
   Rick
  
   
 
 
 
 
 





 





[sqlalchemy] Re: Multi-column primary key

2007-03-12 Thread Michael Bayer

use a composite primary key - just mark each column with  
'primary_key=True'.
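A minimal sketch of that, assuming SQLAlchemy is importable (table definition only; engine and mapper setup omitted):

```python
from sqlalchemy import Table, Column, Integer, String, MetaData

metadata = MetaData()

# Marking both columns primary_key=True yields a composite primary key;
# no separate UniqueConstraint is needed -- the key enforces uniqueness.
items = Table(
    "items", metadata,
    Column("item_type", String(8), primary_key=True),
    Column("item_number", Integer, primary_key=True),
)

# The table's primary key now spans both columns.
assert {c.name for c in items.primary_key} == {"item_type", "item_number"}
```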

On Mar 12, 2007, at 2:28 PM, Mikkel Høgh wrote:


 Hi there,

 I'm trying to make a small (open source) inventory tracking system
 with TurboGears and SQLAlchemy, and I have a little problem with how I
 should implement the database part.

 I think the best idea would be to have different sequences for each
 type of item tracked, so spork number 543 would be SPRK-0543.

 Now, one could of course create a different table for each type of
 item stored, but that means that creation of new types includes
 modifications to the database structure and (even worse) the database
 model.

 So I thought of just putting it all in a single table. The way I see
 it, there's three different ways to do that, and I cannot discern
 which is best:
 1) Use a normal int primary key and have item_number and item_type as
 indexed columns
 2) Use a string primary key containing the full item
 designation (SPRK-0543)
 3) Use a multi-column primary key with item_type and item_number

 The way I see it, #3 would be preferable, but is that possible with
 SQLAlchemy - I can use UniqueConstraint to make sure the combination
 is unique, but does that perform well, and can it be a primary key?

 In any case, I'm not decided yet, so if you have any advice on how to
 best accomplish my goal, I'd appreciate if you'd help me :)


 





[sqlalchemy] Re: Feature request: Session.get_local()

2007-03-12 Thread Daniel Miller

Michael Bayer wrote:
 id rather not have people needing to deal with an actual identity
 key tuple most of the time.  they should be able to say
 session.identity_map.something(class, pk) and get an instance  out of
 it.

What's the use of exposing the identity map if you don't want people to deal 
with the keys with which it's indexed? You might as well expose an 
identity_set, except that would be really unhandy as it would be expensive or 
impossible to do the exact kind of find that I'm asking for.

 
 the reason for those big names on mapper is because all of them are
 used, we need identity keys from instances, rows, primary keys, all of
 it, and i was myself getting confused since their names were not
 clear...so i changed the names to be absolutely clear.  but also, i
 dont want people generally dealing with the methods off of
 Mappers...all the methods you need should be off of Session and
 Query.  so maybe we can put a keyword-oriented identity_key method
 on Session, or something (identity_key(primary_key=None,
 instance=None, row=None, etc)).  or maybe it can use some kind of
 multiple-dispatch that detects scalar, tuple, object instance,
 ResultProxy.  but also, i dont think the get an SA identity key step
 as an interim step to accomplishing something else should really be
 needed in the usual case.

If the session got an identity_key() method, that would be great (and it would
eliminate the need for a find() method). Here's an implementation:

def identity_key(self, *args, **kwargs):
    """Get an identity key.

    Valid call signatures:

    identity_key(class_, ident, entity_name=None)
        class_ - mapped class
        ident - primary key; if the key is composite, this is a tuple
        entity_name - optional entity name. May be given as a
            positional arg or as a keyword arg.

    identity_key(instance=instance)
        instance - object instance (must be given as a keyword arg)

    identity_key(row=row)
        row - result proxy row (must be given as a keyword arg)
    """
    if args:
        if len(args) == 2:
            class_, ident = args
            entity_name = kwargs.pop("entity_name", None)
        else:
            assert len(args) == 3, ("two or three positional args are "
                                    "accepted, got %s" % len(args))
            class_, ident, entity_name = args
        assert not kwargs, ("unknown keyword arguments: %s"
                            % (kwargs.keys(),))
        mapper = _class_mapper(class_, entity_name=entity_name)
        return mapper.instance_key_from_primary_key(ident,
                                                    entity_name=entity_name)
    else:
        try:
            instance = kwargs.pop("instance")
        except KeyError:
            row = kwargs.pop("row")
            assert not kwargs, ("unknown keyword arguments: %s"
                                % (kwargs.keys(),))
            mapper = None  # not sure how to get the mapper from a row
            return mapper.identity_key_from_row(row)
        else:
            assert not kwargs, ("unknown keyword arguments: %s"
                                % (kwargs.keys(),))
            mapper = _object_mapper(instance)
            return mapper.identity_key_from_instance(instance)


Note that I didn't implement the part to get a mapper from a row because I'm 
not sure how to do that. I imagine that's trivial though.

~ Daniel


 
 On Mar 10, 8:27 pm, Daniel Miller [EMAIL PROTECTED] wrote:
 Michael Bayer wrote:
 my only concern is that you now have more than one way to do it.  i
 need to deal with things in the identity map.  do i go look at the
 session.identity_map ? (which is documented, its part of the public
 API)  oh no, i dont have the exact kind of key to use, now i have to
 go use a method called find() (which again, does that mean, find it
 in the database ?  where is it looking ?)
 These are good points. Maybe the problem is in my brain--I've always had a 
 disconnect between the session.identity_map and the mapper.identity_key() 
 function. I guess it's clearly documented that they are compatible and can 
 be used like this:

 key = MyClass.mapper.identity_key(pk_value)
 itm = session.identity_map.get(key)

 It just seemed like that was digging a bit too deep into what I thought were 
 implementation details of the mapper and the session. If those things (i.e. 
 mapper.identity_key and session.identity_map) are clearly documented as part 
 of the interface of SA, and they are meant to work together like that then 
 maybe this proposal isn't even necessary. After all, it's just two lines 
 instead of one. However, upon looking at the documentation, this is all I 
 find on the identity_key method of the Mapper class:

 def identity_key(self, primary_key)

 deprecated. a synonym for identity_key_from_primary_key.

 Now I thought identity_key was OK (if a bit obscure due to lack of 
 documentation), but identity_key_from_primary_key is not so great IMHO. This 
 is not a method name that will come to mind when I'm trying to get the 
 identity key of a given