[sqlalchemy] Re: save_or_update() with a unique but not primary key

2007-03-13 Thread Sean Davis

On Tuesday 13 March 2007 07:35, Sean Davis wrote:
 We are creating a database that will have a set of autoincrement primary
 keys on the tables.  However, many of the tables also have one or more
 unique keys associated with them.  Can we use save_or_update() (and, by
 extension, cascade='save_or_update', etc.) by specifying one of the unique
 keys rather than specifying the primary key directly?

Tried it.  Looks like not.  Sorry for the noise on the list.

Sean

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: DynamicMetaData question

2007-03-13 Thread Sébastien LELONG

 [...] I couldn't use BoundMetaData because I don't know the
 connection parameters until much after import time, so I am using the
 only other option I know of, which is DynamicMetaData [...]. 

I have exactly the same problem. One option is to use global_connect and 
default_metadata (doc: Using the global Metadata object), but as my apps 
are essentially multi-source, I need to define datasources separately. And I 
definitely cannot use BoundMetaData and hardcode the connection 
parameters... So I'm using DynamicMetaData and its connect method. These seem 
to be two independent features in this DynamicMetaData...


Seb
-- 
Sébastien LELONG
sebastien.lelong[at]sirloon.net
http://www.sirloon.net




[sqlalchemy] Re: DynamicMetaData question

2007-03-13 Thread Gaetan de Menten

I only just discovered (or at least understood) this thread-localness of
DynamicMetaData, and honestly I don't understand in what case it can
be useful. It seems like the thread-localness is limited to the engine
connected to the metadata. So what I'd like to understand is: when would
anyone not want to use a global engine? As long as the connections are
thread-local, we are fine, right?

On 3/12/07, Michael Bayer [EMAIL PROTECTED] wrote:

 Er well the whole point of DynamicMetaData was to replace the old
 ProxyEngine, which was intended to emulate SQLObject's (i dont know
 what its called) object, which is what TG was using for thread-local
 context.

 Also, it doesnt entirely make sense that the threadlocal engine
 strategy would conflict with dynamicmetadata's thread local-ness.
 if you have one engine per thread, and that engine is on the tlocal
 strategy, it should still be doing its thing within the one thread
 that its used within.  so feel free to try to reproduce that in a
 ticket or something.

 but also, the threadlocal engine strategy is the thing here thats
 kind of like, you probably dont need to be using it...its basically
 SA 0.1's built in behavior kind of ripped out and put over to the
 side, where people that really want that sort of thing can use it.
 but ive downplayed it a lot since then, since its kind of a confusing
 feature if youre not the person who wrote it (or read/understood the
 entire source to it).

 whereas DynamicMetaData i think is in pretty wide usage as a thread
 local construct and its pretty straightforward.  the non-threadlocal
 use case for it is not as obvious to me.


 On Mar 12, 2007, at 4:58 PM, Jonathan LaCour wrote:

 
  Random question for the list, and an idea.  I have an application I am
  working on that needs to be able to dynamically bind its metadata to
  an engine based upon configuration.  Logically, it seems I should use
  `DynamicMetaData` and just call metadata.connect(engine) after I have
  loaded my configuration.

  However, I had written all of my code depending upon a threadlocal
  strategy as defined by using `strategy='threadlocal'` in my
  `create_engine` call.  It turns out that DynamicMetaData has
  threadlocal behavior by default as well, and somehow these two things
  conflict.  My problem was solved by making sure to pass
  `threadlocal=False` to my DynamicMetaData constructor.

  Now, here is my question: why does DynamicMetaData have any
  threadlocal behavior at all?  It seems like the primary reason one
  would use a DynamicMetaData would be to delay the binding of your
  engine to your metadata.  The fact that it is threadlocal is easy to
  miss, and I don't see why it has any threadlocal behavior at all.

  Am I missing something?  Wouldn't it be better to have two separate
  MetaData types, one for dynamically binding your engine, and another
  for threadlocal metadata?
 
  --
  Jonathan LaCour
  http://cleverdevil.org
 
 


-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: Dynamically adding MapperExtension

2007-03-13 Thread percious

I think I came up with a decent solution for this.  If you want all
classes currently defined in the metadata to pick up your new
extension, I do something like:

extension = MyMapperExtension()
for mapper in mapper_registry.values():
    mapper.extension = extension
    mapper._compile_extensions()

This is actually working quite well.
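The loop above can be sketched in a self-contained way; `Mapper`, `HelloExtension`, and the dict-based `mapper_registry` below are stand-ins invented for this sketch, not SQLAlchemy's real classes or registry:

```python
# Sketch: one shared extension instance applied to every mapper in a
# registry (invented stand-in classes, not SQLAlchemy's API).
class Mapper:
    def __init__(self, name):
        self.name = name
        self.extension = None

class HelloExtension:
    def after_insert(self, mapper):
        return "hello world from %s" % mapper.name

mapper_registry = {"users": Mapper("users"), "items": Mapper("items")}

extension = HelloExtension()
for mapper in mapper_registry.values():
    mapper.extension = extension    # every mapper now shares one extension

result = mapper_registry["users"].extension.after_insert(mapper_registry["users"])
print(result)   # -> hello world from users
```

Because the extension object is shared, enabling behavior on it later (as in the hello-world callable trick quoted below) takes effect for every mapper at once.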

cheers.
chris

On Feb 9, 11:33 am, Michael Bayer [EMAIL PROTECTED] wrote:
 I should add that you *can* be more creative, and construct each mapper
 with a MapperExtension at the beginning which later can be enabled with
 a hello world callable.  i.e.

 class MyExt(MapperExtension):
     def __init__(self):
         self.func = None
     def after_insert(self, ...):
         if self.func:
             self.func()

 extension = MyExt()
 mapper(foo, bar, extension=extension)
 mapper(foo2, bar2, extension=extension)
 mapper(foo3, bar3, extension=extension)

 ... do stuff ...

 def helloworld():
     print "hello world"
 extension.func = helloworld

 ... do stuff ...

 percious wrote:

  question.

  Let's say I have a series of table definitions, and a series of objects
  linked to the tables with a bunch of mappers.

  First question:  Is there a way to get from the table definitions in
  the metadata to the Mapper?

  Second question:  If I create a MapperExtension, can I then link it to
  the mapper associated with a table?

  What I want to do is create a simple application that goes through the
  tables defined within a metadata and creates an extension so that
  every time a table entry is added it prints 'hello world' to the
  screen.

  TIA
  -chris





[sqlalchemy] Re: save_or_update() with a unique but not primary key

2007-03-13 Thread Michael Bayer


save_or_update() doesnt take any kind of primary key or unique key
argument.  no specification of anything is needed.
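Since save_or_update() keys off the instance's identity rather than any unique column, the "by unique key" behavior the original post asked for has to be done explicitly. A minimal sketch of that select-then-insert-or-update step, written with plain sqlite3 (not SQLAlchemy) so it stands alone; table and column names are invented for the example:

```python
import sqlite3

# Autoincrement surrogate primary key plus a unique business key,
# matching the schema described in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "email TEXT UNIQUE, name TEXT)")

def save_or_update_by_email(email, name):
    # explicit lookup on the unique column, then insert or update
    row = conn.execute("SELECT id FROM users WHERE email = ?",
                       (email,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO users (email, name) VALUES (?, ?)",
                     (email, name))
    else:
        conn.execute("UPDATE users SET name = ? WHERE id = ?",
                     (name, row[0]))

save_or_update_by_email("a@example.com", "Alice")      # inserts
save_or_update_by_email("a@example.com", "Alice A.")   # updates in place
count, name = conn.execute(
    "SELECT COUNT(*), name FROM users WHERE email = 'a@example.com'"
).fetchone()
print(count, name)   # -> 1 Alice A.
```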

Sean Davis wrote:

 On Tuesday 13 March 2007 07:35, Sean Davis wrote:
 We are creating a database that will have a set of autoincrement primary
 keys on the tables.  However, many of the tables also have one or more
 unique keys associated with them.  Can we use save_or_update() (and, by
 extension, cascade='save_or_update', etc.) by specifying one of the
 unique
 keys rather than specifying the primary key directly?

 Tried it.  Looks like not.  Sorry for the noise on the list.

 Sean

 






[sqlalchemy] Re: DynamicMetaData question

2007-03-13 Thread Jonathan LaCour

Michael Bayer wrote:

 My controller actions don't worry about transactions or connections
 at all, they just execute insert/update/delete/select actions and if
 an exception is raised, their work is rolled back automatically.

 well, thats exactly the pattern provided by strategy=threadlocal,
 and its clear that you understand how it works, so there is no problem
 at all doing that. its merely a pattern that allows a little less
 explicitness (i.e. more magic).

Ah, okay, so I am not missing anything here, just appreciating this
particular piece of magic :)

 people were quite confused by it when it was the built-in behavior,
 which is because a lot of people dont really understand what
 threadlocal means. so i made it all optional. no need to deprecate
 it though, its still useful stuff.

I am surprised that people don't know what threadlocal means, since
it seems like a threaded application with a database pool is probably
one of the most familiar ways to do things.  But, still, if people were
emailing the list in confusion, that's got to mean something :)

 I must still not understand the appropriate usage pattern for
 DynamicMetaData.  I couldn't use BoundMetaData because I don't know
 the connection parameters until much after import time, so I am
 using the only other option I know of, which is DynamicMetaData.
 The fact that it provides threadlocal behavior only caused me a
 headache, because I would get errors unless it was disabled.

 the main use case for DynamicMetaData is a single application, usually
 a web application, that is actually many instances of the same
 application running at once and talking to different databases for
 each instance. so on each request, the DynamicMetaData gets pointed to
 the official database connection for that request, within that thread.

Wow, does this ever actually happen?  It seems like a very obscure use
case to me.
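Obscure or not, the mechanism behind the use case described above is just thread-local binding: one shared object pointed at a different database per thread/request. A toy illustration using Python's threading.local; the `ThreadBoundMetaData` class and URLs are invented for this sketch and are not SQLAlchemy's implementation:

```python
import threading

class ThreadBoundMetaData:
    """Invented stand-in: a shared object whose binding is per-thread."""
    def __init__(self):
        self._local = threading.local()

    def connect(self, url):
        self._local.url = url       # binding is visible only in this thread

    @property
    def bound_url(self):
        return getattr(self._local, "url", None)

meta = ThreadBoundMetaData()
meta.connect("sqlite:///tenant_a.db")

seen = {}
def worker():
    # a second thread sees no binding until it connects on its own
    seen["before"] = meta.bound_url
    meta.connect("sqlite:///tenant_b.db")
    seen["after"] = meta.bound_url

t = threading.Thread(target=worker)
t.start(); t.join()
print(meta.bound_url, seen["before"], seen["after"])
```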

 You are free to use it the way youre using it too. if i were writing
 the app that didnt know the connection parameters until later, i
 might just put my entire create the Tables + mappers logic within a
 function call that gets called when we do know the actual connection
 string...

... eek!

 but then again using DMD with threadlocal turned off is yeah a lot
 easier and cleaner since you can keep your declarations at the module
 level.

Yes, this is a lot cleaner; however, it's confusing.  Personally, I
think that there should be one and only one MetaData class that can be
told how to act.  As far as I can tell, you could replace the existing
options in SQLAlchemy (BoundMetaData, DynamicMetaData) and simplify
things a bunch in the process:

 metadata = MetaData()
 engine = create_engine('connection_string')
 metadata.connect(engine)

This would let you get rid of BoundMetaData and DynamicMetaData and just
have one simple, easy way to do things.  Plus, it would give you the
ability to do what I am using DynamicMetaData to do without having to
pass in a weird threadlocal=False kwarg.

For people who want the insane threadlocal behavior of DynamicMetaData,
you could have the MetaData object take in the same kwarg, which would
default to the saner False option (unlike the current DynamicMetaData):

 metadata = MetaData(threadlocal=True)
 ...

I think that this would be a lot less confusing for users.  I would
be more than willing to work on a patch to make this happen if you
like the idea.  If not, I will at least supply a patch for yet another
MetaData subclass called DelayedMetaData that gives you the behavior of
DynamicMetaData without the threadlocal insanity :)
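A rough sketch of what the proposed single-MetaData API could look like, written from the proposal above rather than from SQLAlchemy itself (the class and engine strings here are hypothetical):

```python
import threading

class MetaData:
    """Hypothetical unified MetaData: bindable after construction,
    with thread-local binding strictly opt-in."""
    def __init__(self, threadlocal=False):
        self.threadlocal = threadlocal
        self._local = threading.local() if threadlocal else None
        self._engine = None

    def connect(self, engine):
        if self.threadlocal:
            self._local.engine = engine   # visible to this thread only
        else:
            self._engine = engine         # shared, delayed binding

    @property
    def engine(self):
        if self.threadlocal:
            return getattr(self._local, "engine", None)
        return self._engine

md = MetaData()                   # behaves like a delayed BoundMetaData
md.connect("engine-1")

tl = MetaData(threadlocal=True)   # opt-in DynamicMetaData-style behavior
tl.connect("engine-2")

print(md.engine, tl.engine)   # -> engine-1 engine-2
```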

What do you think?

--
Jonathan LaCour
http://cleverdevil.org





[sqlalchemy] Re: Multi-column primary key

2007-03-13 Thread Mikkel Høgh

Well, that's simple :)

On Mar 13, 1:20 am, Michael Bayer [EMAIL PROTECTED] wrote:
 use a composite primary key - just mark each column with
 'primary_key=True'.

 On Mar 12, 2007, at 2:28 PM, Mikkel Høgh wrote:



  Hi there,

  I'm trying to make a small (open source) inventory tracking system
  with TurboGears and SQLAlchemy, and I have a little problem with how I
  should implement the database part.

  I think the best idea would be to have different sequences for each
  type of item tracked, so spork number 543 would be SPRK-0543.

  Now, one could of course create a different table for each type of
  item stored, but that means that creation of new types includes
  modifications to the database structure and (even worse) the database
  model.

  So I thought of just putting it all in a single table. The way I see
  it, there's three different ways to do that, and I cannot discern
  which is best:
  1) Use a normal int primary key and have item_number and item_type as
  indexed columns
  2) Use a string primary key, containing the full item
  designation (SPRK-0543)
  3) Use a multi-column primary key with item_type and item_number

  The way I see it, #3 would be preferable, but is that possible with
  SQLAlchemy - I can use UniqueConstraint to make sure the combination
  is unique, but does that perform well, and can it be a primary key?

  In any case, I'm not decided yet, so if you have any advice on how to
  best accomplish my goal, I'd appreciate if you'd help me :)
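In SQLAlchemy terms, the composite-key advice above means marking both item_type and item_number with primary_key=True on the Table. The same composite key can be shown in plain SQL via sqlite3, so the sketch below is self-contained; the database itself then enforces uniqueness of the pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# item_type + item_number together form the primary key, so "SPRK-0543"
# never needs its own table or a separate UniqueConstraint.
conn.execute("""CREATE TABLE items (
    item_type   TEXT    NOT NULL,
    item_number INTEGER NOT NULL,
    description TEXT,
    PRIMARY KEY (item_type, item_number)
)""")
conn.execute("INSERT INTO items VALUES ('SPRK', 543, 'a spork')")
try:
    # the same (type, number) pair is rejected by the database itself
    conn.execute("INSERT INTO items VALUES ('SPRK', 543, 'duplicate')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)   # -> False
```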





[sqlalchemy] Re: DynamicMetaData question

2007-03-13 Thread sdobrev

  What do you think?

 Hey list, are you confused by the current system ?  Please let me
 know.

 the only change I would favor here would be to merge connect into
 MetaData, BoundMetaData and DynamicMetaData stay around for
 backwards compat for probably forever, and perhaps we add another
 method called connect_threadlocal() or something like that for
 people who want that behavior.  i would like to have just one
 object that does the whole thing, now that some of the early FUD
 has subsided.

 but I need to get at least 20 or 30 users on this list telling me
 how they are using metadata.
i am not doing a plain web app (although it will have an http interface 
and behave like a web app in some ways - but that is just _another_ 
remote UI), so i don't really know the differences between the current 
*MetaData's.
Even if well documented, the usage of these will go deep into the 
infrastructure and hence be forgotten and stay hidden - it is not 
everyone's business to create db connections and the like.

And for this, having 2 different strategies, which differ in some 
_hidden_ ways, and whose usage is always invisible... 
i vote for having one type of object - or one factory if you want - with 
parameters.  Only those who need to know about the differences would 
have to.

so far i have only noted that certain cases work with a premade 
BoundMetaData() and break with a plain MetaData() being bound later. 
i can try to dig them out if you want.

btw, theoretically, can you have an XXXMetaData obj that is used, closed 
and later turns into a YYYMetaData?  i can see a use case for this and 
it would not easily be possible with the current system.  In the new 
variant one can keep the metadata as made once, forever, and just change 
the way it is connected, then disconnected.  Hey, is this a sort of 
usageContext()?  i.e. you have a MetaData(), whose .connect(args-here) 
returns some context-like object which hides the exact implementation, 
and which will close the connection when dropped.
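The "connect() returns a context that closes the connection when dropped" idea can be sketched with a Python context manager; `ReusableMetaData` is a hypothetical name for illustration, backed here by sqlite3 rather than SQLAlchemy:

```python
import sqlite3
from contextlib import contextmanager

class ReusableMetaData:
    """Hypothetical object: definitions outlive any single connection."""
    @contextmanager
    def connect(self, url):
        conn = sqlite3.connect(url)   # bind on entry
        try:
            yield conn
        finally:
            conn.close()              # dropping the context disconnects

meta = ReusableMetaData()
with meta.connect(":memory:") as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# once the context is gone, the connection really is closed
try:
    conn.execute("SELECT 1")
    still_open = True
except sqlite3.ProgrammingError:
    still_open = False
print(total, still_open)   # -> 1 False
```

The same `meta` object could later be connected to a different URL, which is exactly the keep-the-metadata, swap-the-connection behavior described above.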
 
ciao
svilen
