Re: [sqlalchemy] Creating a feed related to different object type

2012-12-05 Thread Eric Ongerth
Hi Brice,

I think something strange happened on the discussion group web page, because 
it did not show me the message in which you linked to your solution.  If I 
had seen that, I would have known that you had already found the way.

Cheers,
Eric


On Sunday, December 2, 2012 8:00:05 PM UTC-8, deBrice wrote:

 Hi Eric,

 Thank you, I actually used this documentation (even though the examples are 
 only many-to-many) and some other blog posts to come up with this solution. 
 Using 0.7 I don't have events on mixins, so I had to find an alternative to 
 event attachment. BTW, I'm pretty surprised by the speed of the event 
 solution provided by SQLAlchemy; I had always heard that event listeners on 
 DB objects were evil, but it turned out to add almost no overhead (the test 
 created 2k objects).


 On Sat, Dec 1, 2012 at 10:35 PM, Eric Ongerth erico...@gmail.com wrote:

 Hi Brice,

 Yours is another good case of the 'Generic Associations' or 'Polymorphic 
 Association' pattern which comes up quite often.  Here's a link to some 
 docs that will get you going on a good solution that keeps the database 
 normalized.


 http://docs.sqlalchemy.org/en/rel_0_8/orm/examples.html#generic-associations

 If that doesn't get you off and running, especially the example in 
 discriminator_on_association.py, continue this thread or pop into the IRC 
 channel.
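
 For anyone skimming the archive, here is a minimal, self-contained sketch of 
 the generic-association idea (not the discriminator_on_association example 
 itself): a FeedItem row stores a discriminator naming the target table plus 
 the target's primary key, and a small registry resolves it back to a mapped 
 class. All names are illustrative, and the API shown is modern SQLAlchemy 
 rather than the 0.7/0.8 releases discussed in this thread.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, object_session

Base = declarative_base()

# Registry mapping a discriminator string to the mapped class it names.
FEED_TYPES = {}

def feed_target(cls):
    """Register a model as a possible FeedItem target."""
    FEED_TYPES[cls.__tablename__] = cls
    return cls

@feed_target
class Comment(Base):
    __tablename__ = "comment"
    id = Column(Integer, primary_key=True)
    comment = Column(String)

@feed_target
class Picture(Base):
    __tablename__ = "picture"
    id = Column(Integer, primary_key=True)
    image = Column(String)

class FeedItem(Base):
    __tablename__ = "feed_item"
    id = Column(Integer, primary_key=True)
    discriminator = Column(String, nullable=False)  # which table the item points at
    object_id = Column(Integer, nullable=False)     # primary key within that table

    @property
    def target(self):
        # Resolve the generic reference through the registry.
        return object_session(self).get(FEED_TYPES[self.discriminator], self.object_id)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

c = Comment(comment="nice picture!")
session.add(c)
session.flush()
session.add(FeedItem(discriminator="comment", object_id=c.id))
session.commit()

item = session.query(FeedItem).first()
print(type(item.target).__name__)  # Comment
```

 The linked examples go further (association tables, mixins); this is only the 
 core resolve-by-discriminator move.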

 - ejo


 On Wednesday, November 28, 2012 1:20:52 PM UTC-8, deBrice wrote:

 So here is my solution, 

 https://gist.github.com/4164619

 Hope it helps others get some insight into generic relations in SQLAlchemy.


 On Tue, Nov 27, 2012 at 7:41 AM, Brice Leroy bbrri...@gmail.com wrote:

 Thank you for the advice, Robert. As I'm using Flask, I'm not sure how 
 adaptable the ZCA would be. I'll try to understand the logic behind it and 
 see if I can replicate it within my models.
  

 On Tue, Nov 27, 2012 at 7:12 AM, Robert Forkel xrot...@googlemail.com wrote:

 Hi,
 I used to do something like this, i.e. adding information about URLs,
 views, etc. to SQLAlchemy models, but found it to be inflexible. Now
 I keep URL-related information in the web app's routing component, and
 to solve problems like the one you pose, I use ZCA adapters [1] (which
 is easier when using Pyramid, because you already have a component
 registry available). With this technology you'd register the same
 FeedItem class as an adapter for the various SQLAlchemy models, which
 means that at adaptation time, the object to adapt will be passed to
 you.
 regards
 robert

 [1] http://www.muthukadan.net/docs/zca.html


 On Tue, Nov 27, 2012 at 3:58 PM, Brice Leroy bbrri...@gmail.com 
 wrote:
  Hello everybody,
 
  It's about brainstorming an elegant solution. I previously posted this
  question on the Flask mailing list, and I was advised to post it on the
  SQLAlchemy list... which makes more sense, I admit. So here is my issue:
 
  I have N different classes:
 
  class Comment(Models):
author_id = Integer
comment = String
creation_date = Date
 
  class Picture(Models):
author_id = Integer
image = File
creation_date = Date
  ...
 
  Now let's say I have a follow feature, allowing a user X to get updates
  when Y (the user followed by X) does something (creepy...).
 
  So far I came up with something like that:
 
  class FeedItem(Model):
table = String
key = Integer
creation_date = Date
 
def url(self):
  #get object by querying self.table with self.key
   object = self.get_an_object(table=self.table, key=self.key)
  return object.view_url
 
  and then add this property to Comment and Picture classes:
 
@property
def view_url(self):
  return url_for('view_function_name', self.id)
 
  - What would be your way of dealing with this kind of open/generic
  relationship items?
 
  - How would you manage automatic deletion of a FeedItem when the object it
  points to gets destroyed? (I'm thinking of attaching a function to the
  delete event of these classes.)
 
  - Would you create one FeedItem per follower, or use a table to link
  FeedItems to followers, so that deleting a FeedItem would automatically
  delete the relation records from that table?
 
  Thank you,
 
  --
  Brice
 

Re: [sqlalchemy] Why not seeing ondelete='xxx', and onupdate='xxx' in alembic migration script?

2012-12-05 Thread Eric Ongerth
Thank you very much for that resolution!  I use these a lot.


On Wednesday, December 5, 2012 12:37:54 PM UTC-8, Michael Bayer wrote:


 On Dec 5, 2012, at 2:56 PM, junepeach wrote: 

  I defined several tables in my module file, mymodule.py. For example I 
 have tables T1 and T2 defined: 

  class T1(Base): 
      __tablename__ = 't1' 
      id = Column(Integer, primary_key=True) 
      name = Column(String(15)) 

  class T2(Base): 
      __tablename__ = 't2' 
      id = Column(Integer, primary_key=True) 
      fid = Column(Integer, ForeignKey('t1.id', onupdate='CASCADE', 
                   ondelete='CASCADE'), nullable=False) 
      name = Column(String(15)) 

  After running 'alembic revision --autogenerate', in the migration code for 
 table 't2' I just see something like 
 sa.ForeignKeyConstraint(['fid'], ['t1.id'],) without the 
 onupdate='CASCADE', ondelete='CASCADE' clauses anymore. 
  So I manually added onupdate='CASCADE', ondelete='CASCADE', but I have 
 more than 100 tables. Did I miss something in my module file mymodule.py? 

 this is ticket #92 from the other day, for propagation of the ondelete, 
 onupdate, initially, and deferred arguments of ForeignKeyConstraint; just 
 resolved in rfd50c9605e43. 
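
For reference, the onupdate/ondelete arguments on ForeignKey do flow through 
to emitted DDL in plain SQLAlchemy, which is what the autogenerate fix 
restores on the Alembic side. A quick sketch (generic dialect, illustrative 
table names):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
t1 = Table("t1", metadata, Column("id", Integer, primary_key=True))
t2 = Table(
    "t2",
    metadata,
    Column("id", Integer, primary_key=True),
    Column(
        "fid",
        Integer,
        ForeignKey("t1.id", onupdate="CASCADE", ondelete="CASCADE"),
        nullable=False,
    ),
)

# Compile the CREATE TABLE statement; the FK cascade clauses are rendered.
ddl = str(CreateTable(t2))
print(ddl)
```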




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/YTikzkbgKM0J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Best practice for faves/likes counting?

2012-12-05 Thread Eric Ongerth
But work.like_query.count() will be efficient if you have the right indexes 
in the database, no?

I think if you want to denormalize that count all the way and also stay 
very efficient, maybe it would be good to do it right on the db server with 
a trigger and a stored procedure and avoid extra python function calls.  
What database do you use?
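
Catching SQLAlchemy events is a workable way to do it in Python. A minimal 
sketch, assuming a like_count column on works and modern SQLAlchemy (all 
names illustrative): mapper-level after_insert/after_delete hooks on Like 
issue the counter UPDATE on the same connection the flush is using.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Work(Base):
    __tablename__ = "works"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    like_count = Column(Integer, nullable=False, default=0)

class Like(Base):
    __tablename__ = "likes"
    user_id = Column(Integer, primary_key=True)
    work_id = Column(Integer, ForeignKey("works.id"), primary_key=True)

works = Work.__table__

@event.listens_for(Like, "after_insert")
def _inc_like_count(mapper, connection, like):
    # Bump the denormalized counter inside the same flush/transaction.
    connection.execute(
        works.update()
        .where(works.c.id == like.work_id)
        .values(like_count=works.c.like_count + 1)
    )

@event.listens_for(Like, "after_delete")
def _dec_like_count(mapper, connection, like):
    connection.execute(
        works.update()
        .where(works.c.id == like.work_id)
        .values(like_count=works.c.like_count - 1)
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

w = Work(title="a work")
session.add(w)
session.flush()
session.add_all([Like(user_id=1, work_id=w.id), Like(user_id=2, work_id=w.id)])
session.commit()

session.refresh(w)
print(w.like_count)  # 2
```

Because the UPDATE runs in the same transaction as the insert/delete, the 
counter stays consistent; a database trigger gives the same guarantee while 
also covering writers that bypass the ORM.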


On Wednesday, December 5, 2012 3:06:03 PM UTC-8, Hong Minhee wrote:

 Hi,

 I am making a typical web application using SQLAlchemy, and it contains 
 “users”, “works” and “likes”:


- users ( id, login, … )
- works ( id, title, … )
- likes ( user_id, work_id )


 I want to print how many users liked each work, so the simplest (but 
 naive) approach is to query the count each time, e.g.:

 len(work.like_set)
 work.like_query.count()

 However it's inefficient, so I want to maintain a like_count field or some 
 such thing for works.  The problem is that I have to manually update the 
 field every time a new like is inserted or an existing like is deleted.  I 
 thought it could be automatically updated without inconsistency if I catch 
 SQLAlchemy events, but I'm not sure whether that's the correct way to do it.

 How do you guys solve such situation with SQLAlchemy?

 Thanks,
 Hong Minhee


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/1bSJm_qJci8J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Creating a feed related to different object type

2012-12-01 Thread Eric Ongerth
Hi Brice,

Yours is another good case of the 'Generic Associations' or 'Polymorphic 
Association' pattern which comes up quite often.  Here's a link to some 
docs that will get you going on a good solution that keeps the database 
normalized.

http://docs.sqlalchemy.org/en/rel_0_8/orm/examples.html#generic-associations

If that doesn't get you off and running, especially the example in 
discriminator_on_association.py, continue this thread or pop into the IRC 
channel.

- ejo

On Wednesday, November 28, 2012 1:20:52 PM UTC-8, deBrice wrote:

 So here is my solution, 

 https://gist.github.com/4164619

 Hope it helps others get some insight into generic relations in SQLAlchemy.


 On Tue, Nov 27, 2012 at 7:41 AM, Brice Leroy bbrri...@gmail.com wrote:

 Thank you for the advice, Robert. As I'm using Flask, I'm not sure how 
 adaptable the ZCA would be. I'll try to understand the logic behind it and 
 see if I can replicate it within my models.
  

 On Tue, Nov 27, 2012 at 7:12 AM, Robert Forkel xrot...@googlemail.com wrote:

 Hi,
 I used to do something like this, i.e. adding information about URLs,
 views, etc. to SQLAlchemy models, but found it to be inflexible. Now
 I keep URL-related information in the web app's routing component, and
 to solve problems like the one you pose, I use ZCA adapters [1] (which
 is easier when using Pyramid, because you already have a component
 registry available). With this technology you'd register the same
 FeedItem class as an adapter for the various SQLAlchemy models, which
 means that at adaptation time, the object to adapt will be passed to
 you.
 regards
 robert

 [1] http://www.muthukadan.net/docs/zca.html


 On Tue, Nov 27, 2012 at 3:58 PM, Brice Leroy bbrri...@gmail.com wrote:
  Hello everybody,
 
  It's about brainstorming an elegant solution. I previously posted this
  question on the Flask mailing list, and I was advised to post it on the
  SQLAlchemy list... which makes more sense, I admit. So here is my issue:
 
  I have N different classes:
 
  class Comment(Models):
author_id = Integer
comment = String
creation_date = Date
 
  class Picture(Models):
author_id = Integer
image = File
creation_date = Date
  ...
 
  Now let's say I have a follow feature, allowing a user X to get updates
  when Y (the user followed by X) does something (creepy...).
 
  So far I came up with something like that:
 
  class FeedItem(Model):
table = String
key = Integer
creation_date = Date
 
def url(self):
  #get object by querying self.table with self.key
  object = self.get_an_object(table=self.table, key=self.key)
  return object.view_url
 
  and then add this property to Comment and Picture classes:
 
@property
def view_url(self):
  return url_for('view_function_name', self.id)
 
  - What would be your way of dealing with this kind of open/generic
  relationship items?
 
  - How would you manage automatic deletion of a FeedItem when the object it
  points to gets destroyed? (I'm thinking of attaching a function to the
  delete event of these classes.)
 
  - Would you create one FeedItem per follower, or use a table to link
  FeedItems to followers, so that deleting a FeedItem would automatically
  delete the relation records from that table?
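
 A hedged sketch of the delete-event idea, using the table/key fields from 
 the FeedItem sketch above (modern SQLAlchemy API, illustrative names): an 
 after_delete mapper event on each target class deletes matching FeedItem 
 rows on the same connection the flush is using.

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class FeedItem(Base):
    __tablename__ = "feed_item"
    id = Column(Integer, primary_key=True)
    table = Column(String)  # name of the table the item points at
    key = Column(Integer)   # primary key within that table

class Comment(Base):
    __tablename__ = "comment"
    id = Column(Integer, primary_key=True)
    comment = Column(String)

feed_items = FeedItem.__table__

@event.listens_for(Comment, "after_delete")
def _drop_feed_items(mapper, connection, target):
    # Remove any feed entries pointing at the row being deleted.
    # Bracket access because the columns are named 'table' and 'key'.
    connection.execute(
        feed_items.delete().where(
            (feed_items.c["table"] == "comment")
            & (feed_items.c["key"] == target.id)
        )
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

c = Comment(comment="hi")
session.add(c)
session.flush()
session.add(FeedItem(table="comment", key=c.id))
session.commit()

session.delete(session.get(Comment, c.id))
session.commit()
print(session.query(FeedItem).count())  # 0
```

 The same listener would be registered once per target class (Picture, etc.), 
 e.g. from a mixin or a loop over the registered models.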
 
  Thank you,
 
  --
  Brice
 




 -- 
 Brice

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/fIkNEtwTaH8J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: SQLA app and Django app using the same database

2012-10-27 Thread Eric Ongerth
Generally, no problem due to the different ORMs.  There is potential for 
concurrency issues, just as if you had two apps with the same ORM accessing 
the same database.

On Friday, October 26, 2012 7:56:06 AM UTC-7, Diego Woitasen wrote:

 Hi,
  I'm developing an application that accesses a DB that is defined and mainly 
 manipulated by a Django application. My app works in the backend, processing 
 the data using SQLA.

  My question is: is there a problem in a scenario like this? I mean, two 
 different applications with different ORMs using the same DB. At first, the 
 only problem I see is that we'll need to keep the models in sync, but I'm 
 not sure if I'm missing anything.

  Maybe someone has already faced a problem like this and could tell me about 
 his/her experience.

 Regards,
   Diego


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/yMtiPUemmCsJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Saas deployment suggestions for SQLAlchemy?

2012-06-25 Thread Eric Ongerth
+1 on seeing thoughts in response to Iain's post! 


On Tuesday, May 1, 2012 8:14:42 AM UTC-7, Iain Duncan wrote:

 Hey all, we've been using SQLAlchemy for about 5 years now, currently with 
 Pyramid, but have never deployed to anything other than a standard VPS 
 for a single-client install. We're now gearing up to make some of our 
 products available as monthly subscriptions, and I'm looking for opinions 
 and pointers on that whole world. Our needs are:

 - must be able to keep using SQLAlchemy, Pyramid, Chameleon, FormEncode
 - must be able to connect to per-client db and central account management 
 db, likely using wsgi middleware for the central account part, but quite 
 possibly having the application talk to both dbs itself
 - we're really small, so we'd prefer to pay more per user for high 
 reliability low headache situations
 - ideally we'd like to know that as many users can signup as possible 
 without our interference, but I'm not married to that idea if everything 
 else leans another way
 - we are unlikely to have that many users, and bandwidth use will likely 
 be low ( it's not likely to go boom and suddenly need to scale like crazy, 
 rather specialized client base)

 The only things I've thought of are:
 - do it manually on a big ass vps and monitor
 - use Heroku
 - ???

 Another question I have, do most people deploying something as a SaaS set 
 it up so there is one python worker serving all the users or is it better 
 to have a python process/worker per user?

 War stories welcome!
 thanks
 Iain


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/FCRVrheYgtAJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: auto reflect to create table declarative table code

2012-06-07 Thread Eric Ongerth
Ben,

I think I might have seen you write in IRC that you had already
decided not to autoreflect after all, not sure if this is the same
person.  But in any case, I wanted to say that one good reason NOT to
autoreflect is as follows.  If you do the work of building a
declarative model for your database setup, it becomes very easy to
create test databases with an identical definition to your working
database, as in just a few lines easy.  That alone is a pretty big
advantage.  Another is that your declarative setup code can serve as
an additional backup of your database schema -- not that that should
matter really, but some people have projects that they leave alone for
a while, and when they come back it can be pretty hard to get
reacquainted just by looking over the table and constraint definitions
themselves, whereas reading through a sqlalchemy declarative model
that you built can make it all clear in short order.

- Eric


On Jun 6, 7:05 pm, Benjamin Hitz h...@stanford.edu wrote:
 Nah, I don't want a simpler interface, I want something that just generates 
 the code so I can extend it as needed.

 Ben
 On Jun 6, 2012, at 5:57 PM, Michael Bayer wrote:

  On Jun 6, 2012, at 7:47 PM, Benjamin Hitz wrote:

  This seems (to me) like an obvious question, but a brief googling and a 
  look at the docs didn't turn up the answer.

  I have an existing database (let's say MySQL).
  I can easily create an object like:

  class BroadPeaks(Base):
    __table__ = Table('broad_peaks', Base.metadata, autoload=True)

    def __repr__(self):
        return 'Peak %r' % (self.peak_name)

  Once I am connected, yay.  And I think this already has all the 
  relationships I need.

  But let's say I am SO lazy that I just want to auto-generate the lines:

  class ClassName(Base):
    __table__ = Table('table_name', Base.metadata, autoload=True)

  For each table.  Should I just introspect the metadata object?  I thought 
  of this but not all the tables were loaded...

  if you're incredibly lazy to that degree you might want to check out / help 
  out with SQLSoup, which has been with SQLAlchemy since the beginning but is 
  now its own project:

 http://readthedocs.org/docs/sqlsoup/en/latest/tutorial.html

  basically you give it a name, it reflects that name and maps it.   All 
  kinds of caveats apply.
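
 Introspecting the metadata can work if you reflect everything first. A 
 hedged sketch (modern SQLAlchemy API; the table names are invented 
 stand-ins): reflect all tables into the declarative metadata, then generate 
 one mapped class per table with type(). Note each table needs a primary key 
 for the mapping to succeed.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import declarative_base

engine = create_engine("sqlite://")

# Hypothetical pre-existing schema, standing in for the real database.
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE broad_peaks (id INTEGER PRIMARY KEY, peak_name TEXT)"))
    conn.execute(text("CREATE TABLE samples (id INTEGER PRIMARY KEY, label TEXT)"))

Base = declarative_base()
Base.metadata.reflect(bind=engine)  # load every table definition

classes = {}
for table in Base.metadata.sorted_tables:
    # e.g. broad_peaks -> BroadPeaks
    name = "".join(part.capitalize() for part in table.name.split("_"))
    classes[name] = type(name, (Base,), {"__table__": table})

print(sorted(classes))  # ['BroadPeaks', 'Samples']
```

 Later SQLAlchemy releases grew the automap extension, which packages 
 essentially this behavior (including relationship generation from foreign 
 keys).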


 --
 Ben Hitz
 Senior Scientific Programmer ** Saccharomyces Genome Database ** GO Consortium
 Stanford University ** h...@stanford.edu

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] '_Label' object has no attribute 'nullable'

2012-05-17 Thread Eric Ongerth
(using sqlalchemy 0.8.0b1)

Using the new 8.0 inspection mechanism, I can perform the following on one 
of my model classes in an ipython or bpython shell and obtain a boolean 
result:

 [... bunch of imports to load up my model classes ...]
 inspect(SomeClassOfMine).attr['foo'].columns[0].nullable
True

But it seems I can only do this successfully in a shell.  When the same 
line of code is encountered at runtime, I get the following error:
AttributeError: '_Label' object has no attribute 'nullable'

Further info: this is in a shell initiated with respect to my Pyramid 
project by invoking pshell -p bpython development.ini.

What I'm actually trying to do: got some mako templates generating HTML 
forms from my model classes, and just want to have non-nullable fields 
generate the required keyword in my input elements.  Perhaps there is 
another better way to reach the 'nullable' property of a column?

Fishing for an easy answer here, but if none is forthcoming I'll knuckle 
down and build a testcase.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/y90dnrsvKZkJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: '_Label' object has no attribute 'nullable'

2012-05-17 Thread Eric Ongerth
Answered my own question here.

My code was actually working fine at runtime; it's just that one of the 
objects it encountered, which I had thought was a simple Column, was actually 
a ColumnProperty, which of course has no 'nullable' attribute.
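
A sketch of the guard this implies, on the modern inspection API (`.attrs`; 
names illustrative): only plain Column objects carry .nullable, so filter for 
those when walking a mapper's attributes.

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import column_property, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    note = Column(String)  # nullable by default
    # A ColumnProperty over an expression; its column is not a plain
    # Column and has no .nullable attribute.
    shout = column_property(name + "!")

required = []
for key, prop in inspect(Thing).attrs.items():
    cols = getattr(prop, "columns", None)
    col = cols[0] if cols else None
    # Only schema Columns carry .nullable; expression labels do not.
    if isinstance(col, Column) and not col.nullable:
        required.append(key)

print(sorted(required))  # ['id', 'name']
```

With that isinstance check, the form generator can mark non-nullable fields 
`required` and simply skip computed attributes.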


On Thursday, May 17, 2012 3:08:21 AM UTC-7, Eric Ongerth wrote:

 (using sqlalchemy 0.8.0b1)

 Using the new 8.0 inspection mechanism, I can perform the following on one 
 of my model classes in an ipython or bpython shell and obtain a boolean 
 result:

  [... bunch of imports to load up my model classes ...]
  inspect(SomeClassOfMine).attr['foo'].columns[0].nullable
 True

 But it seems I can only do this successfully in a shell.  When the same 
 line of code is encountered at runtime, I get the following error:
 AttributeError: '_Label' object has no attribute 'nullable'

 Further info: this is in a shell initiated with respect to my Pyramid 
 project by invoking pshell -p bpython development.ini.

 What I'm actually trying to do: got some mako templates generating HTML 
 forms from my model classes, and just want to have non-nullable fields 
 generate the required keyword in my input elements.  Perhaps there is 
 another better way to reach the 'nullable' property of a column?

 Fishing for an easy answer here, but if none is forthcoming I'll knuckle 
 down and build a testcase.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/tnJBFf0OOtEJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: PostgreSQL 9.1 on the horizon, cool new stuff

2011-07-14 Thread Eric Ongerth
Not that anyone actually needed it, but it was fun to filter and
summarize.  (caffeine sink)

On Jul 11, 11:41 pm, Warwick Prince warwi...@mushroomsys.com wrote:
 Thanks for the 'heads-up' Eric :-)

  ! Nothing to see here, move right along !

  Except... Couple of interesting additions coming up in PostgreSQL 9.1
  (still in beta) for anyone who's interested.  Release notes:
  http://developer.postgresql.org/pgdocs/postgres/release-9-1.html
  A couple of selected items I found of interest:

  * New support for CREATE UNLOGGED TABLE -- a new type of table that is
  sort of in between temporary tables and ordinary tables.  They are not
  crash-safe as they are not written to the write-ahead log and are not
  replicated if you have replication set up, but the tradeoff is they
  can be written to a lot faster.  Could use these to speed up testing,
  or in other non-production scenarios where crashproofness is not a
  concern.

  * New support for Synchronous replication -- primary master waits for
  a standby to write the transaction information to disk before
  acknowledging the commit.  This behavior can be enabled or disabled on
  a per-transaction basis.  Also a number of new settings related to
  keeping a 'hot standby'.

  * They added a true serializable transaction isolation level.
  Previously, asking for serializable isolation guaranteed only that a
  single MVCC snapshot would be used for the entire transaction, which
  allowed certain documented anomalies.  The old snapshot isolation
  behavior will now be accessible by using the repeatable read
  isolation level.

  --This one might be particularly interesting for SQLAlchemy--
  * INSERT, UPDATE, and DELETE will now be allowed in WITH clauses;
  these commands can use RETURNING to pass data up to the containing
  query.  While not strictly necessary, this can improve the clarity of
  SQL emitted by eliminating some nested sub-SELECTs.  There is other
  cool stuff you can accomplish with this such as deleting rows from one
  table according to a WHERE clause inside of a WITH...RETURNING, and
  inserting the same rows into another table in the same statement.  The
  recursive abilities of WITH statements can also be used now to perform
  useful maneuvers like recursive DELETEs in tree structures (as long as
  the data-modifying part is outside of the WITH clause).

  * New support for per-column collation settings (yawn... but someone
  somewhere needs this while migrating or something)

  * New support for foreign tables -- allowing data stored outside the
  database to be used like native postgreSQL-stored data (read-only).

  * Enum types can now have values added programmatically via ALTER TYPE (I
  don't know if values can be removed)

  * Added CREATE TABLE IF NOT EXISTS syntax -- seems like SA's DDL
  machinery might want to use that in the checkfirst=True case to
  eliminate the separate check operation?  A minor matter, but nice.

  * Added transaction-level advisory locks (non-enforced, application-
  defined) similar to existing session-level advisory locks.

  * Lots more (a dump + restore will be required between 9.0 and 9.1)


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: SQL Server weird sorting behaviour

2011-07-14 Thread Eric Ongerth
Sounds like you might want to set a different collation?  I don't know
if SQL Server lets you do that per column, per table, or just per
database.

http://msdn.microsoft.com/en-us/library/ms144250.aspx  --- some
collation examples


On Jul 14, 4:51 am, Massi massi_...@msn.com wrote:
 Hi everyone, I'm using sqlalchemy 0.6.8 to interact with an SQL Server
 database via pyodbc. I'm running into trouble using the 'order by'
 clause on a varchar column which includes positive or negative integer
 values. When I try to get values from this column ordered in ascending
 mode I get:

 1
 -1
 11
 -11
 111

 and so on... while I expect the result to be something like:

 -1
 -11
 1
 11
 111

 or even better:

 -11
 -1
 1
 11
 111

 Is there any way to achieve (or at least work around) that with
 SQLAlchemy?
 Thanks in advance!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Can't get sqlalchemy to backup or restore MSSQL database

2011-07-11 Thread Eric Ongerth
I think more common practice is just to use shell scripts (whether in
scheduled tasks / cron jobs or manually) for backups.  But I don't know
MSSQL specifically.  I have just rarely heard of anyone trying to
accomplish their backup with SQLAlchemy as part of the chain of
command.


On Jul 11, 2:06 pm, Maciej Filip Szkodziński
maciej.szkodzin...@gmail.com wrote:
 Hello,

 I'm trying to backup a database running on SQL Server 2008 Express.
 I'm using pyodbc as the driver.

 This backup code executes happily; however, no file is written to the
 provided path. I've also tried placing an empty file at that path, and
 only 2KB of data gets written to it.

 eng = create_engine("mssql+pyodbc://%s:%s@%s" % (uid, pwd, server))
 eng.execute('BACKUP DATABASE test TO DISK=?',
                      backupFilePath)
 # sqlalchemy.engine.base.ResultProxy object at 0x015868F0

 os.path.isfile(backupFilePath)
 # False

 I am able to backup the database with the same parameters in 'bare'
 pyodbc.

 Here's a more verbose version of both the sqlalchemy and pyodbc backup
 code: http://pastebin.com/6x1RRTqz

 I've also tried restoring an existing backup with sqlalchemy. Again, I
 get the ResultProxy, but the newly 'restored' database is stuck in
 perpetual 'Restoring...' state, and trying to use it results in:
 # Database 'test' cannot be opened. It is in the middle of a restore.

 I had a similar problem with bare pyodbc, and googling suggested that
 this loop is required for the backup/restore operation to continue and
 finish:

 while backupCursor.nextset():
     pass

 where backupCursor is the one returned by execute('BACKUP...').

 Is there a different way of doing backups via sqlalchemy, or some way
 around this, or should I stick with bare pyodbc for backups?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Conditional insert in one transaction

2011-06-18 Thread Eric Ongerth
Just curious and learning here  -- There are two separate issues here,
aren't there?  (1.) Atomicity of the transaction, taken care of by the
above discussion, and (2.) what if there was a need to have it be not
only atomic but consume as little time as possible between the read
and write, let's say for financial purposes?

All I really know about (2.) so far is that it's a fine concern in
theory, but in practice there are so many things affecting the timing
that it's impractical to worry about it beyond just coding for atomic
transactions and trying to minimize bottlenecks in the system.



On Jun 17, 7:17 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 17, 2011, at 8:41 AM, Moch Ramis wrote:

  2011/6/15 Michael Bayer mike...@zzzcomputing.com
  Assuming you're talking about the ORM.  You can of course emit the exact 
  SQL you refer to at any time using execute() from an engine, connection, or 
  session.  The ORM works within a transaction at all times, so an 
  expression like:

  myobject = MyObject(id=5)
  myobject = Session.merge(myobject)
  Session.commit()

  will perform the equivalent steps, subject to the transaction isolation in 
  effect.

  I'm not sure this will do the trick, as I don't know whether the object I'm 
  looking for already exists; thus, I don't have its id (I don't even know 
  whether a range of ids is free or not).

 OK then similar,

 myobject = s.query(MyClass).filter(...).first()

 if myobject is None:
         # ... etc



  To lock the row during the SELECT, you can use 
  query.with_lockmode(True).filter(criterion) to SELECT the row with FOR 
  UPDATE, will return None if the row does not exist.

   I'm not sure a lockMode is necessary.

 right actually this doesn't apply here since we are talking about an INSERT 
 not an UPDATE, sorry.

  However, I'm far from being an expert or to clearly know the internal 
  mechanisms of DBMS, but reading the Deadlock example in the part 12.3.3 of 
  this page of the postgresql documentation, I thought, that sending a bunch 
  of requests in one transaction would allow to do it as if it was all at 
  once, avoiding any conflicts that could occurs if two almost identical 
  requests were overlapping (like one executing its first statement, the 
  other the second ect..).

 yah Session is in a transaction which completes when you call rollback() or 
 commit().   I don't think there's any additional atomicity to an IF NOT 
 EXISTS...SELECT type of phrase versus a SELECT then an INSERT (if there is 
 that would be surprising).
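
[Editorial note: a runnable sketch of the SELECT-then-INSERT pattern Michael describes, inside one Session transaction. Widget and get_or_create are hypothetical names; SQLite and the SQLAlchemy 1.4+ API are assumed.]

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True, nullable=False)

def get_or_create(session, name):
    """SELECT first; INSERT only if no row matched, all in one transaction."""
    obj = session.query(Widget).filter_by(name=name).first()
    if obj is None:
        obj = Widget(name=name)
        session.add(obj)
    return obj

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    a = get_or_create(session, 'spork')
    b = get_or_create(session, 'spork')  # autoflush lets the query find the pending row
    assert a is b
    session.commit()
```

As the thread notes, this is only as atomic as the transaction isolation in effect; two concurrent sessions can still both pass the SELECT, so a unique constraint plus retry-on-IntegrityError is the usual belt-and-suspenders.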




[sqlalchemy] maybe check for near-miss underscored parameters in declarative classes?

2011-06-11 Thread Eric Ongerth
I'm plus or minus on this, not sure, but thought it might be worth
bringing up.

I had the following typo in a declarative class def:

class Foo(Base):
__tablename__ = 'foo_table'
__table_args_ = {'mysql_engine':'InnoDB', 'mysql_charset':'utf8'}

... so I had only one underscore character at the end of __table_args_
where I should have had two.  By failing to see this I led myself on a
wild goose chase.  MySQL refused to create another table that
references this table.  In mysql from the terminal, SHOW ENGINE INNODB
STATUS told me that the error was due to failing to reference the
parent table.  I kept trying to figure out how the Integer primary key
could be somehow mismatched with the Integer foreign key that referred
to it.

Ultimately the answer was that the referenced table was the only table
in my entire db that was using MyISAM.  I had to return to the class
definition a couple more times and finally caught the missing
underscore.

It generally does not need to be sqlalchemy's job or responsibility to
scan for typo errors.  But given that it seems to be 'reading' the
whole class definition and that it notices and warns about things like
a dangling comma after a Column specification, maybe it should watch
for single vs. double underscore in the standard __table__,
__tablename__, __table_args__, and __mapper_args__.

Not a strong contention, just tossing it out there.
- Eric
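
[Editorial note: a rough sketch of the kind of near-miss check being proposed, using stdlib difflib. The helper name and cutoff are invented. One wrinkle: Python name-mangles `__table_args_` (two leading, one trailing underscore) inside the class body, so the check has to undo that.]

```python
import difflib

DECLARATIVE_NAMES = ['__table__', '__tablename__', '__table_args__',
                     '__mapper_args__']

def near_miss_dunders(cls, cutoff=0.85):
    """Return (found, suggested) pairs for class attributes that look like
    misspelled declarative names, e.g. __table_args_ (one underscore short)."""
    hits = []
    mangle = '_%s__' % cls.__name__
    for name in vars(cls):
        # undo private-name mangling: in the class body, __table_args_
        # is stored as _Foo__table_args_
        if name.startswith(mangle):
            name = name[len(cls.__name__) + 1:]
        if name in DECLARATIVE_NAMES:
            continue
        match = difflib.get_close_matches(name, DECLARATIVE_NAMES, n=1,
                                          cutoff=cutoff)
        if match:
            hits.append((name, match[0]))
    return hits
```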




[sqlalchemy] Re: Polymorphic forces table definition outside of declarative class?

2011-06-09 Thread Eric Ongerth
Good fix, thanks.

And thanks for the first answer too, I had not thought of just using
__table__.c.discriminator.  I was thinking I would need to use
sometable.c.discriminator, which I wasn't sure would work.

Incidentally, I'm finally going to quit using inheritance for cross-
cutting concerns and try mixins instead... In particular, keeping one
set of unique identifiers for trackable objects that can be users,
documents, events, products, etc... so that a note or issue/ticket can
be affixed to any of the above.  But without the mess of having to
draw primary keys for all of those tables from a single parent class
and all the accompanying baggage of inheritance and incurring joins on
more queries than necessary.  Thanks again for the guidance on that in
the past.


On Jun 8, 2:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 8, 2011, at 5:19 PM, Eric Ongerth wrote:

  # Meanwhile, the following way of doing it doesn't work.
  # But it seems like it would make sense and might be worth enabling.

  class SomeClass(Base):

     __table__ = Table('sometable', Base.metadata,
         Column('id', Integer, primary_key=True),
         Column('discriminator', Text),
         Column('data', Text))

     __mapper_args__ = {'polymorphic_on': 'discriminator',
                        'polymorphic_identity': 'default'}

 polymorphic_on someday might be able to handle more than just a column, and 
 there's no dependency-oriented rationale for allowing the string there like 
 there is with relationship(), so its better that it only accept the column 
 for now, which you can do easily enough via 
 {'polymorphic_on': __table__.c.discriminator} above.

  error it causes is not very clear.  The error would be fine if it
  quoted the piece of code which triggered it, for instance if it said:
  * AttributeError: 'str' object 'discriminator' has no attribute
  'proxy_set'.

 we have a canned "columns only" function used by relationship() to check its 
 arguments, while there are probably a lot more places such a checker could 
 go, its additionally affixed to polymorphic_on in r760197daa0c2.




[sqlalchemy] Polymorphic forces table definition outside of declarative class?

2011-06-08 Thread Eric Ongerth
# Currently I have to do this:

from sqlalchemy import Table, Column, Text, Integer

Base = sqlahelper.get_base()

sometable = Table('sometable', Base.metadata,
    Column('id', Integer, primary_key=True),
    Column('discriminator', Text),
    Column('data', Text))

class SomeClass(Base):

    __table__ = sometable

    __mapper_args__ = {'polymorphic_on': sometable.c.discriminator,
                       'polymorphic_identity': 'default'}


# Meanwhile, the following way of doing it doesn't work.
# But it seems like it would make sense and might be worth enabling.

class SomeClass(Base):

    __table__ = Table('sometable', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('discriminator', Text),
        Column('data', Text))

    __mapper_args__ = {'polymorphic_on': 'discriminator',
                       'polymorphic_identity': 'default'}


# Attempting to do it this way gives this error:
AttributeError: 'str' object has no attribute 'proxy_set'


Doesn't really matter to me, it's no extra effort to define the table
outside of the class, and in fact sometimes I need to do that in order
to share the table with other modules.  But it does seem that this is
a mistake that could be easily made, and my main point is that the
error it causes is not very clear.  The error would be fine if it
quoted the piece of code which triggered it, for instance if it said:
* AttributeError: 'str' object 'discriminator' has no attribute
'proxy_set'.

That still wouldn't be particularly descriptive of what went wrong,
but it might help someone find their error more quickly in the
future.  In my case I was just lucky to remember that I had recently
altered this particular file and was able to guess what the cause of
the problem was.
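
[Editorial note: for reference, Michael's suggested workaround can be written entirely inline, since `__table__` is already bound in the class namespace by the time `__mapper_args__` is evaluated. A minimal runnable sketch on SQLite (newer SQLAlchemy releases also accept the plain string for polymorphic_on, but the Column reference works across versions):]

```python
from sqlalchemy import Column, Integer, Table, Text, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class SomeClass(Base):
    __table__ = Table('sometable', Base.metadata,
                      Column('id', Integer, primary_key=True),
                      Column('discriminator', Text),
                      Column('data', Text))
    # __table__ is already a name in the class body, so its columns
    # can be referenced directly instead of passing a bare string
    __mapper_args__ = {'polymorphic_on': __table__.c.discriminator,
                       'polymorphic_identity': 'default'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(SomeClass(data='hello'))
    session.commit()
```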




[sqlalchemy] Re: Defining a relationship without a foreign key constraint?

2011-03-16 Thread Eric Ongerth
Mike's suggestion is correct, and I want to add that relationship() /
relation() do not require a foreign key constraint; they are just able
to figure out the join condition more automatically (without a
primaryjoin argument, in unambiguous cases) when one does exist on the
table.
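
[Editorial note: a sketch of such a relationship on a modern release, where foreign() can be used inside the primaryjoin string; on the 0.6/0.7 series current at the time, the equivalent was passing primaryjoin together with foreign_keys. Parent/Child are hypothetical names.]

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship(
        'Child',
        # spell out the join; foreign() (resolved automatically in the
        # string) marks the column that would normally carry a ForeignKey
        primaryjoin='Parent.id == foreign(Child.parent_id)',
    )

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer)  # plain column: no FK constraint emitted

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)  # CREATE TABLE has no REFERENCES clause
with Session(engine) as s:
    s.add(Parent(children=[Child(), Child()]))
    s.commit()
```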


On Mar 15, 6:22 pm, recurse ken...@walkscore.com wrote:
 I'm wondering if there is a way to define a relationship without
 creating an associated foreign key constraint in the database.  It
 seems like relationship() requires me to define a foreign key, and
 that in turn automatically creates a foreign key constraint.  I'm
 currently using the declarative syntax to define my tables.




[sqlalchemy] Re: two-way attribute relationships through associationproxy

2011-03-16 Thread Eric Ongerth
I hope this makes sense, what I'm trying to do here.  My naive first
try at it was to just see if I could have the desired backref()s on
either the A or C class, and manage to have the relationship go
through the associationproxy.  Probably I should be thinking of a
technical reason why we would not expect that to work.  So then I
thought maybe the backrefs just need to be on the relationships
defined on the B class.  The above post was my third try, where I
attempted to just make the whole setup symmetrical between A and C.


On Mar 14, 4:44 pm, Eric Ongerth ericonge...@gmail.com wrote:
 from sqlalchemy import Unicode, Integer, Column, create_engine,
 ForeignKey
 from sqlalchemy.orm import relationship, Session
 from sqlalchemy.orm.collections import MappedCollection
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.ext.associationproxy import association_proxy
 import operator

 class Base(object):
     id = Column(Integer, primary_key=True)

 Base = declarative_base(cls=Base)

 def _create_c_by_value(value):
     return C(value)

 def _create_a_by_value(value):
     return A(value)

 class A(Base):
     __tablename__ = "a"
     id = Column(Integer, primary_key=True)
     value = Column(Unicode)
     associations = relationship("B", cascade="all")
     c_values = association_proxy("associations", "c_val",
 creator=_create_c_by_value)

     def __init__(self, val):
         self.value = val

     def __repr__(self):
         return('A(%s)' % self.value)

 class B(Base):
     __tablename__ = "b"
     a_id = Column(Integer, ForeignKey("a.id"), nullable=False)
     c_id = Column(Integer, ForeignKey("c.id"), nullable=False)
     c_elements = relationship("C", cascade="all")
     c_val = association_proxy("c_elements", "value")
     a_elements = relationship("A", cascade="all")
     a_val = association_proxy("a_elements", "value")

 class C(Base):
     __tablename__ = "c"
     id = Column(Integer, primary_key=True)
     c_value = Column(Unicode)
     associations = relationship("B", cascade="all")
     a_values = association_proxy("associations", "a_val",
 creator=_create_a_by_value)

     def __init__(self, val):
         self.value = val

     def __repr__(self):
         return('C(%s)' % self.value)

 if __name__ == "__main__":

     engine = create_engine('sqlite://', echo=True)
     Base.metadata.create_all(engine)
     session = Session(engine)

 Ok, running this sets up an example environment then you can
 experiment.  Try creating some As and relating them to some Cs.  I'm
 getting a bunch of different errors doing this.  What am I doing wrong
 or overlooking?

 The core of my inquiry is this: would this be the way to make two-way
 relationships between two classes (two classes which have a many:many
 relationship between them) work correctly when tunneled through an
 associationproxy to hide the association object class between them?




[sqlalchemy] Re: How to write and access attribute in many to many table

2011-03-14 Thread Eric Ongerth
Nothing prevents the use of associationproxy with Declarative.
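
[Editorial note: a minimal declarative sketch of the association-object pattern for the question below, so the extra "Max" column on the link table is readable and writable; columns are simplified and the class/attribute names are taken loosely from the original post.]

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class PlageTypeActe(Base):
    """The many-to-many link row itself, mapped as a class, so its
    extra Max attribute is directly accessible."""
    __tablename__ = 'plage_type_acte'
    IdPlage = Column(ForeignKey('plage_type.IdPlage'), primary_key=True)
    IdActe = Column(ForeignKey('acte.IdActe'), primary_key=True)
    Max = Column(Integer)
    acte = relationship('Acte')

class PlageType(Base):
    __tablename__ = 'plage_type'
    IdPlage = Column(Integer, primary_key=True)
    actes = relationship('PlageTypeActe', backref='plage_type')

class Acte(Base):
    __tablename__ = 'acte'
    IdActe = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as s:
    plage = PlageType()
    link = PlageTypeActe(acte=Acte(), Max=5)  # set Max on the link row
    plage.actes.append(link)
    s.add(plage)
    s.commit()
```

If the same link table is also named in a `secondary=` relationship, as in the original code, that relationship would need `viewonly=True` to avoid the two mappings writing the same rows.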


On Mar 14, 3:26 am, Christian Démolis christiandemo...@gmail.com
wrote:
 Hi all,

 I have a question about many to many Table containing attribute.
 How to access and write Max attribute in many to many table ?

 I already read that, but I try to not use mapper and stay in declarative mode,
 which is more user friendly :)
 http://www.sqlalchemy.org/docs/05/reference/ext/associationproxy.html

 PlageTypeActe = Table('plage_type_acte', Base.metadata,
     Column('IdPlage', Integer, ForeignKey('plage_type.IdPlage'),
 primary_key=True),
     Column('IdActe', Integer, ForeignKey('acte.IdActe'), primary_key=True),
      Column('Max', Integer)
 )

 class PlageType(Base):
     __tablename__ = 'plage_type'
     Typage = 2
     IdPlage = Column('IdPlage', Integer, primary_key=True)
     Debut = Column('Debut', DateTime)
     Fin = Column('Fin', DateTime)
     JourSemaine = Column('JourSemaine', Integer)
     EtatPrecedent = Column('EtatPrecedent', String)
     EtatCourant = Column('EtatCourant', String)
     EtatSuivant = Column('EtatSuivant', String)
     Max = Column('Max', Integer)
     SurchargeVad = Column('SurchargeVad', Boolean)
     StopVad = Column('StopVad', Time)
     Creation = Column('Creation', DateTime)
     Modification = Column('Modification', DateTime)
     MaxDejaVenu = Column('MaxDejaVenu', Integer)
     MaxJamaisVenu = Column('MaxJamaisVenu', Integer)
     MaxActif = Column('MaxActif', Integer)
     MaxInactif = Column('MaxInactif', Integer)
     MaxPatient = Column('MaxPatient', Integer)
     MaxDelegue = Column('MaxDelegue', Integer)
     MaxCmu = Column('MaxCmu', Integer)
     MaxNonCmu = Column('MaxNonCmu', Integer)
     AgeMini = Column('AgeMini', Integer)
     AgeMaxi = Column('AgeMaxi', Integer)
     IdSemaineType = Column('IdSemaineType',
 ForeignKey('semaine_type.IdSemaineType'))
     IdDossier = Column('IdDossier', ForeignKey('dossier.IdDossier'))
     IdLieu = Column('IdLieu', ForeignKey('lieu.IdLieu'))
     NomRemplacement = Column('NomRemplacement', String)
      PlageTypeActe = relationship("Acte", secondary=PlageTypeActe,
  backref="plage_type")
     def __init__(self,
         Debut=datetime.datetime.today(),
         Fin=datetime.datetime.today(),
         JourSemaine=0,
          EtatPrecedent="",
          EtatCourant="",
          EtatSuivant="",
         Max=0,
         SurchargeVad=0,
         StopVad=datetime.time(),
         Creation=datetime.datetime.today(),
         Modification=datetime.datetime.today(),
         MaxDejaVenu=-1,
         MaxJamaisVenu=-1,
         MaxActif=-1,
         MaxInactif=-1,
         MaxPatient=-1,
         MaxDelegue=-1,
         MaxCmu=-1,
         MaxNonCmu=-1,
         AgeMini = 0,
         AgeMaxi = 0,
         IdSemaineType = 0,
         IdDossier = 0,
         IdLieu = 0,
          NomRemplacement = ""
         ):
         self.Debut = Debut
         self.Fin = Fin
         self.JourSemaine = JourSemaine
         self.EtatPrecedent = EtatPrecedent
         self.EtatCourant = EtatCourant
         self.EtatSuivant = EtatSuivant
         self.Max = Max
         self.SurchargeVad = SurchargeVad
         self.StopVad = StopVad
         self.Creation = Creation
         self.Modification = Modification
         self.MaxDejaVenu = MaxDejaVenu
         self.MaxJamaisVenu = MaxJamaisVenu
         self.MaxActif = MaxActif
         self.MaxInactif = MaxInactif
         self.MaxPatient = MaxPatient
         self.MaxDelegue = MaxDelegue
         self.MaxCmu = MaxCmu
         self.MaxNonCmu = MaxNonCmu
         self.AgeMini = AgeMini
         self.AgeMaxi = AgeMaxi
         self.IdSemaineType = IdSemaineType
         self.IdDossier = IdDossier
         self.IdLieu = IdLieu
         self.NomRemplacement = NomRemplacement




[sqlalchemy] two-way attribute relationships through associationproxy

2011-03-14 Thread Eric Ongerth

from sqlalchemy import Unicode, Integer, Column, create_engine, ForeignKey
from sqlalchemy.orm import relationship, Session
from sqlalchemy.orm.collections import MappedCollection
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
import operator

class Base(object):
    id = Column(Integer, primary_key=True)

Base = declarative_base(cls=Base)

def _create_c_by_value(value):
    return C(value)

def _create_a_by_value(value):
    return A(value)

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    value = Column(Unicode)
    associations = relationship("B", cascade="all")
    c_values = association_proxy("associations", "c_val",
                                 creator=_create_c_by_value)

    def __init__(self, val):
        self.value = val

    def __repr__(self):
        return('A(%s)' % self.value)

class B(Base):
    __tablename__ = "b"
    a_id = Column(Integer, ForeignKey("a.id"), nullable=False)
    c_id = Column(Integer, ForeignKey("c.id"), nullable=False)
    c_elements = relationship("C", cascade="all")
    c_val = association_proxy("c_elements", "value")
    a_elements = relationship("A", cascade="all")
    a_val = association_proxy("a_elements", "value")

class C(Base):
    __tablename__ = "c"
    id = Column(Integer, primary_key=True)
    c_value = Column(Unicode)
    associations = relationship("B", cascade="all")
    a_values = association_proxy("associations", "a_val",
                                 creator=_create_a_by_value)

    def __init__(self, val):
        self.value = val

    def __repr__(self):
        return('C(%s)' % self.value)

if __name__ == "__main__":

    engine = create_engine('sqlite://', echo=True)
    Base.metadata.create_all(engine)
    session = Session(engine)



Ok, running this sets up an example environment then you can
experiment.  Try creating some As and relating them to some Cs.  I'm
getting a bunch of different errors doing this.  What am I doing wrong
or overlooking?

The core of my inquiry is this: would this be the way to make two-way
relationships between two classes (two classes which have a many:many
relationship between them) work correctly when tunneled through an
associationproxy to hide the association object class between them?
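
[Editorial note: a working sketch of the two-way round trip being asked about, on a modern release. My reading of where the original goes astray: the relationships on B should be scalar many-to-one links paired with back_populates, and C.__init__ sets self.value while its column is named c_value. All names here mirror the post but the wiring is changed accordingly.]

```python
from sqlalchemy import Column, ForeignKey, Integer, Unicode, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    value = Column(Unicode)
    associations = relationship('B', back_populates='a',
                                cascade='all, delete-orphan')
    # proxy A -> B -> C.value; the creator builds the intermediate B row
    c_values = association_proxy('associations', 'c_val',
                                 creator=lambda v: B(c=C(v)))

    def __init__(self, value):
        self.value = value

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'), nullable=False)
    c_id = Column(ForeignKey('c.id'), nullable=False)
    a = relationship('A', back_populates='associations')
    c = relationship('C', back_populates='associations')
    # scalar proxies so B exposes each endpoint's value directly
    a_val = association_proxy('a', 'value')
    c_val = association_proxy('c', 'value')

class C(Base):
    __tablename__ = 'c'
    id = Column(Integer, primary_key=True)
    value = Column(Unicode)  # the original named this c_value but set self.value
    associations = relationship('B', back_populates='c')
    a_values = association_proxy('associations', 'a_val',
                                 creator=lambda v: B(a=A(v)))

    def __init__(self, value):
        self.value = value

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
a = A('left')
a.c_values.append('right')  # transparently creates the B and C rows
session.add(a)
session.commit()
```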




[sqlalchemy] Re: Has anyone already got a collection_class mod that uses defaultdict?

2011-03-13 Thread Eric Ongerth
Thank you, Mike.  Hmm... I tried exactly that technique, the last time
I found myself with a similar need, a year or two ago.  I seem to
remember it only caused the problem to regress one step farther.  But
I must have missed something and will probably give it another shot.

Any suggestions on how to replace the dict in one of these situations
with a defaultdict?  Just override __setitem__ and __getitem__ ?

- Eric


On Mar 12, 10:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Mar 10, 2011, at 6:20 PM, Eric Ongerth wrote:

  So, jek, if you're listening, or anyone else -- is there an already
  existing, working implementation of a Dict of Lists or Dict of Sets
  collection class?

 the association_proxy is always there to flatten the object in the middle 
 of two relationships typically along an association - so just 
 A-relationship-dict of B-relationship-set of C, association proxy from A 
 to B.cs.



  On Mar 10, 1:55 pm, Eric Ongerth ericonge...@gmail.com wrote:
  Ach, I did it again... proceeded as if column_mapped_collection and
  attribute_mapped_collection provide collection classes that are dicts
  of lists.  No, they're just dicts of scalar values!

  Again and again I need dicts of lists.  They seem to really fit the
  data that I tend to work with; A's related to C's indexed by B's (the
  B typically indicating the *type* of relationship between the A and
  the C).

  Eventually I need to work my own implementation of dict-of-lists and
  dict-of-dicts collection classes up to full strength so I can share
  them as a recipe or extension or something.

  On Mar 10, 1:02 pm, Eric Ongerth ericonge...@gmail.com wrote:

  I make occasional use of alternate collection classes for my
  sqlalchemy relationships.  For example, column_mapped_collection.  The
  existing collection classes in sqlalchemy.orm.collections work really
  well for me; I have a lot of relational data that very naturally
  belongs in dicts rather than lists because it makes sense as one thing
  indexed or mapped by another.

  I find that I often need to add an item to one of these dict-mapped
  collections and would rather not have to check whether the key already
  exists in the dict.  So I'm doing, essentially:
  collection.setdefault(key, []).append(value).

  As a matter of syntactic sugar (which alternative collection classes
  already are, anyway) I would like to have collection classes backed by
  dicts with the capability of python's defaultdict, so that I could
  write more simply, and with equivalent results:
  collection[key].append(value)

  I understand how to create my own collection class which will behave
  as desired.  My question is, has anyone already got code for this that
  I could borrow rather than reinvent the wheel?  Thanks in advance.  --
  Eric





[sqlalchemy] Re: Has anyone already got a collection_class mod that uses defaultdict?

2011-03-13 Thread Eric Ongerth
Cool!  I was unaware of __missing__.  Thanks again.
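
[Editorial note: Mike's attachment is not reproduced in the archive; the core of a `__missing__`-based dict presumably looks something like this (class name invented). In the real mapping this would be combined with a MappedCollection keyed by the B attribute, plus an association proxy, per the A -> dict of B -> set of C layout described below.]

```python
class DefaultingDict(dict):
    """dict that lazily creates an empty set for missing keys, giving
    defaultdict-style collection[key].add(value) access without
    setdefault() boilerplate."""

    def __missing__(self, key):
        # called by dict.__getitem__ only when the key is absent
        value = self[key] = set()
        return value
```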


On Mar 13, 10:24 am, Michael Bayer mike...@zzzcomputing.com wrote:
 its probably easier to use __missing__ - see attached.

  dict_of_sets_with_default.py



 On Mar 13, 2011, at 4:26 AM, Eric Ongerth wrote:

  Thank you, Mike.  Hmm... I tried exactly that technique, the last time
  I found myself with a similar need, a year or two ago.  I seem to
  remember it only caused the problem to regress one step farther.  But
  I must have missed something and will probably give it another shot.

  Any suggestions on how to replace the dict in one of these situations
  with a defaultdict?  Just override __setitem__ and __getitem__ ?

  - Eric

  On Mar 12, 10:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Mar 10, 2011, at 6:20 PM, Eric Ongerth wrote:

  So, jek, if you're listening, or anyone else -- is there an already
  existing, working implementation of a Dict of Lists or Dict of Sets
  collection class?

  the association_proxy is always there to flatten the object in the 
  middle of two relationships typically along an association - so just 
  A-relationship-dict of B-relationship-set of C, association proxy from 
  A to B.cs.

  On Mar 10, 1:55 pm, Eric Ongerth ericonge...@gmail.com wrote:
  Ach, I did it again... proceeded as if column_mapped_collection and
  attribute_mapped_collection provide collection classes that are dicts
  of lists.  No, they're just dicts of scalar values!

  Again and again I need dicts of lists.  They seem to really fit the
  data that I tend to work with; A's related to C's indexed by B's (the
  B typically indicating the *type* of relationship between the A and
  the C).

  Eventually I need to work my own implementation of dict-of-lists and
  dict-of-dicts collection classes up to full strength so I can share
  them as a recipe or extension or something.

  On Mar 10, 1:02 pm, Eric Ongerth ericonge...@gmail.com wrote:

  I make occasional use of alternate collection classes for my
  sqlalchemy relationships.  For example, column_mapped_collection.  The
  existing collection classes in sqlalchemy.orm.collections work really
  well for me; I have a lot of relational data that very naturally
  belongs in dicts rather than lists because it makes sense as one thing
  indexed or mapped by another.

  I find that I often need to add an item to one of these dict-mapped
  collections and would rather not have to check whether the key already
  exists in the dict.  So I'm doing, essentially:
  collection.setdefault(key, []).append(value).

  As a matter of syntactic sugar (which alternative collection classes
  already are, anyway) I would like to have collection classes backed by
  dicts with the capability of python's defaultdict, so that I could
  write more simply, and with equivalent results:
  collection[key].append(value)

  I understand how to create my own collection class which will behave
  as desired.  My question is, has anyone already got code for this that
  I could borrow rather than reinvent the wheel?  Thanks in advance.  --
  Eric





[sqlalchemy] Re: Has anyone already got a collection_class mod that uses defaultdict?

2011-03-13 Thread Eric Ongerth
Ah, brilliant!  So now I'm analyzing what prevented me from coming up
with that solution before.  It looks like my problem, or mental block,
was that I was trying to find a magic incantation to allow me to get
the whole dict-of-set behavior defined from the A class, rather than
allowing it to be staged across A and B where it naturally fits.
That is exactly where I should have seen that associationproxy would
handle the rest, but I must have been attempting to use
associationproxy in a different way.  At some point I remember
succumbing to a mental storm of infinite recursion and had to just put
down the keyboard and back away slowly.

I'm excited to give this a test-drive.

On Mar 13, 10:24 am, Michael Bayer mike...@zzzcomputing.com wrote:
 its probably easier to use __missing__ - see attached.

  dict_of_sets_with_default.py



 On Mar 13, 2011, at 4:26 AM, Eric Ongerth wrote:

  Thank you, Mike.  Hmm... I tried exactly that technique, the last time
  I found myself with a similar need, a year or two ago.  I seem to
  remember it only caused the problem to regress one step farther.  But
  I must have missed something and will probably give it another shot.

  Any suggestions on how to replace the dict in one of these situations
  with a defaultdict?  Just override __setitem__ and __getitem__ ?

  - Eric

  On Mar 12, 10:26 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Mar 10, 2011, at 6:20 PM, Eric Ongerth wrote:

  So, jek, if you're listening, or anyone else -- is there an already
  existing, working implementation of a Dict of Lists or Dict of Sets
  collection class?

  the association_proxy is always there to flatten the object in the 
  middle of two relationships typically along an association - so just 
  A-relationship-dict of B-relationship-set of C, association proxy from 
  A to B.cs.

  On Mar 10, 1:55 pm, Eric Ongerth ericonge...@gmail.com wrote:
  Ach, I did it again... proceeded as if column_mapped_collection and
  attribute_mapped_collection provide collection classes that are dicts
  of lists.  No, they're just dicts of scalar values!

  Again and again I need dicts of lists.  They seem to really fit the
  data that I tend to work with; A's related to C's indexed by B's (the
  B typically indicating the *type* of relationship between the A and
  the C).

  Eventually I need to work my own implementation of dict-of-lists and
  dict-of-dicts collection classes up to full strength so I can share
  them as a recipe or extension or something.

  On Mar 10, 1:02 pm, Eric Ongerth ericonge...@gmail.com wrote:

  I make occasional use of alternate collection classes for my
  sqlalchemy relationships.  For example, column_mapped_collection.  The
  existing collection classes in sqlalchemy.orm.collections work really
  well for me; I have a lot of relational data that very naturally
  belongs in dicts rather than lists because it makes sense as one thing
  indexed or mapped by another.

  I find that I often need to add an item to one of these dict-mapped
  collections and would rather not have to check whether the key already
  exists in the dict.  So I'm doing, essentially:
  collection.setdefault(key, []).append(value).

  As a matter of syntactic sugar (which alternative collection classes
  already are, anyway) I would like to have collection classes backed by
  dicts with the capability of python's defaultdict, so that I could
  write more simply, and with equivalent results:
  collection[key].append(value)

  I understand how to create my own collection class which will behave
  as desired.  My question is, has anyone already got code for this that
  I could borrow rather than reinvent the wheel?  Thanks in advance.  --
  Eric





[sqlalchemy] Re: Joined Table Inheritance question

2011-03-10 Thread Eric Ongerth
Or just leave polymorphic_identity='employee' in place and put a CHECK
constraint on the table enforcing that the value of that column is not
'employee'.  Or just don't write any code that can add a generic
Employee :)
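The CHECK-constraint route suggested above can be demonstrated with plain SQL. A minimal sketch using sqlite3 standing in for the thread's backend (the table layout is illustrative, not the poster's actual schema):

```python
import sqlite3

# Base table of a joined-table inheritance setup; the CHECK constraint
# rejects any row whose discriminator carries the generic value
# 'employee', so only Engineer/Manager subtypes can be persisted.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,
        type TEXT NOT NULL,
        CHECK (type <> 'employee')
    )
""")

conn.execute("INSERT INTO employees (type) VALUES ('engineer')")  # accepted
try:
    conn.execute("INSERT INTO employees (type) VALUES ('employee')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # the database itself enforces the rule
```

In SQLAlchemy the same rule can be declared on the Table with a `CheckConstraint("type != 'employee'")`, so the check travels with the schema definition.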


On Mar 10, 8:23 am, Franck franck.pe...@gmail.com wrote:
 I'm trying to answer my own question... maybe I could set polymorphic_identity=None at
 the Employee level, and define the DB field EMPLOYEE.TYPE as not
 nullable...

 Would that work ? Is there a better way ?

 Thanks !
 Franck

 On Thu, Mar 10, 2011 at 2:00 PM, Franck franck.pe...@gmail.com wrote:
  Hi,

  I read carefully the documentation at
 http://www.sqlalchemy.org/docs/orm/inheritance.html and I'd like to
  implement a similar joined table inheritance for my application.

  The following example is provided :

   mapper(Employee, employees, polymorphic_on=employees.c.type,
   polymorphic_identity='employee')
   mapper(Engineer, engineers, inherits=Employee,
   polymorphic_identity='engineer')
   mapper(Manager, managers, inherits=Employee,
   polymorphic_identity='manager')
  However in my case I don't want to be able to persist some random Employee.
  All my employees should be engineers or managers. Ideally some exception
  should be thrown if I try to add an Employee object to my session.

  Of course, I'd still like to be able to persist Engineers and Managers with
  the right type.

  Is there a way to achieve that ?

  Thanks a lot !
  Franck




[sqlalchemy] Has anyone already got a collection_class mod that uses defaultdict?

2011-03-10 Thread Eric Ongerth
I make occasional use of alternate collection classes for my
sqlalchemy relationships.  For example, column_mapped_collection.  The
existing collection classes in sqlalchemy.orm.collections work really
well for me; I have a lot of relational data that very naturally
belongs in dicts rather than lists because it makes sense as one thing
indexed or mapped by another.

I find that I often need to add an item to one of these dict-mapped
collections and would rather not have to check whether the key already
exists in the dict.  So I'm doing, essentially:
collection.setdefault(key, []).append(value).

As a matter of syntactic sugar (which alternative collection classes
already are, anyway) I would like to have collection classes backed by
dicts with the capability of python's defaultdict, so that I could
write more simply, and with equivalent results:
collection[key].append(value)

I understand how to create my own collection class which will behave
as desired.  My question is, has anyone already got code for this that
I could borrow rather than reinvent the wheel?  Thanks in advance.  --
Eric
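For illustration, the auto-vivifying behavior itself takes only a few lines of plain Python. This sketch shows just the dict side; turning it into a real SQLAlchemy collection_class would additionally need the instrumentation hooks from sqlalchemy.orm.collections, which are omitted here:

```python
# A dict whose missing keys spring into existence as empty lists, so
#     collection[key].append(value)
# replaces
#     collection.setdefault(key, []).append(value)
class ListValuedDict(dict):
    def __missing__(self, key):
        # Called only when `key` is absent; create and store the list.
        self[key] = value = []
        return value

collection = ListValuedDict()
collection["red"].append("apple")
collection["red"].append("cherry")
print(collection)  # {'red': ['apple', 'cherry']}
```

Note one side effect of this design (shared with collections.defaultdict): merely reading a missing key inserts it, which matters if you later iterate over the keys.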




[sqlalchemy] Re: Has anyone already got a collection_class mod that uses defaultdict?

2011-03-10 Thread Eric Ongerth
Ach, I did it again... proceeded as if column_mapped_collection and
attribute_mapped_collection provide collection classes that are dicts
of lists.  No, they're just dicts of scalar values!

Again and again I need dicts of lists.  They seem to really fit the
data that I tend to work with; A's related to C's indexed by B's (the
B typically indicating the *type* of relationship between the A and
the C).

Eventually I need to work my own implementation of dict-of-lists and
dict-of-dicts collection classes up to full strength so I can share
them as a recipe or extension or something.

On Mar 10, 1:02 pm, Eric Ongerth ericonge...@gmail.com wrote:
 I make occasional use of alternate collection classes for my
 sqlalchemy relationships.  For example, column_mapped_collection.  The
 existing collection classes in sqlalchemy.orm.collections work really
 well for me; I have a lot of relational data that very naturally
 belongs in dicts rather than lists because it makes sense as one thing
 indexed or mapped by another.

 I find that I often need to add an item to one of these dict-mapped
 collections and would rather not have to check whether the key already
 exists in the dict.  So I'm doing, essentially:
 collection.setdefault(key, []).append(value).

 As a matter of syntactic sugar (which alternative collection classes
 already are, anyway) I would like to have collection classes backed by
 dicts with the capability of python's defaultdict, so that I could
 write more simply, and with equivalent results:
 collection[key].append(value)

 I understand how to create my own collection class which will behave
 as desired.  My question is, has anyone already got code for this that
 I could borrow rather than reinvent the wheel?  Thanks in advance.  --
 Eric




[sqlalchemy] Re: Has anyone already got a collection_class mod that uses defaultdict?

2011-03-10 Thread Eric Ongerth
So, jek, if you're listening, or anyone else -- is there an already
existing, working implementation of a Dict of Lists or Dict of Sets
collection class?
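As a plain-Python sketch of the shape being asked for (not an existing SQLAlchemy recipe, and without the ORM collection instrumentation a real collection_class would need), the dict-of-lists and dict-of-sets cases differ only in the container factory:

```python
def keyed_container_dict(factory):
    """Build a dict subclass whose missing keys auto-create factory()."""
    class KeyedContainerDict(dict):
        def __missing__(self, key):
            self[key] = value = factory()
            return value
    return KeyedContainerDict

DictOfLists = keyed_container_dict(list)
DictOfSets = keyed_container_dict(set)

# "A's related to C's, indexed by B's": the key names the relationship type.
rels = DictOfLists()
rels["friend_of"].append("carol")
rels["friend_of"].append("dave")

tags = DictOfSets()
tags["color"].add("red")
tags["color"].add("red")  # sets deduplicate

print(rels["friend_of"], tags["color"])  # ['carol', 'dave'] {'red'}
```

The factory indirection keeps one implementation for both variants, which is roughly what a shared recipe or extension would want to expose.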


On Mar 10, 1:55 pm, Eric Ongerth ericonge...@gmail.com wrote:
 Ach, I did it again... proceeded as if column_mapped_collection and
 attribute_mapped_collection provide collection classes that are dicts
 of lists.  No, they're just dicts of scalar values!

 Again and again I need dicts of lists.  They seem to really fit the
 data that I tend to work with; A's related to C's indexed by B's (the
 B typically indicating the *type* of relationship between the A and
 the C).

 Eventually I need to work my own implementation of dict-of-lists and
 dict-of-dicts collection classes up to full strength so I can share
 them as a recipe or extension or something.

 On Mar 10, 1:02 pm, Eric Ongerth ericonge...@gmail.com wrote:

  I make occasional use of alternate collection classes for my
  sqlalchemy relationships.  For example, column_mapped_collection.  The
  existing collection classes in sqlalchemy.orm.collections work really
  well for me; I have a lot of relational data that very naturally
  belongs in dicts rather than lists because it makes sense as one thing
  indexed or mapped by another.

  I find that I often need to add an item to one of these dict-mapped
  collections and would rather not have to check whether the key already
  exists in the dict.  So I'm doing, essentially:
  collection.setdefault(key, []).append(value).

  As a matter of syntactic sugar (which alternative collection classes
  already are, anyway) I would like to have collection classes backed by
  dicts with the capability of python's defaultdict, so that I could
  write more simply, and with equivalent results:
  collection[key].append(value)

  I understand how to create my own collection class which will behave
  as desired.  My question is, has anyone already got code for this that
  I could borrow rather than reinvent the wheel?  Thanks in advance.  --
  Eric




[sqlalchemy] Re: cascading deletes

2011-02-25 Thread Eric Ongerth
Wouldn't he need to configure the ondelete cascade for even
session.delete(session.query(User).get('testname'))  to work that
way?

I know why the cascade is necessary for session.query(User).delete()
to also delete the associated IP instances.  But I don't quite get why
it's not necessary for that other method of deleting the user to
delete the associated IPs.
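The distinction is that session.delete() works on a loaded User object, so the relationship's cascade="all" lets the unit of work emit a DELETE for each associated IP itself, while Query.delete() issues a single bulk DELETE statement that bypasses the ORM's cascade machinery entirely; only a database-level ON DELETE CASCADE catches the child rows in that case. A minimal sketch of the database-level behavior, with sqlite3 standing in for the thread's backend (sqlite needs foreign keys switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite: off by default
conn.execute('CREATE TABLE "user" (username TEXT PRIMARY KEY)')
conn.execute("""
    CREATE TABLE ip (
        username TEXT REFERENCES "user" (username) ON DELETE CASCADE,
        ip TEXT,
        PRIMARY KEY (username, ip)
    )
""")
conn.execute('INSERT INTO "user" VALUES (\'testname\')')
conn.execute("INSERT INTO ip VALUES ('testname', '10.0.0.1')")

# A bulk delete with no ORM involved -- what Query.delete() emits.
conn.execute('DELETE FROM "user"')
print(conn.execute("SELECT COUNT(*) FROM ip").fetchone()[0])  # 0
```

With the cascade declared on the foreign key, both deletion styles leave the ip table consistent.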


On Feb 25, 7:17 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 25, 2011, at 7:31 AM, Chris Withers wrote:



  Hi All,

  I have the following models:

  class IP(Base):
     __tablename__ = 'ip'
     username = Column(String(50),
                       ForeignKey('user.username'),
                       primary_key=True)
     ip = Column(String(15), primary_key=True, index=True)

  class User(Base):
     __tablename__ = 'user'
     username = Column(String(50), primary_key=True)
      ips = relation(IP,
                     order_by=IP.ip,
                     backref="user",
                     cascade="all")

  If I delete a user as follows:

  session.delete(session.query(User).get('testname'))

  ...then the IPs associated with 'testname' get deleted.

  However, if I do:

  session.query(User).delete()

  ..they do not.

  Why is that?
  How do I get them both to work?

 you would need to configure ON DELETE CASCADE on the foreign key.  
 ForeignKey() offers the ondelete option for this.



  cheers,

  Chris

  --
  Simplistix - Content Management, Batch Processing  Python Consulting
            -http://www.simplistix.co.uk





[sqlalchemy] Re: Create Database through sqlalchemy 0.6.6 !

2011-02-22 Thread Eric Ongerth
sqlalchemy allows you to issue any literal SQL statement as text:

http://www.sqlalchemy.org/docs/core/tutorial.html#using-text


On Feb 22, 7:38 am, Toninho Nunes toninhonu...@gmail.com wrote:
 Hi,

 I would like to know how to create a database with sqlalchemy using the
 PostgreSQL driver; is there a sample or example?

 sqlalchemy only works with a PostgreSQL database that was previously created.

 see my code:

 import sqlalchemy
 from sqlalchemy import create_engine, Table, MetaData, Integer,
 String, ForeignKey, Column, LargeBinary

 db_engine = create_engine('postgresql+psycopg2://
 postgres:magi1850@localhost/newscom')

 metadata = MetaData()

 metadata.create_all(db_engine)

 Error message:

 sqlalchemy.exc.OperationalError: (OperationalError) FATAL:  database
 newscom2 does not exist
  None None

 Any ideas ?

 Toninho Nunes




[sqlalchemy] Re: Create Database through sqlalchemy 0.6.6 !

2011-02-22 Thread Eric Ongerth
Even with that autocommit transaction isolation level, you probably
need to commit the create database before you try to add tables to
it.
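The traceback below also shows a second problem: the engine is pointed at newscom3, the very database being created, so the initial connection fails before any SQL runs. The usual fix is to connect to an already-existing maintenance database (conventionally "postgres") in autocommit mode, issue CREATE DATABASE there, and only then build an engine for the new database. A hedged sketch (the DSN is a placeholder; a running PostgreSQL server and psycopg2 are assumed, so the import is deferred):

```python
def statement_for(db_name):
    # Kept separate so the SQL itself is easy to inspect and test.
    return 'CREATE DATABASE "%s"' % db_name

def create_database(maintenance_dsn, db_name):
    """Create db_name by connecting to an already-existing database.

    Sketch only: maintenance_dsn is a placeholder such as
    "dbname=postgres user=postgres host=localhost".
    """
    import psycopg2  # lazy import: module loads without psycopg2 installed
    conn = psycopg2.connect(maintenance_dsn)
    # CREATE DATABASE cannot run inside a transaction block.
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    try:
        cur = conn.cursor()
        cur.execute(statement_for(db_name))
        cur.close()
    finally:
        conn.close()
```

Only after this succeeds should create_engine() be pointed at the new database and metadata.create_all(engine) be called.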

On Feb 22, 1:45 pm, Toninho Nunes toninhonu...@gmail.com wrote:
 Hi see my source code below

 import sqlalchemy
 import psycopg2
 from sqlalchemy import create_engine, Table, MetaData, Integer,
 String, ForeignKey, Column, LargeBinary
 from sqlalchemy.sql import text

 db_engine = create_engine('postgresql+psycopg2://
 postgres:password@localhost/newscom3', echo=True)
 db_engine.raw_connection().set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
 db_engine.text("CREATE DATABASE newscom3 WITH OWNER = postgres
 ENCODING = 'UTF8' TABLESPACE = pg_default LC_COLLATE = 'pt_BR.utf8'
 LC_CTYPE = 'pt_BR.utf8' CONNECTION LIMIT = -1;").execute()

 metadata = MetaData()
 metadata.create_all(db_engine)

 I run the source code, but I receive the following error.

 Traceback (most recent call last):
   File "newscomddl.py", line 18, in <module>

 db_engine.raw_connection().set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/engine/base.py, line 1874, in raw_connection
     return self.pool.unique_connection()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 142, in unique_connection
     return _ConnectionFairy(self).checkout()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 369, in __init__
     rec = self._connection_record = pool.get()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 213, in get
     return self.do_get()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 732, in do_get
     con = self.create_connection()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 147, in create_connection
     return _ConnectionRecord(self)
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 253, in __init__
     self.connection = self.__connect()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/pool.py, line 319, in __connect
     connection = self.__pool._creator()
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/engine/strategies.py, line 82, in connect
     return dialect.connect(*cargs, **cparams)
   File /usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.6-
 py2.6.egg/sqlalchemy/engine/default.py, line 249, in connect
     return self.dbapi.connect(*cargs, **cparams)
 sqlalchemy.exc.OperationalError: (OperationalError) FATAL:  database
 newscom3 does not exist
  None None

 I don't get know where are wrong, could you help me?

 Thanks,

 Toninho Nunes




[sqlalchemy] Re: Dynamic relations...

2011-02-13 Thread Eric Ongerth
Polymorphic associations pop up a lot around here, don't they!  I
suppose it's partly because they would be so much more difficult to
handle, or even come close to handling, conveniently, with most other
ORM packages.

Martijn, after running into the wall on polymorphic associations
approximately once a year since Michael wrote that blog article, I
finally got it all straight in my mind, and I realized there are two
fundamental tricks to seeing the PA phenomenon clearly.  One is learn
to sense when it's around; it's almost like a certain smell in your
code.  There is a particular feeling that distinguishes it from other
data modeling problems.  Two, when you notice a polymorphic
association pattern beginning to appear, attempt to think about the
relations involved in the exact reverse direction.  This sounds too
trivial to be a real piece of advice, but it works every time for me.
Each time I perceive a problem with a polymorphic association but then
I reverse my perspective, the perceived problem vanishes.

I could probably do just as well by binding a rug around my head or
something, but that's just me.  And I'm not going to find out.

- Eric


On Feb 13, 1:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 a polymorphic association is hard.   that's why I have three examples of them 
 and soon a fourth.    Though they are a subset of a larger batch of tricks 
 that I've been using in my own work with declarative for the past year to 
 automate lots of different kinds of patterns, perhaps there's a learning 
 curve but once three or four techniques are mastered they come pretty easily.

 On Feb 13, 2011, at 4:35 PM, Martijn Moeling wrote:

  Michael,

  I looked at the code and I cannot say more than that it's very interesting. 
  I have to see how it works and, more importantly, how it fits into my objects, 
  but it seems clear enough to do so.
  I really appreciate your work on SQLAlchemy and all the time you spend to 
  help us users out.

  Your solution is definitely one I could not have put together myself, 
  although I have tried. SQLAlchemy is so powerful that it is hard to find 
  the right options for the job.

  Thank you again!

  Martijn

  On Feb 13, 2011, at 21:19 , Michael Bayer wrote:

  On Feb 13, 2011, at 6:14 AM, Martijn Moeling wrote:

  You are right in the misunderstood relation.

  I see the primary key in extra is wrong; extra should have its own 
  id column, an autonumber. In extra it should be possible to have 
  many records pointing to one ext variant. Sorry for that.

  The extra should also work with tables without a discriminator; there 
  the link should be made to the table name, which in my case is always 
  class.__name__. On those tables, the relation needs to be different 
  since one of the local columns, discriminator, is not present and it 
  somehow should be linked to __tablename__

  OK what you are trying to do is exactly a polymorphic association.   The 
  technique of placing tablename in the table of related records, then 
  using that tablename to indicate which parent table should be matched at 
  query time, is a common, but IMHO relationally incorrect pattern.   I 
  blogged extensively about the Ruby on Rails approach, how to duplicate 
  Rails' approach in SQLAlchemy, and then an alternate system which 
  maintains referential integrity, four years ago 
  at http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-s

  That's a really old example and we're into 0.7 now, so I've created a new 
  version of the poly assoc example that uses declarative techniques and the 
  association proxy, which is attached.  I'm going to further fix up this 
  example and add it to the distribution as a fourth example of polymorphic 
  association, which is in examples/poly_assoc/.

  Additionally I'd like the association proxy to work more smoothly in 
  queries so I've added ticket #2054 for some of those issues which weren't 
  accounted for when we first added any(), contains() operators to the 
  association proxy.

  It is all part of the ACL examples it talked about walker, where extra 
  must be seen as the ACL. That is where the mapper extension comes in

  It is getting to be an "all technologies mixed in" situation

  Verstuurd vanaf mijn iPad

  Op Feb 12, 2011 om 17:05 heeft Michael Bayer mike...@zzzcomputing.com 
  het volgende geschreven:

  OK I can show you the version of your code that does most of this but 
  there are some fundamental relational misunderstandings in this schema 
  if I am interpreting correctly.

  Extra:

  tableid         tablename
  ---         -
  1               ext1
  2               ext1
  3               ext2
  4               ext2
  5               ext3

  ext1:

  id       discriminator     (- FK to Extra.tableid, Extra.tablename)
  --       -
  1        ext1
  2        ext1
  3        ext2
  4        ext2
  5        ext3

  ext2:

  id
  --
  3
  4

  ext3:

  id
  --

  

[sqlalchemy] Re: sqlalchemy rocks my socks off!

2011-01-15 Thread Eric Ongerth
+1

On Jan 13, 5:08 pm, rdlowrey rdlow...@gmail.com wrote:
 To Michael Bayer: sqlalchemy simplifies my life every day and makes me
 vastly more productive! Many thanks.




[sqlalchemy] Re: change of discriminator column apparently not saved by session?

2011-01-03 Thread Eric Ongerth
Oh, whoops.  Also I figured out why my solution of deleting, making
transient, then re-adding appeared not to work.   It actually did
work, I just had some dead rows in the parent table that needed to be
weeded out.  Now I can stop with this refactoring / migration madness
and get back to developing!  Meanwhile the alternative solution in
sa's SQL Expression language turned out to be far simpler too,
required 5 lines of code instead of ~20.

Thanks again!
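The "5 lines of code" fix in the SQL Expression language boils down to one UPDATE per child table. A sketch of the same idea in raw SQL against sqlite3 (the trackable/person/company names are made up to match the story, not the poster's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trackable (id INTEGER PRIMARY KEY, discriminator TEXT);
    CREATE TABLE person  (id INTEGER PRIMARY KEY REFERENCES trackable (id));
    CREATE TABLE company (id INTEGER PRIMARY KEY REFERENCES trackable (id));
    INSERT INTO trackable (id) VALUES (1), (2), (3);
    INSERT INTO person  (id) VALUES (1), (3);
    INSERT INTO company (id) VALUES (2);
""")

# One UPDATE per child table back-fills the parent's discriminator
# for rows that predate the joined-table inheritance refactoring.
for child in ("person", "company"):
    conn.execute(
        "UPDATE trackable SET discriminator = ? "
        "WHERE id IN (SELECT id FROM {})".format(child),
        (child,),
    )

rows = conn.execute(
    "SELECT id, discriminator FROM trackable ORDER BY id"
).fetchall()
print(rows)  # [(1, 'person'), (2, 'company'), (3, 'person')]
```

In SQLAlchemy expression terms the loop body would be roughly an update() on the parent table with a where(...in_(select from child)) clause, executed once per child table.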


On Jan 2, 11:32 pm, Eric Ongerth ericonge...@gmail.com wrote:
 Right, you made that clear before.

 I was no longer talking about setting the discriminator column here in
 0.6.5.  I was talking about deleting, making transient, and then re-
 adding all of the objects in question.  And how this worked on some of
 them but not all.

 And your reasons for not bothering with multiple inheritance in sa are
 great, thanks for describing them.

 On Jan 2, 11:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  On Jan 3, 2011, at 2:02 AM, Eric Ongerth wrote:

   So I tried my solution of deleting and then re-adding each object
   (row) in question.  Didn't work quite like that; instead I had to
   delete, make_transient(), and then re-add.  Still didn't quite work;
   for rows that had sa relationships via FKs to other tables, in order
   to avoid errors I had to give the objects thus related a similar
   treatment, calling make_transient() and then re-associating them.

   Lo and behold, for tables that fit that description it worked and the
   discriminator is set properly now.  But for the rest of the tables,
   those which did not have relationships that needed taking care of in
   that way to avoid errors, their rows still aren't getting the
   discriminator set.  I tried using instance_state(obj).modified =
   True on them before flush, but that didn't help either.  Still
   haven't found a solution for the rest of those.

   I don't mean to annoy you in persisting with this approach when it's
   not to your liking.   It's just that at this point I'm trying to work
   it all the way through in this manner just to come to a better
   understanding of the ORM's workings.

  You can't set the discriminator column in 0.6 with the ORM.   The mapper 
  will ignore it.   You need to forego the ORM or use 0.7.

   On Jan 2, 9:55 pm, Eric Ongerth ericonge...@gmail.com wrote:
   Ah!  I did it again.  You may or may not remember I asked you a
   different question yielding the same answer a couple of months ago!
   I've got to put a stop to that trend.

   To quote from your blog post:

   ===
   In fact, while this table scheme is exactly joined table inheritance,
   and we could certainly go the straight route of creating an
   Addressable base class and mapper from which the User and Order
   classes/mappers derive, then creating the traditional SA polymorphic
   mapping using UNION ALL (or whatever surprise 0.4 has in store) with
   a relationship to Address, here we're going to do it differently.
   Namely, because we are still going to look at this association as a
   cross-cutting concern rather than an is-a relationship, and also
   because SA's explicit inheritance features only support single
   inheritance, and we'd rather not occupy the inherits slot with a
   relationship that is at best a mixin, not an is-a.
   ===

   I actually started modeling this exactly the way you're suggesting in
   that post.  I added a trackable_id column to each of the six tables,
   as a foreign key to a table of association objects.  But then I
   reasoned as follows: (1) great, now I have two unique ID columns in
   each of these tables and that's almost never a good thing.  (2) great,
   now I'm going to have to add more program logic to explicitly create
   an association object every time I add a row to any of these tables.
   Clunky.

   Your blog post shows how to fix problem (2) there with some very cool
   python magic.  Ok, it's not even magic, it's clear enough.  But as
   I've noticed before, I still haven't absorbed your level of comfort
   with throwing that stuff around.  I understand it well enough, but
   when I see underscores I start thinking can't this be done another
   way without wizardry?

   I suppose even that point is moot when I recognize that my solution
   using inheritance actually calls upon a lot of very clever programming
   within the guts of sqlalchemy.

   Your most salient point of all, for me, is where you note that that
   there's currently only one slot for inheritance and there isn't a
   very strong case for occupying it with a cross-cutting concern that is
   at best a mix-in, not an 'is-a'. 

   So now I'm curious, have you given much thought to advancing the
   possibility of supporting multiple inheritance with SA's explicit
   inheritance features?  Or is that just too dangerous (or just too much
   of a pain to work out).  Or too tempting for people to brew up all
   manner of ridiculous and unnecessary schema

[sqlalchemy] Re: change of discriminator column apparently not saved by session?

2011-01-02 Thread Eric Ongerth
On Jan 2, 7:59 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Curious here is what kind of scheme you have that requires a manual setting 
 of the discriminator.

Forgive me, I tried to indicate that it doesn't, this is just a one-
time thing I have to do to get this database fixed up.  Here's the
whole story in a nutshell.  I had six tables (person, company,
product, item, etc.) that had little to do with each other.
Eventually I decided that the objects symbolized by the rows in those
tables had a cross-cutting concern: I want to be able to attach action
tickets (sort of like in Trac) and notes to them.  The project I'm
working on has a certain limited amount of CRM/ERP type stuff as part
of its scope, thus tickets, notes, and so on.

So where these tables originally each had their own unique primary key
id columns, I realized that in order to be able to affix notes
equally to a person, a company, a product, etc., I wanted to have all
of these disparate tables pull their IDs from a common table holding a
single column, a column of trackable object IDs.  Sort of like the
POID concept used for certain concerns in 
http://www.agiledata.org/essays/mappingObjects.html.

Seemed to me a simple way to make that work, without having to build a
bunch of ungainly trigger machinery, is make all of these tables
inherit from that common ID table; thus a joined table inheritance
setup.  I successfully scripted all of the changes necessary so that
instead of their old ID columns, each table's ID column now is still
its primary key but is also a foreign key to the common ID table.

At that point I had sort of Frankensteined a joined-table
inheritance situation... I had managed to do all of the constraint
dropping and adding that was needed, I managed to keep all of the rows
in most of the tables (only had to rebuild some of them), and
everything was peachy.  Now a given ID of any row within any of these
six tables, is unique over all six of the tables.  Thus notes,
tickets, whatever other cross-cutting deals, can be attached to such
unique IDs or groups thereof.

The only remaining problem was that the parent table actually needs
more than just the ID column.  It needs the discriminator column if
I'm going to really handle this as joined-table inheritance.
Everything else is in place except the discriminator column is not set
for any of my existing rows that made it through this refactoring.
The new setup works perfectly for newly added people, companies,
items, products, etc... the ID is assigned and inherited, the
discriminator is assigned.  (Of course.)  But I need a way to get in
there and set the discriminator column for my _existing_ rows that
came through from before.

I suppose I could just go with Concrete Table inheritance, but the
added difficulties with relationships are less enticing.  Intuitively
the joined table idea seems to fit.

Should I just take a deep breath and get this done without the ORM?  I
appreciate your answer regarding pre-0.7 vs. 0.7 approaches, that's
the sort of stuff I was looking for when I went ahead and asked this
question rather than quietly working around it.

Your answer also just jogged my brain in a way that I failed to do
last night.  I could just delete and re-insert each of these pre-
existing rows that I need to fix up... now that's blindingly obvious.
I welcome any commentary, though, on the sanity or insanity of my
above paragraphs.  Sqlalchemy rocks...

Thanks!
Eric



 On Jan 1, 2011, at 9:52 PM, Eric Ongerth wrote:





  I must be doing something wrong but can't find it.

  I'm doing some database refactoring and found a situation where I had
  to set up a joined-table inheritance structure after the fact, with
  some of the tables already populated.  I created (successfully) a
  script that got all of the primary and foreign keys right on all the
  child tables and the parent table, and now I just need to correctly
  populate the discriminator column on the parent table.

  Each row of the parent table is referenced by exactly one row from one
  of the six child tables.  So i'm iterating through all child objects
  (yes, doing this via the sa ORM) and setting the value of the
  discriminator appropriately.  Then I commit the Session.  Afterward, I
  go and look at the parent table directly using pgAdmin and I see that
  the new values in the discriminator column were not saved.  That's
  strange because I didn't get an error either, nor am I doing anything
  that would mask an error.

  So I thought maybe the discriminator column on the parent table in a
  joined-table inheritance scenario is just not watched by the Session /
  UOW.  I thought maybe it assumes that this column would only,
  normally, be set during object instantiation and would typically not
  change for the lifetime of the object and its associated database
  rows.  So I tried manually dirtying the object using
  instance_state(obj).modified=True before committing the Session.
  Still no success

[sqlalchemy] Re: change of discriminator column apparently not saved by session?

2011-01-02 Thread Eric Ongerth
Ah!  I did it again.  You may or may not remember I asked you a
different question yielding the same answer a couple of months ago!
I've got to put a stop to that trend.

To quote from your blog post:

===
In fact, while this table scheme is exactly joined table inheritance,
and we could certainly go the straight route of creating an
Addressable base class and mapper from which the User and Order
classes/mappers derive, then creating the traditional SA polymorphic
mapping using UNION ALL (or whatever surprise 0.4 has in store) with
a relationship to Address, here we're going to do it differently.
Namely, because we are still going to look at this association as a
cross-cutting concern rather than an is-a relationship, and also
because SA's explicit inheritance features only support single
inheritance, and we'd rather not occupy the 'inherits' slot with a
relationship that is at best a mixin, not an 'is-a'.
===

I actually started modeling this exactly the way you're suggesting in
that post.  I added a trackable_id column to each of the six tables,
as a foreign key to a table of association objects.  But then I
reasoned as follows: (1) great, now I have two unique ID columns in
each of these tables and that's almost never a good thing.  (2) great,
now I'm going to have to add more program logic to explicitly create
an association object every time I add a row to any of these tables.
Clunky.
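The two objections can be seen in a compact declarative sketch of the association-object layout described here. `TrackableAssoc` and `Note` are hypothetical stand-in names, and the explicit creation in `__init__` is exactly the extra program logic called clunky above:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

# Hypothetical association table: one row per trackable object.
class TrackableAssoc(Base):
    __tablename__ = "trackable_assoc"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    notes = relationship("Note", backref="trackable")

class Note(Base):
    __tablename__ = "note"
    id = Column(Integer, primary_key=True)
    trackable_id = Column(ForeignKey("trackable_assoc.id"))
    text = Column(String)

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    # Objection (1): a second unique id column on each of the six tables.
    trackable_id = Column(ForeignKey("trackable_assoc.id"))
    trackable = relationship(TrackableAssoc)

    def __init__(self, **kw):
        super().__init__(**kw)
        # Objection (2): the association row must be created explicitly.
        self.trackable = TrackableAssoc(type="person")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    p = Person()
    p.trackable.notes.append(Note(text="called about order"))
    session.add(p)
    session.commit()
    count = session.query(Note).count()

print(count)  # -> 1
```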

Your blog post shows how to fix problem (2) there with some very cool
python magic.  Ok, it's not even magic, it's clear enough.  But as
I've noticed before, I still haven't absorbed your level of comfort
with throwing that stuff around.  I understand it well enough, but
when I see underscores I start thinking "can't this be done another
way without wizardry?"

I suppose even that point is moot when I recognize that my solution
using inheritance actually calls upon a lot of very clever programming
within the guts of sqlalchemy.

Your most salient point of all, for me, is where you note that
there's currently only one slot for inheritance and there isn't a
very strong case for occupying it with a cross-cutting concern that is
at best a mix-in, not an 'is-a'.

So now I'm curious: have you given much thought to advancing the
possibility of supporting multiple inheritance with SA's explicit
inheritance features?  Or is that just too dangerous (or just too much
of a pain to work out)?  Or too tempting for people to brew up all
manner of ridiculous and unnecessary schemas with it?

Thank you for your thoughts.


On Jan 2, 9:18 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 2, 2011, at 11:25 PM, Eric Ongerth wrote:



  On Jan 2, 7:59 am, Michael Bayer mike...@zzzcomputing.com wrote:
  Curious here is what kind of scheme you have that requires a manual 
  setting of the discriminator.

  Forgive me, I tried to indicate that it doesn't, this is just a one-
  time thing I have to do to get this database fixed up.  Here's the
  whole story in a nutshell.  I had six tables (person, company,
  product, item, etc.) that had little to do with each other.
  Eventually I decided that the objects symbolized by the rows in those
  tables had a cross-cutting concern: I want to be able to attach action
  tickets (sort of like in Trac) and notes to them.  The project I'm
  working on has a certain limited amount of CRM/ERP type stuff as part
  of its scope, thus tickets, notes, and so on.

  So where these tables originally each had their own unique primary key
  id columns, I realized that in order to be able to affix notes
  equally to a person, a company, a product, etc., I wanted to have all
  of these disparate tables pull their IDs from a common table holding a
  single column, a column of trackable object IDs.  Sort of like the
  POID concept used for certain concerns in
  http://www.agiledata.org/essays/mappingObjects.html.

  Seemed to me a simple way to make that work, without having to build a
  bunch of ungainly trigger machinery, is make all of these tables
  inherit from that common ID table; thus a joined table inheritance
  setup.  I successfully scripted all of the changes necessary so that
  instead of their old ID columns, each table's ID column now is still
  its primary key but is also a foreign key to the common ID table.

 oh.   OK no, I wouldn't use joined inheritance for a cross-cutting 
 concern.  This is the polymorphic association pattern, which from a database 
 table perspective, looks the same in fact.   But the cross-cutting concern
 is modeled at the ORM level via relationship.  The blog post is extremely old 
 at this point but it should get the idea across:  
 http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-s...

 Also if the question about using the ORM or not refers to just the activity 
 of migrating the data to the new structure, I'd possibly skip the ORM for a 
 migration, since it's a single operation that has to work only once.   It sort
 of depends on what seems more

[sqlalchemy] Re: change of discriminator column apparently not saved by session?

2011-01-02 Thread Eric Ongerth
So I tried my solution of deleting and then re-adding each object
(row) in question.  Didn't work quite like that; instead I had to
delete, make_transient(), and then re-add.  Still didn't quite work;
for rows that had sa relationships via FKs to other tables, in order
to avoid errors I had to give the objects thus related a similar
treatment, calling make_transient() and then re-associating them.

Lo and behold, for tables that fit that description it worked and the
discriminator is set properly now.  But for the rest of the tables,
those which did not have relationships that needed taking care of in
that way to avoid errors, their rows still aren't getting the
discriminator set.  I tried using instance_state(obj).modified =
True on them before flush, but that didn't help either.  Still
haven't found a solution for the rest of those.

I don't mean to annoy you in persisting with this approach when it's
not to your liking.   It's just that at this point I'm trying to work
it all the way through in this manner just to come to a better
understanding of the ORM's workings.
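The delete / `make_transient()` / re-add sequence described above, in outline. This is a toy single-table sketch rather than the six-table schema from the thread, written against the modern API:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session, make_transient

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Widget(id=1, name="old"))
    session.commit()

    w = session.get(Widget, 1)
    session.delete(w)
    session.flush()          # DELETE is emitted here
    make_transient(w)        # strip persistence state; w is "new" again
    w.name = "new"
    session.add(w)           # re-INSERTed on the next flush
    session.commit()
    name = session.get(Widget, 1).name

print(name)  # -> new
```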



[sqlalchemy] Re: change of discriminator column apparently not saved by session?

2011-01-02 Thread Eric Ongerth
Right, you made that clear before.

I was no longer talking about setting the discriminator column here in
0.6.5.  I was talking about deleting, making transient, and then re-
adding all of the objects in question.  And how this worked on some of
them but not all.

And your reasons for not bothering with multiple inheritance in SA are
great, thanks for describing them.



On Jan 2, 11:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 You can't set the discriminator column in 0.6 with the ORM.   The mapper will 
 ignore it.   You need to forego the ORM or use 0.7.




[sqlalchemy] Money data type for Postgres

2010-12-18 Thread Eric Ongerth
Postgres deprecated the Money data type around 8.2 but then brought it
back with better support soon after (I think 8.3 or 8.4 and after).

I found the following message on this group, where Mike welcomes
anyone to just roll their own type stuff for it or possibly submit a
patch.

http://groups.google.com/group/sqlalchemy/msg/77efa08097492b1a

This should be easy enough to do; I'm just wondering if anyone has
already done so and is willing to share, just to avoid reinventing a
wheel.

Thanks!

-- 
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Money data type for Postgres

2010-12-18 Thread Eric Ongerth
Hmm, I read through all of the relevant files, postgresql.py and the
rest, and of course this section of the SA docs:
http://www.sqlalchemy.org/docs/core/types.html#custom-types

But I'm confused about a couple of things.  One: what I'm reading
seems to be suggesting that I just ignore the existence of a postgres
Money type and just TypeDecorate a Numeric to get what I want.  I was
hoping to do the necessary plumbing to get SA to recognize columns in
reflected tables that are defined as Money type and work with them
appropriately.  Am I missing something?

Two, and closely related: if I just make a TypeDecorator, or even my
own UserDefinedType, I'm not yet seeing how SA will know to make use
of it when reflecting.

I guess what I'm really asking is this: If SA doesn't 'know about'
postgresql's Money type, it doesn't know about it!  I understand how
to fake it by making a type decorator so I can store what I need to in
a Numeric column but have it represented in my python objects via any
interface I please.  What I don't understand is how to make SA
actually USE the native postgresql Money type.

It's probably right in front of my face but maybe I'm misinterpreting
some of the code so I keep falling back into the same mental orbit
about it.

Thanks in advance if you can help clear my view.
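For the reflection half of the question, one approach is to define a `UserDefinedType` and register it in the PostgreSQL dialect's `ischema_names` mapping, which reflection consults to turn catalog type names into SA types. A sketch, not tested against a live Postgres server; `MoneyType` is a made-up name and the bind/result processors are left as pass-throughs:

```python
from sqlalchemy.dialects import postgresql
from sqlalchemy.types import UserDefinedType

# Hypothetical custom type for PostgreSQL's native MONEY column type.
class MoneyType(UserDefinedType):
    def get_col_spec(self, **kw):
        # Emitted in CREATE TABLE statements.
        return "MONEY"

    def bind_processor(self, dialect):
        # Pass values through unchanged on the way in.
        return None

    def result_processor(self, dialect, coltype):
        # Pass values through unchanged on the way out.
        return None

# Teach reflection about the type: when the catalog reports a column
# of type "money", map it to MoneyType instead of warning.
postgresql.base.ischema_names["money"] = MoneyType

spec = MoneyType().get_col_spec()
print(spec)  # -> MONEY
```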





[sqlalchemy] Re: Money data type for Postgres

2010-12-18 Thread Eric Ongerth
Meanwhile... meh.  I'll be more than happy to go with Numeric(16,2) as
suggested by #postgresql.  Should work fine and the only disadvantage
is the most minor of semantic objections.
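The `Numeric(16,2)`-backed route can be wrapped in a `TypeDecorator` so rounding to cents happens on the way in. A sketch with hypothetical names (`Money`, `Invoice`), demonstrated on SQLite:

```python
from decimal import Decimal
from sqlalchemy import Column, Integer, Numeric, create_engine
from sqlalchemy.orm import declarative_base, Session
from sqlalchemy.types import TypeDecorator

# Hypothetical wrapper: store currency as NUMERIC(16, 2), quantizing
# incoming values so everything is normalized to two decimal places.
class Money(TypeDecorator):
    impl = Numeric(16, 2)
    cache_ok = True

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return Decimal(value).quantize(Decimal("0.01"))

Base = declarative_base()

class Invoice(Base):
    __tablename__ = "invoice"
    id = Column(Integer, primary_key=True)
    total = Column(Money)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Invoice(id=1, total="19.999"))
    session.commit()
    total = session.get(Invoice, 1).total

print(total)  # -> 20.00
```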






[sqlalchemy] polymorphism where a child can have multiple values of polymorphic_id?

2010-11-21 Thread Eric Ongerth
I must be in a search blind spot, I'm having trouble finding
references and examples for this pattern, though I feel certain I've
seen examples around the sqlalchemy literature a couple of times
before.

I have a table of Companies, and then further tables of Manufacturer
and Vendor info which apply to some companies.  So a company can be
stored just as a company, or as a company that happens to be a
manufacturer, and/or happens to be a vendor.  I don't want the
manufacturer and vendor designations to be exclusive of each other; it
should be possible for a company to be both.

I thought of just having a boolean field on the Company table for each
'child' type.  So (company.manufacturer == True) would tell me it's
safe to write to or read from attributes whose storage is in the
manufacturer info table.  Likewise, (company.vendor == True) would
indicate that it's safe to write to or read from attributes whose
storage is in the vendor info table.  And of course some companies
will be filed as neither, so I won't attempt to access mfr or vendor
info about them.

Regular SA joined-table inheritance doesn't work this way since it
requires a discriminator column taking a single value at a time.
Still I would love to use joined-table inheritance if I could.  Is
there a way?

Should I just try to do this with single-table inheritance or even
just go without SA's inheritance features and python my way to working
attributes that do what I wish based on a single table?

I just don't want to miss out if there's a way I could do this
elegantly (and fully normalized) with joined tables.  I'll admit that
single tables with some columns being defunct for some rows just bug
the normalization freak in me.

Thanks in advance.
Eric




[sqlalchemy] Re: polymorphism where a child can have multiple values of polymorphic_id?

2010-11-21 Thread Eric Ongerth
 found thus
far to doing what I'm trying to do, but they went down the road of
delving into Python class mechanics to force all of this to happen via
Python classes and instances, rather than just keeping it all
relational.  I'm trying to build this with a relational perspective
through and through.  So far the only drawback is the complexity of
assembling views of products and items, and of comparing disparate
kinds of them; but I actually look forward to that as I believe a
number of unforeseen benefits are going to accrue from simply modeling
powerfully yet flexibly from the beginning.

Cheers,
Eric

On Nov 21, 3:38 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 21, 2010, at 4:41 PM, Eric Ongerth wrote:

  I must be in a search blind spot, I'm having trouble finding
  references and examples for this pattern, though I feel certain I've
  seen examples around the sqlalchemy literature a couple of times
  before.

  I have a table of Companies, and then further tables of Manufacturer
  and Vendor info which apply to some companies.  So a company can be
  stored just as a company, or as a company that happens to be a
  manufacturer, and/or happens to be a vendor.  I don't want the
  manufacturer and vendor designations to be exclusive of each other; it
  should be possible for a company to be both.

 The thing I pointed someone to the other day regarding multiple inheritance 
 in general is at
 http://www.agiledata.org/essays/mappingObjects.html#MappingMultipleIn
 My general attitude about that example is "yeah great, go nuts!"   i.e. if
 someone wants to get into it, great, good luck, but I haven't personally ever 
 had any situation that called for something that complex and if I did, I'd 
 probably try to get around it somehow.   Maybe someday I'll actually need the 
 feature though, then I'll figure out how to do it generically, then it will 
 be the next big SQLA feature everyone needs, who knows.



  I thought of just having a boolean field on the Company table for each
  'child' type.  So (company.manufacturer == True) would tell me it's
  safe to write to or read from attributes whose storage is in the
  manufacturer info table.  Likewise, (company.vendor == True) would
  indicate that it's safe to write to or read from attributes whose
  storage is in the vendor info table.  And of course some companies
  will be filed as neither, so I won't attempt to access mfr or vendor
  info about them.

 so I think if you are trying to get multiple tables to compose into a single
 type, it's likely very possible using a combination of relationship() and
 association_proxy().  relationship() to maintain the linkages to other 
 tables, association_proxy or something similar (perhaps like the hybrid 
 example, after all we are just using Python attribute tricks) to create a 
 one-level-of-names type of facade.



  Regular SA joined-table inheritance doesn't work this way since it
  requires a discriminator column taking a single value at a time.
  Still I would love to use joined-table inheritance if I could.  Is
  there a way?

 joined inheritance might be involved, but in a practical sense you have to 
 think about what your SELECT queries are going to look like if locating a 
 record requires spanning across three or four tables.
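A sketch of the composition alternative suggested above: no inheritance, just optional one-to-one rows, so a company "is a manufacturer" exactly when its info row exists. The `ManufacturerInfo`/`VendorInfo` columns are invented for illustration:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class Company(Base):
    __tablename__ = "company"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # Scalar (uselist=False) relationships: present or None, and a
    # company may be both at once -- the roles aren't exclusive.
    manufacturer = relationship("ManufacturerInfo", uselist=False,
                                backref="company")
    vendor = relationship("VendorInfo", uselist=False, backref="company")

class ManufacturerInfo(Base):
    __tablename__ = "manufacturer_info"
    company_id = Column(ForeignKey("company.id"), primary_key=True)
    factory_count = Column(Integer)

class VendorInfo(Base):
    __tablename__ = "vendor_info"
    company_id = Column(ForeignKey("company.id"), primary_key=True)
    terms = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    acme = Company(name="Acme",
                   manufacturer=ManufacturerInfo(factory_count=3),
                   vendor=VendorInfo(terms="net 30"))
    session.add(acme)
    session.commit()
    both = acme.manufacturer is not None and acme.vendor is not None

print(both)  # -> True
```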




[sqlalchemy] Re: Two relationships with a same backref name. Is that (even) possible or I got everything wrong?

2010-11-12 Thread Eric Ongerth
You're welcome, and I hope that works for you.  I went through the
same process a few years ago when picking up SqlAlchemy... the backref
facility is so cool that it's easy to forget that it's optional and
that most relationship backrefs /could/ be handled as just another
relationship on the opposite mapper.

On Nov 12, 7:31 am, Hector Blanco white.li...@gmail.com wrote:
 2010/11/12 Eric Ongerth ericonge...@gmail.com:

  Hi Hector,

  If I'm not mistaken, everywhere you wrote
  (MyObject.id==MyObject.containerId),
  you meant to write: (Container.id==MyObject.containerId).

 Oops... yeah... great eye.

  Instead of the backref technique, why not just create the
  MyObject-to-Container relationship a single time in your MyObject
  class?  That should be able to coexist with your first code example
  (with no backrefs).

 Oh, right!! That's a great approach... I was so blinded with the
 backref thing that I didn't think it could be the other way around!

 I'll do that!

 Thank you Eric!




[sqlalchemy] Found an old This will raise an error in 0.6 warning still included in 0.6.5

2010-11-12 Thread Eric Ongerth
Just a heads-up:

I was experimenting with various cascade options on mappers and came
across the following warning:
SAWarning: The 'delete-orphan' cascade option requires 'delete'.
This will raise an error in 0.6.

But I'm running 0.6.5.  Maybe this warning message just never got
updated since the 0.6.x releases.  No complaint here, just mentioning
it in case it helps bring things up to date.




[sqlalchemy] Re: Two relationships with a same backref name. Is that (even) possible or I got everything wrong?

2010-11-11 Thread Eric Ongerth
Hi Hector,

If I'm not mistaken, everywhere you wrote
(MyObject.id==MyObject.containerId),
you meant to write: (Container.id==MyObject.containerId).

Instead of the backref technique, why not just create the
MyObject-to-Container relationship a single time in your MyObject
class?  That should be able to coexist with your first code example
(with no backrefs).
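A sketch of that suggestion in declarative form: the single `container` relationship lives on `MyObject`, so the two filtered collections on `Container` never need backrefs at all. The filtered collections are marked `viewonly` here for brevity, dropping the cascade options from the original:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class Container(Base):
    __tablename__ = "containers"
    id = Column(Integer, primary_key=True)

    # Filtered, read-only collections split by discriminator value.
    relation1 = relationship(
        "MyObject",
        primaryjoin="and_(Container.id == MyObject.containerId, "
                    "MyObject._type == 'VR')",
        viewonly=True,
    )
    relation2 = relationship(
        "MyObject",
        primaryjoin="and_(Container.id == MyObject.containerId, "
                    "MyObject._type == 'CC')",
        viewonly=True,
    )

class MyObject(Base):
    __tablename__ = "my_objects"
    id = Column(Integer, primary_key=True)
    containerId = Column(ForeignKey("containers.id"))
    _type = Column(String(2))

    # One plain many-to-one replaces the two colliding 'container' backrefs.
    container = relationship(Container)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    c = Container(id=1)
    o = MyObject(id=1, _type="VR", container=c)
    session.add_all([c, o])
    session.commit()
    names = (o.container.id, [x._type for x in c.relation1])

print(names)
```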


On Nov 11, 8:16 am, Hector Blanco white.li...@gmail.com wrote:
 I have a class that has two relationships to the same type of objects.
 One of the relationships will store objects of type VR and the other
 objects with a type CC. One object can only be in one of the lists
 (relationships) at the same time:

 This is the container class and its two relationships:

 class Container(rdb.Model):
         rdb.metadata(metadata)
         rdb.tablename("containers")

         id = Column("id", Integer, primary_key=True)

         relation1 = relationship(MyObject,
                 uselist=True,
                 primaryjoin=lambda: and_((MyObject.id == MyObject.containerId),
                                 (MyObject._type == "VR")),
                 cascade="all, delete, delete-orphan"
         )

         relation2 = relationship(MyObject,
                 uselist=True,
                 primaryjoin=lambda: and_((MyObject.id == MyObject.containerId),
                                 (MyObject._type == "CC")),
                 cascade="all, delete, delete-orphan"
         )

 I don't think there's any need to mention it, but MyObject.containerId is
 a ForeignKey pointing to Container.id.

 I'd like to know if there's a way to create a backref so I will be
 able to access the container through the MyObject class. The idea
 would be having something like:

         relation1 = relationship(MyObject,
                 uselist=True,
                 primaryjoin=lambda: and_((MyObject.id == MyObject.containerId),
                                 (MyObject._type == "VR")),
                 cascade="all, delete, delete-orphan",
                 backref=backref('container', order_by=id)
         )

         relation2 = relationship(MyObject,
                 uselist=True,
                 primaryjoin=lambda: and_((MyObject.id == MyObject.containerId),
                                 (MyObject._type == "CC")),
                 cascade="all, delete, delete-orphan",
                 backref=backref('container', order_by=id)
         )

 But of course, that fails because it's trying to add two .container
 fields to the MyObject class.

 I have also seen that you can define joins in the backref, but I
 haven't been able to find examples about how to define it. And I am
 still not very sure that that would allow me to have to backrefs with
 the same name/identifier.

 I just need to know if it's even possible having two backrefs with the
 same name.  Actually, a "you really got the whole concept wrong" may
 help too (if that's the case).  If it's doable, does any of you know
 where I can find examples of advanced backref usage?  With primary
 joins, secondary joins and all that juicy stuff...

 Thank you in advance!!




[sqlalchemy] Re: Odd many-to-one problem

2010-11-11 Thread Eric Ongerth
Mike, what you set forth is more of what I was actually trying to
bring into the discussion (having used that same technique myself),
rather than the link I gave above.  I need to get more sleep and check
my doc references more carefully!


On Nov 11, 1:39 pm, Mike Conley mconl...@gmail.com wrote:
 If it's simply a matter of sequence of how code is organized:

 1. Define Merchants table and mappers
 2. Define Deals table and mappers
 3. Add relations to Merchant
 All of this can be in separate files if needed; just import the right
 definitions where needed.

 metadata = MetaData()

 merchants = Table('merchants', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String)
     )

 class Merchant(object):
     pass

 mapper(Merchant, merchants)

 deals = Table('deals', metadata,
     Column('id', Integer, primary_key=True),
     Column('merch_id', Integer, ForeignKey('merchants.id')),
     Column('deal_status', String(10))
     )

 class Deal(object):
     pass

 mapper(Deal, deals)

 Merchant.all_deals = relation(Deal, backref='merchant')
 Merchant.active_deals = relation(Deal, primaryjoin=
                 and_(merchants.c.id==deals.c.merch_id,
                 deals.c.deal_status=='active'))

 This is one advantage of using declarative: the primaryjoin can be
 defined as a string that is not evaluated until later, so its
 compilation can be deferred until after everything is defined.
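 Mike's point about string primaryjoins can be sketched in declarative form.
 The Merchant/Deal columns below are assumptions based on his snippet, and
 the imports are the later 1.4-style API rather than what this thread used:

```python
# Sketch: a string primaryjoin lets Merchant reference Deal before
# the Deal class exists, resolving the definition-order problem.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Merchant(Base):
    __tablename__ = 'merchants'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    all_deals = relationship('Deal', backref='merchant')
    # the string join condition is evaluated lazily, after Deal is defined
    active_deals = relationship(
        'Deal', viewonly=True,
        primaryjoin="and_(Merchant.id == Deal.merch_id, "
                    "Deal.deal_status == 'active')")

class Deal(Base):
    __tablename__ = 'deals'
    id = Column(Integer, primary_key=True)
    merch_id = Column(Integer, ForeignKey('merchants.id'))
    deal_status = Column(String(10))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
m = Merchant(name='Acme',
             all_deals=[Deal(deal_status='active'),
                        Deal(deal_status='expired')])
session.add(m)
session.commit()
```

 viewonly=True marks active_deals as read-only, since it overlaps the
 persisting all_deals relationship.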

 --
 Mike Conley

 On Thu, Nov 11, 2010 at 1:33 PM, Jonathan Gardner 

 jgard...@jonathangardner.net wrote:
  This is what I need to do, except the Merchant object is defined
  before the Deal object. In the example in the documentation, I have
  mapped User before I have mapped Address.




[sqlalchemy] Re: Odd many-to-one problem

2010-11-10 Thread Eric Ongerth
Hi Jonathan,

Rather than create a specific backref for each subtype of deal, why
not just continue with your basic 'deals' backref, then attach regular
python properties to your Merchant class which return just the desired
sub-deals.

Something like:

class Merchant(object):
    ...
    @property
    def available_deals(self):
        return [deal for deal in self.deals if deal.available]

    @property
    def expired_deals(self):
        return [deal for deal in self.deals if deal.expired]

... and so on.

You could also reverse your order of definition, define Deal first
with no reference to Merchant, then define Merchant second, with
mapper properties for each of your type of deal (probably mapped to
select statements).  But I don't know if it would work to have each of
those different mapper properties all use 'merchant' (with, of course,
the uselist=False option to make it 1:1) as the backref identifier.


On Nov 10, 11:19 am, Jonathan Gardner jgard...@jonathangardner.net
wrote:
 I have two tables, merchants and deals. The merchants table is
 represented by Merchant and deals table by Deal.

 Each merchant can have 0, 1, or many deals. Some of those deals will
 be available, while others will be expired or coming soon or deleted.
 Each deal belongs to exactly one merchant.

 I'd like to set up Merchant to have attributes "deals",
 "available_deals", "expired_deals", "upcoming_deals", and
 "deleted_deals".  These would return, obviously, deals from those
 groups.

 The twist is that I've spread out my tables and ORM classes across
 several files. I've tried to keep it so that I don't have circular
 dependencies. That means I've defined Merchant first, and then Deal
 later, in separate files.

 It looks like this:

 in model/merchant.py:
   merchants = Table(...)
   class Merchant(object): ...
   mapper(Merchant, merchants)

 in model/deal.py:
   deals = Table(...)
   class Deal(object): ...
   mapper(Deal, deals, properties=dict(
       merchant=relationship(Merchant, backref='deals'),
   ))

 What can I sprinkle in model/deal.py's mapper call to add backrefs to
 'available_deals', 'deleted_deals', etc...?

 Or am I going about this all wrong?

 Thanks in advance. BTW, SQLAlchemy is, by far, the most superior ORM
 in the history of the world, bar none, IMHO.




[sqlalchemy] Re: Odd many-to-one problem

2010-11-10 Thread Eric Ongerth
Good point, Sergey.

Here is the relevant documentation regarding mapping attributes to
selects:
http://www.sqlalchemy.org/docs/orm/mapper_config.html?highlight=arbitrary%20selects#sql-expressions-as-mapped-attributes

On Nov 10, 4:46 pm, Sergey V. sergey.volob...@gmail.com wrote:
  The twist is that I've spread out my tables and ORM classes across
  several files. I've tried to keep it so that I don't have circular
  dependencies. That means I've defined Merchant first, and then Deal
  later, in separate files

 To avoid problems with imports and dependencies you can pass strings
 to the relationship function instead of the actual classes:

 mapper(Deal, deals, properties=dict(
       merchant=relationship('Merchant', backref='deals'),
   ))

 This greatly simplifies everything if you split your classes into
 separate files.

 Regarding 'available_deals', 'deleted_deals' etc. - the approach with
 properties is sub-optimal. Consider a merchant having thousands of
 deals, only a few of which are available - the method would have to
 fetch all those deals only to discard most of them. Also, that won't
 work with eager loading. The optimal way would be to make SA
 generate a query like SELECT ... FROM Deals WHERE ... AND deleted=1
 which would return only the records we're interested in. I'm sure it's
 possible but I'll leave it to you to find it in SA docs :) When you
 find it please post it here :)
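 One answer to Sergey's challenge is the dynamic loader: with
 lazy='dynamic' the relationship attribute is a Query, so the filtering
 happens in SQL rather than by discarding loaded rows in Python. A sketch
 under the thread's Merchant/Deal names, using a later API than the
 version discussed here:

```python
# Sketch: lazy='dynamic' returns a Query; subsets are filtered in SQL.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, configure_mappers, declarative_base,
                            relationship)

Base = declarative_base()

class Merchant(Base):
    __tablename__ = 'merchants'
    id = Column(Integer, primary_key=True)
    deals = relationship('Deal', lazy='dynamic', backref='merchant')

class Deal(Base):
    __tablename__ = 'deals'
    id = Column(Integer, primary_key=True)
    merch_id = Column(Integer, ForeignKey('merchants.id'))
    deal_status = Column(String(10))

configure_mappers()  # resolve string references and create the backref

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
m = Merchant()
session.add_all([m,
                 Deal(merchant=m, deal_status='available'),
                 Deal(merchant=m, deal_status='deleted')])
session.commit()
# only the matching rows come back from the database
available = m.deals.filter(Deal.deal_status == 'available').all()
```

 The emitted SQL is effectively SELECT ... FROM deals WHERE merch_id = ?
 AND deal_status = 'available', which is the query Sergey describes.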




[sqlalchemy] Re: Table Inheritance

2010-11-06 Thread Eric Ongerth
Well, not by exactly using SQLAlchemy's provided implementation of
joined table inheritance, because it uses a discriminator column that
only holds a single value.  Of course it is possible to create more
complex inheritance structures on your own, just without some of the
polymorphic-loading work so generously pre-configured in SA's joined-
table system.

One of my own projects makes use of polymorphic multiple-inheritance
(multiple 'roles' for certain objects).  What i do is instead of a
discriminator column in the parent table, I have a secondary
association table that establishes a many-to-many relationship between
objects and [roles | types | whatever classifications I'm working
with].  If I want fancy loading behavior I have to write some joins,
but in most cases I'm willing to have my own code lazy-load attributes
as needed.
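A minimal sketch of the association-table approach described above. All
names here are hypothetical (not from Eric's project), and the API is the
later declarative style:

```python
# Sketch: a many-to-many roles table replaces the single-valued
# discriminator column, so one object can carry several roles at once.
from sqlalchemy import (Column, ForeignKey, Integer, String, Table,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

object_roles = Table(
    'object_roles', Base.metadata,
    Column('object_id', ForeignKey('objects.id'), primary_key=True),
    Column('role_id', ForeignKey('roles.id'), primary_key=True))

class Role(Base):
    __tablename__ = 'roles'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

class Obj(Base):
    __tablename__ = 'objects'
    id = Column(Integer, primary_key=True)
    roles = relationship(Role, secondary=object_roles, backref='objects')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
mgr, eng = Role(name='manager'), Role(name='engineer')
person = Obj(roles=[mgr, eng])   # both roles at once, unlike a discriminator
session.add(person)
session.commit()
```

As the post says, you give up SA's pre-built polymorphic loading and
write your own joins when you need them.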


On Nov 5, 9:15 am, Mark Erbaugh m...@microenh.com wrote:
 Please refer to the Joined Table Inheritance Section under declarative.ext 
 (I'm using SA 0.5.8).

 Is it possible to create a Person who is both an Engineer and a Manager using 
 joined table inheritance? IOW, both Manager and Engineer would link to the 
 same row in Person.

 Thanks,
 Mark




[sqlalchemy] Re: Best way to get data from database

2010-10-29 Thread Eric Ongerth
I understand your question if you are getting different data from the
server in the two database accesses.  But if you are loading the exact
same data twice for a page load, you should try to eliminate that
redundancy instead of finding a plan to perform the redundancy in the
best way.

If it's the identical data twice, then why not render it into the page
when you are rendering the HTML... you can render hidden fields, CDATA
sections, regions of javascript containing any data structure you
need, etc.

It's a confusing question because if it's two different DB requests
then you wouldn't be inquiring about caching for this purpose, but if
it's two identical DB requests I suspect you already would have
realized that the data could easily be encoded in the original page
render.



On Oct 28, 4:22 pm, Alvaro Reinoso alvrein...@gmail.com wrote:
 Hey guys,

 I have a question. I need to get the data from the server twice every
 time I load a page: once to render the HTML and once to get the data
 for the client side (javascript).

 So I don't know exactly which way is best and fastest. I was trying
 to implement a session object and store the data once using the
 joinedload loading technique, then kill the session object when the
 data is on the client side.

 The other option is to call the database twice.

 I don't know which one is faster and better because I don't know if
 the database or server keeps the first call in memory. If so, it
 wouldn't really be like calling the database twice, right?

 And if the second choice is better, which loading technique
 (joinedload, eagerload or subqueryload) should I use?

 Every call could be a bunch of data.

 Any help could be really useful.

 Thanks in advance!




[sqlalchemy] Re: parent/child relationship: what am I doing wrong?

2009-02-01 Thread Eric Ongerth

Kevin, the default behavior is for relations to be represented by
lists.  If what you want is a tree structure where a directory can
only have a single parent, you would use backref=backref("parentdir",
uselist=False).  Or at least that's how you'd do it in plain SA; I
haven't used the declarative base yet so I don't know if there's a
different syntax.
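In later SQLAlchemy versions, the usual fix for this self-referential
case is remote_side, which marks the "one" side of the self-join so the
backref comes out scalar rather than a list. A sketch in a declarative
API newer than the 0.5.2 discussed in this thread:

```python
# Sketch: remote_side on the backref makes 'parentdir' a scalar parent.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import (backref, configure_mappers, declarative_base,
                            relationship)

Base = declarative_base()

class Directory(Base):
    __tablename__ = 'directories'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    parent_id = Column(Integer, ForeignKey('directories.id'))
    # remote_side tells SA which side of the self-join is the "one";
    # without it, SA can't orient the relationship and may guess wrong
    subdirs = relationship(
        'Directory', backref=backref('parentdir', remote_side=[id]))

configure_mappers()  # create the 'parentdir' backref attribute

root = Directory(name='root')
child = Directory(name='child')
child.parentdir = root  # in-memory backref wiring works before any flush
```

With this, parentdir is a Directory or None, and subdirs is the list.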



On Feb 1, 6:35 am, Kevin Dangoor dang...@gmail.com wrote:
 I've done many many-to-one relationships with SQLAlchemy, but there
 must be something obvious I'm doing wrong here:

 class Directory(Base):
     __tablename__ = "directories"

     id = Column(Integer, primary_key=True)
     name = Column(String, unique=True)
     subdirs = relation('Directory', backref="parentdir")
     parent_id = Column(Integer, ForeignKey('directories.id'))

     def __str__(self):
         return "Dir: %s" % (self.name)

 Base is standard declarative Base.

 The odd behavior that I'm seeing is that on a newly created Directory
 instance, both subdirs and parentdir are lists when I would expect
 parentdir to just be null or a Directory.

 This is using SQLAlchemy 0.5.2.

 I do hope there's something obvious that I missed...

 Thanks,
 Kevin

 --
 Kevin Dangoor

 email: k...@blazingthings.com
 blog: http://www.BlueSkyOnMars.com



[sqlalchemy] Re: returning values as a list of values rather than as list of tuples

2009-01-15 Thread Eric Ongerth

I've always thought this format for the list comprehension was
particularly clean:

result = [x for (x, ) in conn.execute(...).fetchall()]
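For readers who want to see the shapes involved: .fetchall() on a
single-column select yields a list of 1-tuples, and the unpacking
comprehension flattens it. A tiny stand-in example (the literal list is
an assumption standing in for the database rows):

```python
# The shape returned by .fetchall() for a one-column select is a
# list of 1-tuples; unpacking in the comprehension flattens it.
rows = [(1,), (2,), (3,)]          # stand-in for conn.execute(...).fetchall()
values = [x for (x,) in rows]      # -> [1, 2, 3]
```

The (x,) pattern also fails loudly if a row unexpectedly has more than
one column, which [row[0] for row in rows] would silently ignore.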



On Jan 15, 8:27 am, Faheem Mitha fah...@email.unc.edu wrote:
 On Thu, 15 Jan 2009, Matthew Zwier wrote:
  Hi Faheem,

  On Thu, Jan 15, 2009 at 11:05 AM, Faheem Mitha fah...@email.unc.edu wrote:

  Hi,

  The following code returns a list of tuples to python from the db,
  corresponding to the values of the 'snpval_id' column in the table 'cell'.
  I was wondering if there was an easy way to have it return a list of
  values (in this case, integers) instead.
  result = conn.execute("select snpval_id from cell where patient_chipid IN 
  ('Duke1_plateC_F11.CEL')").fetchall()

  Easiest thing is probably just to use a list comprehension:
  result_ints = [row[0] for row in result]

 Hi Matthew,

 Yes, I'm doing that already. Just wondered if there was a way to return it
 in the right form directly.
                                                            Regards, Faheem.



[sqlalchemy] Re: SQLAlchemy Sphinx Documentation Preview

2008-12-06 Thread Eric Ongerth

You're right about CSS selectors; the simple fix for this without re-
generating any source, is just to instruct the browser to not double
up on the indentation when it sees a ul nested in a blockquote.  Hey
wait, the problem is already fixed.  Looks great today.  The lists
too; thanks for the changes.


On Dec 6, 6:44 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Dec 5, 2008, at 11:00 PM, Eric Ongerth wrote:



   Oh yeah, and in Main Documentation (at least) you have some ul
   class="simple" lists nested inside of blockquote elements, which is
  resulting in some of your lists being much farther indented than
  others, without a good visual reason why.  Seems like the difference
  could be eliminated.

 sphinx (actually docutils) creates that structure; unless we've done  
 something wrong in the rest markup, we can't change it without parsing  
 it and reconstructing it (which seems like overkill to me, since CSS  
 selectors can usually find things).    not sure what is prompting it  
 to create a blockquote though.



[sqlalchemy] Re: a-directional i.e. bi-directional m:m relations

2008-12-05 Thread Eric Ongerth

Thanks for the ideas.  I thought of all of the above.  The one I've
been using is the accessor which unions together the necessary
things.  My question came up when I wondered if there was some even
more fundamental way to handle these forwards-backwards cases.  I'm
glad to know I'm already doing all I can.


On Dec 2, 3:32 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Dec 2, 2008, at 5:54 PM, Eric Ongerth wrote:



  Now when I want to find out whether a Foo has a relation to another
  Foo, I have to check whether there exists any row in foo_relations
   that has the given Foo as either "this" or "that".  Also, what if I
  need backrefs on the foo_relations mapper?  The backref from 'this'
  and the backref from 'that' would both point to something called a
   foo, but they would have to be given separate labels in order to not
  be conflicting property names -- when really, I would not want to know
  if a foo was the 'that' or the 'this' of some foo relation.

  So ideally in a case like this, I could set an option that says the
  m:m relation is bidirectional, and that the backrefs for both foreign
  keys in the m:m table should really point to the same place (or at
  least be unioned together).

  I have a feeling that would violate some part of the RDBMS standards,
  and I'm perfectly willing to go without or work around.  This is more
  of a philosophical point for learning's sake -- what do other people
  do in such cases?

 you can store two rows in the association table, one for each  
 direction.   or provide an accessor which just unions together the  
 forwards and backwards references between Foo objects.  or make a  
 readonly relation() that does the appropriate OR logic.   I might  
 even try combining both of those techniques somehow.
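 Michael's union-accessor suggestion can be sketched on the thread's
 foos/foo_relations schema. This is a sketch in a later declarative API,
 not a drop-in for the SA version discussed here:

```python
# Sketch: persist one direction, read both via a union accessor.
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import (Session, configure_mappers, declarative_base,
                            relationship)

Base = declarative_base()

foo_relations = Table(
    'foo_relations', Base.metadata,
    Column('this_foo_id', ForeignKey('foos.id'), primary_key=True),
    Column('that_foo_id', ForeignKey('foos.id'), primary_key=True))

class Foo(Base):
    __tablename__ = 'foos'
    id = Column(Integer, primary_key=True)
    # the persisted "forward" direction; the backref reads the reverse rows
    forward = relationship(
        'Foo', secondary=foo_relations,
        primaryjoin=id == foo_relations.c.this_foo_id,
        secondaryjoin=id == foo_relations.c.that_foo_id,
        backref='backward')

    @property
    def related(self):
        # union of both directions: callers never see "this" vs "that"
        return self.forward + self.backward

configure_mappers()

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
a, b = Foo(), Foo()
a.forward.append(b)      # one row stored, visible from both ends
session.add_all([a, b])
session.commit()
```

 Storing two mirror-image rows per link is the other option Michael
 mentions; it trades extra writes for simpler reads.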



[sqlalchemy] Re: a-directional i.e. bi-directional m:m relations

2008-12-05 Thread Eric Ongerth

Oh, right.  I don't know what type of brain fog obscured that basic
relational fact, except that I may have been burning my synapses a bit
too hot lately resulting in a deplorable deficit of
neurotransmitters.  Thank you for helping me regain the sight of the
obvious.

On Dec 5, 1:16 am, [EMAIL PROTECTED] wrote:
 there is... u do not want to know if A points B or B points A, u want
 to know if A and B are related in whatever aspect. That is, A and B
 are members of some set X denoting that aspect. i.e. moving the
 belonginess out of A and B alltogether.
 but this isn't going to make your DB simpler... quite the opposite.

 On Friday 05 December 2008 10:40:16 Eric Ongerth wrote:

  Thanks for the ideas.  I thought of all of the above.  The one I've
  been using is the accessor which unions together the necessary
  things.  My question came up when I wondered if there was some even
  more fundamental way to handle these forwards-backwards cases.  I'm
  glad to know I'm already doing all I can.

  On Dec 2, 3:32 pm, Michael Bayer [EMAIL PROTECTED] wrote:
   On Dec 2, 2008, at 5:54 PM, Eric Ongerth wrote:
Now when I want to find out whether a Foo has a relation to
another Foo, I have to check whether there exists any row in
foo_relations that has the given Foo as either as this OR
that.  Also, what if I need backrefs on the foo_relations
mapper?  The backref from 'this' and the backref from 'that'
would both point to something called a foo, but they would have
to be given separate labels in order ot not be conflicting
property names -- when really, I would not want to know if a
foo was the 'that' or the 'this' of some foo relation.

So ideally in a case like this, I could set an option that says
the m:m relation is bidirectional, and that the backrefs for
both foreign keys in the m:m table should really point to the
same place (or at least be unioned together).

I have a feeling that would violate some part of the RDBMS
standards, and I'm perfectly willing to go without or work
around.  This is more of a philosophical point for learning's
sake -- what do other people do in such cases?

   you can store two rows in the association table, one for each
     direction.   or provide an accessor which just unions together
   the forwards and backwards references between Foo objects.  or
   make a readonly relation() that does the appropriate OR logic.
     I might even try combining both of those techniques somehow.



[sqlalchemy] Re: SQLAlchemy Sphinx Documentation Preview

2008-12-05 Thread Eric Ongerth

Mike,

Gaetan's right -- I just viewed the site a day after you (Mike) said
that the li issue had been fixed, but they're still too widely
spaced for sure.  There are several conflicting (well ok, inheriting/
overriding) settings of line-height across the various css files, and
it does not appear that padding is actually the problem.

Here, make the following change to site_docs.css and see what you
think.

current:

a { line-height: 1.2em; }

replace this with:

li li { line-height: 1.2em; }

This leaves in place the 1.3em that's inherited from above for the
main li elements, but their sub-items get a cozier 1.2em.  To me this
looks as it should.

Eric




On Dec 5, 9:23 am, Michael Bayer [EMAIL PROTECTED] wrote:
 well we have no control over any of that... I don't know that Sphinx
 search uses case insensitivity for full text searches.

 On Dec 5, 2008, at 11:53 AM, Jon Nelson wrote:



  The searching is a bit weird.
   If I search for "Adjacency" I get no results. If I search for "adjacency"
   (all lower case) I get results, the first of which has an upper-cased
   "Adjacency".

  Otherwise they look nice and I'm sure will look nicer-yet as time  
  goes on!

  --
  Jon



[sqlalchemy] Re: SQLAlchemy Sphinx Documentation Preview

2008-12-05 Thread Eric Ongerth

Forgot to add that I can't see much reason for links to be given a
line-height that would be any different from the text that surrounds
them -- at least not on the TOC page.  That's why I felt free to scrap
the 'a' rule and put the 'li li' in the same spot.  If the 'a' rule is
necessary for other pages then my suggestion could be an addition
instead of a replacement.


On Dec 5, 7:48 pm, Eric Ongerth [EMAIL PROTECTED] wrote:
 Mike,

 Gaetan's right -- I just viewed the site a day after you (Mike) said
 that the li issue had been fixed, but they're still too widely
 spaced for sure.  There are several conflicting (well ok, inheriting/
 overriding) settings of line-height across the various css files, and
 it does not appear that padding is actually the problem.

 Here, make the following change to site_docs.css and see what you
 think.

 current:

 a { line-height: 1.2em; }

 replace this with:

 li li { line-height: 1.2em; }

  This leaves in place the 1.3em that's inherited from above for the
  main li elements, but their sub-items get a cozier 1.2em.  To me this
 looks as it should.

 Eric

 On Dec 5, 9:23 am, Michael Bayer [EMAIL PROTECTED] wrote:

   well we have no control over any of that... I don't know that Sphinx
   search uses case insensitivity for full text searches.

  On Dec 5, 2008, at 11:53 AM, Jon Nelson wrote:

   The searching is a bit weird.
    If I search for "Adjacency" I get no results. If I search for "adjacency"
    (all lower case) I get results, the first of which has an upper-cased
    "Adjacency".

   Otherwise they look nice and I'm sure will look nicer-yet as time  
   goes on!

   --
   Jon



[sqlalchemy] Re: SQLAlchemy Sphinx Documentation Preview

2008-12-05 Thread Eric Ongerth

Oh yeah, and in Main Documentation (at least) you have some ul
class="simple" lists nested inside of blockquote elements, which is
resulting in some of your lists being much farther indented than
others, without a good visual reason why.  Seems like the difference
could be eliminated.

I sent new association_proxy docs via jek; hopefully you'll find them
worthwhile in total or in part.



[sqlalchemy] a-directional i.e. bi-directional m:m relations

2008-12-02 Thread Eric Ongerth

I could use some insightful suggestions here:
What do you think of this use case?

Let's say I have a class Foo mapped to a table 'foos',  and a
many:many relation between these objects, stored in the table
'foo_relations'.

columns on foos:
id (Integer)
data (Text)

columns on foo_relations:
this_foo_id (Integer)
that_foo_id (Integer)

But this implicitly creates a direction to the m:m relation,
something that I don't want.  I want this m:m relation to be bi-
directional (or you could call it adirectional).  The setup above
makes one foo into this and the other into that.  It's not, of
course, because of what I labeled them; it's because the columns have
to be labeled at all.  But I don't want a specific directionality
here.

Now when I want to find out whether a Foo has a relation to another
Foo, I have to check whether there exists any row in foo_relations
that has the given Foo as either "this" or "that".  Also, what if I
need backrefs on the foo_relations mapper?  The backref from 'this'
and the backref from 'that' would both point to something called a
foo, but they would have to be given separate labels in order to not
be conflicting property names -- when really, I would not want to know
if a foo was the 'that' or the 'this' of some foo relation.

So ideally in a case like this, I could set an option that says the
m:m relation is bidirectional, and that the backrefs for both foreign
keys in the m:m table should really point to the same place (or at
least be unioned together).

I have a feeling that would violate some part of the RDBMS standards,
and I'm perfectly willing to go without or work around.  This is more
of a philosophical point for learning's sake -- what do other people
do in such cases?

Thanks,
Eric




[sqlalchemy] Re: inferring object class/table directly

2008-12-02 Thread Eric Ongerth

def add_obj(session, obj):
    """Check if object primary key exists in db. If so, exit, else
    add.
    """
    pid = obj.id
    if session.query(obj.__class__).filter_by(id=pid).count():
        print "Patient object with id %s is already in db." % pid
        exit
    else:
        session.save(obj)
        session.commit()


Not too difficult.  You can also use type(obj) instead of
obj.__class__.

Furthermore, if you really need to determine the object's class's
mapped table,
obj_table = obj.__class__._sa_class_manager.mapper.mapped_table

Of course, being an underscored thing, _sa_class_manager is not
something you should count on from version to version of sqlalchemy,
so keep that in consideration and don't use it anywhere you don't plan
to maintain.

Eric
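
Using session.query(type(obj)) as suggested above, the string-parsing
hack disappears entirely. A sketch in later-API terms (the Patient model
here is a stand-in, and session.save() from the 0.4-era code has since
become session.add()):

```python
# Sketch: a generic "add unless primary key exists" using type(obj).
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Patient(Base):
    __tablename__ = 'patients'
    id = Column(Integer, primary_key=True)
    name = Column(String)

def add_obj(session, obj):
    """Add obj unless a row with its primary key already exists."""
    cls = type(obj)                      # no str(type(obj)) parsing needed
    if session.query(cls).filter_by(id=obj.id).count():
        return False                     # already present
    session.add(obj)                     # modern spelling of session.save()
    session.commit()
    return True

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
first = add_obj(session, Patient(id=1, name='a'))    # inserts
second = add_obj(session, Patient(id=1, name='b'))   # duplicate, skipped
```

Session.merge() or Session.get() are other ways to express "does this
primary key exist already" without naming the class.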


On Dec 2, 2:24 pm, Faheem Mitha [EMAIL PROTECTED] wrote:
 Hi,

 If I have an ORM object, it is sometimes convenient to be able to infer
 the class directly. Eg. consider this function.

 def add_patient_obj(session, patient_obj):
     """Check if object primary key exists in db. If so, exit, else
     add."""
     pid = patient_obj.id
     #print session.query(Patient).filter_by(id=pid).count()
     if session.query(Patient).filter_by(id=pid).count() > 0:
         print "Patient object with id %s is already in db." % pid
         exit
     else:
         session.save(patient_obj)
         session.commit()

 But I want a generic version. Since patient_obj knows what class is
 belongs to, it should be possible not to have to state the class directly,
 which here is Patient.

 I have done the following, which works, but is hideous, horrible, ugly,
 fragile hack. Can anyone suggest a better way of doing this?

 Please CC me on any reply. Thanks in advance.

                                                     Regards, Faheem.

 def add_obj(session, obj):
     """Check if object primary key exists in db. If so, exit, else
     add.
     """
     c = str(type(obj)).split("'")[1].split(".")[1]
     s = "q = session.query(" + c + ")"
     exec(s)
     pid = obj.id
     if q.filter_by(id=pid).count() > 0:
         print "Patient object with id %s is already in db." % pid
         exit
     else:
         session.save(obj)
         session.commit()



[sqlalchemy] Session.delete_all()

2008-11-30 Thread Eric Ongerth

IMO there should be a delete_all() convenience method on Session,
similar to what add_all() does, accepting an iterable as its
parameter.  Just to match up expected behavior with the existence of
add_all().

Side note: in the current API docs, add() is separated from add_all()
by an out-of-alphabetical-order listing of save_or_update().  I assume
in the forthcoming Sphinx semi-automation of docs, everything will be
strictly alphabetical...

Thanks,
Eric




[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

Unfortunately, I posted the wrong version of my Itemtype class above;
fortunately it wasn't important for what I was trying to show.  Please
replace class Itemtype with the following, and note the additional
test lines and commentary which I also forgot to include.


class Itemtype(object):

    def __repr__(self):
        return 'Itemtype: %s' % (self.name)

    @property
    def inherited_features(self):
        return ([feature for base_itemtype in self.inherits for
                 feature in base_itemtype.features])

    @property
    def features(self):
        result = self.own_features[:]
        if self.inherits:
            result.extend(self.inherited_features[:])
        return result

    @property
    def dependent_features(self):
        return [f for f in self.features if f.determinants]

    @property
    def independent_features(self):
        return [f for f in self.features if not f.determinants]


The code I posted the first time was not the right code for the
inherited_features and features properties on Itemtype.  I must have
cut and pasted from the wrong test file.

Please observe the following interactions which may help paint a
clearer picture of what I'm doing with this stuff.

Boot.features
[Footwear Gender, Footwear US Shoe Size, Product Manufacturer, Product
Model, Product Year, Product Weight]

Boot.dependent_features
[Product Weight]

Boot.independent_features
[Footwear Gender, Footwear US Shoe Size, Product Manufacturer, Product
Model, Product Year]

Footwear.inherited_features
[Product Manufacturer, Product Model, Product Year, Product Weight]

Footwear.own_features
[Footwear Gender, Footwear US Shoe Size]

These are all examples of some of the useful things I'm using this for
-- keeping track of which variables ('features'), in an inheriting/
cascading type system, are dependent and which ones are independent.
Later, when values are entered and stored for each feature, the
relationships between the values will be constrained by these same
feature dependency links, without having to repeat them at that time.

Sorry for the wrong code before.  Oh yeah, I forgot to demonstrate
this too, but it's trivial:

Model = session.query(Feature).filter_by(name='Model').one()
Model.dependents
[Product weight]
Gender.dependents
[Product weight]

... so all this works the other way around too, though requesting the
dependents of a given feature is not as interesting as requesting a
given feature's determinants.

Eric

On Nov 24, 11:51 pm, Eric Ongerth [EMAIL PROTECTED] wrote:

[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

Well!  I guess that's exactly why we post sometimes -- the process of
producing the test case bumps the unconscious forward a few steps.  I
quit and did some pleasure reading for a while then came back.  Here's
my own answer that does exactly what I needed it to do.

Add the following property on Itemtype:

@property
def full_heritage(self):


On Nov 25, 12:42 am, Eric Ongerth [EMAIL PROTECTED] wrote:

[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

Arghh.  Accidentally hitting 'Tab' in Google Groups takes you to the
'Send' button, and your next spacebar press prematurely sends your
post.

Ok, add the following property on Itemtype:

@property
def full_heritage(self):
    result = self.inherits[:]
    if result:
        for inherited in self.inherits:
            result.extend(inherited.full_heritage)
    return result

... this is just recursively building a list of all nodes involved in
inheritance from a given Itemtype node upward through the graph.
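
In case the recursion is easier to see without the ORM in the picture,
here is a minimal plain-Python sketch of the same walk (the Node class
and the names here are hypothetical stand-ins for the mapped Itemtype):

```python
# Toy stand-in for the mapped Itemtype: 'inherits' is the list of parents.
class Node:
    def __init__(self, name, inherits=()):
        self.name = name
        self.inherits = list(inherits)

    @property
    def full_heritage(self):
        # Collect the direct parents, then recurse upward through the graph.
        result = self.inherits[:]
        for parent in self.inherits:
            result.extend(parent.full_heritage)
        return result

product = Node('Product')
footwear = Node('Footwear', [product])
boot = Node('Boot', [footwear])

print([n.name for n in boot.full_heritage])  # ['Footwear', 'Product']
```

On a graph with shortcut paths the same ancestor can appear more than
once in the result; that is harmless for an in_() filter.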

Then change the method on Feature:

def determinants_in_scope_of(self, itemtype):
    targets = map(lambda x: x.name, itemtype.full_heritage)
    targets.append(itemtype.name)
    return (session.query(Feature)
        .join(FeatureDependency.determinant)
        .join(Feature.root_itemtype)
        .filter(and_(FeatureDependency.dependent_id==self.id,
                     Itemtype.name.in_(targets)))).all()

Now this machinery does exactly what I want.  I look forward to
showing you what it's really used for eventually.  Ciao!

Eric


On Nov 25, 1:57 am, Eric Ongerth [EMAIL PROTECTED] wrote:

[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

What I don't like about my own solution, after all (see my 3rd post on
this thread, after the accidental 2nd one prematurely submitted), is
that it recursively traverses the Itemtype graph to make a list of
itemtypes to constrain the scope of a request for the list of features
upon which a given feature is dependent.  That traversal makes it
pretty slow, and moreover it represents other activity outside of
the query itself.

Intuitively I feel that there ought to be some query syntax that
expresses the same request entirely in SQL, without this pre-traversal
step to build an in_() list of Itemtype nodes.  On the other hand, I
have seen Mike write that if sqlalchemy were expected to traverse
arbitrary m:m structures to arbitrary depth when creating joins to
satisfy a query, things could get out of hand too quickly.  It's the
standard problem: the space of possible solutions grows faster than
the means of describing them, leaving too much ambiguity for a
general solution to be worth attempting.

So maybe my current solution with this recursively built in_()
list (of itemtypes that are the root_itemtype of features, following
the itemtype inheritance graph 'upward' from a given starting feature)
is about as good as it gets.  I'm just reaching out intuitively to see
if anyone says something that sparks a leap forward in my
understanding of the shape of this type of problem.
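
For the record, a database that supports recursive common table
expressions can push exactly this upward traversal into the query
itself.  This is only a hedged sketch of the idea in plain SQL through
Python's sqlite3 (it assumes an SQLite build with WITH RECURSIVE
support, and it reuses the table names from my test case; sqlalchemy
is not involved here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE itemtypes (name TEXT PRIMARY KEY);
CREATE TABLE itemtype_inheritance (
    itemtype_name TEXT REFERENCES itemtypes(name),
    parent_name   TEXT REFERENCES itemtypes(name),
    PRIMARY KEY (itemtype_name, parent_name));
INSERT INTO itemtypes VALUES ('Product'), ('Footwear'), ('Boot'), ('Ski');
INSERT INTO itemtype_inheritance VALUES
    ('Footwear', 'Product'), ('Boot', 'Footwear'), ('Ski', 'Product');
""")

# Walk the inheritance graph upward from 'Boot' entirely in SQL.
# UNION (not UNION ALL) dedupes rows, so shortcut paths cannot loop.
rows = conn.execute("""
    WITH RECURSIVE heritage(name) AS (
        SELECT :start
        UNION
        SELECT i.parent_name
        FROM itemtype_inheritance i JOIN heritage h
          ON i.itemtype_name = h.name)
    SELECT name FROM heritage
""", {"start": "Boot"}).fetchall()

print(sorted(r[0] for r in rows))  # ['Boot', 'Footwear', 'Product']
```

From there the CTE could be joined against features.root_itemtype_name
directly, instead of materializing the in_() list in Python first.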

Will now respond to Svilen's post.

Eric


On Nov 25, 2:04 am, Eric Ongerth [EMAIL PROTECTED] wrote:
[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

Svil,

Thanks for your reply.  I have been following your posts with
interest over the past half year (or even longer, I think).  At first
I thought you were crazy.  But now I've found myself creating a model
of similar complexity, as necessary to express the domain I'm working
on.

The purpose of my model is to ingest all of the easily expressible
facts about the characteristics ('features') of categories and
(recursive) subcategories of items within specific sub-regions of a
domain of physical items, and to encode and store those facts in a
minimally redundant and maximally searchable/indexable form.  This
supports an "instant search" (search-while-you-type) widget that is
aware of the conceptual structure within the domain being searched,
unlike a mere full-text search over a flatfile, or ordinary
(non-category-structure-aware) indexings of a flatfile.  This
awareness should bring significant benefits: the search is reduced to
its minimal consistent combinations of targets, with a sense of just
bringing up exactly what the user was looking for.

In the weeks ahead I will revisit some of the threads you listed.
Thank you for the conclusions and suggestions you mentioned; they seem
reasonable.

Eric



On Nov 25, 2:42 am, [EMAIL PROTECTED] wrote:
 Eric
 i'm not sure i follow all your mapping setup as it's too detailed. but:

 i've been battling along similar data/feature inheritance+shading
 stuff along a branchy, recursive directed graph (not a pure tree as
 it has alternative short-cut paths in it), all over bitemporal
 objects and values (i.e. m2m relations all over, + grouping by
 time/objid), for almost half a year. see these posts of mine:
  - Many to many self referential relationship /15.03
  - multiple-aspects revisited /23.06
  - tree/graph persistency, concurency-safe? 13.07
  - unions in query?
  - and probably most others
  - as well as this thread in [EMAIL PROTECTED]:
     optimizing a query over tree-like structure, 2008-09-30
     my setup (law,company, department(s)/recursive, position,
 workplace, employment) is all explained there, less the
 bitemporalism.

 also see the thread Is there a simple way to let records have the
 same groups as it parents, or just look up data inheritance in the
 group.

 and so far i've reached these decisions, based on all the experiments
 (i don't come from sql background, and OO/practicalism doesnt give
 much insights on what sql can handle):
  - the root-most branches are separate queries, and a pseudo
 multi-query mimes a plain one over all those (it can also be a union
 of all ot them, or one single query - but single one has 20+tables in
 the From, and union fails here-there). it also came that different
 root-most branches have slightly diff. meaning hence it's good if
 they are loaded separately.
  - recursion is handled by expanding it on say 3 levels deep, hoping
 that noone will go further (i.e. a.x or a.a and (a.a.x or a.a.a and
 (a.a.a.x or ...))) etc.
  - everything is generated by a node-type-walking on class level, and
 the strategy of alternativ'ing on each level can be different (i.e.
 it can start as multiquery, follow as union on branch A and as single
 query on branch B). i can give this code if anyone dares read it..
  - the query returns all values wherever reachable/associated with
 some end-node
  - actual inheritance/shading etc is done after that in python. it can
 (probably) be done at sql-level by a very specific order-by, but it's
 nightmarish bitemporal query already so no need to go hell any
 further

 the times i got are of this kind: 10 nodes, with 10 values each, x100
 changes each, for about 20sec, on a relatively slow machine /
 postgres.

 maybe we can work together to get something out of it.

 On Tuesday 25 November 2008 09:51:37 Eric Ongerth wrote:

  Below, I have attached a working testcase.  It works, yes -- but my
  question is that I need to make an improved version of a particular
  method on one of my classes.  The following model will probably
  explain itself for the most part.  I'll let you read it first, then
  offer a few explanatory notes afterward just in case.  Finally, at
  the end, I will describe the difference between what the method in
  question does now, and what I would like it to do.

  The nature of the response I am seeking is: a description of what I
  need to do to build a better version of the method I'm speaking of,
  including any further insight on the practice of joining at
  multiple levels of a recursive / self-referential (but loop-free)
  graph.

  ---snip---

  from sqlalchemy import *
  from sqlalchemy.sql import *
  from sqlalchemy.orm import *

  engine = create_engine('sqlite://')

  metadata = MetaData(bind=engine)

  itemtypes = Table('itemtypes', metadata,
      Column('name', Text, primary_key=True))

  itemtype_inheritance = Table('itemtype_inheritance', metadata,
      Column('itemtype_name', Text, ForeignKey('itemtypes.name'),
  primary_key=True

[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2008-11-25 Thread Eric Ongerth

[sqlalchemy] Selecting from a self-referential mapper: recursive joins?

2008-11-24 Thread Eric Ongerth

Below, I have attached a working testcase.  It works, yes -- but my
question is that I need to make an improved version of a particular
method on one of my classes.  The following model will probably
explain itself for the most part.  I'll let you read it first, then
offer a few explanatory notes afterward just in case.  Finally, at the
end, I will describe the difference between what the method in
question does now, and what I would like it to do.

The nature of the response I am seeking is: a description of what I
need to do to build a better version of the method I'm speaking of,
including any further insight on the practice of joining at multiple
levels of a recursive / self-referential (but loop-free) graph.


---snip---


from sqlalchemy import *
from sqlalchemy.sql import *
from sqlalchemy.orm import *

engine = create_engine('sqlite://')

metadata = MetaData(bind=engine)

itemtypes = Table('itemtypes', metadata,
    Column('name', Text, primary_key=True))

itemtype_inheritance = Table('itemtype_inheritance', metadata,
    Column('itemtype_name', Text, ForeignKey('itemtypes.name'),
           primary_key=True),
    Column('parent_name', Text, ForeignKey('itemtypes.name'),
           primary_key=True))

features = Table('features', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Text),
    Column('root_itemtype_name', Text, ForeignKey('itemtypes.name')))

feature_dependencies = Table('feature_dependencies', metadata,
    Column('dependent_id', Integer, ForeignKey('features.id'),
           primary_key=True),
    Column('determinant_id', Integer, ForeignKey('features.id'),
           primary_key=True))

metadata.drop_all()
metadata.create_all()

itemtypes.insert().execute([
    {'name': 'Product'},
    {'name': 'Footwear'},
    {'name': 'Boot'},
    {'name': 'Ski'}
])

itemtype_inheritance.insert().execute([
    {'itemtype_name': 'Footwear', 'parent_name': 'Product'},
    {'itemtype_name': 'Boot', 'parent_name': 'Footwear'},
    {'itemtype_name': 'Ski', 'parent_name': 'Product'}
])

features.insert().execute([
    {'id': 1, 'name': 'Manufacturer', 'root_itemtype_name': 'Product'},
    {'id': 2, 'name': 'Model', 'root_itemtype_name': 'Product'},
    {'id': 3, 'name': 'Year', 'root_itemtype_name': 'Product'},
    {'id': 4, 'name': 'Gender', 'root_itemtype_name': 'Footwear'},
    {'id': 5, 'name': 'US Shoe Size', 'root_itemtype_name': 'Footwear'},
    {'id': 6, 'name': 'Length', 'root_itemtype_name': 'Ski'},
    {'id': 7, 'name': 'Weight', 'root_itemtype_name': 'Product'}
])

feature_dependencies.insert().execute([
    {'dependent_id': 7, 'determinant_id': 1},
    {'dependent_id': 7, 'determinant_id': 2},
    {'dependent_id': 7, 'determinant_id': 3},
    {'dependent_id': 7, 'determinant_id': 4},
    {'dependent_id': 7, 'determinant_id': 5},
    {'dependent_id': 7, 'determinant_id': 6}
])


class Itemtype(object):

    def __repr__(self):
        return 'Itemtype: %s' % (self.name)

    @property
    def inherited_features(self):
        return reduce(list.extend,
                      [base_itemtype.features for base_itemtype
                       in self.inherits],
                      [])

    @property
    def features(self):
        return self.own_features.extend(self.inherited_features)

    @property
    def dependent_features(self):
        return [f for f in self.features if f.determinants]

    @property
    def independent_features(self):
        return [f for f in self.features if not f.determinants]


class Feature(object):

    def __repr__(self):
        return '%s %s' % (self.root_itemtype_name, self.name)

    def determinants_in_scope_of(self, itemtype):
        return (session.query(Feature)
                .join(FeatureDependency.determinant)
                .join(Feature.root_itemtype)
                .filter(and_(FeatureDependency.dependent_id == self.id,
                             Itemtype.name == itemtype.name))).all()


class FeatureDependency(object):

    def __repr__(self):
        return 'F_D: %s depends on %s' % (self.dependent.name,
                                          self.determinant.name)



mapper(Itemtype, itemtypes, properties={
    'inherits': relation(Itemtype,
        secondary=itemtype_inheritance,
        primaryjoin=
            (itemtypes.c.name == itemtype_inheritance.c.itemtype_name),
        secondaryjoin=
            (itemtype_inheritance.c.parent_name == itemtypes.c.name),
        backref='progeny'),
    'own_features': relation(Feature,
        primaryjoin=(features.c.root_itemtype_name == itemtypes.c.name),
        backref=backref('root_itemtype', uselist=False))
})

mapper(Feature, features, properties={
    'dependents': relation(Feature,
        secondary=feature_dependencies,
        primaryjoin=
            (feature_dependencies.c.determinant_id == features.c.id),
        secondaryjoin=
            (feature_dependencies.c.dependent_id == features.c.id),
        backref=backref('determinants'))
})

mapper(FeatureDependency, feature_dependencies, properties={
'dependent':relation(Feature,
uselist=False,
primaryjoin=

[sqlalchemy] Re: dynamic columns

2008-11-18 Thread Eric Ongerth

A way to normalize this:

article_table:
id

title_table:
article_id + language = composite primary key; article_id also = foreign key to article_table.id
language
title

content_table:
article_id + language = composite primary key; article_id also = foreign key to article_table.id
language
content

mapper(Article, article_table, properties={
'titles':relation(Title),
'contents':relation(Content) } )

mapper(Title, title_table)
mapper(Content, content_table)

Now you can assign an article as many titles in as many languages as
you wish, and as many content fields in as many languages as you wish.
You could extend this further in many ways.  It is even possible to
use a dictionary-like class as the collection class for the relations
in the Article mapper, so that you could work like this:

a=Article()
a.title['en']='Confusion in Ten Easy Steps'
a.title['fr']='...'
a.title['ru']='...'   (etc.)

So, this approach means no need to alter your schema, change any
classes, or alter any tables when you add or remove a language.  You
could keep a table of currently accepted languages and use that (via
foreign key) to check the integrity of the 'language' column on the
title and content tables, and so forth.

This technique, generally speaking, is known as Vertical Partitioning
or a 'Vertical Tables' approach, because it results in tall, skinny
tables as opposed to tables that are many columns wide.  Each
separable, changeable attribute that you might wish to add to an
article can have its own table instead of a field in the articles
table, and foreign keys from these tables back to the main article
table give sqlalchemy the information it needs to make articles
respond with integrity as a coherent class despite the normalization/
separation of data.  Of course, to do this you have to use a database
that is good with foreign keys.
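A minimal sketch of this vertical schema using only the stdlib sqlite3 module; the table and column names follow the post, while the sample titles and the title_for() helper are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article_table (id INTEGER PRIMARY KEY);
    CREATE TABLE title_table (
        article_id INTEGER NOT NULL REFERENCES article_table(id),
        language   TEXT NOT NULL,
        title      TEXT NOT NULL,
        PRIMARY KEY (article_id, language)
    );
""")

conn.execute("INSERT INTO article_table (id) VALUES (1)")
# Adding a language is just another INSERT -- no ALTER TABLE needed.
conn.executemany(
    "INSERT INTO title_table (article_id, language, title) VALUES (?, ?, ?)",
    [(1, "en", "Confusion in Ten Easy Steps"),
     (1, "fr", "La confusion en dix etapes faciles")])

def title_for(article_id, language):
    # One row per (article, language) pair, enforced by the composite key.
    row = conn.execute(
        "SELECT title FROM title_table WHERE article_id = ? AND language = ?",
        (article_id, language)).fetchone()
    return row[0] if row else None

print(title_for(1, "en"))  # -> Confusion in Ten Easy Steps
```

An ORM layer then only has to map Title against title_table and hang a relation off Article; the schema itself never changes per language.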

On Nov 18, 2:31 pm, g00fy [EMAIL PROTECTED] wrote:
 hi
 so i have list of languages (suffixes)
 en
 de
 pl
 ru
  etc...

 now i have my article_table, when normaly i would have columns:
 id, title, content

 but now i want to have:
 id, title_en, title_de, title_pl, title_ru, content_en, ..., content_ru

 how can i create table definition dynamicly according to languages i
 have ?
 [I am aware that i will have to alter my table when I will add or
 remove a language]
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Python 2.6 includes sqlite3 2.4.1

2008-11-09 Thread Eric Ongerth

To whom it may concern:

I noticed the following thread in this group (the only thing that came
up when I searched for sqlite3 python 2.6):

http://groups.google.com/group/sqlalchemy/browse_thread/thread/d6d691b53e93b5e5/78a57bae1aefd59d

And then I found the following on the page for What's New in Python
2.6 (released last month):

[quote]
The sqlite3 module, maintained by Gerhard Haering, has been updated
from version 2.3.2 in Python 2.5 to version 2.4.1.
[/quote]

... just FYI.



[sqlalchemy] Re: SQLAlchemy 0.5.0rc3 Released

2008-11-07 Thread Eric Ongerth

Mike.  You have absolutely perfect spelling.  Better than 99% of the
population.  But there is just this one, and only one, English word
that you spell strangely.  You consistently spell "propagate" as
"propigate".  Is there any way we can get an i/a switch in there?

p.s. - Major props on being just around the corner from 0.5.  I am in
awe of SA as always.

Eric




[sqlalchemy] Re: composite primary key/postgres

2008-05-13 Thread Eric Ongerth

So part of the problem is postgresql is autoincrementing where you do
not want it to do so?  I thought postgresql only autoincrements where
your column is of type 'serial'.   Is that not true?  Or if so, you
could use type 'integer' instead of 'serial'.  There is also the
possibility that the combination of column type 'integer' and
'nullable=False' is handled as identical to 'serial', i.e.
autoincrement.  I think I've seen that before, but I'm not sure.



On May 12, 2:57 pm, [EMAIL PROTECTED] wrote:
 hi.
 i have a sort-of multicolumn m2m association table, where the primary
 key is composed of all the links. At least 1 link (actually, 2) is
 always present, but never all. so i am defining all of those columns
 with primary_key=True, nullable=True.
 which is fine in sqlite, but does not work in postgres - it
 autoincrements those columns without value.

 how can i fix this?
 would a default_value=0 - or something - work?

 (now as i look at it, at least as declaration, the whole primary key
 seems nullable - is this wrong?)

 ciao
 svil



[sqlalchemy] Re: Postgres cascade error ?

2008-04-16 Thread Eric Ongerth



On Apr 16, 7:24 am, Michael Bayer [EMAIL PROTECTED] wrote:

 always use delete cascade in conjunction with delete-orphan.   It
 doesnt make much sense to have delete-orphan only and not delete
 cascade.

Oh wow.  That clears up a few things for me.  I don't remember ever
seeing this (or at least I don't remember taking this sense of things
away after reading) in the documentation.  Maybe I developed a blind
spot back around 3.something and never got past it?  I have simply
been avoiding delete-orphan although I looked forward to figuring out
how to use it without errors some day.  I think this was the key fact
that I missed, even though as you pointed out it's kind of the only
way that makes sense.



[sqlalchemy] Connection Already Closed -- potentially incorrect error msg sometimes?

2008-03-09 Thread Eric Ongerth

I don't know if the following traceback, from my Pylons project, will
shed light on this.  But I have fixed the error that I'm about to
mention, and it was not what it appeared, which makes me wonder if
SqlAlchemy has a bug in its error reporting (or on the other hand
maybe there's just something I don't understand in the process).

I was chasing down an error that said "Connection Already Closed" and
having a tough time figuring it out.  I read the FAQ on the subject
and several of Mike's replies to people's posts.  All of this existing
material spoke about leaving a connection open too long, and other
related issues with connection pooling and recycling.  I was pretty
sure that wasn't my problem.  I even rebooted the machine that my
project lives on, and the same error was happening on first access to
the database after rebooting.  Not merely restarting the paster
webserver.

Oddly, the error only happened on a call to session.flush() right
after update() was called on an object.  In other parts of my little
CRUD app, I was able to create new values and save them just fine.  It
was only in the attempt to update an existing row that the error was
happening.

Finally after enough hair loss, I stopped poring over the tracebacks
shown in the Pylons error response, and went directly to the paster
debug logs.  Hmm!  What was actually happening was that a foreign key
relationship was being violated (or i guess you could say just not
satisfied) by the value I was trying to save (update).  The foreign
key error message went to the log where I wasn't looking, because I
typically only look there if the in-browser Pylons error response is
absent or otherwise hung up.  Meanwhile for some reason the ultimate
result was showing up as "Connection Already Closed".  So maybe the
underlying Foreign Key error resulted in the connection closing, and
something in the way SA handled that led to the Connection error, and
I only saw the latter because I hadn't thought to look for an error
beneath the error.

Here's the trace from the Pylons error handler in the browser:

URL: http://127.0.0.1:5000/FVal/update/1160
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\error.py', line 245 in respond
  app_iter = self.application(environ, detect_start_response)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\wsgiapp.py', line 315 in __call__
  return self.app(environ, start_response)
File 'h:\\python25\\lib\\site-packages\\Beaker-0.9.3-py2.5.egg\\beaker\
\middleware.py', line 74 in __call__
  return self.app(environ, start_response)
File 'h:\\python25\\lib\\site-packages\\Beaker-0.9.3-py2.5.egg\\beaker\
\middleware.py', line 145 in __call__
  return self.wrap_app(environ, session_start_response)
File 'h:\\python25\\lib\\site-packages\\Routes-1.7.2-py2.5.egg\\routes\
\middleware.py', line 104 in __call__
  response = self.app(environ, start_response)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\wsgiapp.py', line 95 in __call__
  response = self.dispatch(controller, environ, start_response)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\wsgiapp.py', line 237 in dispatch
  return controller(environ, start_response)
File 'H:\\msys\\1.0\\home\\Eric\\rentals\\rentals\\lib\\base.py', line
30 in __call__
  return WSGIController.__call__(self, environ, start_response)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\controllers\\core.py', line 164 in __call__
  response = self._dispatch_call()
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\controllers\\core.py', line 120 in _dispatch_call
  response = self._inspect_call(func)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\controllers\\core.py', line 79 in _inspect_call
  result = func(**args)
File 'H:\\msys\\1.0\\home\\Eric\\rentals\\rentals\\controllers\
\FVal.py', line 118 in update
  return render('/ShowFVal.mako')
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\templating.py', line 343 in render
  format=format, namespace=kargs, **cache_args)
File 'h:\\Python25\\lib\\site-packages\\pylons-0.9.6.1-py2.5.egg\
\pylons\\templating.py', line 228 in render
  **options)
File 'h:\\python25\\lib\\site-packages\\mako-0.1.10-py2.5.egg\\mako\
\ext\\turbogears.py', line 49 in render
  return template.render(**info)
File 'h:\\python25\\lib\\site-packages\\mako-0.1.10-py2.5.egg\\mako\
\template.py', line 114 in render
  return runtime._render(self, self.callable_, args, data)
File 'h:\\python25\\lib\\site-packages\\mako-0.1.10-py2.5.egg\\mako\
\runtime.py', line 287 in _render
  _render_context(template, callable_, context, *args,
**_kwargs_for_callable(callable_, data))
File 'h:\\python25\\lib\\site-packages\\mako-0.1.10-py2.5.egg\\mako\
\runtime.py', line 304 in _render_context
  _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File 

[sqlalchemy] Re: Connection Already Closed -- potentially incorrect error msg sometimes?

2008-03-09 Thread Eric Ongerth

Thanks for the reply.  I wonder if Pylons is responsible for ignoring
that exception.  My project is at such an early stage that it hasn't
even grown any exception-handling code yet, so I know my own code
didn't catch an exception and neglect to raise it or something.

Anyway, not something I need to get to the bottom of, just wanted to
hear someone's perspective on it.  Thank you for supplying one.

E

On Mar 9, 10:55 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Mar 9, 2008, at 5:58 AM, Eric Ongerth wrote:

  Finally after enough hair loss, I stopped poring over the tracebacks
  shown in the Pylons error response, and went directly to the paster
  debug logs.  Hmm!  What was actually happening was that a foreign key
  relationship was being violated (or i guess you could say just not
  satisfied) by the value I was trying to save (update).  The foreign
  key error message went to the log where I wasn't looking, because I
  typically only look there if the in-browser Pylons error response is
  absent or otherwise hung up.  Meanwhile for some reason the ultimate
  result was showing up as Connection Already Closed.  So maybe the
  underlying Foreign Key error resulted in the connection closing, and
  something in the way SA handled that led to the Connection error, and
  I only saw the latter because I hadn't thought to look for an error
  beneath the error.

 this seems like an error in how you are integrating with Pylons, or
 how Pylons is integrated with exception throws.  An exception throw
 was ignored, and control passed to your regular view template as
 though it were a successful request.   This kind of bug is pretty
 common in a lot of software but it lies within how SQLA is integrated
 here.



[sqlalchemy] Re: Intersect of ORM queries

2008-03-05 Thread Eric Ongerth

Cool.  I wasn't sure if it was ready for filter(A.bs == list_of_bs).
When I tried to do that before, I must have let some silly syntax
error keep me from realizing that it was a workable construction.

Thanks!

On Mar 5, 8:20 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Mar 5, 2008, at 10:50 AM, Eric Ongerth wrote:

  Anyway -- so what would really clean it all up would be:

  session.query(A).filter(A.bs.contains(list_of_bs_being_sought)).all().

  THAT would do exactly what I'm trying to accomplish.  But it would
  require contains() to accept a list and know what to do with it.  My
  proposal would be that the expected behavior is for contains() to
  construct an intersect of selects where each select is like the one it
  creates in its simpler case where the argument to contains() is a
  scalar instead of a list.  Does that make sense?

 Well i think we'd call the operator intersect().  However I think this
 will do what you want right now if you were to say:

 session.query(A).filter(A.bs == list_of_bs).all()

 since it will generate individual EXISTS predicates for each element
 in the list.



[sqlalchemy] Re: Has anyone implemented a dict of lists collection_class?

2008-02-16 Thread Eric Ongerth

It occurs to me that a near analogue to this would be... storing a
collection of one man's children by different mothers.  While it's
somewhat useful to have a collection of the guy's children... what if
you want to look up his children by just one mother, or by each mother
in turn.  There's an easy workaround, just looking up the list of his
children then asking each child whom its mother is.  But that's not
nearly as efficient as having his children collection indexed by
mother.  That's basically all I'm trying to do here (though totally
not the problem domain i'm working in!)

Thanks again.
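The mother-indexed lookup described above can be sketched in plain Python, independent of SQLAlchemy (all names here are illustrative):

```python
from collections import defaultdict

class Child:
    def __init__(self, name, mother):
        self.name = name
        self.mother = mother

children = [Child("Ann", "Maria"), Child("Bob", "Maria"), Child("Cy", "Jo")]

# Workaround: scan the whole list and ask each child who its mother is.
by_scan = [c.name for c in children if c.mother == "Maria"]

# Indexed form: a dict of lists keyed by mother, built once up front.
index = defaultdict(list)
for c in children:
    index[c.mother].append(c)
by_index = [c.name for c in index["Maria"]]

print(by_scan, by_index)  # both ['Ann', 'Bob']
```

The collection_class question is essentially how to get SQLAlchemy to maintain that defaultdict-of-lists shape automatically as items are appended and removed.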

On Feb 16, 2:23 pm, Eric Ongerth [EMAIL PROTECTED] wrote:
 Awesome -- Mike, that's a heck of a great response and I'll try it all
 out.

 But one quick reply to your very first comment, before I go out and do
 so.

 You wrote:

 [quote]
 I'm not sure you're going to be able to use the collection classes
 here - those are all built on the notion of mapped elements being
 appended and removed from a collection.  But here you are
 appending and removing further subsets of elements.
 [/quote]

 Actually, I don't need or want the ability to append or remove entire
 subsets.  There will only be a single element appended or removed at a
 time.  It's just that when a Bar is added to one of these children
 collections, I want it to be filed under a dict slot whose key happens
 to be the Bar's parent attribute.  It should join all the other
 elements stored in a list which holds the contents of that dict slot.
 Know what I mean?  Similarly, when I remove a Bar from one of these
 children collections, we look at its parent attribute to determine
 which slot in the collection it should be removed from, then it gets
 removed from the list which holds the contents of that dict slot.
  (Sorry, nonstandard use of the word "slot" here, just to avoid any
  ambiguity in using the word "value" to speak of what's on the right
 side of the colon in a dict entry).

 My guess is that making this clear, changes your idea of what I'm
 trying to achieve.  Let me know if that's the case.  If not, I need to
 read more deeply into what you wrote.  In either case I'm going to try
 out your examples.  Thanks a bundle for the length and quality of
 response.

 On Feb 16, 2:03 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  yet another adjustment to ListAdapter...

  class ListAdapter(object):
      def __init__(self, parent, key):
          self.__parent = parent
          self.__key = key

      def __cached(self):
          try:
              return self.__cached_data
          except AttributeError:
              self.__cached_data = [item.data for item in self.__parent._data
                                    if item.key == self.__key]
              return self.__cached_data
      __cached = property(__cached)

      def __delcached(self):
          try:
              del self.__cached_data
          except AttributeError:
              pass

      def __iter__(self):
          return iter(self.__cached)

      def __eq__(self, other):
          return list(self) == list(other)

      def __repr__(self):
          return repr(list(self))

      def append(self, item):
          self.__delcached()
          self.__parent._data.append(DictOfListElement(self.__key, item))

      def __getitem__(self, index):
          return self.__cached[index]

      def __setitem__(self, index, value):
          self.__delcached()
          [item for item in self.__parent._data
           if item.key == self.__key][index].data = value

      # other list-like methods



[sqlalchemy] Re: Has anyone implemented a dict of lists collection_class?

2008-02-15 Thread Eric Ongerth

In case I didn't make it clear enough -- I've already done the
following:

'children': relation(Bar,
collection_class=attribute_mapped_collection('foo'),
backref=backref('parent', remote_side=[bars.c.id])) } )

And that worked great -- if I only needed to have up to a SINGLE child
per bar per foo.  Because the dict in attribute_mapped_collection
expects scalar keys and scalar values, right?  It's not set up to
collect a whole list of values per key.  And that is what I need.  Any
given Foo is only going to appear once in the keys of any given Bar's
children DictOfLists, of course.  But the values mapped to that given
Foo need to be a list of Bars of any length.  Any given Bar will have
1..n children in the bars table; each of these child Bars will be
related to a single Foo, but the total number of Foos is < n, so a
parent Bar might have a number of child Bars for a given Foo, while
only having zero or one single child Bar for some other Foo.

There, I think that tells it more completely.  Sorry for the
metasyntactic variables.


On Feb 15, 8:13 pm, Eric Ongerth [EMAIL PROTECTED] wrote:
 If anyone out there has already implemented a custom DictOfLists
 collection class to map scalar keys to lists of values, I would be
 grateful for the opportunity to avoid reinventing the wheel.  If not,
 I guess I'll start working on it.

 I've experimented successfully with attribute_mapped_collection and
 column_mapped_collection and they work just great.  However, instead
 of mapping keys to single values, I need some of my mapper relations
 to map to a list of values.

 Here is more of the picture, for example.

 foos = Table('foos', metadata,
 Column('id', Integer, primary_key=True),
 Column('name', String(20)))

 bars = Table('bars', metadata,
 Column('id', Integer, primary_key=True),
 Column('foo_id', Integer, ForeignKey('foos.id')),
 Column('value', String(20)),
 Column('parent_id', Integer, ForeignKey('bars.id')))

 class Foo(object): pass
 class Bar(object): pass

 mapper(Foo, foos)

 mapper(Bar, bars, properties={
 'foo':relation(Foo, uselist=False, backref='bars'),
 'children':relation(Bar,
 backref=backref('parent',
 remote_side=[bars.c.id]))

 })

 ... So we have a relation of 1 Foo : many Bars.  And within the Bars
 we also have 'adjacency' (tree-like) relations between the various
 rows of the 'bars' table.  A Bar's children are kept in the standard
 list-like collection class.

 But what I really need is a *dict* instead of a list.  Ok, SA already
 takes care of that.  But I actually need a list-like collection to
 appear as the value for each key in the dict.  Specifically, I need
 each Bar to be able to have stored children *per Foo*.  And not keyed
 by the parent's foo, but the child's foo.

 Does that make sense?  I'll be working on this immediately, but if
 anyone can shorten my path to getting this straight I'd be very glad.
 I'm beginning to work out the use of a custom collection_class for
 this, but I haven't done all that much with metaclassing and the way
 forward isn't obvious (the SA instructions about this seem to assume
 the programmer is pretty experienced with custom subclassing etc.)



[sqlalchemy] Has anyone implemented a dict of lists collection_class?

2008-02-15 Thread Eric Ongerth

If anyone out there has already implemented a custom DictOfLists
collection class to map scalar keys to lists of values, I would be
grateful for the opportunity to avoid reinventing the wheel.  If not,
I guess I'll start working on it.

I've experimented successfully with attribute_mapped_collection and
column_mapped_collection and they work just great.  However, instead
of mapping keys to single values, I need some of my mapper relations
to map to a list of values.

Here is more of the picture, for example.

foos = Table('foos', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(20)))

bars = Table('bars', metadata,
Column('id', Integer, primary_key=True),
Column('foo_id', Integer, ForeignKey('foos.id')),
Column('value', String(20)),
Column('parent_id', Integer, ForeignKey('bars.id')))

class Foo(object): pass
class Bar(object): pass

mapper(Foo, foos)

mapper(Bar, bars, properties={
'foo':relation(Foo, uselist=False, backref='bars'),
'children':relation(Bar,
backref=backref('parent',
remote_side=[bars.c.id]))
})


... So we have a relation of 1 Foo : many Bars.  And within the Bars
we also have 'adjacency' (tree-like) relations between the various
rows of the 'bars' table.  A Bar's children are kept in the standard
list-like collection class.

But what I really need is a *dict* instead of a list.  Ok, SA already
takes care of that.  But I actually need a list-like collection to
appear as the value for each key in the dict.  Specifically, I need
each Bar to be able to have stored children *per Foo*.  And not keyed
by the parent's foo, but the child's foo.

Does that make sense?  I'll be working on this immediately, but if
anyone can shorten my path to getting this straight I'd be very glad.
I'm beginning to work out the use of a custom collection_class for
this, but I haven't done all that much with metaclassing and the way
forward isn't obvious (the SA instructions about this seem to assume
the programmer is pretty experienced with custom subclassing etc.)



[sqlalchemy] Re: Has anyone implemented a dict of lists collection_class?

2008-02-15 Thread Eric Ongerth

Ok, I tried subclassing MappedCollection and it seems like I did all
right with my made-up appender, remover, and iterator functions.  At
least I fixed various errors and got this to at least function as the
collection_class for the mapper shown above.

But I can't figure out how to tell my DictOfLists to return a list
instead of a single item.  So there's a place where I'm trying to say:
for child in some_bar.children[some_foo]:
blah(child)
etc.

Sadly, while some_bar.children[some_foo] at least returns something,
it returns a scalar Bar instance, not a list of Bars as I want it to.

What's the next step?



On Feb 15, 8:21 pm, Eric Ongerth [EMAIL PROTECTED] wrote:
 In case I didn't make it clear enough -- I've already done the
 following:

 'children': relation(Bar,
 collection_class=attribute_mapped_collection('foo'),
 backref=backref('parent', remote_side=[bars.c.id])) } )

 And that worked great -- if I only needed to have up to a SINGLE child
 per bar per foo.  Because the dict in attribute_mapped_collection
 expects scalar keys and scalar values, right?  It's not set up to
 collect a whole list of values per key.  And that is what I need.  Any
 given Foo is only going to appear once in the keys of any given Bar's
 children DictOfLists, of course.  But the values mapped to that given
 Foo need to be a list of Bars of any length.  Any given Bar will have
 1..n children in the bars table; each of these child Bars will be
 related to a single Foo, but the total number of Foos is < n, so a
 parent Bar might have a number of child Bars for a given Foo, while
 only having zero or one single child Bar for some other Foo.

 There, I think that tells it more completely.  Sorry for the
 metasyntactic variables.

 On Feb 15, 8:13 pm, Eric Ongerth [EMAIL PROTECTED] wrote:

  If anyone out there has already implemented a custom DictOfLists
  collection class to map scalar keys to lists of values, I would be
  grateful for the opportunity to avoid reinventing the wheel.  If not,
  I guess I'll start working on it.

  I've experimented successfully with attribute_mapped_collection and
  column_mapped_collection and they work just great.  However, instead
  of mapping keys to single values, I need some of my mapper relations
  to map to a list of values.

  Here is more of the picture, for example.

  foos = Table('foos', metadata,
  Column('id', Integer, primary_key=True),
  Column('name', String(20)))

  bars = Table('bars', metadata,
  Column('id', Integer, primary_key=True),
  Column('foo_id', Integer, ForeignKey('foos.id')),
  Column('value', String(20)),
  Column('parent_id', Integer, ForeignKey('bars.id')))

  class Foo(object): pass
  class Bar(object): pass

  mapper(Foo, foos)

  mapper(Bar, bars, properties={
  'foo':relation(Foo, uselist=False, backref='bars'),
  'children':relation(Bar,
  backref=backref('parent',
  remote_side=[bars.c.id]))

  })

  ... So we have a relation of 1 Foo : many Bars.  And within the Bars
  we also have 'adjacency' (tree-like) relations between the various
  rows of the 'bars' table.  A Bar's children are kept in the standard
  list-like collection class.

  But what I really need is a *dict* instead of a list.  Ok, SA already
  takes care of that.  But I actually need a list-like collection to
  appear as the value for each key in the dict.  Specifically, I need
  each Bar to be able to have stored children *per Foo*.  And not keyed
  by the parent's foo, but the child's foo.

  Does that make sense?  I'll be working on this immediately, but if
  anyone can shorten my path to getting this straight I'd be very glad.
  I'm beginning to work out the use of a custom collection_class for
  this, but I haven't done all that much with metaclassing and the way
  forward isn't obvious (the SA instructions about this seem to assume
  the programmer is pretty experienced with custom subclassing etc.)



[sqlalchemy] Re: Has anyone implemented a dict of lists collection_class?

2008-02-15 Thread Eric Ongerth

Incidentally, I tried mocking this all up entirely outside of SA by
creating a DictOfLists class that subclasses the basic 'dict'.  That
worked fine, returns lists, adds and removes as desired, handles
everything as one would expect.  So I don't think I'm fumbling with
the basic mechanics of it.
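One way that outside-of-SA mock-up might look: a plain dict subclass that files each item under a key derived from one of its attributes. This is a hypothetical sketch of the basic mechanics only, with none of SQLAlchemy's MappedCollection instrumentation:

```python
import operator

class DictOfLists(dict):
    def __init__(self, keyfunc):
        # keyfunc maps an item to the dict key it should be filed under.
        self.keyfunc = keyfunc

    def add(self, item):
        # Append to the list for this key, creating the list on first use.
        self.setdefault(self.keyfunc(item), []).append(item)

    def discard(self, item):
        # Remove from whichever list the item's own key points at.
        bucket = self.get(self.keyfunc(item), [])
        if item in bucket:
            bucket.remove(item)

class Bar:
    def __init__(self, foo, value):
        self.foo, self.value = foo, value

children = DictOfLists(operator.attrgetter("foo"))
b1, b2 = Bar("red", 1), Bar("red", 2)
children.add(b1); children.add(b2); children.add(Bar("blue", 3))

print([b.value for b in children["red"]])  # -> [1, 2]
```

The SA-specific work is then translating add/discard into the appender and remover hooks so that `some_bar.children[some_foo]` yields the whole list rather than a single Bar.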


On Feb 15, 11:41 pm, Eric Ongerth [EMAIL PROTECTED] wrote:
 Ok, I tried subclassing MappedCollection and it seems like I did all
 right with my made-up appender, remover, and iterator functions.  At
 least I fixed various errors and got this to at least function as the
 collection_class for the mapper shown above.

 But I can't figure out how to tell my DictOfLists to return a list
 instead of a single item.  So there's a place where I'm trying to say:
 for child in some_bar.children[some_foo]:
     blah(child)
 etc.

 Sadly, while some_bar.children[some_foo] at least returns something,
 it returns a scalar Bar instance, not a list of Bars as I want it to.

 What's the next step?

 On Feb 15, 8:21 pm, Eric Ongerth [EMAIL PROTECTED] wrote:

  In case I didn't make it clear enough -- I've already done the
  following:

  'children': relation(Bar,
  collection_class=attribute_mapped_collection('foo'),
  backref=backref('parent', remote_side=[bars.c.id])) } )

  And that worked great -- if I only needed to have up to a SINGLE child
  per bar per foo.  Because the dict in attribute_mapped_collection
  expects scalar keys and scalar values, right?  It's not set up to
  collect a whole list of values per key.  And that is what I need.  Any
  given Foo is only going to appear once in the keys of any given Bar's
  children DictOfLists, of course.  But the values mapped to that given
  Foo need to be a list of Bars of any length.  Any given Bar will have
  1..n children in the bars table; each of these child Bars will be
   related to a single Foo, but the total number of Foos is < n, so a
  parent Bar might have a number of child Bars for a given Foo, while
  only having zero or one single child Bar for some other Foo.

  There, I think that tells it more completely.  Sorry for the
  metasyntactic variables.

  On Feb 15, 8:13 pm, Eric Ongerth [EMAIL PROTECTED] wrote:

   If anyone out there has already implemented a custom DictOfLists
   collection class to map scalar keys to lists of values, I would be
   grateful for the opportunity to avoid reinventing the wheel.  If not,
   I guess I'll start working on it.

   I've experimented successfully with attribute_mapped_collection and
   column_mapped_collection and they work just great.  However, instead
   of mapping keys to single values, I need some of my mapper relations
   to map to a list of values.

   Here is more of the picture, for example.

   foos = Table('foos', metadata,
   Column('id', Integer, primary_key=True),
   Column('name', String(20)))

   bars = Table('bars', metadata,
   Column('id', Integer, primary_key=True),
   Column('foo_id', Integer, ForeignKey('foos.id')),
   Column('value', String(20)),
   Column('parent_id', Integer, ForeignKey('bars.id')))

   class Foo(object): pass
   class Bar(object): pass

   mapper(Foo, foos)

   mapper(Bar, bars, properties={
   'foo':relation(Foo, uselist=False, backref='bars'),
   'children':relation(Bar,
   backref=backref('parent',
   remote_side=[bars.c.id]))

   })

   ... So we have a relation of 1 Foo : many Bars.  And within the Bars
   we also have 'adjacency' (tree-like) relations between the various
   rows of the 'bars' table.  A Bar's children are kept in the standard
   list-like collection class.

   But what I really need is a *dict* instead of a list.  Ok, SA already
   takes care of that.  But I actually need a list-like collection to
   appear as the value for each key in the dict.  Specifically, I need
   each Bar to be able to have stored children *per Foo*.  And not keyed
   by the parent's foo, but the child's foo.

   Does that make sense?  I'll be working on this immediately, but if
   anyone can shorten my path to getting this straight I'd be very glad.
   I'm beginning to work out the use of a custom collection_class for
   this, but I haven't done all that much with metaclassing and the way
   forward isn't obvious (the SA instructions about this seem to assume
   the programmer is pretty experienced with custom subclassing etc.)
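The dict-of-lists behavior asked about above can be mocked up outside of SQLAlchemy first, which makes the required appender/remover semantics concrete. Below is a minimal pure-Python sketch (modern Python; the `keyfunc` argument and the `add`/`discard` method names are illustrative assumptions, not the collection API of any SQLAlchemy release):

```python
class DictOfLists(dict):
    """Map scalar keys to lists of values.

    keyfunc extracts the dict key from each value -- in the thread's
    example, that would be the child Bar's `foo` attribute.
    """

    def __init__(self, keyfunc):
        super().__init__()
        self.keyfunc = keyfunc

    def add(self, value):
        # appender: grow (or create) the list bucket for this value's key
        self.setdefault(self.keyfunc(value), []).append(value)

    def discard(self, value):
        # remover: drop the value; delete the key once its list empties
        key = self.keyfunc(value)
        bucket = self.get(key, [])
        if value in bucket:
            bucket.remove(value)
            if not bucket:
                del self[key]
```

With this in hand, `some_bar.children[some_foo]` would return a real list, and the appender/remover pair is exactly what a custom collection_class has to supply on top of the plain dict.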
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: why this Query API?

2007-08-24 Thread Eric Ongerth

Search the discussion group archive for the phrase (in quotes) "in
place" and take a look at some of the related discussions from late
June.


On Aug 23, 2:10 pm, Marcos Dione [EMAIL PROTECTED] wrote:
 hi, I'm rather new to SQLAlchemy, using version 0.3.x right now. I
 would like to know the reason why Query.filter() returns another Query
 object instead of applying in place, if there is one. An answer to this
 would help me to better understand this ORM.





[sqlalchemy] Re: Group by? Still a problem

2007-06-27 Thread Eric Ongerth



On Jun 27, 4:34 pm, voltron [EMAIL PROTECTED] wrote:
 Could you point me to the url where this example is? I wonder why
 order_by and other things work with the ORM then and group_by left out


Here is where to find the group_by method in the documentation:
From the main table of contents, select "Generated Documentation",
then search for "Class Query" and click on that link.  Now you're
here:
http://www.sqlalchemy.org/docs/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_Query
Now scroll down to the group_by() method.

For more usage suggestions, see the following:
http://www.sqlalchemy.org/docs/datamapping.html#datamapping_query_callingstyles

And here is exactly where the example given by Huy Do is found:
http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_selects
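For readers without the old docs at hand, the SQL that a group_by() call ultimately emits is plain GROUP BY; a quick illustration with the stdlib sqlite3 module and a made-up table (not taken from the thread):

```python
import sqlite3

# In-memory database with a toy rentals table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO rentals VALUES (?, ?)",
                 [("ski", 2), ("ski", 3), ("boot", 1)])

# GROUP BY collapses rows sharing a key so aggregates apply per group
rows = conn.execute(
    "SELECT item, SUM(qty) FROM rentals GROUP BY item ORDER BY item"
).fetchall()
print(rows)  # [('boot', 1), ('ski', 5)]
```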





[sqlalchemy] Re: Qualified column names in JOINs?

2007-06-23 Thread Eric Ongerth



On Jun 23, 6:18 am, Michael Bayer [EMAIL PROTECTED] wrote:
 im not sure if full outer join is really available on most databases.

I'm confused by the inclusion of the word "really" there.

Is it that some of them claim to support a full outer join but what
they deliver is not really the right result?  Or just that many dbs
haven't supported the operation yet?

Full outer joins are pretty straightforward in postgresql:

http://www.postgresql.org/docs/8.2/interactive/queries-table-expressions.html
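Where a backend lacks FULL OUTER JOIN (SQLite only gained it in 3.39), the usual workaround is a LEFT JOIN in each direction glued together with UNION ALL. A sketch with made-up tables, using the stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, x TEXT);
    CREATE TABLE b (id INTEGER, y TEXT);
    INSERT INTO a VALUES (1, 'a1'), (2, 'a2');
    INSERT INTO b VALUES (2, 'b2'), (3, 'b3');
""")

# Emulated FULL OUTER JOIN: all rows of a (matched or not) plus the
# rows of b that found no match in a.
rows = conn.execute("""
    SELECT a.id, a.x, b.y FROM a LEFT JOIN b ON a.id = b.id
    UNION ALL
    SELECT b.id, a.x, b.y FROM b LEFT JOIN a ON a.id = b.id
    WHERE a.id IS NULL
    ORDER BY 1
""").fetchall()
print(rows)  # [(1, 'a1', None), (2, 'a2', 'b2'), (3, None, 'b3')]
```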





[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-06-21 Thread Eric Ongerth

Thank you.  Glad it worked out easily.





[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-06-20 Thread Eric Ongerth
Hello,

* Apologies if this is a duplicate -- I attempted to post a few hours
earlier but the result vanished.  Could be that the earlier effort went off
as a direct email to MB, instead of a post to this group as intended.
chris e posted about this topic in April and I was trying to respond on
that thread, but Google Groups won't take a response this long after.

Below, i've included a working test script that causes a
ConcurrentModificationError.  I'm wondering if that is the right result.  I
was working with this section of my code specifically to work out what
cascade relationships I wanted to use in my tables and mappers, and while
experimenting with various cascade options I fully expected to run into some
exceptions where I couldn't delete some object because of its
dependencies.  But seeing a ConcurrentModificationError instead was a
surprise.  So I'm wondering if I've just got something set up wrong.

I've done my best to trim this to a bare-bones minimal reproduction of the
error, though it's still not tiny.  Can't really cut out any more
tables/classes/mappers without rendering the test script alien to what I'm
actually working on.

  I'm modeling a rental reservation system.  rentable Items are (multiple
table) polymorphic.  In this example the subclasses of Item have all been
trimmed down to just one for clarity: Ski.  Events are polymorphic, too;
here I've included only most obvious subclass of event: Reservation.  Each
reservation has one customer associated with it.  A reservation is
associated to one or more Gearsets; each gearset has one or more
Items.  Items to Gearsets is actually many-to-many, and this is accomplished
via three secondary tables: historyfile, currentfile, and futurefile.  In
this way an item has a distinct history of what sets (and thereby what
rentals) it has been a part of in the past; a record of what it's doing
right now, and what sets (and thereby rentals) it is reserved to be a part
of in the future.

If I had to guess, maybe this 3-way split of secondary/association tables is
the thing that might be causing problems.  I set it up that way to keep a
strong, up-front distinction between events being in an item's future vs.
its present case vs. its past.  But I suppose I could actually achieve the
same thing with all gearsetitem associations being in a single table, and
just map the item's future/current/past to separate selects on that
table.  Right?

But that's semi-off-topic.  Again, I thought maybe I'd encounter some
cascade problems but I didn't expect a ConcurrentModificationError to happen
where it's happening here.

Here's the test script and stack trace:

# semi-minimal test example for ConcurrentModificationError:
# Deleted rowcount 0 does not match number of objects deleted 1

import sqlalchemy
from sqlalchemy import Table, BoundMetaData, Column
from sqlalchemy import Integer, String, ForeignKey, PrimaryKeyConstraint
from sqlalchemy import ForeignKeyConstraint, polymorphic_union
from sqlalchemy import mapper, relation, backref
from sqlalchemy import create_engine, create_session

db = create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/test2',
encoding='utf-8')
metadata = BoundMetaData(db)

items = Table('items', metadata,
Column('id', Integer, primary_key=True, autoincrement=True),
Column('type', String(20)))

skis = Table('skis', metadata,
Column('id', Integer, primary_key=True),
Column('data', Integer),
ForeignKeyConstraint(['id'], ['items.id']))

events = Table('events', metadata,
Column('id', Integer, primary_key=True),
Column('etype', String(20)),
Column('data', Integer))

reservations = Table('reservations', metadata,
Column('id', Integer, ForeignKey('events.id'), primary_key=True),
Column('customer_id', Integer, ForeignKey('customers.id')),
Column('data', Integer))

gearsets = Table('gearsets', metadata,
Column('id', Integer, primary_key=True),
Column('event_id', Integer, ForeignKey('events.id')),
Column('data', Integer))

historyfile = Table('historyfile', metadata,
Column('item_id', Integer, ForeignKey('items.id'), primary_key=True),
Column('gearset_id', Integer, ForeignKey('gearsets.id'),
primary_key=True))

currentfile = Table('currentfile', metadata,
Column('item_id', Integer, ForeignKey('items.id'), primary_key=True),
Column('gearset_id', Integer, ForeignKey('gearsets.id'),
primary_key=True))

futurefile = Table('futurefile', metadata,
Column('item_id', Integer, ForeignKey('items.id'), primary_key=True),
Column('gearset_id', Integer, ForeignKey('gearsets.id'),
primary_key=True))

customers = Table('customers', metadata,
Column('id', Integer, primary_key = True, autoincrement=True),
Column('data', Integer))

class Item(object): pass
class Ski(Item): pass
class GearSet(object): pass
class Event(object): pass
class Reservation(Event): pass
class Customer(object): pass

item_join = polymorphic_union(
{
'ski':items.join(skis),

[sqlalchemy] Re: How to catch a changed event in ORM

2007-06-20 Thread Eric Ongerth


On Jun 20, 12:45 am, Can Xue [EMAIL PROTECTED] wrote:
 I'm working on a GUI project and use SQLAlchemy for ORM.
 Can anyone tell me how to catch an attribute-changed event in
 SQLAlchemy so that the application can update the UI automatically.
 Thank you.

 --
 XUE Can

This may be more of a Python question, because it is not specifically
related to data persistence.

Mapper Extensions:
http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_extending
can help if you want to catch a hook before a class instance is
updated in the database.  But if you want to catch the moment when the
attribute is changed in memory, consider the possibility that this is
not in sqlalchemy's scope.





[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-06-20 Thread Eric Ongerth

p.s.: I'm curious whether this is the same issue mentioned in ticket
370, though that was apparently specific to Firebird's treatment of
rowcounts; here I'm on postgresql 8.2.





[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-06-20 Thread Eric Ongerth

Thanks.  I'll be getting back to this tomorrow too.  Meanwhile: please
note -- easily overlooked, item.current has uselist=False in its
relation to Gearset because an item can only *currently* be involved
in, at most, one set of items / one in-progress reservation.
However, item.history and item.future both have uselist=True in their
relations to Gearset, because items can have loads of future
reservations and loads of events they participated in in the past.
While it's not clear that this little scalar-vs-collection difference
would cause problems when sqla is processing dependencies upon delete,
conceivably it could?

Also in the meantime, instinctively would you say keeping all the past/
present/future in one table would just be a better way to go?  It only
occurred to me very recently that I could do that, and map item.future
to a select that collects only related gearsets connected to events
with future dates, and map item.history to a different select on the
same association table, etc.





[sqlalchemy] UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

For reference:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg02239.html

I found the above discussion when googling a ProgrammingError i've
been getting with a polymorphic_union:

<quote>
sqlalchemy.exceptions.SQLError: (ProgrammingError) UNION types numeric
and character varying cannot be matched
'SELECT ijoin.type AS ijoin_type, ijoin.id AS ijoin_id, ijoin.size AS
ijoin_size \nFROM (SELECT items.type AS type, skiboots.id AS id,
skiboots.size AS size \nFROM items JOIN skiboots ON items.id =
skiboots.id UNION ALL SELECT anon_c93f.type AS type, anon_c93f.id AS
id, CAST(NULL AS VARCHAR(20)) AS size \nFROM (SELECT items.id AS id,
items.type AS type \nFROM items \nWHERE items.type = %(items_type)s)
AS anon_c93f UNION ALL SELECT items.type AS type, skis.id AS id,
skis.size AS size \nFROM items JOIN skis ON items.id = skis.id) AS
ijoin ORDER BY ijoin.id' {'items_type': 'item'}
</quote>

Here's a minimal test script:

from sqlalchemy import *

db = create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/
testrentals',
encoding='utf-8')
metadata = BoundMetaData(db)

items = Table('items', metadata,
Column('id', Integer, primary_key=True, autoincrement=True),
Column('type', String(20)))

skis = Table('skis', metadata,
Column('id', Integer, primary_key=True),
Column('size', String(20), nullable=False),
ForeignKeyConstraint(['id'], ['items.id']))

skiboots = Table('skiboots', metadata,
Column('id', Integer, primary_key=True),
Column('size', types.Numeric(3,1)),
ForeignKeyConstraint(['id'], ['items.id']))

item_join = polymorphic_union(
{
'ski':items.join(skis),
'skiboot':items.join(skiboots),
'item':items.select(items.c.type=='item'),
}, None, 'ijoin')

class Item(object):pass
class Ski(Item): pass
class SkiBoot(Item): pass

item_mapper = mapper(Item, items,
select_table=item_join,
polymorphic_on=item_join.c.type,
polymorphic_identity='item')

ski_mapper = mapper(Ski, skis,
inherits=item_mapper,
polymorphic_identity='ski')

skiboot_mapper = mapper(SkiBoot, skiboots,
inherits=item_mapper,
polymorphic_identity='skiboot',
inherit_condition = items.c.id==skiboots.c.id)

if __name__ == '__main__':
    session = create_session()
    print session.query(Item).select()

##


So, skis are working fine but skiboots aren't.  If I either comment
out the 'size' column in the skiboots table:
# Column('size', types.Numeric(3,1)),
- or - comment out the 'skiboots' line in the item_join:
# 'skiboot':items.join(skiboots),

...then it runs ok.

Maybe I'm making incorrect use of the Numeric type?  Or is this a
bug?  I want to use the Numeric type because I'd like to represent
boot sizes as, e.g., 9.5, 10.0, 10.5.  For skis the size is a String
because sometimes it's a number and sometimes a nominal size like "s",
"m", "xl", etc.  No problem with the string; I'm just wondering if I
can use the Numeric type as sqla currently stands.  Do I have to
define a custom type and stash Numerics in string representations?

Thanks!
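PostgreSQL refuses the union above because every branch of a UNION must produce one common type per column; casting the numeric size to text (much like the CAST(NULL AS VARCHAR(20)) already visible in the generated SQL) is one way to make the branches match. A sketch of that fix, using stdlib sqlite3 as a stand-in backend and made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE skis (id INTEGER, size TEXT);        -- nominal sizes
    CREATE TABLE skiboots (id INTEGER, size NUMERIC); -- numeric sizes
    INSERT INTO skis VALUES (1, 'xl');
    INSERT INTO skiboots VALUES (2, 9.5);
""")

# CAST unifies the two differently-typed "size" columns so the UNION
# branches agree on a single column type.
rows = conn.execute("""
    SELECT id, size FROM skis
    UNION ALL
    SELECT id, CAST(size AS TEXT) FROM skiboots
    ORDER BY id
""").fetchall()
print(rows)
```

The alternative mentioned later in the thread (renaming to ski_size / skiboot_size so the columns never collide in the union) avoids the cast entirely.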





[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-06 Thread Eric Ongerth


On Jun 6, 8:32 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 5, 2007, at 10:30 PM, Mike Orr wrote:

  I do think .append_whereclause should be changed to .append_to_where.
  A SQL statement can have only one WHERE clause; what you're actually
  appending is an AND operand.  .append_to_where seems to get that
  across better than .append_whereclause or .append_where.  The word
  clause is superfluous because all parts of a SQL statement are
  called clauses.

 I know phrases like append_to_where are more correct, but its a
 lot of typing.  I had in mind just where().  i dont think people
 trip over the meaning of multiple where() statements.

+1 for where()

also for group_by() and having(), no?

...see:
http://www.mail-archive.com/[EMAIL PROTECTED]/msg03449.html





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

So then I thought: maybe I just need to reflect the skiboots table
and override the "size" column to the desired type?  That would make
sense... so I tried it, using the same script as above but adding the
line autoload=True as the final clause in each Table definition.

Now i'm getting a different error:
sqlalchemy.exceptions.ArgumentError: Can't determine join between
'items' and 'skis'; tables have more than one foreign key constraint
relationship between them. Please specify the 'onclause' of this join
explicitly.

Ok, so my item_join definition was too loose.
Changed the ski and skiboot lines in it to read:
'ski':join(items, skis, items.c.id==skis.c.id),
'skiboot':join(items, skiboots, items.c.id==skiboots.c.id)

...and still get the same error.  How much more specific can I get
with my onclause?  In each case the items table and each of its
children are only joined by a single column, 'id'.

I can't see that the 'items' and 'skis' table have more than one
foreign key constraint relationship between them.  What am I missing?





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

oops, sorry -- I was adding my reply while you were still writing
yours.





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth


On Jun 6, 8:47 am, Michael Bayer [EMAIL PROTECTED] wrote:
 your size column differs in type.  you cant create a UNION with
 differing types in the unioned queries.  so it can either be both
 string, both numeric, or use distinct columns.

Ah!  Ok, if I were more experienced with unions/joins I would have
realized that the way I was doing it, I was asking SA to smash two
'size' columns of different type together.  So I can just ask SA to do
some column aliasing if I really need the column on each child table
to be identically named simply "size", or if I don't mind the change I
can switch to having skis have a 'ski_size' column and skiboots have a
'skiboot_size' column, etc.  Correct?





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

Yeah.  My problem has been solved by altering the tables...
skiboots.c.size is now skiboots.c.skiboot_size, and skis.c.size is now
skis.c.ski_size.

Is there a way I could avoid that, making use of the use_labels=True
parameter on select()?  I've been trying to work out how to rewrite my
item_join to do that, so that identically named columns in child tables
would not collide.





[sqlalchemy] Re: UNION types x and y cannot be matched

2007-06-06 Thread Eric Ongerth

Thanks for your responses, Mike.





[sqlalchemy] Re: How to most effectively associate child objects to parents upon _init__ (lookup tables)

2007-06-05 Thread Eric Ongerth

Hi John,

Check out the doc section entitled Mapping a class with table
inheritance:
http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_inheritance
Although it's not the only way to do it, you might be interested in
polymorphic multiple-table inheritance, which SQLAlchemy already
handles really well.

As for your current code: if set up for polymorphic inheritance,
sqlalchemy will handle the connection of an object with the proper
table for its type and you won't need that select statement in your
__init__ method at all.  Furthermore, sqlalchemy doesn't even call
your __init__ methods when it loads items from storage; it uses
__new__ (see item 5.6 on the FAQ page for explanation: why isn't my
init called when i load objects?). Init is just for when you first
make a new object of your own, before you've ever persisted the
object.

Read and ye shall find!

Here is a reworked version that has the behavior you might be looking
for:

from sqlalchemy import *
#..other imports

engine = create_engine('mysql://[EMAIL PROTECTED])  #example uri
metadata = BoundMetaData(engine)

items_table = Table('items', metadata,
Column('id', Integer, primary_key = True),
Column('item_type', String(20))
# add any other columns here for properties that should be
# carried by every item regardless of its item_type
)

tools_table = Table('tools', metadata,
Column('item_id', Integer, ForeignKey('items.id'),
primary_key=True),
Column('tool_name', String(20)))
# add any other columns for properties of tools only

widgets_table = Table('widgets', metadata,
Column('item_id', Integer, ForeignKey('items.id'),
primary_key=True),
Column('widget_name', String(20)))
# add any other columns for properties of widgets only

# class definitions
class Item(object):
    def __init__(self, item_type):
        self.item_type = item_type
        # id will be provided automatically

class Tool(Item):   ## <-- note the inheritance
    def __init__(self, tool_name):
        self.tool_name = tool_name
        # etc., set any other values for tools_table columns
    def __repr__(self):
        return 'Tool (%s)' % self.tool_name

class Widget(Item):
    def __init__(self, widget_name):
        self.widget_name = widget_name
        # etc., set any other values for widgets_table columns
    def __repr__(self):
        return 'Widget (%s)' % self.widget_name

item_join = polymorphic_union(
{
'tool': items_table.join(tools_table),
'widget': items_table.join(widgets_table),
'item': items_table.select(items_table.c.item_type=='item'),
}, None, 'ijoin')
# note: by assigning an item type 'item' you can create generic items
# which are neither tools nor widgets (etc.)

item_mapper = mapper(Item, items_table,
select_table = item_join,
polymorphic_on = item_join.c.item_type,
polymorphic_identity = 'item')

tool_mapper = mapper(Tool, tools_table,
inherits=item_mapper,
polymorphic_identity='tool')

widget_mapper = mapper(Widget, widgets_table,
inherits=item_mapper,
polymorphic_identity='widget')

# note, no need to name the mappers if you don't wish to
# refer to them later

if __name__ == '__main__':
    metadata.create_all()
    session = create_session()

    metadata.engine.echo = True
    item1 = Tool('sample hammer')
    item2 = Tool('screwdriver')
    item3 = Widget('spam')
    item4 = Widget('eggs')

    # note: items don't *have* ids until saved AND flushed
    # note also: item ids are assigned automatically

    session.save(item1)
    session.save(item2)
    session.save(item3)
    session.save(item4)
    session.flush()

    allitems = session.query(Item).select()
    metadata.engine.echo = False
    for item in allitems:
        print item.id, item

    session.close()
    metadata.drop_all()


On Jun 5, 6:20 pm, John Lorance [EMAIL PROTECTED] wrote:
 I'm newish to SqlAlchemy and for the life of me I can't figure out how
 to properly set things up so that lookup tables(objects) are cached
 and/or it is easy for new parent objects to associate to their
 childing upon initialization.  See below for code snippet and sample
 problem.

 from sqlalchemy import *
 #..other imports

 metadata = MetaData()

 item_types_table = Table('item_types', metadata,
 Column('id', Integer, autoincrement=False, primary_key = True),
 Column('name', String(50))
 )

 items_table = Table('items', metadata,
 Column('id', Integer, primary_key = True),
 Column('item_type_id', Integer, ForeignKey('item_types.id')),
 Column('name', String(150))
 )

 engine = create_engine('mysql://[EMAIL PROTECTED])  #example uri
 engine.echo = True
 metadata.create_all(engine)

 # class definitions
 class ItemType(object):
     def __init__(self, id, name=None):
         self.id = id
         self.name = name

 class Item(object):
     def __init__(self, type_name, name):
         self.item_type = session.query(ItemType).selectfirst(
             ItemType.c.name==type_name)

[sqlalchemy] Re: How to most effectively associate child objects to parents upon _init__ (lookup tables)

2007-06-05 Thread Eric Ongerth

What's more, I should have just said to look in your sqlalchemy
subdirectory /examples/polymorph/polymorph.py.

I forgot that's where I learned the above techniques a month ago...





[sqlalchemy] Re: PROPOSAL: whack query.select(), selectfirst(), selectone(), select_by(), selectfirst_by(), selectone_by(), get_by(), auto-join feature

2007-06-03 Thread Eric Ongerth

Well, that answers my next N anticipated questions on this list plus
essentially every unasked-yet one in my backlog.  These consistency/
predictability-of-syntax/redundancy points cut to the core of every
issue I've had and/or every time I've had to hit the docs for more
than a brief reminder.

+1 oh yeah

On Jun 3, 8:35 am, Michael Bayer [EMAIL PROTECTED] wrote:
 {super duper SQLA proclamation}





[sqlalchemy] Re: bug in OrderedDict?

2007-06-02 Thread Eric Ongerth

done and done: Ticket #585.

Patch reprinted here for the curious passerby:

    def pop(self, key=None):
        if key == None:
            try:
                key = self._list[0]
            except IndexError:
                raise IndexError('tried to pop() from an empty OrderedDict')
            result = self[key]
            dict.__delitem__(self, key)
            self._list = self._list[1:]
            return result
        elif not key in self:
            raise KeyError(key)
        else:
            self._list.remove(key)
            return dict.pop(self, key)
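To see the patched behavior in isolation, here is a minimal stand-in for a key-order-tracking dict (an illustration only, not the actual sqlalchemy.util.OrderedDict), with a simplified version of the pop() above:

```python
class MiniOrderedDict(dict):
    """Toy ordered dict mirroring the _list bookkeeping the patch fixes."""

    def __init__(self):
        super().__init__()
        self._list = []          # insertion order of keys

    def __setitem__(self, key, value):
        if key not in self:
            self._list.append(key)
        dict.__setitem__(self, key, value)

    def values(self):
        # values in insertion order -- this raised KeyError before the
        # patch, because _list still held keys that pop() had removed
        return [self[k] for k in self._list]

    def pop(self, key=None):
        if key is None:
            if not self._list:
                raise IndexError('pop() from an empty OrderedDict')
            key = self._list[0]  # no key given: pop the oldest entry
        elif key not in self:
            raise KeyError(key)
        self._list.remove(key)   # keep _list in sync (the fix)
        return dict.pop(self, key)
```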


On Jun 2, 2:30 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 sounds like a bug.  add a ticket and/or create a patch !  thanks.

 On Jun 2, 3:48 pm, Eric Ongerth [EMAIL PROTECTED] wrote:

  I noticed that if you pop() an item out of an OrderedDict, then ask
  the OrderedDict for its values(), you get a key error because the OD
  doesn't trim its ._list when the pop() occurs.




