[sqlalchemy] Re: how to display all the tables of my DB

2007-03-23 Thread vkuznet

Hi,
I've used slightly different approach:

# assuming the usual "from sqlalchemy import *" and an existing dbengine;
# pick the statement for your database:
sel = "SELECT table_name FROM all_tables WHERE owner='XXX'"   # Oracle
sel = "show tables"                                           # MySQL
sel = "SELECT name FROM SQLITE_MASTER WHERE type='table'"     # SQLite

con = dbengine.connect()
metadata = DynamicMetaData()
metadata.connect(dbengine)   # bind the metadata so autoload=True can reflect
tList = con.execute(sel)
tables = []
for t in tList:
    tables.append(Table(t[0], metadata, autoload=True))

that will give you a list of auto-loaded Table objects.
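
A quick sketch of what you can then do with them (just an illustration;
the output depends on your schema):

for t in tables:
    print t.name, [c.name for c in t.columns]
    print select([t], limit=5).execute().fetchall()
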
Valentin.

On Mar 22, 3:59 pm, Mando [EMAIL PROTECTED] wrote:
 I launched it, but I receive this error message:

 Traceback (most recent call last):
   File "autocode.py", line 20, in <module>
     tbl = Table(tname, metadata, schema=schema, autoload=True);
   File "build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py", line 143, in __call__
   File "build/bdist.macosx-10.3-fat/egg/sqlalchemy/engine/base.py", line 505, in reflecttable
   File "build/bdist.macosx-10.3-fat/egg/sqlalchemy/databases/postgres.py", line 385, in reflecttable
 KeyError: 'information_schema.cardinal_number'

 Some ideas?

 thanks again!





[sqlalchemy] Re: Help with saving mapped objects to pylons session

2007-03-23 Thread Michael Bayer

It's likely because the process in which you are trying to unpickle
does not have the mappers compiled.  The mapper compilation step is
where the 'lines' attribute gets added to Invoice at the class level.
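
A minimal sketch of what the unpickling process needs (hedged: this
assumes the 0.3-era class_mapper()/Mapper.compile() API; issuing any
query against the mappers before unpickling has the same effect):

import pickle
from sqlalchemy.orm import class_mapper

import model                            # hypothetical module that runs the db.mapper(...) calls
class_mapper(model.Invoice).compile()   # compile the mappers so 'lines' exists on Invoice

i = pickle.load(file('test'))           # now unpickles cleanly
print i.lines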

On Mar 22, 2007, at 8:30 PM, HD Mail wrote:


 Hi,

 I am having problems with saving/restoring mapped objects from a
 pylons session.

 I am getting the "no attribute" error on a list attribute (which is a
 one-to-many relationship) when pylons tries to unpickle the object.

 I've read a previous post where Michael explains why this happens.
 I have turned off all lazy loaders. I have also tried to implement
 __setstate__,
 but not sure what I should be doing in there.

 This is my example:

 class Invoice: pass
 class InvoiceLine: pass

 db.mapper(InvoiceLine, db.invoice_line_table)
 db.mapper(Invoice, db.invoice_table,
properties = {
'client': relation(Client, lazy=True),
'lines': relation(InvoiceLine, lazy=True),
}
 )

 This works:

 import pickle
 i = Invoice()
 pickle.dump(i, file('test', 'w'))
 i = pickle.load(file('test'))

 if I then do this
 il = InvoiceLine()
 i.lines.append(il)
 pickle.dump(i, file('test', 'w'))

 I get this:
 File "/home/huy/apps/sqlalchemy/lib/sqlalchemy/orm/attributes.py",
 line 452, in __setstate__
 AttributeError: type object 'Invoice' has no attribute 'lines'

 How can I get around this problem?

 Thanks

 Huy



 





[sqlalchemy] Re: alias not used with relationship to arbitary select

2007-03-23 Thread Michael Bayer

I think the issue is you can't put a task_status ordering in your
Task mapper, since that table is not part of its mapping.

http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructingthequerywhenItrytoaddaWHEREORDERBYLIMITetc.whichreliesupontheOUTERJOIN
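
One way around it, sketched at the SQL layer (hedged: this assumes the
0.3-era Query.instances(result) call and that select() kwargs like
from_obj/order_by behave as usual; disabling the eager 'status' load
for this one query may also be needed):

# build the outer join and the cross-table ordering explicitly, instead of
# relying on the aliased eager-load join, then let the mapper hydrate the rows
j = db.task.outerjoin(task_status,
                      task_status.c.code == db.task.c.status_code)
stmt = select([db.task], from_obj=[j],
              order_by=[db.task.c.project_id, desc(task_status.c.seq_no)])
tasks = session.query(Task).instances(stmt.execute())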



On Mar 22, 2007, at 9:44 PM, HD Mail wrote:


 Hi,

 Sorry for the long email; below is a dump of the SA-generated sql.
 I have a relationship in one of my mappers to an arbitrary select
 (task_status).
 My mapper also orders the result by a column in this arbitrary select.

 I am using the latest SA from trunk. This used to work in an earlier
 version.

 The sql is generated from the following mapping

 task_status = select(
 [sys_enum.c.id,
 sys_enum.c.key_code.label('code'),
 sys_enum.c.value.label('description'),
 sys_enum.c.seq_no],
 sys_enum.c.type_code=='status'
 ).alias('status')

 db.mapper(Task, db.task,
 properties = {
 'created_user': relation(Employee, lazy=False,
 primaryjoin=db.task.c.created_by==db.employee.c.employee_id),
 'assigned_user': relation(Employee, lazy=False,
 primaryjoin=db.task.c.assigned_to==db.employee.c.employee_id),
 'project': relation(Project, lazy=False),
 'status': relation(status_mapper, lazy=False,
 primaryjoin=and_(task_status.c.code==db.task.c.status_code),
 foreignkey=db.task.c.status_code
 ),
 'task_details': relation(TaskDetail, lazy=True,
 cascade='all, delete-orphan'),
 },
 order_by = [db.task.c.project_id, desc(task_status.c.seq_no),
 db.task.c.started_on, db.task.c.finished_on,
 db.task.c.sort_no, db.task.c.priority]
 )

 Error(SQLError): (ProgrammingError) relation "status" does not exist
 'SELECT anon_824a.code AS anon_824a_code, anon_824a.seq_no AS
 anon_824a_seq_no, anon_824a.id AS anon_824a_id,  
 anon_824a.description AS
 anon_824a_description, task.project_id AS task_project_id,
 task.description AS task_description, task.task_id AS task_task_id,
 task.status_code AS task_status_code, task.created_on AS
 task_created_on, task.created_by AS task_created_by, task.sort_no AS
 task_sort_no, project_6f4a.project_name AS project_6f4a_project_name,
 project_6f4a.chargeable AS project_6f4a_chargeable,  
 project_6f4a.enabled
 AS project_6f4a_enabled, project_6f4a.parent_id AS
 project_6f4a_parent_id, project_6f4a.rate AS project_6f4a_rate,
 client_3f15.phone_area_code AS client_3f15_phone_area_code,
 client_3f15.fax_no AS client_3f15_fax_no, client_3f15.address1 AS
 client_3f15_address1, client_3f15.client_name AS
 client_3f15_client_name, client_3f15.address3 AS client_3f15_address3,
 client_3f15.fax_area_code AS client_3f15_fax_area_code,  
 client_3f15.abn
 AS client_3f15_abn, client_3f15.phone_no AS client_3f15_phone_no,
 client_3f15.post_code AS client_3f15_post_code,  
 client_3f15.client_id AS
 client_3f15_client_id, client_3f15.country_name AS
 client_3f15_country_name, client_3f15.state_code AS
 client_3f15_state_code, client_3f15.address2 AS client_3f15_address2,
 client_3f15.enabled AS client_3f15_enabled, project_6f4a.client_id AS
 project_6f4a_client_id, project_6f4a.project_id AS
 project_6f4a_project_id, project_6f4a.project_code AS
 project_6f4a_project_code, task.finished_on AS task_finished_on,
 employee_6c57.first_name AS employee_6c57_first_name,
 employee_6c57.employee_id AS employee_6c57_employee_id,
 employee_6c57.middle_name AS employee_6c57_middle_name,
 employee_6c57.employee_code AS employee_6c57_employee_code,
 employee_6c57.home_phone_no AS employee_6c57_home_phone_no,
 employee_6c57.tfn AS employee_6c57_tfn, employee_6c57.mobile_no AS
 employee_6c57_mobile_no, employee_6c57.client_id AS
 employee_6c57_client_id, employee_6c57.surname AS  
 employee_6c57_surname,
 employee_6c57.home_area_code AS employee_6c57_home_area_code,
 employee_6c57.password AS employee_6c57_password, task.assigned_to AS
 task_assigned_to, task.due_on AS task_due_on, task.estimate AS
 task_estimate, employee_ba6b.first_name AS employee_ba6b_first_name,
 employee_ba6b.employee_id AS employee_ba6b_employee_id,
 employee_ba6b.middle_name AS employee_ba6b_middle_name,
 employee_ba6b.employee_code AS employee_ba6b_employee_code,
 employee_ba6b.home_phone_no AS employee_ba6b_home_phone_no,
 employee_ba6b.tfn AS employee_ba6b_tfn, employee_ba6b.mobile_no AS
 employee_ba6b_mobile_no, employee_ba6b.client_id AS
 employee_ba6b_client_id, employee_ba6b.surname AS  
 employee_ba6b_surname,
 employee_ba6b.home_area_code AS employee_ba6b_home_area_code,
 employee_ba6b.password AS employee_ba6b_password, task.resolution AS
 task_resolution, task.started_on AS task_started_on, task.category AS
 task_category, task.priority AS task_priority \nFROM task LEFT OUTER
 JOIN (SELECT sys_enum.id AS id, sys_enum.key_code AS code,
 sys_enum.value AS description, sys_enum.seq_no AS seq_no \nFROM  
 sys_enum
 \nWHERE sys_enum.type_code = %(sys_enum_type_code)s) 

[sqlalchemy] Re: ORACLE db name in table definitions

2007-03-23 Thread vkuznet

Thanks,
it works.
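
For reference, a minimal sketch of that approach (the column definition
here is a placeholder; assumes the usual "from sqlalchemy import *"):

T = Table('T', metadata,
          Column('id', Integer, primary_key=True),
          schema='DBNAME')
print select([T])
# the schema and table name are now rendered as two separate identifiers
# (DBNAME.T) rather than one quoted "DBNAME.T" name, so Oracle accepts it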

On Mar 23, 10:56 am, Michael Bayer [EMAIL PROTECTED] wrote:
 don't stick "foo.bar" in your table name.  use the schema='DBNAME'
 parameter on your Table.

 On Mar 23, 2007, at 9:41 AM, vkuznet wrote:



  Hi,
  I've got a new DB to handle and the account is setup in a way that I
  need to specify dbname for selects, e.g. select * from DBNAME.T

  So, I created a table
  T=Table('DBNAME.T')

  and once I used it
  res=select([T]).execute()
  the sqlalchemy constructed the following query with *quotes*

   select "DBNAME.T".id from "DBNAME.T"

  the problem is that ORACLE doesn't accept it, but if I manually drop
  *quotes* from this select and pass it to sqlplus everything works,
  i.e.

  select DBNAME.T.id from DBNAME.T

  How to disable *quoting* for SQLAlchemy?

  Thanks,
  Valentin.





[sqlalchemy] [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-23 Thread Gaetan de Menten
Hi list,

The recent work that Michael has done about bringing SelectResults
functionality to Query made me think: what if that allowed me to do
what I've always wanted ever since I started using ORMs? The thing
I've wanted is to have one-to-many relations that you can filter when
you access them, and have them generate the correct query behind the
scenes without fetching too many rows from the DB.

So I dived into the code a bit deeper than I had up until now... In
the end, I didn't use the SelectResults stuff at all, but I've managed
to come up with a working patch to do what I wanted anyway. The most
interesting part of it is the addition of a DeferredInstrumentedList,
which takes a callable as an argument and actually calls it only when
the list is accessed.

This allows you to do stuff like this:

for address in user.addresses.filter(address_table.c.postcode == 5000):
 print address.street

The patch is quite experimental (I'm pretty sure I broke some stuff).
Also, it only includes a proof-of-concept filter method, but it should
be almost trivial to add things like: limit, offset, order_by,
distinct, list slicing and so on...
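
Conceptually, the deferred-collection idea boils down to something like
this generic sketch (hypothetical names, not the patch's actual code):

class DeferredList(object):
    # list-like wrapper that only runs its query callable when accessed
    def __init__(self, query_factory, criteria=()):
        self._query_factory = query_factory   # callable(criteria) -> iterable
        self._criteria = tuple(criteria)
        self._data = None

    def filter(self, criterion):
        # generative: returns a new deferred list carrying the extra
        # criterion, without loading anything yet
        return DeferredList(self._query_factory,
                            self._criteria + (criterion,))

    def _load(self):
        if self._data is None:
            self._data = list(self._query_factory(self._criteria))
        return self._data

    def __iter__(self):
        return iter(self._load())

    def __len__(self):
        return len(self._load())

In the patch, user.addresses would be such an object whose factory
issues the relation's lazy query with any extra criteria appended.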

And anyway, I'm not sure it's the correct way to go. I _think_ the
best way to go would be if InstrumentedList held a preconfigured
query object that you could alter the same way you can alter the
normal query objects, and which would only be executed when one
accesses the list elements. But to get to this point would need a
major refactoring of the code, while what I've done is pretty simple.

I also attach an (Elixir-made) example demonstrating the thing.

Michael, if you are interested in this, I'll gladly translate the
example into a plain SQLAlchemy unittest. And more generally, if there
is anything I can do (within the limits of my knowledge of SQLAlchemy)
to get this done the correct way and included into the trunk, please
tell me. I'm so thrilled with this new toy, I hope it'll be possible
to include it ! :)

-- 
Gaëtan de Menten
http://openhex.org




deferred_list_relation.diff
Description: application/text
from elixir import * 

class A(Entity):
has_field('name', String(30))
has_many('b', of_kind='B')

class B(Entity):
has_field('name', String(30))
belongs_to('a', of_kind='A')
has_field('extra', Integer)

metadata.connect('sqlite:///')
create_all()

a1 = A(name='a1')
a2 = A(name='a2')
b1 = B(name='b1', a=a1, extra=10)
b2 = B(name='b2', a=a2)
b3 = B(name='b3', a=a1, extra=5)

objectstore.flush()
objectstore.clear()

metadata.engine.echo = True
a = A.get_by(name='a1')
print [b.name for b in a.b.filter(B.c.extra > 6)]  # comparison operator lost in the archive; '>' assumed



[sqlalchemy] Eager loading self referential mapper

2007-03-23 Thread Koen Bok

Could anyone please explain to me a little why self-referential mappers
cannot be eager-loaded? Is this not yet integrated in SQLAlchemy, or is
it theoretically impossible or impractical?

Ciao

Koen





[sqlalchemy] Re: Eager loading self referential mapper

2007-03-23 Thread Andreas Jung



--On 23. März 2007 16:14:26 + Koen Bok [EMAIL PROTECTED] wrote:



Could anyone please explain to me a little why self-referential mappers
cannot be eager-loaded? Is this not yet integrated in SQLAlchemy, or is
it theoretically impossible or impractical?


Do you really want to load a possibly *huge* tree using eager loading?

-aj



[sqlalchemy] Re: Eager loading self referential mapper

2007-03-23 Thread Michael Bayer

it's not theoretically impossible, but it is theoretically
ridiculously complicated to do automatically, and it would also have to
limit the depth arbitrarily to one or two levels (which means someone
wants level 5 off their eager loader, and then they come back to
complain).  each level of nodes for an adjacency-list relationship
requires a distinct self-join.  each self-join would need to be
aliased too, which then gets pretty complicated when that self-join
is injected into a larger mapping of enclosing eager loaders, etc.
plus all the code that right now knows "stop eager loading when we
come back to our own mapper" becomes a lot more complicated,
since now we are counting levels and stuff like that.  add onto that
the really insane self-joins that already arise with polymorphic
mappings, and I'd be fixing bugs in that code for ten years.  the
queries are way too huge.  not to mention the efficiency issues that
start to arise when you start self-joining too much.

So, if you really need to load a lot of self-ref nodes in one query,
you query all the nodes yourself and then attach them.  that's what
the example examples/adjacencytree/byroot_tree.py illustrates.
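
That manual approach boils down to something like this sketch (generic
code, not the actual byroot_tree.py; TreeNode, nodes_table and the
root_id/parent_id columns are assumed names):

# load every node of the tree in one query, then wire up the
# parent/child structure in Python instead of eager-loading self-joins
nodes = session.query(TreeNode).select(nodes_table.c.root_id == some_root_id)
by_id = dict((n.id, n) for n in nodes)

children = {}                       # parent id -> list of child nodes
for n in nodes:
    children.setdefault(n.parent_id, []).append(n)

def walk(node, depth=0):
    print '  ' * depth + node.name
    for child in children.get(node.id, []):
        walk(child, depth + 1)

walk(by_id[some_root_id])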



On Mar 23, 2007, at 12:14 PM, Koen Bok wrote:


 Could anyone please explain to me a little why self-referential mappers
 cannot be eager-loaded? Is this not yet integrated in SQLAlchemy, or is
 it theoretically impossible or impractical?

 Ciao

 Koen


 





[sqlalchemy] Re: [PATCH] Filtered one_to_many relationships (Experimental)

2007-03-23 Thread Michael Bayer


OK, this is actually something people have asked for a lot.  in the
beginning, recently, etc.  also for different reasons... i.e.
convenience, or performance, etc.   So, first off let me start by
illustrating how this use case is done right now.   Assuming your
Address mapper has a backref "user" to the User mapper, it's just:

for address in session.query(Address).filter_by(user=someuser).filter(address_table.c.postcode == 5000):
    print address.street

and of course, if you don't have the backref, then we go back to the
regular SQL stuff:

for address in session.query(Address).filter(user.id==someuser.id).filter(address_table.c.postcode == 5000):
    print address.street

in the second example, we are figuring out the lazy criterion
ourselves.  in most cases this is pretty easy to figure out.  but we
do have some exotic mappings these days, primarily because of
polymorphic union queries, and it might not be as trivial to make up
the join condition.  But the join conditions and bind params which
have been calculated by the LazyLoader are just sitting there; they can
currently be pulled out with a little non-API attribute access, but
I've no problem with adding some API-level accessors to get at the
Query object calculated for a particular property (i.e. what you are
using in your patch internally).

So if you want to take the above and make it all slick, all you have
to do is stick the above query inside a property:

class User(object):
    def _get_addresses(self):
        return object_session(self).query(Address).filter(user.id==someuser.id)
    addresses = property(_get_addresses)

then you can just say pretty much what you want:

for address in someuser.addresses.filter(address_table.c.postcode == 5000):
    print address.street

plus, the addresses object is a Query!  so you already have the
full interface, and it has __iter__() and __getitem__()!  so the list
operations on the base object work just fine (and they even work like
SQLObject's)

for address in someuser.addresses:   # __iter__() fires off the query!

for address in someuser.addresses[3:5]:   # slice applies OFFSET and LIMIT!

now let's look at the way your patch does it.

addresses = user.addresses.filter(address_table.c.postcode == 5000)

seems easy, right?  remember that user is now in the session.
anyone else that queries for user will get that same User
instance.  but the rest of the app is going to assume normal
relationship semantics on that collection... which means:

print user.addresses   # <-- FAIL - this code assumes the list is
complete; it's not, it's already been loaded incompletely

print someaddress in user.addresses  # <-- FAIL - the address is not
present

user.addresses.remove(someaddress)  # <-- ERROR - the address is not
present

user.addresses.insert(5, someotheraddress)  # <-- FAIL - the list is
incomplete, ordering will be incorrect

session.flush()  # <-- FAIL - we have to figure out what items were
added/removed/unchanged from the collection... but the data's
incomplete!

This is an issue introduced by our usage of sessions, identity maps,
etc., things that simpler ORMs don't have to worry about.  So that's
the part of this I don't want to get into (and I'm hoping you don't,
either).  Yes, the use case you want is desirable, and I've no
problem adding in hooks to make it possible.  but no, I don't think it
should be injected into the base attribute lazy-loading operation.
I think the semantics of a Query for related items and those of a
mapped relationship are just plain different, even though
implementation-wise, on the view side at least, they are both just a
SQL query, with the primary issue being that all mutating/uow-related
operations go out the window with an incomplete collection.
from a performance point of view, the manipulation of huge
collections could be achieved via loading "stub" objects, which is
something discussed long ago, and which is how Hibernate addresses it
(called "extra-lazy" loading).  so if we ever did that, that's how we
would do it (i.e. a complete collection of incomplete objects).

anyway, back to the query side: I've even been considering making folks
happy with this and adding a new kind of mapped property called, i
don't know... query_relation(), relquery(), lazyloader(),
something like that.  and, right in the core -- how often does that
happen?  it would copy the setup style of relation() but be a lot
more view-oriented.

mapper(SomeClass, table, properties={
'addresses':lazyloader(Address)
})

and there you go!  it does the same thing as the manual property code
I illustrated, and returns a Query from which you can do whatever.
We also had a guy who wanted to do a more complex primaryjoin 

[sqlalchemy] Re: Eager loading self referential mapper

2007-03-23 Thread Koen Bok

Yeah, I am sure the tree will never be deeper than 3 or 4 levels.

Koen

On Mar 23, 5:24 pm, Andreas Jung [EMAIL PROTECTED] wrote:
 --On 23. März 2007 16:14:26 + Koen Bok [EMAIL PROTECTED] wrote:



  Could anyone please explain to me a little why self-referential mappers
  cannot be eager-loaded? Is this not yet integrated in SQLAlchemy, or is
  it theoretically impossible or impractical?

 Do you really want to load a possibly *huge* tree using eager loading?

 -aj






[sqlalchemy] SQLAlchemy 0.3.6 released

2007-03-23 Thread Michael Bayer

This is the first post-Pycon release and includes the new Query  
object which merges in all of the functionality of SelectResults.

More docs have been added, including some description of the new
Query methods.  The generated documentation has been broken up
into separate pages per module, with navigation added for
module-level functions.  Docstrings are also much more PEP compliant
and are formatted using docutils.
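
For example, the merged-in generative methods can be chained like this
(a sketch based on the method list in the changelog below; the User
class and user_table are placeholders):

q = session.query(User)
q = q.filter_by(name='ed')                     # "by"-style criterion
q = q.filter(user_table.c.id > 5).distinct()   # select criterion + DISTINCT
users = q.list()                               # evaluates the criteria, returns results

page = session.query(User)[3:5]                # range access applies LIMIT/OFFSET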

changelog can be viewed at:

http://www.sqlalchemy.org/CHANGES


0.3.6
- sql:
 - bindparam() names are now repeatable!  specify two
   distinct bindparam()s with the same name in a single statement,
   and the key will be shared.  proper positional/named args  
translate
   at compile time.  for the old behavior of aliasing bind  
parameters
   with conflicting names, specify unique=True - this option is
   still used internally for all the auto-generated (value-based)
   bind parameters.

 - slightly better support for bind params as column clauses, either
   via bindparam() or via literal(), i.e. select([literal('foo')])

 - MetaData can bind to an engine either via url or engine kwargs
   to the constructor, or by using the connect() method. BoundMetaData is
   identical to MetaData except the engine_or_url param is required.
   DynamicMetaData is the same and provides thread-local connections
   by default.

 - exists() becomes usable as a standalone selectable, not just in a
   WHERE clause, i.e. exists([columns], criterion).select()

 - correlated subqueries work inside of ORDER BY, GROUP BY

 - fixed function execution with explicit connections, i.e.
   conn.execute(func.dosomething())

 - use_labels flag on select() won't auto-create labels for literal text
   column elements, since we can make no assumptions about the text. to
   create labels for literal columns, you can say "somecol AS somelabel",
   or use literal_column("somecol").label("somelabel")

 - quoting won't occur for literal columns when they are proxied into
   the column collection for their selectable (is_literal flag is
   propagated). literal columns are specified via
   literal_column("somestring").

 - added fold_equivalents boolean argument to Join.select(), which
   removes 'duplicate' columns from the resulting column clause that
   are known to be equivalent based on the join condition. this  
is of
   great usage when constructing subqueries of joins which Postgres
   complains about if duplicate column names are present.

 - fixed use_alter flag on ForeignKeyConstraint [ticket:503]

 - fixed usage of 2.4-only reversed in topological.py [ticket:506]

 - for hackers, refactored the visitor system of ClauseElement and
   SchemaItem so that the traversal of items is controlled by the
   ClauseVisitor itself, using the method visitor.traverse(item).
   accept_visitor() methods can still be called directly but will  
not
   do any traversal of child items. ClauseElement/SchemaItem now  
have a
   configurable get_children() method to return the collection of  
child
   elements for each parent object. This allows the full  
traversal of
   items to be clear and unambiguous (as well as loggable), with an
   easy method of limiting a traversal (just pass flags which are
   picked up by appropriate get_children() methods). [ticket:501]

 - the else_ parameter to the case statement now properly works  
when
   set to zero.

- orm:
 - the full featureset of the SelectResults extension has been  
merged
   into a new set of methods available off of Query.  These methods
   all provide generative behavior, whereby the Query is copied
   and a new one returned with additional criterion added.
   The new methods include:

   filter() - applies select criterion to the query
   filter_by() - applies by-style criterion to the query
   avg() - return the avg() function on the given column
   join() - join to a property (or across a list of properties)
   outerjoin() - like join() but uses LEFT OUTER JOIN
   limit()/offset() - apply LIMIT/OFFSET
   range-based access which applies limit/offset:
  session.query(Foo)[3:5]
   distinct() - apply DISTINCT
   list() - evaluate the criterion and return results

   no incompatible changes have been made to Query's API and no  
methods
   have been deprecated.  Existing methods like select(),  
select_by(),
   get(), get_by() all execute the query at once and return results
   like they always did.  join_to()/join_via() are still there  
although
   the generative join()/outerjoin() methods are easier to use.

 - the return value for multiple mappers used with instances() now
   returns a cartesian product of the requested list of mappers,
   represented as a list of 

[sqlalchemy] Video sharing social network Teenwag seeks great Python hackers we ll wear Python T-shirts at startup school $100K $5K sign on $2k referral

2007-03-23 Thread John

Video sharing social network Teenwag seeks great Python hackers; we'll
wear Python T-shirts at startup school. $100K, $5K sign-on, $2K
referral.


http://www.teenwag.com/showvideo/352

