[sqlalchemy] Re: what happens on save?

2008-04-28 Thread Eric Lemoine

Thanks Michael for the detailed explanations and the patch. This is
really helping!

2008/4/26, Michael Bayer [EMAIL PROTECTED]:

 On Apr 25, 2008, at 4:21 PM, Eric Lemoine wrote:

 
  Module sqlalchemy.orm.mapper:1198 in _postfetch
  elif not c.primary_key and c.key in params and
  self._get_state_attr_by_column(state, c) != params[c.key]:
  Module shapely.geometry.base:255 in __ne__
  return not self.equals(other)
  Module shapely.predicates:30 in __call__
  raise RuntimeError() # breakpoint FIXME
 
  As you can see, the __ne__ method of my object does get called.


 this occurs well after any attribute history detection has happened
 (which is where comparisons are supposed to happen, if needed).   The
 mapper has inserted the row, then it goes through the list of
 parameters which were inserted into the row and compares them to what
 is present on the object, so that it can detect Column-level defaults
 and other auto-generated values which need to be placed on the
 instance.  This methodology is out of date since nowadays we have an
 explicit listing of which columns were auto-generated - so try out
 the attached patch, which refines the methodology in that section and
 should solve the issue.  The patch is also ticket #1015, which needs
 test coverage before it can be committed (but is high priority for
 0.4.6).
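A sketch in plain Python of the mechanism being described (hypothetical `Geometry` and `postfetch_compare` names, purely illustrative; this is not SQLAlchemy's actual code):

```python
class Geometry(object):
    """Stand-in for a rich column type such as a Shapely geometry."""
    def __init__(self, wkb):
        self.wkb = wkb

    def __ne__(self, other):
        # A type-checking (or otherwise picky) __ne__ is exactly what
        # fires when the mapper compares the object's attribute against
        # the raw parameter it just inserted.
        if not isinstance(other, Geometry):
            raise TypeError("can only compare Geometry to Geometry")
        return self.wkb != other.wkb

def postfetch_compare(state_value, inserted_param):
    # Rough analogue of the old mapper check quoted in the traceback:
    #     self._get_state_attr_by_column(state, c) != params[c.key]
    return state_value != inserted_param
```

If `inserted_param` is a raw database value rather than a `Geometry`, the comparison raises, which is why skipping the comparison for known auto-generated columns (as the patch does) avoids the problem.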


 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: ODBC Connection is busy error

2008-04-28 Thread BruceC

We're not intentionally sharing a session between multiple
threads... :) I'll check out the code to see if there is any sharing
going on, but I'm pretty sure that's not the case.

You mention lazy-loading as if it might be playing a part in this
issue - is that right? We are using lazy & eager loading in various
parts of the app...

On Apr 28, 2:04 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 28, 2008, at 12:02 AM, Michael Bayer wrote:



  On Apr 27, 2008, at 9:58 PM, BruceC wrote:

  I don't know whether this helps, but many thanks for looking at the
  issue :)

  unfortunately it doesn't say much at all.  You're not sharing a Session
  between multiple threads, right?

 and by sharing, I mean the Session, as well as any objects that are
 attached to it, since that stack trace illustrated specifically an
 attribute lazy-loading its contents from the database.



[sqlalchemy] Patch: Ordered merge of ShardedQuery results (Was: Re: [sqlalchemy] Patch: Use generators for ShardedQuery results)

2008-04-28 Thread Kyle Schaffrick
On Sun, 27 Apr 2008 23:35:06 -0400
Kyle Schaffrick [EMAIL PROTECTED] wrote:

 Since as I mentioned, this is kind of an interesting learning project
 for me to learn some SA internals, I'd like my next step to be seeing
 if I can write something that will make a callable/closure which, when
 passed into iter_merging, will produce the least surprising ordering
 of results w/r/t what was requested in the original ShardedQuery.

I have taken a first stab at this to play with the idea, and it seems
to be doing the right thing; the patch also contains some very
rudimentary tests in ShardTest that make it voom.

The patch adds a .merge_ordering() method to ShardedQuery that allows
supplying an ordering function that will take a short (upper limit being
the number of shards involved) list of items and return the one that
comes first. The type of the items is whatever you would get back from
the query ordinarily. If you use it with an order_by that enforces the
same ordering, you can get a complete ordering of results for queries
that pull results from multiple shards.

Example:
 q = s.query(Person).order_by(Person.age.asc())
 mq = q.merge_ordering(lambda l: min(l, key=(lambda p: p.age)))

If you don't call merge_ordering, it doesn't do any merging and uses the
standard behavior of concatenating the results.
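The merge itself can be sketched in plain Python (hypothetical `iter_merging` helper following the "return the item that comes first" protocol described above; this is a sketch, not the patch code):

```python
def iter_merging(iterators, choose_first):
    """Merge several individually-sorted iterators into one ordered stream.

    `choose_first` receives a short list of candidate items (at most one
    per live iterator -- so at most one per shard) and returns the item
    that should come next.
    """
    heads = {}  # iterator -> its current front item
    for it in map(iter, iterators):
        try:
            heads[it] = next(it)
        except StopIteration:
            pass  # empty shard contributes nothing
    while heads:
        item = choose_first(list(heads.values()))
        # advance only the iterator whose head was chosen
        source = next(it for it, head in heads.items() if head is item)
        yield item
        try:
            heads[source] = next(source)
        except StopIteration:
            del heads[source]
```

For example, `list(iter_merging([[1, 4, 7], [2, 3, 9]], min))` produces a fully sorted stream, matching the `lambda l: min(l, key=...)` example above when each shard's query carries the same order_by.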

merge_ordering is probably a crappy name, so if this gets any interest
then we can figure out what else to call it. One aspect I'm not sure
about is whether merge_ordering's parameter should use the protocol
it's using now, or if it should accept something more like the key or
cmp parameters of the built-in list.sort method.

Also, I attached a better version of the iterator patch that closes the
result object like the original code did, by hooking each iterator with
a call to result.close(). I couldn't figure out if this was actually
necessary because it didn't seem consistent between Query and
ShardedQuery, but that may be for reasons I'm not seeing.

Feedback is welcome of course!

Kyle


Use a generator to concatenate results from ShardedQuerys.

Adds a function `iter_concatenating` that accepts a list of iterators, and
yields every object produced by each of them. Alter ShardedQuery to use this
function instead of loading all of the potentially large sets of result objects
into a list before concatenating them.

Also adds a convenience function to hook StopIteration on an iterator.

diff --git a/lib/sqlalchemy/orm/shard.py b/lib/sqlalchemy/orm/shard.py
--- a/lib/sqlalchemy/orm/shard.py
+++ b/lib/sqlalchemy/orm/shard.py
@@ -88,20 +88,16 @@
     def _execute_and_instances(self, context):
         if self._shard_id is not None:
             result = self.session.connection(mapper=self.mapper, shard_id=self._shard_id).execute(context.statement, **self._params)
-            try:
-                return iter(self.instances(result, querycontext=context))
-            finally:
-                result.close()
+            return util.hook_stop_iteration(self.iterate_instances(result, querycontext=context), lambda: result.close())
         else:
-            partial = []
+            partials = []
             for shard_id in self.query_chooser(self):
                 result = self.session.connection(mapper=self.mapper, shard_id=shard_id).execute(context.statement, **self._params)
-                try:
-                    partial = partial + list(self.instances(result, querycontext=context))
-                finally:
-                    result.close()
+                partials.append(util.hook_stop_iteration(self.iterate_instances(result, querycontext=context),
+                                                         lambda: result.close()))
+
             # if some kind of in memory 'sorting' were done, this is where it would happen
-            return iter(partial)
+            return util.iter_concatenating(partials)
 
     def get(self, ident, **kwargs):
         if self._shard_id is not None:
diff --git a/lib/sqlalchemy/util.py b/lib/sqlalchemy/util.py
--- a/lib/sqlalchemy/util.py
+++ b/lib/sqlalchemy/util.py
@@ -229,6 +229,30 @@
                 yield y
         else:
             yield elem
+
+def hook_stop_iteration(iterator, hook_fun):
+    """Add a hook to StopIteration on an iterator."""
+    try:
+        while True:
+            yield iterator.next()
+    except StopIteration:
+        hook_fun()
+        raise
+
+def iter_concatenating(iters):
+    """Concatenate iterables using a generator.
+
+    Yields every item produced by every iterable in the given list, in
+    concatenated order.
+    """
+    for iterable in iters:
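For reference, a self-contained Python 3 sketch of what the two helpers above do (the patch itself targets Python 2, hence `iterator.next()`; in both versions the hook fires only when the iterator is fully exhausted, not if the consumer abandons it early):

```python
def hook_stop_iteration(iterator, hook_fun):
    """Yield everything from `iterator`; once it is exhausted, fire
    `hook_fun` (e.g. lambda: result.close()) before finishing."""
    for item in iterator:
        yield item
    hook_fun()  # iterator exhausted; run the cleanup hook

def iter_concatenating(iters):
    """Lazily concatenate iterables: yield every item of each one, in
    order, without building an intermediate list (itertools.chain
    behaves the same way)."""
    for iterable in iters:
        for item in iterable:
            yield item
```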

[sqlalchemy] Re: cant update mssql table

2008-04-28 Thread Paul Johnston
Hi,

a table in an MSSQL db with a uniqueidentifier field as primary key and
 an integer field as identity; I am able to insert a row into the table
 but not update it


Can you send the code you're using, and the error you are getting? I think
this is an area we haven't particularly covered so far.

Paul




[sqlalchemy] Re: ODBC Connection is busy error

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 4:04 AM, BruceC wrote:


 We're not intentionally sharing a session between multiple
 threads... :) I'll check out the code to see if there is any sharing
 going on, but I'm pretty sure that's not the case.

 You mention lazy-loading as if it might be playing a part in this
 issue - is that right? We are using lazy & eager loading in various
 parts of the app...

the lazy loading can present an issue only to the degree that the
object still maintains a link to its originating Session.  So if the
lazy loader fires off in a thread other than the one where the Session
normally resides, you can have concurrent access to it, which, in the
case of the Session already being within a transaction, could create
concurrent access on a single connection.

Basically, if you don't share anything between threads, this kind of
issue should in theory not occur.
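The safe pattern implied here is "one Session (and one set of loaded objects) per thread". A minimal pure-Python sketch of the idea using `threading.local` (a hypothetical `PerThreadResource` helper; SQLAlchemy's `scoped_session` packages the same pattern for Sessions, so prefer that in real code):

```python
import threading

class PerThreadResource(object):
    """Hand each thread its own private instance built by `factory`.

    Stand-in for "one Session per thread": nothing created here is ever
    visible to another thread, so no locking is needed.
    """
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def get(self):
        # First access from a given thread lazily creates that thread's copy.
        if not hasattr(self._local, "value"):
            self._local.value = self._factory()
        return self._local.value
```

Each thread that calls `get()` receives its own object; passing a mapped instance between threads re-introduces exactly the concurrent-connection problem described above, because its lazy loads go back through the owning Session.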




[sqlalchemy] how return value to the web page

2008-04-28 Thread reetesh

hi all,
I have developed a login page for my project. I am unable to display a
dialogue on the page when the user enters a wrong password. Can anyone
help me with how to print the message on the web page?




[sqlalchemy] Re: help required urgent

2008-04-28 Thread Michael Bayer


On Apr 25, 2008, at 10:36 AM, sniffer wrote:


 inserted into an identity column.How do i make SA understand to leave
 the autoincrement field value i.e. not to insert anything into it and
 let the field take the default value set in the database.
 any help will be great i need to turn in this stuff in a day.



I don't use MS-SQL, but the SQLA dialect has a flag called
auto_identity_insert you can turn off: create_engine(url,
auto_identity_insert=False).  Not sure if that's the right approach
here though.




[sqlalchemy] Concrete inheritance woes

2008-04-28 Thread Gaetan de Menten
Hi there,

I've been playing with concrete inheritance a bit these days (trying
to implement it into Elixir). I've run into several problems, which
might be known limitations, but I'd like to know for sure or be shown
what I've done wrong...

The two problems I get involve a chain of classes inheriting from one
another: class C inherits from class B which inherits from class A.
The first problem is that I can't get a polymorphic load starting from
class B: I'd like it to return instances of class C, along with the
instances of class B. Querying class A works as expected though
(returning instances of A, B and C).
See multi_level_concrete.py for what I tried. (It's a
hacked/standalone version of the unit tests found in SA itself.) FWIW,
I run them with nosetests.

The second (probably related) problem is that I can't get
relationships to work polymorphically. I'd like to have a relationship
to the B class. Since I've read in the docs that the relation is not
inherited automatically, I tried duplicating it manually, but then it
doesn't work polymorphically... Anyway, see concrete_with_relations.py
for my experiments.

Thanks in advance for any pointer on this,
-- 
Gaƫtan de Menten
http://openhex.org


from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData('sqlite:///')

companies = Table('companies', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)))

managers_table = Table('managers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
    Column('company_id', Integer, ForeignKey('companies.id'))
)

engineers_table = Table('engineers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)),
    Column('company_id', Integer, ForeignKey('companies.id'))
)

hackers_table = Table('hackers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)),
    Column('company_id', Integer, ForeignKey('companies.id')),
    Column('nickname', String(50))
)

metadata.create_all()
#metadata.bind.echo = True

def test_relation_1():
    class Company(object):
        def __init__(self, name):
            self.name = name

    class Employee(object):
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name

    class Manager(Employee):
        def __init__(self, name, manager_data):
            self.name = name
            self.manager_data = manager_data
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name + " " + self.manager_data

    class Engineer(Employee):
        def __init__(self, name, engineer_info):
            self.name = name
            self.engineer_info = engineer_info
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name + " " + self.engineer_info

    class Hacker(Engineer):
        def __init__(self, name, nickname, engineer_info):
            self.name = name
            self.nickname = nickname
            self.engineer_info = engineer_info
        def __repr__(self):
            return self.__class__.__name__ + " " + self.name + " '" + \
                   self.nickname + "' " + self.engineer_info

    pjoin = polymorphic_union({
        'manager': managers_table,
        'engineer': engineers_table,
        'hacker': hackers_table
    }, 'type', 'pjoin')

    mapper(Company, companies, properties={
        'engineers': relation(Engineer, lazy=False, enable_typechecks=False)
    })
    employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
    manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
    engineer_mapper = mapper(Engineer, engineers_table,
                             inherits=employee_mapper, concrete=True,
                             polymorphic_identity='engineer',
                             properties={'company': relation(Company)})
    hacker_mapper = mapper(Hacker, hackers_table, inherits=engineer_mapper,
                           concrete=True, polymorphic_identity='hacker',
                           properties={'company': relation(Company)})

    session = create_session()
    c = Company('Super')
    session.save(Manager('Tom', 'knows how to manage things'))
    c.engineers.append(Engineer('Jerry', 'knows how to program'))
    c.engineers.append(Hacker('Kurt', 'Badass', 'knows how to hack'))
    assert 

[sqlalchemy] SQL for (col1, col2) NOT IN (SELECT ...)

2008-04-28 Thread Matthew Zwier

Hi all,

I'm trying to run a (non-ORM) query of the form:
  SELECT job_id, pathname FROM jobfilesTable WHERE (job_id, pathname)
NOT IN (SELECT job_id, pathname FROM tempTable)

After searching the docs and the mailing list, the best I've been able
to come up with is something like:

from sqlalchemy.sql.expression import _Grouping, ClauseList

s = select([jobfilesTable.c.job_id, jobfilesTable.c.pathname])
s2 = select([tempTable.c.job_id, tempTable.c.pathname])
colgroup = _Grouping(ClauseList(jobfilesTable.c.job_id,
jobfilesTable.c.pathname))
s = s.where(~colgroup.in_(s2))

It seems to generate the appropriate SQL, but I'd prefer not to have
to rely on _Grouping(), as it appears not to be part of the public
API.  Any suggestions for a more elegant way of doing this?
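Independent of how it gets rendered, the row-wise NOT IN above has simple set semantics; a plain-Python sketch (hypothetical sample data) that can serve as a sanity check for the query:

```python
# hypothetical sample rows: (job_id, pathname) pairs
jobfiles = [(1, "/data/a"), (1, "/data/b"), (2, "/data/a")]
temp = [(1, "/data/b"), (2, "/data/a")]

def not_in_subquery(rows, subquery_rows):
    """Keep the rows whose (job_id, pathname) pair does not appear in
    the subquery's rows -- the same filter the NOT IN query expresses."""
    excluded = set(subquery_rows)
    return [row for row in rows if row not in excluded]
```

Note that SQL NOT IN behaves differently when the subquery can return NULLs; the sketch assumes NULL-free columns.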

Thanks,
Matt Z.

P.S.  Creating the temporary table seen in select s2 was a bear,
involving a manual string substitution on a Table.create() bound to an
Engine with strategy=mock then feeding the result to the DB.  Are
there any plans for supporting temporary tables in SA?




[sqlalchemy] Re: SQL for (col1, col2) NOT IN (SELECT ...)

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 3:28 PM, Matthew Zwier wrote:


 Hi all,

 I'm trying to run a (non-ORM) query of the form:
  SELECT job_id, pathname FROM jobfilesTable WHERE (job_id, pathname)
 NOT IN (SELECT job_id, pathname FROM tempTable)

 After searching the docs and the mailing list, the best I've been able
 to come up with is something like:

 from sqlalchemy.sql.expression import _Grouping, ClauseList

 s = select([jobfilesTable.c.job_id, jobfilesTable.c.pathname])
 s2 = select([tempTable.c.job_id, tempTable.c.pathname])
 colgroup = _Grouping(ClauseList(jobfilesTable.c.job_id,
 jobfilesTable.c.pathname))
 s = s.where(~colgroup.in_(s2))

 It seems to generate the appropriate SQL, but I'd prefer not to have
 to rely on _Grouping(), as it appears not to be part of the public
 API.  Any suggestions for a more elegant way of doing this?

that's pretty good you came up with that.  We haven't placed explicit
support for multiple items as the subject of an IN statement.  You can
do what you have there without _Grouping by saying
ClauseList(...).self_group().   I think you're safe with that for now,
though we should add a public function for this purpose at some point.


 P.S.  Creating the temporary table seen in select s2 was a bear,
 involving a manual string substitution on a Table.create() bound to an
 Engine with strategy=mock then feeding the result to the DB.  Are
 there any plans for supporting temporary tables in SA?

if it's just a matter of saying CREATE TEMPORARY TABLE instead of
CREATE TABLE, we can accept a patch for temporary=True, sure.



 





[sqlalchemy] Re: Patch: Ordered merge of ShardedQuery results (Was: Re: [sqlalchemy] Patch: Use generators for ShardedQuery results)

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 4:34 AM, Kyle Schaffrick wrote:

 I have taken a first stab at this to play with the idea and it seems  
 to
 be doing the right thing, the patch also contains some very  
 rudimentary
 tests in ShardTest that make it voom.

very nice !

 The patch adds a .merge_ordering() method to ShardedQuery that allows
 supplying an ordering function that will take a short (upper limit  
 being
 the number of shards involved) list of items and return the one that
 comes first. The type of the items is whatever you would get back from
 the query ordinarily. If you use it with an order_by that enforces the
 same ordering, you can get a complete ordering of results for queries
 that pull results from multiple shards.

 Example:
 q = s.query(Person).order_by(Person.age.asc())
 mq = q.merge_ordering(lambda l: min(l, key=(lambda p: p.age)))

 If you don't call merge_ordering, it doesn't do any merging and uses  
 the
 standard behavior of concatenating the results.

 merge_ordering is probably a crappy name, so if this gets any  
 interest
 then we can figure out what else to call it. One aspect I'm not sure
 about is whether or not merge_ordering's parameter should use the
 protocol it's using now, or if it should accept something more like
 key or cmp parameters of the builtin list.sort method.

so, let's name it _merge_ordering() to start so we have some future  
leeway on the name.

 Also, I attached a better version of the iterator patch that closes  
 the
 result object like the original code did, by hooking each iterator  
 with
  a call to result.close(). I couldn't figure out if this was actually
  necessary because it didn't seem consistent between Query and
  ShardedQuery, but that may be for reasons I'm not seeing.

Well, in fact the result.close() at the end there is not really
needed; when all result rows are exhausted, ResultProxy will issue
close() on the cursor anyway (and if the connection itself was created
with close_with_result=True, the Connection will also be returned to
the connection pool).  So in this case it's probably better if shard.py
were using session.execute() instead of session.connection(), since
this allows the Session to make the choice, based on transactional
state, whether the Connection should be a close_with_result one or
not.  I think the reason shard.py isn't doing that right now is that
there were more parameters to be sent to connection() that
ShardedSession adds, so ShardedSession would need a corresponding
execute() method added which works in a way analogous to
Session.execute().

All in all it's a great patch, and if you can make those adjustments
that would be great.  Thanks!

- mike






[sqlalchemy] Re: SQL for (col1, col2) NOT IN (SELECT ...)

2008-04-28 Thread Matthew Zwier

Thanks for the quick reply!  I've always been quite impressed with the
quality of SA and its support.

I'm a bit swamped at work at the moment but I'll see about putting a
'CREATE TEMPORARY TABLE' patch together.

MZ

On Mon, Apr 28, 2008 at 4:09 PM, Michael Bayer [EMAIL PROTECTED] wrote:

  that's pretty good you came up with that.  We haven't placed explicit
  support for multiple items as the subject of an IN statement.  You can
  do what you have there without _Grouping by saying
  ClauseList(...).self_group().   I think you're safe with that for now
  though we should add a public function for this purpose at some point.



  if its just a matter of saying CREATE TEMPORARY TABLE instead of
  CREATE TABLE, we can accept a patch for temporary=True, sure.





[sqlalchemy] Re: Concrete inheritance woes

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 12:07 PM, Gaetan de Menten wrote:

 pjoin = polymorphic_union({
     'manager': managers_table,
     'engineer': engineers_table,
     'hacker': hackers_table
 }, 'type', 'pjoin')

 mapper(Company, companies, properties={
     'engineers': relation(Engineer, lazy=False, enable_typechecks=False)
 })
 employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
 manager_mapper = mapper(Manager, managers_table,
     inherits=employee_mapper, concrete=True,
     polymorphic_identity='manager')
 engineer_mapper = mapper(Engineer, engineers_table,
     inherits=employee_mapper, concrete=True,
     polymorphic_identity='engineer',
     properties={'company': relation(Company)})
 hacker_mapper = mapper(Hacker, hackers_table,
     inherits=engineer_mapper, concrete=True,
     polymorphic_identity='hacker',
     properties={'company': relation(Company)})

OK... mapping Employee to pjoin is fine since it's a virtual mapper.
For engineer_mapper, it needs a selectable from which it can also load
Hacker objects... this is the pjoin2 polymorphic_union you've created
in a later test.  *But* the engineer_mapper doesn't get mapped to it;
it should use it as its with_polymorphic argument (which used to be
handled by select_table):

pjoin = polymorphic_union(...)
pjoin2 = polymorphic_union(...)

employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
manager_mapper = mapper(Manager, managers_table,  
inherits=employee_mapper, concrete=True, polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers_table,
 with_polymorphic=('*', pjoin2),
 polymorphic_on=pjoin2.c.type,
 inherits=employee_mapper, concrete=True,
 polymorphic_identity='engineer')
hacker_mapper = mapper(Hacker, hackers_table, inherits=engineer_mapper,
concrete=True, polymorphic_identity='hacker')

this should in theory also fix the engineers relation on Company.
But as I've said many times (to svil, at least), concrete inheritance
is something I haven't gotten into much as of yet, largely due to the
many inherent issues with it as well as its being a generally
unpopular pattern, so there may still be issues with this setup (but
let me know, since that's how it should work).

In any case the unit tests which you were working from (I'm guessing
test/orm/inheritance/concrete.py) should be patched to include this
test (i.e. including Hacker, the polymorphic load, as well as the
relation).  Assuming it works, we should also make sure the 0.5 branch
(which isn't called that yet) can handle it too.







[sqlalchemy] Re: SQL for (col1, col2) NOT IN (SELECT ...)

2008-04-28 Thread jason kirtland

This could be expanded slightly to include 'prefixes=[]' support à la 
select() and insert().  SQLite could use that for creating full-text 
tables, e.g. 'CREATE VIRTUAL TABLE foo (...) USING ...'.

I haven't thought about this extensively, but I think I'd prefer 
prefixes=['TEMPORARY'] to a temporary=True flag until such a time as we 
can guarantee that the temp tables are cleaned up when the defining 
connection is returned to the pool.
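As a rough illustration of what a prefixes=[...] hook would render (hypothetical `render_create_table` helper, not SQLAlchemy's actual DDL compiler):

```python
def render_create_table(name, columns_sql, prefixes=()):
    """Compose 'CREATE <prefixes...> TABLE name (columns)'.

    Each prefix word is emitted between CREATE and TABLE, which covers
    both TEMPORARY and vendor keywords such as SQLite's VIRTUAL.
    """
    words = ["CREATE"] + list(prefixes) + ["TABLE", name, "(%s)" % columns_sql]
    return " ".join(words)
```

For example, `render_create_table("scratch", "id INTEGER", prefixes=["TEMPORARY"])` returns `"CREATE TEMPORARY TABLE scratch (id INTEGER)"`.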

Matthew Zwier wrote:
 Thanks for the quick reply!  I've always been quite impressed with the
 quality of SA and its support.
 
 I'm a bit swamped at work at the moment but I'll see about putting a
 'CREATE TEMPORARY TABLE' patch together.
 
 MZ
 
 On Mon, Apr 28, 2008 at 4:09 PM, Michael Bayer [EMAIL PROTECTED] wrote:
  that's pretty good you came up with that.  We haven't placed explicit
  support for multiple items as the subject of an IN statement.  You can
  do what you have there without _Grouping by saying
  ClauseList(...).self_group().   I think you're safe with that for now
  though we should add a public function for this purpose at some point.



  if its just a matter of saying CREATE TEMPORARY TABLE instead of
  CREATE TABLE, we can accept a patch for temporary=True, sure.

 
  





[sqlalchemy] Re: SQL for (col1, col2) NOT IN (SELECT ...)

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 4:41 PM, jason kirtland wrote:


 This could be expanded slightly to include 'prefixes=[]' support ala
 select() and insert().  Sqlite could use that for creating full text
 tables, e.g. 'CREATE VIRTUAL TABLE foo (...) USING ...'.

 I haven't thought about this extensively but I think I'd prefer
 prefixes=['TEMPORARY'] to a temporary=True flag until such a time as  
 we
 can guarantee that the temp tables are cleaned up when the defining
 connection is returned to the pool.

This is all fine with me...





[sqlalchemy] Re: Using declarative when imports happen before create_engine() ??

2008-04-28 Thread Allen Bierbaum

So, if I understand this right, I could import a base module that
does a lazy creation of the Base class with a metadata object and then
just use that base class everywhere I need it for the declarative
class definitions.  Then at a later time (before I use the mapped
classes), I could go and bind the metadata for the Base class to an
engine for use.

Correct?  (I apologize if I used the terms incorrectly).

If this is true, then I think I see how I can solve my problem.

-Allen


On Sun, Apr 27, 2008 at 6:28 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Apr 27, 2008, at 8:25 AM, Allen Bierbaum wrote:

   The problem that as I understand it, to use declarative, you can't
   import an module that defines a table-based object until after some
   initialization code has been run to connect to a database and create a
   'Base' class for the declarative layer.

  This is not true; the declarative extension serves as a holding zone
  for a MetaData object; like the MetaData object, it requires no
  association with any database engine at any time.   The initial example
  in the declarative documentation, which I am going to change right
  now, illustrates the engine being associated with the underlying
  MetaData just as an example.    The only time a database connection
  is needed is when you are ready to query the database.   You can
  create your Session and bind it to a newly created engine right before
  the first query is issued, if you like.


  





[sqlalchemy] Re: Using declarative when imports happen before create_engine() ??

2008-04-28 Thread Michael Bayer


On Apr 28, 2008, at 5:42 PM, Allen Bierbaum wrote:


 So, if I understand this right, I could import a base module that
 does a lazy creation of the Base class with a metadata object and then
 just use that base class everywhere I need it for the declarative
 class definitions.  Then at a later time (before I use the mapped
 classes), I could go and bind the metadata for the Base class to an
 engine for use.

 Correct?  (I apologize if I used the terms incorrectly).

 If this is true, then I think I see how I can solve my problem.

that's right.  the whole idea of declarative_base is that it's just a  
common base class from which all your other classes inherit.  then you  
can plug in whatever engine configuration is needed at any point in time  
and it will become active for all Base-inheriting classes.
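The lazy-binding pattern described above can be sketched as follows. This is a hypothetical illustration using the modern SQLAlchemy API (`declarative_base` from `sqlalchemy.orm`, which differs from the 0.4 API discussed in this thread); the model name `User` is invented for the example:

```python
from sqlalchemy import Column, Integer, String, create_engine
try:
    from sqlalchemy.orm import declarative_base, Session  # SQLAlchemy 1.4+
except ImportError:  # pragma: no cover - older SQLAlchemy
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

# "models.py": no engine is needed at import time -- the Base class
# only acts as a holding zone for a MetaData object.
Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# "main.py": the engine is created and bound only when we are ready
# to talk to a database, well after the model module was imported.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(User(name="allen"))
session.commit()
print(session.query(User).filter_by(name="allen").count())  # prints 1
```

The key point is that `Base` and the mapped classes never reference an engine; only the `Session` (or the `create_all()` call) does, and that can happen arbitrarily late.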





[sqlalchemy] Patch: Ordered merge of ShardedQuery results (Was: Re: [sqlalchemy] Patch: Use generators for ShardedQuery results)

2008-04-28 Thread Kyle Schaffrick
On Mon, 28 Apr 2008 16:22:44 -0400
Michael Bayer [EMAIL PROTECTED] wrote:
 
 so, let's name it _merge_ordering() to start so we have some future  
 leeway on the name.
 

Changes made.

 Well, in fact the result.close() at the end there is not really  
 needed; when all result rows are exhausted, ResultProxy will issue  
 close() on the cursor anyway (and if the connection itself was
 created with close_with_result=True, the Connection will also be
 returned to the connection pool).  So in this case it's probably
 better if shard.py were using session.execute() instead of
 session.connection(), since this allows the Session to make the
 choice, based on transactional state, whether the Connection should be
 close_with_result or not.  I think the reason shard.py isn't doing
 that right now is because there were more parameters to be sent to
 connection() that ShardedSession adds, so ShardedSession would need a
 corresponding execute() method added which works in an analogous way
 to Session.execute().

Alright, I removed the result.close() hook code from the first patch.
Makes things considerably simpler. It did occur to me that the original
try-finally block that was doing this would also close the results in
the case of an exception being thrown. Should I deal with this case?

Also, I haven't looked into writing such an override for
ShardedQuery.execute(), but suppose I could see about doing it if it
would make things cleaner.

 
 All in all its a great patch and if you can make those adjustments  
 that would be great.  Thanks !
 
 - mike

Great :) .. I've gotten a lot of use out of SA, it's got a great
community and is a joy to hack on. Glad I can contribute something to
it.

I'd really like to throw more testing at _merge_ordering, I'll try to
come up with something stand-alone, and then see about working it into
SA's test suite. The framework looks quite involved and I haven't had a
chance to grok it yet.

Kyle


Use a generator to concatenate results from ShardedQuerys.

Adds a function `iter_concatenating` that accepts a list of iterators, and
yields every object produced by each of them. Alter ShardedQuery to use this
function instead of loading all of the potentially large sets of result objects
into a list before concatenating them.
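The lazy concatenation this commit message describes is the classic chained-generator pattern; a minimal self-contained sketch in modern Python (the patch itself targets Python 2, hence its use of `iterable.next()`):

```python
import itertools

def iter_concatenating(iters):
    """Lazily yield every item from each iterable in the list, in order.

    No intermediate list is built, so large result sets from each shard
    are streamed rather than materialized up front.
    """
    for iterable in iters:
        for item in iterable:
            yield item

parts = [iter([1, 2]), iter([3]), (x * x for x in [2, 3])]
print(list(iter_concatenating(parts)))        # [1, 2, 3, 4, 9]

# The stdlib equivalent is itertools.chain:
print(list(itertools.chain([1, 2], [3])))     # [1, 2, 3]
```

`itertools.chain` (or, for a list of iterables, `itertools.chain.from_iterable`) performs the same job in the standard library.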

diff --git a/lib/sqlalchemy/orm/shard.py b/lib/sqlalchemy/orm/shard.py
--- a/lib/sqlalchemy/orm/shard.py
+++ b/lib/sqlalchemy/orm/shard.py
@@ -88,20 +88,15 @@
     def _execute_and_instances(self, context):
         if self._shard_id is not None:
             result = self.session.connection(mapper=self.mapper, shard_id=self._shard_id).execute(context.statement, **self._params)
-            try:
-                return iter(self.instances(result, querycontext=context))
-            finally:
-                result.close()
+            return self.iterate_instances(result, querycontext=context)
         else:
-            partial = []
+            partials = []
             for shard_id in self.query_chooser(self):
                 result = self.session.connection(mapper=self.mapper, shard_id=shard_id).execute(context.statement, **self._params)
-                try:
-                    partial = partial + list(self.instances(result, querycontext=context))
-                finally:
-                    result.close()
+                partials.append(self.iterate_instances(result, querycontext=context))
+
             # if some kind of in memory 'sorting' were done, this is where it would happen
-            return iter(partial)
+            return util.iter_concatenating(partials)
 
     def get(self, ident, **kwargs):
         if self._shard_id is not None:
diff --git a/lib/sqlalchemy/util.py b/lib/sqlalchemy/util.py
--- a/lib/sqlalchemy/util.py
+++ b/lib/sqlalchemy/util.py
@@ -229,6 +229,21 @@
                 yield y
         else:
             yield elem
+
+def iter_concatenating(iters):
+    """Concatenate iterables using a generator.
+
+    Yields every item produced by every iterable in the given list, in
+    concatenated order.
+    """
+    for iterable in iters:
+        try:
+            while True:
+                yield iterable.next()
+
+        # Begin yielding stuff off the next iterator.
+        except StopIteration:
+            pass
 
 class ArgSingleton(type):
     instances = weakref.WeakValueDictionary()
Add ability to do ordered merge of ShardedQuery results.

`iter_merging` creates an iterator that yields an ordered merge of the outputs
of the list of iterators passed in. The ordering is determined by a function
`ordering` which takes a list of
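The patch description is cut off here, but the general technique it names — an ordered merge of several already-sorted result streams, as produced by per-shard `ORDER BY` queries — can be sketched with the standard library's `heapq.merge`. The function name `iter_merging` below is a hypothetical stand-in, not the actual `_merge_ordering` from the patch:

```python
import heapq

def iter_merging(iters):
    # Lazily merge several already-ordered iterators into one ordered
    # stream; heapq.merge keeps only one pending item per input, so large
    # shard results are never materialized into lists.
    return heapq.merge(*iters)

# Three shards, each returning rows in ascending order:
shards = [iter([1, 4, 9]), iter([2, 3, 10]), iter([5, 6])]
print(list(iter_merging(shards)))  # [1, 2, 3, 4, 5, 6, 9, 10]
```

A hand-rolled version would hold the current head of each iterator in a list, repeatedly pick the smallest head, yield it, and refill from that iterator — which matches the shape the truncated description suggests for the `ordering` callable.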