[sqlalchemy] Re: relations with additional criteria

2009-10-08 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
[snip]
 For the next go-around here, assuming this is all still not working 
 for you, can you please provide an example of what the additional 
 filter does exactly?

Oh, it was already working for me (with the hack as described with 
backref and such, earlier in this thread).

I just looked for a better way, but the suggestions I got initially had 
some issues.

What it does is filter on a workflow status. I.e. I want a number of 
relations, one of which shows all related objects, the other one 
restricts by some status of the target. This way I can navigate an 
object graph and only see, say, published items, or items that can be 
edited, by following the right links. I.e:

zoo.animals # all animals in the zoo that have been 'published'
zoo.editable_animals # all animals in the zoo that can be edited or have 
been edited
zoo.archived_animals # all animals the zoo once had but that have since 
been removed
zoo.all_animals # all of the animals above, i.e. the automatic relation
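To make that concrete, here is roughly what the hand-written version of 
one such relation looks like next to the automatic one (a minimal, 
self-contained sketch; the zoo/animal schema and the 'published' status 
value are made up for illustration, and the filtered relation is marked 
viewonly so it doesn't compete with the plain one at flush time):

from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        ForeignKey, and_)
from sqlalchemy.orm import mapper, relation

metadata = MetaData()
zoo_table = Table('zoo', metadata,
    Column('id', Integer, primary_key=True))
animal_table = Table('animal', metadata,
    Column('id', Integer, primary_key=True),
    Column('zoo_id', Integer, ForeignKey('zoo.id')),
    Column('status', String(20)))

class Zoo(object):
    pass

class Animal(object):
    pass

mapper(Animal, animal_table)
mapper(Zoo, zoo_table, properties={
    # the automatic join: every animal, whatever its status
    'all_animals': relation(Animal),
    # the same join, narrowed by the workflow status column
    'animals': relation(Animal, viewonly=True,
        primaryjoin=and_(zoo_table.c.id == animal_table.c.zoo_id,
                         animal_table.c.status == 'published')),
})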

I realize that this is unusual; normally one cannot expect any 
particular column to be present on the target, such as my workflow 
status column, so it makes sense to spell it out explicitly in the 
'relation' logic. I still think that having to make the auto-generated 
join explicit as soon as you want to add an extra criterion requires a 
bit of extra work and thought that might be preventable, but I don't 
think a solution to that would have helped my case.

I'm still thinking about alternate strategies to reach the same effect. 
Properties for the alternate extra filters would be one option, but I'm 
not sure whether the property implementation can be made general enough 
to find the target table very easily either.

An alternative idea I had was to create a special collection class that 
knows how to do such additional restrictions (perhaps as a bunch of 
methods on it).

I've noticed that I need to expire the instances with relations very 
often if something changes about the workflow status in one of the 
related objects - it seems SQLAlchemy doesn't figure this out for 
itself. Perhaps this is expected. I intend to post about this separately 
after I've isolated this issue.

  at this point you're looking for a subclass  
 hook that is basically augment_primary_join(self,  
 existing_primary_join), since you're looking to have the "figure out  
 the joins" stuff done for you in exactly the same way it is now, but  
 you want to have a say in changing the final product.   that's a  
 pretty arbitrary looking hook.

Sure. Again, I'm not asking for a hook, I just tried to make it work but 
wasn't very happy with the result, so I thought I'd bring it up to see 
whether there was a better way. And then we got into a bit of a 
discussion. :)

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-10-08 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
[snip]
 well that is the part of the use case I don't understand.  Why the
 automagic aspect of it ?   and if there are dozens of target tables that
 have similar attributes, why not reduce the workload in a more boring way,
 like one that just adds the attributes after the fact:
 
 def map_my_stuff(parent, child):
     parent_mapper = class_mapper(parent)
     parent_mapper.add_property('children_editable',
         relation(..figure it out from parent_mapper etc..))

The answer would be that I hadn't considered that approach yet, but I 
could indeed try to do this at the mapping phase. I'm not terribly 
familiar with the way mapping takes place yet, but this gives me a clue 
as to where to start looking - thanks!

 I've noticed that I need to expire the instances with relations very
 often if something changes about the workflow status in one of the
 related objects - it seems SQLAlchemy doesn't figure this out for
 itself. Perhaps this is expected. I intend to post about this separately
 after I've isolated this issue.
 
 I would suggest using a dynamic relation if you'd like the attribute to
 issue a SQL query on every hit.   Using a dynamic relation for the
 core relation can allow you to construct all of the alternate views at
 the class level without much need to muck about with the mapper:
 
 
 class MyClass(object):
     @property
     def editable_children(self):
         return self.children.filter(Child.editable==True)

Yeah, this is like the property approach I was considering. To automate 
this I'd need a way to determine Child automatically from the 
self.children relation, and I hadn't gone into that yet.
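Something along these lines might do the trick (an untested sketch; 
filtered_relation is a made-up helper, and it leans on the mapper to 
find the target class so the property doesn't have to hard-code Child):

from sqlalchemy.orm import class_mapper, object_session

def filtered_relation(obj, relation_name, *criteria):
    # look up the RelationProperty to find the class on the other side
    prop = class_mapper(type(obj)).get_property(relation_name)
    target = prop.mapper.class_
    query = object_session(obj).query(target).with_parent(obj, relation_name)
    for crit in criteria:
        query = query.filter(crit)
    return query

# e.g. on MyClass:
#     editable_children = property(
#         lambda self: filtered_relation(self, 'children',
#                                        Child.editable == True))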

 if the core children relation were non-dynamic, the same route can be
 applied:
 
 class MyClass(object):
     @property
     def editable_children(self):
         return object_session(self).query(Child).with_parent(self).\
             filter(Child.editable==True)
 
 this approach would remove the need to dig around ORM internals 

Oh well, at least I learned something about them. :)

 and would
 solve all the stale data issues too, assuming autoflush is turned on.   
 SQLAlchemy's behavior of caching collections in the first place is
 fairly unique among Python ORMs (it's more of a Hibernate thing) so this is
 the usual way that issue is addressed.

Thanks for the tip!

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-10-06 Thread Martijn Faassen

Hey,

Thanks very much for looking into this. I'm sorry I couldn't offer more 
specific suggestions earlier - my goal was just to start the thought 
process leading to such suggestions (also in myself), and at this point 
you're still a lot more familiar with this codebase than I am. :)

I'm happy the discussion ended up leading to this improvement. The 
backref story indeed looks like an improvement as less knowledge about 
this has to be in different areas of the code.

I do still have some questions about your suggested code:

Michael Bayer wrote:
 class MyRelation(RelationProperty):
 
     def _determine_joins(self):
         self.primaryjoin = join_condition(self.parent.local_table,
             self.target) & (self.parent.local_table.c.id > 0)

The code in the baseclass version of _determine_joins isn't used at all 
in your example. But this code appears to handle various cases:

* raising an exception if secondary is None when self.secondaryjoin isn't

* finding a join condition to the mapped_table in the case of some 
inheritance conditions

* handling the cases where secondary is provided in the case of a MANY 
to MANY relationship

* raising an error if join conditions couldn't be determined.

In order to handle all these cases I think in my subclass I'd need to 
handle them too.

In addition I'm curious what _orm_deannotate does and why deannotating 
the extra clause isn't needed in this case.

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-10-05 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
 Martijn Faassen wrote:
 Michael Bayer wrote:
 subclass RelationProperty fine, but don't get involved with overriding
 its internal _xxx methods.
 So:

 Override do_init() completely (not calling the super do_init()).
 
 no, call do_init().
 
 def do_init(self):
     self.primaryjoin = figure_out_my_primary_join()
     super(RelationProperty, self).do_init()

Can't do that as parent and target haven't been figured out yet, which 
_determine_joins and custom joins need. That happens here somewhere:

self._get_target()
self._process_dependent_arguments()

 Problem: the backref will already have been set up, and this will not
 get the custom arguments.

 the backref you can roll without using backref().   set up your custom
 relation() on the other side and then establish the two-way communication
 using the back_populates argument on both relations. 
 http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/test/orm/test_relationships.py#L803
 is an example.

That's nice, but my goal is to make these custom relations work in the 
same way as they do now, with an additional criterion. The current 
SQLAlchemy backref story works just fine, and I don't want to make my 
users suddenly switch to a more verbose mechanism. The goal is to allow 
the creation of these special relations without extra verbosity.
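For reference, the back_populates spelling looks roughly like this (a 
sketch; A/B and custom_join are placeholders for the real mappings and 
the hand-built join condition):

mapper(A, a_table, properties={
    'bs': relation(B, primaryjoin=custom_join, back_populates='a'),
})
mapper(B, b_table, properties={
    'a': relation(A, primaryjoin=custom_join, back_populates='bs'),
})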

 It's pretty involved, ugly, and fragile. It'd be nicer if there was a
 way to do this generically, as I just want to add a custom condition to
 the automatically set up primary...
 
 I *really* try to avoid adding new arguments and flags as quick solutions
 to single-user use cases, unless there is truly no other way to accomplish
 the desired goal.  the back_populates argument is an example of opening
 up the mechanics of backref in a more generalized way.

I'm not asking for new arguments and flags. I'm just pointing out that 
RelationProperty is *really* hard to subclass. It might be time to start 
thinking about refactoring it, just to make it a bit easier to 
comprehend what is going on. As a side-effect it might make it a bit 
easier to subclass for my purposes. :)

Here's what works, but it's really ugly as I have to override a private 
method, and re-assign to the backref attribute to make backrefs work 
properly:

class MyRelationProperty(RelationProperty):
    def extra_relation_condition(self):
        raise NotImplementedError
        # for instance:
        # return self.parent.c.foo == 3

    def _determine_joins(self):
        super(MyRelationProperty, self)._determine_joins()
        # XXX is orm_deannotate really needed? codepath in
        # RelationProperty suggests it is run...
        self.primaryjoin = _orm_deannotate(
            and_(self.primaryjoin,
                 self.extra_relation_condition()))
        if self.backref is not None:
            self.backref = BackRef(self.backref.key,
                                   primaryjoin=self.primaryjoin,
                                   secondaryjoin=self.secondaryjoin,
                                   passive_updates=self.passive_updates)

There are certainly issues here; I don't know whether all this works 
properly for many to many relations for instance. Probably not.

This way I can reuse exactly what _determine_joins has come up with, 
immediately at the point where this is relevant. I have to stuff in a 
new backref too, as the backref is already set up way earlier, in 
__init__, which is a monster to override.
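For the record, usage then looks just like relation(), since relation() 
is only a thin factory around RelationProperty (a sketch; 
PublishedRelation and the status column are hypothetical):

class PublishedRelation(MyRelationProperty):
    def extra_relation_condition(self):
        # self.target is the table on the remote side by this point
        return self.target.c.status == 'published'

mapper(Zoo, zoo_table, properties={
    'animals': PublishedRelation(Animal, backref='zoo'),
})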

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-10-01 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
[snip]
 OK well I'm sure you noticed that RelationProperty was not designed to be
 subclassed.   I would advise that your my_own_relation() function generate
 its own primaryjoin and secondaryjoin conditions which it passes as
 arguments to the relation().  

Looking at it again, I don't think that this is actually possible, as 
the custom relation function lacks the information to determine what is 
the parent and the child and therefore cannot construct a join.

Take the following:

mapper(B, b,
    properties={
        'a': my_own_relation(A, backref='bs'),
    })

How in the implementation of my_own_relation am I to find out about B? I 
think this is only possible to do when do_init() is called on the 
relation property.

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-10-01 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
 subclass RelationProperty fine, but don't get involved with overriding its
 internal _xxx methods.

So:

Override do_init() completely (not calling the super do_init()).

Do something like:

 def do_init(self):
     self._get_target()
     self._process_dependent_arguments()
     self.custom_joins()
     self._determine_joins()
     self._determine_synchronize_pairs()
     self._determine_direction()
     self._determine_local_remote_pairs()
     self._post_init()
     super(RelationProperty, self).do_init()

Then introduce a custom_joins() that will now have access to a properly 
set up local_table and mapped_table and self.parent and self.target 
etc., and implement something very close to _determine_joins except 
inserting custom arguments.
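Reusing the extra_relation_condition() idea from earlier in this thread, 
custom_joins() itself could then stay small (a sketch; it assumes 
join_condition from sqlalchemy.sql.util builds the same join that 
_determine_joins would, and that _determine_joins leaves 
self.primaryjoin alone once it is set):

from sqlalchemy import and_
from sqlalchemy.sql.util import join_condition

def custom_joins(self):
    # build the join _determine_joins() would have built, then narrow it
    self.primaryjoin = and_(
        join_condition(self.parent.local_table, self.target),
        self.extra_relation_condition())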

Problem: the backref will already have been set up, and this will not 
get the custom arguments.

So, I need to override __init__, copying it completely and setting up 
the backref there if needed, or hackishly override the previously set up 
backref in custom_joins.

It's pretty involved, ugly, and fragile. It'd be nicer if there was a 
way to do this generically, as I just want to add a custom condition to 
the automatically set up primary...

Regards,

Martijn





[sqlalchemy] relations with additional criteria

2009-09-29 Thread Martijn Faassen

Hi there,

The relation() logic lets you write an additional filter into a relation 
if you provide your own primaryjoin.

The case I'm dealing with is a case where I want to take over the 
primaryjoin that is autogenerated by the RelationProperty object, but 
amend it with an extra filter.

This turns out to be surprisingly difficult to do in the current 
codebase. In fact I haven't made it work properly yet.

Is there a reason this hook isn't available? Perhaps it's because in my 
use case I know there is a certain column available on the child table 
no matter what table it is, and that's not so common.

I imagine an optional callable that takes the parent and child tables as 
arguments and can return a custom filter expression on them. I haven't 
thought through how this works with many to many relationships yet though.
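Spelled out as code, I'm imagining something like this (purely 
hypothetical; amend_primaryjoin is not a real relation() argument):

def my_extra_criterion(parent_table, child_table):
    # receives both tables, returns a filter expression on them
    return child_table.c.status == 'published'

# imagined spelling:
# relation(Child, amend_primaryjoin=my_extra_criterion)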

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-09-29 Thread Martijn Faassen

Michael Bayer wrote:
[snip]
 What's missing here is the context.  

I want to define ORM relations in the mapper that I can access without 
having to do a manual join.

Normally you'd do this:

mapper(A, a_table,
    properties={
        'bs': relation(B, backref='a',
            primaryjoin=and_(a_table.c.id == b_table.c.a_id,
                             b_table.c.status == 'FOO')),
    })


But I'd like to automate this:

mapper(A, a_table,
    properties={
        'bs': my_own_relation(B, backref='a'),
    })

my_own_relation behaves like relation, except it adds an extra clause 
restricting the query, say, b_table.c.status == 'FOO'. It should have 
access to the parent and child tables so it can do this generically.

I've tried to accomplish this by overriding _determine_joins and 
manipulating self.primaryjoin after the default is set up, but I think 
that breaks because backrefs have already been set up earlier. I've 
hacked around that now I think, but it's not very pretty.

  From what you describe, "use the primaryjoin and amend with an extra
 filter", that is just:
 
 query(Parent).join(Parent.children).filter(Child.foo==bar)

This presents a potential alternative implementation strategy where this 
is implemented using properties on the model classes that do something 
like this.

Regards,

Martijn





[sqlalchemy] Re: relations with additional criteria

2009-09-29 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
 OK well I'm sure you noticed that RelationProperty was not designed to be
 subclassed. 

Yeah, my subclass isn't pretty. :)

 I would advise that your my_own_relation() function generate
 its own primaryjoin and secondaryjoin conditions which it passes as
 arguments to the relation().  If you look at the source of
 _determine_joins(), it uses the table.join(othertable).onclause idea, but
 more cleanly through a utility function called join_condition().  It
 calls it twice to accommodate some more exotic use cases.  I think in
 general it's fine to call as join_condition(prop.parent, prop.target) which
 will give you what you need.

Thanks, I know all about _determine_joins() now that I wrestled with the 
subclassing approach, and will try to rewrite my code so it uses a 
similar approach. We'll see how it turns out.

Regards,

Martijn





[sqlalchemy] finding related objects

2009-09-10 Thread Martijn Faassen

Hi there,

I'm trying to figure out whether there's something in SQLAlchemy that 
lets me reliably get all the objects related to another: i.e. all 
children of a parent that (in the underlying record) have a foreign key 
relationship to the parent record.

One way I've been trying to solve this is by inspecting the underlying 
tables directly, looking at the foreign_keys property of the tables. But 
this is mixing levels - I get into trouble getting from ORMed object to 
record and back again (having to expunge the session, etc.).

I was wondering whether there is something in the ORM that can help me 
here. It strikes me that the cascading behavior might be close to what 
I'm looking for, but I'm not sure. I've tried to use the 
cascade_iterator on the mapper but I'm not sure I'm using it right (as I 
get no related objects while I expect some, or an error).

If there is a way to get the related objects (in the session or not; I 
need all of them) in the ORM layer, could someone post me a snippet?

I've been trying something like:

foo = ...  # some ORM-mapped object

m = object_mapper(foo)
related = list(m.cascade_iterator('all', foo))

I'm not sure whether I'm passing the right arguments or whether the 
cascade story is even suitable for that. I just get an empty list, even 
though foo does have several objects that refer to it.

Any ideas?

Regards,

Martijn





[sqlalchemy] Re: finding related objects

2009-09-10 Thread Martijn Faassen

Hey,

[finding related objects]

One way I've come up with is this:


from sqlalchemy.orm import object_mapper
from sqlalchemy.orm.interfaces import ONETOMANY
from sqlalchemy.orm.properties import RelationProperty

m = object_mapper(model)
for prop in m.iterate_properties:
    if isinstance(prop, RelationProperty):
        if prop.direction is ONETOMANY:
            for related in getattr(model, prop.key):
                do_stuff(related)

the other directions still need to be handled though (and recursion).
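Sketched out, covering the scalar directions and recursion might look 
like this (untested; it keys off prop.uselist instead of direction, and 
a 'seen' set guards against cycles):

from sqlalchemy.orm import object_mapper
from sqlalchemy.orm.properties import RelationProperty

def iter_related(model, seen=None):
    if seen is None:
        seen = set()
    if id(model) in seen:
        return
    seen.add(id(model))
    m = object_mapper(model)
    for prop in m.iterate_properties:
        if not isinstance(prop, RelationProperty):
            continue
        # uselist distinguishes collections from scalar references
        if prop.uselist:
            items = getattr(model, prop.key)
        else:
            items = [getattr(model, prop.key)]
        for related in items:
            if related is not None:
                yield related
                for sub in iter_related(related, seen):
                    yield sub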

Would this be a reasonable solution?

Regards,

Martijn





[sqlalchemy] Re: composite primary key problem

2009-08-25 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
 the program works for me, I get:

That's interesting. I've tested the script with Python 2.4, Python 2.5, 
and Python 2.6, with SQLAlchemy 0.5.5 and trunk. sqlite version is 
3.4.2. I get the assertion error each time.

The output I get when echo is True (on SA trunk/Python 2.5) is this:

2009-08-25 15:19:18,637 INFO sqlalchemy.engine.base.Engine.0x...974c 
PRAGMA table_info(user)
2009-08-25 15:19:18,637 INFO sqlalchemy.engine.base.Engine.0x...974c ()
2009-08-25 15:19:18,638 INFO sqlalchemy.engine.base.Engine.0x...974c
CREATE TABLE user (
code INTEGER NOT NULL,
status INTEGER NOT NULL,
username VARCHAR NOT NULL,
PRIMARY KEY (code, status)
)


2009-08-25 15:19:18,638 INFO sqlalchemy.engine.base.Engine.0x...974c ()
2009-08-25 15:19:18,638 INFO sqlalchemy.engine.base.Engine.0x...974c COMMIT
2009-08-25 15:19:18,640 INFO sqlalchemy.engine.base.Engine.0x...974c BEGIN
2009-08-25 15:19:18,640 INFO sqlalchemy.engine.base.Engine.0x...974c 
INSERT INTO user (code, status, username) VALUES (?, ?, ?)
2009-08-25 15:19:18,641 INFO sqlalchemy.engine.base.Engine.0x...974c [0, 
1, u'a']
2009-08-25 15:19:18,641 INFO sqlalchemy.engine.base.Engine.0x...974c COMMIT
2009-08-25 15:19:18,641 INFO sqlalchemy.engine.base.Engine.0x...974c BEGIN
2009-08-25 15:19:18,642 INFO sqlalchemy.engine.base.Engine.0x...974c 
INSERT INTO user (code, status, username) VALUES (?, ?, ?)
2009-08-25 15:19:18,642 INFO sqlalchemy.engine.base.Engine.0x...974c [0, 
2, u'a']
2009-08-25 15:19:18,642 INFO sqlalchemy.engine.base.Engine.0x...974c COMMIT
2009-08-25 15:19:18,643 INFO sqlalchemy.engine.base.Engine.0x...974c BEGIN
2009-08-25 15:19:18,643 INFO sqlalchemy.engine.base.Engine.0x...974c 
SELECT user.code AS user_code, user.status AS user_status, user.username 
AS user_username
FROM user
WHERE user.code = ? AND user.status = ?
2009-08-25 15:19:18,644 INFO sqlalchemy.engine.base.Engine.0x...974c [0, 1]
2009-08-25 15:19:18,645 INFO sqlalchemy.engine.base.Engine.0x...974c 
SELECT user.code AS user_code, user.status AS user_status, user.username 
AS user_username
FROM user
WHERE user.code = ? AND user.status = ?
2009-08-25 15:19:18,645 INFO sqlalchemy.engine.base.Engine.0x...974c [0, 2]
2009-08-25 15:19:18,646 INFO sqlalchemy.engine.base.Engine.0x...974c 
UPDATE user SET status=? WHERE user.code = ? AND user.status = ?
2009-08-25 15:19:18,646 INFO sqlalchemy.engine.base.Engine.0x...974c [3, 
0, 1]
2009-08-25 15:19:18,646 INFO sqlalchemy.engine.base.Engine.0x...974c 
UPDATE user SET status=? WHERE user.code = ? AND user.status = ?
2009-08-25 15:19:18,646 INFO sqlalchemy.engine.base.Engine.0x...974c [1, 
0, 2]
Traceback (most recent call last):
  File "bin/devpython", line 25, in <module>
    execfile(sys.argv[0])
  File "reproduce.py", line 44, in <module>
    session.commit()
  File ".../sqlalchemy/orm/session.py", line 673, in commit
    self.transaction.commit()
  File ".../sqlalchemy/orm/session.py", line 378, in commit
    self._prepare_impl()
  File ".../sqlalchemy/orm/session.py", line 362, in _prepare_impl
    self.session.flush()
  File ".../sqlalchemy/orm/session.py", line 1358, in flush
    self._flush(objects)
  File ".../sqlalchemy/orm/session.py", line 1445, in _flush
    flush_context.finalize_flush_changes()
  File ".../sqlalchemy/orm/unitofwork.py", line 288, in finalize_flush_changes
    self.session._register_newly_persistent(elem.state)
  File ".../sqlalchemy/orm/session.py", line 1021, in _register_newly_persistent
    self.identity_map.remove(state)
  File ".../sqlalchemy/orm/identity.py", line 135, in remove
    raise AssertionError("State %s is not present in this identity map" % state)
AssertionError: State <sqlalchemy.orm.state.InstanceState object at 
0x84228ac> is not present in this identity map

At first glance that looks identical to yours, until the session tries
to do the commit in the end.

What's the difference?

Regards,

Martijn





[sqlalchemy] Re: making an instance read-only

2009-08-25 Thread Martijn Faassen

Hey,

phrrn...@googlemail.com wrote:
 I implemented a very crude flush/commit-time version of this today
 that disables all modifications. Michael suggested the use of
 connection_callable to provide fine-grained control of which engine to
 use for modifications. I haven't gone through all the details yet but
 it seems to work for my basic tests.
 
 pjjH
 
 
 class ReadOnlySession(DefaultSession):
     def __init__(self, **kwargs):
         super(ReadOnlySession, self).__init__(**kwargs)
         self._mapper_flush_opts = {'connection_callable':
                                    self._disable_any_modifications}
 
     def _disable_any_modifications(self, mapper=None, instance=None,
                                    **kwargs):
         return None

If I understand it correctly, this is on a per-session basis, correct? 
I'd like to block modification to some but not all objects in the same 
session.

Regards,

Martijn





[sqlalchemy] Re: composite primary key problem

2009-08-25 Thread Martijn Faassen

Hey,

I now also tested the program with sqlite3 3.6.10: same problem. 
pysqlite 2.5.5 is in use.

Regards,

Martijn





[sqlalchemy] Re: composite primary key problem

2009-08-25 Thread Martijn Faassen

Hi there,

I'm looking at the remove() method in 
sqlalchemy.orm.identity.WeakInstanceDict, as this is where the assertion 
error is raised.

In the 'self' dictionary there is indeed a 
sqlalchemy.orm.state.InstanceState object under the key (it's the only 
entry in the dictionary), but it's a different state object than the one 
passed in as the 'state' parameter. This triggers the AssertionError.

This remove() call is triggered by a piece of code that has a comment 
"primary key switch", in _register_newly_persistent in session.py.

I wish we could figure out why you don't get it and I do...

Regards,

Martijn





[sqlalchemy] Re: making an instance read-only

2009-08-25 Thread Martijn Faassen

phrrn...@googlemail.com wrote:
 You are correct: the code-snippet will cause an exception to be thrown
 when SA attempts to flush any changes. However, the connection
 callable is called *per-instance* and it is supposed to return the
 connection to use to perform the flush. Inside the callable, you can
 peek at the mapper and/or the instance and return whatever you deem
 appropriate.

Ah, I see, so instances can have some say in the decision making.

 What kind of failure mode are you looking for? If you
 have a session with forbidden writes, what should happen? Nothing gets
 written? The legitimate updates occur but the disallowed ones generate
 exceptions? Disallowed writes are silently swallowed by a mock
 connection (probably making programmers very upset, confused and angry!)?

I think nothing at all should be committed, and an exception should be 
raised to the application.

 The problem with any of these approaches is -- as you point out -- that
 the application is not informed of the boo-boo at the time it occurs.
 I wonder what Michael is alluding to in his comment about implementing
 __getattribute__? I assume he meant __setattr__?

I'm not sure, I haven't had a chance to think about it much yet.

I'd be all right if indeed the failure only occurred at the end of the 
transaction during a commit, if at least the error message is clear. The 
most important bit is to prevent the developer from updating a record 
that really shouldn't be updated (instead the developer should first 
create a new editable version of the record).

Preferable of course would be if we could do this validation earlier. 
The validator approach in SQLAlchemy allows this, but you'd need to set 
it up manually for every mapped attribute, while I'd like to register a 
single validator covering all mapped attributes at once...

Regards,

Martijn





[sqlalchemy] Re: composite primary key problem

2009-08-25 Thread Martijn Faassen

Hey Michael,

Cool that you managed to reproduce the issue.

Michael Bayer wrote:

 and a potential fix is this:
 
 Index: lib/sqlalchemy/orm/session.py
 ===================================================================
 --- lib/sqlalchemy/orm/session.py (revision 6289)
 +++ lib/sqlalchemy/orm/session.py (working copy)
 @@ -1018,7 +1018,7 @@
              state.key = instance_key
          elif state.key != instance_key:
              # primary key switch
 -            self.identity_map.remove(state)
 +            self.identity_map.discard(state)
              state.key = instance_key
  
          self.identity_map.replace(state)
 Index: MANIFEST.in
 ===================================================================
 
 I guess the unit test for this would be, to do the operation in both ways
 so that the issue is indicated regardless of dictionary ordering.

Yes, that seems like it would catch it.

Anything I can do to make sure that a fix is in a 0.5.x release soon? I 
could look into writing the unit test that demonstrates the problem; 
would I do this on trunk?
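For the test I'm thinking of something like this (a sketch reusing the 
reproduce.py schema; each run gets fresh fixtures, and the second run 
flips the order of the two mutations so the failure shows up regardless 
of dictionary ordering):

def run(flipped):
    session = fresh_session()  # stand-in for the reproduce.py setup
    pub = User(0, PUBLISHED, u'a')
    session.add(pub)
    session.commit()
    edit = User(0, EDITABLE, u'a')
    session.add(edit)
    session.commit()
    if flipped:
        edit.status = PUBLISHED
        pub.status = ARCHIVED
    else:
        pub.status = ARCHIVED
        edit.status = PUBLISHED
    session.commit()  # raised AssertionError before the discard() fix

run(False)
run(True)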

Regards,

Martijn





[sqlalchemy] making an instance read-only

2009-08-24 Thread Martijn Faassen

Hi there,

I'm investigating ways to make an ORM-mapped instance read-only, 
depending on the value of a particular attribute (database backed or 
not). If an object has a certain state, I want to prevent normal users 
from making a modification. Other objects connected to the same session 
should be editable, however.

I've tried to figure out various possibilities, but none of them seem 
entirely satisfactory:

* I could use an AttributeExtension that checks the 'readonly' attribute 
and, if set, raises an exception when modification is attempted. As far 
as I can see I'd need to manually define an AttributeExtension for *all* 
attributes, and I'd just like to do this once per mapped class at most 
(see the sketch after this list).

* It may be possible to use MapperExtension and modify before_update 
somehow. This is a bit late however - I'd prefer an exception to be 
raised as soon as someone tries to modify an attribute.

* I could proxy the whole instance with a security proxy, along the 
lines of zope.security, which could then do the checks. I'd like to 
avoid security proxies if I can get away with it however. I'd prefer it 
if the objects that came back from session.query() were already 
configured to do read-only checks.

* I could try to use some form of row-level security on the database 
level. MySQL, the database I'm working with, doesn't have such a feature 
as far as I'm aware, however. Also this'd be warning the developer a bit 
late - I'd prefer if it happened directly when modifying the attribute.
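Here is the sketch referred to under the first option: define the guard 
once and attach it to every column at mapping time via column_property 
(ReadOnlyGuard and the User mapping are made-up names; 'readonly' is the 
flag attribute mentioned above):

from sqlalchemy.orm import mapper, column_property
from sqlalchemy.orm.interfaces import AttributeExtension

class ReadOnlyGuard(AttributeExtension):
    def set(self, state, value, oldvalue, initiator):
        # refuse the write if the instance says it is read-only
        if getattr(state.obj(), 'readonly', False):
            raise RuntimeError("instance is read-only")
        return value

guard = ReadOnlyGuard()
mapper(User, user_table, properties=dict(
    (col.key, column_property(col, extension=guard))
    for col in user_table.columns))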

Does anyone have any clues as to what a good implementation strategy 
would be?

Regards,

Martijn





[sqlalchemy] cloning ORM-mapped instances

2009-08-24 Thread Martijn Faassen

Hi there,

I'm looking into a reliable way to clone ORM-mapped instances, so that 
the clone, with some modifications, can be re-added to the database. I 
saw the following thread:
 
http://groups.google.com/group/sqlalchemy/browse_thread/thread/6e162f9a74697a01/6396c677b2a87797#6396c677b2a87797

Unfortunately the solutions presented in this thread require the 
instantiation of the class without arguments, and this is not something 
that works with arbitrary classes, as an __init__ may have required 
arguments.

I went with a solution that instead clones the record on the table 
level, going outside the ORM. It'd be nice though if there were an easy 
way to do this with the ORM. Would this be something that would be 
sensible to add to SQLAlchemy itself? What would the implementation 
strategy be?
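The table-level variant I went with boils down to this (a sketch; the 
caller supplies the changed column values, e.g. the new workflow 
status):

def clone_row(connection, table, whereclause, overrides):
    # fetch the existing row, copy its values, apply changes, insert
    row = connection.execute(table.select(whereclause)).fetchone()
    data = dict(row.items())
    data.update(overrides)
    connection.execute(table.insert(), data)

# e.g. clone_row(conn, user_table,
#                user_table.c.code == some_code, {'status': EDITABLE})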

Of course once one starts thinking about relations and the like, cloning 
can get pretty hairy. Perhaps for that reason the cloning of an instance 
doesn't have a general implementation in SQLAlchemy. You'd need to clone 
everything that relates to it as well, and that can get pretty involved. 
(I'd need such a thing though, as my use case involves items going 
through a workflow. Each of the indirect "deep-copy" clones would 
require a modification as well.)

Regards,

Martijn





[sqlalchemy] composite primary key problem

2009-08-24 Thread Martijn Faassen

Hi there,

I'm experimenting with a composite primary key to see whether it might 
help implement a workflow system, where the primary key consists of an 
identifier (code) and a workflow status.

I run into an error with the ORM (in 0.5.5 and trunk) when I modify part 
of the primary key (the workflow status). It looks like the session 
somehow retains a reference to something that isn't around anymore. The 
documentation claims primary keys can be mutated, but perhaps I'm doing 
something that really shouldn't be done after all?

I've attached the code to reproduce the issue (reproduce.py). The error 
is during the commit on the last line, when the primary keys of two 
items are modified.

Here is the traceback:

Traceback (most recent call last):
   File bin/devpython, line 25, in ?
 execfile(sys.argv[0])
   File reproduce.py, line 44, in ?
 session.commit()
   File .../sqlalchemy/orm/session.py, line 673, in commit
 self.transaction.commit()
   File .../sqlalchemy/orm/session.py, line 378, in commit
 self._prepare_impl()
   File .../sqlalchemy/orm/session.py, line 362, in _prepare_impl
 self.session.flush()
   File .../sqlalchemy/orm/session.py, line 1358, in flush
 self._flush(objects)
   File .../sqlalchemy/orm/session.py, line 1445, in _flush
 flush_context.finalize_flush_changes()
   File .../sqlalchemy/orm/unitofwork.py, line 288, in 
finalize_flush_changes
 self.session._register_newly_persistent(elem.state)
   File .../sqlalchemy/orm/session.py, line 1021, in 
_register_newly_persistent
 self.identity_map.remove(state)
   File .../sqlalchemy/orm/identity.py, line 135, in remove
 raise AssertionError(State %s is not present in this identity map 
% state)
AssertionError: State sqlalchemy.orm.state.InstanceState object at 
0xb769ae4c is not present in this identity map

Regards,

Martijn



from sqlalchemy import create_engine, MetaData
from sqlalchemy import Table, Column, Integer, Unicode
from sqlalchemy.orm import sessionmaker, mapper

NEW = 0
PUBLISHED = 1
EDITABLE = 2
ARCHIVED = 3

engine = create_engine('sqlite:///:memory:')
metadata = MetaData()

Session = sessionmaker(bind=engine)
session = Session()

user = Table(
'user', metadata,
Column('code', Integer, primary_key=True),
Column('status', Integer, primary_key=True),
Column('username', Unicode, nullable=False),
)
metadata.create_all(engine)
  
class User(object):
def __init__(self, code, status, username):
self.code = code
self.status = status
self.username = username

mapper(User, user)

a_published = User(0, PUBLISHED, u'a')
session.add(a_published)
session.commit()

a_editable = User(0, EDITABLE, u'a')

session.add(a_editable)
session.commit()

a_published.status = ARCHIVED
a_editable.status = PUBLISHED

session.commit()


[sqlalchemy] Re: declarative, autoload and late binding to engine

2008-11-06 Thread Martijn Faassen

Hi there,

MikeCo wrote:
 I have an application that will need to bind to several different
 databases with the same structure. The databases to be accessed are
 not known at import time, that will be determined later while the
 application is running. I want to use declarative and autoload the
 columns.  Autoload needs a database connection available when a class
 is first compiled to do the database introspection, that is the part I
 can't figure out. Maybe a custom metaclass?
 
 If this is already solved, can someone point me at the answer? I
 haven't found it yet.

I've solved it, but the solution is rather particular to Zope technology 
right now. It's spread around z3c.saconfig and megrok.rdb:

http://svn.zope.org/z3c.saconfig/trunk

http://svn.zope.org/megrok.rdb/trunk/

Let me sketch out what's going on:

* z3c.saconfig sets up a special scoped session, with a custom session 
factory and scopefunc. The session factory looks up in the Zope 3 
component architecture for a way to create an engine, but you could use 
some other strategy.

* the engine factory is also looked up dynamically by the session 
factory and in turn creates a SQLAlchemy engine (or returns an existing 
one if the engine is already created).

* when an engine is first created, an event is fired. In effect this 
event is fired when a session is needed for the first time in the 
application.

Now megrok.rdb hooks into this event. It will reflect any tables that 
need to be reflected and create any tables that need to be created (if 
their structure is defined in Python).

reflection in the simplest case can be done like this:

metadata.reflect(bind=engine)

and creation can be done like this:

metadata.create_all(engine)

Now at the point the event is handled, previously the various 
declarative classes have been associated with the metadata object.

megrok.rdb actually doesn't use the declarative extension's metaclass 
approach, but instead drives the declarative extension's 
instrument_declarative from a grokker (see the martian library). In my 
approach I use an explicit metadata object. I believe the declarative 
extension creates one on the fly (but you can get to it with Base.metadata).

Anyway, all of this sounds very complicated, as the Zope and Grok stuff 
take particular approaches towards configurability. I think the core of 
this approach is:

* hook up your classes to a metadata (declarative does this for you)

* at the appropriate time, do metadata.reflect(bind=engine)
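Outside Zope, the bare-bones version of that flow is something like this 
(a sketch; the lazy engine registry stands in for z3c.saconfig's event 
machinery, and 'metadata' is whatever your classes are hooked up to):

from sqlalchemy import create_engine

_engines = {}

def get_engine(url):
    # engines are created lazily; the first creation is the moment
    # the reflect/create event fires
    if url not in _engines:
        engine = _engines[url] = create_engine(url)
        # tables whose structure lives in the database:
        metadata.reflect(bind=engine)
        # tables defined in Python:
        metadata.create_all(engine)
    return _engines[url]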

Regards,

Martijn






[sqlalchemy] Re: proposed extension to SessionExtension: after_bulk_operation

2008-11-05 Thread Martijn Faassen

Hi there,

Michael Bayer wrote:
 I would say it should  
 be called in any case, rows or not, and the result object provided.   
 Just provide a patch and we're GTG.

Done. The patch is attached. I can commit it myself if it looks all right.

  I wonder if a before_ hook should be provided as well.

I don't know myself; Laurence, if you're listening in perhaps you'd care 
to comment?

Regards,

Martijn


Index: test/orm/session.py
===================================================================
--- test/orm/session.py	(revision 5241)
+++ test/orm/session.py	(working copy)
@@ -898,7 +898,9 @@
             log.append('after_begin')
         def after_attach(self, session, instance):
             log.append('after_attach')
-
+        def after_bulk_operation(self, session, result):
+            log.append('after_bulk_operation')
+
         sess = create_session(extension = MyExt())
         u = User(name='u1')
         sess.add(u)
@@ -920,12 +922,21 @@
         log = []
         sess.commit()
         assert log == ['before_commit', 'after_commit']
+
+        log = []
+        sess.query(User).delete()
+        assert log == ['after_begin', 'after_bulk_operation']
+
+        log = []
+        sess.query(User).update({'name': 'foo'})
+        assert log == ['after_bulk_operation']
 
         log = []
         sess = create_session(autocommit=False, extension=MyExt(), bind=testing.db)
         conn = sess.connection()
         assert log == ['after_begin']
 
+
     @testing.resolve_artifact_names
     def test_before_flush(self):
         """test that the flush plan can be affected during before_flush()"""
Index: lib/sqlalchemy/orm/query.py
===================================================================
--- lib/sqlalchemy/orm/query.py	(revision 5241)
+++ lib/sqlalchemy/orm/query.py	(working copy)
@@ -1347,6 +1347,9 @@
             if identity_key in session.identity_map:
                 session._remove_newly_deleted(attributes.instance_state(session.identity_map[identity_key]))
 
+        for ext in session.extensions:
+            ext.after_bulk_operation(session, result)
+
         return result.rowcount
 
     def update(self, values, synchronize_session='expire'):
@@ -1441,6 +1444,9 @@
             if identity_key in session.identity_map:
                 session.expire(session.identity_map[identity_key], values.keys())
 
+        for ext in session.extensions:
+            ext.after_bulk_operation(session, result)
+
         return result.rowcount
 
     def _compile_context(self, labels=True):
Index: lib/sqlalchemy/orm/interfaces.py
===================================================================
--- lib/sqlalchemy/orm/interfaces.py	(revision 5241)
+++ lib/sqlalchemy/orm/interfaces.py	(working copy)
@@ -307,6 +307,15 @@
         This is called after an add, delete or merge.
         """
 
+    def after_bulk_operation(self, session, result):
+        """Execute after a bulk operation to the session.
+
+        This is called after a session.query().delete() and
+        session.query().update().
+
+        'result' is the result object returned from the bulk operation.
+        """
+
 class MapperProperty(object):
     """Manage the relationship of a ``Mapper`` to a single class
     attribute, as well as that attribute as it appears on individual

[sqlalchemy] proposed extension to SessionExtension: after_bulk_operation

2008-11-04 Thread Martijn Faassen

Hi there,

I've been using zope.sqlalchemy's integration with SQLAlchemy and it's 
been working pretty well so far.

Today however I ran into a snag when using session.query(..).delete(). 
While a query immediately after the delete showed no more objects, in 
the next transaction the objects would re-appear.

This turned out to be because zope.sqlalchemy tracks whether a session 
has changed over time, has no way of knowing that the transaction has 
changed when this (or session.query(..).update()) is in use, and then 
rolls back the transaction.

zope.sqlalchemy offers a way to manually mark a transaction as changed:

   zope.sqlalchemy.mark_changed()

It's annoying however to have to remember to call this when using these 
operations.

After discussion between myself and Laurence Rowe we came up with the 
following proposal (mostly it's Laurence's :):

   session.query(...).delete() and session.query(...).update() call
   a new method on SessionExtension, at least if result.rowcount != 0. We
   propose the name after_bulk_operation() for this new method.

   We can then modify zope.sqlalchemy's SessionExtension to mark the
   session as changed by hooking into this method.
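On the zope.sqlalchemy side the hook would then be implemented roughly 
like this (a sketch against the proposed API):

from sqlalchemy.orm.interfaces import SessionExtension
from zope.sqlalchemy import mark_changed

class ZopeSessionExtension(SessionExtension):
    def after_bulk_operation(self, session, result):
        # a bulk delete/update touched rows, so the transaction
        # can no longer be considered clean
        if result.rowcount:
            mark_changed(session)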

What do people think?

Regards,

Martijn





[sqlalchemy] Re: foreign key problem when using reflection and schemas

2008-10-28 Thread Martijn Faassen

jason kirtland wrote:
 That should be working now in r5203.  The reflection code was missing an 
 edge case where an explicit schema= is the same as the connection's 
 schema.  Switching those to schema=None should work as intended if you 
 need a workaround on a released version.

Thanks a lot for the quick fix, it seems to work for me! Working with 
the trunk is all right for now.

Regards,

Martijn





[sqlalchemy] foreign key problem when using reflection and schemas

2008-10-27 Thread Martijn Faassen

Hi there,

I have a problem with foreign keys that seems to occur when I combine 
reflection and explicit schemas, in the context of MySQL. I've confirmed 
this problem with both rc2 and the trunk. It's best demonstrated with 
some failing code:

Imagine the following MySQL database 'somedb':

CREATE TABLE somedb.a (
   id int PRIMARY KEY auto_increment NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE somedb.b (
   id int PRIMARY KEY auto_increment NOT NULL,
   a_id int NOT NULL,
   FOREIGN KEY (a_id) REFERENCES somedb.a(id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And the following code:

from sqlalchemy import *
from sqlalchemy.orm import mapper, relation, sessionmaker

engine = create_engine('mysql:///somedb')
meta = MetaData()
meta.bind = engine

a_table = Table(
    'a',
    meta,
    schema='somedb',
    autoload=True)

b_table = Table(
    'b',
    meta,
    schema='somedb',
    autoload=True)

class A(object):
    pass


class B(object):
    pass

mapper(A, a_table,
    properties={'bs': relation(B)})
mapper(B, b_table)

Session = sessionmaker(bind=engine)
session = Session()
print session.query(A).all()

When executing this code, the last line fails with the following error:

Traceback (most recent call last):
  File "bin/devpython", line 138, in ?
    execfile(sys.argv[0])
  File "experiment.py", line 33, in ?
    print session.query(A).all()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/session.py", line 914, in query
    return self._query_cls(entities, self, **kwargs)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 95, in __init__
    self.__setup_aliasizers(self._entities)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 109, in __setup_aliasizers
    mapper, selectable, is_aliased_class = _entity_info(entity)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 454, in _entity_info
    mapper = class_mapper(entity, compile)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 531, in class_mapper
    mapper = mapper.compile()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 371, in compile
    mapper.__initialize_properties()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 393, in __initialize_properties
    prop.init(key, self)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/interfaces.py", line 384, in init
    self.do_init()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 531, in do_init
    self._determine_joins()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 604, in _determine_joins
    raise sa_exc.ArgumentError("Could not determine join condition between "
sqlalchemy.exc.ArgumentError: Could not determine join condition between 
parent/child tables on relation A.bs.  Specify a 'primaryjoin' 
expression.  If this is a many-to-many relation, 'secondaryjoin' is 
needed as well.

This code *only* fails if I designate an explicit 'schema' in the table 
statements. If I leave these out, things work as expected. Since I'm 
interested in working with reflected tables that reside in multiple 
schemas, this is a problem.

Digging around indicates this that _search_for_join, defined in 
_determine_joins, does not actually find the join clause. Going deeper 
traces the failure down to the Join class in sqlalchemy.sql.expression, 
which fails in self._match_primaries in its __init__ method. This in 
turn brings us to sqlalchemy.sql.util.join_condition, which has 
fk.get_referent() return None if schemas are explicitly specified, and 
work fine if not.

fk.get_referent() uses corresponding_column, and this in turn tries to 
use contains_column(), which returns False in the schema case, but True 
if 'schema' is not explicitly specified.

Why, I don't know. The repr of the column passed into contains_column 
looks the same as the repr of the column in the table, but apparently 
it's not exactly the same instance. Something somewhere is making the 
column different.

Is this a bug? If so, how would we go around solving it?

Regards,

Martijn







[sqlalchemy] Re: tracking parents, names

2008-08-20 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
[snip]
 The part missing for me here is that bar, if it's a MappedCollection,  
 is not itself a mapped object in the same sense that foo or baz  
 is.   Keep in mind that a SQLAlchemy relation looks like:
 
   mapped class -> collection -> mapped class -> (etc.)

Yes, I think I understand this. I wrote this in my original post as the 
two main cases:

* collection entries in a MappedCollection (or other relations). [snip]

* a collection as created by a relation. [snip]

I don't have my naming entirely the same way as you do yet, but I meant 
the individual mapped class instances (as they appear in collections) 
and the collection instances themselves.

 If this is correct, then the assignment of __name__ and __parent__ to  
 bar, which falls under the collection, would go through different  
 channels than the assignment of __name__ and __parent__ to baz,  
 which falls under mapped class.  In the former case, bar would  
 receive its __name__ and __parent__ upon construction through its own  
 collection_adapter where information about its relationship to foo  
 is readily available, and baz would receive its __name__ and  
 __parent__ when it is added to bar using bar's own instrumented  
 mutator methods, where bar obviously knows the __name__ (the  
 keyfunc()) and the __parent__ is itself.  Does this make sense ?

Yes. Now as for the details. :)

Where do I get this special collection adapter from? I'd like people to 
just spell out normal relations and automatically have the __parent__ 
and __name__ information be there in the generated collections.

 It's important to have this information no matter how the object graph
 is constructed, either by construction (as I think we covered with the
 __setitem__ change), or by retrieval from the database.
 
 The ORM populates collections through the @appender method, so there  
 is no need to instrument the getter methods; nothing would enter the  
 collection without being assigned a __parent__ and __name__.   

Ah, that's good to know. I was very much afraid I'd end up in a 
situation where I'd need to override a lot of different things 
everywhere - I'm looking for two precise points where I can make both 
use cases work the right way.

 The  
 only restriction is that if an object is placed in a collection with  
 __parent__ and __name__, but is placed in *another* collection  
 simultaneously, then __parent__ and __name__ would be ambiguous.   My  
 comments regarding threading and wrappers refer to the latter use  
 case, which may not be needed.

Yes, this bit worries me. We can use wrappers in the form of location 
proxies; does the @appender method allow one to do the wrapping in the 
location proxy? Or is this object actually supposed to be only there 
once, and in two collections at once (preserving the identity 
relationship)?

So how do I override this appender method? Is this the monkey-patching 
approach you describe in your second code sketch?

Regards,

Martijn





[sqlalchemy] Re: tracking parents, names

2008-08-18 Thread Martijn Faassen

Hey Michael,

Thanks for the helpful answer.

Michael Bayer wrote:
[snip]
 (after I've reread the above  
 two paragraphs many times it seems like the idea is that the target  
 object doesn't know anything about the name of the relation in which  
 it's collected).

The idea is that if the object graph says:

foo.bar.baz

that'll be:

bar.__name__ == 'bar'
bar.__parent__ is foo
baz.__name__ == 'baz'
baz.__parent__ is bar

In this case:

foo.bar[key]

it'll be:

bar[key].__parent__ is foo.bar
bar[key].__name__ == key

where 'bar' is a collection.

 I'm not as fluent with the collection API (since Jason wrote it) but  
 it turns out its pretty easy to get at the parent object and the  
 relation which its mapped through a collection directly, since  
 collections always have an adapter present which maintains this  
 relationship.

Thanks, that's interesting to know. It turns out, though, that the 
adapter considers the owner state of bar[key] to be foo, so we can't use 
that as the parent. That actually simplifies things: in my modified 
version of _receive the adapter doesn't appear to be necessary at all. 
Still, it's good to have some idea of how to use them. I have this now::

    def _receive(self, item):
        item.__name__ = unicode(self.keyfunc(item))
        item.__parent__ = self
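
As a side note, reaching the adapter from inside the collection looks 
roughly like this, assuming I read the trunk's collections module 
correctly::

    from sqlalchemy.orm.collections import collection_adapter

    adapter = collection_adapter(self)   # CollectionAdapter, if instrumented
    owner = adapter.owner_state.obj()    # the instance owning the collection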


 This would still involve overriding the behavior of all the mutator  
 methods on collections.  I'm not sure why you'd want to override  
 retrieve methods as well; are we able to assign __name__ and  
 __parent__ to elements as they are added, or do these change depending  
 on where the item is accessed from?  (and if the latter, is it  
 wrapped in a container of some kind, or are we guaranteeing  
 single-threaded access?)

It's important to have this information no matter how the object graph 
is constructed, either by construction (as I think we covered with the 
__setitem__ change), or by retrieval from the database. So yes, the 
__parent__ information does indeed change depending on where the item is 
accessed.

I gather, however, that there might be a problem with threaded access. 
It's certainly possible for the same object to be accessed in multiple 
threads, but each thread has its own session associated with it using 
scoped sessions. I take it that isn't enough?

We do have support for a proxy object that can add these attributes 
without modifying the object itself. That probably wouldn't handle the 
__setitem__ case, but it could be used to make sure an object is 
properly wrapped upon access, even though the same object may be 
retrieved with multiple parents. Hm, I see this conflicting with 
SQLAlchemy's ORM, where it's certainly possible for the same object to 
appear multiple times in the object graph.
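
Concretely, I'm thinking of something along the lines of zope.location's 
LocationProxy (a sketch, assuming zope.location is available)::

    from zope.location.location import LocationProxy

    class Animal(object):
        pass

    animal = Animal()
    # wrap without touching the underlying object; the same object can
    # be handed out with a different __parent__/__name__ per context
    located = LocationProxy(animal, None, u'lion')
    assert located.__name__ == u'lion'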

Thanks for the proof of concept. With my modifications it makes my 
tests pass, but that's probably a sign the tests are insufficient. :)

Given the complexities involved, I need to rethink whether this whole 
approach is the right one. If it could be made to work it'd make URL 
construction work out of the box, but there are other ways...

Regards,

Martijn


[sqlalchemy] Re: time to remove create_session?

2008-06-27 Thread Martijn Faassen

Michael Bayer wrote:
 two things are needed:
 
 1. the official way to create a Session when all you want is a  
 Session, with no custom builder class or anything like that.  It  
 should not be grossly inconsistent with the default arguments of the  
 current sessionmaker() call.
 
 2. all the myriad create_session() calls in the unit tests need to  
 call something that acts like the old create_session(), i.e. with  
 all the newfangled things turned off by default.
 
 
 My vote would be for 1. just call Session(), and 2. we just put a  
 create_session() function in the testlib.

So instantiating the Session class directly is the right way to go? I 
switched from that to create_session when I found out about the 
existence of this function, but I can switch back.

Note that I really need to create the sessions themselves; I cannot use 
sessionmaker or scoped_session, as I have a special scoped session that 
needs to be able to create sessions itself.
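
To sketch what that scoped session does internally (get_current_engine 
is a made-up name, and I'm assuming the trunk's keyword arguments)::

    from sqlalchemy.orm.session import Session

    def make_session():
        # one plain Session per scope, bound to whatever engine the
        # current Zope site is configured with
        engine = get_current_engine()  # hypothetical lookup
        return Session(bind=engine, autocommit=False, autoflush=True)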

Regards,

Martijn


[sqlalchemy] Re: overzealous check breaks doctesting

2008-06-25 Thread Martijn Faassen

jason kirtland wrote:
[snip]
 Could the check somehow be modified to still find true builtins but not 
 those defined in a doctest?
 
 Sure.  Any suggestions for an alternate check?

Heh, no. It's quite difficult to come up with any alternative...

I wonder why doctest.DocFileSuite makes these classes appear as __builtin__.

I just went digging in doctest, but unfortunately this seems to be an 
unavoidable side effect of the behavior of the 'exec' statement, which 
doctest uses.

I've just done some experiments, but whatever I do, any class definition 
I exec ends up with its __module__ set to __builtin__.

Regards,

Martijn




[sqlalchemy] Re: overzealous check breaks doctesting

2008-06-25 Thread Martijn Faassen

jason kirtland wrote:
 Martijn Faassen wrote:
 jason kirtland wrote:
 [snip]
 Could the check somehow be modified to still find true builtins but not 
 those defined in a doctest?
 Sure.  Any suggestions for an alternate check?
 Heh, no. It's quite difficult to come up with any alternative...

 I wonder why doctest.DocFileSuite makes these classes appear as __builtin__.

 I just went digging in doctest, but unfortunately this seems to be an 
 unavoidable side effect of the behavior of the 'exec' statement, which 
 doctest uses.

 I've just done some experiments, but whatever I do, any class definition 
 I exec ends up with its __module__ set to __builtin__.
 
 I think that comes from __name__ in the exec globals context:

Yes, I just found that out myself; Fred Drake told me. zope.testing 
actually has some code that does a workaround based on this trick too.
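
For the archives, the trick boils down to this (Python 2 exec 
semantics; the 'my_doctest' name is arbitrary)::

    src = "class Foo(object):\n    pass"
    globs = {'__name__': 'my_doctest'}
    exec src in globs
    print globs['Foo'].__module__   # prints 'my_doctest', not '__builtin__'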

Regards,

Martijn


[sqlalchemy] overzealous check breaks doctesting

2008-06-24 Thread Martijn Faassen

Hi there,

I'm writing a doctest in which I include a MappedCollection subclass. In 
my doctest, I create such a subclass::

    >>> class Foo(MappedCollection):
    ...     pass

Unfortunately, later on sqlalchemy.orm.collections has a check to 
determine whether Foo is really a builtin, and if so, it refuses to 
instrument it, here::

    def _instrument_class(cls):
        ...
        # In the normal call flow, a request for any of the 3 basic collection
        # types is transformed into one of our trivial subclasses
        # (e.g. InstrumentedList).  Catch anything else that sneaks in here...
        if cls.__module__ == '__builtin__':
            raise sa_exc.ArgumentError(
                "Can not instrument a built-in type. Use a "
                "subclass, even a trivial one.")

Unfortunately, when Foo is 'cls', it will have __module__ set to 
'__builtin__' even though Foo is not a builtin.

I can work around this issue in the doctest by doing something evil 
like::

    Foo.__module__ = 'foo'

Things then seem to work.

Could the check somehow be modified to still find true builtins but not 
those defined in a doctest? I can also see this as being a doctest 
problem; perhaps the __module__ should really be set to '__main__' in 
them, but it might be easier to get it fixed here...

Regards,

Martijn


[sqlalchemy] Re: inconsistency in default arguments in trunk

2008-06-20 Thread Martijn Faassen

Hi there,

Michael Bayer wrote:
[snip]
 I think the solution here, at the least, would be to remove prominent 
 advertisement of create_session().  I'm not sure yet about 
 deprecating/not publicizing it.  I still think it's a useful function 
 for ad-hoc creation of sessions with no extra behavior, but this may 
 very well be just my skewed view of it.

The reason I'm using create_session is that I don't see a way to use 
sessionmaker in the Zope integration code: I have my own session 
factory that I pass to scoped_session.

Before I was using the actual Session class directly, but I figured 
create_session would be better. I noticed then that sessionmaker takes 
different defaults than create_session, and the defaults used by 
sessionmaker for autocommit and autoflush were the ones I actually need.

Anyway, I work around this in my integration code by setting the 
defaults back to the ones sessionmaker uses, but it rather surprised me 
to notice the difference.
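
The workaround amounts to something like this (a sketch; keyword names 
as on the current trunk)::

    from sqlalchemy.orm import create_session

    # restore the sessionmaker-style defaults that create_session flips
    session = create_session(autocommit=False, autoflush=True)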

Regards,

Martijn


[sqlalchemy] Re: inconsistency in default arguments in trunk

2008-06-20 Thread Martijn Faassen

[EMAIL PROTECTED] wrote:
 On Friday 20 June 2008 17:38:25 Michael Bayer wrote:
[snip]
 move it into some convenience.py? 
 together with other such non-mandatory but convenient shortcuts that 
 most people would make in one form or another

In my mind, create_session is actually not the shortcut or convenience; 
it's more low-level than sessionmaker, which is quite magical in 
comparison. So I'd say sessionmaker is the convenience. The thing about 
good conveniences is that you'd want most people to use them.

I'm just in a special position, as I'm basically replacing sessionmaker 
with Zope 3 style configuration. Working together with another magic 
convenience, scoped_session, this can do neat things like having the 
same 'Session' be connected to different engines in different contexts.
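
To make that concrete (a sketch; engine_for_current_site is a made-up 
name for the Zope 3 configuration lookup)::

    from sqlalchemy.orm import scoped_session
    from sqlalchemy.orm.session import Session as SessionBase

    def session_factory():
        # which engine this binds to depends on the current site, so
        # the same scoped 'Session' works against different databases
        engine = engine_for_current_site()  # hypothetical lookup
        return SessionBase(bind=engine)

    Session = scoped_session(session_factory)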

Regards,

Martijn


[sqlalchemy] inconsistency in default arguments in trunk

2008-06-19 Thread Martijn Faassen

Hi there,

I just noticed on the trunk that sessionmaker() and create_session() 
use opposite defaults:

                 sessionmaker()   create_session()
    autocommit   False            True
    autoflush    True             False
    autoexpire   True             False

Is this intentional?

Regards,

Martijn


[sqlalchemy] Re: reusing or recreating engines

2008-06-17 Thread Martijn Faassen

Hey,

Michael Bayer wrote:
 On Jun 17, 10:04 am, Martijn Faassen [EMAIL PROTECTED] wrote:
 Hi there,

 A question came up when thinking about how to integrate SQLAlchemy with
 Zope.

 Is it all right to recreate an engine, once per session scope (thread,
 say), or is it better to reuse the same engine as much as possible? How
 expensive is engine creation? Do you miss features like, say, connection
 pooling, if you recreate an engine?

 
 you should reuse a single engine.  It contains a pool of connections,
 so recreating engines means new connections are constantly rebuilt and
 it defeats the purpose of the pool.  Other than that the creation of
 an engine is not very expensive but even a small expense is needless
 here.

Okay, understood, thanks for the answer!

 I can't think of any advantage to recreating engines on each request 
 (except if you're concerned about no-longer-used applications keeping 
 their engines around... in which case I'd use a WeakValueDict).

We wouldn't recreate the engine on each request, just each time a 
session is constructed for a new thread (or, actually, a new 
application/thread combination). (That is, ScopedSession would get a 
session_factory that did this.)

The reason I'm checking is that it might simplify some initialization 
code if we could just re-create the engine whenever a new thread kicks 
in. We'll figure out how to retain a single engine, though.
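
The plan is probably a simple module-level cache along these lines (a 
sketch; keying on the DSN is illustrative)::

    from sqlalchemy import create_engine

    _engines = {}

    def get_engine(dsn):
        # reuse one engine (and thus one connection pool) per DSN,
        # regardless of which thread asks for it
        try:
            return _engines[dsn]
        except KeyError:
            engine = _engines[dsn] = create_engine(dsn)
            return engine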

Regards,

Martijn

