[sqlalchemy] Re: Mapper with relation/custom join condition fails
On Tue, Jun 19, 2007 at 02:21:51AM +0300, [EMAIL PROTECTED] wrote:
> > Perfectly. Although all the hassle makes me think even harder about whether there is really no other way of connecting these two database tables. A casted join with string comparison gets dirtier every time I look at it. :(
>
> excuse me if it sounds dumb... what about a middleman table of proper (casted) values that is automatically built by some trigger? i can guess at least 3 objections right now, but it is an option in general principle...

Doesn't sound dumb at all. I assume you are thinking of an intermediary table that connects primary keys, e.g. row 10 of the dns table is connected to row 515 of the dhcp table. Yes, I'm considering that, too. With secondary properties SQLAlchemy makes such many-to-many relations really easy. However, in that case I have a higher risk of rows that carry different IP address values. I'll think about it, though. Thanks for the suggestion.

Christoph

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: Mapper with relation/custom join condition fails
> > > Perfectly. Although all the hassle makes me think even harder about whether there is really no other way of connecting these two database tables. A casted join with string comparison gets dirtier every time I look at it. :(
> >
> > excuse me if it sounds dumb... what about a middleman table of proper (casted) values that is automatically built by some trigger? i can guess at least 3 objections right now, but it is an option in general principle...
>
> Doesn't sound dumb at all. I assume you are thinking of an intermediary table that connects primary keys, e.g. row 10 of the dns table is connected to row 515 of the dhcp table.

no, i was thinking of a facade table having just one column, a casted copy of the to-be-casted original column. i guess some primary-key sync/join would be needed too, and you use that table/column instead of the original column. it also looks a bit like multi-table inheritance, this table being the child in the inheritance with one new attribute - but it is not real inheritance, and it has a calculable attribute maintained by triggers, both ways. or you may copy+cast the whole original table... but your new idea of an association table may also work. whatever.
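For reference, the trigger-maintained facade table sketched above can be shown outside SQLAlchemy entirely. This is a minimal sketch in plain sqlite3, with invented table and column names (a hypothetical dns table whose ip column stores a text value that the join needs in casted form); triggers keep a one-column facade table in sync, which can then be joined without a cast in the ON clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- original table: the value that needs casting lives here as text
CREATE TABLE dns (id INTEGER PRIMARY KEY, ip TEXT);

-- facade table: primary key plus the casted copy of dns.ip
CREATE TABLE dns_ip_cast (
    dns_id INTEGER PRIMARY KEY REFERENCES dns(id),
    ip_num INTEGER
);

-- triggers keep the casted copy in sync with the original column
CREATE TRIGGER dns_ins AFTER INSERT ON dns BEGIN
    INSERT INTO dns_ip_cast (dns_id, ip_num)
    VALUES (NEW.id, CAST(NEW.ip AS INTEGER));
END;
CREATE TRIGGER dns_upd AFTER UPDATE OF ip ON dns BEGIN
    UPDATE dns_ip_cast SET ip_num = CAST(NEW.ip AS INTEGER)
    WHERE dns_id = NEW.id;
END;
""")

conn.execute("INSERT INTO dns (id, ip) VALUES (1, '0042')")
row = conn.execute(
    "SELECT ip_num FROM dns_ip_cast WHERE dns_id = 1"
).fetchone()
print(row[0])  # -> 42: the casted value, usable in a clean equality join
```

A mapper could then treat dns_ip_cast as the join target (or as a child in a joined-table arrangement, as the post suggests), leaving the string-comparison cast out of application-level queries.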
[sqlalchemy] Re: postgresql: need schema-qualified foreign keys
This looks like a good solution. I'll need some time to provide a test case, however. If the change breaks existing code, how are cross-schema references supposed to be handled?

Best regards
Klaus

On 18 Jun., 21:54, Michael Bayer [EMAIL PROTECTED] wrote:
> On Jun 18, 2007, at 4:25 AM, [EMAIL PROTECTED] wrote:
> > In my first experiments with elixir I noticed that sqlalchemy doesn't handle foreign keys correctly on autoloaded tables. This has to do with schema handling in the postgresql driver. A foreign key referencing a table in the public schema from outside gets the table name without the schema part and thus locates the table in its own schema. I've appended a trivial fix below.
>
> im afraid this patch breaks several postgres reflection unit tests, since the schema attribute of Table is assumed to be the default schema if not present. by making it present here, it fails to locate tables within its own metadata. i found when trying to create the test case for this issue that it required explicitly stating the default schema name as the schema of the "inside" table. that is also a bug. the patch that fixes all the issues for the test I was able to create is:
>
>     Index: lib/sqlalchemy/schema.py
>     ===================================================================
>     --- lib/sqlalchemy/schema.py    (revision 2742)
>     +++ lib/sqlalchemy/schema.py    (working copy)
>     @@ -701,7 +701,7 @@
>                  raise exceptions.ArgumentError("Invalid foreign key column specification: " + self._colspec)
>              if m.group(3) is None:
>                  (tname, colname) = m.group(1, 2)
>     -            schema = parenttable.schema
>     +            schema = None
>          else:
>              (schema, tname, colname) = m.group(1, 2, 3)
>          table = Table(tname, parenttable.metadata, mustexist=True, schema=schema)
>
> meaning, if you say ForeignKey("sometable.somecolumn") for a particular Column, the schema is assumed to be the default schema, not the schema specified for the Table which contains the ForeignKey object. this means creating a ForeignKey between two tables A and B, both with schema="myschema", would have to look like ForeignKey("myschema.a.somecol"), even though both tables are in the same "myschema" schema. Im OK with that but not sure how disruptive this change would be. if you can provide a test case illustrating your scenario (using Table/MetaData/Engine objects only; no elixir classes please), that would help greatly.
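To make the consequence concrete, here is a hypothetical sketch of what the proposed behaviour would require (table and column names invented; written against the Table/MetaData API discussed in the thread, not a tested patch):

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

meta = MetaData()

a = Table('a', meta,
          Column('somecol', Integer, primary_key=True),
          schema='myschema')

# Under the patched behaviour, the schema must be spelled out even
# though both tables live in 'myschema'; a bare ForeignKey('a.somecol')
# would be resolved against the *default* schema instead.
b = Table('b', meta,
          Column('a_id', Integer, ForeignKey('myschema.a.somecol')),
          schema='myschema')
```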
[sqlalchemy] Re: UniqueObject recipe and turbogears
The way to hook most parts of the ORM is by creating a MapperExtension:
http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_extending

Basically, define a subclass of MapperExtension, overriding whichever method you are interested in (possibly append_result or create_instance in your case), and pass an instance of this class as the 'extension' parameter to the mapper.

However, I think it would be worthwhile trying to understand exactly why the current recipe isn't working for you. (I see someone has updated it to put the session in the hash key - I'm not sure I would have been brave enough to do that until at least one other person had said it was the right thing to do. I don't really know what I'm talking about ;-) )

If you have any places in your code where you create UniqueName objects without explicitly saying UniqueName(whatever), then they won't pass through the metaclass __call__ method, and therefore won't get added to the cache. This also includes situations where the object is loaded from a relation on another class. But I don't understand why that would be a problem, as that would only affect instances that already exist in the database, and because of the identity_map magic, you will only get one object in the session representing that row anyway.

I think a test case is probably needed - you could try to make a single script that starts two threads and performs operations on UniqueName objects, and see if you can get the same failure.

Simon

-----Original Message-----
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of kris
Sent: 18 June 2007 20:48
To: sqlalchemy
Subject: [sqlalchemy] Re: UniqueObject recipe and turbogears

I agree with your summary. I also noted that sqlalchemy doesn't really like to have objects that are linked together and in different sessions. I tried using hash_key = (session.context, name), but this failed in the same way.

Loading the code with log statements, I note that some objects are loaded not with the UniqueName loader, but probably by sqlalchemy itself. I believe it is those objects that are creating the conflict. Is there a way to hook the sqlalchemy loading system, so I can place the object in the cache?
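The core of the recipe under discussion is a metaclass whose __call__ consults a cache before constructing an instance; the failure mode described above is precisely that objects materialized by any other path (e.g. the ORM's own loader) never pass through that hook. A stripped-down, SQLAlchemy-free sketch of the caching metaclass, in modern Python syntax and with hypothetical names:

```python
class UniqueMeta(type):
    """Metaclass that interns instances by a hash key (here: the name)."""

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        cls._cache = {}  # one cache per class using this metaclass

    def __call__(cls, name):
        # Only explicit UniqueName(...) calls pass through here --
        # objects built by other machinery (such as an ORM row loader)
        # bypass this method and therefore never reach the cache.
        if name not in cls._cache:
            cls._cache[name] = super().__call__(name)
        return cls._cache[name]


class UniqueName(metaclass=UniqueMeta):
    def __init__(self, name):
        self.name = name


a = UniqueName('spam')
b = UniqueName('spam')
print(a is b)  # -> True: the second call returned the cached instance
```

This is why Simon points at create_instance/append_result: a MapperExtension hook is the place where ORM-loaded objects could be routed into (or replaced from) the same cache.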
[sqlalchemy] Re: MSSQL: using pyODBC
hmm, with use_scope_identity=False and r2746 it gives:

      File "/home/stefanb/src/hor/sqlalchemy/engine/strategies.py", line 106, in create
        raise TypeError("Invalid argument(s) %s sent to create_engine(), using configuration %s/%s/%s.  Please check that the keyword arguments are appropriate for this combination of components." % (','.join(['%s' % k for k in kwargs]), dialect.__class__.__name__, pool.__class__.__name__, engineclass.__name__))
    TypeError: Invalid argument(s) 'use_scope_identity' sent to create_engine(), using configuration MSSQLDialect_pyodbc/QueuePool/Engine.  Please check that the keyword arguments are appropriate for this combination of components.
[sqlalchemy] Checking for number of related items
I have a mapper defined as

    mapper(HierNode, HierTable, properties={
        'tools': relation(ToolNode),
    })

Usually I am interested in iterating over all 'tools', and lazy loading works fine. However, sometimes I need to determine whether a HierNode has tools or not. Loading all tools would be too expensive for a has_tools property. It would be enough to check for only one tool (LIMIT 1) and return True/False. Is there any way to achieve that?

Andreas
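The SQL being asked for is an existence probe rather than a full collection load. A minimal sketch of the LIMIT 1 pattern in plain sqlite3 (table and column names invented to mirror the mapper above); the same effect in the ORM would come from limiting the related-object query to a single row before checking whether anything came back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hier  (id INTEGER PRIMARY KEY);
CREATE TABLE tools (id INTEGER PRIMARY KEY,
                    hier_id INTEGER REFERENCES hier(id));
INSERT INTO hier VALUES (1), (2);
INSERT INTO tools VALUES (10, 1);   -- node 1 has a tool, node 2 does not
""")

def has_tools(node_id):
    # Fetch at most one row: the database can stop at the first match
    # instead of materializing the whole collection.
    row = conn.execute(
        "SELECT 1 FROM tools WHERE hier_id = ? LIMIT 1", (node_id,)
    ).fetchone()
    return row is not None

print(has_tools(1), has_tools(2))  # -> True False
```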
[sqlalchemy] Re: MSSQL: using pyODBC
shame on me - I omitted your "...adding the keyword parameter...", i.e.

    db_mssql = create_engine('mssql://sa:[EMAIL PROTECTED] use_scope_identity=1', module=pyodbc)

unfortunately, with scope identity it crashes earlier - mssql.py returns None for the id and rolls back.

best regards
stefan
[sqlalchemy] session.clear() not clearing cascaded items?
g'day

i have Links pointing to Nodes. Adding just Links to the session + flush() works as expected - both Links and Nodes get saved. Doing session.close(), though, does not always detach all related objects, i.e. sometimes some Nodes stay with a _session_id on them after the session is long gone.

    for x in mysession: print x

does not print those objects, i.e. they are not explicitly in the session. Is this intended behaviour, and is there some way to clear these cascaded objects too?

svil
[sqlalchemy] Re: multiple mappers on one table problem.
On 6/11/07, Michael Bayer [EMAIL PROTECTED] wrote:
> secondly: a non-primary mapper is just an awkward way of defining an ORM query. Since we support generative queries now, you can just make a Query object with the criterion you're looking for and just hold onto it... you're just adding a single WHERE criterion, so my_secondary_query = query(MyClass).filter(table.c.tipo=='p') is much easier. ideally Query should be replacing 90% of all non_primary mapper use cases.

So in your mind, is my non-primary example over at http://spyced.blogspot.com/2007/01/why-sqlalchemy-impresses-me.html part of the other 10%? I don't think holding onto a query is as elegant as being able to say user.max_order in an eager or lazy context and having that Just Work, but maybe I am missing something.
[sqlalchemy] Re: multiple mappers on one table problem.
On Jun 19, 2007, at 1:43 PM, Jonathan Ellis wrote:
> So in your mind, is my non-primary example over at http://spyced.blogspot.com/2007/01/why-sqlalchemy-impresses-me.html part of the other 10%? I don't think holding onto a query is as elegant as being able to say user.max_order in an eager or lazy context and having that Just Work, but maybe I am missing something.

your blog post is the other 10% :) pretty much, the only time a non-primary mapper provides anything that Query currently does not is if you make a relationship *to* the non-primary mapper. however, it would be nice to totally get rid of non-primary mappers because they are usually the wrong path to take, and they are so much like a Query. ideally, mappers shouldn't have anything to say about how something is loaded, just how it is represented in the database. (this is why i like our low version numbers. if we ever get to the 0.7s, 0.8s, etc., these roles would hopefully be rock solid.) the only thing you are getting in your particular example is the ability for the relationship to get bundled into an eager load.
[sqlalchemy] Re: Performing read-only operations without a session (avoiding leaks)
On 6/15/07, Michael Bayer [EMAIL PROTECTED] wrote:
> create_session().query(Whatever).filter(...).all()

You added .all() when I wasn't looking? :)

--
Mike Orr [EMAIL PROTECTED]
[sqlalchemy] Re: Performing read-only operations without a session (avoiding leaks)
On Jun 19, 3:50 pm, Mike Orr [EMAIL PROTECTED] wrote:
> On 6/15/07, Michael Bayer [EMAIL PROTECTED] wrote:
> > create_session().query(Whatever).filter(...).all()
>
> You added .all() when I wasn't looking? :)

if it's not there i need to add it, since i'd like to get some 0.4 forwards-compatibility going (yes, the new Query interface is fully built in 0.4)