[sqlalchemy] Re: SA and Filemaker Pro
On Sep 3, 2008, at 8:58 AM, KMCB wrote:

> Michael, I had thought that may be the case, so I had already started with the databases/mssql.py file. Do you think that is the best place to start? Is SA structured so that all DB-specific components are stored in that location? Just trying to scope the effort from my end; any thoughts are appreciated. So far, I have noticed that for non-DSN connections some minor changes are required in the pyodbc calls. It was pretty frustrating, but I found that I needed to use SDSN instead of DATABASE or DB (the FileMaker docs were worthless).

The FileMaker Pro dialect would probably be in its own file, in a similar manner to the access.py dialect. We've made efforts towards a new system whereby all the ODBC code for any kind of database backend would be centralized, but this hasn't been completed as of yet.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---
[sqlalchemy] Re: Concatenating column names and text() clauses
On Aug 21, 5:25 pm, Michael Bayer [EMAIL PROTECTED] wrote:

> On Aug 21, 2008, at 4:28 PM, Jeff wrote:
>
>> Hello. I'm trying to write something to generate full-text searches for Postgres. Here's the function I've got so far:
>>
>>     from sqlalchemy import sql
>>     import operator
>>
>>     def full_text(fields, text):
>>         def alternate(items):
>>             for i in items[:-1]:
>>                 yield i
>>                 yield sql.text(' ')
>>             yield items[-1]
>>         return "%s @@ to_tsquery('%s')" % (
>>             sql.func.to_tsvector(reduce(operator.add, alternate(fields))),
>>             text)
>>
>> You pass it a list of columns to match against and a full-text string to search with, and it returns a string that you can use in a filter() clause. It works fine if you only pass one or two column names; any more than that, and it dies. At first I thought it was something wrong with my generator, but it turns out there's something wrong with the way I'm concatenating columns and raw text (that, or there's a bug there). Below is a simple example of what goes wrong:
>>
>>     from sqlalchemy import *
>>
>>     table1 = Table('table1', MetaData(),
>>         Column('col1', String()),
>>         Column('col2', String()),
>>         Column('col3', String()),
>>     )
>>
>>     # these work
>>     print table1.c.col1 + table1.c.col2 + table1.c.col3
>>     print table1.c.col1 + text('sdf') + table1.c.col2 + table1.c.col3
>>     print table1.c.col1 + text('sdf') + table1.c.col2
>>
>>     # these don't
>>     print table1.c.col1 + table1.c.col2 + text('sdf')
>>     print table1.c.col1 + table1.c.col2 + table1.c.col3 + text('sdf')
>>     print table1.c.col1 + text('sdf') + table1.c.col2 + text('sdf')
>>     print table1.c.col1 + text('sdf') + text('sdf')
>>
>> The ones that don't work die with AttributeError: 'NoneType' object has no attribute 'adapt_operator' on line 1328 in sqlalchemy/sql/expression.py. Is this a bug, or am I doing this wrong? And all that aside, is there an existing way to generate a full-text search like that?
>
> In general, text() is intended primarily for fully constructed SQL statements, and does not implement the semantics of an element used within an expression. For individual literal components, use the literal() function, which produces a bind parameter with all the semantic properties of any other column-oriented expression element. If that doesn't resolve your issue we can try again with something more concrete.

Michael,

Thanks, as usual, for the help. Sorry I didn't respond earlier; I was on vacation. I tried what you suggested, but I couldn't figure out how to apply it properly. If I just change the line:

    yield sql.text(' ')

to:

    yield sql.literal(' ')

then it requires that I pass in parameters when I actually run the query, which is kind of silly in this case. I think I should explain better what I'm trying to do with that generator, as it's a bit weird. I need a string like this:

    table.col1 || ' ' || table.col2 || ' ' || table.col3

which I can then pass to Postgres's to_tsvector() (via sql.func). The extra spaces are necessary for the full-text search to work properly. So, is there a better way to do this? I think I'm overcomplicating it.

Thanks again,
Jeff
[sqlalchemy] Re: Concatenating column names and text() clauses
On Sep 3, 2008, at 11:09 AM, Jeff wrote:

> I tried what you suggested, but I couldn't figure out how to apply it properly. If I just change the line:
>
>     yield sql.text(' ')
>
> to:
>
>     yield sql.literal(' ')
>
> then it requires that I pass in parameters when I actually run the query, which is kind of silly in this case.

Well, bind params are used, but you don't have to pass them in explicitly; the compiled statement has their values built in. This is the default behavior so that quotes and such are properly escaped.

> I think I should explain better what I'm trying to do with that generator, as it's a bit weird. I need a string like this:
>
>     table.col1 || ' ' || table.col2 || ' ' || table.col3
>
> which I can then pass to Postgres's to_tsvector() (via sql.func). The extra spaces are necessary for the full-text search to work properly.

If you don't want the binds to be used, say:

    func.to_tsvector(
        t.c.col1 + literal_column("' '", type_=String) +
        t.c.col2 + literal_column("' '", type_=String) +
        t.c.col3)

The type_=String is not strictly needed, but it allows the precedence rules to prevent a bunch of unnecessary parens.
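The interleaving trick Jeff describes, building col1 || ' ' || col2 || ' ' || col3 by alternating columns and separators, can be sketched at the plain-Python level with strings standing in for SQLAlchemy column expressions (the function and variable names here are illustrative, not from the thread):

```python
from functools import reduce  # builtin in Python 2, stdlib in Python 3
import operator

def interleave(items, sep):
    """Yield items with sep between each pair, e.g. a, sep, b, sep, c."""
    for item in items[:-1]:
        yield item
        yield sep
    yield items[-1]

def tsvector_expr(fields):
    # join the column expressions with the SQL string literal ' '
    return reduce(operator.add, interleave(fields, " || ' ' || "))

expr = tsvector_expr(["table.col1", "table.col2", "table.col3"])
# expr == "table.col1 || ' ' || table.col2 || ' ' || table.col3"
```

With real SQLAlchemy columns the same reduce(operator.add, ...) shape applies; only the separator element changes (a literal_column instead of a plain string).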
[sqlalchemy] default values for columns in select mappers
Hi all,

I just started playing with SQLAlchemy today (after several years of plain SQL experience) and I must say I'm impressed. I'm reading my way through the docs now, but there is one thing I can't seem to find. Let me briefly explain the situation.

I was given the task of rewriting a database which has been in use for many years now. And since many applications depend on its current structure I can only make small changes at a time. My plan is to rewrite all the attached applications, but this time abstracting the app's logic from the data structure itself. I think that SQLAlchemy will allow me to achieve this by building a library of POPOs and some mappers to the data structure. That way I can rework the database and only have to adapt the mappers to keep my apps running.

So I started that and immediately stumbled upon a 'common' situation which I don't know how to solve in SQLA. So here goes: I have 1 table (mytable) which is structured somewhat like this:

    id = int (primary key)
    name = varchar()
    type = int

Now all rows with a type of, say, 1 'constitute' a MyObject. And rows with type, say, 2 are MyOtherObject instances, and so on. So in my applications I want to create a class like this:

    class MyObject(object):
        def __init__(self, name):
            self.name = name

Then I need to map this to the database. So I write a mapper like this:

    myobject_table = select([mytable], mytable.c.type == 1).alias('somealias')

(not sure if this is entirely correct. I'm writing this post at home and don't have access to my code at the office. But this is not the point so...)

    mapper(MyObject, myobject_table)

So far all ok, but now when I insert new instances of type MyObject, the type column is not filled with value 1. The instance is inserted ok except for this 'hidden' column. I don't want to add this column to my MyObject class since I foresee that the structure of my DB will change and then there will be no more value for the type column. The column 'type' belongs to the internals of my data structure and shouldn't be visible in my apps. In the new structure there will be a table just for MyObject instances.

Does any guru out there know how to solve this rather 'common' problem? Many thanks for reading this post!

-- Wim
[sqlalchemy] JOINING
here is my code:

    c_table = Table('campaign', metadata,
        Column('id', Integer, primary_key=True),
        Column('content', Unicode(200)),
    )

    m_table = Table('mailings', metadata,
        Column('id', Integer, primary_key=True),
        Column('campaign_id', Integer, ForeignKey('campaign.id')),
        Column('date', DateTime),
    )

    class Mailing(object):
        pass

    class Campaign(object):
        pass

    mapper(Campaign, c_table)
    mapper(Mailing, m_table, properties={
        'published_campaign': relation(Campaign, uselist=False)
    })

my question is this: when I do the following:

    mailing = session.query(Mailing).options(eagerload('published_campaign')).get(1)

the resulting query does an outer join. How can I have it do a regular join?

thanks,
Matt
[sqlalchemy] Re: default values for columns in select mappers
On Sep 3, 2008, at 2:59 PM, [EMAIL PROTECTED] wrote:

> I have 1 table (mytable) which is structured somewhat like this:
>
>     id = int (primary key)
>     name = varchar()
>     type = int
>
> Now all rows with a type of, say, 1 'constitute' a MyObject. And rows with type, say, 2 are MyOtherObject instances, and so on. So in my applications I want to create a class like this:
>
>     class MyObject(object):
>         def __init__(self, name):
>             self.name = name
>
> Then I need to map this to the database. So I write a mapper like this:
>
>     myobject_table = select([mytable], mytable.c.type == 1).alias('somealias')
>
> (not sure if this is entirely correct. I'm writing this post at home and don't have access to my code at the office. But this is not the point so...)
>
>     mapper(MyObject, myobject_table)

It is quite common, and the pattern you describe is single table inheritance. You can map straight to the table, and the type column will be taken care of for you. You can configure subtypes corresponding to each value of type, and Query for just that subclass (or for all classes).

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_mapper_inheritance_single

The feature is available in 0.4 and 0.5, but has some improvements to its behavior in the 0.5 series.
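A minimal runnable sketch of the single-table inheritance pattern Michael points to, written against the modern SQLAlchemy declarative API rather than the 0.4/0.5 mapper() spelling used in the thread (the spelling is illustrative; the table and class names follow Wim's post):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class MyObject(Base):
    __tablename__ = "mytable"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(Integer)
    # map type values to subclasses; the mapper manages the column itself
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": 1}

class MyOtherObject(MyObject):
    # same table, different discriminator value
    __mapper_args__ = {"polymorphic_identity": 2}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(engine)

with Session() as session:
    session.add(MyObject(name="a"))
    session.add(MyOtherObject(name="b"))
    session.commit()
    # the discriminator is filled in automatically on INSERT
    types = [t for (t,) in session.query(MyObject.type).order_by(MyObject.id)]
```

The key point for the original question: the type discriminator is populated automatically on INSERT, so MyObject never needs to expose it as an attribute of the application class.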
[sqlalchemy] Re: default values for columns in select mappers
Thanks! That's indeed the stuff I was looking for!

On Wed, Sep 3, 2008 at 9:23 PM, Michael Bayer [EMAIL PROTECTED] wrote:

> It is quite common, and the pattern you describe is single table inheritance. You can map straight to the table, and the type column will be taken care of for you. You can configure subtypes corresponding to each value of type, and Query for just that subclass (or for all classes).
>
> http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_mapper_inheritance_single
>
> The feature is available in 0.4 and 0.5, but has some improvements to its behavior in the 0.5 series.
[sqlalchemy] Re: SA and Filemaker Pro
Michael,

I had thought that may be the case, so I had already started with the databases/mssql.py file. Do you think that is the best place to start? Is SA structured so that all DB-specific components are stored in that location? Just trying to scope the effort from my end; any thoughts are appreciated. So far, I have noticed that for non-DSN connections some minor changes are required in the pyodbc calls. It was pretty frustrating, but I found that I needed to use SDSN instead of DATABASE or DB (the FileMaker docs were worthless).

Thanks,
-kmcb

On Sep 2, 10:36 pm, Michael Bayer [EMAIL PROTECTED] wrote:

> On Sep 1, 10:01 pm, KMCB [EMAIL PROTECTED] wrote:
>
>> Hello, Being new to SA, I have read some more information and maybe I can be more specific. I think I need a dialect file for FileMaker through pyodbc. I have gotten the pyodbc to FMP connection working. -kmcb
>
> one does not exist at the moment but this is something we'd accept as a contribution...
[sqlalchemy] Re: Accessing Views with SqlAlchemy
On Sep 3, 11:29 am, Mike [EMAIL PROTECTED] wrote:

> Hi, I need to interface with a couple of Views in Microsoft SQL Server 2000. How do I go about doing this with SqlAlchemy? Since a View doesn't have columns per se, how do I go about creating a class and a table object to map them? I tried my Google-Fu, but there's not much out there on this subject. The SqlAlchemy wiki / bug list seems to have some information, but that looks kind of sketchy. Any advice would be appreciated. I am using Python 2.5.2 with SA 0.5.0beta3. Thanks! Mike

Replying to my own message seems kind of schizo, but I found one solution. I reflected the view into a Table() object and then used the select method along with and_ and not_. I would prefer to use the session object, but this works. I'm currently converting the following SQL:

    SELECT LNAME, FNAME, NETNAME
    FROM MyView
    WHERE (DEPT = '%s') AND (NETNAME <> '')

I then access it like this:

    engine = create_engine('mssql://user:[EMAIL PROTECTED]/DB')
    meta = MetaData(engine)
    emp_tbl = Table('MyView', meta, autoload=True)
    s = select([emp_tbl.c.LNAME, emp_tbl.c.FNAME, emp_tbl.c.NETNAME],
               and_(emp_tbl.c.HOME_DEPT == deptNum,
                    not_(emp_tbl.c.NETNAME == '')))
    res = engine.execute(s)
    row = res.fetchall()

Is there a less messy way to accomplish this? (Yes, I am a SQL newb!)

Mike
[sqlalchemy] csv engine?
Hello,

I was wondering if there are any plans to have a csv engine for sqlalchemy. I would like to see support for csv. There are some cases where csv is the best way to convert data to and from, especially when the data requires cleaning. What I would like to see is SQLAlchemy wrapping the csv module, providing file definitions and some of the SQL functionality in the csv world. This would be really helpful when moving things over from one system to another. Ideas?

Thanks,
Lucas

--
OpenOffice and Python http://lucasmanual.com/mywiki/OpenOffice
Commercial Grade Backup with Bacula http://lucasmanual.com/mywiki/Bacula
[sqlalchemy] Re: JOINING
On Sep 3, 2008, at 3:26 PM, mg wrote:

> my question is this: when I do the following:
>
>     mailing = session.query(Mailing).options(eagerload('published_campaign')).get(1)
>
> the resulting query does an outer join. How can I have it do a regular join?

It depends on what you're trying to accomplish here.

1. If you just want to join to the other table, it's session.query(Mailing).join(Mailing.published_campaign), but this will not affect the published_campaign collection; that's held as a separate join.

2. If you're just trying to optimize the eager load:

    sess.query(Mailing).join(Mailing.published_campaign).options(contains_eager('published_campaign'))

3. Otherwise, eagerload always uses an outer join. A possible future improvement would be for it to use an inner join if the foreign key column is not nullable, but that's not the case now.
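Option 2 above, an inner join combined with contains_eager() so the joined row also populates the relation, can be sketched roughly as follows. The example is written against the modern SQLAlchemy API (relationship() and contains_eager(Mailing.published_campaign)) rather than the 0.5 spelling in the thread, so treat it as an approximation:

```python
from sqlalchemy import Column, DateTime, ForeignKey, Integer, Unicode, create_engine
from sqlalchemy.orm import contains_eager, declarative_base, relationship, sessionmaker

Base = declarative_base()

class Campaign(Base):
    __tablename__ = "campaign"
    id = Column(Integer, primary_key=True)
    content = Column(Unicode(200))

class Mailing(Base):
    __tablename__ = "mailings"
    id = Column(Integer, primary_key=True)
    campaign_id = Column(Integer, ForeignKey("campaign.id"))
    date = Column(DateTime)
    published_campaign = relationship(Campaign)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(engine)

with Session() as session:
    session.add(Mailing(id=1, published_campaign=Campaign(content="spring launch")))
    session.commit()

    # join() emits a plain (inner) JOIN; contains_eager() tells the ORM to
    # populate published_campaign from the already-joined columns
    q = (session.query(Mailing)
         .join(Mailing.published_campaign)
         .options(contains_eager(Mailing.published_campaign)))
    sql = str(q)
    mailing = q.filter(Mailing.id == 1).one()
```

Note the trade-off Michael mentions: with an inner join, Mailing rows whose campaign_id is NULL drop out of the result entirely, which is exactly why the plain eager load defaults to LEFT OUTER JOIN.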
[sqlalchemy] Re: Accessing Views with SqlAlchemy
On Sep 3, 2008, at 4:14 PM, Mike wrote:

> Replying to my own message seems kind of schizo, but I found one solution. I reflected the view into a Table() object and then used the select method along with and_ and not_. I would prefer to use the session object, but this works. [...] Is there a less messy way to accomplish this? (Yes, I am a SQL newb!)

I'm surprised the autoload works for a view. If you have that, then you should be able to just make a mapper() to that Table as usual. You might need to specify primary_key to your mapper and/or Table, e.g.:

    Table('name', meta, Column('id', Integer, primary_key=True), autoload=True)

You just can't issue any changes to the object (unless the view is writeable).
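Michael's suggestion, reflecting the view but handing it an explicit primary key column, can be sketched with a SQLite view standing in for the MSSQL one. This uses the modern spelling autoload_with in place of autoload=True, and all table, view, and column names here are made up for illustration:

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE emp (id INTEGER PRIMARY KEY, lname TEXT, dept TEXT)"))
    conn.execute(text("INSERT INTO emp VALUES (1, 'Smith', '10')"))
    conn.execute(text("CREATE VIEW myview AS SELECT id, lname, dept FROM emp"))

meta = MetaData()
# a view reflects with no primary key metadata, so name one explicitly;
# the remaining columns are filled in by reflection
view = Table("myview", meta,
             Column("id", Integer, primary_key=True),
             autoload_with=engine)

with engine.connect() as conn:
    rows = conn.execute(view.select()).fetchall()
```

With the primary key declared, the same Table can be handed to an ORM mapping for read-only use, which is what the thread is after.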
[sqlalchemy] Accessing Views with SqlAlchemy
Hi,

I need to interface with a couple of Views in Microsoft SQL Server 2000. How do I go about doing this with SqlAlchemy? Since a View doesn't have columns per se, how do I go about creating a class and a table object to map them? I tried my Google-Fu, but there's not much out there on this subject. The SqlAlchemy wiki / bug list seems to have some information, but that looks kind of sketchy. Any advice would be appreciated. I am using Python 2.5.2 with SA 0.5.0beta3.

Thanks!
Mike
[sqlalchemy] Re: Accessing Views with SqlAlchemy
On Sep 3, 3:45 pm, Michael Bayer [EMAIL PROTECTED] wrote:

> I'm surprised the autoload works for a view. If you have that, then you should be able to just make a mapper() to that Table as usual. You might need to specify primary_key to your mapper and/or Table, e.g. Table('name', meta, Column('id', Integer, primary_key=True), autoload=True). You just can't issue any changes to the object (unless the view is writeable).

I guess my issue is getting my mind around how to create the class object to map to. I didn't create this view or the tables that it manipulates, so I'm not sure what attributes / properties to give the class. I guess I can get the column names that are returned in SQL Server Enterprise Manager and see if those work for the class attributes.

Mike
[sqlalchemy] Re: JOINING
So what I am understanding is that right now, SA only does LEFT OUTER JOINS? Or am I misunderstanding?

thanks,
Matt

On Sep 3, 3:49 pm, Michael Bayer [EMAIL PROTECTED] wrote:

> It depends on what you're trying to accomplish here. 1. If you just want to join to the other table, it's session.query(Mailing).join(Mailing.published_campaign), but this will not affect the published_campaign collection; that's held as a separate join. 2. If you're just trying to optimize the eager load: sess.query(Mailing).join(Mailing.published_campaign).options(contains_eager('published_campaign')). 3. Otherwise, eagerload always uses an outer join; a possible future improvement would be for it to use an inner join if the foreign key column is not nullable, but that's not the case now.
[sqlalchemy] Multiple Inheritance
I was wondering if it is possible to set up joined table inheritance so that a subclass inherits from more than one base table. To extend the example given in the documentation, we would have a base class 'Employee' and a base class 'Citizen', such that an 'Engineer' would inherit from both the Employee and Citizen classes and have independent 'citizen_id' and 'employee_id'. One could imagine other classes that only inherit from either employee or citizen.

    employees = Table('employees', metadata,
        Column('employee_id', Integer, primary_key=True),
        Column('employee_type', String(30), nullable=False)
    )

    citizens = Table('citizens', metadata,
        Column('citizen_id', Integer, primary_key=True),
        Column('citizen_type', String(30), nullable=False)
    )

An engineer who is both an employee and a citizen would have an employee_id and a citizen_id:

    engineers = Table('engineers', metadata,
        Column('id', Integer, primary_key=True),
        Column('employee_id', Integer, ForeignKey('employees.employee_id')),
        Column('citizen_id', Integer, ForeignKey('citizens.citizen_id')),
        Column('engineer_info', String(50)),
    )

And the classes would look like:

    class Employee(object): pass
    class Citizen(object): pass
    class Engineer(Employee, Citizen): pass

And the mappers for the base classes would be something like:

    mapper(Employee, employees,
           polymorphic_on=employees.c.employee_type,
           polymorphic_identity='employee')
    mapper(Citizen, citizens,
           polymorphic_on=citizens.c.citizen_type,
           polymorphic_identity='citizen')

While the mapper for the engineer has a hypothetical argument 'inherits_multi' and looks like:

    mapper(Engineer, engineers,
           inherits_multi=[Employee, Citizen],
           polymorphic_identity='engineer')

I realize that the mapper does not have an 'inherits_multi' argument and that multiple inheritance is not supported. I also saw this thread from back in 2006: http://www.mail-archive.com/[EMAIL PROTECTED]/msg03303.html

For my application, this pattern is important (the above is only an example of the pattern; I'm not really modeling employees and citizens) and I was wondering if anyone had any suggestions as to how to go about implementing this functionality, which I'm planning on doing. Also, thanks to the folks on this list who have responded to my previous questions. I'm hoping that someone else would find this functionality useful and I can share my (yet to be had) solution with the community.

Sam
[sqlalchemy] Re: JOINING
On Sep 3, 2008, at 6:06 PM, mg wrote:

> So what I am understanding is that right now, SA only does LEFT OUTER JOINS? Or am I misunderstanding?

The eager load feature of a relation(), which is responsible for loading the child items of a collection, does only OUTER JOIN, since it's assumed you want the parent object in all cases. Eager loading has nothing to do with the selection of rows (i.e. the WHERE clause), only with populating an object graph more rapidly than would otherwise be the case. As I illustrated earlier, performing an INNER JOIN for the purposes of query criterion is straightforward.
[sqlalchemy] Re: Multiple Inheritance
On Sep 3, 2008, at 8:47 PM, Sam Magister wrote:

> I was wondering if it is possible to set up joined table inheritance so that a subclass inherits from more than one base table. To extend the example given in the documentation, we would have a base class 'Employee' and a base class 'Citizen', such that an 'Engineer' would inherit from both the Employee and Citizen classes and have independent 'citizen_id' and 'employee_id'. One could imagine other classes that only inherit from either employee or citizen.
>
>     employees = Table('employees', metadata,
>         Column('employee_id', Integer, primary_key=True),
>         Column('employee_type', String(30), nullable=False)
>     )
>
>     citizens = Table('citizens', metadata,
>         Column('citizen_id', Integer, primary_key=True),
>         Column('citizen_type', String(30), nullable=False)
>     )
>
> An engineer who is both an employee and a citizen would have an employee_id and a citizen_id:
>
>     engineers = Table('engineers', metadata,
>         Column('id', Integer, primary_key=True),
>         Column('employee_id', Integer, ForeignKey('employees.employee_id')),
>         Column('citizen_id', Integer, ForeignKey('citizens.citizen_id')),
>         Column('engineer_info', String(50)),
>     )

This pattern doesn't entirely make sense: the citizen_type and employee_type columns seem superfluous and redundant against each other, since we really can't load Engineer rows without querying all three tables. In that sense it takes on all the limitations of concrete table inheritance, which doesn't use a type column at the table level.

Also, a key aspect of SQLA's polymorphic loading capability is that a mapper is aware of all of its possible subtypes. If multiple inheritance is part of that, the geometry of "what are all my subtypes?" becomes more chaotic. We'd have to join to every table in the whole hierarchy to identify the type. To be fair, I think this is a behavior that Hibernate supports, but they only support it for single inheritance (and they also boast of how difficult it was to implement).

SQLA's usual notion of primary key with respect to joined table inheritance wouldn't work here either (engineer's PK is either (x, y) or (x, y, z), while employee and citizen are just (x)), suggesting again a more concrete notion: you need to select from the subclass table in order to locate the object, and the primary key itself does not provide enough information to select the appropriate subclass table.

The standard patterns for multiple inheritance in SQL are listed at http://www.agiledata.org/essays/mappingObjects.html#MappingMultipleInheritance . There you'll find examples of concrete, single, and joined table multiple inheritance. You can certainly map to any of the hierarchies indicated in that article, but you wouldn't be able to take advantage of SQLA's polymorphic capabilities, which are designed to handle only single inheritance. You'd really want to make your Engineer(Employee, Citizen) class and just map it to engineers.join(citizens).join(employees). That would get your schema going, just without SQLA having any awareness of the inheritance portion of it, and it is extremely similar to a plain concrete setup, which is pretty much all you'd get anyway without the ability to load polymorphically.

> For my application, this pattern is important (the above is only an example of the pattern; I'm not really modeling employees and citizens) and I was wondering if anyone had any suggestions as to how to go about implementing this functionality, which I'm planning on doing.
If you mean implementing this within SQLAlchemy itself, such that its core notion of inheritance is modified to support multiple base classes spread across multiple tables, it would be an enormously difficult feature to implement. For polymorphic loading, at the very least SQLA would need to lose its dependency on discriminator columns and learn to just look for row presence in a table as evidence of belonging to a certain type (that alone is not necessarily a bad thing, as Hibernate does this too). It would also need to learn to create joins to other tables corresponding to horizontal and vertical relationships, and be able to guess the type of a row based on a complicated equation of row presence. All of the ambiguity introduced by multiple inheritance, like diamond patterns and such, would also have to be dealt with. So I'm not really sure that even with the best of efforts, multiple inheritance could ever be nearly as transparent as single inheritance. Beyond the effort level to implement, I'd be very concerned about the complexity it would introduce to SQLA's internals. The use case itself seems exceedingly rare. While a recipe that gets the job done is entirely fine in this case, I'm fairly skeptical of functionality like this as a core feature.
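A minimal sketch of the "just map Engineer to the join" suggestion above. This uses modern SQLAlchemy imperative-mapping names (registry, Session), not the 0.4/0.5 API current at the time of this thread, and follows the documented "mapping a class against multiple tables" pattern; SQLAlchemy has no awareness of any inheritance here, and no polymorphic loading is involved.

```python
# Sketch: map a plain Engineer class against a three-way join, per the
# suggestion in the thread.  Assumes modern SQLAlchemy (1.4+/2.x).
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                        Table, create_engine, select)
from sqlalchemy.orm import Session, column_property, registry

metadata = MetaData()

employees = Table(
    "employees", metadata,
    Column("employee_id", Integer, primary_key=True),
    Column("employee_type", String(30), nullable=False),
)
citizens = Table(
    "citizens", metadata,
    Column("citizen_id", Integer, primary_key=True),
    Column("citizen_type", String(30), nullable=False),
)
engineers = Table(
    "engineers", metadata,
    Column("id", Integer, primary_key=True),
    Column("employee_id", Integer, ForeignKey("employees.employee_id")),
    Column("citizen_id", Integer, ForeignKey("citizens.citizen_id")),
    Column("engineer_info", String(50)),
)

class Engineer:
    """Plain class - no Employee/Citizen mapping at all."""

# Columns equated by the join condition are folded into single attributes
# explicitly via column_property, as the SQLAlchemy docs recommend when
# mapping a class against a join of multiple tables.
registry().map_imperatively(
    Engineer,
    engineers.join(employees).join(citizens),
    properties={
        "employee_id": column_property(
            engineers.c.employee_id, employees.c.employee_id),
        "citizen_id": column_property(
            engineers.c.citizen_id, citizens.c.citizen_id),
    },
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with Session(engine) as session:
    e = Engineer()
    e.employee_type = "engineer"
    e.citizen_type = "resident"
    e.engineer_info = "builds bridges"
    session.add(e)          # the unit of work writes all three tables
    session.commit()
    loaded = session.execute(select(Engineer)).scalar_one()
```

As the reply notes, this gets the schema going but is essentially a concrete setup: querying Engineer always joins the three tables, and nothing here lets a query for Employee polymorphically return Engineer instances.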
[sqlalchemy] using the native c implementation of ordereddict
Hi All, I am doing some work with xmlrpc. One thing I've noticed is that whenever I pass dict(row) through xmlrpc, I get a key-ordered struct. But this isn't what I really want: I want it ordered by insertion, i.e. the original column order. This led me to look at the util.ordereddict implementation, which is pure Python and therefore slow. I looked around and found this: http://www.xs4all.nl/~anthon/Python/ordereddict/ which is a C implementation. At the bottom of the page there are performance tests; it's much faster. I've got some pretty gigantic tables to pass around, so I think this would really help. Hopefully this could somehow find its way into an official Python release, but before that, we could use this, or just incorporate it somehow into sqlalchemy... as a suggestion. Frank
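Worth noting that a fast ordered dict only fixes half of the problem: even if the sending side keeps column order, XML-RPC marshals a dict into a <struct>, and receivers are free to ignore member order. A sketch of a workaround that survives the wire, using only the standard library (the row here is simulated as a list of (column, value) pairs standing in for a real result row):

```python
# Sketch: preserve column order across XML-RPC by sending a list of
# (key, value) pairs instead of a dict-derived struct.
from collections import OrderedDict  # C-implemented in CPython since 3.5
import xmlrpc.client

# Simulated row in its original column order (in real code this would come
# from iterating a SQLAlchemy result row's items).
row_items = [("id", 1), ("name", "frank"), ("created", "2008-09-03")]

# Plain dicts also preserve insertion order since Python 3.7, but an XML-RPC
# struct's member order is not guaranteed to the receiver -- an array of
# pairs is, so send that instead.
ordered = OrderedDict(row_items)
payload = list(ordered.items())

# Round-trip through the XML-RPC marshaller to show the order survives.
xml = xmlrpc.client.dumps((payload,))
(decoded,), _method = xmlrpc.client.loads(xml)
```

The receiver rebuilds its own ordered mapping from the decoded pair list, so neither end depends on struct member order.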
[sqlalchemy] sql templating for those sql lovers (or just people who work with dirty legacy enterprise data)
Hi all, I've been thinking about using mako as a sql templating engine to dynamically generate sql. The reason is that recently I've realized 90% of my time is really spent generating some kind of analytic views (materialized into materialized views or tables) on existing oracle tables. Since oracle has this magic of rowid, most of the tables I deal with have no real keys or foreign key constraints. Much of the model machinery is really useless to me. In addition, the model paradigm seems to fit only transactional types of work. For analytic work, very often I find myself using the analytic functions and some model queries built into Oracle sql. However, as anyone who has worked with both python (as a frame of reference) and pl-sql knows, the latter is disgusting and quirky to the extreme. Sometimes I wish sqlalchemy could provide the flexibility to let me do all my work purely in python, but due to the above reasons, I really can't quite get there yet. So as I was playing around with pylons, I found that templating with mako really provides that extra flexibility lacking in sqlalchemy. For example, pivoting and unpivoting existing tables is something I find myself doing pretty often. For pivoting it's easy to write a template like:

select col1
% for val in col2_distinct_vals:
, max(decode(col2, ${val}, col3, null)) as col_${val}
% endfor
from some_table
group by col1

and fill col2_distinct_vals with select distinct col2 from some_table. The above is quite intuitive compared to the awkwardness and insanity of hardcoded 500-line sql queries built via union and maintained via find/replace (if you try hard enough, 95% of the stuff done in databases can somehow be crammed into 1 (oracle) query - but I guess only 1 person at only 1 instance of time would be capable of understanding it...). These can even be kept nicely packaged and documented.
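The pivot template above can be sketched in plain Python string-building, to make the generation logic visible without the Mako dependency. The table and column names (some_table, col1/col2/col3) are the hypothetical ones from the post; the distinct values would normally come from running "select distinct col2 from some_table" first:

```python
# Sketch: generate an Oracle decode() pivot query, equivalent to the Mako
# template in the post.  NOTE: pivot values are interpolated directly into
# the SQL text, so in real use they must be trusted/sanitized values.
def pivot_sql(table, group_col, pivot_col, value_col, pivot_vals):
    lines = [f"select {group_col}"]
    for val in pivot_vals:
        lines.append(
            f", max(decode({pivot_col}, '{val}', {value_col}, null))"
            f" as {pivot_col}_{val}"
        )
    lines.append(f"from {table}")
    lines.append(f"group by {group_col}")
    return "\n".join(lines)

# One pivot column per distinct value of col2:
sql = pivot_sql("some_table", "col1", "col2", "col3", ["a", "b"])
print(sql)
```

A template engine buys the same thing with better separation of SQL text from logic, which is the appeal of the Mako approach.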
The sqlalchemy expression language can't really do very well here either (at least it's too proprietary in some ways... sort of like working with pylons versus working with a CMS like Plone - both can be used to create a blog, but one gives more control, and the knowledge is more readily out there). Anyway, to keep this short: can anyone find a way to somehow integrate such a templating idea into sqlalchemy while leveraging the conventions already established in ansi-sql plus some python voodoo? (Of course, the existing stuff is great for creating a transactional type of backend, like a blog or wiki, which then again really occupies 10% of my time, and hopefully (or not) others' as well.) Frank