[sqlalchemy] Re: A question about best practices after the Object Relational Tutorial

2009-02-16 Thread Randall Smith

I would put the addCustomer method on something other than the customer 
object, like the thing you're adding the customer to.  Or better yet, 
you'd probably just be appending the customer.  I think for your 
specific example, you should handle customer creation (login + 
password) in __init__.

from hashlib import sha1  # needed for the password hashing below

class Customer(object):

    def __init__(self, login, secret):
        self.password = secret
        self.login = login

    # from Michael's example
    def _set_password(self, pw):
        # store a digest, never the raw password
        self._password = sha1(pw).hexdigest()

    def _get_password(self):
        return self._password

    # hook the getter/setter up so the assignment in __init__ hashes
    password = property(_get_password, _set_password)

    # other helper functions.

    def activate(self):
        self.status = 'a'

    def inactivate(self):
        self.status = 'i'

    def annoyOnThePhone(self):
        dialAsteriskAndAnnoy(self.phone_number)

    def sendMonthlyBill(self):
        # stuff
        pass


customer = Customer('thelogin', 'thesecret')
store.customers.append(customer)
customer.activate()

--Randall

Jessica Meyer wrote:
 Thank you for your replies.
 
 With helper code I mean:
 
 I want the architecture of my program to be like this:
 
   - There's a database
   - Only SQLAlchemy talks to the database
   - There is a function for every task that SQLAlchemy does
 
 So, I want to do something like this in the end: addCustomer
 (login='mylogin', password='secret', ...), and the function
 addCustomer checks input, creates a hash of the password, and so on.
 That helper function then uses the SQLAlchemy API. I mean, I think
 you're all doing this too, since nobody wants to call the SQLAlchemy
 API again and again every time.
 
 Thanks
 Jess





[sqlalchemy] Re: dev: dialect reflection methods

2009-02-13 Thread Randall Smith

I started implementing this and it seemed a bit pointless because it 
just moves logic from one method (reflecttable) to several 
(reflect_columns, reflect_primary_keys, etc.) without much benefit.  The 
only immediate benefit I can see is that it may eliminate some 
redundancy.  For example, most of the dialects might use 
DefaultDialect.reflect_columns and so on.

So for now, I'll leave that idea alone and move on to refactoring the 
rest of the dialects.

--Randall

Randall Smith wrote:
 I've been working on breaking Dialect.reflecttable into components so 
 that they can be used independently (e.g. get_columns, get_foreign_keys, 
 etc).  One of the original goals was to be able to do away with 
 reflecttable and have the Inspector, engine.reflection.Inspector, 
 construct the table using the more granular methods.  This has proven to 
 be difficult because several dialects do something special when 
 creating the table.  Oracle deals with synonyms, MSSQL deals with 
 identity columns, etc.  So I was thinking this could be solved by adding 
 granular table construction methods.
 
 class SomeDialect:
 
  def get_columns( ...
 
  def get_foreign_keys( ...
 
  def reflect_columns(self, table, columns ...
 
 def reflect_foreign_keys(self, table, fkeys ...
 
 So the inspector would build a table with its reflecttable method:
 
 class Inspector:
 
  def get_columns(...
 
  def get_foreign_keys(...
 
  ...
 
  def reflecttable(self, table ...)
 
  self.engine.dialect.add_columns(table, connection,
  self.info_cache ...
 
  self.engine.dialect.add_foreign_keys(table, connection ...
 
 
 For the various reflect_* methods, there would be a default 
 implementation on DefaultDialect.
 
 What do you think?
 
 --Randall





[sqlalchemy] dev: dialect reflection methods

2009-02-12 Thread Randall Smith

I've been working on breaking Dialect.reflecttable into components so 
that they can be used independently (e.g. get_columns, get_foreign_keys, 
etc).  One of the original goals was to be able to do away with 
reflecttable and have the Inspector, engine.reflection.Inspector, 
construct the table using the more granular methods.  This has proven to 
be difficult because several dialects do something special when 
creating the table.  Oracle deals with synonyms, MSSQL deals with 
identity columns, etc.  So I was thinking this could be solved by adding 
granular table construction methods.

class SomeDialect:

 def get_columns( ...

 def get_foreign_keys( ...

 def reflect_columns(self, table, columns ...

 def reflect_foreign_keys(self, table, fkeys ...

So the inspector would build a table with its reflecttable method:

class Inspector:

 def get_columns(...

 def get_foreign_keys(...

 ...

 def reflecttable(self, table ...)

 self.engine.dialect.add_columns(table, connection,
 self.info_cache ...

 self.engine.dialect.add_foreign_keys(table, connection ...


For the various reflect_* methods, there would be a default 
implementation on DefaultDialect.
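
A minimal sketch of what such a default implementation might look like, 
assuming get_columns() hands back dicts with 'name', 'type' and 
'nullable' keys (that shape is just an assumption here, not a settled 
interface):

import sqlalchemy as sa

class DefaultDialect(object):

    def reflect_columns(self, connection, table, info_cache):
        # build Column objects from the granular method's output and
        # append them to the Table being reflected
        for col in self.get_columns(connection, table.name,
                                    table.schema, info_cache):
            table.append_column(sa.Column(col['name'], col['type'],
                                          nullable=col['nullable']))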

What do you think?

--Randall







[sqlalchemy] Re: getting columns name from query result

2009-02-04 Thread Randall Smith

maxi wrote:
 Hi,
 
 Are there any approach to get columns names from query result ?
 
 I'm executing a dynamic sql statemet using text function and execute.
 
 s = 'select * from foo where foo.name like :name'
 txt = text(s, bind=database.metadata.bind)
 p = {'name':u'some name'}
 result = txt.execute(p).fetchall()
 
 Now, I need something like...
 
 result.get_columns_names()
 
 Thanks in advance.
 

The ResultProxy returned by execute has a keys attribute.

>>> e = sa.create_engine('postgres:///test1')
>>> rp = e.execute('select * from customers')
>>> rp.keys
[u'id', u'name']

--Randall





[sqlalchemy] Re: Getting a column's type from the column

2009-02-04 Thread Randall Smith

PacSci wrote:
 Hi.
 After working out my metadata issues (and scrapping four revisions and
 a Tomboy notepad of plans for my framework), I've realized that I am
 going to need something that is like FormAlchemy, but will convert
 models to a WTForms form instead. I've got the basis of the form field
 extraction code from the wtforms.ext.django.orm module, but what I
 need is a way to check the type of a column from the column instance.
 I'm having trouble tracing where the type is registered since
 everything inherits from three or four other classes, so is there a
 method or attribute of Column that contains the type provided?
 
 Regards again,
 Leaf

The type attribute (col.type) is all I'm aware of.  I would study the 
types module (sqlalchemy.types), since col.type will subclass one of 
those base types.  Also, col.type.get_col_spec() may be of interest.

An example with a Postgresql text-typed column, where t is the table 
definition:

>>> t.c.name.type
PGText(length=None, convert_unicode=False, assert_unicode=None)
>>> t.c.name.type.get_col_spec()
'TEXT'
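
Since col.type subclasses one of the generic types, isinstance() checks 
against sqlalchemy.types are enough for dispatching.  A minimal sketch 
(the WTForms field names here are purely illustrative):

import sqlalchemy as sa

def field_for(col):
    # dispatch on the generic base type that col.type subclasses
    if isinstance(col.type, sa.types.Integer):
        return 'IntegerField'
    elif isinstance(col.type, sa.types.String):
        return 'TextField'
    return 'TextField'  # fallback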

--Randall





[sqlalchemy] postgres autocommit

2009-01-09 Thread Randall Smith

SA, without a transaction in progress, will call commit on the database 
connection when it detects an INSERT, UPDATE, DELETE, CREATE, etc.  This 
is its autocommit mode.  The relevant code is:

 if context.should_autocommit and not self.in_transaction():
 self._commit_impl()

and

 def _do_commit(self):
 self.connection._commit_impl()

The DBAPI specifies that a connection begins a transaction by default. 
So if you only issue SELECT statements, you remain in a single 
transaction, and with a db like Postgresql you don't, by design, see 
changes made to the database by other sessions.  You can test this by 
connecting with psycopg2, issuing a select, and then running 
"ps ax | grep postgres".  You'll see "idle in transaction".
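
A minimal way to reproduce that, assuming a local test1 database:

import psycopg2

con = psycopg2.connect(database='test1')
cur = con.cursor()
cur.execute('select 1')  # the driver opens an implicit transaction
# "ps ax | grep postgres" in another shell now shows this backend
# as "idle in transaction" until commit() or rollback() is called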

There are two problems I see with this.

1. The session's view of the database is inconsistent, depending on if 
and when queries that modify records are used.  Every time a commit is 
issued, the view of the database (in terms of MVCC) changes.

2. In the case of Postgresql, a connection held open for a long time, 
running only SELECT queries, would never see updates from other sessions, 
and this might tax the database while it tries to maintain MVCC for that 
connection.

One solution, at least for the psycopg2-based connection, would be to 
call commit on SELECTs as well when not using a transaction (an SA 
transaction).  I've done some testing and this seems to work well.
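
The change would amount to something like this in the execution path (a 
sketch only; the function name is hypothetical, but in_transaction() and 
_commit_impl() are the same calls quoted above):

def _autocommit_select(connection, context):
    # commit SELECTs too when no SA-level transaction is open, so the
    # connection never sits "idle in transaction" between queries
    if not connection.in_transaction():
        connection._commit_impl()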

I know that to some extent this comes down to opinion on how the 
connection should behave.

-Randall





[sqlalchemy] Re: postgres autocommit

2009-01-09 Thread Randall Smith

Jon Nelson wrote:
 An alternate solution is to use "begin" and "rollback" for statements
 that (aren't supposed to) update data, like (most) select statements.
 That's even cheaper than calling commit, usually.

There is no "begin" method in the DBAPI.  You could call rollback 
instead of commit, but I just thought that "commit" made more sense in 
"autocommit" mode.

-Randall





[sqlalchemy] Re: postgres autocommit

2009-01-09 Thread Randall Smith

Michael Bayer wrote:
 I think with your project specifically, that of reading lots of  
 information during a reflection process given a single Connection, we  
 probably want to look into establishing transactional boundaries for  
 the reflection process, because in that case you do have the less  
 common use case of an explicit Connection object that's in autocommit  
 mode.   That's something we can look into in 0.6.
 

After thinking about it some more, I think that the main issue is a lack 
of auto-commit in the DBAPI.  Some drivers implement it and some don't. 
Though it's usually a good idea to use transactions, there are times 
when real auto-commit is needed.

For example, I'm working on a thick-client query tool in which a 
connection might be held open for days.  It has an auto-commit mode as 
well as a transaction mode.  In auto-commit mode, changes in the 
database from any session should be visible immediately.

The issue really only arises when using SA as a high-level DBAPI.  Because 
transactions require explicit calls (e.g. trans = conn.begin()), I think 
that most people would assume that no transaction is in effect without 
this explicit call, and would be surprised to discover there is one. 
Maybe you didn't intend for it to be used like I'm using it, but I must 
say that for the most part, it does a great job of creating a standard 
interface for the various drivers.  Also, I like ResultProxy, 
reflecttable and other goodies.

Curious.  Why would you choose not to use the driver's autocommit 
functionality if available?  For example, with psycopg2, you could do:

 con.set_isolation_level(0)
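
Isolation level 0 is psycopg2's autocommit mode; with it set, every 
statement takes effect immediately and no implicit transaction is left 
open.  For example:

import psycopg2

con = psycopg2.connect(database='test1')
con.set_isolation_level(0)  # 0 = autocommit
cur = con.cursor()
cur.execute('select 1')  # sees other sessions' committed changes and
                         # leaves no idle transaction behind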

--Randall





[sqlalchemy] Re: reflection for indices, function-based indices

2009-01-04 Thread Randall Smith

Michael Bayer wrote:
 
 On Dec 17, 2008, at 6:36 AM, Diez B. Roggisch wrote:
 
 Hi,

 we've got some function-based indices we currently create using plain
 text() calls, like this:

 CREATE UNIQUE INDEX users_lower_email_index
 ON user_db.users (lower(email));

 This works, but it would be nice to stay in the SA world even with
 these indices.

 And for schema-evolution purposes, we utilize reflection to compare
 tables.  Reflection does not include indices like the above, though.
 Or did I miss them?

 If not, is there a way to add that to SA, maybe through a plugin?
 I'd be happy to contribute if this is seen as a valuable addition.  I
 must add, though, that because we solely use postgres, I won't be able
 to add support for anything beyond that - but maybe others would step
 up to integrate other engines.
 
 Randall Smith has been working on a schema inspection API which I  
 believe seeks to reflect a wider variety of constructs such as  
 indexes.   His progress can be seen here:
 
 http://www.sqlalchemy.org/trac/browser/sqlalchemy/branches/reflection

Index support for Postgres was recently added to the trunk and I merged 
it into the reflection branch.  You are welcome to check out the 
reflection branch and give it a go.  You could do a quick check like so:

import sqlalchemy as sa
from sqlalchemy.engine.reflection import Inspector

e = sa.create_engine('postgres:///yourdb')
inspector = Inspector(e)
print inspector.get_indexes('yourtable', schema='yourschema')

In fact, I'd love to get some feedback, especially for the Inspector 
interface.  Everything should work for Postgres.

Currently the Inspector supports these methods and attributes:

default_schema_name
get_schema_names()
get_table_names()
get_view_names()
get_columns()
get_primary_keys()
get_foreign_keys()
get_indexes()

Now is a good time for comments and requests for this interface.  Keep 
in mind that it's meant to be useful for non-ORM apps.
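
For example, a quick tour of a schema might look like this (assuming, 
as the methods above suggest, that the get_* methods return plain lists 
of names or of per-column dicts):

insp = Inspector(e)
for table_name in insp.get_table_names(schema='yourschema'):
    print table_name
    for col in insp.get_columns(table_name, schema='yourschema'):
        print '   ', col  # one entry per column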

This is the commit log for the index support:

r5520 | zzzeek | 2008-12-22 22:47:52 -0600 (Mon, 22 Dec 2008) | 5 lines

- Added Index reflection support to Postgres, using a
great patch we long neglected, submitted by
Ken Kuhlman. [ticket:714]

-Randall





[sqlalchemy] Re: MSSQL default_schema

2008-12-23 Thread Randall Smith

Michael Bayer wrote:
 
 so if we can confirm this!  ...so close to release...
 


From grepping:

For the Oracle dialect, get_default_schema_name is used in reflecttable, 
has_table and has_sequence.  has_table, in turn, is used by the 
SchemaGenerator and SchemaDropper.

MSSQL uses it for reflecttable and has_table; the other dialects do the 
same or less.

Outside of the dialects, get_default_schema_name is used for 
Engine.table_names.

-Randall





[sqlalchemy] Re: MSSQL default_schema

2008-12-22 Thread Randall Smith

Michael Bayer wrote:
 just FTR, the current expected behavior of default schemas is that if
 your tables are known to exist in the default schema configured on the
 database connection, you leave the schema attribute on Table blank.
 otherwise, you set it.

The mssql dialect does not ask the server what the default schema is. 
It just sets it to 'dbo', so for user scott with default schema TEST 
this would yield the wrong value:

>>> import sqlalchemy as sa
>>> e = sa.create_engine('mssql://scott:ti...@blackie/test')
>>> print e.dialect.get_default_schema_name(e)  # returns 'dbo'

This would return the value from the server:

 def get_default_schema_name(self, connection):
     # get the username
     query = "SELECT user_name() as user_name;"
     rp = connection.execute(sql.text(query))
     row = rp.fetchone()
     rp.close()
     if row is not None:
         user_name = row.user_name
         # now, get the default schema
         query = """
         SELECT default_schema_name FROM
         sys.database_principals
         WHERE name = :user_name
         AND type = 'S'
         """
         rp = connection.execute(sql.text(query), user_name=user_name)
         row = rp.fetchone()
         rp.close()
         if row is not None:
             return row.default_schema_name
     return self.schema_name

postgres doesn't even set a schema_name on the Dialect.  It goes like this:

 def get_default_schema_name(self, connection):
     return connection.scalar("select current_schema()", None)
 get_default_schema_name = base.connection_memoize(
     ('dialect', 'default_schema_name'))(get_default_schema_name)

Shouldn't mssql do something similar to Postgres here?

--Randall





[sqlalchemy] Re: MSSQL default_schema

2008-12-22 Thread Randall Smith

Michael Bayer wrote:

 Shouldn't mssql do something similar to Postgres here?
 
 it certainly should.
 

Ticket 1258

-Randall





[sqlalchemy] Re: schema inspection api

2008-11-11 Thread Randall Smith
Michael Bayer wrote:
 You're on the right track.  The reflection methods are always called  
 with a Connection that is not shared among any other thread  
 (connections aren't considered to be threadsafe in any case) so  
 threadsafety is not a concern.
 
 I think you should look at the mysql dialect sooner rather than later,
 as it's going to present a challenge to the overall notion here.  It
 has a much more elaborate design already and you're going to want to
 cache the results of SHOW CREATE TABLE somewhere, and that would be
 better to be *not* cached on the Connection itself, since that is
 something which can change over the lifespan of a single connection.
 I think for the PG _get_table_oid method that needs similar treatment,
 i.e. that it's called only once per reflection operation.


 
 We have a nice decorator that can be used in some cases called  
 @connection_memoize(), but for per-table information I think caching  
 info on the Connection is too long-lived.  I had the notion that the  
 Inspector() object itself might be passed to the more granular  
 reflection methods, where each method could store whatever state it  
 needed into a dictionary like inspector.info_cache - or, perhaps we  
 just pass a dictionary to each method that isn't necessarily tied to  
 anything.  This is because I think each dialect has a very different  
 way of piecing together the various elements of Table and Column  
 objects, we'd like to issue as little SQL as possible, and we are  
 looking to build a fine-grained interface.   The methods can therefore  
 do whatever queries they need and store any kind of structure per  
 table in the cache, and then retrieve the information when the new  
 Dialect methods are called.   This means the call for foreign_keys,  
 primary_keys, and columns when using the mysql dialect would issue one  
 SHOW CREATE TABLE, cache all the data, and return what's requested for  
 each call.
 
 With the caching approach, multiple queries with the same  
 reflection.Inspector object can be made to work nearly as efficiently  
 as the coarse-grained reflecttable() methods do now.

I've taken the info_cache approach and that seems to be working out 
well.  It's just a dict passed into the granular methods.
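
The pattern inside a granular method is roughly this (a sketch; 
_parse_show_create is a hypothetical helper, not the actual MySQL code):

def get_columns(self, connection, table_name, schema, info_cache):
    key = ('mysql_show_create', schema, table_name)
    if key not in info_cache:
        # issue SHOW CREATE TABLE once, parse everything, memoize
        info_cache[key] = self._parse_show_create(connection,
                                                  table_name, schema)
    return info_cache[key]['columns']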

Since my last post, I've mostly been looking at the MySQL code.  I never 
liked MySQL much, and now I've got more reason to dislike it.  I'm pretty 
much convinced that all of the schema info should be parsed at once (as 
it is now) because the entire statement has to be parsed regardless.  So 
I've taken a lazy approach to creating the granular methods.  They just 
create a Table object (or use a cached one from info_cache), pass it 
into reflecttable, and extract the required information.  I think this 
approach could perhaps be improved on with some work to 
MySQLSchemaReflector and a structure simpler than Table, but I want to 
be more certain before taking that plunge.  I attached mysql.py so you 
can have a look.  Not really much to see though.

I also created an Inspector class in engines/reflection.  It's very thin 
and clean so I'll just include it below:


class Inspector(object):

    def __init__(self, connection):
        self.connection = connection
        self.info_cache = {}

    @property
    def dialect(self):
        return self.connection.dialect

    def get_columns(self, table_name, schema=None):
        f = self.dialect.get_columns
        return f(self.connection, table_name, schema, self.info_cache)

    def get_views(self, table_name, schema=None):
        f = self.dialect.get_views
        return f(self.connection, table_name, schema, self.info_cache)

    def get_indexes(self, table_name, schema=None):
        f = self.dialect.get_indexes
        return f(self.connection, table_name, schema, self.info_cache)

    def get_primary_keys(self, table_name, schema=None):
        f = self.dialect.get_primary_keys
        return f(self.connection, table_name, schema, self.info_cache)

    def get_foreign_keys(self, table_name, schema=None):
        f = self.dialect.get_foreign_keys
        return f(self.connection, table_name, schema, self.info_cache)
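
Usage then looks like this (a sketch, assuming a postgres test database):

import sqlalchemy as sa

engine = sa.create_engine('postgres:///test1')
insp = Inspector(engine.connect())
# repeated calls share one info_cache, so redundant queries are skipped
columns = insp.get_columns('customers')
fkeys = insp.get_foreign_keys('customers')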



# mssql.py

"""MSSQL backend, through either pymssql, adodbapi or pyodbc interfaces.

* ``IDENTITY`` columns are supported by using SA ``schema.Sequence()``
  objects.  In other words::

    Table('test', mss_engine,
          Column('id',   Integer, Sequence('blah', 100, 10), primary_key=True),
          Column('name', String(20))
         ).create()

  would yield::

  [attachment excerpt truncated by the archive]

[sqlalchemy] Re: schema inspection api

2008-11-08 Thread Randall Smith
Michael Bayer wrote:

 The structure of the API would drive the current reflection API to
 become more componentized.  What we see as a need on the public
 reflection API side would drive the currently monolithic reflection
 methods to be changed.  The current reflecttable() methods in turn
 would ideally move *out* of the dialects, and the Table(autoload=True)
 functionality would simply call up an Inspector and use that.  So the
 ultimate outcome is that the individual reflecttable methods go
 away, and dialects implement a larger set of simpler methods.
 

I started by factoring out components for reflecttable.  So now there 
are methods get_columns, get_primary_keys, get_foreign_keys, get_indexes 
and get_views.  get_columns, get_primary_keys, and get_foreign_keys are 
the methods used by reflecttable (and the ones I actually implemented).

There is one thing I did that will slightly degrade performance.  I 
created a method in postgres.py, __get_table_oid, that several other 
methods use.  This value could be cached at the beginning of 
reflecttable, but I didn't want to address threading issues.

I made a local branch called reflection and have attached the files and 
diffs for engine/base.py and databases/postgres.py at revision 5262.  I 
did a few real-world tests, but not the automated tests.  I don't know 
how difficult they are to set up.  That'll be the next thing I do. 
Mainly I just want to get feedback on the API and on whether I'm going 
about the implementation the right way.

--Randall




Attachments: base.py.diff.gz, base.py.gz, postgres.py.diff.gz,
postgres.py.gz (GNU Zip compressed data)


[sqlalchemy] schema inspection api

2008-11-07 Thread Randall Smith

To my knowledge, there doesn't exist a tool to extract schema 
information from a database with a database-independent API.  SA does 
this to some extent by providing table_names and reflecttable methods 
on its Engine class, but I think it would be nice to have something 
more comprehensive and fine-grained.

It seems to me that ORM is only a part of SA's function.  I've used SA's 
connection, cursor (ResultProxy) and SQL abstractions without the ORM as 
a kind of higher-level DBAPI with good results.  I think that a schema 
information API would fit well into SA's theme.  The implementation 
could make use of the non-ORM features as well as providing for some 
other features like table reflection.

How would it fit in?  I think it should be usable without requiring the 
use of other SA facilities.  Maybe like this:

*

import sqlalchemy as sa
from sqlalchemy.database import dbinfo

engine = sa.create_engine('postgres:///')
# or a dbapi connection

# return list of schema names
schema_names = dbinfo.getSchemaNames(engine)
# return list of table names
table_names = dbinfo.getTableNames(engine, schema_name='public')
# return list of view names
view_names = dbinfo.getViewNames(engine, schema_name='public')
# return the SQL statement required to get view names (no connectable)
view_names = dbinfo.getViewNames(schema_name='public')

# Also like this.

info = dbinfo.PostgresInfoFetcher(engine)
print info.getSchemaNames()
print info.getConstraintInfo(table_name='t1', schema_name='s1')
# Returns the SQL.
print info.getConstraintInfo(table_name='t1', schema_name='s1',
                             return_statement=True)

*

Note that a method can return records or an SQL statement.  I have a 
basic implementation of this based on information_schema that works with 
Postgresql and MSSQL (and maybe Oracle).
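
For illustration, the information_schema route amounts to queries like 
the following (one plausible query, not the actual dbinfo code):

import sqlalchemy as sa
from sqlalchemy import text

engine = sa.create_engine('postgres:///test1')
# standard information_schema query; works on Postgresql and MSSQL
stmt = text("SELECT table_name FROM information_schema.tables "
            "WHERE table_schema = :schema")
table_names = [row[0] for row in engine.execute(stmt, schema='public')]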


I'd like to get some opinions on this before going any further.  Is it 
needed?  If so, how should it be implemented?  Would it be good as a 
part of SA or should it be separate?

--Randall





[sqlalchemy] Re: schema inspection api

2008-11-07 Thread Randall Smith

If anyone wants to toy with this, I posted it here in the meantime. 
It works with Postgresql and MSSQL for schema_names, table_names, 
constraints (including foreign keys) and columns.

http://www.tnr.cc/dbinfo.py

--Randall





[sqlalchemy] Re: schema inspection api

2008-11-07 Thread Randall Smith

Michael Bayer wrote:
 
 We did long ago attempt to build an information schema API, which
 was based on the information schema specification, before realizing
 that this model hardly works in any current RDBMS (for example, it's
 excruciatingly slow on postgres) and is entirely inconsistent in any
 case (the queries to be used on MySQL are entirely different from
 those of PG, and MySQL doesn't even give you enough information to
 reconstruct a table with all of its constraints).  I've looked at
 your sample code and I see it is based on information schema, so that
 aspect of it in my experience isn't going to travel very well.

I was only using information_schema as a base (literally a base class 
implementation), knowing that most or all implementations would have 
quirks or performance issues and require method overrides.  However, I 
would hope that implementations of information_schema would improve over 
time and that custom method overrides in subclasses could be removed, 
making the code cleaner and less complex.  In short, start with the 
standard and go from there.

 
 But an open API to our existing reflection facilities, adding in  
 things like views and indexes, is a great idea.   Since the dialects  
 already have a great deal of inspection logic built in it would make  
 sense that the current table reflection functions would build on top  
 of these functions, since that's already what its doing.
 
 So the work I'd be willing to accept in this area would include adding  
 additional Dialect functions which accomplish all the desired  
 functionality of reading views, indexes, etc., building a front-end  
 API on top of it which I'd probably call sqlalchemy.engine.reflection,  
 and the biggest job of all would be comprehensive unit tests, for  
 which I'd even make a new package, tests/reflection/*.  We currently  
 have tests/engine/reflection.py which is a pretty old module and could  
 use a good deal of cleanup.   I would gladly give commit access to  
 anyone who wanted to take on this task, working in a branch to start  
 which would later be merged to trunk.   In particular we have a good  
 deal of MS-SQL bugs regarding reflection, even a new one just today,  
 that are in need of fixing.  I don't have regular access to MS-SQL on
 my end so I rarely get to work with that code.
 
 One of the major tedious elements here is that the reflection code is  
 custom for each dialect.   Based on my experiences with information  
 schema, I don't think there's any way around this.

reflecttable is very monolithic.  Breaking it down into smaller 
components will make it easier to test.  I follow you on adding methods 
to the Dialects and that seems like a good place for them.  I don't know 
what kind of API sqlalchemy.engine.reflection would have. Is this 
something you could sketch out?

I'd like to work on this.  I have to do it anyway and would prefer 
getting it into SA instead of it being standalone.  I've got access to 
Oracle 8, 9 and 10, Postgres ... and MSSQL (Express), which brings me to 
another topic.

A project like SA needs access to all of the supported database systems 
for testing.  Currently, it seems that developers are hampered because 
they don't have access to these systems.  So I was thinking of ways to 
improve the situation and the idea I like the best is a VPN consisting 
of donated database setups.  There would be a central VPN host and 
donors would connect their donated database servers to this host. 
Developers could then connect to the VPN host and access all of the 
database systems.  With proper firewalling (iptables rules on the 
tun/tap devices) it should be safe for all parties.

--Randall





[sqlalchemy] Re: MSSQL, pyodbc linux

2008-03-31 Thread Randall Smith

Lukasz Szybalski wrote:
 On Mon, Mar 31, 2008 at 9:53 AM, Rick Morrison [EMAIL PROTECTED] wrote:
 So what you are saying here is that sqlalchemy will figure out what driver
 to use? pyodbc or other?


 Sort of. Attempts are made to import appropriate modules until one doesn't
 fail. The order is: [pyodbc, mssql, adodbapi]


  I have tried this with the svn version and I get
  <traceback snipped>

  I've tried it with Windows, and the normal DB-URL syntax works. The
  traceback you give suggests some kind of pyodbc error, but doesn't seem to
  provide any details. If you're on Linux, I would suspect your FreeTDS /
  unixodbc / iodbc setup. What sort of error do you get when trying to connect
  using pyodbc without SQLAlchemy?

 
 I usually use dsn and pyodbc.  If I use the command Randall emailed:
 
  >>> pyodbc.connect('DRIVER={SQL Server};Server=localhost;UID=user;PWD=pass')
  Traceback (most recent call last):
    File "<stdin>", line 1, in ?
  <type 'instance'>: ('IM002', '[IM002] [unixODBC][Driver Manager]Data
  source name not found, and no default driver specified (0)')
  >>> pyodbc.connect('DRIVER={TDS};Server=localhost;UID=user;PWD=pass')
  <pyodbc.Connection object at 0xb7d0b1e0>
  >>> a = pyodbc.connect('DRIVER={TDS};Server=localhost;UID=user;PWD=pass')
  >>> cursor = a.cursor()
  >>> cursor.execute('select * from 17000Z')
  <pyodbc.Cursor object at 0xb7d02db0>
  >>> b = cursor.fetchall()
 
 As you can see, I can connect via pyodbc if I pick TDS as the driver. Is
 there a way to tell sqlalchemy to use the TDS driver (DRIVER={TDS})?
 
 e = sa.create_engine('mssql://user:[EMAIL PROTECTED]:1433/tempdb')
 
 Lucas
 

I'm not certain about this, but it may be the way it's defined in 
/etc/odbcinst.ini.  Mine looks like this:

[SQL Server]
Description = TDS driver (Sybase/MS SQL)
Driver  = /usr/lib/odbc/libtdsodbc.so
Setup   = /usr/lib/odbc/libtdsS.so
CPTimeout   =
CPReuse =
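
The bracketed section name is what the DRIVER={...} keyword has to 
match, so with the file above this connect string should work:

import pyodbc

con = pyodbc.connect('DRIVER={SQL Server};Server=localhost;'
                     'Database=tempdb;UID=user;PWD=pass')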

Hope that works for you.

Randall





[sqlalchemy] Re: MSSQL, pyodbc linux

2008-03-30 Thread Randall Smith

Rick Morrison wrote:
 That's great news, and we're glad for the help. What version of MSSQL 
 will you be working with?

2005 currently.  I hope to get access to other versions though.

Randall





[sqlalchemy] Re: MSSQL, pyodbc linux

2008-03-29 Thread Randall Smith

Paul Johnston wrote:
 Hi,
 
 eng = sqlalchemy.create_engine(
     "mssql:///?dsn=mydsn,UID=myusername,PWD=mypass,module=pyodbc")

 Try this:
 eng = sqlalchemy.create_engine(
     "mssql://myusername:mypass@/?dsn=mydsn,module=pyodbc")
 
 Paul

You shouldn't need to define a dsn.  This should work:

 e = sa.create_engine('mssql://user:[EMAIL PROTECTED]:1433/tempdb')

This actually doesn't work (for me), but it's something minor.  For my 
setup using unixodbc and tdsodbc, you can connect using pyodbc like so:

pyodbc.connect('DRIVER={SQL 
Server};Server=localhost;Database=tempdb;UID=user;PWD=pass')

but, SA is producing this:

pyodbc.connect('Driver={SQL 
Server};Server=localhost;Database=tempdb;UID=user;PWD=pass')

which fails.

Can you see the difference?  It's the case of 'Driver'.  Apparently 
'Server' and 'Database' are not case sensitive, but 'Driver' is.  I 
don't know which piece of software is buggy yet.  If you do know, 
please respond.

I plan to be extensively using and testing Linux/ODBC/pyodbc/SA, so I'd 
be happy to work with any existing effort to improve SA's SQL Server 
support.

Randall






[sqlalchemy] connection strategy

2007-08-30 Thread Randall Smith

I'm wondering what is the best way for a mapped object to acquire a 
connection for a non-ORM style query while staying within the context 
(reuse connection or transaction) of its own session.  For example, here 
is a method of a mapped object.

 @property
 def analysis_count(self):
     """Return a count of related analysis records."""
     sess = object_session(self)
     con = session.connect()
     result = select([func.count(sample_analysis.c.id)],
                     sample_analysis.c.batch_id == self.id,
                     bind=con).execute().scalar()
     return result

Is this correct and is there a better way to accomplish this?  I'm 
working with SA 0.3.10.

Randall





[sqlalchemy] Re: connection strategy

2007-08-30 Thread Randall Smith

Typo in untested example: 'session' should be 'sess'.

Randall

Randall Smith wrote:
 I'm wondering what is the best way for a mapped object to acquire a 
 connection for a non-ORM style query while staying within the context 
 (reuse connection or transaction) of its own session.  For example, here 
 is a method of a mapped object.
 
  @property
  def analysis_count(self):
      """Return a count of related analysis records."""
      sess = object_session(self)
      con = session.connect()
      result = select([func.count(sample_analysis.c.id)],
                      sample_analysis.c.batch_id == self.id,
                      bind=con).execute().scalar()
      return result
 
 Is this correct and is there a better way to accomplish this?  I'm 
 working with SA 0.3.10.
 
 Randall
 
 
  
 





[sqlalchemy] Re: no syncrules generated problem with many-to-many relationship

2007-05-23 Thread Randall Smith

Michael Bayer wrote:
 I should learn to explain better what syncrules are about, so here's
 an attempt:
 
 when you have a table A and a table B, and then a mapping relationship
 between A and B, theres a join condition set up between the two tables.
  By default, it draws from the foreign keys of the tables.
 Alternatively, you can send primaryjoin and optionally
 secondaryjoin (if you have a secondary association table in the
 middle) to specify this join condition manually.
 
 Let's take a join condition of A.a_id=B.a_id.  That means when I create
 an instance mapped to table A, it will have a primary key attribute
 a_id.  An instance mapped to table B will have an attribute a_id
 as well which contains the id of its related A object.
 
 The syncrule generated for this relationship is then after saving an
 instance of A, copy the value of its a_id attribute to the a_id
 attribute on all the B elements attached to it.  The syncrule is
 generated in the sqlalchemy.orm.sync module, and traverses across the
 join conditions (i.e. the primaryjoin) in order to generate the rule.
 
 If the syncrules execute and don't manage to find any rules that map
 attributes from A to B, its clear that the mapper has no way of
 relating in the database the A/B relationship.  So for now, it
 assumes that it's an error.  Since if you have made a relation() from A
 to B, the mapper needs to be able to persist it.
 
 So when you get that error message, it means the join conditions are
 not taking into account the two tables involved in such a way that the
 syncrules can figure it out, usually because the two tables are related
 via an intermediary table which is stuck in the middle of the join (it
 doesn't handle that).

I have a setup similar to this and would prefer not to use an 
association table.

A, B and C are tables.

A <-FK- B (intermediate) -> C

C could possibly be truncated and restored so I don't want an FK 
deleting or restricting records in B.

Table B does not have a Foreign Key defined for C because I don't want 
changes in C to affect B. Is there a way to define the foreign key such 
that it doesn't add a FK constraint to the database, but understands the 
relationship?

I'm creating A and B via metadata.create_all() and I don't want a FK for 
B-C created, but do want sqlalchemy to know about the relationship so 
that I could define it like:

mapper(AClass, A, properties={'C' : relation(C, secondary=B)})

So the only difference is that there would be no Database FK, but I 
don't want to resort to manually deleting it if I don't have to.  Is 
there a nice way to do this?

Randall


 
 The syncrules are only needed for saving objects, not loading them,
 which is why I added the viewonly flag to relations in case you
 really want a relation() that uses an unusual join, and you don't care
 about the mapper being able to persist it.
 
 
  
 





[sqlalchemy] Re: no syncrules generated problem with many-to-many relationship

2007-05-23 Thread Randall Smith

Responding to myself.  As I wrote the first message, I started thinking 
that just deleting the FK after metadata.create_all() wouldn't be so bad, 
and now I think that's a pretty good solution.
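
Something along these lines (a sketch; the table and constraint names 
here are hypothetical stand-ins for whatever the ForeignKeyConstraint 
was given):

metadata.create_all()
# drop the database-level B -> C constraint that create_all() made;
# the relation() in the mapper keeps working without it
engine.execute("ALTER TABLE b DROP CONSTRAINT b_c_fk")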

Randall

Randall Smith wrote:
 Michael Bayer wrote:
  I should learn to explain better what syncrules are about, so here's
  an attempt:

 when you have a table A and a table B, and then a mapping relationship
  between A and B, there's a join condition set up between the two tables.
  By default, it draws from the foreign keys of the tables.
 Alternatively, you can send primaryjoin and optionally
 secondaryjoin (if you have a secondary association table in the
 middle) to specify this join condition manually.

  Let's take a join condition of A.a_id=B.a_id.  That means when I create
 an instance mapped to table A, it will have a primary key attribute
 a_id.  An instance mapped to table B will have an attribute a_id
 as well which contains the id of its related A object.

 The syncrule generated for this relationship is then after saving an
 instance of A, copy the value of its a_id attribute to the a_id
 attribute on all the B elements attached to it.  The syncrule is
 generated in the sqlalchemy.orm.sync module, and traverses across the
 join conditions (i.e. the primaryjoin) in order to generate the rule.

  If the syncrules execute and don't manage to find any rules that map
 attributes from A to B, its clear that the mapper has no way of
 relating in the database the A/B relationship.  So for now, it
  assumes that it's an error.  Since if you have made a relation() from A
 to B, the mapper needs to be able to persist it.

 So when you get that error message, it means the join conditions are
 not taking into account the two tables involved in such a way that the
 syncrules can figure it out, usually because the two tables are related
 via an intermediary table which is stuck in the middle of the join (it
  doesn't handle that).
 
 I have a setup similar to this and would prefer not to use an 
 association table.
 
 A, B and C are tables.
 
  A <-FK- B (intermediate) -> C
 
 C could possibly be truncated and restored so I don't want an FK 
 deleting or restricting records in B.
 
 Table B does not have a Foreign Key defined for C because I don't want 
 changes in C to affect B. Is there a way to define the foreign key such 
 that it doesn't add a FK constraint to the database, but understands the 
 relationship?
 
 I'm creating A and B via metadata.create_all() and I don't want a FK for 
 B-C created, but do want sqlalchemy to know about the relationship so 
 that I could define it like:
 
   mapper(AClass, A, properties={'C' : relation(C, secondary=B)})
 
 So the only difference is that there would be no Database FK, but I 
  don't want to resort to manually deleting it if I don't have to.  Is 
 there a nice way to do this?
 
 Randall
 
 
 The syncrules are only needed for saving objects, not loading them,
 which is why I added the viewonly flag to relations in case you
  really want a relation() that uses an unusual join, and you don't care
 about the mapper being able to persist it.


 
 
  
 





[sqlalchemy] Is this a bug?

2007-05-23 Thread Randall Smith

This is the error:  Note that the first param is an SA object instead of 
the integer key.

sqlalchemy.exceptions.SQLError: (ProgrammingError) can't adapt 'SELECT 
records_tinwsys.tinwsys_st_code AS records_tinwsys_tinwsys_st_code, 
records_tinwsys.tinwsys_is_number AS records_tinwsys_tinwsys_is_number, 
records_tinwsys.record_id AS records_tinwsys_record_id \nFROM 
efile2.records_tinwsys \nWHERE records_tinwsys.record_id = 
%(records_tinwsys_record_id)s AND records_tinwsys.tinwsys_is_number = 
%(records_tinwsys_tinwsys_is_number)s AND 
records_tinwsys.tinwsys_st_code = %(records_tinwsys_tinwsys_st_code)s 
ORDER BY records_tinwsys.record_id' {'records_tinwsys_record_id': 
<mswater.lib.dblib.sqla.classes.efile.Record object at 0xb73324ac>, 
'records_tinwsys_tinwsys_st_code': 'MS', 
'records_tinwsys_tinwsys_is_number': 327L}

Here is the intermediate table:

 table = Table('records_tingeoar', metadata,
     Column('record_id', Integer, primary_key=True),
     Column('tingeoar_is_number', Integer, primary_key=True),
     Column('tingeoar_st_code', CHAR(2), primary_key=True),
     ForeignKeyConstraint(['record_id', ], [records.c.id, ],
         'records_tinwsys_record_id_fk',
         onupdate='CASCADE', ondelete='CASCADE'),
     ForeignKeyConstraint(['tingeoar_is_number', 'tingeoar_st_code'],
         [tingeoar.c.tingeoar_is_number,
          tingeoar.c.tingeoar_st_code],
         'records_tingeoar_tingeoar_fk',
         onupdate='CASCADE', ondelete='NO ACTION'),
     schema=schemaname,
 )

And here are the mapper definitions:

RecordTinwsys.mapper = sa.mapper(RecordTinwsys, records_tinwsys,
    primary_key=[records_tinwsys.c.record_id,
                 records_tinwsys.c.tinwsys_is_number,
                 records_tinwsys.c.tinwsys_st_code],
    properties={
        'record' : sa.relation(Record),
        'tinwsys' : sa.relation(Tinwsys)
    }
)

Record.mapper = sa.mapper(Record, records,
    properties={
        'tinwsys' : sa.relation(Tinwsys, secondary=records_tinwsys,
                                backref='efile_records'),
        'tingeoar' : sa.relation(Tingeoar, secondary=records_tingeoar,
                                 backref='efile_records'),
    }
)

The error occurs where I access the backref attribute 'efile_records' 
for Tinwsys.

Randall





[sqlalchemy] Re: Is this a bug?

2007-05-23 Thread Randall Smith

Never mind.  My mistake.  --Randall

Randall Smith wrote:
 This is the error:  Note that the first param is an SA object instead of 
 the integer key.
 
 sqlalchemy.exceptions.SQLError: (ProgrammingError) can't adapt 'SELECT 
 records_tinwsys.tinwsys_st_code AS records_tinwsys_tinwsys_st_code, 
 records_tinwsys.tinwsys_is_number AS records_tinwsys_tinwsys_is_number, 
 records_tinwsys.record_id AS records_tinwsys_record_id \nFROM 
 efile2.records_tinwsys \nWHERE records_tinwsys.record_id = 
 %(records_tinwsys_record_id)s AND records_tinwsys.tinwsys_is_number = 
 %(records_tinwsys_tinwsys_is_number)s AND 
 records_tinwsys.tinwsys_st_code = %(records_tinwsys_tinwsys_st_code)s 
 ORDER BY records_tinwsys.record_id' {'records_tinwsys_record_id': 
 <mswater.lib.dblib.sqla.classes.efile.Record object at 0xb73324ac>, 
 'records_tinwsys_tinwsys_st_code': 'MS', 
 'records_tinwsys_tinwsys_is_number': 327L}
 
 Here is the intermediate table:
 
  table = Table('records_tingeoar', metadata,
  Column('record_id', Integer, primary_key=True),
  Column('tingeoar_is_number', Integer, primary_key=True),
  Column('tingeoar_st_code', CHAR(2), primary_key=True),
  ForeignKeyConstraint(['record_id', ], [records.c.id, ],
      'records_tinwsys_record_id_fk',
      onupdate='CASCADE', ondelete='CASCADE'),
  ForeignKeyConstraint(['tingeoar_is_number', 'tingeoar_st_code'],
      [tingeoar.c.tingeoar_is_number,
       tingeoar.c.tingeoar_st_code],
      'records_tingeoar_tingeoar_fk',
      onupdate='CASCADE', ondelete='NO ACTION'),
  schema=schemaname,
  )
 
 And here are the mapper definitions:
 
 RecordTinwsys.mapper = sa.mapper(RecordTinwsys, records_tinwsys,
  primary_key=[records_tinwsys.c.record_id,
   records_tinwsys.c.tinwsys_is_number,
   records_tinwsys.c.tinwsys_st_code],
  properties={
  'record' : sa.relation(Record),
  'tinwsys' : sa.relation(Tinwsys)
  }
 )
 
 Record.mapper = sa.mapper(Record, records,
  properties={
  'tinwsys' : sa.relation(Tinwsys, secondary=records_tinwsys,
  backref='efile_records'),
  'tingeoar' : sa.relation(Tingeoar, secondary=records_tingeoar,
   backref='efile_records'),
  }
 )
 
 The error occurs where I access the backref attribute 'efile_records' 
 for Tinwsys.
 
 Randall
 
 
  
 





[sqlalchemy] unwanted deletes with backref

2007-05-23 Thread Randall Smith

With this relationship:

Record.mapper = sa.mapper(Record, records,
    properties={
        'tinwsys' : sa.relation(Tinwsys, secondary=records_tinwsys,
                                backref=sa.backref('efile_records'))
    }
)

SA is deleting associated records in records_tinwsys when a Tinwsys 
record is deleted. This only occurs when backref is defined. The 
intended behavior is for nothing to happen to records_tinwsys when a 
Tinwsys record is deleted. How do I accomplish this?

Randall





[sqlalchemy] Re: polymorphic_identity determination

2006-11-07 Thread Randall Smith

I worked on this some more and found that the queries are correct as 
they are.  The problem is in the mapping.  A Chemical Engineer is an 
Engineer, who is an Employee.  So the employees selectable should return 
all employees and the engineer selectable should return all engineers, 
not just unspecific engineers.

For the purpose of discussion and example, assume all engineers must be 
specified.  They may be chemical or mechanical engineers.  I think the 
polymorphic union should look like this:

p_union = polymorphic_union(
    {
        'ac': accountants,
        'me': mechanical_engineers,
        'ce': chemical_engineers,
    },
    'type',
)

Notice that I didn't include the employee table nor the engineer 
selectable.  That's because I don't want instances of the Employee class 
or the Engineer class.  They would cause redundant data since a Chemical 
Engineer is an Engineer is an Employee, etc.

That leads to the part I'm stuck on: mapper inheritance.  When finished, 
session.query(Employee).select() should list all employees as instances 
of their specific classes, and session.query(Engineer).select() should 
list all engineers ...  So how do I set up the mappers to accomplish 
this?  The polymorphic_on/polymorphic_identity method seems to only 
accommodate one level of inheritance.  The Engineer mapper will inherit 
from the Employee mapper, but will have no polymorphic_identity.

Summary of design:

* Single employees table with ac_info, ce_info, and me_info columns.
* Selectables for each employee type that look like this:
    engineers = select([employees],
        and_(employees.c.engineer_info != None,
             employees.c.cheme_info == None)).alias('engineers')
    chemical_engineers = select([employees,
        column('ce').label('type')],
        and_(employees.c.engineer_info != None,
             employees.c.ce_info != None)).alias('chemical_engineers')
* Polymorphic union that only includes types for instances we want
  (no Employee or Engineer types).

So how could mappers be set up to accommodate these criteria?

Randall

Michael Bayer wrote:
 if you change your echo to 'debug', or just select straight from your
 p_union selectable, you'll see these rows:
 
 (5, u'cengineer1', u'cengineer1', u'cengineer1', None,
 u'chemical_engineer')
 (6, u'cengineer2', u'cengineer2', u'cengineer2', None,
 u'chemical_engineer')
 (1, u'manager1', None, None, u'manager1', u'manager')
 (2, u'manager2', None, None, u'manager2', u'manager')
 (3, u'engineer1', u'engineer1', None, None, u'engineer')
 (4, u'engineer2', u'engineer2', None, None, u'engineer')
 (5, u'cengineer1', u'cengineer1', u'cengineer1', None, u'engineer')
 (6, u'cengineer2', u'cengineer2', u'cengineer2', None, u'engineer')
 
 Where you can see that the chemical engineers are coming out twice with
 inconsistent types.  The query has to be tuned to be more specific:
 
 managers = select([employees, column('manager').label('type')],
     employees.c.manager_data != None).alias('managers')
 engineers = select([employees, column('engineer').label('type')],
     and_(employees.c.engineer_info != None,
          employees.c.cheme_info == None)).alias('engineers')
 chemical_engineers = select([employees,
     column('chemical_engineer').label('type')],
     and_(employees.c.engineer_info != None,
          employees.c.cheme_info != None)).alias('chemical_engineers')
 
 
 p_union = polymorphic_union(
 {
 'engineer': engineers,
 'manager': managers,
 'chemical_engineer': chemical_engineers
 },
 None, 
 )





[sqlalchemy] General Inheritance Questions

2006-11-07 Thread Randall Smith

As I've been experimenting with inheritance, I think my hang-ups are 
largely due to my lack of understanding of how SA accomplishes it.  So 
here are some simple questions.

1. What does it mean for a mapper to inherit from another mapper?  What 
does it inherit from that mapper and how does that affect its function?

2. What does 'concrete' mean?  Is it only applicable to inheritance?

3. When should I use select_table instead of passing in a selectable for 
local_table?
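
For reference, the two spellings question 3 is about look roughly like
this with the 0.3-era API -- a sketch reusing the employees table and
p_union selectable from the other threads in this archive; only one of
the two mapper() calls would be used for a given class:

    # (a) the selectable itself as the mapper's local table:
    #employee_mapper = mapper(Employee, p_union,
    #    polymorphic_on=p_union.c.type)

    # (b) the plain table as local_table, with the union as select_table:
    employee_mapper = mapper(Employee, employees,
        select_table=p_union,
        polymorphic_on=p_union.c.type)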

Randall





[sqlalchemy] Re: Constructing where-clauses dynamically

2006-11-06 Thread Randall Smith

dmiller wrote:
 Is there a reason why this doesn't work:
 
 orders = Table('orders', meta,
     Column('id', Integer, Sequence('order_id_seq'), primary_key=True),
     ...
 )
 items = Table('items', meta,
     Column('id', Integer, Sequence('item_id_seq'), primary_key=True),
     Column('order_id', Integer, ForeignKey(orders.c.id),
            nullable=False),
     ...
 )
 
 class Order(object): pass
 class Item(object): pass
 
 itemMapper = mapper(Item, items)
 orderMapper = mapper(Order, orders, properties=dict(
     items=relation(itemMapper, backref='order')
 ))
 
 session = create_session()
 order = session.query(Order).get(1)  # assume order exists
 itemsNotInOrder = session.query(Item).select(Item.c.order != order)  # ERROR!

This should work:

    itemsNotInOrder = session.query(Item).select(Item.c.order_id != order.id)

 
 
 The Item.c object does not have an 'order' attribute. Is there a  
 reason why it can't have one?

I would guess that the attributes of Item.c are Column instances, and 
'order' is a relation property, not a column.  Your approach seems 
intuitive (I did the same thing once), but I think the example above is 
easy enough.

I could be wrong or missing something.  Just trying to be helpful.

Randall

 
 ~ Daniel
 
 





[sqlalchemy] Re: polymorphic_identity determination

2006-11-05 Thread Randall Smith
I've attached a file which is a variant of the employees example, with 
two objectives.

1. Base polymorphic_identity on select criteria (no type column).
2. Use two levels of inheritance.

The first objective seems to be met, but the second is not working 
properly.  I put in two Managers, two Generic Engineers, and two Chemical 
Engineers (which inherit from Engineer).  When I select from employees, 
I get eight records: the Chemical Engineers are included once as 
Chemical Engineers and once as Generic Engineers.

How might this be better written to meet these objectives?

Randall

Michael Bayer wrote:
 just FYI, the type column idea is taken from Hibernate, and that's
 all Hibernate supports as far as polymorphic loading.
 
 But for polymorphic loading in SA, you are free to make any kind of
 polymorphic_union you like that can add in a functionally-generated
 type column, and specify it into select_table.  im pretty sure this
 should work completely right now, such as:
 
 import sqlalchemy.sql as sql
 
 person_join = polymorphic_union(
     {
         'engineer': sql.select([people.join(engineers),
                                 sql.column('engineer').label('type')]),
         'manager': sql.select([people.join(managers),
                                sql.column('manager').label('type')]),
     }
 )
 
 etc.
 



# (attached file from the message above)

from sqlalchemy import *

metadata = DynamicMetaData('testdata')

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
    Column('engineer_info', String(50)),
    Column('cheme_info', String(50)),
)

# classify employees by which info columns are non-null (no type column)
managers = select([employees],
    employees.c.manager_data != None).alias('managers')
engineers = select([employees],
    employees.c.engineer_info != None).alias('engineers')
chemical_engineers = select([employees],
    and_(employees.c.engineer_info != None,
         employees.c.cheme_info != None)).alias('chemical_engineers')

class Employee(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.__class__.__name__ + ' ' + self.name

class Manager(Employee):
    def __init__(self, name, manager_data):
        self.name = name
        self.manager_data = manager_data
    def __repr__(self):
        return self.__class__.__name__ + ' ' + self.name + ' ' + \
            self.manager_data

class Engineer(Employee):
    def __init__(self, name, engineer_info):
        self.name = name
        self.engineer_info = engineer_info
    def __repr__(self):
        return self.__class__.__name__ + ' ' + self.name + ' ' + \
            self.engineer_info

class ChemicalEngineer(Engineer):
    def __init__(self, name, engineer_info, cheme_info):
        self.name = name
        self.engineer_info = engineer_info
        self.cheme_info = cheme_info
    def __repr__(self):
        return self.__class__.__name__ + ' ' + self.name + ' ' + \
            self.engineer_info

p_union = polymorphic_union(
    {
        'engineer': engineers,
        'manager': managers,
        'chemical_engineer': chemical_engineers
    },
    'type'
)

# two levels of inheritance: ChemicalEngineer inherits the Engineer mapper
employee_mapper = mapper(Employee, p_union, polymorphic_on=p_union.c.type)
manager_mapper = mapper(Manager, managers, inherits=employee_mapper,
    concrete=True, polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers, inherits=employee_mapper,
    concrete=True, polymorphic_identity='engineer')
mapper(ChemicalEngineer, chemical_engineers, inherits=engineer_mapper,
    concrete=True, polymorphic_identity='chemical_engineer')

def populate(session):
    m1 = Manager('manager1', 'manager1')
    m2 = Manager('manager2', 'manager2')
    e1 = Engineer('engineer1', 'engineer1')
    e2 = Engineer('engineer2', 'engineer2')
    ce1 = ChemicalEngineer('cengineer1', 'cengineer1', 'cengineer1')
    ce2 = ChemicalEngineer('cengineer2', 'cengineer2', 'cengineer2')
    for o in (m1, m2, e1, e2, ce1, ce2):
        session.save(o)
    session.flush()

if __name__ == '__main__':
    engine = create_engine('sqlite:///test.db')
    engine.echo = True
    metadata.connect(engine)
    metadata.drop_all()
    metadata.create_all()
    session = create_session(engine)
    populate(session)
    print session.query(Employee).select()


[sqlalchemy] Re: polymorphic_identity determination

2006-11-05 Thread Randall Smith

John,

Thanks for the feedback.  The code I put up is not part of any real 
system.  I'm just playing off of the existing examples in the docs and 
trying to get comfortable with SA inheritance.

Randall

John M Camara wrote:
 
 Randall Smith wrote:
 For discussion, consider the Employee, Manager, Engineer example from
 the docs.  If I were designing the tables, I would not normally have a
 type field.  I would use the null status of engineer_info or
 manager_data to determine the employee type.  Or if that data was in
 separate tables, I would use the existence of records in those tables.
 And these are simple cases.  I can think of some real life cases where
 the values of multiple fields would determine the class.
 
 IMO you always need to keep your people/places/things data separate from
 your roles data.  If you put the two together you end up developing a
 system that is highly inflexible, and the code will start to look ugly.
 After all, a user can wear many hats.  He can be an Employee, Manager,
 Engineer, etc.  He can even be an Employee of two different companies, a
 manager of multiple groups, etc.
 
 So I would keep user tables/classes separate from role tables/classes.
 For the database design you would have 2 basic choices.
 
 First option:
 
 User Table
   userID
   ...
 Role Table
   userID
   roleType
   roleData
   ...
 
 Where roleData would likely contain serialized (pickled) data.
 
 Second option:
 
 User Table
   id
   ...
 Role Table
   id
   userID
   roleType
   roleDetailsID
   ...
 Role1Details Table
   id
   ...
 Role2Details Table
   id
   ...
 other role details tables
 
 I would only go with the second option if you actually needed to do ad
 hoc queries of the Role#Details tables.  Otherwise, the first option is
 far quicker to code and provides more flexibility.
 
 John
 
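A minimal sketch of John's first option as table definitions, in the
style of the other examples in this archive; the column types and
lengths are assumptions, and roleData uses PickleType for the
serialized role data he describes:

    from sqlalchemy import *

    metadata = DynamicMetaData('roles_example')

    users = Table('users', metadata,
        Column('userID', Integer, primary_key=True),
        Column('name', String(50)),
    )

    # one row per hat a user wears; roleData holds the pickled details
    roles = Table('roles', metadata,
        Column('userID', Integer, ForeignKey('users.userID'),
               nullable=False),
        Column('roleType', String(30)),
        Column('roleData', PickleType),
    )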





[sqlalchemy] polymorphic_identity determination

2006-11-04 Thread Randall Smith

I touched on this in an earlier thread, but I think it deserves its own. 
The current inheritance implementation requires a polymorphic_on column 
to be passed to the base mapper, and a value for that column to be 
specified for each inheriting mapper.  I don't think this approach is 
flexible enough, and I think there are better ways to do it.

For discussion, consider the Employee, Manager, Engineer example from 
the docs.  If I were designing the tables, I would not normally have a 
type field.  I would use the null status of engineer_info or 
manager_data to determine the employee type.  Or if that data was in 
separate tables, I would use the existence of records in those tables. 
And these are simple cases.  I can think of some real life cases where 
the values of multiple fields would determine the class.

I think it would be good to have the same type of flexibility as mapping 
a class to a select object.  In fact, I'm wondering now if the two can 
be used in combination to this effect.  Maybe something like ...

Map to a select object that does this:

    SELECT employee_id, name, manager_data, engineer_info,
           'engineer' AS type
    FROM employees
    WHERE engineer_info IS NOT NULL

Do that for the manager too, and apply polymorphic inheritance.
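
In selectable form that's roughly the following -- a sketch reusing the
column(...).label('type') device from Michael's snippets in this thread,
and the employees table from the docs example:

    import sqlalchemy.sql as sql

    # rows whose engineer_info is non-null, with a generated 'type'
    # discriminator labeled onto the select instead of stored in the table
    engineers = sql.select(
        [employees, sql.column('engineer').label('type')],
        employees.c.engineer_info != None
    ).alias('engineers')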

That would allow for great flexibility without storing duplicate type 
information (type + engineer_info) in the table.  I'm going to play with 
this some more and am interested in what others think.

Randall





[sqlalchemy] Re: polymorphic_identity determination

2006-11-04 Thread Randall Smith

Michael Bayer wrote:
 just FYI, the type column idea is taken from Hibernate, and that's
 all Hibernate supports as far as polymorphic loading.

I think it's good that SA borrows features from and resembles Hibernate, 
but I hope that your aspirations, and those of SA's users, are to make 
SA much better than Hibernate.  Python vs. Java/XML (I despise the XML 
modeling) is already a boost for SA.  SA is amazing and I've got high 
hopes for its future.

 
 But for polymorphic loading in SA, you are free to make any kind of
 polymorphic_union you like that can add in a functionally-generated
 type column, and specify it into select_table.  im pretty sure this
 should work completely right now, such as:
 
 import sqlalchemy.sql as sql
 
 person_join = polymorphic_union(
     {
         'engineer': sql.select([people.join(engineers),
                                 sql.column('engineer').label('type')]),
         'manager': sql.select([people.join(managers),
                                sql.column('manager').label('type')]),
     }
 )
 
 etc.
 

Thanks for the code example.  I suspect that's the approach I'll be 
using most often.

Randall





[sqlalchemy] Re: single table inheritance questions

2006-11-03 Thread Randall Smith

Michael Bayer wrote:
 On Nov 2, 2006, at 11:47 PM, Randall Smith wrote:
 
 Is there a way to inherit more than one level for single table
 inheritance?  Take this relationship for example:

  Animal - Dog - German Shepherd
 
  Say there are 10 animals; 5 are dogs and 2 are German Shepherds.
 
   session.query(Animal).select()         # Should yield 10 results.
   session.query(Dog).select()            # Should yield 5 results.
   session.query(GermanShepherd).select() # Should yield 2 results.

 
 not unless you apply the attached patch, and/or update to rev 2084.

Works nicely.  Thanks!

To me, it seems that getting inheritance right is important to fully 
utilizing the 'O' in ORM, and I like your implementation so far.

Randall





[sqlalchemy] single table inheritance questions

2006-11-02 Thread Randall Smith

Is there a way to inherit more than one level for single table 
inheritance?  Take this relationship for example:

 Animal - Dog - German Shepherd

Say there are 10 animals; 5 are dogs and 2 are German Shepherds.

 session.query(Animal).select()         # Should yield 10 results.
 session.query(Dog).select()            # Should yield 5 results.
 session.query(GermanShepherd).select() # Should yield 2 results.

Second question: can the polymorphic_identity be extracted 
functionally?  Say that in the type column I have data like this:

 type             value
 ---------------  --------
 default_heaters  electric
 default_heaters  gas
 default_floors   vinyl
 default_floors   carpet

All four records are DefaultAmenity instances; two are DefaultHeater and 
two are DefaultFloor, and those two classes inherit from the first. 
Identifying a DefaultAmenity record using the type column would require 
something like table.c.type.startswith('default').
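
As a sketch, that check could drive generated discriminators the same
way -- this assumes an amenities table with the type column above, the
Column.startswith expression proposed here, and the
column(...).label(...) device from earlier in this thread; 'ptype' is a
made-up label chosen to avoid colliding with the real type column:

    import sqlalchemy.sql as sql

    # everything whose type starts with 'default' is a DefaultAmenity ...
    default_amenities = sql.select(
        [amenities, sql.column('default_amenity').label('ptype')],
        amenities.c.type.startswith('default')
    ).alias('default_amenities')

    # ... while the narrower classes match exact type values
    default_heaters = sql.select(
        [amenities, sql.column('default_heater').label('ptype')],
        amenities.c.type == 'default_heaters'
    ).alias('default_heaters')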

Randall

