Re: [sqlalchemy] Writing simple dialect for iSeries + pyodbc in order to use engine.execute for sql statements.

2014-06-11 Thread Jaimy Azle
Actually, ibm_db_sa supports several methods of connecting to iSeries:
natively through ibm_db, pyodbc, or JDBC (with Jython).
 On Jun 8, 2014 8:53 AM, Cory Lutton cory.lut...@gmail.com wrote:

 Thanks for such a quick reply.  Great to hear that I am starting out on
 the right path with building a dialect, I have some work to do so I just
 wanted to make sure I wasn't missing something before I spend the time.
 Hopefully I can get things working enough where I can post it somewhere.

 I have looked at that IBM DB package; unfortunately the iSeries is a
 remote connection using ibm_db, meaning I would have to get the Enterprise
 DB2 Connect license or get a special quote for unlimited -- it makes me
 laugh a bit what the listed price is.

 On Saturday, June 7, 2014 5:48:02 PM UTC-7, Michael Bayer wrote:


 On Jun 7, 2014, at 8:27 PM, Cory Lutton cory@gmail.com wrote:

 I have been looking at using sqlalchemy in an internal company cherrypy
 application I am working on.  It will need to interface with my company's
 iSeries server in order to use ERP data.  I have been using pyodbc so far
 and everything works great.  I am thinking of adding access to another
 database that is postgres.  Rather than write that stuff again, I was
 thinking about trying to use sqlalchemy.  If I use it, I would want to use
 it for both: one for the iSeries (DB2) and one for postgres.

 So, I started writing a dialect for iseries+pyodbc and want to make
 sure I am headed down the right path.  It seems to be working so far:
 import sqlalchemy as sa
 import sqlalchemy_iseries
 from urllib.parse import quote

 engine = sa.create_engine(
     "iseries+pyodbc:///?odbc_connect={connect}".format(
         connect=quote(connect)), pool_size=1)
 con = engine.connect()

 # Only used like a pyodbc cursor, executing specifically created
 # statements.
 rows = con.execute("SELECT * FROM alpha.r50all.lbmx")

 # Access via name like a dictionary rather than row.LBID
 for row in rows:
     print(row['LBID'])

 con.close()

 Being new to sqlalchemy, I am hoping to get some advice on whether what I
 am doing below is basically going in the right direction, or a pointer in
 the right direction if I am headed the wrong way (or reinventing something).

 Here is what I have so far...

 *__init__.py:*
 from sqlalchemy.dialects import registry
 from . import pyodbc

 dialect = pyodbc.dialect

 registry.register("iseries.pyodbc", "sqlalchemy_iseries", "dialect")

 *base.py:*
 from sqlalchemy.engine import default

 class ISeriesDialect(default.DefaultDialect):
     name = 'iseries'
     max_identifier_length = 128
     schema_name = "qgpl"


 *pyodbc.py:*
 from .base import ISeriesDialect
 from sqlalchemy.connectors.pyodbc import PyODBCConnector

 class ISeriesDialect_pyodbc(PyODBCConnector, ISeriesDialect):
     pyodbc_driver_name = 'iSeries Access ODBC Driver'

     def _check_unicode_returns(self, connection):
         return False

 dialect = ISeriesDialect_pyodbc



 looks great.  If you want examples of the full format, take a look at
 some of the existing external dialects at
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/index.html#external-dialects.

 Are you sure that the IBM DB SA dialect doesn’t cover this backend
 already?  They have support for pyodbc + DB2, but I’m not really sure how
 “iSeries” differs.  https://code.google.com/p/ibm-db/
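
 As an aside, external dialects are usually also registered via a
 setuptools entry point, so create_engine() can find the dialect without
 an explicit registry.register() call at import time. A sketch for the
 sqlalchemy_iseries package above (the setup() metadata here is
 illustrative, not from the original post):

 ```python
 # setup.py (sketch) -- the "sqlalchemy.dialects" entry point group is
 # how SQLAlchemy discovers third-party dialects by URL prefix.
 from setuptools import setup

 setup(
     name="sqlalchemy-iseries",
     packages=["sqlalchemy_iseries"],
     entry_points={
         "sqlalchemy.dialects": [
             "iseries.pyodbc = sqlalchemy_iseries.pyodbc:ISeriesDialect_pyodbc",
         ]
     },
 )
 ```

 With that installed, a URL of the form "iseries+pyodbc://..." resolves to
 the dialect class automatically.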


  --
 You received this message because you are subscribed to the Google Groups
 "sqlalchemy" group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




Re: [sqlalchemy] another quick question regarding abstract classes

2014-06-11 Thread Richard Gerd Kuesters

thanks again Mike! i'll work this here :)

my best regards,
richard.


On 06/10/2014 07:14 PM, Mike Bayer wrote:

On Tue Jun 10 15:36:09 2014, Richard Gerd Kuesters wrote:

so, here i am again with another weird question, but it may be
interesting for what it may come (i dunno yet).

the problem: i have a collection of abstract classes that, when
requested, the function (that does the request) checks in an internal
dictionary whether that class was already created, or creates it using
declarative_base(cls=MyAbstractClass), so that it can later have an engine
and then work against a database.

i use this format because i work with multiple backends from multiple
sources, so abstract classes are a *must* here. now, the problem:
foreign keys and relationships. it's driving me nuts.

ok, let's say I have 2 classes, Foo and Bar, where Bar has one FK to Foo.



class Foo(object):
__abstract__ = True
foo_id = Column(...)
...

class Bar(object):
__abstract__ = True
foo_id = Column(ForeignKey(...))




/(those classes are just examples and weren't further coded because
it's a conceptual question)/

i know that the code might be wrong, because i could use @declared_attr
here and thereby help sqlalchemy act accordingly (i don't know if this is
the right way to say it in english, but it is not a complaint about
sqlalchemy's behavior).

ok, suppose I created two subclasses, one from each abstract model
(Foo and Bar) in a postgres database with some named schema, let's say
sc1. we then have sc1.foo and sc1.bar.

now, i want to create a third table, also from Bar, but in the sc2
schema, where its foreign key will reference sc1.foo, which postgres
supports nicely.

how can i work this out, in a pythonic and sqlalchemy friendly way?
does @declared_attr solve this? or do I have to map that foreign key
(and furthermore the relationships) in the class mapper before using it,
like a @classmethod of some kind?



@declared_attr can help since the decorated function is called with 
cls as an argument.  You can look on cls for __table_args__ or 
some other attribute if you need, and you can create a Table on the 
fly to serve as secondary, see 
http://docs.sqlalchemy.org/en/rel_0_9/_modules/examples/generic_associations/table_per_related.html 
for an example of what this looks like.
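
A minimal sketch of that pattern (the `foo_table` attribute is a
hypothetical per-class hook, not part of SQLAlchemy; modern import paths
are used here):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class BarMixin:
    # declared_attr is invoked once per concrete subclass, receiving that
    # subclass as `cls`, so per-class attributes can steer the ForeignKey.
    @declared_attr
    def foo_id(cls):
        return sa.Column(sa.Integer, sa.ForeignKey(cls.foo_table + ".id"))

class Foo(Base):
    __tablename__ = "foo"
    id = sa.Column(sa.Integer, primary_key=True)

class Bar(BarMixin, Base):
    __tablename__ = "bar"
    foo_table = "foo"  # hypothetical hook read inside the declared_attr
    id = sa.Column(sa.Integer, primary_key=True)
```

A subclass targeting a table in another schema could set `foo_table` to a
schema-qualified name instead.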







best regards and sorry for my english,
richard.







[sqlalchemy] Re: Sub-classing declarative classes

2014-06-11 Thread Jonathan Vanasco


On Tuesday, June 10, 2014 3:47:00 PM UTC-4, Noah Davis wrote:

 I suspect there's some way to tell the ORM to Do The Right Thing here, but 
 I have no idea what it might be. I'd like the particular applications to be 
 as unaware of the underlying table information as possible. I guess in 
 essence I'm trying to separate business logic from the DB logic as much as 
 possible. Maybe I'm heading down a dead-end... I'm open to better 
 suggestions.



I've done this before.  It's messy, not fun, and "unpythonic", I've been 
told.  But it can be done and I was happy with the results (not the 
process).

I can't share the code anymore, but I remember doing  a few things:

1. Using an initialization routine.  The classdef of 'MyAlice' would 
register as a provider of 'Alice'; the initialization hook would set up 
relations of 'Alice' to use 'MyAlice'.  I would just inspect the inherited 
'models' package, and then declare the necessary relations.  An early 
version used a registry; in later versions I realized I could just use the 
inheritance for the registry information.

2. All functions in the 'base' code referenced `self.__class__` -- as in 
`getattr(self.__class__, column)`

While it does require you to set up the package in a certain way and call 
an init routine, I think it's the right thing to do -- since you need 
things to be set up correctly, your init routine can have a handful of 
statements to ensure the package and database are properly set up in the 
'child' project.
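
A rough sketch of the registration idea described above (all names are
hypothetical; this shows the general shape, not the original code):

```python
# Each child project registers its subclass as the "provider" of a base
# model name; the init routine then consults the registry to wire up
# relations instead of hard-coding classes.
_providers = {}

def provides(base_name):
    def register(cls):
        _providers[base_name] = cls
        return cls
    return register

class Alice:  # stand-in for the shared declarative model
    pass

@provides("Alice")
class MyAlice(Alice):
    def myfunc(self):
        return "app-specific behavior"

def resolve(base_name, default=None):
    # called by the init routine for each base model it knows about
    return _providers.get(base_name, default)
```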



Re: [sqlalchemy] Sub-classing declarative classes

2014-06-11 Thread Simon King
On Tue, Jun 10, 2014 at 8:47 PM, Noah Davis neopygmal...@gmail.com wrote:
 Hi,
I've been banging my head against this one for several days now, and
 aside from a three year-old post here, I've come up empty.

 I've got a python module that defines a set of Declarative models that
 several other applications may use. What I'd like is some way to for the
 individual applications to sub-class the existing Declarative objects,
 without adding any new SQL functionality. Specifically, I'd just like to add
 application-specific helper code to the objects. As an example.

 some_model.py
 ---
 [SQLA setup of Base class here]
 class Alice(Base):
__tablename__ = 'alice'
id = Column(Integer, primary_key=True)
value = Column(String)

 class Bob(Base):
__tablename__ = 'bob'
id = Column(Integer, primary_key=True)
subval = Column(String)
alice_id = Column(Integer, ForeignKey('alice.id'))
alice = relationship('Alice', backref='bobs')
 

 some_app.py
 
 import some_model

 class MyAlice(some_model.Alice):
def myfunc(self):
do_nothing_sql_related_here()

 class MyBob(some_model.Bob):
def otherfunc(self):
   again_something_unrelated()
 -

 This actually works okay out of the box if I select on the subclasses:
 DBSession.query(MyAlice).filter(MyAlice.id == 5).first() -> MyAlice(...)

 The problem, of course, is relations:
 a = DBSession.query(MyAlice).filter(MyAlice.id == 1).first()
 a.bobs -> [Bob(...), Bob(...), Bob(...)]
 instead of
 a.bobs -> [MyBob(...), MyBob(...), MyBob(...)]

 I suspect there's some way to tell the ORM to Do The Right Thing here, but I
 have no idea what it might be. I'd like the particular applications to be as
 unaware of the underlying table information as possible. I guess in essence
 I'm trying to separate business logic from the DB logic as much as possible.
 Maybe I'm heading down a dead-end... I'm open to better suggestions.

 Thanks,
Noah


Is it really very important to you that myfunc and otherfunc are
methods of your mapped objects, rather than functions defined
somewhere else that take an Alice or Bob object as a parameter? I used
to write code where pretty much everything that *could* be a method of
a class *was* a method of that class, and my classes became unwieldy
and difficult to test as a result. Recently I've been trying to keep
my classes small and write more functions instead, and I think it has
been a general improvement.

Having said that, if you *really* want these functions to be available
as methods of your classes, and you only want a single application
within each process, you could monkeypatch the methods in:

def myfunc(self):
do_nothing_sql_related_here()

some_model.Alice.myfunc = myfunc


You could wrap the monkeypatching up in a decorator (untested):

def monkeypatch(cls):
    def patcher(f):
        setattr(cls, f.__name__, f)
        return f
    return patcher

@monkeypatch(some_model.Alice)
def myfunc(self):
do_nothing_sql_related_here()


(This is pretty nasty really, but depending on your requirements it
might be the easiest way to make it work)

Hope that helps,

Simon



[sqlalchemy] pandas - issue with SQL server and checking for table existence with user-defined default schema

2014-06-11 Thread Joris Van den Bossche
Hi,

Since version 0.14 (released two weeks ago), pandas uses sqlalchemy in the 
SQL reading and writing functions to support different database flavors. A 
user reported an issue with SQL server: 
https://github.com/pydata/pandas/issues/7422 (and question on SO: 
http://stackoverflow.com/questions/24126883/pandas-dataframe-to-sql-function-if-exists-parameter-not-working).

The user has set the default schema to `test`, but 
`engine.has_table('table_name')` and `meta.tables` still seem to return the 
tables in schema `dbo`.
This leads to the following issue in our sql writing function `to_sql`:
- when creating the table (using `Table.create()`), it creates it in the 
schema set as default (so 'test')
- when checking for existence of the table (needed to see if the function 
has to fail, or has to append to the existing table), it however checks if 
the table exists in the 'dbo' schema
- for this reason, the function thinks the table does not yet exist, tries 
to create it, resulting in a "There is already an object named 'foobar' in 
the database" error.

Is there a way to resolve this? Is this an issue on our side, or possibly 
in sqlalchemy?

BTW, I tried this myself with PostgreSQL, but couldn't reproduce it.

Kind regards,
Joris



Re: [sqlalchemy] Writing simple dialect for iSeries + pyodbc in order to use engine.execute for sql statements.

2014-06-11 Thread Cory Lutton
Looks like I wasn't looking at it correctly then...

What I did so far lets it pass through statements using execute() which is 
what I need for now but at least I know not to spend much more time on it.

I just took another look at the ibm_db_sa and installed it but it seems to 
have Python 3 issues, I'll submit a request on the google code site for it 
and see where I get.

Thanks for letting me know

On Wednesday, June 11, 2014 4:16:19 AM UTC-7, Jaimy Azle wrote:

 Actually, ibm_db_sa supports several methods of connecting to iSeries: 
 natively through ibm_db, pyodbc, or JDBC (with Jython). 
  On Jun 8, 2014 8:53 AM, Cory Lutton cory@gmail.com wrote:

 Thanks for such a quick reply.  Great to hear that I am starting out on 
 the right path with building a dialect, I have some work to do so I just 
 wanted to make sure I wasn't missing something before I spend the time.  
 Hopefully I can get things working enough where I can post it somewhere.

 I have looked at that IBM DB package; unfortunately the iSeries is a 
 remote connection using ibm_db, meaning I would have to get the Enterprise 
 DB2 Connect license or get a special quote for unlimited -- it makes me 
 laugh a bit what the listed price is.

 On Saturday, June 7, 2014 5:48:02 PM UTC-7, Michael Bayer wrote:


 On Jun 7, 2014, at 8:27 PM, Cory Lutton cory@gmail.com wrote:

 I have been looking at using sqlalchemy in an internal company cherrypy 
 application I am working on.  It will need to interface with my company's 
 iSeries server in order to use ERP data.  I have been using pyodbc so far 
 and everything works great.  I am thinking of adding access to another 
 database that is postgres.  Rather than write that stuff again, I was 
 thinking about trying to use sqlalchemy.  If I use it, I would want to use 
 it for both: one for the iSeries (DB2) and one for postgres.

 So, I started writing a dialect for iseries+pyodbc and want to make 
 sure I am headed down the right path.  It seems to be working so far 
 import sqlalchemy as sa
 import sqlalchemy_iseries
 from urllib.parse import quote

 engine = sa.create_engine(
     "iseries+pyodbc:///?odbc_connect={connect}".format(
         connect=quote(connect)), pool_size=1)
 con = engine.connect()

 # Only used like a pyodbc cursor, executing specifically created
 # statements.
 rows = con.execute("SELECT * FROM alpha.r50all.lbmx")

 # Access via name like a dictionary rather than row.LBID
 for row in rows:
     print(row['LBID'])

 con.close()

 Being new to sqlalchemy, I am hoping to get some advice on whether what I 
 am doing below is basically going in the right direction, or a pointer in 
 the right direction if I am headed the wrong way (or reinventing something).

 Here is what I have so far...

 *__init__.py:*
 from sqlalchemy.dialects import registry
 from . import pyodbc

 dialect = pyodbc.dialect

 registry.register("iseries.pyodbc", "sqlalchemy_iseries", "dialect")

 *base.py:*
 from sqlalchemy.engine import default

 class ISeriesDialect(default.DefaultDialect):
     name = 'iseries'
     max_identifier_length = 128
     schema_name = "qgpl"


 *pyodbc.py:*
 from .base import ISeriesDialect
 from sqlalchemy.connectors.pyodbc import PyODBCConnector

 class ISeriesDialect_pyodbc(PyODBCConnector, ISeriesDialect):
     pyodbc_driver_name = 'iSeries Access ODBC Driver'

     def _check_unicode_returns(self, connection):
         return False

 dialect = ISeriesDialect_pyodbc



 looks great.  If you want examples of the full format, take a look at 
 some of the existing external dialects at
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/index.html#external-dialects.

 Are you sure that the IBM DB SA dialect doesn’t cover this backend 
 already?  They have support for pyodbc + DB2, but I’m not really sure how 
 “iSeries” differs.  https://code.google.com/p/ibm-db/







Re: [sqlalchemy] how to tell if a relationship was loaded or not ?

2014-06-11 Thread Mike Bayer

On 6/11/14, 2:17 PM, Jonathan Vanasco wrote:
 I can't find this in the API or by using `inspect` on an object.

 I'm trying to find out how to tell if a particular relationship was
loaded or not.

 ie, I loaded Foo from the ORM, and want to see if foo.bar was loaded.

 I thought it might have been the `.attrs[column].state` , which is an
InstanceState, but it doesn't appear to be so.

usually we do "key in obj.__dict__" or "key in inspect(obj).dict".
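
A runnable illustration of that check (a small sketch; the model names
are made up):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = sa.Column(sa.Integer, primary_key=True)
    bars = relationship("Bar")

class Bar(Base):
    __tablename__ = "bar"
    id = sa.Column(sa.Integer, primary_key=True)
    foo_id = sa.Column(sa.Integer, sa.ForeignKey("foo.id"))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Foo(id=1, bars=[Bar(id=1)]))
    session.commit()
    foo = session.query(Foo).first()
    # The lazy relationship is absent from the instance dict until accessed.
    loaded_before = "bars" in sa.inspect(foo).dict
    foo.bars  # triggers the lazy load
    loaded_after = "bars" in sa.inspect(foo).dict
```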





Re: [sqlalchemy] pandas - issue with SQL server and checking for table existence with user-defined default schema

2014-06-11 Thread Mike Bayer

On 6/11/14, 4:07 PM, Joris Van den Bossche wrote:
 Hi,

 Since version 0.14 (released two weeks ago), pandas uses sqlalchemy in
 the SQL reading and writing functions to support different database
 flavors. A user reported an issue with SQL server:
 https://github.com/pydata/pandas/issues/7422 (and question on SO:
 http://stackoverflow.com/questions/24126883/pandas-dataframe-to-sql-function-if-exists-parameter-not-working).

 The user has set the default schema to `test`, but
 `engine.has_table('table_name')` and `meta.tables` still seem to
 return the tables in schema `dbo`.
 This leads to the following issue in our sql writing function `to_sql`:
 - when creating the table (using `Table.create()`), it creates it in
 the schema set as default (so 'test')
 - when checking for existence of the table (needed to see if the
 function has to fail, or has to append to the existing table), it
 however checks if the table exists in the 'dbo' schema
 - for this reason, the function thinks the table does not yet exists,
 tries to create it, resulting in a |There is already an object named
 'foobar' in the database| error.

 Is there a way to resolve this? Is this an issue on our side, or
 possibly in sqlalchemy?

all sqlalchemy dialects make sure to determine what in fact is the
default schema upon first connect.  With SQL server, turn on
echo='debug' and you will see this query:

2014-06-11 19:14:02,064 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE principal_id=database_principal_id()
   
2014-06-11 19:14:02,065 INFO sqlalchemy.engine.base.Engine ()
2014-06-11 19:14:02,065 DEBUG sqlalchemy.engine.base.Engine Col
('default_schema_name',)
2014-06-11 19:14:02,065 DEBUG sqlalchemy.engine.base.Engine Row ('dbo', )

that row ('dbo', ) you see is what is being determined as the default
schema.  A subsequent has_table() command with no explicit schema will
use this value:

SELECT [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME],
[COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE],
[COLUMNS_1].[DATA_TYPE], [COLUMNS_1].[ORDINAL_POSITION],
[COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT],
[COLUMNS_1].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND
[COLUMNS_1].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2014-06-11 19:14:02,071 INFO sqlalchemy.engine.base.Engine ('foo', 'dbo')

so you want to look into the SQL server database_principal_id() function
and why that might not be working as expected.   If you see that the
function is returning NULL or None, there's a workaround which is that
you can specify schema_name='xyz' to create_engine() as an argument;
however this value is only used if the above query is returning NULL
(which it should not be).
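
To see how an explicit schema argument changes the lookup, here is a small
SQLite stand-in (ATTACH plays the role of the second schema; not SQL
Server, but the has_table behavior is analogous):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
with engine.connect() as conn:
    # ATTACH gives SQLite a second schema named "test", standing in for
    # the user-defined default schema in the SQL Server scenario above.
    conn.execute(sa.text("ATTACH DATABASE ':memory:' AS test"))
    conn.execute(sa.text("CREATE TABLE test.foobar (id INTEGER)"))
    insp = sa.inspect(conn)
    in_default = insp.has_table("foobar")              # checks default schema
    in_test = insp.has_table("foobar", schema="test")  # checks explicitly
```

Passing `schema=` explicitly sidesteps whatever the dialect detected as
the default, which is one way pandas-style code can pin down where the
table lives.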









 BTW, I tried this myself with PostgreSQL, but couldn't reproduce it.

 Kind regards,
 Joris



Re: [sqlalchemy] Writing simple dialect for iSeries + pyodbc in order to use engine.execute for sql statements.

2014-06-11 Thread Jaimy Azle
The iSeries support in ibm_db_sa had not been tested on Python 3 when I
wrote it.
 On Jun 12, 2014 4:04 AM, Cory Lutton cory.lut...@gmail.com wrote:

 Looks like I wasn't looking at it correctly then...

 What I did so far lets it pass through statements using execute() which is
 what I need for now but at least I know not to spend much more time on it.

 I just took another look at the ibm_db_sa and installed it but it seems to
 have Python 3 issues, I'll submit a request on the google code site for it
 and see where I get.

 Thanks for letting me know

 On Wednesday, June 11, 2014 4:16:19 AM UTC-7, Jaimy Azle wrote:

 Actually, ibm_db_sa supports several methods of connecting to iSeries:
 natively through ibm_db, pyodbc, or JDBC (with Jython).
  On Jun 8, 2014 8:53 AM, Cory Lutton cory@gmail.com wrote:

 Thanks for such a quick reply.  Great to hear that I am starting out on
 the right path with building a dialect, I have some work to do so I just
 wanted to make sure I wasn't missing something before I spend the time.
 Hopefully I can get things working enough where I can post it somewhere.

 I have looked at that IBM DB package; unfortunately the iSeries is a
 remote connection using ibm_db, meaning I would have to get the Enterprise
 DB2 Connect license or get a special quote for unlimited -- it makes me
 laugh a bit what the listed price is.

 On Saturday, June 7, 2014 5:48:02 PM UTC-7, Michael Bayer wrote:


 On Jun 7, 2014, at 8:27 PM, Cory Lutton cory@gmail.com wrote:

 I have been looking at using sqlalchemy in an internal company cherrypy
 application I am working on.  It will need to interface with my company's
 iSeries server in order to use ERP data.  I have been using pyodbc so far
 and everything works great.  I am thinking of adding access to another
 database that is postgres.  Rather than write that stuff again, I was
 thinking about trying to use sqlalchemy.  If I use it, I would want to use
 it for both: one for the iSeries (DB2) and one for postgres.

 So, I started writing a dialect for iseries+pyodbc and want to make
 sure I am headed down the right path.  It seems to be working so far
 import sqlalchemy as sa
 import sqlalchemy_iseries
 from urllib.parse import quote

 engine = sa.create_engine(
     "iseries+pyodbc:///?odbc_connect={connect}".format(
         connect=quote(connect)), pool_size=1)
 con = engine.connect()

 # Only used like a pyodbc cursor, executing specifically created
 # statements.
 rows = con.execute("SELECT * FROM alpha.r50all.lbmx")

 # Access via name like a dictionary rather than row.LBID
 for row in rows:
     print(row['LBID'])

 con.close()

 Being new to sqlalchemy, I am hoping to get some advice on whether what
 I am doing below is basically going in the right direction, or a pointer in
 the right direction if I am headed the wrong way (or reinventing something).

 Here is what I have so far...

 *__init__.py:*
 from sqlalchemy.dialects import registry
 from . import pyodbc

 dialect = pyodbc.dialect

 registry.register("iseries.pyodbc", "sqlalchemy_iseries", "dialect")

 *base.py:*
 from sqlalchemy.engine import default

 class ISeriesDialect(default.DefaultDialect):
     name = 'iseries'
     max_identifier_length = 128
     schema_name = "qgpl"


 *pyodbc.py:*
 from .base import ISeriesDialect
 from sqlalchemy.connectors.pyodbc import PyODBCConnector

 class ISeriesDialect_pyodbc(PyODBCConnector, ISeriesDialect):
     pyodbc_driver_name = 'iSeries Access ODBC Driver'

     def _check_unicode_returns(self, connection):
         return False

 dialect = ISeriesDialect_pyodbc



 looks great.  If you want examples of the full format, take a look at
 some of the existing external dialects at
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/index.html#external-dialects.

 Are you sure that the IBM DB SA dialect doesn’t cover this backend
 already?  They have support for pyodbc + DB2, but I’m not really sure how
 “iSeries” differs.  https://code.google.com/p/ibm-db/




