Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 4:06 PM,   wrote:
> It happens implicitly when iterating over the cursor to get the results.
> Here's an example using fetchmany:

I noticed that you're using the Postgresql dialect directly; note that
there is a Redshift variant of this dialect here:

https://github.com/sqlalchemy-redshift/sqlalchemy-redshift

This dialect doesn't do anything specific with the cursor,
so it is unlikely to solve this issue.  However, it might be good to use
this dialect for its other Redshift features anyway (as well as that
it's the more appropriate place if a Redshift-specific workaround
turns out to be needed).

As far as the cursor goes, SQLAlchemy opens exactly one cursor for the
duration of any statement.   Test:

from sqlalchemy import create_engine
engine = create_engine('postgres+psycopg2://scott:tiger@localhost/test')
print "INITIALIZE ENGINE FIRST, DOES A FEW THINGS"
engine.connect().close()

print "OK, READY TO RUN A SINGLE STATEMENT!!!"
with engine.connect().execution_options(stream_results=True) as conn:
    results = conn.execute("SELECT 1 WHERE 1=0")
    data = [r for r in results]
    assert not data

patch with print statements:

diff --git a/lib/sqlalchemy/dialects/postgresql/psycopg2.py b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
index 31792a492..af8367018 100644
--- a/lib/sqlalchemy/dialects/postgresql/psycopg2.py
+++ b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
@@ -429,6 +429,7 @@ _server_side_id = util.counter()

 class PGExecutionContext_psycopg2(PGExecutionContext):
     def create_server_side_cursor(self):
+        print "CREATE PG SERVER SIDE CURSOR!"
         # use server-side cursors:
         # http://lists.initd.org/pipermail/psycopg/2007-January/005251.html
         ident = "c_%s_%s" % (hex(id(self))[2:],
diff --git a/lib/sqlalchemy/engine/default.py b/lib/sqlalchemy/engine/default.py
index 8b72c0001..9a0a1e55a 100644
--- a/lib/sqlalchemy/engine/default.py
+++ b/lib/sqlalchemy/engine/default.py
@@ -945,6 +945,7 @@ class DefaultExecutionContext(interfaces.ExecutionContext):
         return use_server_side

     def create_cursor(self):
+        print "CREATE CURSOR!!!"
         if self._use_server_side_cursor():
             self._is_server_side = True
             return self.create_server_side_cursor()


output:

INITIALIZE ENGINE FIRST, DOES A FEW THINGS
CREATE CURSOR!!!
CREATE CURSOR!!!
CREATE CURSOR!!!
OK, READY TO RUN A SINGLE STATEMENT!!!
CREATE CURSOR!!!
CREATE PG SERVER SIDE CURSOR!

It is also the case that this cursor is explicitly closed, provided
you have fully fetched all rows from the result. If you get a
result set, don't fully fetch or close it, and then run
another statement, the previous cursor will *not* necessarily have
been closed; closing happens upon garbage collection too, but it's
better to close explicitly if you aren't fully fetching.
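The fetch-fully-or-close-explicitly discipline can be sketched with the stdlib sqlite3 DBAPI standing in for psycopg2 (the table and data here are made up for illustration; the point is the explicit cursor close after a partial fetch):

```python
import sqlite3

# sqlite3 stands in for psycopg2 purely to illustrate the discipline;
# names and data are illustrative, not from this thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT x FROM t")
rows = cur.fetchmany(3)   # partial fetch only
cur.close()               # close explicitly rather than waiting for GC
print(len(rows))          # 3
```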

The stack trace you have shows that psycopg2 is referring to a new
cursor at the point fetchmany() is called.    I don't have
insight into why that might be.

It might be worth checking, however, whether Redshift allows more than
one server-side cursor to be opened over the *whole lifespan* of the
database connection - since the connections here are stored in a
connection pool, it is certainly the case that after one cursor is
closed, if you keep doing things with the engine, another cursor will
be opened on that connection.




>
> import psycopg2
> from datetime import datetime
> conn_str = 'host=redshift_host port=redshift_port username=username
> password=password'
> query = 'SELECT * FROM test WHERE timestamp > %s'
> params = (datetime(2020, 12, 31, 0, 0),)
> with psycopg2.connect(conn_str) as conn:
>     with conn.cursor(name='test_cursor') as cur:
>         cur.execute(query, params)
>         results = cur.fetchmany(1000)
>
> This doesn't produce any errors either.
>
>
> On Wednesday, September 6, 2017 at 1:57:54 PM UTC-6, Mike Bayer wrote:
>>
>> On Wed, Sep 6, 2017 at 2:42 PM,   wrote:
>> > Yes, it is a select query.  I tried to re-create this just using
>> > psycopg2
>> > and named cursors and didn't get an error so I thought it had to be
>> > somewhere in sqlalchemy
>> >
>> > import psycopg2
>> > from datetime import datetime
>> > conn_str = 'host=redshift_host port=redshift_port username=username
>> > password=password'
>> > query = 'SELECT * FROM test WHERE timestamp > %s'
>> > params = (datetime(2020, 12, 31, 0, 0),)
>> > with psycopg2.connect(conn_str) as conn:
>> > with conn.cursor(name='test_cursor') as cur:
>> > cur.itersize=1000 # default is 2000, but the server I connected
>> > to
>> > only allowed 1000
>> > cur.execute(query, params)
>> > results = [r for r in cur]
>> >
>> >
>> > I tried connecting to the same server and running the same query.  No
>> > error
>> > is returned and I end up with an results as an empty list.
>>
>> where's the fetchmany() call ?
>>
>>
>> >
>> >
>> > On Wednesday, 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
It happens implicitly when iterating over the cursor to get the results. 
 Here's an example using fetchmany:

import psycopg2
from datetime import datetime

# note: psycopg2 connection strings use "user", not "username"
conn_str = 'host=redshift_host port=redshift_port user=username password=password'
query = 'SELECT * FROM test WHERE timestamp > %s'
params = (datetime(2020, 12, 31, 0, 0),)
with psycopg2.connect(conn_str) as conn:
    with conn.cursor(name='test_cursor') as cur:
        cur.execute(query, params)
        results = cur.fetchmany(1000)

This doesn't produce any errors either.


On Wednesday, September 6, 2017 at 1:57:54 PM UTC-6, Mike Bayer wrote:
>
> On Wed, Sep 6, 2017 at 2:42 PM,   wrote: 
> > Yes, it is a select query.  I tried to re-create this just using 
> psycopg2 
> > and named cursors and didn't get an error so I thought it had to be 
> > somewhere in sqlalchemy 
> > 
> > import psycopg2 
> > from datetime import datetime 
> > conn_str = 'host=redshift_host port=redshift_port username=username 
> > password=password' 
> > query = 'SELECT * FROM test WHERE timestamp > %s' 
> > params = (datetime(2020, 12, 31, 0, 0),) 
> > with psycopg2.connect(conn_str) as conn: 
> > with conn.cursor(name='test_cursor') as cur: 
> > cur.itersize=1000 # default is 2000, but the server I connected 
> to 
> > only allowed 1000 
> > cur.execute(query, params) 
> > results = [r for r in cur] 
> > 
> > 
> > I tried connecting to the same server and running the same query.  No 
> error 
> > is returned and I end up with an results as an empty list. 
>
> where's the fetchmany() call ? 
>
>
> > 
> > 
> > On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote: 
> >> 
> >> On Wed, Sep 6, 2017 at 12:21 PM,   wrote: 
> >> > I've run into an issue with Redshift and I thought I would post it 
> here 
> >> > before logging a bug.  If I run a query against a Redshift database 
> >> > table 
> >> > that happens to return 0 rows I get an error when I try to iterate 
> over 
> >> > the 
> >> > result set.  I run a pretty simple script that is looking for rows 
> where 
> >> > a 
> >> > timestamp field contains a date time that is greater than the last 
> time 
> >> > the 
> >> > query was run.  Most of the time there are several thousand rows, but 
> at 
> >> > times there are no rows that have been updated and the query returns 
> a 
> >> > result set with no records. 
> >> > 
> >> > It is simple enough to reproduce simply be generating a query that 
> >> > always 
> >> > returns 0 records. 
> >> 
> >> is it a SELECT query? 
> >> 
> >> I don't see anything on the SQLAlchemy side that's doing the wrong 
> >> thing here, the server-side cursors option just means we say 
> >> cursor.fetchmany() one or more times to pre-buffer rows from the 
> >> cursor.  We do not "open" a second server side cursor.psycopg2 is 
> >> reporting the error at the moment the code is calling fetchmany(), 
> >> which does not advertise as opening a new cursor. 
> >> 
> >> 
> >> > 
> >> > from sqlalchemy import create_engine 
> >> > engine = 
> >> > 
> >> > 
> create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
>  
>
> >> > with engine.connect().execute_options(stream_results=True) as conn: 
> >> > results = conn.execute(query) 
> >> > data = [r for r in results] 
> >> > 
> >> > Traceback (most recent call last): 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1174, in fetchone 
> >> > row = self._fetchone_impl() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1296, in _fetchone_impl 
> >> > self.__buffer_rows() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1283, in __buffer_rows 
> >> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> >> > psycopg2.InternalError: opening multiple cursors from within the same 
> >> > client 
> >> > connection is not allowed. 
> >> > The above exception was the direct cause of the following exception: 
> >> > Traceback (most recent call last): 
> >> >   File "", line 3, in  
> >> >   File "", line 3, in  
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 870, in __iter__ 
> >> > row = self.fetchone() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1183, in fetchone 
> >> > self.cursor, self.context) 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>  
>
> >> > line 1402, in _handle_dbapi_exception 
> >> > exc_info 
> 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 2:42 PM,   wrote:
> Yes, it is a select query.  I tried to re-create this just using psycopg2
> and named cursors and didn't get an error so I thought it had to be
> somewhere in sqlalchemy
>
> import psycopg2
> from datetime import datetime
> conn_str = 'host=redshift_host port=redshift_port username=username
> password=password'
> query = 'SELECT * FROM test WHERE timestamp > %s'
> params = (datetime(2020, 12, 31, 0, 0),)
> with psycopg2.connect(conn_str) as conn:
>     with conn.cursor(name='test_cursor') as cur:
>         cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
>         cur.execute(query, params)
>         results = [r for r in cur]
>
>
> I tried connecting to the same server and running the same query.  No error
> is returned and I end up with results as an empty list.

where's the fetchmany() call ?


>
>
> On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote:
>>
>> On Wed, Sep 6, 2017 at 12:21 PM,   wrote:
>> > I've run into an issue with Redshift and I thought I would post it here
>> > before logging a bug.  If I run a query against a Redshift database
>> > table
>> > that happens to return 0 rows I get an error when I try to iterate over
>> > the
>> > result set.  I run a pretty simple script that is looking for rows where
>> > a
>> > timestamp field contains a date time that is greater than the last time
>> > the
>> > query was run.  Most of the time there are several thousand rows, but at
>> > times there are no rows that have been updated and the query returns a
>> > result set with no records.
>> >
>> > It is simple enough to reproduce simply be generating a query that
>> > always
>> > returns 0 records.
>>
>> is it a SELECT query?
>>
>> I don't see anything on the SQLAlchemy side that's doing the wrong
>> thing here, the server-side cursors option just means we say
>> cursor.fetchmany() one or more times to pre-buffer rows from the
>> cursor.  We do not "open" a second server side cursor.  psycopg2 is
>> reporting the error at the moment the code is calling fetchmany(),
>> which does not advertise as opening a new cursor.
>>
>>
>> >
>> > from sqlalchemy import create_engine
>> > engine =
>> >
>> > create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
>> > with engine.connect().execute_options(stream_results=True) as conn:
>> > results = conn.execute(query)
>> > data = [r for r in results]
>> >
>> > Traceback (most recent call last):
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1174, in fetchone
>> > row = self._fetchone_impl()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1296, in _fetchone_impl
>> > self.__buffer_rows()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1283, in __buffer_rows
>> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
>> > psycopg2.InternalError: opening multiple cursors from within the same
>> > client
>> > connection is not allowed.
>> > The above exception was the direct cause of the following exception:
>> > Traceback (most recent call last):
>> >   File "", line 3, in 
>> >   File "", line 3, in 
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 870, in __iter__
>> > row = self.fetchone()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1183, in fetchone
>> > self.cursor, self.context)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>> > line 1402, in _handle_dbapi_exception
>> > exc_info
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>> > line 203, in raise_from_cause
>> > reraise(type(exception), exception, tb=exc_tb, cause=cause)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>> > line 186, in reraise
>> > raise value.with_traceback(tb)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1174, in fetchone
>> > row = self._fetchone_impl()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1296, in _fetchone_impl
>> > self.__buffer_rows()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1283, in __buffer_rows
>> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
Yes, it is a select query.  I tried to re-create this just using psycopg2 
and named cursors and didn't get an error so I thought it had to be 
somewhere in sqlalchemy

import psycopg2
from datetime import datetime

# note: psycopg2 connection strings use "user", not "username"
conn_str = 'host=redshift_host port=redshift_port user=username password=password'
query = 'SELECT * FROM test WHERE timestamp > %s'
params = (datetime(2020, 12, 31, 0, 0),)
with psycopg2.connect(conn_str) as conn:
    with conn.cursor(name='test_cursor') as cur:
        cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
        cur.execute(query, params)
        results = [r for r in cur]


I tried connecting to the same server and running the same query.  No error 
is returned and I end up with results as an empty list.


On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote:
>
> On Wed, Sep 6, 2017 at 12:21 PM,   
> wrote: 
> > I've run into an issue with Redshift and I thought I would post it here 
> > before logging a bug.  If I run a query against a Redshift database 
> table 
> > that happens to return 0 rows I get an error when I try to iterate over 
> the 
> > result set.  I run a pretty simple script that is looking for rows where 
> a 
> > timestamp field contains a date time that is greater than the last time 
> the 
> > query was run.  Most of the time there are several thousand rows, but at 
> > times there are no rows that have been updated and the query returns a 
> > result set with no records. 
> > 
> > It is simple enough to reproduce simply be generating a query that 
> always 
> > returns 0 records. 
>
> is it a SELECT query? 
>
> I don't see anything on the SQLAlchemy side that's doing the wrong 
> thing here, the server-side cursors option just means we say 
> cursor.fetchmany() one or more times to pre-buffer rows from the 
> cursor.  We do not "open" a second server side cursor.  psycopg2 is 
> reporting the error at the moment the code is calling fetchmany(), 
> which does not advertise as opening a new cursor. 
>
>
> > 
> > from sqlalchemy import create_engine 
> > engine = 
> > 
> create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
>  
>
> > with engine.connect().execute_options(stream_results=True) as conn: 
> > results = conn.execute(query) 
> > data = [r for r in results] 
> > 
> > Traceback (most recent call last): 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1174, in fetchone 
> > row = self._fetchone_impl() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1296, in _fetchone_impl 
> > self.__buffer_rows() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1283, in __buffer_rows 
> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> > psycopg2.InternalError: opening multiple cursors from within the same 
> client 
> > connection is not allowed. 
> > The above exception was the direct cause of the following exception: 
> > Traceback (most recent call last): 
> >   File "", line 3, in  
> >   File "", line 3, in  
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 870, in __iter__ 
> > row = self.fetchone() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1183, in fetchone 
> > self.cursor, self.context) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>  
>
> > line 1402, in _handle_dbapi_exception 
> > exc_info 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>  
>
> > line 203, in raise_from_cause 
> > reraise(type(exception), exception, tb=exc_tb, cause=cause) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>  
>
> > line 186, in reraise 
> > raise value.with_traceback(tb) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1174, in fetchone 
> > row = self._fetchone_impl() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1296, in _fetchone_impl 
> > self.__buffer_rows() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1283, in __buffer_rows 
> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> > sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple 
> > cursors from within the 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 12:21 PM,   wrote:
> I've run into an issue with Redshift and I thought I would post it here
> before logging a bug.  If I run a query against a Redshift database table
> that happens to return 0 rows I get an error when I try to iterate over the
> result set.  I run a pretty simple script that is looking for rows where a
> timestamp field contains a date time that is greater than the last time the
> query was run.  Most of the time there are several thousand rows, but at
> times there are no rows that have been updated and the query returns a
> result set with no records.
>
> It is simple enough to reproduce simply by generating a query that always
> returns 0 records.

is it a SELECT query?

I don't see anything on the SQLAlchemy side that's doing the wrong
thing here; the server-side cursors option just means we call
cursor.fetchmany() one or more times to pre-buffer rows from the
cursor.  We do not "open" a second server-side cursor.    psycopg2 is
reporting the error at the moment the code calls fetchmany(),
which does not advertise as opening a new cursor.
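The pre-buffering described above is a plain DBAPI fetchmany() call; a minimal sketch against the stdlib sqlite3 driver (standing in for psycopg2 here, just to show the DBAPI-level behavior) confirms that fetchmany() on a zero-row result simply returns an empty list:

```python
import collections
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (x INTEGER)")

# A query that always returns zero rows, as in the original report.
cur = conn.execute("SELECT x FROM test WHERE 1=0")

# Roughly what the __buffer_rows step in the traceback does: pre-buffer
# a batch of rows from the cursor with fetchmany() into a deque.
rowbuffer = collections.deque(cur.fetchmany(1000))
print(len(rowbuffer))     # 0
```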


>
> from sqlalchemy import create_engine
> engine =
> create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
> with engine.connect().execute_options(stream_results=True) as conn:
>     results = conn.execute(query)
>     data = [r for r in results]
>
> Traceback (most recent call last):
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
>     row = self._fetchone_impl()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> psycopg2.InternalError: opening multiple cursors from within the same client
> connection is not allowed.
> The above exception was the direct cause of the following exception:
> Traceback (most recent call last):
>   File "", line 3, in 
>   File "", line 3, in 
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 870, in __iter__
>     row = self.fetchone()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1183, in fetchone
>     self.cursor, self.context)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
>     exc_info
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
>     reraise(type(exception), exception, tb=exc_tb, cause=cause)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
>     raise value.with_traceback(tb)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
>     row = self._fetchone_impl()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple
> cursors from within the same client connection is not allowed.
>
>
> I get the same problem using fetchone() or fetchmany() on the result set.
> If I don't use .execute_options(stream_results=True) to create the
> connection then I don't get the error.  It seems that option somehow ends up
> creating multiple server-side cursors when there are no records in the result
> set.  If any records are returned in the result set then I don't get the
> error.  Right now, as a work around I simply check the results.rowcount
> attribute before iterating through results, but I don't really like that
> work around.  Has anyone else encountered this problem with streamed results
> from Redshift?
>

[sqlalchemy] Re: Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
Sorry, that code sample contained a typo.  It should be:

from sqlalchemy import create_engine

engine = create_engine(
    'postgres+psycopg2://username:password@redshifthost:port/db_name')
with engine.connect().execution_options(stream_results=True) as conn:
    results = conn.execute(query)
    data = [r for r in results]

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
I've run into an issue with Redshift and I thought I would post it here 
before logging a bug.  If I run a query against a Redshift database table 
that happens to return 0 rows I get an error when I try to iterate over the 
result set.  I run a pretty simple script that is looking for rows where a 
timestamp field contains a date time that is greater than the last time the 
query was run.  Most of the time there are several thousand rows, but at 
times there are no rows that have been updated and the query returns a 
result set with no records.

It is simple enough to reproduce simply by generating a query that always 
returns 0 records.
 
from sqlalchemy import create_engine

engine = create_engine(
    'postgres+psycopg2://username:password@redshifthost:port/db_name')
with engine.connect().execute_options(stream_results=True) as conn:
    results = conn.execute(query)
    data = [r for r in results]

Traceback (most recent call last):
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
    row = self._fetchone_impl()
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
    self.__buffer_rows()
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
    self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
psycopg2.InternalError: opening multiple cursors from within the same
client connection is not allowed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "", line 3, in 
  File "", line 3, in 
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 870, in __iter__
    row = self.fetchone()
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1183, in fetchone
    self.cursor, self.context)
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
    exc_info
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
    row = self._fetchone_impl()
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
    self.__buffer_rows()
  File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
    self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple 
cursors from within the same client connection is not allowed.


I get the same problem using fetchone() or fetchmany() on the result set. 
If I don't use .execute_options(stream_results=True) to create the 
connection then I don't get the error.  It seems that option somehow ends 
up creating multiple server-side cursors when there are no records in the 
result set.  If any records are returned in the result set then I don't get 
the error.  Right now, as a workaround I simply check the results.rowcount 
attribute before iterating through results, but I don't really like that 
workaround.  Has anyone else encountered this problem with streamed results 
from Redshift?



Re: [sqlalchemy] problems running test suite for dialect

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 8:11 AM, Gijs Molenaar  wrote:
> Hi!
>
> I'm working on improving the MonetDB dialect for SQLAlchemy.
>
> https://github.com/gijzelaerr/sqlalchemy-monetdb
>
> Most of the work is done, but there are some tests failing. I'm running into
> some issues running the test suite though.
>
> As described in the dialect's README, I have a custom test script to run the
> tests. The issues I'm having are:
>
> 1> I can't select a specific test.
>
> if I specify the test class, no tests are run:
>
>
>
> $ ./run_tests.py --tests test.test_suite.ComponentReflectionTest
> [...]
>
> Ran 0 tests in 0.261s
>
> OK

OK, in the past couple of weeks I rewrote README.unittests.rst and a
bit of README.dialects.rst to reflect the current state of things.
If there is one major fact to note, it is that nose is gone, as is
run_tests.py - the nose plugin points are still there to support
any stragglers, but SQLAlchemy and related projects have not used nose
for years, so you're going to want to move to the py.test approach,
which is also documented in README.dialects.rst.

I don't actually recall using a --tests flag with nose either.   The
README refers to this syntax:

 ./sqla_nose.py test.orm.test_mapper:MapperTest.test_utils

But I would recommend using py.test going forward.   nose
unfortunately became unmaintained, which is the whole reason SQLAlchemy
now keeps test frameworks at arm's length (so we don't have to go
through that again).
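As a rough sketch of the py.test equivalents (assuming the usual
third-party dialect layout where the compliance suite is exposed via
test/test_suite.py - adjust the path for your project):

```shell
# run every test in one class
py.test test/test_suite.py -k ComponentReflectionTest

# run a single test method by name
py.test test/test_suite.py -k test_get_temp_table_columns

# show full tracebacks rather than the abbreviated form
py.test test/test_suite.py --tb=long
```

py.test's -k flag matches on substrings of the test name, so no
module:Class.method syntax is needed.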




>
>
>
> If I specify a specific test, I get a module error:
>
>
> ./run_tests.py --tests
> test.test_suite.ComponentReflectionTest.test_get_temp_table_columns
> E
> [...]
>
> ImportError: No module named test_get_temp_table_columns
>
>
> 2>  Failing test stacktrace is quite useless
>
> ==
> ERROR: test.test_suite.ComponentReflectionTest.test_get_temp_table_columns
> --
> Traceback (most recent call last):
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/nose/case.py",
> line 197, in runTest
> self.test(*self.arg)
>   File "", line 2, in test_get_temp_table_columns
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py",
> line 95, in decorate
> return self._do(config._current, fn, *args, **kw)
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py",
> line 124, in _do
> self._expect_failure(cfg, ex, name=fn.__name__)
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py",
> line 136, in _expect_failure
> util.raise_from_cause(ex)
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
> line 203, in raise_from_cause
> reraise(type(exception), exception, tb=exc_tb, cause=cause)
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py",
> line 122, in _do
> return_value = fn(*args, **kw)
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/suite/test_reflection.py",
> line 364, in test_get_temp_table_columns
> user_tmp = self.tables.user_tmp
>   File
> "/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/util.py",
> line 228, in __getattribute__
> return dict.__getattribute__(self, key)
> AttributeError: 'adict' object has no attribute 'user_tmp'
>
> So the stacktrace shows me all the test code reraising the exception, but
> not where the actual problem is.
>
> 3> Can't set breakpoints using PyCharm
>
> When I run the test runner, all breakpoints I set with PyCharm are ignored.
> Also I'm unsuccessful making the test suite work nicely with the integrated
> test runner in PyCharm. Any idea how to get this working?
>
> thanks!
>
> greetings,
>
>  - Gijs
>


Re: Modeling One-to-One relationships and Unique constraints

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 3:12 AM,   wrote:
> Hello,
>
> I am a bit puzzled over modeling One-to-One relationships. The example in
> the documentation says to use uselist flag on the relationship declaration.
> That does make sense to ensure scalars on both sides of the relationship.
> However, that's just an ORM flag and does not necessarily translate to the
> DB (e.g. using Alembic).
>
> In this context I miss the mention of a unique constraint. Could the uselist
> flag not be derived if a unique constraint was specified on the foreign key
> column? For example:
>
> class Parent(Base):
>     __tablename__ = 'parent'
>     id = Column(Integer, primary_key=True)
>     child_id = Column(Integer, ForeignKey('child.id'), unique=True)
>     child = relationship("Child", back_populates="parent")
>
> class Child(Base):
>     __tablename__ = 'child'
>     id = Column(Integer, primary_key=True)
>     parent = relationship("Parent", back_populates="child")  # uselist=False redundant?
>
> Here, the Parent.child_id column has a unique constraint which narrows the
> Many-to-One to a One-to-One relationship. Would this not make the uselist
> flag redundant?
>
> Or am I missing something?

One can build a one-to-one relationship *without* a unique constraint
being present, hence the uselist flag as a public accessor still has a
purpose.   Additionally, if your Parent class linked to Child using a
composite foreign key where only some of the columns had unique=True,
that again means having uselist explicit is useful.   Add to that,
sometimes the table has been reflected, and it's only in the last few
years that SQLAlchemy has had the ability to reflect unique
constraints, and even then, not on all backends.

So there are many cases where a unique constraint may or may not be
present, yet we still need the uselist flag.   None of these cases are
the very specific case you have here.   The precedent for uselist
setting itself to False implicitly would be that of a standard
many-to-one, which involves foreign keys that link directly to a
primary key; the relationship() construct performs a lot of
heuristics when it is configured in order to figure this out.  These
heuristics took years to get right.

Overall, it is optional from an ORM point of view to set unique
constraints on Table metadata, even if the backing database does have
those constraints in place.   But this is not the case for a primary
key, which is explicitly required by the ORM.   So it is easier for
the ORM to rely upon making decisions for uselist based on a primary
key constraint, which is a very reliable source of information, versus
the presence of a unique constraint, which is an unreliable source of
information (in that it is optional, might not be set, might not be
reflected, might be present in the database and/or the model and not
actually on the other side).

So to minimize confusion and try to stick to the principle of "There
should be one-- and preferably only one --obvious way to do it.",
there is no feature right now to attempt to auto-determine uselist
based on the combination of the presence of a unique constraint
combined with the correct primaryjoin geometry.
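To illustrate the behavior described above, here is a minimal sketch
(an adaptation of the poster's models, not code from this thread)
showing that the unique constraint on the foreign key column is not
consulted, and uselist=False must still be stated explicitly on the
reverse side to get a scalar:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    child_id = Column(Integer, ForeignKey('child.id'), unique=True)
    child = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    # unique=True on Parent.child_id is not examined during
    # relationship configuration; without uselist=False this
    # attribute would be a list.
    parent = relationship("Parent", back_populates="child", uselist=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    child = Child()
    session.add(Parent(child=child))
    session.commit()
    # a scalar Parent, not a one-element list
    parent_is_scalar = isinstance(child.parent, Parent)
```

Dropping uselist=False from Child.parent while keeping unique=True on
the column demonstrates the point: the attribute comes back as a list.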




>
> Thanks!
> Jens
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] SQLAlchemy, Oracle, and Oracle synonyms

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 2:14 AM, Shane Ó Conchúir  wrote:
> Hi all,
>
> I have used SQLAlchemy in the past with MySQL, representing tables with
> classes and adding relationships. This worked really well for me.
>
> I am now trying to autogenerate SQLAlchemy models for an Oracle database
> with multiple schemas which uses tables, views, and public synonyms but am
> new to Oracle and do not know the correct approach here.
>
> The database has:
>
> tables, typically in non-public schemas hidden from users;
> views ("CREATE VIEW..."),  typically in non-public schemas hidden from
> users;
> public synonyms (Oracle synonyms: "CREATE PUBLIC SYNONYM...",
> https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_7001.htm)
> of views, used to expose a subset of the table data to users.
>
> This is how I am modeling the database:
>
> Tables
> I create DeclarativeBase subclasses for each table. Relationships and
> foreign keys are added as usual. The owning schema is added as a class
> attribute. This seems fine.
>
> Views
> Each view is generated with a call to Table(). This seems to be the standard
> approach
> (http://docs.sqlalchemy.org/en/latest/core/reflection.html#reflecting-views).
> Is it the 'best' approach? How does this work with DeclarativeBase? I add
> the owning schema and the foreign keys to views.
>
> Public synonyms
> I am at a loss as to the best way to represent these. Should they be
> subclasses of DeclarativeBase? Wrappers around views with the schema set to
> 'public'?
>
> Finally, is this a sensible approach? I did try using metadata and
> reflection to avoid explicitly writing any models but there seemed to be a
> startup cost in seconds to that when the script ran and it is also useful to
> have classes to augment with utility functions.

It's not clear if you are still working from reflection or if you are
just building out the classes at this point.  If you wish for your
application to use the public synonyms for database access, then name
your tables and views using the synonym names, and disregard the
private names.   Alternatively, if you are going to access the
private names directly, disregard the synonym names altogether.
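A minimal sketch of the first option, using hypothetical names (the
synonym EMPLOYEES and backing view hr_private.employee_v are invented
for illustration): model the public synonym name directly and leave
the schema off, since Oracle resolves public synonyms without a
qualifier.

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

# The backing view might live at hr_private.employee_v, but the
# application only ever refers to the public synonym EMPLOYEES,
# so that is the name the Table carries; no schema= argument.
employees = Table(
    'employees', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
)
```

Queries compiled against this Table render just "employees", which
Oracle resolves through the synonym; the private view name never
appears in the model.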





>
> Any help is welcome!
>
>
> Regards,
>
> Shane
>



[sqlalchemy] problems running test suite for dialect

2017-09-06 Thread Gijs Molenaar
Hi!

I'm working on improving the MonetDB dialect for SQLAlchemy.

https://github.com/gijzelaerr/sqlalchemy-monetdb

Most of the work is done, but there are some tests failing. I'm running 
into some issues running the test suite though.

As described in the dialect's README, I have a custom test script to run the 
tests. The issues I'm having are:

1> can't select a specific test.

if I specify the test class, no tests are run:



$ ./run_tests.py --tests test.test_suite.ComponentReflectionTest
[...]

Ran 0 tests in 0.261s

OK



If I specify a specific test, I get a module error:


./run_tests.py --tests test.test_suite.ComponentReflectionTest.
test_get_temp_table_columns
E
[...]

ImportError: No module named test_get_temp_table_columns


2>  Failing test stacktrace is quite useless

==
ERROR: test.test_suite.ComponentReflectionTest.test_get_temp_table_columns
--
Traceback (most recent call last):
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/nose/case.py"
, line 197, in runTest
self.test(*self.arg)
  File "", line 2, in test_get_temp_table_columns
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py"
, line 95, in decorate
return self._do(config._current, fn, *args, **kw)
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py"
, line 124, in _do
self._expect_failure(cfg, ex, name=fn.__name__)
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py"
, line 136, in _expect_failure
util.raise_from_cause(ex)
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py"
, line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/exclusions.py"
, line 122, in _do
return_value = fn(*args, **kw)
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/suite/test_reflection.py"
, line 364, in test_get_temp_table_columns
user_tmp = self.tables.user_tmp
  File 
"/home/gijs/Work/sqlalchemy-monetdb/.venv2/local/lib/python2.7/site-packages/sqlalchemy/testing/util.py"
, line 228, in __getattribute__
return dict.__getattribute__(self, key)
AttributeError: 'adict' object has no attribute 'user_tmp'

So the stacktrace shows me all the test code reraising the exception, but 
not where the actual problem is.

3> Can't set breakpoints using PyCharm

When I run the test runner, all breakpoints I set with PyCharm are ignored. 
Also I'm unsuccessful making the test suite work nicely with the integrated 
test runner in PyCharm. Any idea how to get this working?

thanks!

greetings,

 - Gijs



Modeling One-to-One relationships and Unique constraints

2017-09-06 Thread jens . troeger
Hello,

I am a bit puzzled over modeling One-to-One relationships. The example in 
the documentation says to use the uselist flag on the relationship 
declaration. That does make sense to ensure scalars on both sides of the 
relationship. However, that's just an ORM flag and does 
not necessarily translate to the DB (e.g. using Alembic).

In this context I miss the mention of a unique constraint. 
Could the uselist flag not be derived if a unique constraint was specified 
on the foreign key column? For example:

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    child_id = Column(Integer, ForeignKey('child.id'), unique=True)
    child = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent = relationship("Parent", back_populates="child")  # uselist=False redundant?

Here, the Parent.child_id column has a unique constraint which narrows the 
Many-to-One to a One-to-One relationship. Would this not make the uselist 
flag redundant?

Or am I missing something?

Thanks!
Jens



[sqlalchemy] SQLAlchemy, Oracle, and Oracle synonyms

2017-09-06 Thread Shane Ó Conchúir
Hi all,

I have used SQLAlchemy in the past with MySQL, representing tables with 
classes and adding relationships. This worked really well for me.

I am now trying to autogenerate SQLAlchemy models for an Oracle database 
with multiple schemas which uses tables, views, and public synonyms but am 
new to Oracle and do not know the correct approach here.

The database has:

   - tables, typically in non-public schemas hidden from users;
   - views ("CREATE VIEW..."),  typically in non-public schemas hidden from 
   users;
   - public synonyms (Oracle synonyms: "CREATE PUBLIC SYNONYM...", 
   https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_7001.htm) 
   of views, used to expose a subset of the table data to users.

This is how I am modeling the database:


*Tables*
I create DeclarativeBase subclasses for each table. Relationships 
and foreign keys are added as usual. The owning schema is added as a class 
attribute. This seems fine.

*Views*
Each view is generated with a call to Table(). This seems to be the 
standard approach 
(http://docs.sqlalchemy.org/en/latest/core/reflection.html#reflecting-views). 
Is it the 'best' approach? How does this work with DeclarativeBase? I add 
the owning schema and the foreign keys to views.

*Public synonyms*
I am at a loss as to the best way to represent these. Should they be 
subclasses of DeclarativeBase? Wrappers around views with the schema set to 
'public'?

Finally, is this a sensible approach? I did try using metadata and 
reflection to avoid explicitly writing any models but there seemed to be a 
startup cost in seconds to that when the script ran and it is also useful 
to have classes to augment with utility functions.

Any help is welcome!


Regards,

Shane
