Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 4:06 PM,   wrote:
> It happens implicitly when iterating over the cursor to get the results.
> Here's an example using fetchmany:

I noticed that you're using the Postgresql dialect directly; note that
there is a Redshift variant of this dialect here:

https://github.com/sqlalchemy-redshift/sqlalchemy-redshift

This dialect doesn't have anything specific going on with the cursor,
so it is unlikely to solve this issue.  However, it might be good to
use this dialect anyway for its other Redshift features (and it is
also the more appropriate place for a workaround if one turns out to
be needed specific to Redshift).
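
For reference, switching to it is mostly a matter of the engine URL; a
minimal sketch, assuming the sqlalchemy-redshift package is installed
(host, port, credentials and database name below are placeholders):

# "redshift+psycopg2" is the dialect name registered by sqlalchemy-redshift
from sqlalchemy import create_engine
engine = create_engine(
    'redshift+psycopg2://username:password@redshifthost:5439/db_name')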

As far as the cursor goes, SQLAlchemy opens exactly one cursor for the
duration of any statement.   Test:

from sqlalchemy import create_engine
engine = create_engine('postgres+psycopg2://scott:tiger@localhost/test')
print "INITIALIZE ENGINE FIRST, DOES A FEW THINGS"
engine.connect().close()

print "OK, READY TO RUN A SINGLE STATEMENT!!!"
with engine.connect().execution_options(stream_results=True) as conn:
    results = conn.execute("SELECT 1 WHERE 1=0")
    data = [r for r in results]
    assert not data

patch with print statements:

diff --git a/lib/sqlalchemy/dialects/postgresql/psycopg2.py b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
index 31792a492..af8367018 100644
--- a/lib/sqlalchemy/dialects/postgresql/psycopg2.py
+++ b/lib/sqlalchemy/dialects/postgresql/psycopg2.py
@@ -429,6 +429,7 @@ _server_side_id = util.counter()

 class PGExecutionContext_psycopg2(PGExecutionContext):
     def create_server_side_cursor(self):
+        print "CREATE PG SERVER SIDE CURSOR!"
         # use server-side cursors:
         # http://lists.initd.org/pipermail/psycopg/2007-January/005251.html
         ident = "c_%s_%s" % (hex(id(self))[2:],
diff --git a/lib/sqlalchemy/engine/default.py b/lib/sqlalchemy/engine/default.py
index 8b72c0001..9a0a1e55a 100644
--- a/lib/sqlalchemy/engine/default.py
+++ b/lib/sqlalchemy/engine/default.py
@@ -945,6 +945,7 @@ class DefaultExecutionContext(interfaces.ExecutionContext):
         return use_server_side

     def create_cursor(self):
+        print "CREATE CURSOR!!!"
         if self._use_server_side_cursor():
             self._is_server_side = True
             return self.create_server_side_cursor()


output:

INITIALIZE ENGINE FIRST, DOES A FEW THINGS
CREATE CURSOR!!!
CREATE CURSOR!!!
CREATE CURSOR!!!
OK, READY TO RUN A SINGLE STATEMENT!!!
CREATE CURSOR!!!
CREATE PG SERVER SIDE CURSOR!

It is also the case that this cursor is explicitly closed, provided
you have fully fetched all rows from the result.  If you fetch a
result set, don't fully fetch or close it, and then run another
statement, the previous cursor will *not* necessarily have been
closed; it does get closed upon garbage collection as well, but it's
better to close explicitly if you aren't fully fetching.
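
A minimal sketch of that pattern, reusing the engine from the test
above (the table name and fetch size are just illustrative):

with engine.connect().execution_options(stream_results=True) as conn:
    results = conn.execute("SELECT * FROM test")
    first_batch = results.fetchmany(1000)
    # stopping early, so close the result explicitly rather than relying
    # on garbage collection to release the server-side cursor
    results.close()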

The stack trace you have shows that psycopg2 is referring to a new
cursor at the point fetchmany() is called.  I don't have insight into
why that might be.

It might be nice to check, however, that Redshift allows more than
one server-side cursor to be opened over the *whole lifespan* of the
database connection - since the connections here are stored in a
connection pool, it is certainly the case that after one cursor is
closed, if you keep doing things with the engine, another cursor will
be opened on that same connection.
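
A quick way to check that with plain psycopg2 might look like this (a
sketch only; the connection string, table name and cursor names are
placeholders):

import psycopg2

conn = psycopg2.connect(
    'host=redshift_host port=5439 dbname=db_name user=username password=password')

# first named (server-side) cursor: open, fetch, close
with conn.cursor(name='cursor_one') as cur:
    cur.execute('SELECT * FROM test')
    cur.fetchmany(1000)
conn.commit()

# second named cursor on the *same* connection; if Redshift only allows
# one server-side cursor for the life of the connection, this is where
# it would complain
with conn.cursor(name='cursor_two') as cur:
    cur.execute('SELECT * FROM test')
    cur.fetchmany(1000)

conn.close()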




>
> import psycopg2
> from datetime import datetime
> conn_str = 'host=redshift_host port=redshift_port username=username password=password'
> query = 'SELECT * FROM test WHERE timestamp > %s'
> params = (datetime(2020, 12, 31, 0, 0),)
> with psycopg2.connect(conn_str) as conn:
>     with conn.cursor(name='test_cursor') as cur:
>         cur.execute(query, params)
>         results = cur.fetchmany(1000)
>
> This doesn't produce any errors either.
>
>
> On Wednesday, September 6, 2017 at 1:57:54 PM UTC-6, Mike Bayer wrote:
>>
>> On Wed, Sep 6, 2017 at 2:42 PM,   wrote:
>> > Yes, it is a select query.  I tried to re-create this just using psycopg2
>> > and named cursors and didn't get an error so I thought it had to be
>> > somewhere in sqlalchemy
>> >
>> > import psycopg2
>> > from datetime import datetime
>> > conn_str = 'host=redshift_host port=redshift_port username=username password=password'
>> > query = 'SELECT * FROM test WHERE timestamp > %s'
>> > params = (datetime(2020, 12, 31, 0, 0),)
>> > with psycopg2.connect(conn_str) as conn:
>> >     with conn.cursor(name='test_cursor') as cur:
>> >         cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
>> >         cur.execute(query, params)
>> >         results = [r for r in cur]
>> >
>> >
>> > I tried connecting to the same server and running the same query.  No error
>> > is returned and I end up with results as an empty list.
>>
>> where's the fetchmany() call ?
>>
>>
>> >
>> >
>> > On Wednesday, 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
It happens implicitly when iterating over the cursor to get the results. 
 Here's an example using fetchmany:

import psycopg2
from datetime import datetime
conn_str = 'host=redshift_host port=redshift_port username=username password=password'
query = 'SELECT * FROM test WHERE timestamp > %s'
params = (datetime(2020, 12, 31, 0, 0),)
with psycopg2.connect(conn_str) as conn:
    with conn.cursor(name='test_cursor') as cur:
        cur.execute(query, params)
        results = cur.fetchmany(1000)

This doesn't produce any errors either.
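
For what it's worth, iterating a named cursor is roughly equivalent to
repeated fetchmany() calls of itersize rows at a time; a sketch of
that equivalence, using the same placeholder connection details as
above:

with psycopg2.connect(conn_str) as conn:
    with conn.cursor(name='test_cursor') as cur:
        cur.itersize = 1000
        cur.execute(query, params)
        results = []
        while True:
            batch = cur.fetchmany(cur.itersize)
            if not batch:  # an empty batch means the cursor is exhausted
                break
            results.extend(batch)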


On Wednesday, September 6, 2017 at 1:57:54 PM UTC-6, Mike Bayer wrote:
>
> On Wed, Sep 6, 2017 at 2:42 PM,   wrote: 
> > Yes, it is a select query.  I tried to re-create this just using psycopg2
> > and named cursors and didn't get an error so I thought it had to be
> > somewhere in sqlalchemy
> > 
> > import psycopg2 
> > from datetime import datetime 
> > conn_str = 'host=redshift_host port=redshift_port username=username password=password'
> > query = 'SELECT * FROM test WHERE timestamp > %s'
> > params = (datetime(2020, 12, 31, 0, 0),)
> > with psycopg2.connect(conn_str) as conn:
> >     with conn.cursor(name='test_cursor') as cur:
> >         cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
> >         cur.execute(query, params)
> >         results = [r for r in cur]
> >
> >
> > I tried connecting to the same server and running the same query.  No error
> > is returned and I end up with results as an empty list.
>
> where's the fetchmany() call ? 
>
>
> > 
> > 
> > On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote: 
> >> 
> >> On Wed, Sep 6, 2017 at 12:21 PM,   wrote: 
> >> > I've run into an issue with Redshift and I thought I would post it here
> >> > before logging a bug.  If I run a query against a Redshift database table
> >> > that happens to return 0 rows I get an error when I try to iterate over the
> >> > result set.  I run a pretty simple script that is looking for rows where a
> >> > timestamp field contains a date time that is greater than the last time the
> >> > query was run.  Most of the time there are several thousand rows, but at
> >> > times there are no rows that have been updated and the query returns a
> >> > result set with no records.
> >> >
> >> > It is simple enough to reproduce simply by generating a query that always
> >> > returns 0 records.
> >> 
> >> is it a SELECT query? 
> >> 
> >> I don't see anything on the SQLAlchemy side that's doing the wrong 
> >> thing here, the server-side cursors option just means we say 
> >> cursor.fetchmany() one or more times to pre-buffer rows from the 
> >> cursor.  We do not "open" a second server side cursor.  psycopg2 is
> >> reporting the error at the moment the code is calling fetchmany(), 
> >> which does not advertise as opening a new cursor. 
> >> 
> >> 
> >> > 
> >> > from sqlalchemy import create_engine 
> >> > engine = create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
> >> > with engine.connect().execution_options(stream_results=True) as conn:
> >> >     results = conn.execute(query)
> >> >     data = [r for r in results]
> >> > 
> >> > Traceback (most recent call last): 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1174, in fetchone 
> >> > row = self._fetchone_impl() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1296, in _fetchone_impl 
> >> > self.__buffer_rows() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1283, in __buffer_rows 
> >> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> >> > psycopg2.InternalError: opening multiple cursors from within the same 
> >> > client 
> >> > connection is not allowed. 
> >> > The above exception was the direct cause of the following exception: 
> >> > Traceback (most recent call last): 
> >> >   File "", line 3, in  
> >> >   File "", line 3, in  
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 870, in __iter__ 
> >> > row = self.fetchone() 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> >> > line 1183, in fetchone 
> >> > self.cursor, self.context) 
> >> >   File 
> >> > 
> >> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>  
>
> >> > line 1402, in _handle_dbapi_exception 
> >> > exc_info 
> 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 2:42 PM,   wrote:
> Yes, it is a select query.  I tried to re-create this just using psycopg2
> and named cursors and didn't get an error so I thought it had to be
> somewhere in sqlalchemy
>
> import psycopg2
> from datetime import datetime
> conn_str = 'host=redshift_host port=redshift_port username=username password=password'
> query = 'SELECT * FROM test WHERE timestamp > %s'
> params = (datetime(2020, 12, 31, 0, 0),)
> with psycopg2.connect(conn_str) as conn:
>     with conn.cursor(name='test_cursor') as cur:
>         cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
>         cur.execute(query, params)
>         results = [r for r in cur]
>
>
> I tried connecting to the same server and running the same query.  No error
> is returned and I end up with results as an empty list.

where's the fetchmany() call ?


>
>
> On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote:
>>
>> On Wed, Sep 6, 2017 at 12:21 PM,   wrote:
>> > I've run into an issue with Redshift and I thought I would post it here
>> > before logging a bug.  If I run a query against a Redshift database table
>> > that happens to return 0 rows I get an error when I try to iterate over the
>> > result set.  I run a pretty simple script that is looking for rows where a
>> > timestamp field contains a date time that is greater than the last time the
>> > query was run.  Most of the time there are several thousand rows, but at
>> > times there are no rows that have been updated and the query returns a
>> > result set with no records.
>> >
>> > It is simple enough to reproduce simply by generating a query that always
>> > returns 0 records.
>>
>> is it a SELECT query?
>>
>> I don't see anything on the SQLAlchemy side that's doing the wrong
>> thing here, the server-side cursors option just means we say
>> cursor.fetchmany() one or more times to pre-buffer rows from the
>> cursor.  We do not "open" a second server side cursor.  psycopg2 is
>> reporting the error at the moment the code is calling fetchmany(),
>> which does not advertise as opening a new cursor.
>>
>>
>> >
>> > from sqlalchemy import create_engine
>> > engine = create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
>> > with engine.connect().execution_options(stream_results=True) as conn:
>> >     results = conn.execute(query)
>> >     data = [r for r in results]
>> >
>> > Traceback (most recent call last):
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1174, in fetchone
>> > row = self._fetchone_impl()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1296, in _fetchone_impl
>> > self.__buffer_rows()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1283, in __buffer_rows
>> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
>> > psycopg2.InternalError: opening multiple cursors from within the same
>> > client
>> > connection is not allowed.
>> > The above exception was the direct cause of the following exception:
>> > Traceback (most recent call last):
>> >   File "", line 3, in 
>> >   File "", line 3, in 
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 870, in __iter__
>> > row = self.fetchone()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1183, in fetchone
>> > self.cursor, self.context)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>> > line 1402, in _handle_dbapi_exception
>> > exc_info
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>> > line 203, in raise_from_cause
>> > reraise(type(exception), exception, tb=exc_tb, cause=cause)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>> > line 186, in reraise
>> > raise value.with_traceback(tb)
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1174, in fetchone
>> > row = self._fetchone_impl()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1296, in _fetchone_impl
>> > self.__buffer_rows()
>> >   File
>> >
>> > "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>> > line 1283, in __buffer_rows
>> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread dan
Yes, it is a select query.  I tried to re-create this just using psycopg2 
and named cursors and didn't get an error so I thought it had to be 
somewhere in sqlalchemy

import psycopg2
from datetime import datetime
conn_str = 'host=redshift_host port=redshift_port username=username password=password'
query = 'SELECT * FROM test WHERE timestamp > %s'
params = (datetime(2020, 12, 31, 0, 0),)
with psycopg2.connect(conn_str) as conn:
    with conn.cursor(name='test_cursor') as cur:
        cur.itersize = 1000  # default is 2000, but the server I connected to only allowed 1000
        cur.execute(query, params)
        results = [r for r in cur]


I tried connecting to the same server and running the same query.  No error 
is returned and I end up with results as an empty list.


On Wednesday, September 6, 2017 at 12:27:13 PM UTC-6, Mike Bayer wrote:
>
> On Wed, Sep 6, 2017 at 12:21 PM,   
> wrote: 
> > I've run into an issue with Redshift and I thought I would post it here
> > before logging a bug.  If I run a query against a Redshift database table
> > that happens to return 0 rows I get an error when I try to iterate over the
> > result set.  I run a pretty simple script that is looking for rows where a
> > timestamp field contains a date time that is greater than the last time the
> > query was run.  Most of the time there are several thousand rows, but at
> > times there are no rows that have been updated and the query returns a
> > result set with no records.
> >
> > It is simple enough to reproduce simply by generating a query that always
> > returns 0 records.
>
> is it a SELECT query? 
>
> I don't see anything on the SQLAlchemy side that's doing the wrong 
> thing here, the server-side cursors option just means we say 
> cursor.fetchmany() one or more times to pre-buffer rows from the 
> cursor.  We do not "open" a second server side cursor.  psycopg2 is
> reporting the error at the moment the code is calling fetchmany(), 
> which does not advertise as opening a new cursor. 
>
>
> > 
> > from sqlalchemy import create_engine 
> > engine = create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
> > with engine.connect().execution_options(stream_results=True) as conn:
> >     results = conn.execute(query)
> >     data = [r for r in results]
> > 
> > Traceback (most recent call last): 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1174, in fetchone 
> > row = self._fetchone_impl() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1296, in _fetchone_impl 
> > self.__buffer_rows() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1283, in __buffer_rows 
> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> > psycopg2.InternalError: opening multiple cursors from within the same 
> client 
> > connection is not allowed. 
> > The above exception was the direct cause of the following exception: 
> > Traceback (most recent call last): 
> >   File "", line 3, in  
> >   File "", line 3, in  
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 870, in __iter__ 
> > row = self.fetchone() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1183, in fetchone 
> > self.cursor, self.context) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
>  
>
> > line 1402, in _handle_dbapi_exception 
> > exc_info 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>  
>
> > line 203, in raise_from_cause 
> > reraise(type(exception), exception, tb=exc_tb, cause=cause) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
>  
>
> > line 186, in reraise 
> > raise value.with_traceback(tb) 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1174, in fetchone 
> > row = self._fetchone_impl() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1296, in _fetchone_impl 
> > self.__buffer_rows() 
> >   File 
> > 
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
>  
>
> > line 1283, in __buffer_rows 
> > self.__rowbuffer = collections.deque(self.cursor.fetchmany(size)) 
> > sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple 
> > cursors from within the 

Re: [sqlalchemy] Error streaming results of query with 0 rowcount from Redshift

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 12:21 PM,   wrote:
> I've run into an issue with Redshift and I thought I would post it here
> before logging a bug.  If I run a query against a Redshift database table
> that happens to return 0 rows I get an error when I try to iterate over the
> result set.  I run a pretty simple script that is looking for rows where a
> timestamp field contains a date time that is greater than the last time the
> query was run.  Most of the time there are several thousand rows, but at
> times there are no rows that have been updated and the query returns a
> result set with no records.
>
> It is simple enough to reproduce simply by generating a query that always
> returns 0 records.

is it a SELECT query?

I don't see anything on the SQLAlchemy side that's doing the wrong
thing here; the server-side cursors option just means we call
cursor.fetchmany() one or more times to pre-buffer rows from the
cursor.  We do not "open" a second server-side cursor.  psycopg2 is
reporting the error at the moment the code is calling fetchmany(),
which does not advertise as opening a new cursor.
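
Schematically, each buffering step just does something like the
following against the one cursor that was opened for the statement (a
simplified sketch, not the actual SQLAlchemy source; names and buffer
size are illustrative):

import collections

class BufferedResultSketch(object):
    """Simplified sketch of pre-buffering rows from a single DBAPI cursor."""

    def __init__(self, cursor, size=50):
        self.cursor = cursor   # the one server-side cursor for this statement
        self.size = size       # rows pulled per fetchmany() call
        self._rowbuffer = collections.deque()

    def fetchone(self):
        if not self._rowbuffer:
            # refill from the same cursor; nothing here opens a new cursor
            self._rowbuffer.extend(self.cursor.fetchmany(self.size))
        return self._rowbuffer.popleft() if self._rowbuffer else None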


>
> from sqlalchemy import create_engine
> engine = create_engine('postgres+psycopg2://username:password@redshifthost:port/db_name')
> with engine.connect().execution_options(stream_results=True) as conn:
>     results = conn.execute(query)
>     data = [r for r in results]
>
> Traceback (most recent call last):
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
>     row = self._fetchone_impl()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> psycopg2.InternalError: opening multiple cursors from within the same client
> connection is not allowed.
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
>   File "", line 3, in 
>   File "", line 3, in 
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 870, in __iter__
>     row = self.fetchone()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1183, in fetchone
>     self.cursor, self.context)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
>     exc_info
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
>     reraise(type(exception), exception, tb=exc_tb, cause=cause)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
>     raise value.with_traceback(tb)
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1174, in fetchone
>     row = self._fetchone_impl()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py", line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple
> cursors from within the same client connection is not allowed.
>
>
> I get the same problem using fetchone() or fetchmany() on the result set.
> If I don't use .execution_options(stream_results=True) to create the
> connection then I don't get the error.  It seems that option somehow ends up
> creating multiple server-side cursors when there are no records in the result
> set.  If any records are returned in the result set then I don't get the
> error.  Right now, as a workaround I simply check the results.rowcount
> attribute before iterating through the results, but I don't really like that
> workaround.  Has anyone else encountered this problem with streamed results
> from Redshift?
>