On Wed, Sep 6, 2017 at 12:21 PM,  <d...@numetric.com> wrote:
> I've run into an issue with Redshift and I thought I would post it here
> before logging a bug.  If I run a query against a Redshift database table
> that happens to return 0 rows I get an error when I try to iterate over the
> result set.  I run a pretty simple script that is looking for rows where a
> timestamp field contains a date time that is greater than the last time the
> query was run.  Most of the time there are several thousand rows, but at
> times there are no rows that have been updated and the query returns a
> result set with no records.
>
> It is simple enough to reproduce by generating a query that always
> returns 0 records.

is it a SELECT query?

I don't see anything on the SQLAlchemy side that's doing the wrong
thing here; the server-side cursors option just means we call
cursor.fetchmany(<num>) one or more times to pre-buffer rows from the
cursor.  We do not "open" a second server-side cursor.  psycopg2 is
reporting the error at the moment the code calls fetchmany(), which
is not documented as opening a new cursor.
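To illustrate the buffering path described above, here is a rough sketch of the fetchmany() pre-buffering loop.  This is a simplified illustration, not the actual result.py code; `FakeCursor` and `buffered_rows` are made-up names standing in for a DBAPI cursor and SQLAlchemy's internal buffering:

```python
import collections


class FakeCursor:
    """Stand-in for a DBAPI cursor; hands out rows in fetchmany batches."""

    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, size):
        batch, self._rows = self._rows[:size], self._rows[size:]
        return batch


def buffered_rows(cursor, size=100):
    """Pre-buffer rows via repeated fetchmany() calls -- roughly the
    pattern stream_results implies.  Note there is only ever the one
    cursor in this path; nothing here opens a second one."""
    while True:
        buffer = collections.deque(cursor.fetchmany(size))
        if not buffer:
            return
        while buffer:
            yield buffer.popleft()


# An empty result set simply yields nothing; a larger one is drained
# batch by batch.
assert list(buffered_rows(FakeCursor([]))) == []
assert list(buffered_rows(FakeCursor(range(250)))) == list(range(250))
```

On a plain Postgres backend the empty-result case works exactly like this, which is why the error looks like a Redshift-specific behavior on fetchmany() against an exhausted named cursor.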


>
> from sqlalchemy import create_engine
> engine = create_engine(
>     'postgresql+psycopg2://username:password@redshifthost:port/db_name')
> with engine.connect().execution_options(stream_results=True) as conn:
>     results = conn.execute(query)
>     data = [r for r in results]
>
> Traceback (most recent call last):
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1174, in fetchone
>     row = self._fetchone_impl()
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> psycopg2.InternalError: opening multiple cursors from within the same client
> connection is not allowed.
> The above exception was the direct cause of the following exception:
> Traceback (most recent call last):
>   File "<input>", line 3, in <module>
>   File "<input>", line 3, in <listcomp>
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 870, in __iter__
>     row = self.fetchone()
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1183, in fetchone
>     self.cursor, self.context)
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/base.py",
> line 1402, in _handle_dbapi_exception
>     exc_info
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
> line 203, in raise_from_cause
>     reraise(type(exception), exception, tb=exc_tb, cause=cause)
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/util/compat.py",
> line 186, in reraise
>     raise value.with_traceback(tb)
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1174, in fetchone
>     row = self._fetchone_impl()
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1296, in _fetchone_impl
>     self.__buffer_rows()
>   File
> "/Users/dstovall/.virtualenvs/connectors/lib/python3.6/site-packages/sqlalchemy/engine/result.py",
> line 1283, in __buffer_rows
>     self.__rowbuffer = collections.deque(self.cursor.fetchmany(size))
> sqlalchemy.exc.InternalError: (psycopg2.InternalError) opening multiple
> cursors from within the same client connection is not allowed.
>
>
> I get the same problem using fetchone() or fetchmany() on the result set.
> If I don't use .execution_options(stream_results=True) to create the
> connection then I don't get the error.  It seems that option somehow ends up
> creating multiple server-side cursors when there are no records in the result
> set.  If any records are returned in the result set then I don't get the
> error.  Right now, as a workaround I simply check the results.rowcount
> attribute before iterating through results, but I don't really like that
> workaround.  Has anyone else encountered this problem with streamed results
> from Redshift?
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
