[sqlalchemy] flask SQLAlchemy hybrid_property Boolean value of this clause is not defined

2018-08-29 Thread bibiana . rivadeneira
 Hi!

I'm using Python 3.6.5, Flask 1.0.2, and SQLAlchemy 1.0.5, and I want to define 
an attribute as the maximum of two others, based on the flask-admin hybrid 
property example:


from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.ext.hybrid import hybrid_property

import flask_admin as admin
from flask_admin.contrib import sqla
from sqlalchemy.sql.expression import func

# Create application
app = Flask(__name__)

# Create dummy secret key so we can use sessions
app.config['SECRET_KEY'] = '123456790'

# Create in-memory database
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///sample_db_2.sqlite'
app.config['SQLALCHEMY_ECHO'] = True
db = SQLAlchemy(app)


# Flask views
@app.route('/')
def index():
    return 'Click me to get to Admin!'


class Screen(db.Model):
    __tablename__ = 'screen'
    id = db.Column(db.Integer, primary_key=True)
    width = db.Column(db.Integer, nullable=False)
    height = db.Column(db.Integer, nullable=False)

    @hybrid_property
    def max(self):
        return max(self.height, self.width)


class ScreenAdmin(sqla.ModelView):
    """ Flask-admin can not automatically find a hybrid_property yet. You will
    need to manually define the column in list_view/filters/sorting/etc."""
    column_list = ['id', 'width', 'height', 'max']
    column_sortable_list = ['id', 'width', 'height', 'max']

    # Flask-admin can automatically detect the relevant filters for hybrid
    # properties.
    column_filters = ('max', )


# Create admin
admin = admin.Admin(app, name='Example: SQLAlchemy2', template_mode='bootstrap3')
admin.add_view(ScreenAdmin(Screen, db.session))

if __name__ == '__main__':
    # Create DB
    db.create_all()

    # Start app
    app.run(debug=True)


But it fails, raising this error:

raise TypeError("Boolean value of this clause is not defined")


Traceback:

/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_sqlalchemy/__init__.py:794: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future.  Set it to True or False to suppress this warning.
  'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
Traceback (most recent call last):
  File "app.py", line 49, in 
    admin.add_view(ScreenAdmin(Screen, db.session))
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/view.py", line 329, in __init__
    menu_icon_value=menu_icon_value)
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py", line 804, in __init__
    self._refresh_cache()
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py", line 881, in _refresh_cache
    self._list_columns = self.get_list_columns()
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py", line 1022, in get_list_columns
    excluded_columns=self.column_exclude_list,
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/view.py", line 531, in get_column_names
    column, path = tools.get_field_with_path(self.model, c)
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/tools.py", line 150, in get_field_with_path
    value = getattr(current_model, attribute)
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py", line 867, in __get__
    return self._expr_comparator(owner)
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py", line 1066, in expr_comparator
    owner, self.__name__, self, comparator(owner),
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py", line 1055, in _expr
    return ExprComparator(cls, expr(cls), self)
  File "app.py", line 34, in number_of_pixels
    return max(self.width,self.height)
  File "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 2975, in __bool__
    raise TypeError("Boolean value of this clause is not defined")


I've tried with the SQLAlchemy max function:

return func.max(self.height, self.width)

But that literally returns a function expression, not a value.


I also tried to apply the concept from this answer, without success.
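
Reading the hybrid documentation, it seems the intended pattern is to keep plain 
Python max() for instance access and add a separate class-level SQL expression. 
Here is a minimal sketch of that idea (column names as in my model above; the 
case() construct is just one portable way to express the SQL-side maximum, so 
this is a guess rather than a known fix):

from sqlalchemy import case
from sqlalchemy.ext.hybrid import hybrid_property

class Screen(db.Model):
    __tablename__ = 'screen'
    id = db.Column(db.Integer, primary_key=True)
    width = db.Column(db.Integer, nullable=False)
    height = db.Column(db.Integer, nullable=False)

    @hybrid_property
    def max(self):
        # instance level: ordinary Python values, ordinary max()
        return max(self.height, self.width)

    @max.expression
    def max(cls):
        # class level: return a SQL expression instead of calling max(),
        # which is what raises "Boolean value of this clause is not defined"
        return case([(cls.height > cls.width, cls.height)], else_=cls.width)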


Any idea?


Thanks!

Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-08-29 Thread Mike Bayer
On Wed, Aug 29, 2018 at 5:12 AM, sector119  wrote:
> Hello
>
> I have N schemas with the same set of tables, 1 system schema with users,
> groups, ... tables and 6 schemas with streets, organizations, transactions,
> ... tables.
> On those schemas tables I don't set __table_args__ = ({'schema': SCHEMA},)
> I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA)
> before sql queries.
>
> When I make some changes in my model structures I want to refactor table in
> all schemas using Alembic, how can I do that?
> Maybe I can make some loop over my schemas somewhere?

Setting the search path is going to confuse SQLAlchemy's table
reflection process, such that it assumes a Table of a certain schema
does not require a "schema" argument, because it is already in the
search path.

Keep the search path set to "public"; see
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path.
There is an option to change this behavior mentioned in that
section, called postgresql_ignore_search_path, however it isn't
guaranteed to suit all use cases.  If that makes your case work, then
that would be all you need; if not, then read on...

For the officially supported way to do this, you want to have the
explicit schema name inside the SQL - but this can be automated for a
multi-tenancy application.  Use the schema translation map feature:
http://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execution_options#schema-translating.
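
Roughly, the translate map looks like this (the connection/schema names below are
made up for illustration); the Table objects stay schema-less and each connection
rewrites the blank (None) schema to the tenant's schema:

from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URL

def execute_for_schema(schema_name, stmt):
    # rewrite the "no schema" (None) key to the tenant schema for this
    # connection only; the Table definitions themselves don't change
    with engine.connect() as conn:
        tenant_conn = conn.execution_options(
            schema_translate_map={None: schema_name})
        return tenant_conn.execute(stmt).fetchall()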


>
>
> Thanks
>



Re: [sqlalchemy] An error of bind processors of compiler when insert data into a table of mysql with

2018-08-29 Thread Mike Bayer
On Wed, Aug 29, 2018 at 12:04 AM, yacheng zhu  wrote:
> hi guys, recently I meet a strange error when i use sqlalchemy to insert
> data into a table of mysql.


The error looks like you are passing a string value to a binary
datatype; this needs to be a bytes object, e.g. b'value'.
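
Roughly, the distinction is this (the table is made up, and SQLite stands in for
MySQL just to keep the sketch self-contained):

from sqlalchemy import create_engine, Column, Integer, LargeBinary
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Blob(Base):
    __tablename__ = 'blob'
    id = Column(Integer, primary_key=True)
    payload = Column(LargeBinary)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Blob(payload=b'value'))    # bytes: the bind processor accepts it
# session.add(Blob(payload='value'))   # str: fails inside the DBAPI's Binary() call
session.commit()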

That's really all it is; there's no evidence of a bug here.  If you
want to illustrate a bug, you need to provide a minimal and complete
reproducing test case.

>
> Traceback (most recent call last):
>
>   File "C:\Program
> Files\Python35\lib\site-packages\sqlalchemy\engine\base.py", line 1127, in
> _execute_context
>
> context = constructor(dialect, self, conn, *args)
>
>   File "C:\Program
> Files\Python35\lib\site-packages\sqlalchemy\engine\default.py", line 694, in
> _init_compiled
>
> for key in compiled_params
>
>   File "C:\Program
> Files\Python35\lib\site-packages\sqlalchemy\engine\default.py", line 694, in
> 
>
> for key in compiled_params
>
>   File "C:\Program
> Files\Python35\lib\site-packages\sqlalchemy\sql\sqltypes.py", line 877, in
> process
>
> return DBAPIBinary(value)
>
>   File "C:\Program Files\Python35\lib\site-packages\pymysql\__init__.py",
> line 85, in Binary
>
> return bytes(x)
>
> TypeError: string argument without an encoding
>
>
>
> It seems that the the value of the variable processors in module
> sqlalchemy\engine\default.py line 690 has some problem
>
>
> param = dict(
>
> (
>
> key,
>
> processors[key](compiled_params[key])
>
> if key in processors
>
> else compiled_params[key]
>
> )
>
> for key in compiled_params
>
> )
>
>
>
> and the variable processsors was assigned at line 652 to equal to
> compiled._bind_processors
>
>
>
>
> when I try to find out the cause of the error, I find an interesting
> appearance that if I watch the variable in debug mode or add the expression
> (in red color)
>
>
> if statement is not None:
>
> self.statement = statement
>
> self.can_execute = statement.supports_execution
>
> if self.can_execute:
>
> self.execution_options = statement._execution_options
>
> self._bind_processors = self._bind_processors
>
> self.string = self.process(self.statement, **compile_kwargs)
>
>
> into the module sqlchemy/sql/compiler.py at line 219, the error will not
> occur.
>
>
> In fact, if I add any expression about self._bind_processors, for example
>
>
> print(self._bind_processors)
>
>
> before calling the self.process method, the error will always disappear.
>
>
> I guess there may be some bug here.
>
>



Re: [sqlalchemy] Re: AssertionError seen for queries taking more than 30 seconds to execute

2018-08-29 Thread Mike Bayer
On Wed, Aug 29, 2018 at 2:35 AM, Mohit Agarwal  wrote:

> Hi Mike,
> Thanks for replying.
>
> The code snipped as mentioned before is -
> try:
> obj = Obj()
>  session.add(obj)
>  session.commit() -> this will take a lot of time in some cases (since we
> have a trigger on insert of this entity which is suboptimal for certain use
> cases)
> except Exception as e:
>  session.rollback()
> logger.exception(e)
> raise e
>
>
>
> Unfortunately before we log the exception trace, we do session.rollback()
> which leads to a simple AssertionError on the line "*assert
> self._is_transaction_boundary" *in sqlalchemy library
>
>
> From my server logs though, i can provide high level logs  -
> "time": 1535505375696, "line": "[2018-08-29 01:16:15 +] [1]
> [CRITICAL] WORKER TIMEOUT (pid:12)", "host": "logentries-d0rqp" }
> "time": 1535505375698, "line": "/usr/local/lib/python2.7/
> dist-packages/sqlalchemy/orm/session.py:434: SAWarning: Session's state
> has been changed on a non-active transaction - this state will be
> discarded.", "host": "logentries-d0rqp" }
> "time": 1535505375699, "line": " \"Session's state has been changed on
> \"", "host": "logentries-d0rqp" }
>
>
> "time": 1535505375700, "line": do_create\\n db.session.rollback()\\n File
> \"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/scoping.py\",
> line 150, in do\\n return getattr(self.registry(), name)(*args,
> **kwargs)\\n File 
> \"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\",
> line 754, in rollback\\n self.transaction.rollback()\\n File
> \"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\",
> line 437, in rollback\\n 
> boundary._restore_snapshot(dirty_only=boundary.nested)\\n
> File \"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\",
> line 273, in _restore_snapshot\\n assert 
> self._is_transaction_boundary\\nAssertionError\\n'",
> "host": "logentries-d0rqp" }
>
> So this could have happened -
> 1. We have gunicorn worker timeout set at 30 seconds. So worker was killed
> by gunicorn after waiting for 30 seconds for api call to respond.
> 2. this may trigger some session teardown on sqlalchemy. We use
> Flask-SQLAlchemy==2.0 (this is just a theory, i am trying to find out
> documents around this)
> 3. some exception happened in the ongoing query because of that. (not able
> to find out the exception trace, since it is logged after
> session.rollback())
> 4. a rollback in our except block then lead to this error.
>
> Hope this helps.
>
> Meanwhile, I am working on adding logs and also optimizing our trigger.
>

Are you using gevent monkeypatching in order to be compatible with
gunicorn?  If a gevent worker is killed, SQLAlchemy definitely does not
clean up correctly at all; you at least need to be on 1.1 because you need this:
http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3803
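
Roughly, the patching looks like this (the entrypoint module name is
hypothetical); it has to run before anything else imports socket/ssl or the
database driver:

# wsgi.py -- hypothetical gunicorn entrypoint, e.g. "gunicorn -k gevent wsgi:app"
from gevent import monkey
monkey.patch_all()      # must run before SQLAlchemy / the DB driver is imported

from myapp import app   # hypothetical Flask application module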




>
> Thanks
> Mohit
>
> On Tue, Aug 28, 2018 at 6:44 PM Mike Bayer 
> wrote:
>
>>
>>
>> On Tue, Aug 28, 2018 at 7:56 AM, Mohit Agarwal 
>> wrote:
>>
>>> I will also like to know is there anytime out setting to sql alchemy
>>> query because of which it is raising an exception after 30 seconds and
>>> going into the except block. We have not explicitly passed
>>> statement_timeout in our implementation.
>>>
>>
>>
>> I would need to see the complete error message you are getting so I can
>> google it.   SQLAlchemy itself has no concept of timeouts,  this is
>> something that happens either at the driver level or in the server side
>> configuration of your database.
>>
>>
>>
>>
>>
>>
>>>
>>> On Tuesday, August 28, 2018 at 5:20:39 PM UTC+5:30, Mohit Agarwal wrote:

 Hi,
 We are seeing exceptions:AssertionError being raised when of our APIs
 has a long running query. In code we are rolling back transaction if any
 error is received while committing. Basically we have this general wrapper

 try:
  session.commit()
 except Exception as e:
  session.rollback()

>>>   raise e
>>>



 Our sql alchemy version - 1.0.6
 Our database - Azure SQL (sql server)


 Stack trace -
 File "/code/api/named_location/resources.py", line 258, in
 create_named_locations_dataclass
 File "/code/api/named_location/operations.py", line 94, in do_create
 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/scoping.py",
 line 150, in do
 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
 line 754, in rollback
 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
 line 437, in rollback
 File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
 line 273, in _restore_snapshot

 From the code it looks like it fails here -
 def _restore_snapshot(self, dirty_only=False):
 *assert self._is_transaction_boundary*
 What does it mean, why rollback is failing ?



 Thanks
 Mohit


Re: [sqlalchemy] sqlalchemy session in same transaction as existing psycopg2 connection

2018-08-29 Thread 'Brian DeRocher' via sqlalchemy
Beautiful.  Skipping the psycopg2 initialization prevents that rollback and 
allows SQLAlchemy to use the same transaction.

FWIW, I don't think pool_reset_on_return=None is needed, at least for my 
purposes.

Thanks for the help and thanks for the advice about raw_connection().  I'll 
get that into place, at least for the testing suite.

Brian



[sqlalchemy] Re: Alembic and postgresql multiple schema question

2018-08-29 Thread sector119


I've found an example at 
https://stackoverflow.com/questions/21109218/alembic-support-for-multiple-postgres-schemas

But when I run alembic revision --autogenerate -m "Initial upgrade", the generated 
alembic/versions/24648f118be9_initial_upgrade.py has no schema='myschema' 
keywords on the table, index, or column items ((


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for schema_name in schema_names.split():
            conn = connection.execution_options(
                schema_translate_map={None: schema_name})

            print("Migrating schema %s" % schema_name)

            context.configure(
                connection=conn,
                target_metadata=target_metadata
            )

            with context.begin_transaction():
                context.run_migrations()



Wednesday, August 29, 2018, 12:12:19 UTC+3, sector119 wrote:
>
> Hello
>
> I have N schemas with the same set of tables, 1 system schema with users, 
> groups, ... tables and 6 schemas with streets, organizations, transactions, 
> ... tables. 
> On those schemas tables I don't set __table_args__ = ({'schema': SCHEMA},)
> I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA) 
> before sql queries.
>
> When I make some changes in my model structures I want to refactor table 
> in all schemas using Alembic, how can I do that?
> Maybe I can make some loop over my schemas somewhere? 
>
>
> Thanks
>



[sqlalchemy] Alembic and postgresql multiple schema question

2018-08-29 Thread sector119
Hello

I have N schemas with the same set of tables, 1 system schema with users, 
groups, ... tables and 6 schemas with streets, organizations, transactions, 
... tables. 
On those schemas tables I don't set __table_args__ = ({'schema': SCHEMA},)
I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA) 
before sql queries.

When I make some changes in my model structures I want to refactor table in 
all schemas using Alembic, how can I do that?
Maybe I can make some loop over my schemas somewhere? 


Thanks



Re: [sqlalchemy] Re: AssertionError seen for queries taking more than 30 seconds to execute

2018-08-29 Thread Mohit Agarwal
Hi Mike,
Thanks for replying.

The code snippet, as mentioned before, is:

try:
    obj = Obj()
    session.add(obj)
    # this commit will take a lot of time in some cases (we have a trigger on
    # insert of this entity which is suboptimal for certain use cases)
    session.commit()
except Exception as e:
    session.rollback()
    logger.exception(e)
    raise e



Unfortunately, before we log the exception trace, we call session.rollback(),
which leads to a bare AssertionError on the line
"assert self._is_transaction_boundary" in the SQLAlchemy library.


From my server logs, though, I can provide some high-level logs:
"time": 1535505375696, "line": "[2018-08-29 01:16:15 +] [1] [CRITICAL]
WORKER TIMEOUT (pid:12)", "host": "logentries-d0rqp" }
"time": 1535505375698, "line":
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py:434:
SAWarning: Session's state has been changed on a non-active transaction -
this state will be discarded.", "host": "logentries-d0rqp" }
"time": 1535505375699, "line": " \"Session's state has been changed on \"",
"host": "logentries-d0rqp" }


"time": 1535505375700, "line": do_create\\n db.session.rollback()\\n File
\"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/scoping.py\", line
150, in do\\n return getattr(self.registry(), name)(*args, **kwargs)\\n
File \"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\",
line 754, in rollback\\n self.transaction.rollback()\\n File
\"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\", line
437, in rollback\\n
boundary._restore_snapshot(dirty_only=boundary.nested)\\n File
\"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py\", line
273, in _restore_snapshot\\n assert
self._is_transaction_boundary\\nAssertionError\\n'", "host":
"logentries-d0rqp" }

So this is what could have happened:
1. We have the gunicorn worker timeout set at 30 seconds, so the worker was
killed by gunicorn after waiting 30 seconds for the API call to respond.
2. This may trigger some session teardown in SQLAlchemy. We use
Flask-SQLAlchemy==2.0 (this is just a theory; I am trying to find
documentation around this).
3. Some exception happened in the ongoing query because of that (I'm not able
to find the exception trace, since it is logged after session.rollback()).
4. A rollback in our except block then led to this error.

Hope this helps.

Meanwhile, I am working on adding logs and also optimizing our trigger.

Thanks
Mohit

On Tue, Aug 28, 2018 at 6:44 PM Mike Bayer  wrote:

>
>
> On Tue, Aug 28, 2018 at 7:56 AM, Mohit Agarwal 
> wrote:
>
>> I will also like to know is there anytime out setting to sql alchemy
>> query because of which it is raising an exception after 30 seconds and
>> going into the except block. We have not explicitly passed
>> statement_timeout in our implementation.
>>
>
>
> I would need to see the complete error message you are getting so I can
> google it.   SQLAlchemy itself has no concept of timeouts,  this is
> something that happens either at the driver level or in the server side
> configuration of your database.
>
>
>
>
>
>
>>
>> On Tuesday, August 28, 2018 at 5:20:39 PM UTC+5:30, Mohit Agarwal wrote:
>>>
>>> Hi,
>>> We are seeing exceptions:AssertionError being raised when of our APIs
>>> has a long running query. In code we are rolling back transaction if any
>>> error is received while committing. Basically we have this general wrapper
>>>
>>> try:
>>>  session.commit()
>>> except Exception as e:
>>>  session.rollback()
>>>
>>   raise e
>>
>>>
>>>
>>>
>>> Our sql alchemy version - 1.0.6
>>> Our database - Azure SQL (sql server)
>>>
>>>
>>> Stack trace -
>>> File "/code/api/named_location/resources.py", line 258, in
>>> create_named_locations_dataclass
>>> File "/code/api/named_location/operations.py", line 94, in do_create
>>> File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/scoping.py",
>>> line 150, in do
>>> File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
>>> line 754, in rollback
>>> File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
>>> line 437, in rollback
>>> File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py",
>>> line 273, in _restore_snapshot
>>>
>>> From the code it looks like it fails here -
>>> def _restore_snapshot(self, dirty_only=False):
>>> *assert self._is_transaction_boundary*
>>> What does it mean, why rollback is failing ?
>>>
>>>
>>>
>>> Thanks
>>> Mohit
>>>