Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread jiajunsu . zju
OK. I'll reproduce it and upload all logs of the process.

On Monday, May 14, 2018 at 10:48:49 AM UTC+8, Mike Bayer wrote:
>
> as well as the log that shows this: 
>
> +        if self in pool._pool.queue:
> +            pool.logger.debug("jjs rec already in pool checkin, skip %r", self)
> +            return
>
>
> actually happening. 
>
>
>
>
> On Sun, May 13, 2018 at 10:42 PM, Mike Bayer wrote:
> > On Sun, May 13, 2018 at 10:35 PM,   wrote:
> >> "jjs" is short for my name and "cjr" is my colleague's(JinRong Cai). We work
> >> together and add logs in the same environment, add prefix of name to get a
> >> keyword for log filter.
> >>
> >> The doubled ConnectionRecords(with None connection) are got by two different
> >> coroutines, and the variable record.connection is rewrite by the second
> >> coroutines.
> >> After the record being put back to QueuePool again, there are two same
> >> record with same connection in the pool, which will cause the next two
> >> coroutines get conflict and deadlock in psycopg2 as above.
> > 
> > OK, I need the detailed step by step that shows how the two coroutines 
> > are interacting such that they both are returning the same connection. 
> >   The nova log above seems to show just one coroutine I assume since 
> > there is just one nova request ID, if I understand correctly. 
> > 
> >> 
> >> The logs of psycopg2 has been pasted at 
> >> https://github.com/psycopg/psycopg2/issues/703 
> > 
> > The above issue says the bug has been found in SQLAlchemy but there is 
> > no bug report yet and I've not yet been shown how the behavior here is 
> > possible.  I posted a straightforward example above, can you 
> > please show me where the two coroutines get mixed up ? 
> > 
> > 
> > 
> > 
> >> 
> >> 
> >> 
> >> On Friday, May 11, 2018 at 3:05:46 AM UTC+8, Mike Bayer wrote:
> >>> 
> >>> I tried to follow your log lines, which is difficult because I don't 
> >>> know where those log statements actually are and I don't know what 
> >>> acronyms like "jjs", "cjr", mean.   But it does look like it's all in
> >>> one thread (so not async GC) and all in one Nova request (so perhaps 
> >>> just one greenlet involved, unless other greenlets are affecting this 
> >>> without them being displayed in this log snippet).   I do see the 
> >>> connection record doubled at the end.   So here is the illustration of 
> >>> those steps in isolation, the connection record is not doubled.  Can 
> >>> you alter this program to show this condition occurring? 
> >>> 
> >>> from sqlalchemy import create_engine 
> >>> 
> >>> e = create_engine("postgresql://scott:tiger@localhost/test", 
> >>> echo_pool='debug') 
> >>> e.connect().close() 
> >>> 
> >>> assert len(e.pool._pool.queue) == 1 
> >>> 
> >>> c1 = e.connect() 
> >>> 
> >>> assert len(e.pool._pool.queue) == 0 
> >>> 
> >>> c1.engine.pool._invalidate(c1._Connection__connection, None) 
> >>> c1.invalidate(e) 
> >>> 
> >>> # connection record is back 
> >>> assert len(e.pool._pool.queue) == 1 
> >>> 
> >>> # do a reconnect 
> >>> c1.connection 
> >>> 
> >>> # uses the record again 
> >>> assert len(e.pool._pool.queue) == 0 
> >>> 
> >>> # close the Connection 
> >>> c1.close() 
> >>> 
> >>> # record is back 
> >>> assert len(e.pool._pool.queue) == 1 
> >>> 
> >>> # etc. 
> >>> c1 = e.connect() 
> >>> assert len(e.pool._pool.queue) == 0 
> >>> 
> >>> c1.close() 
> >>> assert len(e.pool._pool.queue) == 1 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> On Thu, May 10, 2018 at 12:25 PM, Mike Bayer wrote:
> >>> > Here's a print statement you might want to try: 
> >>> > 
> >>> > diff --git a/lib/sqlalchemy/pool.py b/lib/sqlalchemy/pool.py 
> >>> > index 89a4cea7c..452db4883 100644 
> >>> > --- a/lib/sqlalchemy/pool.py 
> >>> > +++ b/lib/sqlalchemy/pool.py 
> >>> > @@ -689,6 +689,19 @@ def _finalize_fairy(connection, connection_record,
> >>> >  been garbage collected. 
> >>> > 
> >>> >  """ 
> >>> > +    print(
> >>> > +        "FINALIZE FAIRY:  GREENLET ID: %s, DB CONN: %s, FAIRY: %s, REF: %s  "
> >>> > +        "CONNECTION_REC: %s CONNECTION_REC REF %s" % (
> >>> > +            "put greenlet id here",
> >>> > +            connection,
> >>> > +            fairy,
> >>> > +            ref,
> >>> > +            connection_record,
> >>> > +            connection_record.fairy_ref
> >>> > +            if connection_record is not None else "n/a"
> >>> > +        )
> >>> > +    )
> >>> > + 
> >>> >  _refs.discard(connection_record) 
> >>> > 
> >>> >  if ref is not None: 
> >>> > 
> >>> > 
> >>> > 
> >>> > 
> >>> > 
> >>> > On Thu, May 10, 2018 at 12:21 PM, Mike Bayer <mik...@zzzcomputing.com> wrote:
> >>> >> On Thu, May 10, 2018 at 7:23 AM,   wrote: 
> >>> >>> Hi Mike, 
> >>> >>> 
> >>> >>> I added more logs and finally found the reason and a method to avoid
> >>> >>> the deadlock between coroutines.
> >>> >>> 

Re: [sqlalchemy] is it possible to make load_only local to an object only and/or avoid the identity map and/or not available via `get`?

2018-05-13 Thread Jonathan Vanasco


On Sunday, May 13, 2018 at 10:27:16 PM UTC-4, Mike Bayer wrote:
>
>
> that's not true, you can use the same database connection on a second 
> Session.   Just say s2 = Session(bind=s1.connection()). 
>

Thanks, Mike.  I didn't know this was doable - it seems like the right 
approach.

The `populate_existing` approach isn't very desirable, as it could 
potentially affect a few hundred queries. I'd need to introduce a global 
filter that is conditionally triggered, and it's just likely to be messy.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: SQLACODEGEN + VSCODE + Intellisense not working.

2018-05-13 Thread Mike Bayer
On Sun, May 13, 2018 at 6:38 AM, Jeremy Flowers wrote:
> I added PEP0526 code hint and that fixed it.
>
>    jobrow = session.query(Jobmst).filter(Jobmst.jobmst_id==job['jobmst_id']).first()  # type: Jobmst
>
>
> Is it feasible for the SQLAlchemy tool to correctly add these at the library
> level?
> It would be preferable to my client code!

If you mean SQLAlchemy would have type hinting internally - while that
might be something we do at some point, I don't see how it helps with the
issue of knowing what Query.first() returns, because that type is not
static.  At the source code level, Query.first() returns a subclass
of "object".



>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread Mike Bayer
as well as the log that shows this:

+        if self in pool._pool.queue:
+            pool.logger.debug("jjs rec already in pool checkin, skip %r", self)
+            return


actually happening.




On Sun, May 13, 2018 at 10:42 PM, Mike Bayer  wrote:
> On Sun, May 13, 2018 at 10:35 PM,   wrote:
>> "jjs" is short for my name and "cjr" is my colleague's(JinRong Cai). We work
>> together and add logs in the same environment, add prefix of name to get a
>> keyword for log filter.
>>
>> The doubled ConnectionRecords(with None connection) are got by two different
>> coroutines, and the variable record.connection is rewrite by the second
>> coroutines.
>> After the record being put back to QueuePool again, there are two same
>> record with same connection in the pool, which will cause the next two
>> coroutines get conflict and deadlock in psycopg2 as above.
>
> OK, I need the detailed step by step that shows how the two coroutines
> are interacting such that they both are returning the same connection.
>   The nova log above seems to show just one coroutine I assume since
> there is just one nova request ID, if I understand correctly.
>
>>
>> The logs of psycopg2 has been pasted at
>> https://github.com/psycopg/psycopg2/issues/703
>
> The above issue says the bug has been found in SQLAlchemy but there is
> no bug report yet and I've not yet been shown how the behavior here is
> possible.  I posted a straightforward example above, can you
> please show me where the two coroutines get mixed up ?
>
>
>
>
>>
>>
>>
>> On Friday, May 11, 2018 at 3:05:46 AM UTC+8, Mike Bayer wrote:
>>>
>>> I tried to follow your log lines, which is difficult because I don't
>>> know where those log statements actually are and I don't know what
>>> acronyms like "jjs", "cjr", mean.   But it does look like it's all in
>>> one thread (so not async GC) and all in one Nova request (so perhaps
>>> just one greenlet involved, unless other greenlets are affecting this
>>> without them being displayed in this log snippet).   I do see the
>>> connection record doubled at the end.   So here is the illustration of
>>> those steps in isolation, the connection record is not doubled.  Can
>>> you alter this program to show this condition occurring?
>>>
>>> from sqlalchemy import create_engine
>>>
>>> e = create_engine("postgresql://scott:tiger@localhost/test",
>>> echo_pool='debug')
>>> e.connect().close()
>>>
>>> assert len(e.pool._pool.queue) == 1
>>>
>>> c1 = e.connect()
>>>
>>> assert len(e.pool._pool.queue) == 0
>>>
>>> c1.engine.pool._invalidate(c1._Connection__connection, None)
>>> c1.invalidate(e)
>>>
>>> # connection record is back
>>> assert len(e.pool._pool.queue) == 1
>>>
>>> # do a reconnect
>>> c1.connection
>>>
>>> # uses the record again
>>> assert len(e.pool._pool.queue) == 0
>>>
>>> # close the Connection
>>> c1.close()
>>>
>>> # record is back
>>> assert len(e.pool._pool.queue) == 1
>>>
>>> # etc.
>>> c1 = e.connect()
>>> assert len(e.pool._pool.queue) == 0
>>>
>>> c1.close()
>>> assert len(e.pool._pool.queue) == 1
>>>
>>>
>>>
>>>
>>> On Thu, May 10, 2018 at 12:25 PM, Mike Bayer wrote:
>>> > Here's a print statement you might want to try:
>>> >
>>> > diff --git a/lib/sqlalchemy/pool.py b/lib/sqlalchemy/pool.py
>>> > index 89a4cea7c..452db4883 100644
>>> > --- a/lib/sqlalchemy/pool.py
>>> > +++ b/lib/sqlalchemy/pool.py
>>> > @@ -689,6 +689,19 @@ def _finalize_fairy(connection, connection_record,
>>> >  been garbage collected.
>>> >
>>> >  """
>>> > +    print(
>>> > +        "FINALIZE FAIRY:  GREENLET ID: %s, DB CONN: %s, FAIRY: %s, REF: %s  "
>>> > +        "CONNECTION_REC: %s CONNECTION_REC REF %s" % (
>>> > +            "put greenlet id here",
>>> > +            connection,
>>> > +            fairy,
>>> > +            ref,
>>> > +            connection_record,
>>> > +            connection_record.fairy_ref
>>> > +            if connection_record is not None else "n/a"
>>> > +        )
>>> > +    )
>>> > +
>>> >  _refs.discard(connection_record)
>>> >
>>> >  if ref is not None:
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Thu, May 10, 2018 at 12:21 PM, Mike Bayer wrote:
>>> >> On Thu, May 10, 2018 at 7:23 AM,   wrote:
>>> >>> Hi Mike,
>>> >>>
>>> >>> I added more logs and finally found the reason and a method to avoid
>>> >>> the deadlock between coroutines.
>>> >>>
>>> >>> In summary, the deadlock happend because two different coroutines use
>>> >>> the
>>> >>> same ConnectionRecord with same connection.
>>> >>> It happends under QueuePool, when the DB connections are reset by the
>>> >>> DB
>>> >>> server(not related with the backend driver).
>>> >>>
>>> >>> 1.A coroutine get a ConnectionRecord into ConnectionFairy, and apply a
>>> >>> weakref of the ConnectionFairy, with callback function
>>> >>> `_finalize_fairy`.
>>> >>> 2.When the connection get disconnected, it receives an
>>> >>> 

Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread Mike Bayer
On Sun, May 13, 2018 at 10:35 PM,   wrote:
> "jjs" is short for my name and "cjr" is my colleague's(JinRong Cai). We work
> together and add logs in the same environment, add prefix of name to get a
> keyword for log filter.
>
> The doubled ConnectionRecords(with None connection) are got by two different
> coroutines, and the variable record.connection is rewrite by the second
> coroutines.
> After the record being put back to QueuePool again, there are two same
> record with same connection in the pool, which will cause the next two
> coroutines get conflict and deadlock in psycopg2 as above.

OK, I need the detailed step by step that shows how the two coroutines
are interacting such that they both are returning the same connection.
  The nova log above seems to show just one coroutine I assume since
there is just one nova request ID, if I understand correctly.

>
> The logs of psycopg2 has been pasted at
> https://github.com/psycopg/psycopg2/issues/703

The above issue says the bug has been found in SQLAlchemy but there is
no bug report yet and I've not yet been shown how the behavior here is
possible.  I posted a straightforward example above, can you
please show me where the two coroutines get mixed up ?




>
>
>
> On Friday, May 11, 2018 at 3:05:46 AM UTC+8, Mike Bayer wrote:
>>
>> I tried to follow your log lines, which is difficult because I don't
>> know where those log statements actually are and I don't know what
>> acronyms like "jjs", "cjr", mean.   But it does look like it's all in
>> one thread (so not async GC) and all in one Nova request (so perhaps
>> just one greenlet involved, unless other greenlets are affecting this
>> without them being displayed in this log snippet).   I do see the
>> connection record doubled at the end.   So here is the illustration of
>> those steps in isolation, the connection record is not doubled.  Can
>> you alter this program to show this condition occurring?
>>
>> from sqlalchemy import create_engine
>>
>> e = create_engine("postgresql://scott:tiger@localhost/test",
>> echo_pool='debug')
>> e.connect().close()
>>
>> assert len(e.pool._pool.queue) == 1
>>
>> c1 = e.connect()
>>
>> assert len(e.pool._pool.queue) == 0
>>
>> c1.engine.pool._invalidate(c1._Connection__connection, None)
>> c1.invalidate(e)
>>
>> # connection record is back
>> assert len(e.pool._pool.queue) == 1
>>
>> # do a reconnect
>> c1.connection
>>
>> # uses the record again
>> assert len(e.pool._pool.queue) == 0
>>
>> # close the Connection
>> c1.close()
>>
>> # record is back
>> assert len(e.pool._pool.queue) == 1
>>
>> # etc.
>> c1 = e.connect()
>> assert len(e.pool._pool.queue) == 0
>>
>> c1.close()
>> assert len(e.pool._pool.queue) == 1
>>
>>
>>
>>
>> > On Thu, May 10, 2018 at 12:25 PM, Mike Bayer wrote:
>> > Here's a print statement you might want to try:
>> >
>> > diff --git a/lib/sqlalchemy/pool.py b/lib/sqlalchemy/pool.py
>> > index 89a4cea7c..452db4883 100644
>> > --- a/lib/sqlalchemy/pool.py
>> > +++ b/lib/sqlalchemy/pool.py
>> > @@ -689,6 +689,19 @@ def _finalize_fairy(connection, connection_record,
>> >  been garbage collected.
>> >
>> >  """
>> > +    print(
>> > +        "FINALIZE FAIRY:  GREENLET ID: %s, DB CONN: %s, FAIRY: %s, REF: %s  "
>> > +        "CONNECTION_REC: %s CONNECTION_REC REF %s" % (
>> > +            "put greenlet id here",
>> > +            connection,
>> > +            fairy,
>> > +            ref,
>> > +            connection_record,
>> > +            connection_record.fairy_ref
>> > +            if connection_record is not None else "n/a"
>> > +        )
>> > +    )
>> > +
>> >  _refs.discard(connection_record)
>> >
>> >  if ref is not None:
>> >
>> >
>> >
>> >
>> >
>> >> On Thu, May 10, 2018 at 12:21 PM, Mike Bayer wrote:
>> >> On Thu, May 10, 2018 at 7:23 AM,   wrote:
>> >>> Hi Mike,
>> >>>
>> >>> I added more logs and finally found the reason and a method to avoid
>> >>> the deadlock between coroutines.
>> >>>
>> >>> In summary, the deadlock happend because two different coroutines use
>> >>> the
>> >>> same ConnectionRecord with same connection.
>> >>> It happends under QueuePool, when the DB connections are reset by the
>> >>> DB
>> >>> server(not related with the backend driver).
>> >>>
>> >>> 1.A coroutine get a ConnectionRecord into ConnectionFairy, and apply a
>> >>> weakref of the ConnectionFairy, with callback function
>> >>> `_finalize_fairy`.
>> >>> 2.When the connection get disconnected, it receives an
>> >>> exception(psycopg2.OperationalError), and goto
>> >>> Connection._handle_dbapi_exception.
>> >>> 3.In `_handle_dbapi_exception.finnaly`, it will call both
>> >>> `ConnectionFairy.invalidate` and `self.invalidate`.
>> >>> 4.In `ConnectionFairy.invalidate`, `_ConnectionRecord._checkin` is
>> >>> called,
>> >>> and put the ConnectionRecord back to the QueuePool.
>> >>> 5.In `self.invalidate`, `del self._root.__connection` is 

Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread jiajunsu . zju
"jjs" is short for my name and "cjr" is my colleague's(JinRong Cai). We 
work together and add logs in the same environment, add prefix of name to 
get a keyword for log filter.

The doubled ConnectionRecords(with None connection) are got by two 
different coroutines, and the variable record.connection is rewrite by the 
second coroutines.
After the record being put back to QueuePool again, there are two same 
record with same connection in the pool, which will cause the next two 
coroutines get conflict and deadlock in psycopg2 as above.

The logs of psycopg2 has been pasted 
at https://github.com/psycopg/psycopg2/issues/703
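
For illustration only, a small sketch (using the same engine URL as Mike's
example and poking at private pool internals) of why a doubled
ConnectionRecord is fatal: if the same record sits in the queue twice, two
checkouts receive the same DBAPI connection and then interleave commands on
it.  This simulates the end state by hand; it is not the race itself:

    from sqlalchemy import create_engine

    e = create_engine("postgresql://scott:tiger@localhost/test")

    c1 = e.connect()
    # grab the record before checkin; the fairy drops it on close()
    record = c1._Connection__connection._connection_record
    c1.close()

    # simulate the double check-in described above
    e.pool._pool.put(record)

    c_a = e.connect()
    c_b = e.connect()

    # both checked-out connections now wrap the same DBAPI connection
    assert c_a.connection.connection is c_b.connection.connection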



On Friday, May 11, 2018 at 3:05:46 AM UTC+8, Mike Bayer wrote:
>
> I tried to follow your log lines, which is difficult because I don't 
> know where those log statements actually are and I don't know what 
> acronyms like "jjs", "cjr", mean.   But it does look like it's all in
> one thread (so not async GC) and all in one Nova request (so perhaps 
> just one greenlet involved, unless other greenlets are affecting this 
> without them being displayed in this log snippet).   I do see the 
> connection record doubled at the end.   So here is the illustration of 
> those steps in isolation, the connection record is not doubled.  Can 
> you alter this program to show this condition occurring? 
>
> from sqlalchemy import create_engine 
>
> e = create_engine("postgresql://scott:tiger@localhost/test", 
> echo_pool='debug') 
> e.connect().close() 
>
> assert len(e.pool._pool.queue) == 1 
>
> c1 = e.connect() 
>
> assert len(e.pool._pool.queue) == 0 
>
> c1.engine.pool._invalidate(c1._Connection__connection, None) 
> c1.invalidate(e) 
>
> # connection record is back 
> assert len(e.pool._pool.queue) == 1 
>
> # do a reconnect 
> c1.connection 
>
> # uses the record again 
> assert len(e.pool._pool.queue) == 0 
>
> # close the Connection 
> c1.close() 
>
> # record is back 
> assert len(e.pool._pool.queue) == 1 
>
> # etc. 
> c1 = e.connect() 
> assert len(e.pool._pool.queue) == 0 
>
> c1.close() 
> assert len(e.pool._pool.queue) == 1 
>
>
>
>
> On Thu, May 10, 2018 at 12:25 PM, Mike Bayer wrote:
> > Here's a print statement you might want to try: 
> > 
> > diff --git a/lib/sqlalchemy/pool.py b/lib/sqlalchemy/pool.py 
> > index 89a4cea7c..452db4883 100644 
> > --- a/lib/sqlalchemy/pool.py 
> > +++ b/lib/sqlalchemy/pool.py 
> > @@ -689,6 +689,19 @@ def _finalize_fairy(connection, connection_record, 
> >  been garbage collected. 
> > 
> >  """ 
> > +    print(
> > +        "FINALIZE FAIRY:  GREENLET ID: %s, DB CONN: %s, FAIRY: %s, REF: %s  "
> > +        "CONNECTION_REC: %s CONNECTION_REC REF %s" % (
> > +            "put greenlet id here",
> > +            connection,
> > +            fairy,
> > +            ref,
> > +            connection_record,
> > +            connection_record.fairy_ref
> > +            if connection_record is not None else "n/a"
> > +        )
> > +    )
> > + 
> >  _refs.discard(connection_record) 
> > 
> >  if ref is not None: 
> > 
> > 
> > 
> > 
> > 
> > On Thu, May 10, 2018 at 12:21 PM, Mike Bayer wrote:
> >> On Thu, May 10, 2018 at 7:23 AM,   wrote:
> >>> Hi Mike, 
> >>> 
> >>> I added more logs and finally found the reason and a method to avoid
> >>> the deadlock between coroutines.
> >>>
> >>> In summary, the deadlock happend because two different coroutines use
> >>> the same ConnectionRecord with same connection.
> >>> It happends under QueuePool, when the DB connections are reset by the
> >>> DB server(not related with the backend driver).
> >>>
> >>> 1.A coroutine get a ConnectionRecord into ConnectionFairy, and apply a
> >>> weakref of the ConnectionFairy, with callback function `_finalize_fairy`.
> >>> 2.When the connection get disconnected, it receives an
> >>> exception(psycopg2.OperationalError), and goto
> >>> Connection._handle_dbapi_exception.
> >>> 3.In `_handle_dbapi_exception.finnaly`, it will call both
> >>> `ConnectionFairy.invalidate` and `self.invalidate`.
> >>> 4.In `ConnectionFairy.invalidate`, `_ConnectionRecord._checkin` is called,
> >>> and put the ConnectionRecord back to the QueuePool.
> >>> 5.In `self.invalidate`, `del self._root.__connection` is called, and **del**
> >>> the ConnectionFairy.
> >>> 6.The **del** operation will make weakref of ConnectionFairy to dead, and
> >>> the callback `_finalize_fairy` executed.
> >>> 7.In `_finalize_fairy`, another `_ConnectionRecord._checkin` is called, put
> >>> the ConnectionRecord to the QueuePool again.
> >> 
> >> Here's what does not make sense.  You are claiming that a simple 
> >> invalidation will result in the same ConnectionRecord being checked in 
> >> twice.   This is obviously not the case, what is special in your 
> >> situation that makes this happen? 
> >> 
> >> If I follow the steps you refer towards, in step 4, when 
> >> 

Re: [sqlalchemy] is it possible to make load_only local to an object only and/or avoid the identity map and/or not available via `get`?

2018-05-13 Thread Mike Bayer
On Sun, May 13, 2018 at 3:21 PM, Jonathan Vanasco  wrote:
> A better way of conveying my problem might be: `get()` returns any object
> which is in the identity map; I am having issues after the identity map is
> now populated with some objects which are 'partials' that were created via
> 'load_only' parameters.  The benefits of using `get` to minimize a database
> query are outnumbered by dozens of selects to the database.
>
> I'll try to re-explain everything with a more details below.
>
> On Sunday, May 13, 2018 at 11:41:56 AM UTC-4, Mike Bayer wrote:
>>
>>
>> Phase one is a query with lots of joinedload, and *also*, a
>> load_only() (Is that right?   your previous email shows a joinedload +
>> load_only, this email OTOH says "phase 1, lots of eagerloading and
>> joinedloading", is that the one with the "load_only" ?)
>>
>
> Phase 1 can have a load_only, usually not.  The problem with the load_only
> is in the stuff that now happens before phase 1.
>
>> Then, you say, the get() in phase 2 (which we assume is the get() you
>> refer towards in your previous email) hits objects from this
>> joinedload + loadonly 99% of the time, so 1% of the time they are from
>> a cache, which you haven't told me how objects get into that or what
>> state they are in.
>
>
> That 1% are direct queries against the Database.  The cache I'm talking
> about is SqlAlchemy's identity map.
>
> Phase-1:
> - SqlAlchemy queries PostgreSQL for the primary keys of objects under
> several criteria.
> - The app aggregates all the primary keys by object type, then selects the
> objects by the primary key, making extensive use of the joined/eager
> loading.
> - The generated SqlAlchemy objects are inspected and analyzed for foreign
> keys that are of interest, which are then loaded via the aggregated gets.
> - Batching selects like this enormously improved PostgreSQL performance by
> influencing how it loads/unloads and caches data. The queries only involve
> indexes, and then tables only get 1 (sometimes 2) operations with the
> primary index.
>
> Phase-2:
> - The objects are interacted with. Their attributes and relationships are
> accessed.
> - Sometimes this code needs to use a specific object that was
> not-necessarily loaded in Phase-1. SqlAlchemy's `get(primary_key)` is used
> to leverage the identity map, which would either return an existing object
> by primary key, or hit the database.  This almost always results in a "cache
> hit" from the identity map, so no sql is emitted. Occasionally the object
> was not loaded, so sql must be emitted to select it.
>
> The current problem was created when a new "Phase-0" was introduced.
>
> In Phase-0, SqlAlchemy queries a number of objects with `load_only`
> specifying a small subset of their columns.  If these objects are queried in
> Phase-2 via `get()`, the identity map usually returns the previously
> selected object with a small subset of the the columns; this causes sql to
> be emitted and data to load every time a previously untouched attributed or
> relationship is accessed for the bulk of the objects' columns and
> relationships.  This results in dozens of sql selects.
>
> What I'd like to accomplish: not return any objects which were loaded in
> Phase-0 via the `get` queries in Phase-2.

This is not necessarily what you want: if your phase-1 query happens
to locate an identity that's already present from phase-0, you just
want to populate it.  This is easy - call populate_existing() on your
phase-1 query.   Then any objects it locates will be fully loaded with
whatever your query specifies.
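
For example, a minimal sketch of that suggestion (User, session and pk_list
are hypothetical names standing in for the phase-1 query):

    from sqlalchemy.orm import joinedload

    objs = (
        session.query(User)
        .options(joinedload(User.addresses))
        .populate_existing()
        .filter(User.id.in_(pk_list))
        .all()
    )
    # any User already in the identity map as a load_only() partial is
    # refreshed with the full set of columns this query loads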


> I'd use a second session to handle the Phase-0 objects (which are readonly),
> except that means two database connections.

that's not true, you can use the same database connection on a second
Session.   Just say s2 = Session(bind=s1.connection()).
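
A short sketch of that, assuming an existing Engine; the second Session
borrows the first one's connection rather than checking out another:

    from sqlalchemy.orm import Session

    s1 = Session(bind=engine)
    s2 = Session(bind=s1.connection())   # shares s1's connection/transaction
    # use s2 for the read-only phase-0 objects, s1 for everything else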

> Loading the 'full' objects isn't a good option either - some of these
> objects span a half dozen database tables and are mapped back into a single
> object via association_proxy.

I'm not sure which step/phase this refers towards. But there are two
solutions for you up above.


>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread Mike Bayer
if that connection error is not handled in "is_disconnect" then this
wouldn't invalidate the pool.

However we've had lots of problems moving vips around with Galera.  I
would recommend using haproxy or PGBouncer which are both designed to
mitigate this issue, as well as running Nova under mod_wsgi instead
of eventlet.
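
If the driver error is not being recognized, here is a hedged sketch of forcing
the disconnect path from application code (the message check and engine URL are
assumptions for illustration, not a statement of what your deployment needs):

    from sqlalchemy import create_engine, event

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    @event.listens_for(engine, "handle_error")
    def _mark_disconnect(context):
        # treat the failover error as a disconnect so the pool is invalidated
        if "connection pointer is NULL" in str(context.original_exception):
            context.is_disconnect = True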

There's lots of ways your problem can be solved here.   I'd like to
find the specific race in SQLAlchemy but I need specific code steps.

On Sun, May 13, 2018 at 9:50 PM,   wrote:
> The scenario we make to reproduce the problem is:
> 1.stop PG master
> 2.move the IP of PG from master node to slave node, and make an ARP
> 3.fail over the PG slave to master
>
> That will shut all DB connections to PG master, and in SQLAlchemy we got an
> exception:
>
> ERROR sqlalchemy.pool.QueuePool Traceback (most recent call last):
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/pool.py", line 655, in
> _finalize_fairy
> ERROR sqlalchemy.pool.QueuePool fairy._reset(pool)
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/pool.py", line 798, in
> _reset
> ERROR sqlalchemy.pool.QueuePool self._reset_agent.rollback()
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line
> 1586, in rollback
> ERROR sqlalchemy.pool.QueuePool self._do_rollback()
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line
> 1624, in _do_rollback
> ERROR sqlalchemy.pool.QueuePool self.connection._rollback_impl()
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line
> 672, in _rollback_impl
> ERROR sqlalchemy.pool.QueuePool self._handle_dbapi_exception(e, None,
> None, None, None)
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line
> 1356, in _handle_dbapi_exception
> ERROR sqlalchemy.pool.QueuePool util.raise_from_cause(newraise,
> exc_info)
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/util/compat.py", line
> 200, in raise_from_cause
> ERROR sqlalchemy.pool.QueuePool reraise(type(exception), exception,
> tb=exc_tb)
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line
> 670, in _rollback_impl
> ERROR sqlalchemy.pool.QueuePool
> self.engine.dialect.do_rollback(self.connection)
> ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/default.py", line
> 421, in do_rollback
> ERROR sqlalchemy.pool.QueuePool dbapi_connection.rollback()
> ERROR sqlalchemy.pool.QueuePool DBConnectionError:
> (psycopg2.OperationalError) connection pointer is NULL
>
> With that exception, SQLAlchemy first get in
> `Connection._handle_dbapi_exception`(keyword "err connection pointer is
> NULL" in log) and then get it again in `_finalize_fairy`(keyword "Exception
> during reset or similar" in log)
>
>
>
> On Friday, May 11, 2018 at 12:21:42 AM UTC+8, Mike Bayer wrote:
>>
>> On Thu, May 10, 2018 at 7:23 AM,   wrote:
>> > Hi Mike,
>> >
>> > I added more logs and finally found the reason and a method to avoid the
>> > deadlock between coroutines.
>> >
>> > In summary, the deadlock happend because two different coroutines use
>> > the
>> > same ConnectionRecord with same connection.
>> > It happends under QueuePool, when the DB connections are reset by the DB
>> > server(not related with the backend driver).
>> >
>> > 1.A coroutine get a ConnectionRecord into ConnectionFairy, and apply a
>> > weakref of the ConnectionFairy, with callback function
>> > `_finalize_fairy`.
>> > 2.When the connection get disconnected, it receives an
>> > exception(psycopg2.OperationalError), and goto
>> > Connection._handle_dbapi_exception.
>> > 3.In `_handle_dbapi_exception.finnaly`, it will call both
>> > `ConnectionFairy.invalidate` and `self.invalidate`.
>> > 4.In `ConnectionFairy.invalidate`, `_ConnectionRecord._checkin` is
>> > called,
>> > and put the ConnectionRecord back to the QueuePool.
>> > 5.In `self.invalidate`, `del self._root.__connection` is called, and
>> > **del**
>> > the ConnectionFairy.
>> > 6.The **del** operation will make weakref of ConnectionFairy to dead,
>> > and
>> > the callback `_finalize_fairy` executed.
>> > 7.In `_finalize_fairy`, another `_ConnectionRecord._checkin` is called,
>> > put
>> > the ConnectionRecord to the QueuePool again.
>>
>> Here's what does not make sense.  You are claiming that a simple
>> invalidation will result in the same ConnectionRecord being checked in
>> twice.   This is obviously not the case, what is special in your
>> situation that makes this happen?
>>
>> If I follow the steps you refer towards, in step 4, when
>> ConnectionFairy._checkin is called, that calls finalize_fairy
>> directly, which then calls ConnectionRecord.checkin().  Within
>> ConnectionRecord.checkin(), ConnectionRecord.fairy_ref is set to None.
>>Then back in ConnectionFairy._checkin(), it also sets
>> self._connection_record = None.
>>
>> If we come back into finalize_fairy a *second* time then, as a result
>> of the "del" which occurs in _handle_error(), and the
>> 

Re: [sqlalchemy] Re: long transaction after database switched over

2018-05-13 Thread jiajunsu . zju
The scenario we use to reproduce the problem is: 
1. stop the PG master
2. move the IP of PG from the master node to the slave node, and send an ARP
3. fail over the PG slave to master

That shuts down all DB connections to the PG master, and in SQLAlchemy we get an 
exception:

ERROR sqlalchemy.pool.QueuePool Traceback (most recent call last):
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/pool.py", line 655, in 
_finalize_fairy
ERROR sqlalchemy.pool.QueuePool fairy._reset(pool)
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/pool.py", line 798, in 
_reset
ERROR sqlalchemy.pool.QueuePool self._reset_agent.rollback()
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line 
1586, in rollback
ERROR sqlalchemy.pool.QueuePool self._do_rollback()
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line 
1624, in _do_rollback
ERROR sqlalchemy.pool.QueuePool self.connection._rollback_impl()
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line 
672, in _rollback_impl
ERROR sqlalchemy.pool.QueuePool self._handle_dbapi_exception(e, None, 
None, None, None)
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line 
1356, in _handle_dbapi_exception
ERROR sqlalchemy.pool.QueuePool util.raise_from_cause(newraise, 
exc_info)
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/util/compat.py", line 
200, in raise_from_cause
ERROR sqlalchemy.pool.QueuePool reraise(type(exception), exception, 
tb=exc_tb)
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/base.py", line 
670, in _rollback_impl
ERROR sqlalchemy.pool.QueuePool self.engine.dialect.do_rollback(self.connection)
ERROR sqlalchemy.pool.QueuePool   File "sqlalchemy/engine/default.py", line 
421, in do_rollback
ERROR sqlalchemy.pool.QueuePool dbapi_connection.rollback()
ERROR sqlalchemy.pool.QueuePool DBConnectionError: 
(psycopg2.OperationalError) connection pointer is NULL

With that exception, SQLAlchemy first gets into 
`Connection._handle_dbapi_exception` (keyword "err connection pointer is 
NULL" in the log) and then hits it again in `_finalize_fairy` (keyword "Exception 
during reset or similar" in the log).
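
As an aside, a possible mitigation sketch (assuming SQLAlchemy 1.2+), separate
from the race being discussed here: pre-ping pooled connections so the ones
killed by the failover are detected and recycled at checkout instead of
raising mid-request:

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql://scott:tiger@localhost/test",
        pool_pre_ping=True,   # emit a lightweight ping on each checkout
    )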



On Friday, May 11, 2018 at 12:21:42 AM UTC+8, Mike Bayer wrote:
>
> On Thu, May 10, 2018 at 7:23 AM,   wrote:
> > Hi Mike, 
> > 
> > I added more logs and finally found the reason and a method to avoid the 
> > deadlock between coroutines. 
> > 
> > In summary, the deadlock happend because two different coroutines use 
> the 
> > same ConnectionRecord with same connection. 
> > It happends under QueuePool, when the DB connections are reset by the DB 
> > server(not related with the backend driver). 
> > 
> > 1.A coroutine get a ConnectionRecord into ConnectionFairy, and apply a 
> > weakref of the ConnectionFairy, with callback function 
> `_finalize_fairy`. 
> > 2.When the connection get disconnected, it receives an 
> > exception(psycopg2.OperationalError), and goto 
> > Connection._handle_dbapi_exception. 
> > 3.In `_handle_dbapi_exception.finnaly`, it will call both 
> > `ConnectionFairy.invalidate` and `self.invalidate`. 
> > 4.In `ConnectionFairy.invalidate`, `_ConnectionRecord._checkin` is 
> called, 
> > and put the ConnectionRecord back to the QueuePool. 
> > 5.In `self.invalidate`, `del self._root.__connection` is called, and 
> **del** 
> > the ConnectionFairy. 
> > 6.The **del** operation will make weakref of ConnectionFairy to dead, 
> and 
> > the callback `_finalize_fairy` executed. 
> > 7.In `_finalize_fairy`, another `_ConnectionRecord._checkin` is called, 
> put 
> > the ConnectionRecord to the QueuePool again. 
>
> Here's what does not make sense.  You are claiming that a simple 
> invalidation will result in the same ConnectionRecord being checked in 
> twice.   This is obviously not the case, what is special in your 
> situation that makes this happen? 
>
> If I follow the steps you refer towards, in step 4, when 
> ConnectionFairy._checkin is called, that calls finalize_fairy 
> directly, which then calls ConnectionRecord.checkin().  Within 
> ConnectionRecord.checkin(), ConnectionRecord.fairy_ref is set to None. 
>Then back in ConnectionFairy._checkin(), it also sets 
> self._connection_record = None. 
>
> If we come back into finalize_fairy a *second* time then, as a result 
> of the "del" which occurs in _handle_error(), and the 
> connection_record is present, the function will exit immediately, 
> because: 
>
>if connection_record.fairy_ref is not ref: 
> return 
>
> So the situation as given is not possible without concurrent access to 
> the ConnectionFairy being introduced externally to the pool. 
>
> There is exactly one thing I can think of that is both unique to the 
> psycopg2-style of asynchronous connection, not the PyMySQL form of it, 
> that might be important here, and it's that I think psycopg2's 
> connection.close() method will defer to another greenlet.  That means 
> when we're inside of 

Re: [sqlalchemy] is it possible to make load_only local to an object only and/or avoid the identity map and/or not available via `get`?

2018-05-13 Thread Jonathan Vanasco
A better way of conveying my problem might be: `get()` returns any object 
which is in the identity map; I am having issues now that the identity map is 
populated with some objects which are 'partials' that were created via 
'load_only' parameters.  The benefit of using `get` to avoid a database 
query is outweighed by the dozens of follow-up selects to the database.

I'll try to re-explain everything with more detail below.

On Sunday, May 13, 2018 at 11:41:56 AM UTC-4, Mike Bayer wrote:
>
>
> Phase one is a query with lots of joinedload, and *also*, a 
> load_only() (Is that right?   your previous email shows a joinedload + 
> load_only, this email OTOH says "phase 1, lots of eagerloading and 
> joinedloading", is that the one with the "load_only" ?) 
>
>
Phase 1 can have a load_only, usually not.  The problem with the load_only 
is in the stuff that now happens before phase 1.

> Then, you say, the get() in phase 2 (which we assume is the get() you 
> refer towards in your previous email) hits objects from this 
> joinedload + loadonly 99% of the time, so 1% of the time they are from 
> a cache, which you haven't told me how objects get into that or what 
> state they are in. 
>

That 1% are direct queries against the Database.  The cache I'm talking 
about is SqlAlchemy's identity map.

Phase-1:
- SqlAlchemy queries PostgreSQL for the primary keys of objects under 
several criteria.
- The app aggregates all the primary keys by object type, then selects the 
objects by the primary key, making extensive use of the joined/eager 
loading.
- The generated SqlAlchemy objects are inspected and analyzed for foreign 
keys that are of interest, which are then loaded via the aggregated gets.
- Batching selects like this enormously improved PostgreSQL performance by 
influencing how it loads/unloads and caches data. The queries only involve 
indexes, and then tables only get 1 (sometimes 2) operations with the 
primary index.

Phase-2:
- The objects are interacted with. Their attributes and relationships are 
accessed.
- Sometimes this code needs to use a specific object that was 
not-necessarily loaded in Phase-1. SqlAlchemy's `get(primary_key)` is used 
to leverage the identity map, which would either return an existing object 
by primary key, or hit the database.  This almost always results in a 
"cache hit" from the identity map, so no sql is emitted. Occasionally the 
object was not loaded, so sql must be emitted to select it.

The current problem was created when a new "Phase-0" was introduced.

In Phase-0, SqlAlchemy queries a number of objects with `load_only` 
specifying a small subset of their columns.  If these objects are queried 
in Phase-2 via `get()`, the identity map usually returns the previously 
selected object with only that small subset of columns loaded; this causes sql to 
be emitted and data to load every time a previously untouched attribute or 
relationship is accessed, for the bulk of the objects' columns and 
relationships.  This results in dozens of sql selects.

What I'd like to accomplish: not return any objects which were loaded in 
Phase-0 via the `get` queries in Phase-2.
I'd use a second session to handle the Phase-0 objects (which are 
readonly), except that means two database connections.
Loading the 'full' objects isn't a good option either - some of these 
objects span a half dozen database tables and are mapped back into a single 
object via association_proxy.
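
For illustration, a rough sketch of the behaviour described above (User,
session and the attribute names are hypothetical): the Phase-0 load_only()
query leaves a partial instance in the identity map, and a later get()
returns that same partial, so every untouched attribute then lazy-loads:

    from sqlalchemy.orm import load_only

    # "Phase 0": only two columns are loaded
    partial = (
        session.query(User)
        .options(load_only("id", "name"))
        .filter(User.id == 1)
        .one()
    )

    # "Phase 2": identity-map hit, no SQL emitted, same partial object
    same_obj = session.query(User).get(1)
    assert same_obj is partial

    same_obj.email       # deferred column -> extra SELECT
    same_obj.addresses   # relationship    -> extra SELECT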




Re: [sqlalchemy] is it possible to make load_only local to an object only and/or avoid the identity map and/or not available via `get`?

2018-05-13 Thread Mike Bayer
On Sat, May 12, 2018 at 4:28 PM, Jonathan Vanasco  wrote:
>
>
> On Saturday, May 12, 2018 at 9:49:34 AM UTC-4, Mike Bayer wrote:
>>
>>
>> The former would be a bug.  The latter, I'm not sure what you would expect
>> it to do.  Do you want the unloaded attributes to raise attribute error?  Or
>> did you want the get() to fully refresh what was not loaded (that is
>> doable).
>
>
> The latter.  The closest thing I've been able to get the behavior I need
> with is a separate session.
>
> This need came due to application growth over a long period.
>
> The application previously had two phases:
>
> * phase 1: lots of eagerloading and joinedloading
> * phase 2: leverage get(), which hits items loaded in phase1 99% of the
> time.


Trying to take what you've said directly (rather than my usual
glossing over things that aren't making sense and just re-asking), I
don't yet understand what you want to do when:

Phase one is a query with lots of joinedload, and *also*, a
load_only() (Is that right?   your previous email shows a joinedload +
load_only, this email OTOH says "phase 1, lots of eagerloading and
joinedloading", is that the one with the "load_only" ?)

Then, you say, the get() in phase 2 (which we assume is the get() you
refer towards in your previous email) hits objects from this
joinedload + loadonly 99% of the time, so 1% of the time they are from
a cache, which you haven't told me how objects get into that or what
state they are in.

And then you say, you don't like that when the program hits attributes
on the object that came from the get(), they emit SQL.   Instead,
you'd like those attributes to be loaded already.

Based on these facts, it seems like you would like to not use
load_only in the first place, or, when you do the get(), you really
would rather re-emit the query in all cases since 99% of these hits
are already in the identity map and won't have the attributes you
want.  E.g. don't use get(), use query(..).filter_by(id=pk).first().
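
A one-line sketch of that alternative (User and pk are hypothetical names):
re-emit the query with its own loader options instead of relying on get()
and the identity map:

    obj = session.query(User).filter_by(id=pk).first()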

What I don't understand is what any of this has to do with the "phase
0" you refer towards so I am very uncertain that I'm understanding the
problem.


>
> i just finished up some changes to the authorization system, which is now
> using a dogpile cache with a small subset of keys on short timeout. when
> there is a cache miss, this component executes sql before "phase 1" – i'll
> call this "phase 0".
>
> a handful of items accessed in "phase 2" are now loaded into the identity
> map during "phase 0", instead of being a miss.  instead of saving a Sql
> query, i'm now 20+ queries per object.
>
>
>
>
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



[sqlalchemy] Re: SQLACODEGEN + VSCODE + Intellisense not working.

2018-05-13 Thread Jeremy Flowers
I added a PEP0526 code hint and that fixed it.

   jobrow = session.query(Jobmst).filter(Jobmst.jobmst_id==job['jobmst_id']).first()  # type: Jobmst


Is it feasible for the SQLAlchemy tool to correctly add these at the library 
level?
It would be preferable to my client code!



[sqlalchemy] Re: SQLACODEGEN + VSCODE + Intellisense not working.

2018-05-13 Thread Jeremy Flowers

> I'd say specifically for this part:

  jobrow = session.query(Jobmst).filter(Jobmst.jobmst_id==job['jobmst_id']).first()

That first() method at the end...



[sqlalchemy] Re: SQLACODEGEN + VSCODE + Intellisense not working.

2018-05-13 Thread Jeremy Flowers

I got some feedback on Stack Overflow that sounds promising.
Does SQLAlchemy by any chance use docstrings to indicate return types?
It seems the PyCharm Python IDE may have a workaround for this.
See:
https://stackoverflow.com/questions/24684954/pycharm-type-hinting-of-class-fields-instance-variables
https://www.jetbrains.com/help/pycharm/using-docstrings-to-specify-types.html#d243105e184
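
For illustration, a hedged sketch of the docstring convention PyCharm
understands, wrapped around a hypothetical helper (Jobmst and the session are
assumed from the earlier posts); SQLAlchemy itself does not annotate
Query.first() this way:

    def get_job(session, job_id):
        """Return one Jobmst row by primary key.

        :type session: sqlalchemy.orm.session.Session
        :param job_id: primary key value
        :rtype: Jobmst
        """
        return session.query(Jobmst).filter(Jobmst.jobmst_id == job_id).first()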


