[sqlalchemy] Re: Default behavior for sqlalchemy_utils.UUIDType

2016-08-02 Thread Jason Libbey
I see, so I didn't quite understand the relationship between sqlalchemy and 
the sqlalchemy_utils libraries, but I think I've got it all straight now. 
 To answer your question as to why I would need the dashes, it's because 
the other dependencies expect to be able to look up uuid by the standard 
format.  At any rate, I've got it fixed now, thanks so much Mike!

On Friday, July 29, 2016 at 6:45:01 PM UTC-7, Jason Libbey wrote:
>
> Hi, this is my first post, so if it does not adhere to your guidelines, 
> please let me know and I'll fix it ASAP.
>
> I'm using the sqlalchemy_utils.UUIDType as per the backend-agnostic-guid-type 
> recipe.
>
> from sqlalchemy_utils import UUIDType
>
>
> class ApplicationStore(Base, Timestamp):
>     __tablename__ = 'applications'
>
>     id = Column(UUIDType(binary=False), primary_key=True, default=uuid.uuid4)
>     user_uuid = Column(UUIDType(binary=False), unique=True, index=True,
>                        nullable=False)
>
>
>
> Since I am using a MySQL db, the UUIDType is correctly falling back to the 
> CHAR(32) representation.  However, the value that gets written to the db is 
> the uuid with no dashes.
>
>
> application = application_service.create(user_uuid=uuid.uuid4())
>
>
>
> The string representation of the uuid shows it with dashes, but the 
> database char is saved without dashes.  So my questions are:
> 1. Am I using this correctly?
> 2. Is this the expected behavior?
> 3. How can I force it to save as uuid with dashes, while still using the 
> backend agnostic guid type?
>
> Python version: 2.7.10
> SQLAlchemy version: 1.0.12
>
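
For reference on the two textual forms at issue in this thread, Python's stdlib uuid module makes the difference concrete; this is a stdlib-only sketch, independent of sqlalchemy_utils:

```python
import uuid

# A fixed UUID so the lengths below are easy to check
u = uuid.UUID('12345678-1234-5678-1234-567812345678')

dashed = str(u)    # canonical 36-character form, with dashes
undashed = u.hex   # 32-character form without dashes, as stored in CHAR(32)

print(len(dashed), len(undashed))  # 36 32
# Both forms round-trip through uuid.UUID, so lookups can normalize either way
print(uuid.UUID(undashed) == u)    # True
```

Any code that needs the dashed form for external lookups can therefore normalize with `str(uuid.UUID(value))` at the boundary.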

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Mapping sql tables and automatically creating classes for ORM manipulation

2016-08-02 Thread Shravan Murali
Hey guys!

I recently worked on an app that automatically generates ORM classes for any 
number of SQL tables, given just the table names and the database name on the 
command line.

https://github.com/shravan97/ORM-Creator

I would certainly love to integrate this with SQLAlchemy. I would also be 
happy to see contributions to this project.



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-02 Thread Mike Bayer
What I need is a complete .py file that sets up a *minimal* version of 
*every* class required, then the Query object, then prints it.   I'll 
mangle it to do the right thing.


Like this:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))


s = Session()

q = s.query(A).join(B)

print q







On 08/02/2016 02:45 PM, Brian Cherinka wrote:

So I managed to get something to return using this definition of the
@expression, however, I'm not quite there yet.

@hybrid_property
def restwave(self):
    if self.target:
        redshift = self.target.NSA_objects[0].z
        wave = np.array(self.wavelength.wavelength)
        restwave = wave/(1+redshift)
        return restwave
    else:
        return None

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z))
                           .label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()
    return restwave

Using the example query,

rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid == '1-113520')
rw = rwquery.first()

I am getting a modified wavelength array, but it's the wrong one, using
the wrong ID.  For the ID 1-113520 I should be modifying the wavelength
array by (1+0.016765) and instead it's just grabbing the very first
value in the NSA.z column, which corresponds to (1+0.099954).  I think
this is because my filter condition is not getting passed into the nsaz
subquery, where it needs to go.  Do you know how I can pass filter
condition parameters down into any subqueries I may have in my
expression select statements?  Is that what .correlate() does?

My query looks like

In [24]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa
JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk
JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk
JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s







Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-02 Thread Brian Cherinka
So I managed to get something to return using this definition of the 
@expression, however, I'm not quite there yet.

@hybrid_property
def restwave(self):
    if self.target:
        redshift = self.target.NSA_objects[0].z
        wave = np.array(self.wavelength.wavelength)
        restwave = wave/(1+redshift)
        return restwave
    else:
        return None

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z))
                           .label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()
    return restwave

Using the example query, 

rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid == '1-113520')
rw = rwquery.first()

I am getting a modified wavelength array, but it's the wrong one, using the 
wrong ID.  For the ID 1-113520 I should be modifying the wavelength array 
by (1+0.016765) and instead it's just grabbing the very first value in the 
NSA.z column, which corresponds to (1+0.099954).  I think this is because 
my filter condition is not getting passed into the nsaz subquery, where it 
needs to go.  Do you know how I can pass filter condition parameters down 
into any subqueries I may have in my expression select statements?  Is that 
what .correlate() does?

My query looks like

In [24]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa
JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk
JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk
JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s
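
The per-row behaviour being asked about can be demonstrated in plain SQL with the stdlib sqlite3 module; the table and values below are hypothetical stand-ins. A correlated subquery references the enclosing query's current row and is re-evaluated per row, which is the kind of linkage SQLAlchemy's .correlate() governs:

```python
import sqlite3

# Two-row stand-in for the cube/NSA data; names and values are illustrative
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE cube (mangaid TEXT, z REAL);
    INSERT INTO cube VALUES ('1-113520', 0.016765), ('1-other', 0.099954);
""")

# Uncorrelated subquery: the inner SELECT ignores the outer row, so every
# outer row gets the same (first) z -- the symptom described above
uncorrelated = conn.execute(
    "SELECT mangaid, (SELECT z FROM cube LIMIT 1) FROM cube ORDER BY mangaid"
).fetchall()

# Correlated subquery: the inner SELECT references the outer row's mangaid,
# so it is re-evaluated for each row and returns the matching z
correlated = conn.execute(
    "SELECT o.mangaid, (SELECT i.z FROM cube AS i WHERE i.mangaid = o.mangaid) "
    "FROM cube AS o ORDER BY o.mangaid"
).fetchall()

print(uncorrelated)
print(correlated)
```

In the SQL printed above, `WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid` is a tautology for the same reason: the filter compares the table to itself instead of correlating with the outer query's row.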








Re: [sqlalchemy] refresh_flush is not triggered when only the primary key is updated

2016-08-02 Thread Mike Bayer



On 08/02/2016 10:14 AM, Lenar Imamutdinov wrote:

The only problem with after_flush() is that it's a session event. Why is
there no such event at the instance level?


because it would involve adding a loop that iterates through all the 
objects that were flushed and searches each one for event handlers, on 
every flush. That adds a significant amount of CPU work, in linear 
proportion to the number of objects handled, to all flushes for all 
SQLAlchemy installations everywhere.  You can build this loop yourself 
and optimize it for only those types of objects you know you care 
about (as well as handle said objects in batches).


alternatively, 1.1 adds event handlers for state changes on objects, which 
you can review here: 
http://docs.sqlalchemy.org/en/latest/orm/session_events.html#object-lifecycle-events 
You can likely intercept these to approximate an "object after flush" 
event, if you can limit your criteria to objects that had a state 
change.  But building a loop in after_flush() is easy enough.
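
The loop described above can be sketched in plain Python. The registry and hook names below are hypothetical stand-ins, not SQLAlchemy's real event API; in practice the dispatch body would run inside a real after_flush() handler over session.new and session.dirty:

```python
# Per-class handler registry: only classes someone registered for cost
# anything during dispatch, keeping the per-flush overhead opt-in
handlers = {}

def listen_after_flush(cls, fn):
    """Register fn to be called with each flushed instance of cls."""
    handlers.setdefault(cls, []).append(fn)

def after_flush_hook(flushed_objects):
    # One pass over the flushed objects, dispatching only for classes
    # that have registered handlers
    for obj in flushed_objects:
        for fn in handlers.get(type(obj), []):
            fn(obj)

class FileBacked(object):
    """Stand-in for a model whose filename depends on its id."""

seen = []
listen_after_flush(FileBacked, seen.append)
after_flush_hook([FileBacked(), object(), FileBacked()])
print(len(seen))  # 2: only the FileBacked instances were dispatched
```

Batching is a natural extension: collect the matched objects per class and hand each handler the whole list at once.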











On Tuesday, August 2, 2016 at 4:37:19 PM UTC+3, Mike Bayer wrote:



On 08/02/2016 04:24 AM, Lenar Imamutdinov wrote:
> Thank you for the comprehensive answer. Without going too much into
> details, I have a table where file associated with each database
record,
> and the name of that file is based on object id. This is what's
> happening now when I'm doing an INSERT:
> 1. Model instance is created
> 2. Explicit flush is done to get the id
> 3. File is created, the id is used for its name
> 4. Filename is written back to the instance
> 5. Commit
>
> It works perfectly, but I would like to avoid explicit flushing
and do
> all that file creating stuff somewhere implicitly before commit. This
> code is used inside the library so the solution should be
> database-specifics tolerant as much as possible too.
>
> Actually, I'm starting to realize that refresh_flush event is not
very
> good for this task, because it's called from within the flushing
process
> and all object modifications remain ignored, so I cannot write the
> filename to the object from the event handler. What I need is the
event
> that hooks to the model class, happens before commit, when the
flush is
> already done, but I can modify the object and trigger another
flush on
> exit if the object has been indeed modified. Is there anything
like this
> in current SQLAlchemy?

well the after_flush() event would be where you could do that, you can
in fact trigger a second flush in there.

This is a use case that is actually detected, as we've seen the case
where event handlers go into an endless loop (e.g. flush = do my
event =
do another flush); it will stop after 100 loops around to prevent this.
So you'd need to make sure your routine has no chance of going into an
endless loop.




>
>
> On Friday, July 29, 2016 at 5:35:01 PM UTC+3, Mike Bayer wrote:
>
> Thanks for the great test case and this would be appropriate
to be
> posted as a bug, since it is a complete description.
>
> So, the reason the primary key is not included right now is
because
> primary keys are populated in many different ways on the object,
> meaning
> if your code relies upon refresh_flush(), it will break if you
> switch to
> another database, or use an old version of Postgresql that
doesn't
> support RETURNING, or the table/dialect is set up to no longer
use
> RETURNING, etc.   The primary key is not actually populated at
that
> point the way the other defaults are (it was already populated
> previously, so I guess to the event consumer, it doesn't make
that much
> difference).
>
> Looking at where refresh_flush is called, there is still
inconsistency
> for other kinds of defaults too; if you use eager_defaults
with a MySQL
> database and server-side defaults it looks like you'd get
called in the
> ordinary refresh() event anyway (that is, if you try to
eager_defaults
> against a server_default="0", a Postgresql DB will call
> refresh_flush, a
> MySQL DB will call refresh()).
>
> Still, a primary key that's an autoincrement will never come
back for
> MySQL within either of these events - there's no RETURNING for
MySQL
> (and others).
>
> refresh_flush() is very new and not widely used and it is likely
> harmless to just add what we have here (not to mention
document it
> either way).  But for any change I'd first ask, what are you
trying to
> do?   The primary keys are excluded from "refresh" because
they are not
> usually a part of that concept, primary keys are "first class"
  

Re: [sqlalchemy] refresh_flush is not triggered when only the primary key is updated

2016-08-02 Thread Lenar Imamutdinov
The only problem with after_flush() is that it's a session event. Why is there 
no such event at the instance level?

On Tuesday, August 2, 2016 at 4:37:19 PM UTC+3, Mike Bayer wrote:
>
>
>
> On 08/02/2016 04:24 AM, Lenar Imamutdinov wrote: 
> > Thank you for the comprehensive answer. Without going too much into 
> > details, I have a table where file associated with each database record, 
> > and the name of that file is based on object id. This is what's 
> > happening now when I'm doing an INSERT: 
> > 1. Model instance is created 
> > 2. Explicit flush is done to get the id 
> > 3. File is created, the id is used for its name 
> > 4. Filename is written back to the instance 
> > 5. Commit 
> > 
> > It works perfectly, but I would like to avoid explicit flushing and do 
> > all that file creating stuff somewhere implicitly before commit. This 
> > code is used inside the library so the solution should be 
> > database-specifics tolerant as much as possible too. 
> > 
> > Actually, I'm starting to realize that refresh_flush event is not very 
> > good for this task, because it's called from within the flushing process 
> > and all object modifications remain ignored, so I cannot write the 
> > filename to the object from the event handler. What I need is the event 
> > that hooks to the model class, happens before commit, when the flush is 
> > already done, but I can modify the object and trigger another flush on 
> > exit if the object has been indeed modified. Is there anything like this 
> > in current SQLAlchemy? 
>
> well the after_flush() event would be where you could do that, you can 
> in fact trigger a second flush in there. 
>
> This is a use case that is actually detected, as we've seen the case 
> where event handlers go into an endless loop (e.g. flush = do my event = 
> do another flush); it will stop after 100 loops around to prevent this. 
> So you'd need to make sure your routine has no chance of going into an 
> endless loop. 
>
>
>
>
> > 
> > 
> > On Friday, July 29, 2016 at 5:35:01 PM UTC+3, Mike Bayer wrote: 
> > 
> > Thanks for the great test case and this would be appropriate to be 
> > posted as a bug, since it is a complete description. 
> > 
> > So, the reason the primary key is not included right now is because 
> > primary keys are populated in many different ways on the object, 
> > meaning 
> > if your code relies upon refresh_flush(), it will break if you 
> > switch to 
> > another database, or use an old version of Postgresql that doesn't 
> > support RETURNING, or the table/dialect is set up to no longer use 
> > RETURNING, etc.   The primary key is not actually populated at that 
> > point the way the other defaults are (it was already populated 
> > previously, so I guess to the event consumer, it doesn't make that 
> much 
> > difference). 
> > 
> > Looking at where refresh_flush is called, there is still 
> inconsistency 
> > for other kinds of defaults too; if you use eager_defaults with a 
> MySQL 
> > database and server-side defaults it looks like you'd get called in 
> the 
> > ordinary refresh() event anyway (that is, if you try to 
> eager_defaults 
> > against a server_default="0", a Postgresql DB will call 
> > refresh_flush, a 
> > MySQL DB will call refresh()). 
> > 
> > Still, a primary key that's an autoincrement will never come back 
> for 
> > MySQL within either of these events - there's no RETURNING for MySQL 
> > (and others). 
> > 
> > refresh_flush() is very new and not widely used and it is likely 
> > harmless to just add what we have here (not to mention document it 
> > either way).  But for any change I'd first ask, what are you trying 
> to 
> > do?   The primary keys are excluded from "refresh" because they are 
> not 
> > usually a part of that concept, primary keys are "first class" and 
> > always get pulled no matter what, you can grab them in 
> after_insert(), 
> > for example. 
> > 
> > 
> > 
> > 
> > 
> > On 07/29/2016 04:25 AM, Lenar Imamutdinov wrote: 
> > > Hello! 
> > > 
> > > Looks like the refresh_flush event is not triggered when the only 
> > field 
> > > to update after flush is the primary key fed from RETURNING 
> clause. 
> > > However it should, as far as I understand what is mentioned in the 
> > > documentation. 
> > > 
> > > Environment: SQLAlchemy 1.0.14, PostgreSQL 9.5, Python 3.4 
> > > 
> > > Here is how to reproduce this problem: 
> > > 
> > > from sqlalchemy.ext.declarative import declarative_base 
> > > from sqlalchemy import Column, Integer 
> > > from sqlalchemy import create_engine 
> > > from sqlalchemy.orm import Session 
> > > from sqlalchemy import event 
> > > 
> > > Base = declarative_base() 
> > > 
> > > def receive_refresh_flush(target, context, attrs): 
> > >  

Re: [sqlalchemy] Re: DBAPI Error with SQLAlchemy-1.0.5

2016-08-02 Thread Mike Bayer
I'd recommend trying psycopg2 as the driver as it may handle this 
failure mode more gracefully.




On 08/02/2016 04:34 AM, Simon King wrote:

It looks like an exception is occurring, which SQLAlchemy has caught
and is now trying to roll back the transaction before re-raising the
initial exception. However, a second exception has occurred during the
rollback, so you can no longer see the original exception.

The second exception looks like a possible bug in the pg8000 driver,
where it's trying to send a rollback command without an actual
connection to the database.

In order to find the original exception, perhaps you could start by
putting some print statements in the _handle_dbapi_exception method:

https://bitbucket.org/zzzeek/sqlalchemy/src/3873d7db340835a38e6b191e8466fb42c3a9d3f6/lib/sqlalchemy/engine/base.py?at=master=file-view-default#base.py-1235

Hope that helps,

Simon
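
The masking Simon describes can be illustrated without a database at all. In the sketch below, FakeConnection and its attributes are hypothetical stand-ins mimicking the pg8000 behaviour: the rollback itself fails because the socket is already gone, and that second exception is the only one the caller sees.

```python
class FakeConnection(object):
    def __init__(self):
        self._sock = None  # connection lost: no socket object left at all

    def execute(self, sql):
        # Stand-in for whatever the real, original failure was
        raise ValueError("original error: bad integer value from server")

    def rollback(self):
        # pg8000-style failure: trying to send ROLLBACK with no socket
        # raises AttributeError: 'NoneType' object has no attribute 'sendall'
        self._sock.sendall(b"rollback")

conn = FakeConnection()
try:
    try:
        conn.execute("SELECT * FROM some_table")
    except Exception:
        conn.rollback()  # raises too, hiding the ValueError from the caller
except Exception as exc:
    masked = exc

print(type(masked).__name__, masked)
```

This is why the traceback in the original report ends at `sendall` rather than at the query that actually failed.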

On Tue, Aug 2, 2016 at 4:11 AM, Nikhil S Menon  wrote:

If the datatype in the database is varchar it works fine; the error comes only
with integer. Any help here is highly appreciated.

Best Regards,
Nikhil


On Sunday, 31 July 2016 22:28:27 UTC+5:30, Nikhil S Menon wrote:


I am using SQLAlchemy-1.0.5.


I have an external db with a table having an integer column. While I run the
select query I am getting the error below.

File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 914, in execute
return meth(self, multiparams, params)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1146, in _execute_context
context)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1334, in _handle_dbapi_exception
self._autorollback()
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 791, in _autorollback
self._root._rollback_impl()
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 670, in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1266, in _handle_dbapi_exception
exc_info
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
line 202, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 668, in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
line 420, in do_rollback
dbapi_connection.rollback()
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/pg8000/core.py", line
1751, in rollback
self.execute(self._cursor, "rollback", None)
  File
"/home/nfs/nikhil/local/lib/python2.7/site-packages/pg8000/core.py", line
2016, in execute
self._flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
sqlalchemy.exc.DBAPIError: (exceptions.AttributeError) 'NoneType' object
has no attribute 'sendall'
No handlers could be found for logger "sqlalchemy.pool.QueuePool"


sqlalchemy.exc.DBAPIError: (exceptions.AttributeError) 'NoneType' object
has no attribute 'sendall'


from sqlalchemy import *

def connect_db(db):
    engine = create_engine('postgresql+pg8000://postgres@10.102.31.123:5432/%s' % db)
    return engine

def load_table(table_name):
    table_data = Table(table_name, m, autoload=True, autoload_with=engine)
    return table_data

def get_all_tables_data():
    m.reflect(bind=engine)
    for table in m.tables:
        print "%s\n-\n" % table
        table_data = load_table(table)
        q = select([table_data])
        print q
        res = conn.execute(q)
        for row in res:
            print row

engine = connect_db("db")
m = MetaData(engine)
conn = engine.connect()
get_all_tables_data()


It's happening only with the external database when I use the sqlalchemy
library in python.  If I use the postgres shell it works. Please find below
details from the postgres interface.

db=# \d mas_license;
          Table "public.mas_license"
     Column     |       Type        | Modifiers
----------------+-------------------+-----------
 id             | character varying | not null
 node_id        | character varying |
 rpt_sampletime | bigint            |
 max_vips       | character varying |
 pooled_lic     | integer           |
 cpx_lic        | integer           |
 perf           | integer           |
 snmp_traps  

Re: [sqlalchemy] refresh_flush is not triggered when only the primary key is updated

2016-08-02 Thread Mike Bayer



On 08/02/2016 04:24 AM, Lenar Imamutdinov wrote:

Thank you for the comprehensive answer. Without going too much into
details, I have a table where a file is associated with each database
record, and the name of that file is based on the object id. This is what's
happening now when I'm doing an INSERT:
1. Model instance is created
2. Explicit flush is done to get the id
3. File is created, the id is used for its name
4. Filename is written back to the instance
5. Commit

It works perfectly, but I would like to avoid explicit flushing and do
all that file creating stuff somewhere implicitly before commit. This
code is used inside the library so the solution should be
database-specifics tolerant as much as possible too.

Actually, I'm starting to realize that refresh_flush event is not very
good for this task, because it's called from within the flushing process
and all object modifications remain ignored, so I cannot write the
filename to the object from the event handler. What I need is the event
that hooks to the model class, happens before commit, when the flush is
already done, but I can modify the object and trigger another flush on
exit if the object has been indeed modified. Is there anything like this
in current SQLAlchemy?


well the after_flush() event would be where you could do that, you can 
in fact trigger a second flush in there.


This is a use case that is actually detected, as we've seen the case 
where event handlers go into an endless loop (e.g. flush = do my event = 
do another flush); it will stop after 100 loops around to prevent this. 
So you'd need to make sure your routine has no chance of going into an 
endless loop.








On Friday, July 29, 2016 at 5:35:01 PM UTC+3, Mike Bayer wrote:

Thanks for the great test case and this would be appropriate to be
posted as a bug, since it is a complete description.

So, the reason the primary key is not included right now is because
primary keys are populated in many different ways on the object,
meaning
if your code relies upon refresh_flush(), it will break if you
switch to
another database, or use an old version of Postgresql that doesn't
support RETURNING, or the table/dialect is set up to no longer use
RETURNING, etc.   The primary key is not actually populated at that
point the way the other defaults are (it was already populated
previously, so I guess to the event consumer, it doesn't make that much
difference).

Looking at where refresh_flush is called, there is still inconsistency
for other kinds of defaults too; if you use eager_defaults with a MySQL
database and server-side defaults it looks like you'd get called in the
ordinary refresh() event anyway (that is, if you try to eager_defaults
against a server_default="0", a Postgresql DB will call
refresh_flush, a
MySQL DB will call refresh()).

Still, a primary key that's an autoincrement will never come back for
MySQL within either of these events - there's no RETURNING for MySQL
(and others).

refresh_flush() is very new and not widely used and it is likely
harmless to just add what we have here (not to mention document it
either way).  But for any change I'd first ask, what are you trying to
do?   The primary keys are excluded from "refresh" because they are not
usually a part of that concept, primary keys are "first class" and
always get pulled no matter what, you can grab them in after_insert(),
for example.





On 07/29/2016 04:25 AM, Lenar Imamutdinov wrote:
> Hello!
>
> Looks like the refresh_flush event is not triggered when the only
field
> to update after flush is the primary key fed from RETURNING clause.
> However it should, as far as I understand what is mentioned in the
> documentation.
>
> Environment: SQLAlchemy 1.0.14, PostgreSQL 9.5, Python 3.4
>
> Here is how to reproduce this problem:
>
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import Column, Integer
> from sqlalchemy import create_engine
> from sqlalchemy.orm import Session
> from sqlalchemy import event
>
> Base = declarative_base()
>
> def receive_refresh_flush(target, context, attrs):
>     print('refresh_flush received')
>
> class Test(Base):
>     __tablename__ = 'refresh_flush_test'
>     id = Column(Integer, primary_key=True)
>     # uncomment the field below to receive the event
>     # dummy = Column(Integer, default=0)
>
> engine = create_engine('postgresql://test:test@localhost:5432/test')
> Base.metadata.create_all(engine)
> session = Session(engine)
>
> event.listen(Test, 'refresh_flush', receive_refresh_flush)
>
> obj = Test()
> session.add(obj)
> session.commit()
>

Re: [sqlalchemy] SAWarning shows not immediately, but after some time of app execution

2016-08-02 Thread TomS.


On 07/22/2016 05:13 PM, Simon King wrote:

On Fri, Jul 22, 2016 at 3:25 PM, TomS.  wrote:

On 07/19/2016 06:41 PM, Mike Bayer wrote:



On 07/19/2016 11:51 AM, TomS. wrote:

Hi,

We have Flask app which uses SQLAlchemy. Weird error started to happen
recently. The difficulty is that we can't reproduce the error (/figure
out conditions causing issue) - maybe someone could help. Any hints/tips
would be appreciated.

There is a part in the code which constructs IN in SQL:

MyModel.id.in_(my_ids)

For some cases my_ids is an empty list. It works without any problems,
but after some time the same query (using empty list) starts to raise an
exception:

SAWarning: The IN-predicate on "MyModel.id" was invoked with an empty
sequence. This results in a contradiction, which nonetheless can be
expensive to evaluate.  Consider alternative strategies for improved
performance.

After restarting app, everything works again.

The question is - why is this exception not always raised (although we
tried to run the app with an empty list directly), but only after some time of
app execution (~1 day)?


It's not an exception, it's a warning.  Python warnings by default emit only
once, see:
https://docs.python.org/2/library/warnings.html#the-warnings-filter
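
Both behaviours in this thread (the warning vanishing, and the warning raising) come down to the warnings filter. A stdlib-only sketch, using a stand-in class in place of SAWarning:

```python
import warnings

class FakeSAWarning(Warning):
    """Stand-in for sqlalchemy.exc.SAWarning."""

# "default" action: a given warning is reported once per call location,
# which is why the SAWarning seemed to vanish until the app was restarted
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")
    for _ in range(3):
        warnings.warn("empty IN-predicate", FakeSAWarning)
print(len(caught))  # 1: repeats from the same location are suppressed

# "error" action: the same warning now raises, which is what a stray
# filter configuration in the app was causing
raised = False
with warnings.catch_warnings():
    warnings.simplefilter("error", FakeSAWarning)
    try:
        warnings.warn("empty IN-predicate", FakeSAWarning)
    except FakeSAWarning:
        raised = True
print(raised)  # True
```

Note that logging.captureWarnings(True) only redirects warning output to logging; it does not change which warnings fire or raise.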

Ok, this is the explanation why it doesn't show up regularly. Thank you.

I don't know why the SAWarning is treated as an error. Here is the log:

2016-07-21 13:58:14,108 ERROR: Exception on /own [GET]
Traceback (most recent call last):
[...]
   File
"/home/developer/.virtualenvs/rest/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py",
line 1297, in warn
 warnings.warn(msg, exc.SAWarning, stacklevel=2)
SAWarning: The IN-predicate on "MyModel.id" was invoked with an empty
sequence. This results in a contradiction, which nonetheless can be
expensive to evaluate.

I will also ask on Flask group, but maybe you know the reason - Flask config
regarding logging is as follow:

import logging
from logging.handlers import RotatingFileHandler

logger_file_handler = RotatingFileHandler('my.log', maxBytes=1024 * 1024 * 100,
                                          backupCount=20)
logger_file_handler.setLevel(logging.DEBUG)
logger_formatter = logging.Formatter(u'%(asctime)s %(levelname)s: %(message)s')
logger_file_handler.setFormatter(logger_formatter)

logging.captureWarnings(True)

app.logger.addHandler(logger_file_handler)
app.logger.setLevel(logging.DEBUG)

There is no code that changes the behavior of SAWarning to 'error' via a
warnings filter ...

The python warnings system is completely separate from the logging
system. Whether or not a particular warning is turned into an
exception is driven by the "warnings filter":

https://docs.python.org/2/library/warnings.html#the-warnings-filter

I think you must be configuring the warnings filter somewhere, because
by default this shouldn't raise an exception.
Yes, you were right. I finally found the source of the issue. There was a 
line in the code which set warnings to behave like exceptions, but it was 
only executed under certain conditions. That was the first difficulty with 
debugging. The second one was the fact that this is a web app run with many 
processes, so in some processes that code had already executed while in 
others it had not.


Thank you for the ideas!


Simon


Cheers!

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: DBAPI Error with SQLAlchemy-1.0.5

2016-08-02 Thread Simon King
It looks like an exception is occurring, which SQLAlchemy has caught
and is now trying to roll back the transaction before re-raising the
initial exception. However, a second exception has occurred during the
rollback, so you can no longer see the original exception.

The second exception looks like a possible bug in the pg8000 driver,
where it's trying to send a rollback command without an actual
connection to the database.

In order to find the original exception, perhaps you could start by
putting some print statements in the _handle_dbapi_exception method:

https://bitbucket.org/zzzeek/sqlalchemy/src/3873d7db340835a38e6b191e8466fb42c3a9d3f6/lib/sqlalchemy/engine/base.py?at=master=file-view-default#base.py-1235

Hope that helps,

Simon

On Tue, Aug 2, 2016 at 4:11 AM, Nikhil S Menon  wrote:
> If the datatype in the database is varchar it works fine; the error comes only
> with integer. Any help here is highly appreciated.
>
> Best Regards,
> Nikhil
>
>
> On Sunday, 31 July 2016 22:28:27 UTC+5:30, Nikhil S Menon wrote:
>>
>> I am using SQLAlchemy 1.0.5.
>>
>> I have an external database with a table containing an integer column. When
>> I run the select query I get the error below.
>>
>> File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 914, in execute
>> return meth(self, multiparams, params)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
>> line 323, in _execute_on_connection
>> return connection._execute_clauseelement(self, multiparams, params)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 1010, in _execute_clauseelement
>> compiled_sql, distilled_params
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 1146, in _execute_context
>> context)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 1334, in _handle_dbapi_exception
>> self._autorollback()
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 791, in _autorollback
>> self._root._rollback_impl()
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 670, in _rollback_impl
>> self._handle_dbapi_exception(e, None, None, None, None)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 1266, in _handle_dbapi_exception
>> exc_info
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
>> line 202, in raise_from_cause
>> reraise(type(exception), exception, tb=exc_tb, cause=cause)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
>> line 668, in _rollback_impl
>> self.engine.dialect.do_rollback(self.connection)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
>> line 420, in do_rollback
>> dbapi_connection.rollback()
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/pg8000/core.py", line
>> 1751, in rollback
>> self.execute(self._cursor, "rollback", None)
>>   File
>> "/home/nfs/nikhil/local/lib/python2.7/site-packages/pg8000/core.py", line
>> 2016, in execute
>> self._flush()
>>   File "/usr/lib/python2.7/socket.py", line 303, in flush
>> self._sock.sendall(view[write_offset:write_offset+buffer_size])
>> sqlalchemy.exc.DBAPIError: (exceptions.AttributeError) 'NoneType' object
>> has no attribute 'sendall'
>> No handlers could be found for logger "sqlalchemy.pool.QueuePool"
>>
>>
>> sqlalchemy.exc.DBAPIError: (exceptions.AttributeError) 'NoneType' object
>> has no attribute 'sendall'
>>
>>
>> from sqlalchemy import *
>>
>> def connect_db(db):
>>     engine = create_engine('postgresql+pg8000://postgres@10.102.31.123:5432/%s' % db)
>>     return engine
>>
>> def load_table(table_name):
>>     table_data = Table(table_name, m, autoload=True, autoload_with=engine)
>>     return table_data
>>
>> def get_all_tables_data():
>>     m.reflect(bind=engine)
>>     for table in m.tables:
>>         print "%s\n-\n" % table
>>         table_data = load_table(table)
>>         q = select([table_data])
>>         print q
>>         res = conn.execute(q)
>>         for row in res:
>>             print row
>>
>> engine = connect_db("db")
>> m = MetaData(engine)
>> conn = engine.connect()
>> get_all_tables_data()
>>
>>
>> It only happens with the external database when I use the sqlalchemy
>> library in Python. If I use the postgres shell it works. Please find
>> the details from the postgres interface below.
>>
>> db=# \d mas_license;
>>Table "public.mas_license"
>>  Column |   Type| Modifiers
>> +---+---
>>  id | character varying | not null
>>  node_id| character varying |
>>  rpt_sampletime | bigint 

Re: [sqlalchemy] refresh_flush is not triggered when only the primary key is updated

2016-08-02 Thread Lenar Imamutdinov
Thank you for the comprehensive answer. Without going too much into 
details: I have a table where a file is associated with each database record, 
and the name of that file is based on the object id. This is what happens 
now when I do an INSERT:
1. Model instance is created
2. Explicit flush is done to get the id
3. File is created, using the id for its name
4. Filename is written back to the instance
5. Commit

It works perfectly, but I would like to avoid the explicit flush and do all 
that file-creation work implicitly somewhere before commit. This code is 
used inside a library, so the solution should also be as tolerant of 
database specifics as possible.

Actually, I'm starting to realize that the refresh_flush event is not well 
suited for this task, because it's called from within the flush process and any 
object modifications made there are ignored, so I cannot write the filename to 
the object from the event handler. What I need is an event that hooks into the 
model class and fires before commit, when the flush is already done, but lets 
me modify the object and trigger another flush on exit if the object has 
indeed been modified. Is there anything like this in current SQLAlchemy?


On Friday, July 29, 2016 at 5:35:01 PM UTC+3, Mike Bayer wrote:
>
> Thanks for the great test case and this would be appropriate to be 
> posted as a bug, since it is a complete description. 
>
> So, the reason the primary key is not included right now is because 
> primary keys are populated in many different ways on the object, meaning 
> if your code relies upon refresh_flush(), it will break if you switch to 
> another database, or use an old version of Postgresql that doesn't 
> support RETURNING, or the table/dialect is set up to no longer use 
> RETURNING, etc.   The primary key is not actually populated at that 
> point the way the other defaults are (it was already populated 
> previously, so I guess to the event consumer, it doesn't make that much 
> difference). 
>
> Looking at where refresh_flush is called, there is still inconsistency 
> for other kinds of defaults too; if you use eager_defaults with a MySQL 
> database and server-side defaults it looks like you'd get called in the 
> ordinary refresh() event anyway (that is, if you try to eager_defaults 
> against a server_default="0", a Postgresql DB will call refresh_flush, a 
> MySQL DB will call refresh()). 
>
> Still, a primary key that's an autoincrement will never come back for 
> MySQL within either of these events - there's no RETURNING for MySQL 
> (and others). 
>
> refresh_flush() is very new and not widely used and it is likely 
> harmless to just add what we have here (not to mention document it 
> either way).  But for any change I'd first ask, what are you trying to 
> do?   The primary keys are excluded from "refresh" because they are not 
> usually a part of that concept, primary keys are "first class" and 
> always get pulled no matter what, you can grab them in after_insert(), 
> for example. 
>
>
>
>
>
> On 07/29/2016 04:25 AM, Lenar Imamutdinov wrote: 
> > Hello! 
> > 
> > Looks like the refresh_flush event is not triggered when the only field 
> > to update after flush is the primary key fed from RETURNING clause. 
> > However it should, as far as I understand what is mentioned in the 
> > documentation. 
> > 
> > Environment: SQLAlchemy 1.0.14, PostgreSQL 9.5, Python 3.4 
> > 
> > Here is how to reproduce this problem: 
> > 
> > from sqlalchemy.ext.declarative import declarative_base
> > from sqlalchemy import Column, Integer
> > from sqlalchemy import create_engine
> > from sqlalchemy.orm import Session
> > from sqlalchemy import event
> >
> > Base = declarative_base()
> >
> > def receive_refresh_flush(target, context, attrs):
> >     print('refresh_flush received')
> >
> > class Test(Base):
> >     __tablename__ = 'refresh_flush_test'
> >     id = Column(Integer, primary_key=True)
> >     # uncomment the field below to receive the event
> >     # dummy = Column(Integer, default=0)
> >
> > engine = create_engine('postgresql://test:test@localhost:5432/test')
> > Base.metadata.create_all(engine)
> > session = Session(engine)
> >
> > event.listen(Test, 'refresh_flush', receive_refresh_flush)
> >
> > obj = Test()
> > session.add(obj)
> > session.commit()
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> > Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to sqlalchemy+...@googlegroups.com  
> > . 
> > To post to this group, send email to sqlal...@googlegroups.com 
>  
> > . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>
