[sqlalchemy] SQLAlchemy is bugging out my Redshift table.

2018-08-30 Thread laurent . bastien
Hi everyone.

I've been experiencing a very weird issue, and I hope someone can help me
with it.

I've mapped a Redshift table using SQLAlchemy's ORM.

Here is my code.

from sqlalchemy import Column, Integer, String
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import sessionmaker
import psycopg2


user = ""
passwd = ""
host = ""

# note: the URL needs the "@" separator between the password and the host
engine = create_engine(
    'postgresql+psycopg2://{}:{}@{}'.format(user, passwd, host))

Base = automap_base()


class Bookmarks(Base):
    __tablename__ = 'my_table'
    __table_args__ = {"schema": "dev"}

    agent_id = Column(Integer, primary_key=True)
    username = Column(String)
    email = Column(String)
    first_name = Column(String)
    last_name = Column(String)
    role_id = Column(Integer)
    role = Column(String)
    team = Column(String)
    access_level = Column(String)
    location = Column(String)


def loadSession():
    Session = sessionmaker(bind=engine)
    session = Session()
    return session


Base.prepare()
session = loadSession()
res = session.query(Bookmarks).all()


Everything works fine after executing the code. The problem is that it
looks like it's been interacting weirdly with AWS. I've been creating
tables to test this out, and it seems like every time I map a Redshift
table using this code, the table becomes totally unresponsive. I'm able to
query it using SQLAlchemy, but if I try to alter it or delete it, it
literally takes an hour before I get a response.

I'm not getting any errors; it's just that everything becomes very slow.
The tables are very small, so this shouldn't be happening.
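
In case it helps with diagnosis, here is a hedged sketch of how one could
check for blocking locks from a second connection (STV_LOCKS is Redshift's
lock system table; a long-pending ALTER/DROP usually means another
transaction, such as an ORM session left open after a query, still holds a
lock on the table):

from sqlalchemy import text

# list currently held table locks; requires sufficient privileges
with engine.connect() as conn:
    for row in conn.execute(text("SELECT * FROM stv_locks")):
        print(row)

# the ORM session above keeps its transaction open after .query();
# closing it releases its locks so pending DDL can proceed
session.close()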

Has anyone experienced this issue? 





Re: [sqlalchemy] Custom JSON type that acts as a numpy array on Python side

2018-08-30 Thread Ruben Di Battista
Ok, thank you for the fast answer as always.

Moreover, the spec for the JSON datatype in MySQL says:

    A JSON column cannot have a default value.

So I think I will just skip the default value.
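
As an aside, SQLAlchemy's default= is applied client-side at INSERT time
rather than as a MySQL column DEFAULT, so a Python-side default is still an
option. A minimal sketch, not verified against the ndarray subclass in
question: a zero-argument lambda is introspectable where the ElevationMask
callable was not.

elevation_mask = Column('elevationMask', JSONElevationMask,
                        # the lambda defers construction and has a signature
                        # SQLAlchemy's callable check accepts
                        default=lambda: ElevationMask([[0, 2 * PI], [0, 0]]))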

Thank you again, by the way!

RdB
https://rdb.is


On 30 August 2018 at 17:09:33, Mike Bayer (mike...@zzzcomputing.com) wrote:

On Thu, Aug 30, 2018 at 6:32 AM, Ruben Di Battista wrote:
> Ehi Mike, thank you :).
>
> [... full code snipped; it is quoted in Ruben's original message at the
> end of this digest ...]
>
> elevation_mask = Column('elevationMask', JSONElevationMask,
>                         default=ElevationMask([
>                             [0, 2*PI],
>                             [0, 0]])
>                         )


it looks like that object you have has many special behaviors,
including __new__, __call__, and is also a descendant of a C object,
so some combination of those things which I can't exactly reproduce
here is causing SQLAlchemy's check for passing of a "callable" to
fail. The "default" parameter of a Column can accept a Python
callable, so this object can't be passed as is since it is already a
non-compatible callable. Your best bet is to just pass the JSON
itself as a SQL object, e.g. text("'%s'" %
ElevationMask(...).to_json()). SQLAlchemy will use the text()
construct as is to generate a SQL string that will be inline with the
INSERT statement.


>
> [... traceback snipped; it appears in full in the original message at the
> end of this digest ...]
Re: [sqlalchemy] Custom JSON type that acts as a numpy array on Python side

2018-08-30 Thread Mike Bayer
On Thu, Aug 30, 2018 at 6:32 AM, Ruben Di Battista wrote:
> Ehi Mike, thank you :).
>
> [... full code snipped; it is quoted in Ruben's original message at the
> end of this digest ...]
>
> elevation_mask = Column('elevationMask', JSONElevationMask,
>                         default=ElevationMask([
>                             [0, 2*PI],
>                             [0, 0]])
>                         )


it looks like that object you have has many special behaviors,
including __new__, __call__, and is also a descendant of a C object,
so some combination of those things which I can't exactly reproduce
here is causing SQLAlchemy's check for passing of a "callable" to
fail. The "default" parameter of a Column can accept a Python
callable, so this object can't be passed as-is since it is already a
non-compatible callable. Your best bet is to just pass the JSON
itself as a SQL object, e.g. text("'%s'" % ElevationMask(...).to_json()).
SQLAlchemy will use the text() construct as-is to generate a SQL
string that will be inline with the INSERT statement.
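
A minimal sketch of that suggestion (hedged: PI is assumed to come from
math.pi, since the original snippet does not show its import, and the naive
%-quoting is only safe if the serialized JSON contains no single quotes):

from math import pi as PI  # assumption: source of PI in the snippet above

from sqlalchemy import text

# Pre-serialize the default mask and hand it to Column as an inline SQL
# expression, so Column never introspects the ndarray subclass as a callable.
DEFAULT_MASK_JSON = ElevationMask([[0, 2 * PI], [0, 0]]).to_json()

elevation_mask = Column('elevationMask', JSONElevationMask,
                        default=text("'%s'" % DEFAULT_MASK_JSON))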

Re: [sqlalchemy] flask SQLAlchemy hybrid_property Boolean value of this clause is not defined

2018-08-30 Thread Mike Bayer
On Wed, Aug 29, 2018 at 2:17 PM,   wrote:
> Hi !
>
> I'm using Python 3.6.5, Flask 1.0.2, SQLAlchemy 1.0.5, and I want to define
> an attribute as the maximum of two others, based on the flask-admin hybrid
> property example:
>
>
> from flask import Flask
> from flask_sqlalchemy import SQLAlchemy
> from sqlalchemy.ext.hybrid import hybrid_property
>
> import flask_admin as admin
> from flask_admin.contrib import sqla
>
> from sqlalchemy.sql.expression import func
>
> # Create application
> app = Flask(__name__)
>
> # Create dummy secret key so we can use sessions
> app.config['SECRET_KEY'] = '123456790'
>
> # Create SQLite database
> app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///sample_db_2.sqlite'
> app.config['SQLALCHEMY_ECHO'] = True
> db = SQLAlchemy(app)
>
>
> # Flask views
> @app.route('/')
> def index():
>     return 'Click me to get to Admin!'
>
>
> class Screen(db.Model):
>     __tablename__ = 'screen'
>     id = db.Column(db.Integer, primary_key=True)
>     width = db.Column(db.Integer, nullable=False)
>     height = db.Column(db.Integer, nullable=False)
>
>     @hybrid_property
>     def max(self):
>         return max(self.height, self.width)

when you use a hybrid_property you need to ensure the expression you
are returning is also suitable as a SQL expression, so your code
should read:

class Screen(db.Model):
    __tablename__ = 'screen'
    id = db.Column(db.Integer, primary_key=True)
    width = db.Column(db.Integer, nullable=False)
    height = db.Column(db.Integer, nullable=False)

    @hybrid_property
    def max(self):
        return max(self.height, self.width)

    @max.expression
    def max(cls):
        return func.max(cls.height, cls.width)
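
A hedged aside: at the class level this now emits SQL instead of calling
Python's max(). SQLite accepts max(x, y) as a two-argument scalar function;
on backends such as PostgreSQL the equivalent expression would be
func.greatest(cls.height, cls.width). A quick usage sketch:

# instance level: plain Python max()
s = Screen(width=1024, height=768)
assert s.max == 1024

# class level: renders e.g. "max(screen.height, screen.width) > ?" on SQLite
wide = db.session.query(Screen).filter(Screen.max > 1000).all()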

>
> class ScreenAdmin(sqla.ModelView):
>     """ Flask-admin can not automatically find a hybrid_property yet. You
>     will need to manually define the column in
>     list_view/filters/sorting/etc. """
>     column_list = ['id', 'width', 'height', 'max']
>     column_sortable_list = ['id', 'width', 'height', 'max']
>
>     # Flask-admin can automatically detect the relevant filters for hybrid
>     # properties.
>     column_filters = ('max', )
>
>
> # Create admin
> admin = admin.Admin(app, name='Example: SQLAlchemy2',
>                     template_mode='bootstrap3')
> admin.add_view(ScreenAdmin(Screen, db.session))
>
> if __name__ == '__main__':
>
>     # Create DB
>     db.create_all()
>
>     # Start app
>     app.run(debug=True)
>
>
> But it fails, raising this error:
>
> raise TypeError("Boolean value of this clause is not defined")
>
>
> Traceback:
>
> /home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_sqlalchemy/__init__.py:794:
> FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant
> overhead and will be disabled by default in the future.  Set it to True or
> False to suppress this warning.
>   'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
> Traceback (most recent call last):
>   File "app.py", line 49, in 
> admin.add_view(ScreenAdmin(Screen, db.session))
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/view.py",
> line 329, in __init__
> menu_icon_value=menu_icon_value)
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py",
> line 804, in __init__
> self._refresh_cache()
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py",
> line 881, in _refresh_cache
> self._list_columns = self.get_list_columns()
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/model/base.py",
> line 1022, in get_list_columns
> excluded_columns=self.column_exclude_list,
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/view.py",
> line 531, in get_column_names
> column, path = tools.get_field_with_path(self.model, c)
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/flask_admin/contrib/sqla/tools.py",
> line 150, in get_field_with_path
> value = getattr(current_model, attribute)
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py",
> line 867, in __get__
> return self._expr_comparator(owner)
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py",
> line 1066, in expr_comparator
> owner, self.__name__, self, comparator(owner),
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/ext/hybrid.py",
> line 1055, in _expr
> return ExprComparator(cls, expr(cls), self)
>   File "app.py", line 34, in number_of_pixels
> return max(self.width,self.height)
>   File
> "/home/bibo/usr/miniconda/envs/otroflask/lib/python3.6/site-packages/sqlalchemy/sql/elements.py",
> line 2975, in __bool__
> raise TypeError("Boolean value of this clause is not defined")
>
>

Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-08-30 Thread Mike Bayer
On Thu, Aug 30, 2018 at 7:11 AM, sector119 wrote:
> Mike, but in run_migrations_online() I use
> conn = connection.execution_options(schema_translate_map={None: schema_name}),
> but I get no schemas in the resulting alembic/versions/file.py

can you share your env.py
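
For reference, a hedged sketch of the kind of env.py loop being discussed;
SCHEMAS and target_metadata here are illustrative names, not taken from
this thread:

# env.py (sketch): run the same migrations once per tenant schema
from alembic import context
from sqlalchemy import engine_from_config, pool

SCHEMAS = ['tenant_1', 'tenant_2']  # illustrative tenant schema names


def run_migrations_online():
    connectable = engine_from_config(
        context.config.get_section(context.config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        for schema_name in SCHEMAS:
            # translate schema-less Table objects to the tenant's schema
            conn = connection.execution_options(
                schema_translate_map={None: schema_name})
            context.configure(connection=conn,
                              target_metadata=target_metadata)
            with context.begin_transaction():
                context.run_migrations()

Note that schema_translate_map only affects how migrations execute; it does
not put schema names into autogenerated alembic/versions/*.py files.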

>
> [... earlier exchange snipped; it is quoted in full in the next message
> of this digest ...]



Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-08-30 Thread sector119
Mike, but in run_migrations_online() I use
conn = connection.execution_options(schema_translate_map={None: schema_name}),
but I get no schemas in the resulting alembic/versions/file.py.


On Wednesday, August 29, 2018 at 20:46:07 UTC+3, Mike Bayer wrote:
>
> On Wed, Aug 29, 2018 at 5:12 AM, sector119 wrote:
> > Hello
> >
> > I have N schemas with the same set of tables: 1 system schema with
> > users, groups, ... tables, and 6 schemas with streets, organizations,
> > transactions, ... tables.
> > On those schemas' tables I don't set __table_args__ = ({'schema': SCHEMA},);
> > I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA)
> > before SQL queries.
> >
> > When I make some changes in my model structures I want to refactor the
> > tables in all schemas using Alembic. How can I do that?
> > Maybe I can make some loop over my schemas somewhere?
>
> setting the search path is going to confuse SQLAlchemy's table
> reflection process, such that it assumes a Table of a certain schema
> does not require a "schema" argument, because it is already in the
> search path.
>
> Keep the search path set to "public", see
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path.
> There is an option to change this behavior mentioned in that
> section called postgresql_ignore_search_path, however it isn't
> guaranteed to suit all use cases. If that makes your case work, then
> that would be all you need. If not, then read on...
>
> For the officially supported way to do this, you want to have the
> explicit schema name inside the SQL - but this can be automated for a
> multi-tenancy application. Use the schema translation map feature:
> http://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execution_options#schema-translating.
>
> > 
> > 
> > Thanks 
> > 



Re: [sqlalchemy] Custom JSON type that acts as a numpy array on Python side

2018-08-30 Thread Ruben Di Battista
Ehi Mike, thank you :).

I just went ahead and implemented a custom type for a Python object that is
a (subclass of a) numpy array and is stored as JSON in the DB.

This is the Python class:

import json  # used by to_json/from_json below

import numpy as np


class ElevationMask(np.ndarray):
    r""" A class to represent an elevation mask.

    This class proxies the functionality of a numpy array encoding the
    elevation mask as a functional dependency of the type:

    .. math::

        \epsilon_\text{mask} = \epsilon_\text{mask}(\theta)

    So the elevation mask is described as the mapping
    :math:`\epsilon_\text{mask}` that for each value of azimuth angle
    :math:`\theta` maps the corresponding elevation angle value.

    Args:
        elevation_mask(numpy.ndarray): The elevation mask as a numpy array.
            It's a 2xN array: the first row is the azimuth values, the
            second row is the elevation values.
    Attributes:
        azimuth(numpy.ndarray): The values of the azimuth angles that are
            part of the elevation mask
        elevation(numpy.ndarray): The corresponding elevation values to the
            azimuth values
    """

    def __new__(cls, elevation_mask):
        obj = np.asarray(elevation_mask).view(cls)
        rows, cols = obj.shape

        if rows != 2:
            raise IndexError("The elevation mask array given does not have "
                             "the right shape. It needs to be a 2xN array.")

        return obj

    @property
    def azimuth(self):
        return self[0, :]

    @property
    def elevation(self):
        return self[1, :]

    def __call__(self, azimuth_value):
        # Interpolation bounds checking
        if azimuth_value < self.azimuth.min() or \
                azimuth_value > self.azimuth.max():
            raise ValueError("The required azimuth "
                             "value is out of interpolation bounds")

        return np.interp(azimuth_value,
                         self.azimuth,
                         self.elevation)

    @staticmethod
    def _to_json(obj):
        """ This method returns the dict representation for JSON encoding """

        return {
            '__type__': type(obj).__name__,
            'entries': {
                'azimuth': obj.azimuth.tolist(),
                'elevation': obj.elevation.tolist()
            }
        }

    def to_json(self):
        """ This actually dumps the instance into the JSON string """

        return json.dumps(self, default=self._to_json)

    @staticmethod
    def _from_json(json_dict):
        """ This reconstitutes the Python object from the JSON
        serialization """
        if '__type__' in json_dict:
            if json_dict['__type__'] == ElevationMask.__name__:
                mask_entries = json_dict['entries']
                azimuth = mask_entries['azimuth']
                elevation = mask_entries['elevation']

                el_mask_array = np.array([azimuth, elevation])

                return ElevationMask(el_mask_array)

        return json_dict

    @classmethod
    def from_json(cls, json_dict):
        return json.loads(json_dict, object_hook=cls._from_json)


So it's basically a 2xN array.

Then I have the actual custom type:

from sqlalchemy.dialects.mysql import JSON
from sqlalchemy.types import TypeDecorator


class JSONElevationMask(TypeDecorator):
    """ This type emits a JSON object into the database. When loading the
    JSON back from the database, it turns it into an ElevationMask object
    (i.e. a numpy-like object with a few additional features). """

    impl = JSON

    def process_bind_param(self, value, dialect):
        if value is not None:
            return value.to_json()

    # Note: the result-side hook on TypeDecorator is process_result_value;
    # a method named process_load_param is never called by SQLAlchemy.
    # Also, the underlying JSON impl may already deserialize the value to a
    # dict before it reaches this method.
    def process_result_value(self, value, dialect):
        if value is not None:
            return ElevationMask.from_json(value)
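
For context, a hedged round-trip sketch of how the type is meant to be used
(GroundStation, Base, Integer, and session are illustrative names assuming
the usual declarative setup, not part of the real model):

class GroundStation(Base):  # hypothetical mapped class
    __tablename__ = 'ground_station'
    id = Column(Integer, primary_key=True)
    elevation_mask = Column('elevationMask', JSONElevationMask)


station = GroundStation(
    elevation_mask=ElevationMask([[0.0, 6.28], [5.0, 10.0]]))
session.add(station)
session.commit()  # bind side: ElevationMask -> JSON

loaded = session.query(GroundStation).first()
loaded.elevation_mask(1.0)  # result side: JSON -> ElevationMask, interpolated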

It seems to work as expected. The only problem I'm experiencing is when I
set up a default value for a Column of this type:

elevation_mask = Column('elevationMask', JSONElevationMask,
                        default=ElevationMask([
                            [0, 2*PI],
                            [0, 0]])
                        )

When I do this, I get

  File "/Users/rubendibattista/.envs/pyggdrasill/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 1259, in __init__
    args.append(ColumnDefault(self.default))
  File "/Users/rubendibattista/.envs/pyggdrasill/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 2067, in __init__
    arg = self._maybe_wrap_callable(arg)
  File "/Users/rubendibattista/.envs/pyggdrasill/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 2100, in _maybe_wrap_callable
    argspec = util.get_callable_argspec(fn, no_self=True)
  File "/Users/rubendibattista/.envs/pyggdrasill/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 330, in get_callable_argspec
    return get_callable_argspec(fn.__call__, no_self=no_self)
  File "/Users/rubendibattista/.e