Re: [sqlalchemy] set a query timeout on a per query basis

2018-11-12 Thread Brian Cherinka
Hmm. Ok.  I'm using a postgres database with the psycopg2 driver.  I'm 
aware of the `statement_timeout` option in `postgres` which I can pass into 
the psycopg2 `connect` method.  As far as I can tell there's not a way to 
pass that keyword in through SQLAlchemy after the db engine has been 
created.  Is that correct?  Does the `query.execution_options`, or 
something in session, accept that keyword?
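
A minimal sketch of one per-query route with psycopg2/Postgres, assuming an ordinary ORM Session bound to a psycopg2 engine (session and MyModel below are placeholders, not anything from this thread): issue SET LOCAL statement_timeout inside the transaction the query runs in, so the limit disappears at the next commit or rollback.

from sqlalchemy import text

# hedged sketch: scope a Postgres statement_timeout (milliseconds) to the
# current transaction only; it resets automatically on commit/rollback.
# `session` is an ordinary ORM Session, `MyModel` a placeholder mapped class.
session.execute(text("SET LOCAL statement_timeout = 60000"))
results = session.query(MyModel).all()
session.commit()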

On Monday, November 12, 2018 at 3:15:23 PM UTC-5, Mike Bayer wrote:
>
> On Mon, Nov 12, 2018 at 2:08 PM Brian Cherinka  > wrote: 
> > 
> > What's the best way to set a timeout for specific queries?  I have a 
> custom query tool that uses SQLAlchemy to build and submit queries.  This 
> tool can be used in a local python session with a database.  I'm also using 
> it to allow queries in a Flask web-app. In general, I do not want to apply 
> a time limit to all queries, but I only want to apply a time limit of 1 
> minute to queries submitted through the web app.  Given a built SQLAlchemy 
> query and a db session instance, is there any way to set a timeout through 
> the query or session objects without having to recreate a database 
> connection? 
>
> There's no set way to do that at the SQLAlchemy level, this depends 
> very much on the database and the Python driver you are using and may 
> not be possible for every driver. 
>



[sqlalchemy] set a query timeout on a per query basis

2018-11-12 Thread Brian Cherinka
What's the best way to set a timeout for specific queries?  I have a custom 
query tool that uses SQLAlchemy to build and submit queries.  This tool can 
be used in a local python session with a database.  I'm also using it to 
allow queries in a Flask web-app. In general, I do not want to apply a time 
limit to all queries, but I only want to apply a time limit of 1 minute to 
queries submitted through the web app.  Given a built SQLAlchemy query and 
a db session instance, is there any way to set a timeout through the query 
or session objects without having to recreate a database connection?  
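
One possible arrangement (a sketch under assumptions, not something stated in this thread): since the web app and the local sessions can use different engines, the Flask side could create its own engine whose connections always carry a one-minute statement_timeout via psycopg2's libpq options string. The URL below is a placeholder.

from sqlalchemy import create_engine

# hedged sketch: a dedicated engine for the web app whose connections are
# created with a one-minute Postgres statement_timeout (value in ms);
# the local/interactive engine is created without connect_args and is unaffected
web_engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/mydb",   # placeholder URL
    connect_args={"options": "-c statement_timeout=60000"},
)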



[sqlalchemy] Re: accessing functions in the postgres public schema using sqlalchemy.func

2018-08-01 Thread Brian Cherinka

Yeah everything is working in psql just fine.   

select current_schema();
 current_schema
----------------
 public
(1 row)

Time: 1.226 ms

show search_path;
   search_path
-----------------
 "$user", public
(1 row)

select add(2,4);
 add
-----
   6
(1 row)
 

But you jogged my memory and think I found the problem.  I dug back through 
my code and found this buried in it

import sqlalchemy
from sqlalchemy.event import listen


def clearSearchPathCallback(dbapi_con, connection_record):
    '''
    When creating relationships across schemas, SQLAlchemy
    has problems when you explicitly declare the schema in
    ModelClasses and it is found in search_path.

    The solution is to set the search_path to "$user" for
    the life of any connection to the database. Since there
    is no (or shouldn't be!) schema with the same name
    as the user, this effectively makes it blank.

    This callback function is called for every database connection.

    For the full details of this issue, see:
    http://groups.google.com/group/sqlalchemy/browse_thread/thread/88b5cc5c12246220

    dbapi_con - type: psycopg2._psycopg.connection
    connection_record - type: sqlalchemy.pool._ConnectionRecord
    '''
    cursor = dbapi_con.cursor()
    cursor.execute('SET search_path TO "$user",functions')
    dbapi_con.commit()


listen(sqlalchemy.pool.Pool, 'connect', clearSearchPathCallback)

which shows that I was setting the search_path to exclude public.  Changing that 
line to 'SET search_path TO "$user",functions, public' fixes everything in SQLA.  
It's been a while since I've looked at my code that sets up the 
DatabaseConnection.  I must have had an original reason to exclude public, 
but it doesn't seem relevant anymore.  Thanks for your help.
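
A sketch of the corrected listener with that one-line change applied (the function name here is arbitrary; the rest mirrors the snippet above):

import sqlalchemy
from sqlalchemy.event import listen


def setSearchPathCallback(dbapi_con, connection_record):
    # same callback as above, but with public kept on the search_path so
    # functions installed there (e.g. by extensions) stay visible to func.*
    cursor = dbapi_con.cursor()
    cursor.execute('SET search_path TO "$user",functions, public')
    dbapi_con.commit()


listen(sqlalchemy.pool.Pool, 'connect', setSearchPathCallback)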


On Wednesday, August 1, 2018 at 2:18:55 PM UTC-4, Brian Cherinka wrote:
>
> Hi,
>
> What's the best way to access functions that live in schema='public' in 
> the postgres databases?  Some postgresql extensions install functions in 
> the public schema that I would like accessible via *sqlachemy.func,* however 
> I get an error when attempting to call them.   Am I missing something when 
> setting up my Base Classes?  Maybe regarding the `search_path`?  I'm 
> defining my models and tables with a Declarative Base.  My 
> default_schema_name is set to 'public'.   I've read through this page, 
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path 
> but it's not clear where I should be setting my search_path to ensure it 
> includes the public schema.  
>
> As an example, I have a test "add" function in the 'public' schema , which 
> crashes presumably because it cannot find the function definition
>
> session.query(func.add(2,4)).all()
>
> ProgrammingError: (psycopg2.ProgrammingError) function add(integer, 
> integer) does not exist
> LINE 1: SELECT add(2, 4) AS add_1
>^
> HINT:  No function matches the given name and argument types. You might 
> need to add explicit type casts.
>  [SQL: 'SELECT add(%(add_2)s, %(add_3)s) AS add_1'] [parameters: {'add_3': 
> 4, 'add_2': 2}]
>
>
>  and a test "newadd" function defined in an explicit schema called 
> "functions".  This seems to automatically get reflected and mapped.  And 
> works perfectly.  
>
> session.query(func.newadd(2,4)).all()
>
> [(6)]
>
> One solution is to install the postgres extension into the functions 
> schema, but this kind of breaks the usage within postgres itself.  I have 
> to always explicity set search_path='functions' in order to use them.  So 
> it's not ideal.  Ideally, I'd like *sqlachemy.func* to understand 
> functions that live either in the "functions" or "public" schema.  Any 
> ideas on how to fix this? 
>
> Cheers, Brian
>



[sqlalchemy] accessing functions in the postgres public schema using sqlalchemy.func

2018-08-01 Thread Brian Cherinka
Hi,

What's the best way to access functions that live in schema='public' in 
the postgres databases?  Some postgresql extensions install functions in 
the public schema that I would like accessible via *sqlalchemy.func*; however, 
I get an error when attempting to call them.   Am I missing something when 
setting up my Base Classes?  Maybe regarding the `search_path`?  I'm 
defining my models and tables with a Declarative Base.  My 
default_schema_name is set to 'public'.   I've read through this page, 
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path 
but it's not clear where I should be setting my search_path to ensure it 
includes the public schema.  

As an example, I have a test "add" function in the 'public' schema, which 
crashes, presumably because it cannot find the function definition:

session.query(func.add(2,4)).all()

ProgrammingError: (psycopg2.ProgrammingError) function add(integer, integer) 
does not exist
LINE 1: SELECT add(2, 4) AS add_1
               ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.
 [SQL: 'SELECT add(%(add_2)s, %(add_3)s) AS add_1'] [parameters: {'add_3': 4, 'add_2': 2}]


and a test "newadd" function defined in an explicit schema called 
"functions".  This seems to automatically get reflected and mapped, and 
works perfectly.  

session.query(func.newadd(2,4)).all()

[(6)]

One solution is to install the postgres extension into the functions 
schema, but this kind of breaks the usage within postgres itself: I have 
to always explicitly set search_path='functions' in order to use them, so 
it's not ideal.  Ideally, I'd like *sqlalchemy.func* to understand functions 
that live either in the "functions" or "public" schema.  Any ideas on how 
to fix this? 

Cheers, Brian
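
A related aside, not something stated in this thread: sqlalchemy.func accepts a dotted prefix, so a function living in an explicit schema can also be called without touching search_path at all (session here is the usual ORM session, and the emitted SQL shown in the comment is what func renders for dotted names).

from sqlalchemy import func

# func renders dotted names as-is, so the schema can be spelled out directly;
# this emits: SELECT functions.newadd(2, 4) ...
q = session.query(func.functions.newadd(2, 4))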



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-04 Thread Brian Cherinka

>
>
>
> I've yet to see an unambiguous statement of what "the raw SQL" is.   If 
> it is this: 
>
> select c.pk,c.mangaid,c.manga_target_pk, n.z, 
> (select (array_agg(unwave.restw))[0:5] as restwave from (select 
> (unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as 
> w) as unwave) 
> from mangadatadb.cube as c 
>  join mangasampledb.manga_target as m on m.pk=c.manga_target_pk 
>  join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk 
>  join mangasampledb.nsa as n on n.pk=t.nsa_pk 
>  join mangadatadb.wavelength as w on w.pk=c.wavelength_pk; 

 

> then that does not, and cannot, correspond to the Python code you are 
> sending.  the JOINs would not be inside of the @expression, you'd have 
> to write them out on the outside. 

 
Yeah, that was the SQL that would be typed into psql.  I know that the 
Python implementation is incorrect.  I've been trying to sort that out.   


I've provided a complete example of how to render SQL extremely similar 
> to what you want, featuring techniques such as correlate() as well as 
> how to nest the queries appropriately.  If at this point you aren't able 
> to manipulate the code to get what you want, then I'd advise not using a 
> hybrid for this query at all. It is extremely complicated in this 
> context, and unless you are deeply familiar with SQLAlchemy APIs, you 
> would just have a bunch of code that you can't effectively maintain. 
>

Ok. Thanks for your help.  I appreciate it.  
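
For later readers of the archive, a self-contained sketch of the correlate() technique referred to above, using throwaway models A and B rather than the real ones: correlate() tells the inner SELECT not to add A to its own FROM list, so the value is computed per row of the outer query.

from sqlalchemy import Column, ForeignKey, Integer, create_engine, func, select
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# a column-valued (scalar) subquery: count B rows for the current outer A row;
# correlate(A) keeps A out of the subquery's own FROM clause
b_count = (
    select([func.count(B.id)])
    .where(B.a_id == A.id)
    .correlate(A)
    .label('b_count')
)

print(session.query(A.id, b_count).all())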
 



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-04 Thread Brian Cherinka

Yeah, sorry about that.  Like I said, I don't normally build my classes 
this way, defining all the columns in the Base Class.  I define all my 
columns, primary and foreign keys, etc manually first, so my SQLA classes 
are minimally defined.  And I cobbled together pieces from my real code and 
your example to build that test code I sent you.  So I forgot some things. 
 I did say it may not be completely right, and you might have to hack it 
some.  I apologize.  Here is an attempt at cleaning it up. 

When I try to implement your @expression into my real code, it doesn't 
work.  I am getting an error about multiple results returned within the 
subquery. 

@restwave.expression
def restwave(cls):
    s = db.Session()
    restw = (func.unnest(Wavelength.wavelength) / (1 + sampledb.NSA.z)).label("restw")
    unwave = s.query(restw).select_from(Wavelength).correlate(sampledb.NSA).subquery("unwave")
    agg_unwave = s.query(func.array_agg(unwave.c.restw)).label("restwarr")
    joined = s.query(agg_unwave).select_from(
        Cube
    ).join(sampledb.MangaTarget, sampledb.MangaTargetToNSA,
           sampledb.NSA, Wavelength)
    return joined.as_scalar()

session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid == '1-113520').all()

ProgrammingError: (psycopg2.ProgrammingError) more than one row returned by 
a subquery used as an expression
 [SQL: 'SELECT (SELECT (SELECT array_agg(unwave.restw) AS array_agg_1 
\nFROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + 
mangasampledb.nsa.z) AS restw \nFROM mangadatadb.wavelength) AS unwave) AS 
restwarr \nFROM mangadatadb.cube JOIN mangasampledb.manga_target ON 
mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk JOIN 
mangasampledb.manga_target_to_nsa ON mangasampledb.manga_target.pk = 
mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangasampledb.nsa ON 
mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN 
mangadatadb.wavelength ON mangadatadb.wavelength.pk = 
mangadatadb.cube.wavelength_pk) AS anon_1 \nFROM mangadatadb.cube \nWHERE 
mangadatadb.cube.mangaid = %(mangaid_1)s'] [parameters: {'z_1': 1, 
'mangaid_1': '1-113520'}]


I don't know if this is because I have multiple versions of the same 
object or not.  I tried adding in a selection on version, but the joins did 
not work out properly.  The NSA table does not need to be joined with the 
versions.  What I expect the above to return is exactly what the raw SQL 
returns: a list of the restwave array for each version of the object with 
id = '1-113520'.  I thought SQLA was designed precisely to let users do 
what I'm trying, i.e. construct complex columns involving multiple selects, 
without having to physically add a new column into the db or write a new 
SQL function in the db to call?  

session.query(datadb.Cube.restwave).join(datadb.Cube, datadb.PipelineInfo, datadb.PipelineVersion).filter(
    datadb.PipelineVersion == 'v1_5_1', datadb.Cube.mangaid == '1-113520').all()

InvalidRequestError: Could not find a FROM clause to join from.  Tried 
joining to , but got: 
Can't find any foreign key relationships between 'nsa' and 'cube'.
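
An archive note on the first error above, offered as a guess rather than a diagnosis: a subquery used as an expression must yield exactly one row, so when several versions of an object exist, the scalar subquery either needs a filter that pins it to one version or an explicit limit before as_scalar(). A hedged fragment built on the `joined` query from the expression above:

# hedged fragment: force the correlated subquery down to a single row;
# this silences the "more than one row" error but silently picks one version
return joined.limit(1).as_scalar()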


On Wednesday, August 3, 2016 at 2:22:43 PM UTC-4, Mike Bayer wrote:
>
> There is still much ambiguity here and inaccuracy (JOINs on the outside 
> or JOINs on the inside, the mappings have mistakes like foreign key to 
> "pk" but no "pk", mappings without primary keys, "autoload" makes no 
> sense as I don't have your tables, etc.), so I can only guess but 
> perhaps give you enough clues.   It is highly unusual to have a string 
> of four JOINs inside of a column-based subquery, but when you say 
> s.query(Cube.restwave), that's what that means here. 
>
>

import sqlalchemy
from sqlalchemy import ForeignKey, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, deferred, sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.dialects.postgresql import *
from sqlalchemy.types import Float, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy import create_engine

# import sqlite3
# conn = sqlite3.connect('/Users/Brian/Work/python/manga/test_sqla.db')
# c = conn.cursor()
# c.execute('''create table cube (pk integer primary key, mangaid text, wavelength_pk integer, target_pk integer);''')
# c.execute('''create table nsa (pk integer primary key, z real);''')
# c.execute('''create table target (pk integer primary key, mangaid text);''')
# c.execute('''create table target_to_nsa (pk integer primary 

Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Ok.  Yeah, I have been trying many different ways of getting results.  The 
raw SQL that I'm trying to recreate in SQLA is this (for the restwave 
column only), which works in postgresql.  The limit was only there to 
filter the results; you can ignore it.

manga=# select c.pk,c.mangaid,c.manga_target_pk, n.z, (select (array_agg(unwave.restw))[0:5] as restwave from (select (unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as w) as unwave)
from mangadatadb.cube as c
join mangasampledb.manga_target as m on m.pk=c.manga_target_pk
join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk
join mangasampledb.nsa as n on n.pk=t.nsa_pk
join mangadatadb.wavelength as w on w.pk=c.wavelength_pk;

  pk   | mangaid | manga_target_pk |         z          | restwave
-------+---------+-----------------+--------------------+----------------------------------------------------------------------------------------
 11211 | 1-22286 |           30678 |  0.099954180419445 | {3292.49709827422,3293.25529747001,3294.01371862107,3294.7723617274,3295.53100483373}
  7729 | 1-22286 |           30678 |  0.099954180419445 | {3292.49709827422,3293.25529747001,3294.01371862107,3294.7723617274,3295.53100483373}
 11209 | 1-22298 |           15026 | 0.0614774264395237 | {3411.84452637247,3412.63020900144,3413.41612163118,3414.2022642617,3414.98840689221}
  7727 | 1-22298 |           15026 | 0.0614774264395237 | {3411.84452637247,3412.63020900144,3413.41612163118,3414.2022642617,3414.98840689221}
 11219 | 1-22301 |              35 |  0.105152934789658 | {3277.00884941768,3277.76348196558,3278.51833542465,3279.27340979488,3280.02848416512}
(5 rows)

In SQLA,  this code returns something but it is the wrong array for the 
given Cube specified in the filter condition

In [20]: rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid == '1-113520')

In [21]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s

In [22]: rwave = rwquery.first()

In [23]: rwave[0][0:10]
Out[23]:
[3292.49709827422,
 3293.25529747001,
 3294.01371862107,
 3294.7723617274,
 3295.53100483373,
 3296.28986989532,
 3297.04895691218,
 3297.80826588431,
 3298.56779681171,
 3299.32732773911]


I do rwquery.first() here instead of .one() or .all() because in my 
database I actually have different versions of the same object, 6 
versions, which I don't care about.  But that's fine, I can add additional 
filters later.  In the code setup I sent you, there is only one version of 
each cube object.   The above result gets produced with the @expression

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength) / (1 + nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

    return restwave

I don't need any limits, and I think I need 4 selects: the one in the query 
I run explicitly, s.query(Cube.restwave), and three inside the @expression. 
When I grab the actual cube instance object and access restwave on the 
instance side, it works just fine. 



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Ok. Here is my test file.  I tried to set it up as much as I could, but I 
don't normally set up my db and sessions this way, so you may have to hack 
a bit here and there to finish some setup.  My original setup has classes 
from two different schema.  I don't know if that makes any difference. 
 I've also added some lines to add objects into the tables that I think you 
can use to recreate my problem.   Let me know if you need anything more in 
this file. 

import numpy as np
import sqlalchemy
from sqlalchemy import ForeignKey, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, deferred, sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.dialects.postgresql import *
from sqlalchemy.types import Float, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property


Base = declarative_base()
Session = sessionmaker()


class Target(Base):
__tablename__ = 'target'
__table_args__ = {'autoload': True, 'schema': 'sampledb'}

id = Column(Integer, primary_key=True)
mangaid = Column(String)


class Wavelength(Base):
__tablename__ = 'wavelength'
__table_args__ = {'autoload': True, 'schema': 'datadb'}

pk = Column(Integer, primary_key=True)
wavelength = deferred(Column(ARRAY(Float, zero_indexes=True)))

def __repr__(self):
return ''.format(self.pk)


class NSA(Base):
__tablename__ = 'nsa'
__table_args__ = {'autoload': True, 'schema': 'sampledb'}

z = Column(Float)

def __repr__(self):
return ''.format(self.pk, self.nsaid)


class TargetToNSA(Base):
__tablename__ = 'target_to_nsa'
__table_args__ = {'autoload': True, 'schema': 'sampledb'}

target_pk = Column(Integer, ForeignKey('target.pk'))
nsa_pk = Column(Integer, ForeignKey('nsa.pk'))

def __repr__(self):
return ''.format(self.pk)

# Relationships
NSA.targets = relationship(Target, backref='NSA_objects', secondary=TargetToNSA.__table__)


class Cube(Base):
__tablename__ = 'cube'
__table_args__ = {'autoload': True, 'schema': 'datadb'}

pk = Column(Integer, primary_key=True)
mangaid = Column(String)

target_pk = Column(Integer, ForeignKey('target.pk'))
target = relationship(Target, backref='cubes')

wavelength_pk = Column(Integer, ForeignKey('wavelength.pk'))
wavelength = relationship(Wavelength, backref='cubes')

@hybrid_property
def restwave(self):
if self.target:
redshift = self.target.NSA_objects[0].z
wave = np.array(self.wavelength.wavelength)
restwave = wave/(1+redshift)
return restwave
else:
return None

@restwave.expression
def restwave(cls):
session = Session.object_session(self)
nsaz = session.query(NSA.z.label('z')).\
join(TargetToNSA, Target, Cube).\
filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

return restwave


session = Session()

# Add objects
session.begin()
wave = Wavelength(wavelength=[3950.00, 4000.00, 5100.00, 5200.00, 6212.00, 8348.00])
session.add(wave)
nsa = [NSA(z=0.0543), NSA(z=1.234567), NSA(z=0.0167534563)]
session.add_all(nsa)
targets = [Target(mangaid='1-234567', NSA_objects=nsa[0]),
   Target(mangaid='1-113520', NSA_objects=nsa[2]),
   Target(mangaid='1-367842', NSA_objects=nsa[1])]
session.add_all(targets)
session.flush()
cubes = [Cube(wavelength=wave, mangaid='1-987345'),
 Cube(wavelength=wave, mangaid='1-234567', target=targets[0]),
 Cube(wavelength=wave, mangaid='1-367842', target=targets[2]),
 Cube(wavelength=wave, mangaid='1-113520', target=targets[1])]
session.commit()


q = session.query(Cube.restwave).filter(Cube.mangaid == '1-113520')

print q





Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Awesome. Thanks.  Ok. I'll work on it again, and get back to you as soon as 
I can.  

On Tuesday, August 2, 2016 at 3:57:10 PM UTC-4, Mike Bayer wrote:
>
> What I need is a complete .py file that sets up a *minimal* version of 
> *every* class required, then the Query object, then prints it.   I'll 
> mangle it to do the right thing. 
>
> Like this: 
>
> from sqlalchemy import * 
> from sqlalchemy.orm import * 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
>
> class A(Base): 
>  __tablename__ = 'a' 
>  id = Column(Integer, primary_key=True) 
>  bs = relationship("B") 
>
> class B(Base): 
>  __tablename__ = 'b' 
>  id = Column(Integer, primary_key=True) 
>  a_id = Column(ForeignKey('a.id')) 
>
>
> s = Session() 
>
> q = s.query(A).join(B) 
>
> print q 
>
>
>
>
>
>
>
> On 08/02/2016 02:45 PM, Brian Cherinka wrote: 
> > So I managed to get something to return using this definition of the 
> > @expression, however, I'm not quite there yet. 
> > 
> > |@hybrid_property 
> > def restwave(self): 
> > if self.target: 
> > redshift = self.target.NSA_objects[0].z 
> > wave = np.array(self.wavelength.wavelength) 
> > restwave = wave/(1+redshift) 
> > return restwave 
> > else: 
> > return None 
> > 
> > @restwave.expression 
> > def restwave(cls): 
> > session = db.Session() 
> > nsaz = session.query(sampledb.NSA.z.label('z')).\ 
> > join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, 
> Cube).\ 
> > 
> > filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True) 
> > 
> > unwave = 
> session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave',
>  
> with_labels=True) 
> > restwave = 
> session.query(func.array_agg(unwave.c.restw)).as_scalar() 
> > 
> > 
> > return restwave| 
> > 
> > Using the example query, 
> > 
> > | 
> > rwquery 
> > 
> =session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520') 
>
> > rw =rwquery.first() 
> > | 
> > 
> > I am getting a modified wavelength array, but it's the wrong one, using 
> > the wrong ID.  For the ID 1-113520 I should be modifying the wavelength 
> > array by (1+0.016765) and instead it's just grabbing the very first 
> > value in the NSA.z column, which corresponds to (1+0.099954).  I think 
> > this is because my filter condition is not getting passed into the nsaz 
> > subquery, where it needs to go.   Do you know how I can pass filter 
> > condition parameters down into any subqueries I may have in me 
> > expression select statements?  Is that what .correlate() does? 
> > 
> > My query looks like 
> > 
> > | 
> > In[24]:printrwquery 
> > SELECT (SELECT array_agg(unwave.restw)AS array_agg_1 
> > FROM (SELECT unnest(mangadatadb.wavelength.wavelength)/(%(z_1)s 
> > +nsaz.z)AS restw 
> > FROM mangadatadb.wavelength,(SELECT mangasampledb.nsa.z AS z 
> > FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON 
> > mangasampledb.nsa.pk =mangasampledb.manga_target_to_nsa.nsa_pk JOIN 
> > mangasampledb.manga_target ON mangasampledb.manga_target.pk 
> > =mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube 
> > ON mangasampledb.manga_target.pk =mangadatadb.cube.manga_target_pk 
> > WHERE mangadatadb.cube.mangaid =mangadatadb.cube.mangaid)AS nsaz)AS 
> > unwave)AS anon_1 
> > FROM mangadatadb.cube 
> > WHERE mangadatadb.cube.mangaid =%(mangaid_1)s 
> > 
> > | 
> > 
> > 
> > 
>
>



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-02 Thread Brian Cherinka
So I managed to get something to return using this definition of the 
@expression, however, I'm not quite there yet.

@hybrid_property
def restwave(self):
    if self.target:
        redshift = self.target.NSA_objects[0].z
        wave = np.array(self.wavelength.wavelength)
        restwave = wave/(1+redshift)
        return restwave
    else:
        return None

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength) / (1 + nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

    return restwave

Using the example query, 

rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid == '1-113520')
rw = rwquery.first()

I am getting a modified wavelength array, but it's the wrong one, using the 
wrong ID.  For the ID 1-113520 I should be modifying the wavelength array 
by (1+0.016765), but instead it's just grabbing the very first value in the 
NSA.z column, which corresponds to (1+0.099954).  I think this is because 
my filter condition is not getting passed into the nsaz subquery, where it 
needs to go.   Do you know how I can pass filter condition parameters down 
into any subqueries I may have in my expression select statements?  Is that 
what .correlate() does?

My query looks like

In [24]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s







Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka
The @expression-as-column thing is a bit confusing, since in the correlated 
subquery example in the hybrid attribute section it looks like you are 
returning a select.  Does the .label() effectively turn it into a column?

class User(Base):
__tablename__ = 'user'
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable=False)

accounts = relationship("SavingsAccount", backref="owner")

@hybrid_property
def balance(self):
return sum(acc.balance for acc in self.accounts)

@balance.expression
def balance(cls):
return select([func.sum(SavingsAccount.balance)]).\
where(SavingsAccount.user_id==cls.id).\
label('total_balance')
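
As I understand the docs example above (a note, not something confirmed in this thread): label() wraps the SELECT as a named scalar subquery, and that labeled scalar SELECT acts as a column element. A short usage sketch against the User model above, assuming a session:

# the labeled scalar SELECT behaves like a column attribute, so it can be
# selected and used in filters just like a real column
q = session.query(User.name, User.balance).filter(User.balance > 1000)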




On Friday, July 29, 2016 at 5:29:45 PM UTC-4, Brian Cherinka wrote:
>
>
> Oh interesting.  I didn't know that about the @expression.  I'll play 
> around with the as_scalar() as well, and see if I can get something to 
> work.  
>
> class Wavelength(Base):
> __tablename__ = 'wavelength'
> __table_args__ = {'autoload': True, 'schema': 'mangadatadb', 
> 'extend_existing': True}
>
>
> wavelength = deferred(Column(ARRAY_D(Float, zero_indexes=True)))
>
>
> The wavelength table has a single row and single column, which is an 
> array.  
>
> The other table of interest would look something like 
>
> class NSA(Base):
> __tablename__ = 'nsa'
> __table_args__ = ({'autoload': True, 'schema': 'mangasampledb'})
>
>
> z = Column(Float)
>
>
> This table basically has a float column that corresponds to objects in the 
> main cube (object) table. Each float value is used to modify the array in 
> wavelength to a unique array for that object. 
>
> The Cube class joins to NSA via two tables that are just intermediate 
> linking tables for this purpose  Cube -> Table A -> Table AToB - > Table B 
> (NSA)
>
> class MangaTarget(Base):
> __tablename__ = 'manga_target'
> __table_args__ = {'autoload': True, 'schema': 'mangasampledb'}
>
>
> class MangaTargetToNSA(Base):
> __tablename__ = 'manga_target_to_nsa'
> __table_args__ = (
> ForeignKeyConstraint(['manga_target_pk'],
>  ['mangasampledb.manga_target.pk']),
> ForeignKeyConstraint(['nsa_pk'], ['mangasampledb.nsa.pk']),
> {'autoload': True, 'schema': 'mangasampledb'})
>
>
>  The rest can probably be hacked together.   Let me know if you need 
> anything else.  
>



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka

Oh interesting.  I didn't know that about the @expression.  I'll play 
around with the as_scalar() as well, and see if I can get something to 
work.  

class Wavelength(Base):
__tablename__ = 'wavelength'
__table_args__ = {'autoload': True, 'schema': 'mangadatadb', 
'extend_existing': True}


wavelength = deferred(Column(ARRAY_D(Float, zero_indexes=True)))


The wavelength table has a single row and single column, which is an array. 
 

The other table of interest would look something like 

class NSA(Base):
__tablename__ = 'nsa'
__table_args__ = ({'autoload': True, 'schema': 'mangasampledb'})


z = Column(Float)


This table basically has a float column that corresponds to objects in the 
main cube (object) table. Each float value is used to modify the array in 
wavelength to a unique array for that object. 

The Cube class joins to NSA via two tables that are just intermediate 
linking tables for this purpose: Cube -> Table A -> Table AToB -> Table B 
(NSA)

class MangaTarget(Base):
__tablename__ = 'manga_target'
__table_args__ = {'autoload': True, 'schema': 'mangasampledb'}


class MangaTargetToNSA(Base):
__tablename__ = 'manga_target_to_nsa'
__table_args__ = (
ForeignKeyConstraint(['manga_target_pk'],
 ['mangasampledb.manga_target.pk']),
ForeignKeyConstraint(['nsa_pk'], ['mangasampledb.nsa.pk']),
{'autoload': True, 'schema': 'mangasampledb'})


 The rest can probably be hacked together.   Let me know if you need 
anything else.  



[sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka

I'm trying to build a hybrid property / expression in one of my SQLA base 
classes, and the expression side requires several subqueries; I cannot 
for the life of me figure out the correct syntax.  I've tried many 
different versions / syntaxes for the expression, using the SQLAlchemy 
expression language and the ORM language, but I can't get any of it to 
work.  What am I missing here?  Here is my latest attempt. 

The setup is this.  I have a class called Cube, which is my base object. 
 For each object in that cube table, I need to grab an attribute from a 
table A and use that attribute to modify an array column from a second 
table B.  And I want to make this new array queryable, thus the 
hybridization.   

The equivalent raw SQL is 

select (select (array_agg(unwave.restw)) as restwarr from (select (unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as w) as unwave) 
from mangadatadb.cube as c 
join mangasampledb.manga_target as m on m.pk=c.manga_target_pk 
join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk 
join mangasampledb.nsa as n on n.pk=t.nsa_pk 
join mangadatadb.wavelength as w on w.pk=c.wavelength_pk 
limit 5;


The instance side is quite easy.  The class side is proving difficult.   
What is the proper way to write this?  I've read through the 
documentation countless times now and it hasn't helped much.  

class Cube(Base):
    @hybrid_property
    def restwave(self):
        if self.target:
            redshift = self.target.NSA_objects[0].z
            wave = np.array(self.wavelength.wavelength)
            restwave = wave/(1+redshift)
            return restwave
        else:
            return None

    @restwave.expression
    def restwave(cls):
        session = db.Session()
        nsaz = session.query(sampledb.NSA.z.label('z')).\
            join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
            filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
        unwave = session.query((func.unnest(Wavelength.wavelength) / (1 + nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
        restwave = session.query(func.array_agg(unwave.c.restw))

        return restwave


Trying 
session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520').all()

fails with 
AttributeError: 'Query' object has no attribute 'is_clause_element'


I've also tried this. 

@restwave.expression
def restwave(cls):
    unwave = select([(func.unnest(Wavelength.wavelength)/(1+sampledb.NSA.z)).label('restw')])
    restwave = select([func.array_agg(unwave.c.restw).label('restwarr')]).select_from(
        Cube.__table__.join(Wavelength.wavelength).
        join(sampledb.MangaTarget).
        join(sampledb.MangaTargetToNSA).
        join(sampledb.NSA)
    )
    return restwave


and this fails

In [6]: datadb.Cube.restwave
---
ArgumentError Traceback (most recent call last)
 in ()
> 1 datadb.Cube.restwave


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/ext/hybrid.pyc 
in __get__(self, instance, owner)
738 def __get__(self, instance, owner):
739 if instance is None:
--> 740 return self.expr(owner)
741 else:
742 return self.fget(instance)


/Users/Brian/Work/sdss/repo/sdss/sdss_python_module/marvin/python/sdss/
internal/database/utah/mangadb/DataModelClasses.pyc in restwave(cls)
454 # unwave = 
select([(func.unnest(Wavelength.wavelength)/(1+sampledb.NSA.z)).label('restw')])
455 # restwave = 
select([func.array_agg(unwave.c.restw).label('restwarr')]).select_from(
--> 456 # Cube.__table__.join(Wavelength.wavelength).
457 # join(sampledb.MangaTarget).
458 # join(sampledb.MangaTargetToNSA).


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable
.pyc in join(self, right, onclause, isouter)
350 """
351
--> 352 return Join(self, right, onclause, isouter)
353
354 def outerjoin(self, right, onclause=None):


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable.pyc
 
in __init__(self, left, right, onclause, isouter)
658 """
659 self.left = _interpret_as_from(left)
--> 660 self.right = _interpret_as_from(right).self_group()
661
662 if onclause is None:


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable
.pyc in _interpret_as_from(element)
 47 return insp.selectable
 48 except AttributeError:
---> 49 raise exc.ArgumentError("FROM expression expected")
 50
 51

ArgumentError: FROM expression expected




Re: [sqlalchemy] zero_indexes broken for 2d-arrays?

2016-05-23 Thread Brian Cherinka
Thanks Mike.  That fixed it!  
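
For the archive, a self-contained sketch of the corrected recipe in action; the ARRAY_D class is the one from the reply quoted below, with zero_indexes propagated, and the model here is a cut-down stand-in. It only compiles the expression, so no database is needed.

from sqlalchemy import Column, Float, Integer, type_coerce
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class ARRAY_D(ARRAY):
    # the recipe from the reply below: carry zero_indexes over when the
    # intermediate ARRAY type is rebuilt for the next dimension
    class Comparator(ARRAY.Comparator):
        def __getitem__(self, index):
            super_ = super(ARRAY_D.Comparator, self).__getitem__(index)
            if not isinstance(index, slice) and self.type.dimensions > 1:
                super_ = type_coerce(
                    super_,
                    ARRAY_D(
                        self.type.item_type,
                        dimensions=self.type.dimensions - 1,
                        zero_indexes=self.type.zero_indexes))
            return super_
    comparator_factory = Comparator


class EmLine(Base):
    __tablename__ = 'emline'
    pk = Column(Integer, primary_key=True)
    value = Column(ARRAY_D(Float, dimensions=2, zero_indexes=True))


# both 0-based Python indexes now pick up the +1 shift in the rendered SQL
print(EmLine.value[16][17].compile(dialect=postgresql.dialect()))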

On Monday, May 23, 2016 at 10:18:15 AM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/23/2016 10:12 AM, Brian Cherinka wrote: 
> > 
> > Hi, 
> > 
> > It seems like the ARRAY option zero_indexes=True is broken for 
> > 2-dimensional arrays.   Is this a bug that is fixed in 1.1?  I'm 
> > actually using the subclass ARRAY_D as a fix for the __getitem__ 
> > indexing.  It works for 1-D arrays. 
> > 
>
>
> if you're passing zero_indexes=True then that needs to be propagated to 
> the new ARRAY type being created inside of __getitem__.  The recipe 
> appears to miss this. 
>
> class ARRAY_D(ARRAY): 
>  class Comparator(ARRAY.Comparator): 
>  def __getitem__(self, index): 
>  super_ = super(ARRAY_D.Comparator, self).__getitem__(index) 
>  if not isinstance(index, slice) and self.type.dimensions > 1: 
>  super_ = type_coerce( 
>  super_, 
>  ARRAY_D( 
>  self.type.item_type, 
>  dimensions=self.type.dimensions - 1, 
>  zero_indexes=self.type.zero_indexes) 
>  ) 
>  return super_ 
>  comparator_factory = Comparator 
>
> > 
> > *1-D array* 
> > wave = Column(ARRAY_D(Float, zero_indexes=True)) 
> > SQL 
> > select w.wavelength[17] from datadb.wavelength as w; 
> >  wavelength 
> > ------------ 
> >  3634.96 
> > (1 row) 
> > 
> > ORM - instance and class side 
> > wave = session.query(datadb.Wavelength).first() 
> > wave.wavelength[16] 
> > 3634.96 
> > 
> > session.query(datadb.Wavelength.wavelength[16]).one() 
> > (3634.96) 
> > 
> > *2-D array* 
> > value = Column(ARRAY_D(Float, dimensions=2, zero_indexes=True)) 
> > SQL 
> > select e.value[17][18] from dapdb.emline as e limit 1; 
> >        value 
> > ------------------- 
> >  4.962736845652115 
> > 
> > ORM - instance and class side 
> > # correct value on instance side 
> > emline = session.query(dapdb.EmLine).first() 
> > emline.value[16][17] 
> > 4.962736845652115 
> > 
> > # expected correct indexing - wrong value 
> > session.query(dapdb.EmLine.value[16][17]).first() 
> > (4.8138361075679565) 
> > 
> > # both "1-indexed" - wrong value 
> > session.query(dapdb.EmLine.value[17][18]).first() 
> > (5.380134788537585) 
> > 
> > # first index is correct, but second is incremented by 1 - correct value 
> > session.query(dapdb.EmLine.value[16][18]).first() 
> > (4.962736845652115) 
> > 
> > Cheers, Brian 
>
>



[sqlalchemy] zero_indexes broken for 2d-arrays?

2016-05-23 Thread Brian Cherinka

Hi, 

It seems like the ARRAY option zero_indexes=True is broken for 
2-dimensional arrays.   Is this a bug that is fixed in 1.1?  I'm actually 
using the subclass ARRAY_D as a fix for the __getitem__ indexing.  It works 
for 1-D arrays.


*1-D array*
wave = Column(ARRAY_D(Float, zero_indexes=True))
SQL
select w.wavelength[17] from datadb.wavelength as w;
 wavelength
------------
 3634.96
(1 row)

ORM - instance and class side
wave = session.query(datadb.Wavelength).first()
wave.wavelength[16]
3634.96

session.query(datadb.Wavelength.wavelength[16]).one()
(3634.96)


*2-D array*
value = Column(ARRAY_D(Float, dimensions=2, zero_indexes=True))
SQL
select e.value[17][18] from dapdb.emline as e limit 1;

        value
-------------------
 4.962736845652115

ORM - instance and class side
# correct value on instance side
emline=session.query(dapdb.EmLine).first()
emline.value[16][17]
4.962736845652115

# expected correct indexing - wrong value
session.query(dapdb.EmLine.value[16][17]).first()
(4.8138361075679565)

# both "1-indexed" - wrong value
session.query(dapdb.EmLine.value[17][18]).first()
(5.380134788537585)

# first index is correct, but second is incremented by 1 - correct value
session.query(dapdb.EmLine.value[16][18]).first()
(4.962736845652115)

Cheers, Brian



Re: [sqlalchemy] return value of array at a specific index

2016-05-22 Thread Brian Cherinka
Thanks Mike.  That ARRAY_D class did the trick.  Thanks for pointing it 
out. 
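
For the archive, the one-off type_coerce workaround Mike mentions can also be sketched self-contained; it only builds and compiles the expression, no database needed, and the model below is a throwaway stand-in.

from sqlalchemy import Column, Integer, type_coerce
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class EmLine(Base):
    __tablename__ = 'emline'
    pk = Column(Integer, primary_key=True)
    value = Column(ARRAY(Integer, dimensions=2))


# pre-1.1 workaround: the first [] yields an Integer-typed expression, so
# coerce it back to ARRAY before applying the second index
expr = type_coerce(EmLine.value[4], ARRAY(Integer))[5]
print(expr.compile(dialect=postgresql.dialect()))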

On Sunday, May 22, 2016 at 11:52:11 PM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/22/2016 07:12 PM, Brian Cherinka wrote: 
> > 
> > What's the proper way to return in an ORM query the value of a Postgres 
> > array attribute at a given specific index within the array? 
> > 
> > I have a db table with a column called value, which is a 2d array, 
> > defined as REAL[][]. 
> > 
> > My ModelClass is defined as 
> > 
> > class EmLine(Base): 
> >     __tablename__ = 'emline' 
> >     __table_args__ = {'autoload': True, 'schema': 'dapdb'} 
> > 
> >     def __repr__(self): 
> >         return '
> > 
> >     value = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     ivar = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     mask = Column(ARRAY(Integer, dimensions=2, zero_indexes=True)) 
> > 
> > Pure SQL indexing an array works just fine 
> > select e.value[16][17] from dapdb.emline as e; 
> > 
> > But SQLAlchemy does not 
> > session.query(dapdb.EmLine.value[16][17]).first() 
> > 
> > returns the error 
> > NotImplementedError: Operator 'getitem' is not supported on this expression 
>
> this is a bug that's been fixed for 1.1.   It's detailed here: 
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#correct-sql-types-are-established-from-indexed-access-of-array-json-hstore
>  
>
> For multi-dimensional access, this can be worked around for a one-off 
> using type_coerce: 
>
>  >>> from sqlalchemy import type_coerce 
>  >>> type_coerce(c[4], ARRAY(Integer))[5] 
>
> There is also a generalized workaround created for the bug that you can 
> see at 
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3487#comment-20200804 . 
> It involves creation of an ARRAY subclass that does the right thing 
> within __getitem__.   That subclass can be a drop-in replacement for 
> regular ARRAY. 
>
>
>
>
>
>
>
>
> > 
> > I've tried defining a hybrid method/expression in my ModelClass, and 
> running 
> > session.query(dapdb.EmLine.singleat('value',16,17)).first() 
> > 
> > but I'm getting the same "getitem" error 
> > 
> > class EmLine(Base): 
> >     __tablename__ = 'emline' 
> >     __table_args__ = {'autoload': True, 'schema': 'mangadapdb'} 
> > 
> >     def __repr__(self): 
> >         return '
> > 
> >     value = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     ivar = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     mask = Column(ARRAY(Integer, dimensions=2, zero_indexes=True)) 
> > 
> >     @hybrid_method 
> >     def singleat(self, name, x, y): 
> >         param = self.__getattribute__(name) 
> >         return param[x][y] 
> > 
> >     @singleat.expression 
> >     def singleat(cls, name, x, y): 
> >         param = cls.__getattribute__(cls, name) 
> >         print(param, x, y) 
> >         return func.ARRAY(param)[x][y] 
> > 
> > In my singleat expression, I've tried a variety of returns:  return 
> > func.ARRAY(param)[x][y];  return param[x][y].  What's the proper syntax 
> > to match the actual SQL array indexing? 
> > 
> > 
> >
>



[sqlalchemy] return value of array at a specific index

2016-05-22 Thread Brian Cherinka

What's the proper way to return in an ORM query the value of a Postgres 
array attribute at a given specific index within the array?

I have a db table with a column called value, which is a 2d array, defined 
as REAL[][]. 

My ModelClass is defined as 

class EmLine(Base):
    __tablename__ = 'emline'
    __table_args__ = {'autoload': True, 'schema': 'dapdb'}

    def __repr__(self):
        return '


Re: [sqlalchemy] accessing base class on hybrid expressions from the class side

2016-05-10 Thread Brian Cherinka
Ok, thanks for the response.  What's the timeframe for the 1.1 release?  In 
the meantime, I will have a look into adding my own class_ attribute, or 
using the Comparator.  

I tried something like

setattr(datadb.Cube.plateifu, "class_", datadb.Cube.id.class_)

but it didn't seem to work.  But I'll dig a bit deeper.  If I can't get 
something working with a 1.0X release, I'll try the 1.1 in bitbucket. 
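
A sketch of one interim approach for the archive, under the assumption that what's really needed is a name-to-owning-class lookup rather than class_ itself: Mapper.all_orm_descriptors lists hybrids alongside regular attributes, so the owner can be found without touching the expression object at all. The model here is a cut-down stand-in, not the real Cube.

from sqlalchemy import Column, Integer, func, inspect
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()


class Cube(Base):
    __tablename__ = 'cube'
    pk = Column(Integer, primary_key=True)
    plate = Column(Integer)

    @hybrid_property
    def plateifu(self):
        return '{0}-'.format(self.plate)

    @plateifu.expression
    def plateifu(cls):
        return func.concat(cls.plate, '-')


def find_owner(attr_name, classes):
    # report which mapped class defines attr_name, whether it is a plain
    # column attribute or a hybrid property/expression
    for cls in classes:
        if attr_name in inspect(cls).all_orm_descriptors.keys():
            return cls
    return None


print(find_owner('plateifu', [Cube]))   # the hybrid is found on Cube
print(find_owner('plate', [Cube]))      # so is the ordinary column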


On Tuesday, May 10, 2016 at 2:32:22 PM UTC-4, Mike Bayer wrote:
>
> in 1.1 these hybrids will have the class_ attribute like other attributes. 
>
> Until then you can probably add your own class_ attribute to the object 
> which you are returning.   Also, using a custom Comparator class (see 
> the example in the hybrid docs) will also return an instrumented 
> attribute that should have a class_ attribute. 
>
> Or you could try using the 1.1 hybrid_property class yourself, it should 
> be compatible with 1.0.   The commits are illustrated in 
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3653 but you can probably 
> just use the hybrid.py straight from the git repository with 1.0. 
>
>
>
>
> On 05/10/2016 02:01 PM, Brian Cherinka wrote: 
> > 
> > I'm trying to build a query system where given a filter parameter name, 
> > I can figure out which DeclarativeBase class it is attached to.  I need 
> > to do this for a mix of standard InstrumentedAttributes and Hybrid 
> > Properties/Expressions. I have several Declarative Base classes with 
> > hybrid properties / expressions defined, in addition to the standard 
> > InstrumentedAttributes from the actual table. 
> >   mydb.dataModelClasses.Cube for example. 
> > 
> > For a standard attribute, I can access the class using the class_ 
> variable. 
> > 
> > Standard Attribute on the DeclarativeBase class Cube 
> > | 
> > type(datadb.Cube.id) 
> > sqlalchemy.orm.attributes.InstrumentedAttribute 
> > 
> > printdatadb.Cube.id.class_ 
> > mydb.DataModelClasses.Cube 
> > | 
> > 
> > What's the best way to retrieve this same information for a hybrid 
> > expression?  My expressions are other types, thus don't have the class_ 
> > attribute.  One example of my hybrid property defined in the Cube class 
> > 
> > | 
> > @hybrid_property 
> > defplateifu(self): 
> > return'{0}-{1}'.format(self.plate,self.ifu.name) 
> > 
> > @plateifu.expression 
> > defplateifu(cls): 
> > returnfunc.concat(Cube.plate,'-',IFUDesign.name) 
> > | 
> > 
> > | 
> > type(datadb.Cube.plateifu) 
> > sqlalchemy.sql.functions.concat 
> > | 
> > 
> > Since this property is now a function concat, what's the best way to 
> > retrieve the name of the class that this property is attached to, namely 
> > 'mydb.DataModelClasses.Cube'?  It doesn't seem to have a .class_ or 
> > .parent attribute.  Is there a way to add a new attribute onto my hybrid 
> > columns that let me access the parent class? 
> > 
> > I need to do this for a variety of hybrid properties/expressions, that 
> > are all constructed in unique ways.  This particular example is a 
> > function concat, however I have others that are of type 
> > sqlalchemy.sql.elements.BinaryExpression. 
> > 
> > Is there a way to generically do this no matter the type of hybrid 
> > expression I define? 
> > 
> > Thanks. 
> > 
> > -- 
>
>



[sqlalchemy] accessing base class on hybrid expressions from the class side

2016-05-10 Thread Brian Cherinka

I'm trying to build a query system where given a filter parameter name, I 
can figure out which DeclarativeBase class it is attached to.  I need to do 
this for a mix of standard InstrumentedAttributes and Hybrid 
Properties/Expressions. I have several Declarative Base classes with 
hybrid properties / expressions defined, in addition to the standard 
InstrumentedAttributes from the actual table.  mydb.dataModelClasses.Cube 
for example.

For a standard attribute, I can access the class using the class_ variable. 
 

Standard Attribute on the DeclarativeBase class Cube
type(datadb.Cube.id)
sqlalchemy.orm.attributes.InstrumentedAttribute

print datadb.Cube.id.class_
mydb.DataModelClasses.Cube

What's the best way to retrieve this same information for a hybrid 
expression?  My expressions are other types, thus don't have the class_ 
attribute.  One example of my hybrid property defined in the Cube class

@hybrid_property
def plateifu(self):
return '{0}-{1}'.format(self.plate, self.ifu.name)

@plateifu.expression
def plateifu(cls):
return func.concat(Cube.plate, '-', IFUDesign.name)

type(datadb.Cube.plateifu)
sqlalchemy.sql.functions.concat

Since this property is now a function concat, what's the best way to 
retrieve the name of the class that this property is attached to, namely '
mydb.DataModelClasses.Cube'?  It doesn't seem to have a .class_ or .parent 
attribute.  Is there a way to add a new attribute onto my hybrid columns 
that let me access the parent class?

I need to do this for a variety of hybrid properties/expressions that are 
all constructed in unique ways.  This particular example is a function 
concat; however, I have others that are of type 
sqlalchemy.sql.elements.BinaryExpression. 

Is there a way to generically do this no matter the type of hybrid 
expression I define?

Thanks. 
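For reference, one generic way to recover the owning class for both plain columns and hybrids is to build a lookup ahead of time from the mapper inspection API; Mapper.all_orm_descriptors includes hybrid properties alongside the regular InstrumentedAttributes.  A minimal sketch, with Cube and IFUDesign standing in for the real mapped classes:

from sqlalchemy import inspect

def build_owner_map(*classes):
    # map attribute name -> the mapped class that defines it (hybrids included)
    owners = {}
    for cls in classes:
        for name in inspect(cls).all_orm_descriptors.keys():
            owners.setdefault(name, cls)
    return owners

owners = build_owner_map(Cube, IFUDesign)
print(owners['plateifu'])   # -> the Cube class, even though plateifu is a hybrid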
  



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-03 Thread Brian Cherinka
Yeah, that might ultimately be the best way to go if things get too 
complicated.  I think people might not want to re-run several lines of 
code just to change some parameters, but that may be something I make them 
live with.

On Thursday, March 3, 2016 at 3:06:57 AM UTC-5, Ladislav Lenart wrote:
>
> Hello. 
>
> I think it would be (much) easier to simply rebuild the query from scratch 
> before each run. IMHO the time to build the query is not that big a factor 
> to 
> justify the added source code complexity. 
>
> HTH, 
>
> Ladislav Lenart 
>
>
> On 3.3.2016 05:47, Brian Cherinka wrote: 
> > 
> > 
> > well you need a list of names so from a mapped class you can get: 
> > 
> > for name in inspect(MyClass).column_attrs.keys(): 
> > if name in : 
> > q = q.filter_by(name = bindparam(name)) 
> > 
> > though I'd think if you're dynamically building the query you'd have 
> the 
> > values already, not sure how it's working out that you need 
> bindparam() 
> > at that stage... 
> > 
> >   
> > Ok.  I'll try this out. This looks like it could work.  I think I need 
> it for 
> > the cases where a user specifies a query with condition e.g. X < 10, 
> runs it, 
> > gets results.  Then they want to change the condition to X < 5 and rerun 
> the 
> > query.  As far as I know, if condition 2 gets added into the filter, you 
> would 
> > have both X < 10 and X < 5 in your filter expression.  Rather than a 
> single 
> > updated X < 5. 
> > 
> > What would be even more awesome is if there was a way to also update the 
> > operator in place as well.  So changing X < 10 to X > 10.   
> > 
> > 
> > 
>
>



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

Also, I've noticed that when you update the bindparams

q = q.params(x='1234')

and then try to print the whereclause, the parameters are not updated.  Yet 
in the statement, they are updated. 

print q.query.whereclause.compile(dialect=postgresql.dialect(), compile_kwargs={'literal_binds': True})
shows the old x condition

print q.query.statement.compile(dialect=postgresql.dialect(), compile_kwargs={'literal_binds': True})
shows the updated x='1234'

Is this a bug or does the whereclause need to be updated separately?  



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

>
>
>
> well you need a list of names so from a mapped class you can get: 
>
> for name in inspect(MyClass).column_attrs.keys(): 
> if name in : 
> q = q.filter_by(name = bindparam(name)) 
>
> though I'd think if you're dynamically building the query you'd have the 
> values already, not sure how it's working out that you need bindparam() 
> at that stage... 
>
>  
Ok, I'll try this out; it looks like it could work.  I think I need it for 
the case where a user specifies a query with a condition such as X < 10, 
runs it, and gets results, then wants to change the condition to X < 5 and 
rerun the query.  As far as I know, if the second condition simply gets added 
to the filter, you end up with both X < 10 and X < 5 in the filter expression, 
rather than a single updated X < 5. 

What would be even better is if there were also a way to update the operator 
in place, e.g. changing X < 10 to X > 10.  
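
For reference, a minimal sketch of the bindparam() pattern being discussed, with Sample and its column x as placeholders for the real model:

from sqlalchemy import bindparam

# build the filter once against a named bind parameter rather than a literal
q = session.query(Sample.pk).filter(Sample.x < bindparam('x_max'))

# re-bind the value per run instead of rebuilding the query
results_lt_10 = q.params(x_max=10).all()
results_lt_5 = q.params(x_max=5).all()

The operator itself is baked into the expression, though, so switching X < 10 to X > 10 still means constructing a new filter.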






Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
I'm essentially building a tool for a group that doesn't know SQLAlchemy, 
and doesn't want to know or care about it, or about SQL in general.  They 
need to be able to query a dataset using minimal input, caring only about 
the input filter conditions (e.g. X > 10).  Presumably they will need to be 
able to update and change these parameters without having to reset or 
rebuild the query.  The parameter list is also spread across multiple 
tables. 

My query builder is a mix of SQLAlchemy + WTForms-Alchemy, but the 
pseudocode is something like

q = Query()
q.createBaseQuery()          # sets the default:  query = session.query(ModelClass)
q.set_params(params=params)  # define and set the input parameters to search on,
                             # as a dict of {'parameter_name': 'operand value'}, e.g. {'X': '< 10'}
q.add_conditions()           # builds the parameters into one big filter and adds it
                             # to the query object, roughly:

myfilt = None
for param in parameters:
    if parameter_table not in join:
        query = query.join(parameter_table)

    # parse the input parameter into operand and value
    # newfilter = construct a new BinaryExpression based on the input

    if not myfilt:
        myfilt = and_(newfilter)
    else:
        myfilt = and_(myfilt, newfilter)

# if any filter exists, add it to the query
if myfilt is not None:
    query = query.filter(myfilt)

results = q.run()  # runs the SQL query and returns the results with either
                   # query.all(), query.one(), query.count(), etc.

So if they want to change the conditions they query with, as far as my 
limited understanding goes, they would either have to rebuild the query 
from scratch and reapply the filters, or modify the values inside the 
SQLAlchemy query object.  And it seems like bindparam is a nice way to 
allow flexible attribute changes without resetting the query.

Cheers, Brian
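
For reference, a minimal sketch of the add_conditions step under the assumed {'parameter_name': 'operand value'} input format (value typing/coercion and the multi-table join handling are left out; Sample is a placeholder class):

import operator
from sqlalchemy import and_

OPS = {'<': operator.lt, '<=': operator.le, '>': operator.gt,
       '>=': operator.ge, '==': operator.eq, '!=': operator.ne}

def build_filter(model, params):
    # params is e.g. {'x': '< 10'}; each condition becomes column <op> value
    clauses = []
    for name, condition in params.items():
        op_str, value = condition.split(None, 1)
        column = getattr(model, name)   # the InstrumentedAttribute on the class
        clauses.append(OPS[op_str](column, value))
    return and_(*clauses)

# query = query.filter(build_filter(Sample, {'x': '< 10'}))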

On Wednesday, March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to 
> be able to change the values afterwards? 
>
> Simon 
>
> > On 2 Mar 2016, at 21:59, Brian Cherinka <havo...@gmail.com > 
> wrote: 
> > 
> > Thanks, Mike.  This is excellent.  That did the trick.  That's much 
> easier than what I was trying to do.   Do you know if there is a way to 
> auto bindparam every parameter I have in my Declarative Bases, if and when 
> they get added into a filter?  Basically, I need to allow the user to be 
> able to modify any parameter they set after the fact, but I have a crazy 
> amount of parameters to explicitly do this for. 
> > 
> > Cheers, Brian 
> > 
> > On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote: 
> > 
> > 
> > 
> > 
>
>



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

I'm essentially building a tool for a large group of people who don't know 
SQLAlchemy, and will never care or want to know about SQLAlchemy or SQL in 
general.  They need to be able to build queries against the dataset from 
really minimal information: all they input are filter conditions of the form 
"parameter operand value", and the system builds the rest of the query.  They 
should be able to update these values without having to reset and rebuild the 
query.   

The actual query build is a strange generic combination of SQLAlchemy + 
WTForms-Alchemy, but the pseudocode is something like 

q = Query()
q.set_params(params=params)  # the input parameters they are searching on, e.g. {'X': '< 10'}
q.add_conditions()           # builds and adds the filter clauses, something like:

for parameter in parameters:
    if parameter table not in the join: 

On Wednesday, March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to 
> be able to change the values afterwards? 
>
> Simon 
>
> > On 2 Mar 2016, at 21:59, Brian Cherinka <havo...@gmail.com > 
> wrote: 
> > 
> > Thanks, Mike.  This is excellent.  That did the trick.  That's much 
> easier than what I was trying to do.   Do you know if there is a way to 
> auto bindparam every parameter I have in my Declarative Bases, if and when 
> they get added into a filter?  Basically, I need to allow the user to be 
> able to modify any parameter they set after the fact, but I have a crazy 
> amount of parameters to explicitly do this for. 
> > 
> > Cheers, Brian 
> > 
> > On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote: 
> > 
> > 
> > 
> > 
>
>



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
Thanks, Mike.  This is excellent.  That did the trick.  That's much easier 
than what I was trying to do.   Do you know if there is a way to auto 
bindparam every parameter I have in my Declarative Bases, if and when they 
get added into a filter?  Basically, I need to allow the user to be able to 
modify any parameter they set after the fact, but I have a crazy amount of 
parameters to explicitly do this for.

Cheers, Brian

On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote:
>
>
>
>
>



[sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
Hi, 

After a query has been constructed with some filter conditions applied, but 
before the query has been run, what's the best way to replace the attribute 
in the filter clause?

Let's say I have a query like this

q = session.query(Cube).join(Version).filter(Version.version == 'v1_5_1')

and I can print the filters with and without the values bound

print q.whereclause
version.version = :version_1

q.whereclause.compile(compile_kwargs={'literal_binds':True})
version.version = 'v1_5_1'

What's the best way to replace the attribute with a new parameter, like 
version = 'v1_3_3'?

For a single condition, the type of whereclause is a BinaryExpression, and 
I figured out that left, operator, and right, get me the left-hand, 
right-hand side of the clause and the operator.  And I can modify the value 
with 

q.whereclause.right.value='v1_3_3'

print q.whereclause.compile(compile_kwargs={'literal_binds':True})
version.version = 'v1_3_3'

Is this the best way?  

My bigger problem is I have a list of clauses in my query

t = session.query(Cube).join(Version,Sample).filter(Version.version == 
'v1_5_1', Sample.x < 10)

now the t.whereclause is a BooleanClauseList and I can't figure out how to 
iterate over this list, such that I can do the above, and modify the 
version value in place.   What's the best way to do this?  I can't find the 
proper location in the documentation that describes a BooleanClauseList. 
 Searching for it returns 0 results. 

Thanks. 
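
For reference, a BooleanClauseList keeps its members in a .clauses sequence, so the individual conditions can at least be inspected; mutating the bound values in place is fragile, and the bindparam()/Query.params() approach discussed in the replies above is the more supported route.  A minimal sketch against the query t built above:

from sqlalchemy.sql.elements import BinaryExpression

wc = t.whereclause   # a BooleanClauseList when there are multiple conditions
for clause in wc.clauses:
    if isinstance(clause, BinaryExpression):
        print(clause.left, clause.operator, clause.right.value)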



[sqlalchemy] relationship between declarative base and automap base

2016-02-19 Thread Brian Cherinka
Hi.  

I have two database schemas, with a table from each that I would like to 
join.  The classes for one schema have been created as explicit declarative 
Bases, while the classes for the other were all created via an automap Base. 
 I have a foreign key joining the two tables.  SQLAlchemy sees the foreign 
key, yet does not recognize or use it.  Upon import of my classes, I'm 
getting this error

NoForeignKeysError: Could not determine join condition between parent/child 
tables on relationship Cube.target - there are no foreign keys linking 
these tables.  Ensure that referencing columns are associated with a 
ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.

datadb.Cube is my declarative base and sampledb.MangaTarget is an automap 
base.  Here are the foreign keys on datadb.Cube
  
In [4]: datadb.Cube.__table__.foreign_keys
Out[4]:
{ForeignKey(u'mangadatadb.ifudesign.pk'),
 ForeignKey(u'mangasampledb.manga_target.pk'),
 ForeignKey(u'mangadatadb.pipeline_info.pk'),
 ForeignKey(u'mangadatadb.wavelength.pk')}

I've created the relationship via 

Cube.target = relationship(sampledb.MangaTarget, backref='cubes')


I've also tried explicitly adding a foreign_keys=[Cube.manga_target_pk] 
here, but that didn't work either.  

I initially created the constraint in my schema table with 

ALTER TABLE ONLY mangadatadb.cube
ADD CONSTRAINT manga_target_fk
FOREIGN KEY (manga_target_pk) REFERENCES mangasampledb.manga_target(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

Can a relationship be created on a foreign key between a declarative base 
class and an automap base class?  

I tested out the relationships with an explicit declarative Base class for 
MangaTarget and everything works perfectly.  However, explicitly declaring 
all the tables in my sampledb schema is not really an option, 
unfortunately. I'm at a loss here.

If it should be possible, is there a procedure somewhere documented on how 
to get that working?

Thanks for any help.

Cheers, Brian
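
For reference, one thing worth trying (a sketch only, assuming the Cube.manga_target_pk and automapped MangaTarget.pk attributes shown above) is to spell the join out explicitly so the ORM does not have to discover the cross-schema foreign key on its own:

from sqlalchemy.orm import relationship

# pass the join condition and FK column explicitly, using column attributes
# from both classes rather than strings, so neither registry has to resolve
# the other's class name
Cube.target = relationship(
    sampledb.MangaTarget,
    primaryjoin=Cube.manga_target_pk == sampledb.MangaTarget.pk,
    foreign_keys=[Cube.manga_target_pk],
    backref='cubes',
)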



[sqlalchemy] Re: proper attribute names for many-to-many relationships using automap

2016-02-16 Thread Brian Cherinka
Yeah, I realized I have somewhat nuanced relationships for automap to 
handle, and that in all the time I spent trying to get automap working for 
me, I could have written my classes with an explicit declarative base.  So 
that's what I ended up doing.  I was hoping to bypass that a bit, since I 
have a large number of tables to declare.  Thanks for your explanations and 
help; I appreciate it.  

On Sunday, February 14, 2016 at 5:01:19 PM UTC-5, Brian Cherinka wrote:
>
> What is the proper way to get pluralized shortened names for many-to-many 
> tables when using automap?  I currently have it set to generate pluralized 
> lowercase names for collections instead of the default "_collection".  This 
> is what I want for one-to-many or many-to-one relationships, but not 
> many-to-many.  For example, I have two tables, hdu, and extcol, joined 
> together through a many-to-many table, hdu_to_extcol
>
> create table hdu (pk serial primary key not null, extname_pk integer, 
> exttype_pk integer, extno integer, file_pk integer);
> create table hdu_to_extcol (pk serial primary key not null, hdu_pk integer
> , extcol_pk integer);
> create table extcol (pk serial primary key not null, name text);
>
> ALTER TABLE ONLY mangadapdb.hdu_to_extcol
> ADD CONSTRAINT hdu_fk
> FOREIGN KEY (hdu_pk) REFERENCES mangadapdb.hdu(pk)
> ON UPDATE CASCADE ON DELETE CASCADE;
>
> ALTER TABLE ONLY mangadapdb.hdu_to_extcol
> ADD CONSTRAINT extcol_fk
> FOREIGN KEY (extcol_pk) REFERENCES mangadapdb.extcol(pk)
> ON UPDATE CASCADE ON DELETE CASCADE;
>
> When I use SQLalchemy to automap the Base classes, the relationship this 
> generates on the Hdu and Extcol classes are *Hdu.hdu_to_extcol, and 
> Extcol.hdu_to_extcols*, using the below pluralize, and relationship, 
> code.  However, ideally what I'd like the names to be are *Hdu.extcols, 
> and Extcol.hdus*, respectively.  What's the best to generate this for 
> these many-to-many tables?   I'm not sure if automap is recognizing these 
> as many-to-many tables.  The direction indicated when I print during the 
> relationship stage don't indicate as such.
>
> symbol('ONETOMANY')  extcol
> symbol('MANYTOONE')  hdu_to_extcol
> symbol('ONETOMANY')  hdu
> symbol('MANYTOONE')  hdu_to_extcol
>
> Here is my Base class generation code. 
>
> def _gen_relationship(base, direction, return_fn, attrname, local_cls, 
> referred_cls, **kw):
> if local_cls.__table__.name in onetoones:
> kw['uselist'] = False
> # make use of the built-in function to actually return the result.
>
> return generate_relationship(base, direction, return_fn, attrname, 
> local_cls, referred_cls, **kw)
>
> _pluralizer = inflect.engine()
> def pluralize_collection(base, local_cls, referred_cls, constraint):
> referred_name = referred_cls.__name__
> uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
> referred_name)[1:]
> pluralized = _pluralizer.plural(uncamelized)
> return pluralized
>
> # Grabs engine
> db = DatabaseConnection()
> engine = db.engine
>
> # Selects schema and automaps it.
> metadata = MetaData(schema='mangadapdb')
> Base = automap_base(bind=engine, metadata=metadata)
> Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName, 
> name_for_collection_relationship=pluralize_collection, 
> generate_relationship=_gen_relationship)
>
> # Explicitly declare classes
> for cl in Base.classes.keys():
> exec('{0} = Base.classes.{0}'.format(cl))
>
>
>
>
>



[sqlalchemy] proper attribute names for many-to-many relationships using automap

2016-02-14 Thread Brian Cherinka
What is the proper way to get pluralized shortened names for many-to-many 
tables when using automap?  I currently have it set to generate pluralized 
lowercase names for collections instead of the default "_collection".  This 
is what I want for one-to-many or many-to-one relationships, but not 
many-to-many.  For example, I have two tables, hdu, and extcol, joined 
together through a many-to-many table, hdu_to_extcol

create table hdu (pk serial primary key not null, extname_pk integer, 
exttype_pk integer, extno integer, file_pk integer);
create table hdu_to_extcol (pk serial primary key not null, hdu_pk integer, 
extcol_pk integer);
create table extcol (pk serial primary key not null, name text);

ALTER TABLE ONLY mangadapdb.hdu_to_extcol
ADD CONSTRAINT hdu_fk
FOREIGN KEY (hdu_pk) REFERENCES mangadapdb.hdu(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

ALTER TABLE ONLY mangadapdb.hdu_to_extcol
ADD CONSTRAINT extcol_fk
FOREIGN KEY (extcol_pk) REFERENCES mangadapdb.extcol(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

When I use SQLAlchemy to automap the Base classes, the relationships this 
generates on the Hdu and Extcol classes are *Hdu.hdu_to_extcol* and 
*Extcol.hdu_to_extcols*, using the pluralize and relationship code below. 
 However, ideally what I'd like the names to be are *Hdu.extcols* and 
*Extcol.hdus*, respectively.  What's the best way to generate this for these 
many-to-many tables?   I'm not sure automap is recognizing these as 
many-to-many tables; the directions printed during the relationship stage 
don't indicate as much.

symbol('ONETOMANY')  extcol
symbol('MANYTOONE')  hdu_to_extcol
symbol('ONETOMANY')  hdu
symbol('MANYTOONE')  hdu_to_extcol

Here is my Base class generation code. 

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)

_pluralizer = inflect.engine()
def pluralize_collection(base, local_cls, referred_cls, constraint):
    referred_name = referred_cls.__name__
    uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
                         referred_name)[1:]
    pluralized = _pluralizer.plural(uncamelized)
    return pluralized

# Grabs engine
db = DatabaseConnection()
engine = db.engine

# Selects schema and automaps it.
metadata = MetaData(schema='mangadapdb')
Base = automap_base(bind=engine, metadata=metadata)
Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName,
             name_for_collection_relationship=pluralize_collection,
             generate_relationship=_gen_relationship)

# Explicitly declare classes
for cl in Base.classes.keys():
    exec('{0} = Base.classes.{0}'.format(cl))
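
For reference, if the automap naming hooks don't yield Hdu.extcols / Extcol.hdus directly, one fallback (a sketch, assuming Base.prepare() above has reflected the association table under its schema-qualified name) is to declare the many-to-many collection yourself afterwards:

from sqlalchemy.orm import relationship

hdu_to_extcol_table = Base.metadata.tables['mangadapdb.hdu_to_extcol']
Hdu.extcols = relationship(Extcol, secondary=hdu_to_extcol_table, backref='hdus')

This coexists with whatever relationships automap already generated through the association class, so there is some redundancy, but it gives the desired attribute names directly.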






[sqlalchemy] best way to declare one-to-one relationships with automap and non-explicit relationships

2016-02-14 Thread Brian Cherinka
Hi, 

I'm trying to automap a schema and let automap generate all the classes and 
relationships between my tables.  It seems to work well for all 
relationships except one-to-one.  I know that to set up a one-to-one 
relationship, you must apply the uselist=False keyword in relationship(). 
 What's the best way to do that when I'm letting automap generate them, 
without having to manually remove the automap-generated relationships and 
set them explicitly?  Here is what I'm trying so far, but it's not working. 

onetoones = ['file']

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)

def camelizeClassName(base, tablename, table):
    return str(tablename[0].upper() + re.sub(r'_([a-z])', lambda m:
               m.group(1).upper(), tablename[1:]))

_pluralizer = inflect.engine()
def pluralize_collection(base, local_cls, referred_cls, constraint):
    referred_name = referred_cls.__name__
    uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
                         referred_name)[1:]
    pluralized = _pluralizer.plural(uncamelized)
    return pluralized

# Grabs engine
db = DatabaseConnection()
engine = db.engine

# Selects schema and automaps it.
metadata = MetaData(schema='mangadapdb')
Base = automap_base(bind=engine, metadata=metadata)

# Pre-define the Dap class.  Necessary so automap knows to join this table to a
# declarative base class from another schema.
class Dap(Base):
    __tablename__ = 'dap'

    cube_pk = Column(Integer, ForeignKey(datadb.Cube.pk))
    cube = relationship(datadb.Cube, backref='dap', uselist=False)

# Prepare the base
Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName,
             name_for_collection_relationship=pluralize_collection,
             generate_relationship=_gen_relationship)

# Explicitly declare classes
for cl in Base.classes.keys():
    exec('{0} = Base.classes.{0}'.format(cl))
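
For reference, the direction argument only takes the ONETOMANY / MANYTOONE / MANYTOMANY symbols, and for a one-to-one it is the ONETOMANY side that needs uselist=False.  A variant worth trying (a sketch; whether to test local_cls or referred_cls depends on which table carries the foreign key):

from sqlalchemy.orm import interfaces
from sqlalchemy.ext.automap import generate_relationship

onetoones = ['file']   # tables whose collection side should collapse to a scalar

def _gen_relationship(base, direction, return_fn, attrname,
                      local_cls, referred_cls, **kw):
    # the collection lives on the ONETOMANY relationship, so also check the
    # class being referred to (the table holding the foreign key) on that side
    if direction is interfaces.ONETOMANY and referred_cls.__table__.name in onetoones:
        kw['uselist'] = False
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)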



[sqlalchemy] Re: best way to declare one-to-one relationships with automap and non-explicit relationships

2016-02-14 Thread Brian Cherinka


>
>
> you'd need to implement a generate_relationship function as described at 
> http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#custom-relationship-arguments
>  
> which applies the rules you want in order to establish those relationships 
> that you'd like to be one-to-one.
>
>  
Yeah, that's what I tried to do here, but it appeared to do nothing.  That 
documentation isn't entirely clear, I'm afraid.  The direction input doesn't 
have an interfaces.ONETOONE option, so I tried to find an alternative way 
of identifying which tables needed a one-to-one relationship.  Having a 
look, I don't think this function works as-is, because I don't define the 
pair of tables (from local_cls, referred_cls) that are in a one-to-one; I 
only define one of them in my onetoones list.  

onetoones = ['file']

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)
 

>  
>
>  Here is what I'm trying so far, but it's not working. 
>>
>
>
> I don't see anything obviously wrong with it but you'd want to step 
> through with pdb.set_trace() to ensure every aspect of it is doing what 
> you'd expect.   Otherwise "not working" can mean lots of things.
>
>  
Ok.  Well I'll keep digging around.  
 

>  
>
>>  
>>  
>>
>
>
>> onetoones = ['file']
>>
>> def _gen_relationship(base, direction, return_fn, attrname, local_cls, 
>> referred_cls, **kw):
>> if local_cls.__table__.name in onetoones:
>> kw['uselist'] = False
>> # make use of the built-in function to actually return the result.
>> return generate_relationship(base, direction, return_fn, attrname, 
>> local_cls, referred_cls, **kw)
>>
>> def camelizeClassName(base, tablename, table):
>> return str(tablename[0].upper() + re.sub(r'_([a-z])', lambda m: 
>> m.group(1).upper(), tablename[1:]))
>>
>> _pluralizer = inflect.engine()
>> def pluralize_collection(base, local_cls, referred_cls, constraint):
>> referred_name = referred_cls.__name__
>> uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(), 
>> referred_name)[1:]
>> pluralized = _pluralizer.plural(uncamelized)
>> return pluralized
>>
>> # Grabs engine
>> db = DatabaseConnection()
>> engine = db.engine
>>
>> # Selects schema and automaps it.
>> metadata = MetaData(schema='mangadapdb')
>> Base = automap_base(bind=engine, metadata=metadata)
>>
>> # Pre-define Dap class.  Necessary so automap knows to join this table to 
>> a declarative base class from another schema
>> class Dap(Base):
>> __tablename__ = 'dap'
>>
>> cube_pk = Column(Integer, ForeignKey(datadb.Cube.pk))
>> cube = relationship(datadb.Cube, backref='dap', uselist=False)
>>
>> # Prepare the base
>> Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName, 
>> name_for_collection_relationship=pluralize_collection, 
>> generate_relationship=_gen_relationship)
>>
>> # Explicitly declare classes
>> for cl in Base.classes.keys():
>> exec('{0} = Base.classes.{0}'.format(cl))
>>
>>



Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-19 Thread Brian Cherinka
Hmm.  So I removed the column definitions and it worked just fine.  This 
doesn't make any sense to me, since this is essentially my original class 
definition, and now it works.  Maybe I had a typo before?  But it didn't 
look like it.  I'm using a PostgreSQL database, and my table definition is 
very straightforward.  It looks like 

CREATE TABLE datadb.sample (pk serial PRIMARY KEY NOT NULL, tileid INTEGER, 
nsa_mstar REAL, nsa_id INTEGER,   bunch of other column definitions ) 

No, my table does not have an nsa_logmstar column, only nsa_mstar.  I create 
the property in my Sample class definition.   This is very strange.  

Brian

On Friday, January 15, 2016 at 6:31:23 PM UTC-5, Simon King wrote:
>
> You shouldn’t need to define the columns. Here’s another test script: 
>
> ### 
> import math 
>
> import sqlalchemy as sa 
> import sqlalchemy.orm as saorm 
> from sqlalchemy.ext.hybrid import hybrid_property 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
> engine = sa.create_engine('sqlite:///hybridtest.db') 
>
> engine.execute(""" 
> CREATE TABLE sample ( 
> pk INTEGER NOT NULL, 
> nsa_mstar FLOAT, 
> PRIMARY KEY (pk) 
> ) 
> """) 
>
> class Sample(Base): 
> __tablename__ = 'sample' 
> __table_args__ = {'autoload' : True, 'autoload_with': engine} 
>
> @hybrid_property 
> def nsa_logmstar(self): 
> try: 
> return math.log10(self.nsa_mstar) 
> except ValueError: 
> return -.0 
> except TypeError: 
> return None 
>
> @nsa_logmstar.expression 
> def nsa_logmstar(cls): 
> return sa.func.log(cls.nsa_mstar) 
>
>
> if __name__ == '__main__': 
> sm = saorm.sessionmaker() 
> session = sm() 
> print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9) 
> ### 
>
> What database are you using, and what is your SQL table definition? Does 
> your table already have a nsa_logmstar column? (I don’t think that should 
> matter, but it would be worth checking) 
>
> Simon 
>
>
> > On 15 Jan 2016, at 22:27, Brian Cherinka <havo...@gmail.com 
> > wrote: 
> > 
> > It looks like I needed to define the columns inside my class.  That's 
> the only difference between your class and mine.  And I tested out the 
> query and it now works, and returns the correct number of rows. 
> > 
> > In [4]: print 
> > session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar 
> < 9) 
> > 
> > SELECT datadb.sample.pk AS datadb_sample_pk 
> > FROM datadb.sample 
> > WHERE log(datadb.sample.nsa_mstar) < %(log_1)s 
> > 
> > In [6]: 
> > len(session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar 
> < 9,datadb.Sample.nsa_mstar > 0).all()) 
> > Out[6]: 273 
> > 
> > Do you have any idea why the column definition matters here?  Thanks for 
> all your help. 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 5:02:03 PM UTC-5, Brian Cherinka wrote: 
> > Here is the print immediately after my original class definition: 
> > 
> > print 'sample nsa log mstar', 
> Sample.nsa_logmstar 
> > 
> > and the result 
> > 
> > sample nsa log mstar None 
> > 
> > When I run your script exactly as is, I get the same output as you.   
> > 
> > When I replace my class definition with yours, inside my code, as 
> follows 
> > 
> > class Sample(Base): 
> > __tablename__ = 'sample' 
> > __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  (I 
> needed to add this line in) 
> > 
> > pk = Column(Integer, primary_key=True) 
> > nsa_mstar = Column(Float) 
> > 
> > @hybrid_property 
> > def nsa_logmstar(self): 
> > try: 
> > return math.log10(self.nsa_mstar) 
> > except ValueError: 
> > return -.0 
> > except TypeError: 
> > return None 
> > 
> > @nsa_logmstar.expression 
> > def nsa_logmstar(cls): 
> > return func.log(cls.nsa_mstar) 
> > 
> > now the print statement :  print 'sample nsa log mstar', 
> Sample.nsa_logmstar 
> > returns 
> > 
> > sample nsa log mstar log(mangadatadb.sample.nsa_mstar) 
> > 
> > 
> > On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote: 
> > Does my test script produce the right output for you in your 
> installation? 
>

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Ahh.  Thanks.  Here is the class side then.  Still None.

In [14]: print datadb.Sample.nsa_logmstar
None

Brian

On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote:
>
> "Sample()" is an instance. "Sample" is the class. Try:
>
> print datadb.Sample.nsa_logmstar
>
> Simon
>
> On Fri, Jan 15, 2016 at 1:46 PM, Brian Cherinka <havo...@gmail.com 
> > wrote:
>
>> Hi Simon, 
>>
>> Printing on the class side, I get 
>>
>> In [11]: print datadb.Sample().nsa_logmstar
>> None
>>
>> It looks like it's getting set to None (or remaining None).  I'm not 
>> quite sure what this tells me, except that it's not working.  Printing on 
>> in the instance side, I get
>>
>> In [12]: print cube.sample[0].nsa_mstar
>> 138616.0
>>
>> In [13]: print cube.sample[0].nsa_logmstar
>> 9.14181336239
>>
>> nsa_mstar is a column in my database table, and nsa_logmstar I want to be 
>> simply the log-base10 of that quantity.  
>>
>> If this doesn't give any insight, then it will take me some time to 
>> provide a small script.  This code is embedded into a bunch of stuff.  But 
>> I'll work on it.  
>>
>> Brian
>>
>>
>> On Friday, January 15, 2016 at 5:00:51 AM UTC-5, Simon King wrote:
>>
>>> On Fri, Jan 15, 2016 at 6:16 AM, Brian Cherinka <havo...@gmail.com> 
>>> wrote:
>>>
>>>> I'm trying to set up a hybrid property / expression in a custom class, 
>>>> that I can use in queries. I think I have the syntax correct, however the 
>>>> query returns the entire table, instead of the correct subset of results.  
>>>> And the where clause just indicates True rather than the correct 
>>>> expression. 
>>>>
>>>>
>>>> Here is my hybrid property/expression definition
>>>>
>>>> class Sample(Base,ArrayOps):
>>>>__tablename__ = 'sample'
>>>>__table_args__ = {'autoload' : True, 'schema' : 'datadb'}
>>>>
>>>>def __repr__(self):
>>>>return '>>>
>>>>@hybrid_property
>>>>def nsa_logmstar(self):
>>>>try: return math.log10(self.nsa_mstar)
>>>>except ValueError as e:
>>>>return -.0
>>>>except TypeError as e:
>>>>return None
>>>>
>>>>@nsa_logmstar.expression
>>>>def nsa_logmstar(cls):
>>>>return func.log(cls.nsa_mstar)  
>>>>
>>>> The session query is
>>>>
>>>> session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
>>>>
>>>> But printing it does not show the appropriate condition. I get
>>>>
>>>> SELECT datadb.sample.pk AS datadb_sample_pk, 
>>>> FROM datadb.sample
>>>> WHERE true 
>>>>
>>>> and the results return the entire table of ~11000 rows instead of the 
>>>> expected 272 rows. What's going on here?  Everything looks correct to me, 
>>>> but I can't figure it out.  
>>>>
>>>> I'm expecting the SQL statement to look like this
>>>>
>>>> select s.pk 
>>>> from datadb.sample as s 
>>>> where log(s.nsa_mstar) < 9;
>>>>
>>>> Any thoughts?  Thanks. 
>>>>
>>>>
>>> I can't see anything obviously wrong with your code, but it looks like 
>>> Sample.nsa_logmstar is not actually resolving to the hybrid property in 
>>> your query. What happens if you "print Sample.nsa_logmstar" just before the 
>>> query?
>>>
>>> Otherwise, please provide a small runnable script that demonstrates the 
>>> problem.
>>>
>>> Simon
>>>
>>
>
>



Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Actually, the class definition is entirely what I posted in the original 
message.  I didn't cut anything out of that.  I don't define the columns in 
mine, as you did.  The property nsa_logmstar is not defined anywhere else 
in the class or in any other place in this code, or in any code that 
interacts with this code.

class Sample(Base,ArrayOps):
   __tablename__ = 'sample'
   __table_args__ = {'autoload' : True, 'schema' : 'datadb'}

   def __repr__(self):
   return '
> What happens if you put the print statement immediately after the class 
> definition? Is there any chance that you've got "nsa_logmstar = None" 
> somewhere in your class definition?
>
> Here's a test script which appears to work:
>
> import math
>
> import sqlalchemy as sa
> import sqlalchemy.orm as saorm
> from sqlalchemy.ext.hybrid import hybrid_property
> from sqlalchemy.ext.declarative import declarative_base
>
> Base = declarative_base()
>
> class Sample(Base):
> __tablename__ = 'sample'
>
> pk = sa.Column(sa.Integer, primary_key=True)
> nsa_mstar = sa.Column(sa.Float)
>
> @hybrid_property
> def nsa_logmstar(self):
> try:
> return math.log10(self.nsa_mstar)
> except ValueError:
> return -.0
> except TypeError:
> return None
>
> @nsa_logmstar.expression
> def nsa_logmstar(cls):
> return sa.func.log(cls.nsa_mstar)
>
>
> if __name__ == '__main__':
> sm = saorm.sessionmaker()
> session = sm()
> print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
>
>
> And here's the output:
>
>
> SELECT sample.pk AS sample_pk
> FROM sample
> WHERE log(sample.nsa_mstar) < :log_1
>
>
> Simon
>
>
> On Fri, Jan 15, 2016 at 2:23 PM, Brian Cherinka <havo...@gmail.com 
> > wrote:
>
>> Ahh.  Thanks.  Here is the class side then.  Still None.
>>
>> In [14]: print datadb.Sample.nsa_logmstar
>> None
>>
>> Brian
>>
>> On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote:
>>>
>>> "Sample()" is an instance. "Sample" is the class. Try:
>>>
>>> print datadb.Sample.nsa_logmstar
>>>
>>> Simon
>>>
>>> On Fri, Jan 15, 2016 at 1:46 PM, Brian Cherinka <havo...@gmail.com> 
>>> wrote:
>>>
>>>> Hi Simon, 
>>>>
>>>> Printing on the class side, I get 
>>>>
>>>> In [11]: print datadb.Sample().nsa_logmstar
>>>> None
>>>>
>>>> It looks like it's getting set to None (or remaining None).  I'm not 
>>>> quite sure what this tells me, except that it's not working.  Printing on 
>>>> in the instance side, I get
>>>>
>>>> In [12]: print cube.sample[0].nsa_mstar
>>>> 138616.0
>>>>
>>>> In [13]: print cube.sample[0].nsa_logmstar
>>>> 9.14181336239
>>>>
>>>> nsa_mstar is a column in my database table, and nsa_logmstar I want to 
>>>> be simply the log-base10 of that quantity.  
>>>>
>>>> If this doesn't give any insight, then it will take me some time to 
>>>> provide a small script.  This code is embedded into a bunch of stuff.  But 
>>>> I'll work on it.  
>>>>
>>>> Brian
>>>>
>>>>
>>>> On Friday, January 15, 2016 at 5:00:51 AM UTC-5, Simon King wrote:
>>>>
>>>>> On Fri, Jan 15, 2016 at 6:16 AM, Brian Cherinka <havo...@gmail.com> 
>>>>> wrote:
>>>>>
>>>>>> I'm trying to set up a hybrid property / expression in a custom 
>>>>>> class, that I can use in queries. I think I have the syntax correct, 
>>>>>> however the query returns the entire table, instead of the correct 
>>>>>> subset 
>>>>>> of results.  And the where clause just indicates True rather than the 
>>>>>> correct expression. 
>>>>>>
>>>>>>
>>>>>> Here is my hybrid property/expression definition
>>>>>>
>>>>>> class Sample(Base,ArrayOps):
>>>>>>__tablename__ = 'sample'
>>>>>>__table_args__ = {'autoload' : True, 'schema' : 'datadb'}
>>>>>>
>>>>>>def __repr__(self):
>>>>>>return '>>>>>
>>>>>>@hybrid_property
>>>>>>def nsa_logmstar(self):
>>>>>>try:

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Here is the print immediately after my original class definition:

print 'sample nsa log mstar', Sample.nsa_logmstar 

and the result

sample nsa log mstar None

When I run your script exactly as is, I get the same output as you.  

When I replace my class definition with yours, inside my code, as follows

class Sample(Base):
__tablename__ = 'sample'
__table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  (I 
needed to add this line in)

pk = Column(Integer, primary_key=True)
nsa_mstar = Column(Float)

@hybrid_property
def nsa_logmstar(self):
try:
return math.log10(self.nsa_mstar)
except ValueError:
return -.0
except TypeError:
return None

@nsa_logmstar.expression
def nsa_logmstar(cls):
return func.log(cls.nsa_mstar)

now the print statement :  print 'sample nsa log mstar', Sample.nsa_logmstar
returns 

sample nsa log mstar log(mangadatadb.sample.nsa_mstar)


On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote:
>
> Does my test script produce the right output for you in your installation? 
>
> What does the print statement immediately after the class definition 
> produce? 
>
> Simon 
>
> > On 15 Jan 2016, at 19:10, Brian Cherinka <havo...@gmail.com 
> > wrote: 
> > 
> > Actually, the class definition is entirely what I posted in the original 
> message.  I didn't cut anything out of that.  I don't define the columns in 
> mine, as you did.  The property nsa_logmstar is not defined anywhere else 
> in the class or in any other place in this code, or in any code that 
> interacts with this code. 
> > 
> > class Sample(Base,ArrayOps): 
> >__tablename__ = 'sample' 
> >__table_args__ = {'autoload' : True, 'schema' : 'datadb'} 
> > 
> >def __repr__(self): 
> >return ' > 
> >@hybrid_property 
> >def nsa_logmstar(self): 
> >try: return math.log10(self.nsa_mstar) 
> >except ValueError as e: 
> >return -.0 
> >except TypeError as e: 
> >return None 
> > 
> >@nsa_logmstar.expression 
> >def nsa_logmstar(cls): 
> >return func.log(cls.nsa_mstar)   
> > 
> > My database connection is a singleton and my base is defined inside 
> that, essentially 
> > 
> > engine = create_engine(database_connection_string) 
> > Base = declarative_base(bind=engine) 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 9:43:39 AM UTC-5, Simon King wrote: 
> > What happens if you put the print statement immediately after the class 
> definition? Is there any chance that you've got "nsa_logmstar = None" 
> somewhere in your class definition? 
> > 
> > Here's a test script which appears to work: 
> > 
> > import math 
> > 
> > import sqlalchemy as sa 
> > import sqlalchemy.orm as saorm 
> > from sqlalchemy.ext.hybrid import hybrid_property 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > Base = declarative_base() 
> > 
> > class Sample(Base): 
> > __tablename__ = 'sample' 
> > 
> > pk = sa.Column(sa.Integer, primary_key=True) 
> > nsa_mstar = sa.Column(sa.Float) 
> > 
> > @hybrid_property 
> > def nsa_logmstar(self): 
> > try: 
> > return math.log10(self.nsa_mstar) 
> > except ValueError: 
> > return -.0 
> > except TypeError: 
> > return None 
> > 
> > @nsa_logmstar.expression 
> > def nsa_logmstar(cls): 
> > return sa.func.log(cls.nsa_mstar) 
> > 
> > 
> > if __name__ == '__main__': 
> > sm = saorm.sessionmaker() 
> > session = sm() 
> > print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9) 
> > 
> > 
> > And here's the output: 
> > 
> > 
> > SELECT sample.pk AS sample_pk 
> > FROM sample 
> > WHERE log(sample.nsa_mstar) < :log_1 
> > 
> > 
> > Simon 
> > 
> > 
> > On Fri, Jan 15, 2016 at 2:23 PM, Brian Cherinka <havo...@gmail.com> 
> wrote: 
> > Ahh.  Thanks.  Here is the class side then.  Still None. 
> > 
> > In [14]: print datadb.Sample.nsa_logmstar 
> > None 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote: 
> > "Sample()" is an instance. "Sample" is the class. Try: 
> > 
> > print datadb.Sample.nsa_logmstar 
> > 
> > Simon 
> > 
> > On Fri, Jan 1

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
It looks like I needed to define the columns inside my class.  That's the 
only difference between your class and mine.  And I tested out the query 
and it now works, and returns the correct number of rows. 

In [4]: print 
session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar < 9)

SELECT datadb.sample.pk AS datadb_sample_pk
FROM datadb.sample
WHERE log(datadb.sample.nsa_mstar) < %(log_1)s

In [6]: 
len(session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar < 
9,datadb.Sample.nsa_mstar > 0).all())
Out[6]: 273

Do you have any idea why the column definition matters here?  Thanks for 
all your help.

Brian

On Friday, January 15, 2016 at 5:02:03 PM UTC-5, Brian Cherinka wrote:
>
> Here is the print immediately after my original class definition:
>
> print 'sample nsa log mstar', Sample.nsa_logmstar 
>
> and the result
>
> sample nsa log mstar None
>
> When I run your script exactly as is, I get the same output as you.  
>
> When I replace my class definition with yours, inside my code, as follows
>
> class Sample(Base):
> __tablename__ = 'sample'
> __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  (I 
> needed to add this line in)
> 
> pk = Column(Integer, primary_key=True)
> nsa_mstar = Column(Float)
>
> @hybrid_property
> def nsa_logmstar(self):
> try:
> return math.log10(self.nsa_mstar)
> except ValueError:
> return -.0
> except TypeError:
> return None
>
> @nsa_logmstar.expression
> def nsa_logmstar(cls):
> return func.log(cls.nsa_mstar)
>
> now the print statement :  print 'sample nsa log mstar', 
> Sample.nsa_logmstar
> returns 
>
> sample nsa log mstar log(mangadatadb.sample.nsa_mstar)
>
>
> On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote:
>>
>> Does my test script produce the right output for you in your 
>> installation? 
>>
>> What does the print statement immediately after the class definition 
>> produce? 
>>
>> Simon 
>>
>> > On 15 Jan 2016, at 19:10, Brian Cherinka <havo...@gmail.com> wrote: 
>> > 
>> > Actually, the class definition is entirely what I posted in the 
>> original message.  I didn't cut anything out of that.  I don't define the 
>> columns in mine, as you did.  The property nsa_logmstar is not defined 
>> anywhere else in the class or in any other place in this code, or in any 
>> code that interacts with this code. 
>> > 
>> > class Sample(Base,ArrayOps): 
>> >__tablename__ = 'sample' 
>> >__table_args__ = {'autoload' : True, 'schema' : 'datadb'} 
>> > 
>> >def __repr__(self): 
>> >return '> > 
>> >@hybrid_property 
>> >def nsa_logmstar(self): 
>> >try: return math.log10(self.nsa_mstar) 
>> >except ValueError as e: 
>> >return -.0 
>> >except TypeError as e: 
>> >return None 
>> > 
>> >@nsa_logmstar.expression 
>> >def nsa_logmstar(cls): 
>> >return func.log(cls.nsa_mstar)   
>> > 
>> > My database connection is a singleton and my base is defined inside 
>> that, essentially 
>> > 
>> > engine = create_engine(database_connection_string) 
>> > Base = declarative_base(bind=engine) 
>> > 
>> > Brian 
>> > 
>> > On Friday, January 15, 2016 at 9:43:39 AM UTC-5, Simon King wrote: 
>> > What happens if you put the print statement immediately after the class 
>> definition? Is there any chance that you've got "nsa_logmstar = None" 
>> somewhere in your class definition? 
>> > 
>> > Here's a test script which appears to work: 
>> > 
>> > import math 
>> > 
>> > import sqlalchemy as sa 
>> > import sqlalchemy.orm as saorm 
>> > from sqlalchemy.ext.hybrid import hybrid_property 
>> > from sqlalchemy.ext.declarative import declarative_base 
>> > 
>> > Base = declarative_base() 
>> > 
>> > class Sample(Base): 
>> > __tablename__ = 'sample' 
>> > 
>> > pk = sa.Column(sa.Integer, primary_key=True) 
>> > nsa_mstar = sa.Column(sa.Float) 
>> > 
>> > @hybrid_property 
>> > def nsa_logmstar(self): 
>> >     try: 
>> > return math.log10(self.nsa_mstar) 
>> > except ValueError: 
>> > return -.0 
>> > except TypeError: 
>>

[sqlalchemy] Re: best way to query from a tuple of parameters

2015-10-26 Thread Brian Cherinka
Yeah, that almost works.   I needed to add an and_ around each of the 
subclauses, otherwise the or_ breaks.  

abcd = or_(and_(table.col_1 == a, table.col_2 == b),
           and_(table.col_1 == c, table.col_2 == d))

For posterity, to loop over my rows I found I could pass a generator 
over the columns

col1 = list of column 1 values
col2 = list of column 2 values

or_(and_(table.col_1 == p, table.col_2 == col2[i]) for i, p in enumerate(col1))

Thanks for the help.
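
For reference, on backends that support row-value comparison (PostgreSQL does), tuple_(...).in_(...) expresses the same thing more compactly; a sketch with the same made-up table and columns:

from sqlalchemy import tuple_

pairs = list(zip(col1, col2))   # [(a, b), (c, d), ...]
query = session.query(table).filter(tuple_(table.col_1, table.col_2).in_(pairs))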

On Tuesday, October 27, 2015 at 2:05:31 PM UTC+11, Brian Cherinka wrote:
>
>
> What's the best way to query, with the ORM, based on a list of multiple 
> parameters without looping over each parameter tuple?  
>
> I have a list of parameters say:
>
> Col-1, Col-2
> a, b
> c, d
>
> where the combination of the parameters in one row defines a unique table 
> entry.  Such that I would normally do 
>
> object = session.query(table).filter(table.Col-1 == a, table.Col-2 == 
> b).one()  , and 
> object = session.query(table).filter(table.Col-1 == c, table.Col-2 == 
> d).one()
>
> to retrieve the two table objects
>
> Can I run a query that would grab the list of objects all at once, in one 
> query, without having to loop over each row, doing each query, and 
> combining the objects into a list?  Something akin to the 
> table.Col-1..in_([list of values) but with a combinatorial component to it. 
>  
>
> I know I can do something like 
> session.query(table).filter(table.Col-1.in_([a,c]).all()  to grab all the 
> objects that have column 1 values of a, or c, but I need to constrain those 
> to a+b, and c+d
>
> Does this make sense?
>
> Cheers, Brian
>  
>



[sqlalchemy] best way to query from a tuple of parameters

2015-10-26 Thread Brian Cherinka

What's the best way to query, with the ORM, based on a list of multiple 
parameters without looping over each parameter tuple?  

I have a list of parameters say:

Col-1, Col-2
a, b
c, d

where the combination of the parameters in one row defines a unique table 
entry.  Such that I would normally do 

object = session.query(table).filter(table.Col-1 == a, table.Col-2 == 
b).one()  , and 
object = session.query(table).filter(table.Col-1 == c, table.Col-2 == 
d).one()

to retrieve the two table objects

Can I run a query that would grab the list of objects all at once, in one 
query, without having to loop over each row, doing each query, and 
combining the objects into a list?  Something akin to 
table.Col-1.in_([list of values]) but with a combinatorial component to it. 
 

I know I can do something like 
session.query(table).filter(table.Col-1.in_([a, c])).all()  to grab all the 
objects that have column 1 values of a or c, but I need to constrain those 
to a+b and c+d. 

Does this make sense?

Cheers, Brian
 



[sqlalchemy] Re: how to tell which tables already joined in a query statement (or best way to dynamically build a query?)

2015-09-10 Thread Brian Cherinka
Hi Michael, 

Thanks for your response.  It helped a lot.  I ended up going with the 
quick and dirty query.from_obj[0] method you described.  That was faster to 
implement and served my purposes exactly. 

Cheers, Brian

>
>



[sqlalchemy] how to tell which tables already joined in a query statement (or best way to dynamically build a query?)

2015-08-07 Thread Brian Cherinka

Hi, 

I'm trying to build an SQLalchemy ORM query dynamically based on user 
selected options.  Some of these options come from the same table, but the 
user could select either one or both criteria to filter on.  Since I don't 
know which options the user will select ahead of time, I have to join to 
the same table multiple times.  However this throws an error 

ProgrammingError: (psycopg2.ProgrammingError) table name TableB specified 
more than once

when I try to submit the query.  How can I find out which tables have 
already been joined in a query?  Or what's the best way to handle building 
a query based on multiple criteria?   I'm using SQLalchemy 1.0.0. 

Here is my pseudo-code.  

Option 1.  Option 2.  Option 3.   (any or all options can be selected, and 
they all come from the same joined table)

// base table
query = session.query(TableA)

// add on new criteria
if option 1: query = query.join(TableB).filter(TableB.option1 > X)
if option 2: query = query.join(TableB).filter(TableB.option2 > X)
if option 3: query = query.join(TableB).filter(TableB.option3 > X)

However, when attempting query.all(), this throws the above error if I 
have selected any two options.  What I think it should be is something 
like this...

//base
query = session.query(TableA)
//join
query = query.join(TableB)
// add on new criteria
if option 1: query = query.filter(TableB.option1 > X)
if option 2: query = query.filter(TableB.option2 > X)
if option 3: query = query.filter(TableB.option3 > X)

but I don't want to join to TableB if I don't have to.  I have many 
different tables where this kind of situation applies, and it seems 
inefficient to join to all other tables just in case I may need to filter 
on something.  

Any thoughts, help or suggestions?
Thanks, Brian
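
A minimal sketch of one way to build such a query without joining TableB
twice, assuming the TableA/TableB mappings and session from the pseudo-code
above and using > as a stand-in comparison.  The idea is simply to record
which tables have already been joined and join each one at most once.

def build_query(session, option1=None, option2=None, option3=None):
    query = session.query(TableA)
    joined = set()

    def join_once(q, target):
        # join the target table only the first time a filter needs it
        if target not in joined:
            joined.add(target)
            q = q.join(target)
        return q

    if option1 is not None:
        query = join_once(query, TableB)
        query = query.filter(TableB.option1 > option1)
    if option2 is not None:
        query = join_once(query, TableB)
        query = query.filter(TableB.option2 > option2)
    if option3 is not None:
        query = join_once(query, TableB)
        query = query.filter(TableB.option3 > option3)
    return query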





-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] correct syntax to pass an sqlalchemy table class into postgresql functions?

2015-05-20 Thread Brian Cherinka
Hi, 

I've built a postgresql function that takes as input a row from a table, 
and returns the sum of a particular column (of type array) between two 
specified indices.  Here is how I've defined my function

CREATE OR REPLACE FUNCTION specres_sum_segment(cube datadb.cube, index1 
integer, index2 integer) RETURNS numeric
LANGUAGE plpgsql STABLE
AS $$

DECLARE result numeric;
BEGIN
select sum(f) from unnest(cube.specres[index1:index2]) as f into result;
return result;
END; $$;


and here is how it works in psql.  

select c.pk from datadb.cube as c where (specres_sum_segment(c,2000,2005) > 12000);


This works and returns the cube entries where this condition is true.  Now 
I'm trying to call this function with an SQLalchemy query.  I've mapped a 
DeclarativeMeta class called Cube to my datadb.cube table, but when I try 
to run my session query I'm getting an error.   

My sqlalchemy session query is 

session.query(datadb.Cube).filter(
    func.specres_sum_segment(datadb.Cube, 2000, 2005) == 12000).all()


but I get the error

ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 
'DeclarativeMeta'


What is the right syntax to use when passing a mapped SQLalchemy class into 
a function so postgresql will understand it?  I'm using SQLalchemy 1.0.0 
and PostgreSQL 9.3.  Any help would be appreciated.  Thanks.  

Cheers, Brian
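
One possible workaround, offered here only as a hedged sketch and not as the
confirmed resolution of this thread: hand the function the row reference as
raw SQL via literal_column(), so that SQLAlchemy does not try to adapt the
mapped class itself.  The unqualified name 'cube' is an assumption based on
how the table appears in the query's FROM clause.

from sqlalchemy import func, literal_column

# renders roughly as: WHERE specres_sum_segment(cube, 2000, 2005) > 12000
results = session.query(datadb.Cube).filter(
    func.specres_sum_segment(literal_column('cube'), 2000, 2005) > 12000).all()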



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] correct syntax to pass an sqlalchemy table class into postgresql functions?

2015-05-20 Thread Brian Cherinka
Hi Michael, 

Here is the beginning of my Cube class in SQLalchemy.  It also has a bunch 
of properties and methods I'm not printing here, to keep it short.

class Cube(Base, ArrayOps):
    __tablename__ = 'cube'
    __table_args__ = {'autoload': True, 'schema': 'mangadatadb',
                      'extend_existing': True}

    specres = deferred(Column(ARRAY(Float)))

    def __repr__(self):
        return 'Cube (pk={0}, plate={1}, ifudesign={2}, tag={3})'.format(
            self.pk, self.plate, self.ifu.name, self.pipelineInfo.version.version)

Here is the chain (going backwards) that produces my Base

Base = db.Base

db = DatabaseConnection()

and here is my DatabaseConnection class

class DatabaseConnection(object):

    _singletons = dict()

    def __new__(cls, database_connection_string=None, expire_on_commit=True):
        """This overrides the object's usual creation mechanism."""

        if not cls in cls._singletons:
            assert database_connection_string is not None, \
                "A database connection string must be specified!"
            cls._singletons[cls] = object.__new__(cls)

            #
            # This is the custom initialization
            #
            me = cls._singletons[cls]  # just for convenience (think self)

            me.database_connection_string = database_connection_string

            # change 'echo' to print each SQL query (for debugging/optimizing/the curious)
            me.engine = create_engine(me.database_connection_string, echo=False)

            me.metadata = MetaData()
            me.metadata.bind = me.engine
            me.Base = declarative_base(bind=me.engine)
            me.Session = scoped_session(sessionmaker(bind=me.engine, autocommit=True,
                                                     expire_on_commit=expire_on_commit))

Cheers, Brian

On Wednesday, May 20, 2015 at 12:51:36 PM UTC-4, Michael Bayer wrote:

  

 On 5/20/15 12:09 PM, Brian Cherinka wrote:
  
 Hi,  

  I've built a postgresql function that takes as input a row from a table, 
 and returns the sum of a particular column (of type array) between two 
 specified indices.  Here is how I've defined my function

   CREATE OR REPLACE FUNCTION specres_sum_segment(cube datadb.cube, index1 
 integer, index2 integer) RETURNS numeric
  LANGUAGE plpgsql STABLE
  AS $$
  
   DECLARE result numeric;
  BEGIN
   select sum(f) from unnest(cube.specres[index1:index2]) as f into result;
   return result;
  END; $$;
  
  
  and here is how it works in psql.  

  select c.pk from datadb.cube as c where (specres_sum_segment(c,2000,2005) > 12000);


  This works and returns the cube entries where this condition is true. 
  Now I'm trying to call this function with an SQLalchemy query.  I've 
 mapped a DeclarativeMeta class called Cube to my datadb.cube table, but 
 when I try to run my session query I'm getting an error.   

  My sqlalchemy session query is 

  
 session.query(datadb.Cube).filter(func.specres_sum_segment(datadb.Cube,2000,2005)==
  
 12000).all()


  but I get the error

  ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 
 'DeclarativeMeta'

  sounds like your use of declarative is incorrect, please share the means 
 by which the Cube class is declared as well as its base.




  
  What is the right syntax to use when passing a mapped SQLalchemy class 
 into a function so postgresql will understand it?  I'm using SQLalchemy 
 1.0.0 and PostgreSQL 9.3.  Any help would be appreciated.  Thanks.  

  Cheers, Brian

  
  -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+...@googlegroups.com.
 To post to this group, send email to sqlal...@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.


  

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.