[sqlalchemy] sqlacodegen --noclasses deprecated?

2023-10-23 Thread Peter Daniels
SQLAlchemy and sqlacodegen noob here. I'd like to generate some simple 
SQLAlchemy Table definitions for my database. I tried using --noclasses, 
but sqlacodegen doesn't recognize the option.

sqlacodegen --noclasses mysql+pymysql://root:@localhost:3306/mydb

sqlacodegen: error: unrecognized arguments: --noclasses

How can I get it to just generate tables?

This is what I read in the project docs:
"Unless the --noclasses option is used, sqlacodegen tries to generate 
declarative model classes from each table."
From: https://pypi.org/project/sqlacodegen/
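
For reference: newer sqlacodegen releases (3.x) appear to have replaced the 
--noclasses flag with a --generator option. Assuming that option exists in 
the installed version, a tables-only run would presumably look like:

sqlacodegen --generator tables mysql+pymysql://root:@localhost:3306/mydb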

Thanks!

-Peter



Re: [sqlalchemy] Modifying Query Object

2023-04-14 Thread Peter Harrison
I have some additional context on the issue Luis mentioned.

   1. We are using the graphene-sqlalchemy package.
   2. When you make a GraphQL web API query, the package creates a
   sqlalchemy.orm.query.Query object.
   3. We want to modify this standard query so that we can apply a group_by
   and get the maximum time-series value for each hour; this is not possible
   in GraphQL itself.
   4. The graphene-sqlalchemy team told us the best place to modify the
   standard query is the get_query method (line 67) in this file:
   https://github.com/graphql-python/graphene-sqlalchemy/blob/master/graphene_sqlalchemy/fields.py
   5. The standard query we need to modify translates to this SQL statement:

SELECT sy_data.oid_id, sy_data.rrd_timestamp, sy_data.cabinet_id,
       sy_data.customer_id, sy_data.value, sy_data.active_flag, sy_data.timestamp
FROM sy_data
WHERE sy_data.rrd_timestamp >= %(rrd_timestamp_1)s
  AND sy_data.rrd_timestamp <= %(rrd_timestamp_2)s
  AND (sy_data.oid_id = %(oid_id_1)s OR sy_data.oid_id = %(oid_id_2)s)
ORDER BY sy_data.oid_id ASC, sy_data.rrd_timestamp ASC

Therefore we need to find a way to insert a func.max for the values in the
first part of the SELECT statement (before the FROM). It is easy for us to
append the group_by like this:

query.group_by(
    func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
)

The big issue for us is figuring out how to insert the func.max.

Getting a solution to this will help the graphene-sqlalchemy team create 
better documentation.
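
A sketch of one direction (not verified against graphene-sqlalchemy; the 
labels are invented, and whether graphene can still map the narrowed rows 
back to its types is exactly the open question): starting from the _query 
returned by super().get_query() in Luis's snippet below, 
Query.with_entities() replaces the select list, after which the group_by 
can be added:

hourly = func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
query = _query.with_entities(
    hourly.label("hour"),                          # hour bucket
    func.max(DataModel.value).label("max_value"),  # max per bucket
).group_by(hourly)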

On Tuesday, April 11, 2023 at 4:59:15 PM UTC-7 S Mahabl wrote:

> Do you get many rows?
>
> SELECT  date_format(data.timestamp, "%Y-%m-%d %H"), max(data.value)  AS 
> data_value
> from data
> GROUP BY date_format(data.timestamp, "%Y-%m-%d %H")
> On Tue, Apr 11, 2023 at 4:24 PM Luis Del Rio IV  
> wrote:
>
>> Hello,
>>
>> I am currently using the following sqlalchemy code,
>>
>> _query = super().get_query(model, info, sort, **args)
>> query = _query.group_by(
>>     func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
>> )
>> return query
>>
>> I am trying to aggregate the max value of a field from 
>> DataModel.value utilizing the group by clause.
>>
>> In simple sql, you would do the following.
>>
>> SELECT  max(data.value)  AS data_value
>> from data
>> GROUP BY date_format(data.timestamp, "%Y-%m-%d %H")
>>
>> What would be the proper way to express this? The various methods I have 
>> tried somehow overwrite the original query and do not map to our attributes.


Re: [sqlalchemy] Query object modification

2023-04-13 Thread Peter Harrison
Thanks Mike,

Ideally we'd prefer to find a solution via Graphene-SQLAlchemy. 
Unfortunately we don't have the luxury of creating our own query when 
interacting with Graphene-SQLAlchemy.

So the key question for us is: can you modify an existing 
sqlalchemy.orm.query.Query object to insert a "func.max(Data.value)" element 
into its select list? This Query is what Graphene-SQLAlchemy gives us.
If this is possible, then the group_by part is easy; we have tested that 
frequently. The hard part is modifying the original "select" object.

1. We have tried add_columns, but that adds an incompatible object type to 
the GraphQL results, making it an unusable option.
2. We thought that modifying the select would be possible using 
"statement.froms", but we can't figure out how to update the MetaData object 
in it.

If modifying the "select" after its creation is not possible, we need to 
start considering using a separate reporting table with hourly data.
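
For completeness, a Core-level sketch of idea 2 (hypothetical and untested 
here; the labels are invented): the Query's SELECT is available as 
query.statement, and Select.with_only_columns() returns a new statement 
with a replaced column list, though feeding that statement back into what 
Graphene-SQLAlchemy expects is the unresolved part.

hourly = func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
stmt = query.statement.with_only_columns(
    hourly.label("hour"),
    func.max(DataModel.value).label("max_value"),
).group_by(hourly)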

On Wednesday, April 12, 2023 at 3:12:46 PM UTC-7 Mike Bayer wrote:

>
>
> On Wed, Apr 12, 2023, at 5:21 PM, Luis Del Rio IV wrote:
>
> I am currently using the following sqlalchemy code,
>
> _query = super().get_query(model, info, sort, **args)
> query = _query.group_by(
>     func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
> )
> return query
>
> I am trying to aggregate the max value of a field from DataModel.value 
> utilizing the group by clause.
>
> In simple sql, you would do the following.
>
> SELECT  max(data.value)  AS data_value
> from data
> GROUP BY date_format(data.timestamp, "%Y-%m-%d %H")
>
> What would be the proper way to express this? The various methods I have 
> tried somehow overwrite the original query and do not map to our attributes.
>
>
> using legacy query style:
>
> q1 = session.query(func.max(Data.value)).group_by(
> func.date_format(Data.timestamp, "%Y-%m-%d %H")
> )
>
>
> using 2.0 style select(), replace "session.query" with "select":
>
> s1 = select(func.max(Data.value)).group_by(
> func.date_format(Data.timestamp, "%Y-%m-%d %H")
> )
>
> POC script is at the bottom of this email.
>
>
>
> I have some additional context on the issue
>
>1. We are using the graphene-sqlalchemy package.
>2. When you do a GraphQL web api query, the package creates a 
> sqlalchemy.orm.query.Query 
>object.
>3. We want to modify this standard query that the package creates so 
>that we can do the group_by action to help us get the maximum time series 
>value for every hour because this is not possible using GraphQL.
>4. The graphene-sqlalchemy team told us the best place to modify the 
>standard query is the get_query method (line 67) in this file: 
>https://github.com/graphql-python/graphene-sqlalchemy/blob/master/graphene_sqlalchemy/fields.py 
>
>5. The standardized query we need to modify translates to this SQL 
>statement: 
>
> SELECT sy_data.oid_id, sy_data.rrd_timestamp, sy_data.cabinet_id,
>        sy_data.customer_id, sy_data.value, sy_data.active_flag, sy_data.timestamp
> FROM sy_data
> WHERE sy_data.rrd_timestamp >= %(rrd_timestamp_1)s
>   AND sy_data.rrd_timestamp <= %(rrd_timestamp_2)s
>   AND (sy_data.oid_id = %(oid_id_1)s OR sy_data.oid_id = %(oid_id_2)s)
> ORDER BY sy_data.oid_id ASC, sy_data.rrd_timestamp ASC
>
> Therefore we need to find a way to insert a func.max for the values in the 
> first part of the SELECT statement (before the FROM). It is easy for us to 
> append the group_by like this.
>
>
> if you have a Query which renders the above SQL, you can add group_by() to 
> it in place.  but if these queries are being translated into GraphQL, and 
> GraphQL does not have any concept of GROUP BY, then it won't work; what you 
> want to do would not be possible unless a GraphQL query exists that does 
> what you need.
>
>
> query.group_by(
>     func.date_format(DataModel.timestamp, "%Y-%m-%d %H")
> )
>
> The big issue for us is figuring out how to insert the func.max.
>
>
> see func at 
> https://docs.sqlalchemy.org/en/20/core/sqlelement.html#sqlalchemy.sql.expression.func
>
>
> from sqlalchemy import Column
> from sqlalchemy import DateTime
> from sqlalchemy import func
> from sqlalchemy import Integer
> from sqlalchemy import select
> from sqlalchemy.orm import declarative_base
> from sqlalchemy.orm import Session
>
> Base = declarative_base()
>
>
> class Data(Base):
>     __tablename__ = "data"
>
>     id = Column(Integer, primary_key=True)
>     value = Column(Integer)
>     timestamp = Column(DateTime)
>
>
> s = Session()
>
> q1 = s.query(func.max(Data.value)).group_by(
>     func.date_format(Data.timestamp, "%Y-%m-%d %H")
> )
>
> print(q1)
>
> s1 = select(func.max(Data.value)).group_by(
>     func.date_format(Data.timestamp, "%Y-%m-%d %H")
> )
>
> print(s1)
>
>


Re: [sqlalchemy] Using Mapped[str | None]

2022-08-31 Thread Peter Schutt
Thanks Mike, I will submit at the next opportunity.

I believe Union[str, None] is fine; it's just that PEP 604 brought in 
UnionType in 3.10+, which represents the "str | None" union type. I'll put 
more detail in the issue.

Not having to import Optional or Union everywhere is one of the nice 
ergonomic improvements to typing of late.

Cheers:)
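
For anyone following along, a minimal sketch of the three spellings in 
question (assuming the 2.0-style DeclarativeBase/mapped_column API; the 
model and column names are invented):

from typing import Optional, Union

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    nick_a: Mapped[Optional[str]]     # maps today
    nick_b: Mapped[Union[str, None]]  # equivalent spelling
    nick_c: Mapped[str | None]        # PEP 604 form at issue (3.10+ only)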

On Wednesday, 31 August 2022 at 23:32:37 UTC+10 Mike Bayer wrote:

>
>
> On Wed, Aug 31, 2022, at 5:00 AM, Peter Schutt wrote:
>
> Hi,
>
> I've been using 2.0 from main and noticed that annotating an attribute with 
> `Mapped[str | None]` raises with:
>
> sqlalchemy.exc.ArgumentError: Could not locate SQLAlchemy Core type for 
> Python type: str | None
>
>
> I've been able to get it to work with a couple of mods in util.typing and 
> orm.properties and the basic tox run doesn't show any failures.
>
> I've searched both GH issues and in the group here for that exception 
> string without result, is this worth opening an issue for?
>
>
> you should be using Mapped[Optional[str]] ; that said if Mapped[Union[str, 
> None]] and/or Mapped[str | None] is not being parsed then yes this is 
> worthy of a bug report for 2.0.
>
>
>
> Thanks!
>
>
>



[sqlalchemy] Using Mapped[str | None]

2022-08-31 Thread Peter Schutt
Hi,

I've been using 2.0 from main and noticed that annotating an attribute with 
`Mapped[str | None]` raises with:

sqlalchemy.exc.ArgumentError: Could not locate SQLAlchemy Core type for 
Python type: str | None

I've been able to get it to work with a couple of mods in util.typing and 
orm.properties and the basic tox run doesn't show any failures.

I've searched both GH issues and in the group here for that exception 
string without result, is this worth opening an issue for?

Thanks!




Re: [sqlalchemy] ".contains" query with VARBINARY Column

2022-07-13 Thread Peter Harrison
Thanks Mike,

The examples on the documentation page only show how to work with strings.
Could this be updated?
https://docs.sqlalchemy.org/en/14/core/sqlelement.html

Specifically:

   1. ColumnOperators.endswith()
   2. ColumnOperators.contains()
   3. ColumnOperators.like()
   4. ColumnOperators.startswith()

I created this issue with suggested text
https://github.com/sqlalchemy/sqlalchemy/issues/8253
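
For the archives, the "build up the LIKE expression in Python" route from 
Mike's reply below might look roughly like this for a VARBINARY column (a 
sketch only, not tested here; the pattern is assembled as bytes so no 
str/bytes concat ambiguity arises):

hostname = 'CHJWNNEK'
statement = select(MacIp.hostname).where(
    MacIp.hostname.like(b"%" + hostname.encode() + b"%")
)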

Peter


On Wed, Jul 13, 2022 at 6:09 AM Mike Bayer  wrote:

> you're sending a Python bytestring as the expression for which there's no
> explicit support for operators like concat, contains, etc.
>
> the solution is to build the SQL composition directly using func.concat,
> or just building up the LIKE expression in Python, so that there's no
> ambiguity about what's being asked for.
>
> hostname = 'CHJWNNEK'
> statement = select(MacIp.hostname).where(
>     MacIp.hostname.like(
>         func.concat(func.concat('%', hostname.encode()), '%')
>     )
> )
>
>
>
> On Wed, Jul 13, 2022, at 2:29 AM, Peter Harrison wrote:
>
> Hello,
>
> I'm having an issue with a "contains" query on a VARBINARY column. It
> appears the statement compiles incorrectly, or I am not using SQLAlchemy
> correctly.
>
> I know the MySQL CLI query that works correctly as you will see below, but
> I don't know how to get it. The CLI query is only one character different
> from the one that SQLAlchemy creates.
>
> I've spent a few days googling this with no luck.
>
> Any assistance with my syntax would be greatly appreciated.
>
> *PIP packages*
>
> PyMySQL==1.0.2
> SQLAlchemy==1.4.39
>
> *Code Snippet*
>
> hostname = 'CHJWNNEK'
> statement = select(MacIp.hostname).where(
>     MacIp.hostname.contains(hostname.encode())
> )
>
> *Issue*
>
> The SQLAlchemy example compiles to this when adding this argument to the
> compile function *"compile_kwargs={'literal_binds': True}"*:
>
> SELECT smap_macip.hostname
> FROM smap_macip
> WHERE (smap_macip.hostname LIKE concat('%%' + 'CHJWNNEK', '%%'))
>
> This gives no results, however it works when I do the query from the CLI
> like this. ('+' replaced with ',')
>
> SELECT smap_macip.hostname
> FROM smap_macip
> WHERE (smap_macip.hostname LIKE concat(*'%%', 'CHJWNNEK', '%%'*))
>
>
> *Column Contents*
>
> select hostname from smap_macip;
> ++
> | hostname   |
> ++
> | TVUPQBAZJX |
> | *CHJWNNEKYE* |
> | LODFHBAWVT |
> | QMQRDNJJPV |
> | ICHGULIMUU |
> | AMXHISKNVT |
> ++
>
> *Table Definition*
>
> class MacIp(BASE):
> """Database table definition."""
>
> __tablename__ = 'smap_macip'
> __table_args__ = (
> UniqueConstraint('idx_device', 'ip_', 'idx_mac'),
> {'mysql_engine': 'InnoDB'}
> )
>
> idx_macip = Column(
> BIGINT(20, unsigned=True), primary_key=True, unique=True)
> idx_device = Column(
> ForeignKey('smap_device.idx_device'),
> nullable=False, index=True, default=1, server_default=text('1'))
> idx_mac = Column(
> ForeignKey('smap_mac.idx_mac'),
> nullable=False, index=True, default=1, server_default=text('1'))
> ip_ = Column(VARBINARY(256), nullable=True, default=Null)
> hostname = Column(VARBINARY(256), nullable=True, default=Null)
> type = Column(BIGINT(unsigned=True), nullable=True, default=Null)
> enabled = Column(BIT(1), default=1)
> ts_modified = Column(
> DateTime, nullable=False,
> default=datetime.datetime.utcnow, onupdate=datetime.datetime.now)
> ts_created = Column(
> DateTime, nullable=False, default=datetime.datetime.utcnow)
>
>
>
> Peter

[sqlalchemy] ".contains" query with VARBINARY Column

2022-07-13 Thread Peter Harrison
Hello,

I'm having an issue with a "contains" query on a VARBINARY column. It
appears the statement compiles incorrectly, or I am not using SQLAlchemy
correctly.

I know the MySQL CLI query that works correctly, as you will see below, but
I don't know how to get SQLAlchemy to generate it. The CLI query is only one
character different from the one that SQLAlchemy creates.

I've spent a few days googling this with no luck.

Any assistance with my syntax would be greatly appreciated.

*PIP packages*

PyMySQL==1.0.2
SQLAlchemy==1.4.39

*Code Snippet*

hostname = 'CHJWNNEK'
statement = select(MacIp.hostname).where(
    MacIp.hostname.contains(hostname.encode())
)

*Issue*

The SQLAlchemy example compiles to this when adding this argument to the
compile function *"compile_kwargs={'literal_binds': True}"*:

SELECT smap_macip.hostname
FROM smap_macip
WHERE (smap_macip.hostname LIKE concat('%%' + 'CHJWNNEK', '%%'))

This gives no results; however, it works when I do the query from the CLI
like this ('+' replaced with ','):

SELECT smap_macip.hostname
FROM smap_macip
WHERE (smap_macip.hostname LIKE concat(*'%%', 'CHJWNNEK', '%%'*))

*Column Contents*

select hostname from smap_macip;
++
| hostname   |
++
| TVUPQBAZJX |
| *CHJWNNEKYE* |
| LODFHBAWVT |
| QMQRDNJJPV |
| ICHGULIMUU |
| AMXHISKNVT |
++

*Table Definition*

class MacIp(BASE):
    """Database table definition."""

    __tablename__ = 'smap_macip'
    __table_args__ = (
        UniqueConstraint('idx_device', 'ip_', 'idx_mac'),
        {'mysql_engine': 'InnoDB'}
    )

    idx_macip = Column(
        BIGINT(20, unsigned=True), primary_key=True, unique=True)
    idx_device = Column(
        ForeignKey('smap_device.idx_device'),
        nullable=False, index=True, default=1, server_default=text('1'))
    idx_mac = Column(
        ForeignKey('smap_mac.idx_mac'),
        nullable=False, index=True, default=1, server_default=text('1'))
    ip_ = Column(VARBINARY(256), nullable=True, default=Null)
    hostname = Column(VARBINARY(256), nullable=True, default=Null)
    type = Column(BIGINT(unsigned=True), nullable=True, default=Null)
    enabled = Column(BIT(1), default=1)
    ts_modified = Column(
        DateTime, nullable=False,
        default=datetime.datetime.utcnow, onupdate=datetime.datetime.now)
    ts_created = Column(
        DateTime, nullable=False, default=datetime.datetime.utcnow)



Peter



Re: [sqlalchemy] Integrating graphene-sqlalchemy-filter into graphene-sqlalchemy

2022-03-30 Thread Peter Harrison
Thanks Mike,

I contacted one of the maintainers of graphene-sqlalchemy and he suggested
this list as a source of interested parties who use the package with
sqlalchemy.

That repo has become less active in recent months, and I'm sure there are
users out there like me who want it rejuvenated.

sqlalchemy and graphene-sqlalchemy have been life savers for me, and I'd
like to contribute financially to the combined health of both.

If you are a user of the graphene-sqlalchemy-filter package and are
interested, please contact me off list.

Thanks for your understanding.

Peter


On Wed, Mar 30, 2022 at 12:04 PM Mike Bayer 
wrote:

> we dont maintain graphene-sqlalchemy-filter on this list, you would need
> to reach out to the developers of that library.
>
> On Wed, Mar 30, 2022, at 2:30 PM, Peter Harrison wrote:
>
> Hello everyone,
>
> My organization uses both graphene-sqlalchemy and
> graphene-sqlalchemy-filter for some subsystems. Unfortunately development
> in graphene-sqlalchemy-filter has stalled and does not support the latest
> version of graphene which supports nifty batching.
>
> The graphene-sqlalchemy-filter feature is very useful.
>
> We are  interested in partially sponsoring work on graphene-sqlalchemy to
> integrate filters like graphene-sqlalchemy-filter. Identical filter syntax
> would be ideal for seamless integration.
>
> Please let me know the best way to proceed, both on and off list, as
> appropriate.
>
>


[sqlalchemy] Integrating graphene-sqlalchemy-filter into graphene-sqlalchemy

2022-03-30 Thread Peter Harrison
Hello everyone,

My organization uses both graphene-sqlalchemy and 
graphene-sqlalchemy-filter for some subsystems. Unfortunately development 
in graphene-sqlalchemy-filter has stalled and does not support the latest 
version of graphene which supports nifty batching.

The graphene-sqlalchemy-filter feature is very useful.

We are interested in partially sponsoring work on graphene-sqlalchemy to 
integrate filters like graphene-sqlalchemy-filter. Identical filter syntax 
would be ideal for seamless integration.

Please let me know the best way to proceed, both on and off list, as 
appropriate.



[sqlalchemy] SQLAlchemy exasol dialect maintainership transfer to Exasol AG

2022-01-27 Thread Peter Hoffmann
Hi all,

after years of maintaining the SQLAlchemy Exasol dialect within Blue Yonder 
(https://github.com/blue-yonder/sqlalchemy_exasol), Exasol AG has stepped 
up and offered to officially maintain the dialect in the future as part of 
their open source strategy. 

We are more than happy about this development and have transferred the git 
repository to its new home: https://github.com/exasol/sqlalchemy-exasol.

We will also transfer PyPI access to sqlalchemy-exasol. The Exasol team 
will provide an official announcement within the next few days.


Kind Regards,

Peter Hoffmann
Blue Yonder



[sqlalchemy] inspection does not update mapper schema when schema assigned to declarative during runtime

2020-07-02 Thread Peter Lai
It appears that if, at runtime, I assign a schema to a declarative class and 
then `inspect()` it, the resulting Selectable does not have the schema 
assigned to it:

in model.py:

from sqlalchemy import Column, String, DateTime
from sqlalchemy.ext.declarative import declarative_base


Base = declarative_base()


class MyClass(Base):
    __tablename__ = 'mytable'

    username = Column(String(128), primary_key=True)


in run.py:

from sqlalchemy import create_engine, inspect

from model import MyClass

engine = create_engine(...)

MyClass.schema = 'notme'

# I just want to do a manual select on the table, not use a session

mytable = inspect(MyClass).local_table

result = engine.execute(mytable.select()).first()

# This fails because, for the given connection, the selected table must be
# qualified under schema 'notme'

# The solution seems to be to assign the schema to the Selectable itself,
# not the declarative class:

mytable.schema = 'notme'



Is this an expected behavior?
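
For comparison, a sketch of the way I understand a schema is usually 
attached at declaration time (same model and schema name as above):

class MyClass(Base):
    __tablename__ = 'mytable'
    __table_args__ = {'schema': 'notme'}

    username = Column(String(128), primary_key=True)

which puts the schema on the underlying Table from the start rather than 
assigning it after the fact.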




[sqlalchemy] Pyodbc creator function ignored in create_engine (mssql)

2020-05-22 Thread Peter Lai
example:

import pyodbc

from sqlalchemy import create_engine


def creator():
    config = {
        'driver': 'ODBC Driver 13 for SQL Server',
        'host': 'localhost',
        'port': 1433,
        'user': 'me',
        'pw': 'mypw',
        'dbname': 'mydb'
    }

    return pyodbc.connect(
        "DRIVER={{{driver}}};SERVER={host},{port};DATABASE={dbname};UID={user};PWD={pw}"
        .format(
            driver=config['driver'],
            host=config['host'],
            port=config.get('port', 1433),
            dbname=config['dbname'],
            user=config['user'],
            pw=config['pw']
        )
    )


# works
odbc_conn = creator()

# fails
e = create_engine('mssql://', creator=creator)




/usr/lib/python3.6/site-packages/sqlalchemy/connectors/pyodbc.py:79: 
SAWarning: No driver name specified; this is expected by PyODBC when using 
DSN-less connections
  "No driver name specified; "


This is on SQLAlchemy 1.3.17.
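
For what it's worth, a workaround I have seen suggested (a sketch only, 
using the same connection values as above; I have not confirmed it avoids 
the warning) is to pass the raw ODBC connection string through the 
mssql+pyodbc URL's odbc_connect parameter instead of a creator:

import urllib.parse

conn_str = (
    "DRIVER={ODBC Driver 13 for SQL Server};SERVER=localhost,1433;"
    "DATABASE=mydb;UID=me;PWD=mypw"
)
e = create_engine(
    "mssql+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(conn_str)
)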




Re: [sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-25 Thread Peter Schutt
Good to hear and you're welcome:)

On Monday, 26 August 2019 13:00:02 UTC+10, Ira Fuchs wrote:
>
> That's it. (postal_code). The ORM query now works from the iPad! Now I 
> need to sort things out with Python on the Mac and with MySQL server. 
> Having a working example like this helps as I go through the documentation. 
> Thanks very much for your patient assistance. 
>
> On Sunday, August 25, 2019 at 9:35:28 PM UTC-4, Peter Schutt wrote:
>>
>> Hi Ira,
>>
>> Again, that is an error that originates from inside the database layer, 
>> and it's telling you that one of the columns you've queried on doesn't 
>> exist, which could mean a few things that will be hard for anyone to debug 
>> without access to the schema that you are trying to abstract upon. E.g., it 
>> could be that there is a column in civicrm_address that is called 
>> "postcode" or "postal_code" and so the name that we have defined on the 
>> Address class is simply wrong, e.g. Address.postalcode might need to be 
>> Address.postal_code, or something else. It is also possible that no such 
>> column actually exists in the civicrm_address table in the database, it 
>> might be defined on another table, or just not be there at all. Inspecting 
>> the result of 'SHOW CREATE TABLE civicrm_address' should provide you with 
>> enough detail to work out what is going on.
>>
>> On Monday, 26 August 2019 10:57:14 UTC+10, Ira Fuchs wrote:
>>>
>>> Until I can get a new version of the server installed, I decided to try 
>>> running this scipt on my iPad using Pythonista. The script now looks like 
>>> this:
>>>
>>> from sqlalchemy import *
>>> from sqlalchemy.ext.declarative import declarative_base
>>> from sqlalchemy.orm import relationship
>>> from sqlalchemy.orm import sessionmaker
>>> from sqlalchemy import Column, Integer, String
>>> import datetime
>>> engine = create_engine('mysql+pymysql://root:pw!@ipaddr:3306/civicrm2')
>>> Base = declarative_base()
>>> Session = sessionmaker(bind=engine)
>>> session = Session()
>>>
>>> class Contact(Base):
>>> __tablename__ = "civicrm_contact"
>>> id = Column(Integer, primary_key=True)
>>> first_name = Column(String(64, u'utf8_unicode_ci'), index=True)
>>> middle_name = Column(String(64, u'utf8_unicode_ci'))
>>> last_name = Column(String(64, u'utf8_unicode_ci'), index=True)
>>> display_name = Column(String(128, u'utf8_unicode_ci'))
>>>   
>>> class Contribution(Base):
>>> __tablename__ = 'civicrm_contribution'
>>>
>>> id = Column(INTEGER, primary_key=True)
>>> contact_id = Column(ForeignKey(u'civicrm_contact.id', 
>>> ondelete=u'CASCADE'), nullable=False, index=True)
>>> financial_type_id = Column(ForeignKey(u'civicrm_financial_type.id'), 
>>> index=True)
>>> contribution_page_id = Column(ForeignKey(u'
>>> civicrm_contribution_page.id', ondelete=u'SET NULL'), index=True)
>>> payment_instrument_id = Column(INTEGER, index=True)
>>> receive_date = Column(DateTime, index=True)
>>> non_deductible_amount = Column(DECIMAL(20, 2), 
>>> server_default=text("'0.00'"))
>>> total_amount = Column(DECIMAL(20, 2), nullable=False)
>>> fee_amount = Column(DECIMAL(20, 2))
>>> net_amount = Column(DECIMAL(20, 2))
>>> trxn_id = Column(String(255, u'utf8_unicode_ci'), unique=True)
>>> invoice_id = Column(String(255, u'utf8_unicode_ci'))
>>> currency = Column(String(3, u'utf8_unicode_ci'))
>>> cancel_date = Column(DateTime)
>>> cancel_reason = Column(Text(collation=u'utf8_unicode_ci'))
>>> receipt_date = Column(DateTime)
>>> thankyou_date = Column(DateTime)
>>> source = Column(String(255, u'utf8_unicode_ci'), index=True)
>>> amount_level = Column(Text(collation=u'utf8_unicode_ci'))
>>> contribution_recur_id = Column(ForeignKey(u'
>>> civicrm_contribution_recur.id', ondelete=u'SET NULL'), index=True)
>>> is_test = Column(Integer, server_default=text("'0'"))
>>> is_pay_later = Column(Integer, server_default=text("'0'"))
>>> contribution_status_id = Column(INTEGER, index=True)
>>> address_id = Column(ForeignKey(u'civicrm_address.id', 
>>> ondelete=u'SET NULL'), index=True)
>>> check_number = Column(String(255, u'utf8_unicode_ci'))
>>> campaign_id = Column(ForeignKey(u

Re: [sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-25 Thread Peter Schutt
ite-packages-2/pymysql/cursors.py",
>  
> line 170, in execute
> result = self._query(query)
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/cursors.py",
>  
> line 328, in _query
> conn.query(q)
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/connections.py",
>  
> line 517, in query
> self._affected_rows = self._read_query_result(unbuffered=unbuffered)
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/connections.py",
>  
> line 732, in _read_query_result
> result.read()
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/connections.py",
>  
> line 1075, in read
> first_packet = self.connection._read_packet()
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/connections.py",
>  
> line 684, in _read_packet
> packet.check_error()
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/protocol.py",
>  
> line 220, in check_error
> err.raise_mysql_exception(self._data)
>   File 
> "/private/var/mobile/Containers/Shared/AppGroup/05D14C00-CCAC-4E98-A3CC-96C6944B8534/Pythonista3/Documents/site-packages-2/pymysql/err.py",
>  
> line 109, in raise_mysql_exception
> raise errorclass(errno, errval)
> InternalError: (pymysql.err.InternalError) (1054, u"Unknown column 
> 'civicrm_address.postalcode' in 'field list'") [SQL: u'SELECT DISTINCT 
> civicrm_contact.last_name AS civicrm_contact_last_name, 
> civicrm_contact.first_name AS civicrm_contact_first_name, 
> civicrm_address.street_address AS civicrm_address_street_address, 
> civicrm_address.city AS civicrm_address_city, civicrm_address.postalcode AS 
> civicrm_address_postalcode, civicrm_state_province.name AS 
> civicrm_state_province_name, civicrm_country.name AS country \nFROM 
> civicrm_contact, civicrm_address, civicrm_state_province, civicrm_country, 
> civicrm_entity_tag \nWHERE civicrm_contact.id = 
> civicrm_entity_tag.entity_id AND civicrm_entity_tag.tag_id = %(tag_id_1)s 
> AND civicrm_contact.id = civicrm_address.contact_id AND 
> civicrm_address.state_province_id = civicrm_state_province.id AND 
> civicrm_address.country_id = civicrm_country.id AND 
> civicrm_contact.display_name NOT IN (SELECT civicrm_contact.display_name 
> \nFROM civicrm_contribution \nWHERE civicrm_contribution.receive_date > 
> %(receive_date_1)s AND civicrm_contribution.contact_id = 
> civicrm_contact.id AND civicrm_contact.id = civicrm_entity_tag.entity_id 
> AND civicrm_entity_tag.tag_id = %(tag_id_2)s)'] [parameters: 
> {u'receive_date_1': datetime.date(2005, 7, 1), u'tag_id_1': 6, u'tag_id_2': 
> 6}]
>
>
> On Thursday, August 22, 2019 at 9:51:22 PM UTC-4, Ira Fuchs wrote:
>>
>> Yes, that would appear to be the problem. When I was in 2.7 it worked and 
>> in 3.6 the version of pymysql requires a later version (>5.5) of the 
>> server. I am not able to upgrade the server at this point so I need to 
>> figure out how to get my notebook back to Python 2 for the time being.
>>
>> On Aug 22, 2019, at 9:37 PM, Peter Schutt wrote:
>>
>> Some time between yesterday and today you have switched python 
>> interpreters between 2.7 and 3.6. Yesterday your errors were originating 
>> from modules located in "/Users/ihf/anaconda2/lib/python2.7/", today they 
>> seem to be coming from "~/anaconda2/lib/python3.6/". To be honest, it's 
>> better if you are using python 3.x as 2.7 goes end of life in only a few 
>> short months. This would explain why you had to reinstall pymysql as you 
>> are now working in a totally different environment.
>>
>> The last error you've shown originates from the database layer. What 
>> version of mysql are you using and might that have changed along with your 
>> environment? utf8mb4 was introduced in 5.5.3, read more here: 
>> https://stackoverflow.com/questions/21911733/error-1115-42000-unknown-character-set-utf8mb4
>> .
>>
>> On Friday, 23 August 2019 11:05:14 UTC+10, Ira Fuchs wrote:
>>>
>>> OK, I made a few changes/corrections to the Class definitions:
>>>
>>> class Contact(Base):
>>> __tablen

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-22 Thread Peter Schutt
Some time between yesterday and today you have switched python interpreters 
between 2.7 and 3.6. Yesterday your errors were originating from modules 
located in "/Users/ihf/anaconda2/lib/python2.7/", today they seem to be 
coming from "~/anaconda2/lib/python3.6/". To be honest, it's better if you 
are using python 3.x as 2.7 goes end of life in only a few short months. 
This would explain why you had to reinstall pymysql as you are now working 
in a totally different environment.

The last error you've shown originates from the database layer. What 
version of mysql are you using and might that have changed along with your 
environment? utf8mb4 was introduced in 5.5.3, read more here: 
https://stackoverflow.com/questions/21911733/error-1115-42000-unknown-character-set-utf8mb4
.

On Friday, 23 August 2019 11:05:14 UTC+10, Ira Fuchs wrote:
>
> OK, I made a few changes/corrections to the Class definitions:
>
> class Contact(Base):
> __tablename__ = "civicrm_contact"
> id = Column(Integer, primary_key=True)
> first_name = Column(String(64, u'utf8_unicode_ci'), index=True)
> middle_name = Column(String(64, u'utf8_unicode_ci'))
> last_name = Column(String(64, u'utf8_unicode_ci'), index=True)
> display_name = Column(String(128, u'utf8_unicode_ci'))
>   
> class Contribution(Base):
> __tablename__ = 'civicrm_contribution'
>
> id = Column(INTEGER, primary_key=True, comment=u'Contribution ID')
> contact_id = Column(ForeignKey(u'civicrm_contact.id', 
> ondelete=u'CASCADE'), nullable=False, index=True)
> financial_type_id = Column(ForeignKey(u'civicrm_financial_type.id'), 
> index=True)
> contribution_page_id = Column(ForeignKey(u'
> civicrm_contribution_page.id', ondelete=u'SET NULL'), index=True)
> payment_instrument_id = Column(INTEGER, index=True, comment=u'FK to 
> Payment Instrument')
> receive_date = Column(DateTime, index=True, comment=u'when was gift 
> received')
> non_deductible_amount = Column(DECIMAL(20, 2), 
> server_default=text("'0.00'"))
> total_amount = Column(DECIMAL(20, 2), nullable=False)
> fee_amount = Column(DECIMAL(20, 2), comment=u'actual processor fee if 
> known - may be 0.')
> net_amount = Column(DECIMAL(20, 2))
> trxn_id = Column(String(255, u'utf8_unicode_ci'), unique=True)
> invoice_id = Column(String(255, u'utf8_unicode_ci'))
> currency = Column(String(3, u'utf8_unicode_ci'))
> cancel_date = Column(DateTime, comment=u'when was gift cancelled')
> cancel_reason = Column(Text(collation=u'utf8_unicode_ci'))
> receipt_date = Column(DateTime)
> thankyou_date = Column(DateTime, comment=u'when (if) was donor 
> thanked')
> source = Column(String(255, u'utf8_unicode_ci'), index=True, 
> comment=u'Origin of this Contribution.')
> amount_level = Column(Text(collation=u'utf8_unicode_ci'))
> contribution_recur_id = Column(ForeignKey(u'
> civicrm_contribution_recur.id', ondelete=u'SET NULL'), index=True)
> is_test = Column(Integer, server_default=text("'0'"))
> is_pay_later = Column(Integer, server_default=text("'0'"))
> contribution_status_id = Column(INTEGER, index=True)
> address_id = Column(ForeignKey(u'civicrm_address.id', ondelete=u'SET 
> NULL'), index=True)
> check_number = Column(String(255, u'utf8_unicode_ci'))
> campaign_id = Column(ForeignKey(u'civicrm_campaign.id', 
> ondelete=u'SET NULL'), index=True)
> tax_amount = Column(DECIMAL(20, 2), comment=u'Total tax amount of this 
> contribution.')
> creditnote_id = Column(String(255, u'utf8_unicode_ci'), index=True)
> revenue_recognition_date = Column(DateTime, comment=u'Stores the date 
> when revenue should be recognized.')
> invoice_number = Column(String(255, u'utf8_unicode_ci'), 
> comment=u'Human readable invoice number')
>
> address = relationship(u'CivicrmAddress')
> contact = relationship(u'CivicrmContact')
> class Address(Base):
> __tablename__ = "civicrm_address"
> id = Column(Integer, primary_key=True)
> contact_id = Column(ForeignKey(u'civicrm_contact.id', 
> ondelete=u'CASCADE'), index=True)
> street_address = Column(String(96, u'utf8_unicode_ci'))
> city = Column(String(64, u'utf8_unicode_ci'))
> postalcode = Column(String(64, u'utf8_unicode_ci'))
> state_province_id = Column(String(64))
> country_id = Column(ForeignKey(u'civicrm_country.id', ondelete=u'SET 
> NULL'))
> class Country(Base):
> __tablename__ = "civicrm_country"
> id = Column(Integer, primary_key=True)
> name = Column(String(64, u'utf8_unicode_ci'))
> class State(Base):
> __tablename__ = "civicrm_state_province"
> id = Column(Integer, primary_key=True)
> name = Column(String(64, u'utf8_unicode_ci'))
> abbreviation = Column(String(4, u'utf8_unicode_ci'))
> country_id = Column(ForeignKey(u'civicrm_country.id'))
> class Entity_Tag(Base):
> __tablename__ = "civicrm_entity_tag"
> id = Column(INTEGER, primary_key=True)
> 

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-22 Thread Peter Schutt
The `ondelete=...` keyword argument is a parameter to the `ForeignKey()` 
constructor, not the `Column`.

This:

contact_id = Column(ForeignKey(u'civicrm_contact.id'), ondelete=u'CASCADE',
                    nullable=False, index=True)

Should be this:

contact_id = Column(ForeignKey(u'civicrm_contact.id', ondelete=u'CASCADE'),
                    nullable=False, index=True)


On Friday, 23 August 2019 05:16:32 UTC+10, Ira Fuchs wrote:
>
> I fixed the syntax errors and tried your rewritten query but I got an 
> error in the definitions:
>
> TypeError                                 Traceback (most recent call last)
> <ipython-input> in <module>()
>       7     display_name = Column(String(128, u'utf8_unicode_ci'))
>       8
> ----> 9 class CivicrmContribution(Base):
>      10     __tablename__ = 'civicrm_contribution'
>      11
>
> <ipython-input> in CivicrmContribution()
>      11
>      12     id = Column(INTEGER, primary_key=True, comment=u'Contribution ID')
> ---> 13     contact_id = Column(ForeignKey(u'civicrm_contact.id'), ondelete=u'CASCADE', nullable=False, index=True)
>      14     financial_type_id = Column(ForeignKey(u'civicrm_financial_type.id'), index=True)
>      15     contribution_page_id = Column(ForeignKey(u'civicrm_contribution_page.id', ondelete=u'SET NULL'), index=True)
>
> /Users/ihf/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/schema.pyc in __init__(self, *args, **kwargs)
>    1382         self.info = kwargs.pop("info")
>    1383
> -> 1384         self._extra_kwargs(**kwargs)
>    1385
>    1386     def _extra_kwargs(self, **kwargs):
>
> /Users/ihf/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/schema.pyc in _extra_kwargs(self, **kwargs)
>    1385
>    1386     def _extra_kwargs(self, **kwargs):
> -> 1387         self._validate_dialect_kwargs(kwargs)
>    1388
>    1389     #@property
>
> /Users/ihf/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/base.pyc in _validate_dialect_kwargs(self, kwargs)
>     287                 raise TypeError(
>     288                     "Additional arguments should be "
> --> 289                     "named <dialectname>_<argname>, got '%s'" % k
>     290                 )
>     291             dialect_name, arg_name = m.group(1, 2)
>
> TypeError: Additional arguments should be named <dialectname>_<argname>, got 'ondelete'
>
>
> On Wednesday, August 21, 2019 at 8:55:24 PM UTC-4, Peter Schutt wrote:
>>
>> A couple of typos found re-reading my post, the Contribution.contact_id 
>> with a foreign key definition should be  `contact_id = Column(Integer, 
>> ForeignKey('civicrm_contact'), nullable=False)`, I left out the closing 
>> parenthesis to the ForeignKey constructor.
>>
>> Also, the very last line, I didn't close the parenthesis for the 
>> placeholders example, should read:  "(e.g. `%(tag_id_2)s`) are passed...".
>>
>> Couldn't find a way to edit the original.
>>
>> On Thursday, 22 August 2019 10:43:17 UTC+10, Peter Schutt wrote:
>>>
>>> Hi Ira,
>>>
>>> For example Integer(xx) says that Integer cannot have parameters and 
>>>> Tinyint seems not to exist.
>>>
>>>
>>> I'm aware of sqlacodegen, although never had the need to use it myself. 
>>> Those issues sound to me like it is using the mysql dialect types, not the 
>>> standard sqlalchemy types. For example `from sqlalchemy import Integer` 
>>> will not accept an argument but `from sqlalchemy.dialects.mysql import 
>>> INTEGER` does accept display_width as a parameter amongst others. Also, 
>>> `from sqlalchemy import TinyInteger` will raise an import error but `from 
>>> sqlalchemy.dialects.mysql import TINYINT` works.
>>>
>>> I presume that this front-end overhead will pay back when I write more 
>>>> interesting scripts that use data from Civicrm. If this works then I can 
>>>> map the remainder of the (many) tables.
>>>
>>>
>>> Well, that's certainly the idea. The abstraction allows for writing code 
>>> that is both easier to write and read. Essentially you write a Python app 
>>> and let sqlalchemy handle the bridge between your application and the 
>>> database. It's not a magic bullet, there are plenty of sql paradigms still 
>>> exposed to you through the ORM and there are limitations to using it, 
>>> primarily performance relative to doing bulk operations (for example, see 
>>> here 
>>> <https://docs.sqlalchemy.org/en/13/faq/performance.html#i-m-inserting-400-000-rows-with-the-orm-and-it-s-really-slow>).
>>>  
>>> Like any tool, you need to weigh up the pros and cons. For example, if you 
>>> are just using the ORM in order to generate schema or construct raw 
>>

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-21 Thread Peter Schutt
A couple of typos found re-reading my post, the Contribution.contact_id 
with a foreign key definition should be  `contact_id = Column(Integer, 
ForeignKey('civicrm_contact'), nullable=False)`, I left out the closing 
parenthesis to the ForeignKey constructor.

Also, the very last line, I didn't close the parenthesis for the 
placeholders example, should read:  "(e.g. `%(tag_id_2)s`) are passed...".

Couldn't find a way to edit the original.

On Thursday, 22 August 2019 10:43:17 UTC+10, Peter Schutt wrote:
>
> Hi Ira,
>
> For example Integer(xx) says that Integer cannot have parameters and 
>> Tinyint seems not to exist.
>
>
> I'm aware of sqlacodegen, although never had the need to use it myself. 
> Those issues sound to me like it is using the mysql dialect types, not the 
> standard sqlalchemy types. For example `from sqlalchemy import Integer` 
> will not accept an argument but `from sqlalchemy.dialects.mysql import 
> INTEGER` does accept display_width as a parameter amongst others. Also, 
> `from sqlalchemy import TinyInteger` will raise an import error but `from 
> sqlalchemy.dialects.mysql import TINYINT` works.
>
> I presume that this front-end overhead will pay back when I write more 
>> interesting scripts that use data from Civicrm. If this works then I can 
>> map the remainder of the (many) tables.
>
>
> Well, that's certainly the idea. The abstraction allows for writing code 
> that is both easier to write and read. Essentially you write a Python app 
> and let sqlalchemy handle the bridge between your application and the 
> database. It's not a magic bullet, there are plenty of sql paradigms still 
> exposed to you through the ORM and there are limitations to using it, 
> primarily performance relative to doing bulk operations (for example, see 
> here 
> <https://docs.sqlalchemy.org/en/13/faq/performance.html#i-m-inserting-400-000-rows-with-the-orm-and-it-s-really-slow>).
>  
> Like any tool, you need to weigh up the pros and cons. For example, if you 
> are just using the ORM in order to generate schema or construct raw 
> queries, there are better ways. The benefit of the ORM comes once you start 
> understanding and using the richer feature set. For example, relationships.
>
> In the models that you built above, I notice that you haven't defined any 
> foreign keys on columns. I understand that your schema is already created 
> and so the foreign keys are already defined at the database level, but that 
> information is useful to sqlalchemy when you want to create relationships 
> <https://docs.sqlalchemy.org/en/13/orm/tutorial.html#building-a-relationship> 
> and 
> explicit joins in your queries. One example of applying a foreign key to 
> your columns would be on the `Contribution.contact_id` column. Defining 
> that as a foreign key would be as simple as changing the definition to 
> `contact_id = Column(Integer, ForeignKey('civicrm_contact', 
> nullable=False)`. This would then allow you to define a relationship 
> attribute on your Contribution model, e.g. `contact = 
> relationship('Contact')`. That allows you to access the instance that 
> represents the contact associated with a given contribution through 
> instance attribute access, e.g. `contribution_instance.contact` would 
> return an instance of `Contact` and sqlalchemy will issue the necessary 
> queries behind the scenes to make that happen.
>
> Now, how to create the equivalent SQL query?
>
>
> Well, I've had to make a couple of guesses to fill in a couple of blanks. 
> I assumed that the column in your query `state_province_id` should be an 
> attribute on the `Address` model, and that the `display_name` column 
> belongs to the `Entity_Tag` model, as neither of those fields are defined 
> elsewhere. We haven't spoken about the Session in any detail yet either, 
> but you can read 
> https://docs.sqlalchemy.org/en/13/orm/tutorial.html#creating-a-session if 
> you need. I've purely tried to emulate your original query as closely as 
> possible.
>
> s = Session()
> subquery = (
> s.query(Entity_Tag.display_name)
> .filter(
> Contribution.receive_date > datetime.date(2005, 7, 1),
> Contribution.contact_id == Contact.id,
> Contact.id == Entity_Tag.entity_id,
> Entity_Tag.tag_id == 6,
> )
> .subquery()
> )
> result = (
> s.query(
> Contact.last_name,
> Contact.first_name,
> Address.street_address,
> Address.city,
> Address.postalcode,
> State.name,
> Country.name.label("country"),
> )
> .filter(
> Co

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-21 Thread Peter Schutt
(tag_id_1)s AND civicrm_contact.id = 
civicrm_address.contact_id AND civicrm_address.state_province_id = 
civicrm_state_province.id AND civicrm_address.country_id = civicrm_country.id 
AND civicrm_entity_tag.display_name NOT IN (SELECT civicrm_entity_tag.
display_name
FROM civicrm_contribution
WHERE civicrm_contribution.receive_date > %(receive_date_1)s AND 
civicrm_contribution.contact_id = civicrm_contact.id AND civicrm_contact.id 
= civicrm_entity_tag.entity_id AND civicrm_entity_tag.tag_id = %(tag_id_2)s)

Notice that the `FROM` clause in the subquery only lists one table; this is 
because the other tables have been automatically correlated with the tables 
referenced in the enclosing query. See here: 
https://docs.sqlalchemy.org/en/13/core/tutorial.html#correlated-subqueries.

The scalar values in the query that have been substituted with placeholders 
(e.g. `%(tag_id_2)s`) are passed to the db driver in dictionary form along 
with the query, e.g. `{'tag_id_1': 6, 'receive_date_1': datetime.date(2005, 
7, 1), 'tag_id_2': 6}`.
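
If you want to see those values from Python rather than reading the echoed SQL, 
the compiled statement exposes them; a quick sketch (assuming `s.bind` is the 
Engine the Session was created with):

# 'result' is the Query object built above
compiled = result.statement.compile(s.bind)
print(compiled)          # the SELECT with the %(...)s placeholders
print(compiled.params)   # {'tag_id_1': 6, 'receive_date_1': datetime.date(2005, 7, 1), 'tag_id_2': 6}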

On Wednesday, 21 August 2019 10:20:30 UTC+10, Ira Fuchs wrote:
>
> I noticed that some of the definitions created by sqlacodegen are 
> resulting in errors in python. For example Integer(xx) says that Integer 
> cannot have parameters and Tinyint seems not to exist. Perhaps this is a 
> result of my running a less than current version of mysql?
>
> On Tuesday, August 20, 2019 at 6:58:03 PM UTC-4, Ira Fuchs wrote:
>>
>> Just an aside: I discovered the sqlacodegen tool which will create the 
>> sqlalchemy class definitions automatically. I ran it against the civicrm 
>> mysql db and it worked. The definitions comprise 5881 lines (428KB). 
>> Fortunately I don't need much of it for my purposes.
>>
>> On Tuesday, August 20, 2019 at 6:15:46 PM UTC-4, Ira Fuchs wrote:
>>>
>>> I think I may have all the definitions:
>>>
>>> class Contact(Base):
>>> __tablename__ = "civicrm_contact"
>>> id = Column(Integer, primary_key=True)
>>> last_name = Column(String(20), nullable=False)
>>> first_name = Column(String(20), nullable=False)
>>> class Contribution(Base):
>>> __tablename__ = "civicrm_contribution"
>>> id = Column(Integer, primary_key=True)
>>> contact_id = Column(Integer, nullable=False)
>>> receive_date = Column(DateTime,nullable=False)
>>> total_amount = Column(DECIMAL(precision=20,scale=2), nullable=False)
>>> class Address(Base):
>>> __tablename__ = "civicrm_address"
>>> id = Column(Integer, primary_key=True)
>>> contact_id = Column(Integer, nullable=False)
>>> street_address = Column(String(96), nullable=False)
>>> city = Column(String(64), nullable=False)
>>> postalcode = Column(String(64), nullable=False)
>>> country_id = Column(Integer, nullable=False)
>>> class Country(Base):
>>> __tablename__ = "civicrm_country"
>>> id = Column(Integer, primary_key=True)
>>> name = Column(String(64))
>>> class State(Base):
>>> __tablename__ = "civicrm_state_province"
>>> id = Column(Integer, primary_key=True)
>>> name = Column(String(64), nullable=False)
>>> abbreviation = Column(String(4), nullable=False)
>>> country_id = Column(Integer, nullable=False)
>>> class Entity_Tag(Base):
>>> __tablename__ = "civicrm_entity_tag"
>>> id = Column(Integer, primary_key=True)
>>> entity_id = Column(Integer, nullable=False)
>>> tag_id = Column(Integer, nullable=False)
>>>
>>> Now, how to create the equivalent SQL query?  I presume that this 
>>> front-end overhead will pay back when I write more interesting scripts that 
>>> use data from Civicrm. If this works then I can map the remainder of the 
>>> (many) tables.
>>>
>>> On Tuesday, August 20, 2019 at 5:15:36 PM UTC-4, Ira Fuchs wrote:
>>>>
>>>> OK, fixed the case:
>>>>
>>>> class Contact(Base):
>>>> __tablename__ = "civicrm_contact"
>>>> id = Column(Integer, primary_key=True)
>>>> last_name = Column(String(20), nullable=False)
>>>> first_name = Column(String(20), nullable=False)
>>>> class Contribution(Base):
>>>> __tablename__ = "civicrm_contribution"
>>>> id = Column(Integer, primary_key=True)
>>>> contact_id = Column(Integer, nullable=False)
>>>> receive_date = Column(DateTime,nullable=False)
>>>> total_amount = Column(DECIM

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-19 Thread Peter Schutt
Cool, how about we walk through creating a model for your schema as an 
example. As we go I can point you to the relevant sections of the 
tutorials/docs as they are great and will explain the details much better 
than I can, and I'll try to address any questions you have along the way.

The first thing you need is your Base class, all of your ORM models will 
inherit from this:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


Your models need to inherit from Base, need a __tablename__ class attribute 
and at least a primary key column. For example, a class to represent your 
"db_contact" table might look like this:

from sqlalchemy import Column, Integer, String

class Contact(Base):

__tablename__ = "db_contact"

id = Column(Integer, primary_key=True)
last_name = Column(String(20), nullable=False)
first_name = Column(String(20), nullable=False)

Some things to note this far along:

   - In this example, the name of the Column in the database is inferred 
   from the name of the class attribute that the Column is assigned to.
   - Column constructors take arguments that define specifics of how the 
   column is defined in the database. E.g. `primary_key=True`, 
   `nullable=False`. By declaring a single integer column as primary key, it 
   will automatically be specified as autoincrement in the schema, as well as 
   the primary key. Setting nullable=False on other columns declares that a 
   NOT NULL constraint should be set on those columns.
   - As you are using MySQL, String types (which represent VARCHAR type in 
   the database) require a length. If you were creating models for other 
   backends such as SQLite or postgres, then that isn't required.

Nothing that I've covered here isn't covered in the early stages of the ORM 
Tutorial 
<https://docs.sqlalchemy.org/en/13/orm/tutorial.html#declare-a-mapping>. 
Also, you can read the Column API 
<https://docs.sqlalchemy.org/en/13/core/metadata.html#sqlalchemy.schema.Column> 
to 
get a feel for the arguments that you can pass to Column in order to define 
your schema.

One more important element is the Foreign Key. I'll use your 
"db_entity_tag" table to give you an example of that:

from sqlalchemy import ForeignKey

class EntityTag(Base):

__tablename__ = "db_entity_tag"

id = Column(Integer, primary_key=True)
entity_id = Column(
Integer, 
ForeignKey('db_contact.id', ondelete="CASCADE"), 
nullable=False,
)

Pass the name of the related column to the ForeignKey object (note that 
this isn't "Class.attributename", it is "tablename.columnname") and pass 
the ForeignKey object as a positional argument to the Column constructor 
_after_ the column type argument. ForeignKey api is documented here 
<https://docs.sqlalchemy.org/en/13/core/constraints.html#sqlalchemy.schema.ForeignKey>
 and 
touched on in the orm tutorial here 
<https://docs.sqlalchemy.org/en/13/orm/tutorial.html#building-a-relationship>
.

Have a go at filling in the columns that I've missed in those models above 
and defining the rest of the tables in your schema and once you get that 
done we can move on. Happy to try to answer any questions you might have 
along the way.


On Monday, 19 August 2019 12:36:21 UTC+10, Ira Fuchs wrote:
>
> Thanks for your reply and offer to help. I am able to create an Engine and 
> connect to the MySQL db. I can execute simple sql queries although I wasn't 
> able to get the query I posted to work due to a syntax error (probably 
> having to do with the quotes). I have not mapped any tables to classes.
>
> On Sunday, August 18, 2019 at 8:54:57 PM UTC-4, Peter Schutt wrote:
>>
>> Hi Ira, I'd be happy to help you find your feet with the SQLAlchemy ORM.
>>
>> In general when creating an application that uses the SQLAlchemy ORM, you 
>> would start with an Engine (for connecting to the db), a declarative base 
>> class (maps db table to python class) and a Session instance (for using a 
>> connection to issue queries). Do you have any familiarity with those 
>> concepts?
>>
>>
>> On Monday, 19 August 2019 02:34:05 UTC+10, Ira Fuchs wrote:
>>>
>>> I am new to sqlalchemy and I would like to begin using it to create 
>>> scripts with some sql queries that I have been using but need to change 
>>> from time to time. It would help a lot if someone could translate one of 
>>> these queries to python sqlalchemy using an ORM (not simply executing the 
>>> query as is) so that I could use this as a template for other queries, or 
>>> at least learn from it.
>>>
>>> One of my queries looks like this:
>>>
>>> SELECT DISTINCT last_name, first_name, street_address, city, a.name, 
>>> postal_code

[sqlalchemy] Re: Translating sql query to sqlalchemy

2019-08-18 Thread Peter Schutt
Hi Ira, I'd be happy to help you find your feet with the SQLAlchemy ORM.

In general when creating an application that uses the SQLAlchemy ORM, you 
would start with an Engine (for connecting to the db), a declarative base 
class (maps db table to python class) and a Session instance (for using a 
connection to issue queries). Do you have any familiarity with those 
concepts?


On Monday, 19 August 2019 02:34:05 UTC+10, Ira Fuchs wrote:
>
> I am new to sqlalchemy and I would like to begin using it to create 
> scripts with some sql queries that I have been using but need to change 
> from time to time. It would help a lot if someone could translate one of 
> these queries to python sqlalchemy using an ORM (not simply executing the 
> query as is) so that I could use this as a template for other queries, or 
> at least learn from it.
>
> One of my queries looks like this:
>
> SELECT DISTINCT last_name, first_name, street_address, city, a.name, 
> postal_code, f.name as country FROM db_contact c, db_entity_tag d , 
> db_address e, db_state_province a, db_country f WHERE c.id = d.entity_id 
> and tag_id = 6 and c.id = e.contact_id AND state_province_id = a.id  and 
> e.country_id = f.id and  display_name not in ( SELECT display_name FROM 
> db_contribution, db_contact c, db_entity_tag d WHERE receive_date > 
> '2005-07-01' and contact_id = c.id and c.id = entity_id and tag_id = 6 )
>
> Thanks for any help.
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/0fbaaee5-5814-492d-b4c6-6b4c4631cb87%40googlegroups.com.


Re: [sqlalchemy] loss of precision when retrieving DATETIME2 column from MSSQL

2019-07-29 Thread peter bell
A belated thank you for your response.

This worked fine for individual tables but I got an unexpected result (at 
least, unexpected to me) when using this approach with the union or 
union_all functions.

The TypeDecorator was only applied to the first table in the union / 
union_all.  I'm sure I can work around this (but just thought I'd let you 
know).

Example code below.

Regards,
Peter


from sqlalchemy import (create_engine, TypeDecorator, String, Integer, 
event, MetaData, cast)
from sqlalchemy.dialects.mssql import DATETIME2
from sqlalchemy.schema import (Table, Column)

# TypeDecorator to cast DATETIME2 columns to String
class StringDate(TypeDecorator):
    impl = DATETIME2

    def column_expression(self, col):
        return cast(col, String)

# event listener to apply StringDate on Table reflection
def listen_for_reflect(inspector, table, column_info):
    "receive a column reflect event"
    if isinstance(column_info['type'], DATETIME2):
        column_info['type'] = StringDate

event.listen(Table,'column_reflect',listen_for_reflect)

engine = create_engine("some_db_url")

# create test tables (to mimic mssql temporal tables)
meta = MetaData()
Table('t', meta,
      Column('Id', Integer, primary_key=True),
      Column('SysVerStart', DATETIME2),
      Column('SysVerEnd', DATETIME2)
)

Table('t_history', meta,
      Column('Id', Integer),
      Column('SysVerStart', DATETIME2),
      Column('SysVerEnd', DATETIME2)
)
# create tables in our database
meta.create_all(engine)

# generate select statements using table reflection
meta.clear()
t = Table('t', meta, autoload=True, autoload_with=engine)
t_history = Table('t_history', meta, autoload=True, autoload_with=engine)

print('** StringDate TypeDecorator applied as expected to t :')
print(t.select())

print('** StringDate TypeDecorator applied as expected to t_history :')
print(t_history.select())

print('** StringDate TypeDecorator only applied to the first table in a union_all :')
print(t.select().union_all(t_history.select()))

print('** StringDate TypeDecorator only applied to the first table in a union :')
print(t_history.select().union(t.select()))
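
In case it helps anyone else hitting this, the workaround I have in mind (untested 
sketch) is to bake the cast into each side of the union explicitly, instead of 
relying on column_expression being applied across the compound select:

from sqlalchemy import select

def as_strings(table):
    # cast the DATETIME2 columns to strings inside each SELECT
    return select([
        table.c.Id,
        cast(table.c.SysVerStart, String).label('SysVerStart'),
        cast(table.c.SysVerEnd, String).label('SysVerEnd'),
    ])

print(as_strings(t).union_all(as_strings(t_history)))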


On Thursday, 18 July 2019 21:42:16 UTC+3, Mike Bayer wrote:
>
>
>
> On Thu, Jul 18, 2019, at 1:27 PM, peter bell wrote: 
> >> It seems that you would like to retrieve this value as a string so that 
> you can have precision that's not supported by Python datetime 
> > 
> > Yes. If a table contains DATETIME2 columns, I would like to return all 
> those columns as a string. 
> > 
> > I was able to achieve that by applying your StringDate class explicitly 
> to the 'created' column using table reflection : 
> > 
> > t = Table('t', meta, Column('created', StringDate),autoload=True, 
> autoload_with=engine) 
> > 
> > Is there a way to apply such a transformation to all DATETIME2 columns 
> by default, without explicitly naming them, as above ? 
>
> there are mulitple contexts in which you may be concerned with when you 
> say "by default". if the autoload case is the one you want to address, then 
> you can use the column_reflect event: 
>
>
> https://docs.sqlalchemy.org/en/13/core/events.html?highlight=column_reflect#sqlalchemy.events.DDLEvents.column_reflect
>  
>
> from sqlalchemy.schema import Table 
> from sqlalchemy import event 
>
> def listen_for_reflect(inspector, table, column_info): 
> "receive a column_reflect event" 
> if isinstance(column_info['type'], DATETIME2): 
>     column_info['type'] = StringDatetime 
>
> event.listen( 
> Table, 
> 'column_reflect', 
>     listen_for_reflect) 
>
>
>
>
> > 
> > br 
> > Peter 
> > 
> > 
> > On Thursday, 18 July 2019 18:30:44 UTC+3, Mike Bayer wrote: 
> >> 
> >> 
> >> On Thu, Jul 18, 2019, at 7:56 AM, peter bell wrote: 
> >>> You are correct - it seems the issue is in pyodbc 
> >> 
> >> but the pyodbc issue was fixed over a year ago. It seems that you would 
> like to retrieve this value as a string so that you can have precision 
> that's not supported by Python datetime, so that is not what the pyodbc 
> issue addresses. 
> >> 
> >> For a canonical solution that won't break if pymssql ever changes this, 
> you should use CAST: 
> >> 
> >> stmt = text("SELECT CAST(datecol AS VARCHAR) FROM t") 
> >> stmt = stmt.columns(created=String) 
> >> 
> >> if you want to work with the SQL expression language you can make this 
> cast automatic using column_expression documented at 
> https://docs.sqlalchemy.org/en/13/core/custom_types.html#applying-sql-level-bind-result-processin

Re: [sqlalchemy] loss of precision when retrieving DATETIME2 column from MSSQL

2019-07-18 Thread peter bell


It seems that you would like to retrieve this value as a string so that you 
can have precision that's not supported by Python datetime


Yes. If a table contains DATETIME2 columns, I would like to return all 
those columns as a string. 

I was able to achieve that by applying your StringDate class explicitly to 
the 'created' column using table reflection :

t = Table('t', meta, Column('created', StringDate),autoload=True, 
autoload_with=engine)

Is there a way to apply such a transformation to all DATETIME2 columns by 
default, without explicitly naming them, as above ?

br
Peter


On Thursday, 18 July 2019 18:30:44 UTC+3, Mike Bayer wrote:
>
>
>
> On Thu, Jul 18, 2019, at 7:56 AM, peter bell wrote:
>
> You are correct - it seems the issue is in pyodbc
>
>
> but the pyodbc issue was fixed over a year ago. It seems that you 
> would like to retrieve this value as a string so that you can have 
> precision that's not supported by Python datetime, so that is not what the 
> pyodbc issue addresses.
>
> For a canonical solution that won't break if pymssql ever changes this, 
> you should use CAST:
>
> stmt = text("SELECT CAST(datecol AS VARCHAR) FROM t")
> stmt = stmt.columns(created=String)
>
> if you want to work with the SQL expression language you can make this 
> cast automatic using column_expression documented at 
> https://docs.sqlalchemy.org/en/13/core/custom_types.html#applying-sql-level-bind-result-processing
>
> from sqlalchemy import TypeDecorator, String
>
> class StringDate(TypeDecorator):
> impl = DATETIME2
>
> def column_expression(self, col):
> return cast(col, String)
>
>
>
>
>
>
> I installed pymssql and used that when creating the sqlalchemy engine 
> object.
>
> The DATETIME2 column is now mapped to a string (which has all 7 digits 
> preserved)
>
> Thanks for your help,
>
> br
> Peter
>
> So this :
>
> from sqlalchemy import *
> URL = "mssql+pymssql://MyUser:MyPwd@MyServer/Mydb"
> # engine to the source database
> engine = create_engine(URL)
> # select from t
> stmt = text("SELECT * FROM t")
> # connection object
> conn = engine.connect()
> # run stmt
> result = conn.execute(stmt)
> # print results
for row in result:
    print(row)
>
> Produces this :
>
> (1, '2019-07-18 09:37:05.2347191')
>
> On Thursday, 18 July 2019 13:58:13 UTC+3, Simon King wrote:
>
> Right, but I don't know if the conversion from number to datetime is 
> being done by SQLAlchemy or pyodbc. If it's pyodbc, then you'll need 
> to find the fix there, rather than in SQLAlchemy. 
>
> Simon 
>
> On Thu, Jul 18, 2019 at 11:43 AM peter bell  wrote: 
> > 
> > 
> > I think the issue is more fundamental than that. 
> > 
> > Based on the output in my test program, the mssql DATETIME2 column is 
> being mapped to the Python datetime data type. 
> > 
> > Based on the documentation (
> https://docs.python.org/3/library/datetime.html), that data type can only 
> hold fractional seconds to microsecond precision (6 digits) 
> > 
> > class datetime.datetime 
> > 
> > A combination of a date and a time. Attributes: year, month, day, hour, 
> minute, second, microsecond, and tzinfo. 
> > 
> > 
> > So, is there anyway in SQLAlchemy to map the results of a query to a 
> data type other than Python's datetime.datetime ? 
> > 
> > 
> > regards 
> > 
> > Peter 
> > 
> > 
> > 
> > 
> > 
> > 
> > On Thursday, 18 July 2019 13:02:16 UTC+3, Simon King wrote: 
> >> 
> >> I've never used SQL Server or ODBC, but I wonder if this is a pyodbc 
> issue: 
> >> 
> >> https://github.com/mkleehammer/pyodbc/issues/235 
> >> 
> >> Do you have the same problem if you use pyodbc directly, rather than 
> SQLAlchemy? 
> >> 
> >> Simon 
> >> 
> >> On Thu, Jul 18, 2019 at 10:44 AM peter bell  
> wrote: 
> >> > 
> >> > 
> >> > I am new to sqlalchemy and I am trying to retrieve results from a 
> table containing a DATETIME2 column in a SQL Server database.  A SQL Server 
> DATETIME2 column includes a seven-digit number from 0 to 999 that 
> represents the fractional seconds. 
> >> > 
> >> > When I retrieve the results of the table into sqlalchemy (version 
> 1.3.5), the DATETIME2 column seems to be mapped to a python datetime object 
> (which only has a precision of 6 digits) 
> >> > 
> >> > Is there anyway I can avoid this loss of precision ?? 
> >> > 
> >> > Here's a simple test case t

Re: [sqlalchemy] loss of precision when retrieving DATETIME2 column from MSSQL

2019-07-18 Thread peter bell
You are correct - it seems the issue is in pyodbc

I installed pymssql and used that when creating the sqlalchemy engine 
object.

The DATETIME2 column is now mapped to a string (which has all 7 digits 
preserved)

Thanks for your help,

br
Peter

So this :

from sqlalchemy import *
URL = "mssql+pymssql://MyUser:MyPwd@MyServer/Mydb"
# engine to the source database
engine = create_engine(URL)
# select from t
stmt = text("SELECT * FROM t")
# connection object
conn = engine.connect()
# run stmt
result = conn.execute(stmt)
# print results
for row in result:
    print(row)

Produces this :

(1, '2019-07-18 09:37:05.2347191')

On Thursday, 18 July 2019 13:58:13 UTC+3, Simon King wrote:
>
> Right, but I don't know if the conversion from number to datetime is 
> being done by SQLAlchemy or pyodbc. If it's pyodbc, then you'll need 
> to find the fix there, rather than in SQLAlchemy. 
>
> Simon 
>
> On Thu, Jul 18, 2019 at 11:43 AM peter bell  > wrote: 
> > 
> > 
> > I think the issue is more fundamental than that. 
> > 
> > Based on the output in my test program, the mssql DATETIME2 column is 
> being mapped to the Python datetime data type. 
> > 
> > Based on the documentation (
> https://docs.python.org/3/library/datetime.html), that data type can only 
> hold fractional seconds to microsecond precision (6 digits) 
> > 
> > class datetime.datetime 
> > 
> > A combination of a date and a time. Attributes: year, month, day, hour, 
> minute, second, microsecond, and tzinfo. 
> > 
> > 
> > So, is there anyway in SQLAlchemy to map the results of a query to a 
> data type other than Python's datetime.datetime ? 
> > 
> > 
> > regards 
> > 
> > Peter 
> > 
> > 
> > 
> > 
> > 
> > 
> > On Thursday, 18 July 2019 13:02:16 UTC+3, Simon King wrote: 
> >> 
> >> I've never used SQL Server or ODBC, but I wonder if this is a pyodbc 
> issue: 
> >> 
> >> https://github.com/mkleehammer/pyodbc/issues/235 
> >> 
> >> Do you have the same problem if you use pyodbc directly, rather than 
> SQLAlchemy? 
> >> 
> >> Simon 
> >> 
> >> On Thu, Jul 18, 2019 at 10:44 AM peter bell  
> wrote: 
> >> > 
> >> > 
> >> > I am new to sqlalchemy and I am trying to retrieve results from a 
> table containing a DATETIME2 column in a SQL Server database.  A SQL Server 
> DATETIME2 column includes a seven-digit number from 0 to 999 that 
> represents the fractional seconds. 
> >> > 
> >> > When I retrieve the results of the table into sqlalchemy (version 
> 1.3.5), the DATETIME2 column seems to be mapped to a python datetime object 
> (which only has a precision of 6 digits) 
> >> > 
> >> > Is there anyway I can avoid this loss of precision ?? 
> >> > 
> >> > Here's a simple test case to demonstrate - In my SQL Server database 
> : 
> >> > 
> >> > drop table if exists t 
> >> > 
> >> > create table t (id int, created datetime2 default sysutcdatetime()); 
> >> > 
> >> > insert into t (id) values (1) 
> >> > 
> >> > If I select from my table in SQL Server, the fractional seconds has 7 
> digits : 
> >> > 
> >> > 2019-07-18 09:37:05.2347191 
> >> > 
> >> > Here's my python code using sqlalchemy version 1.3.5 : 
> >> > 
> >> > import urllib 
> >> > from sqlalchemy import * 
> >> > from sqlalchemy.dialects.mssql import \ 
> >> > BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \ 
> >> > DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \ 
> >> > NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \ 
> >> > SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \ 
> >> > TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR 
> >> > 
> >> > params = 'DRIVER={SQL Server Native Client 11.0};' \ 
> >> >  'SERVER=MyDbServer;' \ 
> >> >  'PORT=1433;' \ 
> >> >  'DATABASE=MyDb;' \ 
> >> >  'UID=MyUser;' \ 
> >> >  'PWD=MyPwd;' 
> >> > 
> >> > params = urllib.parse.quote_plus(params) 
> >> > # engine to the source database 
> >> > engine = create_engine('mssql+pyodbc:///?odbc_connect=%s' % params) 
> >> > # select from t 
> >> > stmt = text("SELECT * FROM t") 
> >> > # specify return data type of columns 
> >> > stmt = stmt.

Re: [sqlalchemy] loss of precision when retrieving DATETIME2 column from MSSQL

2019-07-18 Thread peter bell

I think the issue is more fundamental than that.

Based on the output in my test program, the mssql DATETIME2 column is being 
mapped to the Python datetime data type.  

Based on the documentation (https://docs.python.org/3/library/datetime.html), 
that data type can only hold fractional seconds to microsecond precision (6 
digits)

class datetime.datetime

A combination of a date and a time. Attributes: year, month, day, hour, 
minute, second, microsecond, and tzinfo.


So, is there any way in SQLAlchemy to map the results of a query to a data 
type other than Python's datetime.datetime?


regards

Peter





On Thursday, 18 July 2019 13:02:16 UTC+3, Simon King wrote:
>
> I've never used SQL Server or ODBC, but I wonder if this is a pyodbc 
> issue: 
>
> https://github.com/mkleehammer/pyodbc/issues/235 
>
> Do you have the same problem if you use pyodbc directly, rather than 
> SQLAlchemy? 
>
> Simon 
>
> On Thu, Jul 18, 2019 at 10:44 AM peter bell  > wrote: 
> > 
> > 
> > I am new to sqlalchemy and I am trying to retrieve results from a table 
> containing a DATETIME2 column in a SQL Server database.  A SQL Server 
> DATETIME2 column includes a seven-digit number from 0 to 999 that 
> represents the fractional seconds. 
> > 
> > When I retrieve the results of the table into sqlalchemy (version 
> 1.3.5), the DATETIME2 column seems to be mapped to a python datetime object 
> (which only has a precision of 6 digits) 
> > 
> > Is there anyway I can avoid this loss of precision ?? 
> > 
> > Here's a simple test case to demonstrate - In my SQL Server database : 
> > 
> > drop table if exists t 
> > 
> > create table t (id int, created datetime2 default sysutcdatetime()); 
> > 
> > insert into t (id) values (1) 
> > 
> > If I select from my table in SQL Server, the fractional seconds has 7 
> digits : 
> > 
> > 2019-07-18 09:37:05.2347191 
> > 
> > Here's my python code using sqlalchemy version 1.3.5 : 
> > 
> > import urllib 
> > from sqlalchemy import * 
> > from sqlalchemy.dialects.mssql import \ 
> > BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \ 
> > DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \ 
> > NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \ 
> > SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \ 
> > TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR 
> > 
> > params = 'DRIVER={SQL Server Native Client 11.0};' \ 
> >  'SERVER=MyDbServer;' \ 
> >  'PORT=1433;' \ 
> >  'DATABASE=MyDb;' \ 
> >  'UID=MyUser;' \ 
> >  'PWD=MyPwd;' 
> > 
> > params = urllib.parse.quote_plus(params) 
> > # engine to the source database 
> > engine = create_engine('mssql+pyodbc:///?odbc_connect=%s' % params) 
> > # select from t 
> > stmt = text("SELECT * FROM t") 
> > # specify return data type of columns 
> > stmt = stmt.columns(created=DATETIME2) 
> > # connection object 
> > conn = engine.connect() 
> > # run stmt 
> > result = conn.execute(stmt) 
> > # print results 
> > for row in result: 
> > print(row) 
> > 
> > The results in the following out (the last digit of the datetime2 column 
> is lost) : 
> > 
> > (1, datetime.datetime(2019, 7, 18, 9, 37, 5, 234719)) 
> > 
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to sqlal...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> &g

[sqlalchemy] loss of precision when retrieving DATETIME2 column from MSSQL

2019-07-18 Thread peter bell

I am new to sqlalchemy and I am trying to retrieve results from a table 
containing a DATETIME2 column in a SQL Server database.  A SQL Server 
DATETIME2 column includes a seven-digit number from 0 to 9999999 that 
represents the fractional seconds.

When I retrieve the results of the table into sqlalchemy (version 1.3.5), 
the DATETIME2 column seems to be mapped to a python datetime object (which 
only has a precision of 6 digits)

Is there any way I can avoid this loss of precision?

Here's a simple test case to demonstrate - In my SQL Server database :

drop table if exists t

create table t (id int, created datetime2 default sysutcdatetime());

insert into t (id) values (1)

If I select from my table in SQL Server, the fractional seconds has 7 
digits :

2019-07-18 09:37:05.2347191

Here's my python code using sqlalchemy version 1.3.5 :

import urllib
from sqlalchemy import *
from sqlalchemy.dialects.mssql import \
BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \
NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR

params = 'DRIVER={SQL Server Native Client 11.0};' \
 'SERVER=MyDbServer;' \
 'PORT=1433;' \
 'DATABASE=MyDb;' \
 'UID=MyUser;' \
 'PWD=MyPwd;'

params = urllib.parse.quote_plus(params)
# engine to the source database
engine = create_engine('mssql+pyodbc:///?odbc_connect=%s' % params)
# select from t
stmt = text("SELECT * FROM t")
# specify return data type of columns
stmt = stmt.columns(created=DATETIME2)
# connection object
conn = engine.connect()
# run stmt
result = conn.execute(stmt)
# print results
for row in result:
    print(row)

This results in the following output (the last digit of the datetime2 column is 
lost):

(1, datetime.datetime(2019, 7, 18, 9, 37, 5, 234719))


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/d21f62a3-6327-42f6-b2a0-b698032ca859%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Possible regression?

2019-04-28 Thread peter . m . schutt
Hi Lyla,

I notice Head.id is defined as String:

>     id = Column(String, primary_key=True)

While TasselThread.head_id is defined as Integer:

>     head_id = Column(Integer, ForeignKey('head.id'), nullable=False)

Could it be that this mismatch causes the merge to not recognize the existing 
instance, so a new one is created?
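
i.e. something along these lines (a trimmed-down sketch, table names guessed), with 
the foreign key column typed to match the column it references:

class Head(Base):
    __tablename__ = 'head'
    id = Column(String, primary_key=True)

class TasselThread(Base):
    __tablename__ = 'tassel_thread'   # guessed name
    id = Column(Integer, primary_key=True)
    head_id = Column(String, ForeignKey('head.id'), nullable=False)  # String, matching Head.id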

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] test/dialect/test_sqlite.py::TypeReflectionTest::()::test_round_trip_direct_type_affinity fails

2018-06-22 Thread Hans-Peter Jansen
Hi %,

packaging 1.2.8 (and before) fails for openSUSE Tumbleweed, while it succeeds 
for many former distributions due to a single failing test with Python 2.7.15, 
sqlite 3.24, pytest 3.6.0:

FAIL test/dialect/test_sqlite.py::TypeReflectionTest::
()::test_round_trip_direct_type_affinity

Full log available here:

https://build.opensuse.org/build/home:frispete:python/openSUSE_Tumbleweed/x86_64/python-SQLAlchemy/_log

Python 3.6.5 testing isn't performed due to this issue.

Packaging overview:
https://build.opensuse.org/package/show/home:frispete:python/python-SQLAlchemy
Note: we're packaging and testing for all Python variants from a single spec.

For comparison, building with a slightly older environment (openSUSE Leap 
15.0) using Python 2.7.14, Python 3.6.5, sqlite 3.23.1, pytest 3.6.0 succeeds:

https://build.opensuse.org/build/home:frispete:python/openSUSE_Leap_15.0/x86_64/python-SQLAlchemy/_log

It strongly smells like a sqlite 3.24 related issue. Nevertheless, I want to mention a 
significant difference between these environments: Tumbleweed's default gcc is 
8.1.1, while 15.0 still uses 7.3.1, so it might boil down to a compiler issue 
somewhere under the covers.

Does somebody in the audience use sqlite 3.24 already, want to perform the 
SQLAlchemy testsuite, and report back?

Thanks in advance,
Pete

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Any way to suppress CASTs during Concrete Inheritance unions on Oracle ?

2018-06-13 Thread Peter Lai


On Wednesday, June 13, 2018 at 8:28:32 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Jun 13, 2018 at 7:44 PM, Peter Lai  > wrote: 
> > I've implemented a Concrete inheritance model and Oracle 11g is balking 
> on 
> > `CAST(NULL AS CLOB) as fieldn` during the pjoin union query execution 
> with: 
> > 
> > ORA-00932: inconsistent datatypes: expected - got CLOB 
> > 
> > This happens when the unioned tables representing the 2 subclasses have 
> > different `Text` fields which are created as CLOBs on Oracle. This is 
> > independent of cxOracle version, since we get the same problem running 
> the 
> > emitted query in native Oracle SQL Developer: 
> > 
> > `select cast(null as clob) from dual` 
> > 
> > ORA-00932: inconsistent datatypes: expected - got CLOB 
> > 00932. 0 -  "inconsistent datatypes: expected %s got %s" 
> > *Cause: 
> > *Action: 
> > Error at Line: 1 Column: 8 
> > 
> > The rest of the CASTs of NULL work fine, though, but generally NULL 
> columns 
> > should have no datatype in Oracle via NULL definition. I am on 
> SQLAlchemy 
> > 1.1.3. Does 1.2x handle this better? If yes, and I need to hotpatch, 
> which 
> > lines would I need to patch? 
>
> just turn them off: 
>
>
> http://docs.sqlalchemy.org/en/rel_1_1/orm/mapping_api.html?highlight=polymorphic_union#sqlalchemy.orm.util.polymorphic_union.params.cast_nulls
>  
>
> cast_nulls¶ – if True, non-existent columns, which are represented as 
> labeled NULLs, will be passed into CAST. This is a legacy behavior 
> that is problematic on some backends such as Oracle - in which case it 
> can be set to False. 
>
>
Cool! However, I used/ am using AbstractConcreteBase for this and 
apparently my inheritance-foo isn't good enough to avoid completely 
copy-pasting a whole bunch of code from sqlalchemy.ext.declarative.api just 
to override _create_polymorphic_union(). I tried to create a class that 
inherits from AbstractConcreteBase for a simple override and I got a bunch 
of `sqlalchemy.exc.ArgumentError: Mapper 'Mapper|FooBase|None' does not 
have a mapped_table specified.` errors:

Base = declarative_base()

class AbstractConcreteBase_ORA(AbstractConcreteBase):
    # override polymorphic union because of Oracle
    @classmethod
    def _create_polymorphic_union(cls, mappers):
        return polymorphic_union(OrderedDict(
            (mp.polymorphic_identity, mp.local_table)
            for mp in mappers
        ), 'type', 'pjoin', False)

class FooBase(AbstractConcreteBase_ORA, Base):
    common = Column(Integer, primary_key=True)

class Foo(FooBase, Base):
    noncommon = Column(Text)

    __mapper_args__ = {
        'polymorphic_identity': 'foo',
        'concrete': True
    }


class Bar(FooBase, Base):
    noncommon2 = Column(Text)

    __mapper_args__ = {
        'polymorphic_identity': 'bar',
        'concrete': True
    }

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Any way to suppress CASTs during Concrete Inheritance unions on Oracle ?

2018-06-13 Thread Peter Lai
I've implemented a Concrete inheritance model and Oracle 11g is balking on 
`CAST(NULL AS CLOB) as fieldn` during the pjoin union query execution with:

ORA-00932: inconsistent datatypes: expected - got CLOB

This happens when the unioned tables representing the 2 subclasses have 
different `Text` fields which are created as CLOBs on Oracle. This is 
independent of cxOracle version, since we get the same problem running the 
emitted query in native Oracle SQL Developer:

`select cast(null as clob) from dual`

ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 0 -  "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 8

The rest of the CASTs of NULL work fine, though, but generally NULL columns 
should have no datatype in Oracle via NULL definition. I am on SQLAlchemy 
1.1.3. Does 1.2x handle this better? If yes, and I need to hotpatch, which 
lines would I need to patch?

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] How to create multiple TABLES and INSERTS programatically ?

2018-04-03 Thread Peter Lai
I wonder if one can also do this with Declarative Base classes using `type` 
construction? Like, to dynamically create a Declarative Class from some 
pre-defined namespace dicts:

Base = declarative_base()

metas = [{'__tablename__': 'footable'}, ...]

columns = [{ 'key': 'field1', 'type': Text}, ...]

cls_names = ['Foo', ...]


# type() takes (name, bases, namespace): bases must be a tuple and the
# class namespace goes in as a plain dict, not keyword arguments
clses = {name: type(name, (Base,), dict(metas[i], **{c['key']: Column(c['type']) for c in columns}))
         for i, name in enumerate(cls_names)}

Now I should have a dict of classes I can instantiate by class name:

new_foo = clses['Foo'](...)

My particular use case is that I can overload some of the namespaces, to 
decouple the model namespace (the class attributes) from SQLAlchemy, but my 
instance will still have the predefined attribute names - I am storing data 
in either the database or a flatfile and I instantiate either a SQLAlchemy 
declarative object or a generic `object` depending on a flag; after 
instancing (whether to run a query or parse the flatfile), my application 
can refer to the object attributes regardless of where the data comes from.
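
Roughly what I mean, as a sketch (PlainRecord and make_record are names I just 
made up for illustration):

class PlainRecord(object):
    """Plain stand-in carrying the same attribute names, for the flatfile path."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

def make_record(name, from_db, **attrs):
    if from_db:
        return clses[name](**attrs)   # SQLAlchemy declarative instance
    return PlainRecord(**attrs)       # generic object, same attribute access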

On Monday, March 26, 2018 at 2:42:51 PM UTC-4, Mike Bayer wrote:
>
> On Mon, Mar 26, 2018 at 1:54 PM, Prashanth Budihal 
>  wrote: 
> > I have this project(stock market data) where I need many tables(200+) 
> and 
> > each one will receive fresh data inserts periodically(every 5 minutes). 
> I 
> > have gone thru couple of tutorials about sqlalchemy ORM and they all 
> show 
> > how to create a single table and do few inserts. But if I have to create 
> > 200+ tables then should I create/type manually 200+ mapped classes ? 
> Thats 
> > impractical isnt it ? I face same hurdle while doing INSERTS.  Can 
> anybody 
> > here give me a hint atleast how to go about this ? Which part of 
> sqlalchemy 
> > addresses this hurdle. I admit I am a newbie to sql and sqlalchemy so 
> please 
> > bear with me if there is already a solution and I havent studied it yet. 
> > Thanks in advance. 
>
> Python is a fully dynamic language so if you needed 200 classes given 
> a list of 200 names, you would never have to "type" that manually. 
> However if this is a really simple case then just use core: 
>
> my_twohundred_names = [   ... names ] 
>
> with engine.connect() as conn: 
> for name in my_twohundred_names: 
> t = Table(name, MetaData(), Column('q', Integer), Column('p', 
> Integer), ... ) 
> conn.execute(t.insert(), {"q": 5, "p": 10}) 
>
> something like that. 
>
> if these tables have all kinds of different columns you can reflect them: 
>
>
> my_twohundred_names = [   ... names ] 
>
> with engine.connect() as conn: 
> for name in my_twohundred_names: 
> t = Table(name, MetaData(), autoload_with=conn) 
> conn.execute(t.insert(), {"q": 5, "p": 10}) 
>
> but you need to know what the columns are that you are inserting. 
>
>
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to sqlalchemy+...@googlegroups.com . 
> > To post to this group, send email to sqlal...@googlegroups.com 
> . 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Differentiate what raised IntegrityError (fkey not found vs. duplicate pkey)

2018-03-20 Thread Peter Lai
I'm implementing a recursive upsert operation for an object whose primary 
key also contains a foreignkey, and I'd like to get some more info from 
IntegrityError, namely whether integrity was violated because the 
foreignkey didn't exist (yet) or I am trying to insert a duplicate pkey. In 
the former case, I'd go and insert the parent, in the latter, I'd just 
ignore it and rollback (since the state is already end state)

Is there a way to fish this out from IntegrityError or do I need to catch 
DBAPI exceptions instead?
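
For concreteness, this is roughly the branching I'm after (sketch only, assuming a 
psycopg2 backend: exc.orig is the raw DBAPI exception, .pgcode is psycopg2-specific, 
the SQLSTATE values below are my assumptions, and insert_parent stands in for 
whatever creates the missing parent):

from sqlalchemy.exc import IntegrityError

FOREIGN_KEY_VIOLATION = '23503'   # SQLSTATE, PostgreSQL/psycopg2
UNIQUE_VIOLATION = '23505'

def upsert(session, obj, insert_parent):
    try:
        with session.begin_nested():   # SAVEPOINT, so a failure doesn't poison the session
            session.add(obj)
    except IntegrityError as exc:
        code = getattr(exc.orig, 'pgcode', None)
        if code == FOREIGN_KEY_VIOLATION:
            insert_parent(session)                 # create the missing parent...
            upsert(session, obj, insert_parent)    # ...then retry the child
        elif code == UNIQUE_VIOLATION:
            pass                                   # already there: this is the end state anyway
        else:
            raise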

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: base class not inserting correctly on subclass insertion (joined table inheritance)

2018-03-20 Thread peter . lai
Yeah, it took me about 3 hours to realize that in my actual code, I made a 
typo in the __init__ so that I was assigning the uuid to the wrong 
attribute/column. So embarrassing :(
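
(Something along these lines, for anyone hitting the same warning; the exact wrong 
attribute below is illustrative, not the actual line:)

def __init__(self, *args, **kwargs):
    if 'emp_id' not in kwargs:
        self.emp_type = function_to_generate_uuid()   # typo: should have been self.emp_id
    super(Engineer, self).__init__(*args, **kwargs)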

On Monday, March 19, 2018 at 11:04:04 PM UTC-4, Peter Lai wrote:
>
> As seen at: 
> https://stackoverflow.com/questions/49374458/how-to-propagate-subclass-primary-key-to-base-class-primary-key-when-creating-ne
>
> I'm running into a problem where I am using joined table inheritance where 
> the base class carries "upstream" relationships:
>
> class EmployeeType(Base):
>     __tablename__ = 'employee_types'
>
>     emp_type = Column(String(50), primary_key=True)
>     emp_type_description = Column(Text)
>     # we want to constrain the EmployeeTypes
>
> # UUID is a custom TypeDecorator that generates a UUID correctly for all backends
>
> class Employee(Base):
>
>     __tablename__ = 'employees'
>
>     emp_id = Column(UUID, primary_key=True)
>     emp_type = Column(String(50), ForeignKey(EmployeeType.emp_type))
>
>     emp_groups = relationship(EmployeeMembership, backref='employee')
>
>     __mapper_args__ = {
>         'polymorphic_on': emp_type
>     }
>
> # this is a one-to-many association membership-type of object
> class EmployeeMembership(Base):
>     __tablename__ = 'employee_membership'
>
>     emp_id = Column(UUID, ForeignKey(Employee.emp_id), primary_key=True)
>     group = Column(String(25), primary_key=True)
>
>
> class Engineer(Employee):
>     __tablename__ = 'engineers'
>
>     emp_id = Column(UUID, ForeignKey(Employee.emp_id), primary_key=True)
>
>     # emp_name belongs to subclass because within the subclass it needs to be
>     # unique, but not unique across subclasses. There can be 2 different
>     # employees (emp_id are different) named 'Leeroy Jenkins': one is a
>     # manager and one is an engineer.
>     emp_name = Column(String(50), unique=True, nullable=False)
>
>     engineer_attr = Column(Text)
>
>     __mapper_args__ = {
>         'polymorphic_identity': 'engineer'
>     }
>
>     # create a new UUID if I don't give one (new Engineer/Employee)
>     def __init__(self, *args, **kwargs):
>         if 'emp_id' not in kwargs:
>             self.emp_id = function_to_generate_uuid()
>
>         super(Engineer, self).__init__(*args, **kwargs)
> class Manager(Employee):
>     __tablename__ = 'manager'
>
>     emp_id = Column(UUID, ForeignKey(Employee.emp_id), primary_key=True)
>
>     # emp_name belongs to subclass because within the subclass it needs to be
>     # unique, but not unique across subclasses. There can be 2 different
>     # employees (emp_id are different) named 'Leeroy Jenkins': one is a
>     # manager and one is an engineer.
>     emp_name = Column(String(50), unique=True, nullable=False)
>
>     manager_blah = Column(Text)
>
>     __mapper_args__ = {
>         'polymorphic_identity': 'manager'
>     }
>
>     # create a new UUID if I don't give one (new Manager/Employee)
>     def __init__(self, *args, **kwargs):
>         if 'emp_id' not in kwargs:
>             self.emp_id = function_to_generate_uuid()
>
>         super(Manager, self).__init__(*args, **kwargs)
>
>
> The problem is when I go to create either an Engineer or Manager:
>
> new_emp = Engineer(emp_name='Leeroy Jenkins')
> session = Session()
> session.add(new_emp)
> session.commit()
>
>
> I get:
>
> python27\lib\site-packages\sqlalchemy\sql\crud.py:692: SAWarning: Column
> 'employees.emp_id' is marked as a member of the primary key for table
> 'employees', but has no Python-side or server-side default generator
> indicated, nor does it indicate 'autoincrement=True' or 'nullable=True', and
> no explicit value is passed.  Primary key columns typically may not
> store NULL.
>   util.warn(msg)
>
> (psycopg2.IntegrityError) null value in column "emp_id" violates not-null constraint
> DETAIL:  Failing row contains (null, engineer).
> [SQL: 'INSERT INTO employees (emp_type) VALUES (%(emp_type)s)'] [parameters: {'emp_type': u'engineer'}]
>
>
> What do I need to do to get my instance of base class created properly 
> when I create my subclass? I'm still on SQLAlchemy 1.1.13 and py27
>
> Thanks
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] [alembic] Batch migrations with sqlite and naming conventions

2017-04-16 Thread Peter Erickson

Mike -
Thanks for the response (and all the work you've put into SQLAlchemy 
and Alembic). I've played around with the reflect_args and that didn't 
seem to have much effect. When I remove those, the CHECK constraints 
become unnamed. Regardless, I'll continue to mess around with it and see 
if I can get it working. If I still have problems, I'll bug you again. 
Worst case scenario, I'll consider migrating to a database that supports 
ALTER. I planned on doing that at some point anyway, and it might make a 
few things easier.


Thanks again for all the help.


On 4/12/17 8:26 AM, mike bayer wrote:


this area of batch migrations is not well developed, has to deal with
some impossible situations due to SQLite's poor design (allowing unnamed
constraints, not to mention that they refuse to implement ALTER), and is
also in flux (as Alembic is relying upon a buggy behavior in SQLAlchemy
w.r.t the naming convention feature here).

IIRC the naming convention linkage with batch migrations only has to do
with constraints that are unnamed when reflected from the database. That
might also be buggy.

I do notice that you have some reflect_args set up to try helping it
with the Boolean datatype and that might be throwing things off, not
sure.  Or, you can add the constraint_name argument to each of those
Boolean types which would hopefully force it to be the right name (see
http://alembic.zzzcomputing.com/en/latest/batch.html#changing-the-type-of-boolean-enum-and-other-implicit-check-datatypes)


Current SQLAlchemy (1.1) *will* reflect CHECK constraints on SQLite, the
alembic docs are out of date on this.   In theory, your naming
convention should have no impact because these constraints are coming
back with names already.   I'd confirm that these names include the
naming convention already in the database, then try running without any
of the reflect_args added.   in theory these boolean datatypes /
constraints should work without any intervention.

for further help I'd need to build this up here and there are likely
more bugs on the Alembic side to be fixed.




On 04/11/2017 10:58 PM, Peter Erickson wrote:

I'm sure that I'm missing a subtle point with batch migrations and
naming conventions for SQLite databases, but I can't figure out what I'm
doing wrong. After renaming a table, I'm using a batch migration to
update a foreign key in a child table. However, after recreating the
child table 2 CHECK constraints (being created from a Boolean) aren't
being named according to the specified naming convention.

To ensure that this post doesn't get too long, here are the parts I
think are relevant. I can add more if needed.

naming_convention = {
"ix": "ix_%(column_0_label)s",
"uq": "uq_%(table_name)s_%(column_0_name)s",
"ck": "ck_%(table_name)s_%(constraint_name)s",
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"pk": "pk_%(table_name)s"
}

class RulesetMatch(Base):
__tablename__ = 'ruleset_matches'
...
viewed = Column(Boolean(name='view_bool'), default=False)
hide = Column(Boolean(name='hide'), default=False)

# sqlite3 data-dev.sqlite .schema

CREATE TABLE ruleset_matches (
...
viewed BOOLEAN,
hide BOOLEAN,
CONSTRAINT pk_ruleset_matches PRIMARY KEY (id),
CONSTRAINT ck_ruleset_matches_view_bool CHECK (viewed IN (0, 1)),
CONSTRAINT ck_ruleset_matches_hide CHECK (hide IN (0, 1))
);

# migration script

def upgrade():
with op.batch_alter_table(
'ruleset_matches',
naming_convention=naming_convention,
reflect_args=[
Column('viewed', Boolean(name='view_bool')),
Column('hide', Boolean(name='hide'))]) as batch_op:
# drop_constraint
# create_foreign_key

# sqlite3 data-dev.sqlite .schema

CREATE TABLE IF NOT EXISTS "ruleset_matches" (
...
viewed BOOLEAN,
hide BOOLEAN,
CONSTRAINT pk_ruleset_matches PRIMARY KEY (id),
CONSTRAINT view_bool CHECK (viewed IN (0, 1)),
CONSTRAINT hide CHECK (hide IN (0, 1)),
...
);

As shown in the last schema, the CHECK constraints are named "view_bool" and
"hide" instead of the previous "ck_ruleset_matches_view_bool" and
"ck_ruleset_matches_hide". If I remove the "name" attribute in
reflect_args, or remove reflect_args altogether, the CHECK constraints
are unnamed. Removing the naming_convention doesn't seem to affect
anything. What do I need to do to ensure that the constraints are named
appropriately?

alembic 0.9.1
sqlalchemy 1.1.9

Thanks in advance.
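For anyone else hitting this: the Alembic batch-mode docs also describe bypassing reflection entirely via copy_from, which sidesteps the unnamed/renamed CHECK constraint problem because the recreated table is built from your own metadata (naming convention included) rather than from what SQLite reports. A sketch, assuming the model module is importable from the migration (the import path and the dropped/created constraint details below are hypothetical):

    from alembic import op
    from myapp.models import RulesetMatch  # hypothetical import path

    def upgrade():
        # copy_from hands batch mode the full, already-named Table object,
        # so ck_ruleset_matches_view_bool etc. survive the table rebuild
        with op.batch_alter_table(
                'ruleset_matches',
                copy_from=RulesetMatch.__table__) as batch_op:
            # drop_constraint / create_foreign_key as in the original script
            pass

Whether that fits depends on keeping the model definition in sync with the migration history, which has its own trade-offs.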






[sqlalchemy] [alembic] Batch migrations with sqlite and naming conventions

2017-04-11 Thread Peter Erickson
I'm sure that I'm missing a subtle point with batch migrations and 
naming conventions for SQLite databases, but I can't figure out what I'm 
doing wrong. After renaming a table, I'm using a batch migration to 
update a foreign key in a child table. However, after recreating the 
child table 2 CHECK constraints (being created from a Boolean) aren't 
being named according to the specified naming convention.


To ensure that this post doesn't get too long, here are the parts I 
think are relevant. I can add more if needed.


naming_convention = {
"ix": "ix_%(column_0_label)s",
"uq": "uq_%(table_name)s_%(column_0_name)s",
"ck": "ck_%(table_name)s_%(constraint_name)s",
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"pk": "pk_%(table_name)s"
}

class RulesetMatch(Base):
__tablename__ = 'ruleset_matches'
...
viewed = Column(Boolean(name='view_bool'), default=False)
hide = Column(Boolean(name='hide'), default=False)

# sqlite3 data-dev.sqlite .schema

CREATE TABLE ruleset_matches (
...
viewed BOOLEAN,
hide BOOLEAN,
CONSTRAINT pk_ruleset_matches PRIMARY KEY (id),
CONSTRAINT ck_ruleset_matches_view_bool CHECK (viewed IN (0, 1)),
CONSTRAINT ck_ruleset_matches_hide CHECK (hide IN (0, 1))
);

# migration script

def upgrade():
with op.batch_alter_table(
'ruleset_matches',
naming_convention=naming_convention,
reflect_args=[
Column('viewed', Boolean(name='view_bool')),
Column('hide', Boolean(name='hide'))]) as batch_op:
# drop_constraint
# create_foreign_key

# sqlite3 data-dev.sqlite .schema

CREATE TABLE IF NOT EXISTS "ruleset_matches" (
...
viewed BOOLEAN,
hide BOOLEAN,
CONSTRAINT pk_ruleset_matches PRIMARY KEY (id),
CONSTRAINT view_bool CHECK (viewed IN (0, 1)),
CONSTRAINT hide CHECK (hide IN (0, 1)),
...
);

As shown in the last schema, the CHECK constraints are named "view_bool" and
"hide" instead of the previous "ck_ruleset_matches_view_bool" and
"ck_ruleset_matches_hide". If I remove the "name" attribute in
reflect_args, or remove reflect_args altogether, the CHECK constraints
are unnamed. Removing the naming_convention doesn't seem to affect
anything. What do I need to do to ensure that the constraints are named
appropriately?


alembic 0.9.1
sqlalchemy 1.1.9

Thanks in advance.



[sqlalchemy] Postgrsql Array of CIDR and return type as String

2016-02-03 Thread Peter Hudec
Hi,

I have found that SQLAlchemy should convert a PostgreSQL ARRAY to a Python LIST.
This works fine with TEXT.  See the example in this other post:

http://stackoverflow.com/questions/20699196/python-list-to-postgresql-array

(, "['new', 'updated', 'values']")


If I change the column type to CIDR ARRAY I get


(, "'{127.0.0.1/32,::1/128}'")

postgresql version: 9.4 (Debian Jessie)
python: 2.7.9
bottle (0.12.9)
bottle-pgsql (0.2)
bottle-sqlalchemy (0.4.3)
psycopg2 (2.6.1)
SQLAlchemy (1.0.11)


Could anybody please help me understand why the ARRAY of CIDR is not converted to a LIST?


  best regards
Peter Hudec
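One likely cause (a guess, not confirmed in this thread): psycopg2 ships array typecasters for the common scalar types, but not for cidr[], so the raw '{...}' string reaches SQLAlchemy untouched and there is nothing to turn into a list. A sketch of registering a caster yourself; 651 is the cidr[] OID in a stock PostgreSQL install (cidr itself is 650), which you can verify with SELECT typarray FROM pg_type WHERE typname = 'cidr':

    import psycopg2
    import psycopg2.extensions

    # treat cidr[] (OID 651) as an array of plain strings
    CIDRARRAY = psycopg2.extensions.new_array_type(
        (651,), 'CIDRARRAY', psycopg2.STRING)
    psycopg2.extensions.register_type(CIDRARRAY)

After that, a cidr[] column should come back as a Python list such as ['127.0.0.1/32', '::1/128'], both through raw psycopg2 and through SQLAlchemy.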



[sqlalchemy] Obtaining the SQL query for a lazy load programmatically

2014-10-27 Thread Peter Waller
I observe that the query for a lazy load is generated here:

https://bitbucket.org/zzzeek/sqlalchemy/src/e1d1d999c9a688f4c8dbbe885438c63d6ef494c6/lib/sqlalchemy/orm/strategies.py?at=master#cl-551

I would like to run queries through an EXPLAIN QUERY PLAN or an ANALYZE
programmatically for many properties. Is there an easy way to achieve that?

It looks like it isn't, since the query doesn't appear to be exposed. The only way I'm
aware of to get it is to cause a lazy load and have logging enabled (or add
an event handler which observes the executed statement), but I'd like to
know if there is a way to avoid doing that.

Thanks,

- Peter
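One workaround (a sketch, not from the thread): rebuild an equivalent query with Query.with_parent() and feed its compiled SQL to EXPLAIN; it reproduces the lazy loader's criterion (child.parent_id = value) even if it isn't byte-for-byte the same statement. Here session, parent and Child stand in for your own session, a loaded parent instance and the class behind the lazy-loaded attribute:

    from sqlalchemy import text

    q = session.query(Child).with_parent(parent, 'children')

    compiled = q.statement.compile(
        dialect=session.bind.dialect,
        compile_kwargs={'literal_binds': True},  # inline the parent's key values
    )
    for row in session.execute(text('EXPLAIN QUERY PLAN ' + str(compiled))):
        print(row)

(EXPLAIN QUERY PLAN is the SQLite spelling; use plain EXPLAIN or EXPLAIN ANALYZE on PostgreSQL.)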



Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Peter Waller
Well I was hoping to just use yaml since yaml understands when two
objects refer to the same underlying object. That means you don't have to
write any logic to de-duplicate objects through relationships, etc.

Since json doesn't have the notion of referencing, that doesn't seem
straightforward there.

I was also hoping to just use yaml to avoid writing custom dumping code,
since it seems in general like a useful capability. So I may yet try and
find the underlying bug and fix it.
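For what it's worth, one middle ground (a sketch, assuming a declarative base called Base as in the earlier example; foo is a placeholder instance) is to register a representer so yaml dumps only the mapped column data instead of pickle-walking _sa_instance_state; yaml's anchor/alias machinery still deduplicates shared objects:

    import yaml
    from sqlalchemy import inspect

    def represent_mapped(dumper, obj):
        # dump only the column-mapped attributes; skip _sa_instance_state entirely
        data = {attr.key: getattr(obj, attr.key)
                for attr in inspect(obj).mapper.column_attrs}
        return dumper.represent_mapping('!%s' % type(obj).__name__, data)

    # applies to every subclass of the declarative Base
    yaml.add_multi_representer(Base, represent_mapped)

    print(yaml.dump([foo, foo]))  # second occurrence becomes an alias (*id001)

Relationships aren't included by this representer; they'd have to be added to the mapping explicitly if you want them followed.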

On 24 October 2014 15:29, Jonathan Vanasco jvana...@gmail.com wrote:


 Usually for this sort of stuff, I serialize the object's data into a JSON
 dict ( object columns to JSON dict, object relations to a dict, list of
 dicts, or reference to another object).  ( Custom dump/load is needed to
 handle Timestamp, Floats, etc).  You might be able to iterate over the data
 in YAML and not require custom encoding/decoding.  When I need to treat
 the json data as objects, I'll load them into a custom dict class that will
 treat attributes as keys.

 The downside of this is that you don't have all the SqlAlchemy relational
 stuff or any ancillary methods (though they can be bridged in with more
 work).  The benefit though is that you can get a nearly 1:1 parity between
 the core needs without much more work.  When using a read only context,
 you can flip between SqlAlchemy objects and dicts.  If you need to use the
 SqlAlchemy model itself, you could load the column/relationship data into
 it manually.





Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml

2014-10-24 Thread Peter Waller
The oddity is that calling `__reduce_ex__` on the instance is fine, but on
the class it is not. When serialising a declarative class it finds itself
serialising the class type, which fails. This actually fails for the
`object`, too (see below).

So I think what's happening is that serialisation fails because
`_sa_instance_state`
(somewhere inside it) contains a class. This is probably a yaml bug, then.

In [1]: object().__reduce_ex__(2)
Out[1]: (function copy_reg.__newobj__, (object,), None, None, None)

In [2]: object.__reduce_ex__(2)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-eebec0cadfee> in <module>()
----> 1 object.__reduce_ex__(2)

/usr/lib/python2.7/copy_reg.pyc in _reduce_ex(self, proto)
     68         else:
     69             if base is self.__class__:
---> 70                 raise TypeError, "can't pickle %s objects" % base.__name__
     71             state = base(self)
     72             args = (self.__class__, base, state)

TypeError: can't pickle int objects


On 24 October 2014 17:55, Jonathan Vanasco jvana...@gmail.com wrote:


 On Friday, October 24, 2014 10:39:43 AM UTC-4, Peter Waller wrote:

 I was also hoping to just use yaml to avoid writing custom dumping
 code, since it seems in general like a useful capability. So I may yet try
 and find the underlying bug and fix it.


 It might not be a bug, but rather the effect of an implementation feature of
 SqlAlchemy.  I tried (naively) playing around with your example, and
 thought back to how SqlAlchemy accomplishes much of its magic by creating
 custom comparators (and other private methods) on the classes and columns.

 Playing around with it, the problem seems to be with the SqlAlchemy
 object's __reduce_ex__ method. If you simply use __reduce__ in yaml, it
 works.  I couldn't figure out what Foo inherits __reduce_ex__ from, or if
 any of the columns have it.





[sqlalchemy] A complicated composite secondary join relationship() property involving four tables

2014-10-20 Thread Peter Waller
Hi All,

I am wondering if it is possible to describe in sqlalchemy the
following relationship with the new features introduced since 0.9.2.
I've been reading this document and it looks close to what I'm trying
to achieve:

 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#composite-secondary-joins

As does this previous mailing list post:

 https://groups.google.com/d/topic/sqlalchemy/ov-mYWA7XAM/discussion

There is a pre-existing structure to deal with. I would rather not
change the structure if it can be avoided, but may have to resort to
doing so. Here are the tables:

TableName (column, ...)

 PersonAction (PersonID, ActionID)
 PersonUnit (PersonID, UnitID)
 DepartmentUnit (UnitID, DepartmentID)
 ActionMeaning (DepartmentID, ActionID)

It would be nice to be able to go from an ActionMeaning to a
PersonAction and back again. To do that, a conversion between
DepartmentName and DepartmentID needs to happen.

Here are a few of the relationships:

One PersonAction: one ActionMeaning
One ActionMeaning: many PersonAction
One PersonUnit: many PersonAction
One PersonUnit: one DepartmentUnit
One DepartmentID: many UnitID
  (therefore one ActionMeaning: many DepartmentUnit)
One ActionMeaning: Many PersonUnit (via DepartmentUnit)

I'm using the ORM and I would like to be able to write a property on
the PersonAction which allows me to efficiently get to the
ActionMeaning (preferably joinedload'ed), and likewise I would like to
be able to go from an ActionMeaning to all of the related
PersonActions.

Is this possible? Or are there alternative tasteful strategies for
dealing with this tricky structure?

Life would be much easier if I had the DepartmentID on the PersonUnit...

Keywords to help anyone searching in the future:

Composite relationship between multiple tables
Joining multiple tables in a relationship
relationship secondary join primaryjoin secondaryjoin

Thanks,

- Peter



Re: [sqlalchemy] A complicated composite secondary join relationship() property involving four tables

2014-10-20 Thread Peter Waller
Outstanding! My puny mind was just beginning to tickle this
documentation and was starting to get the idea that this is the
right direction. Good to know I'm on track.

Is it also possible that having a class which maps to a join* will
help? Then rather than having PersonAction and PersonUnit I would just
have Person. Would that simplify the relationships?

I'll have a play around with this and see where I get.

Thanks,

- Peter

* http://docs.sqlalchemy.org/en/rel_0_9/orm/mapper_config.html#maptojoin

On 20 October 2014 18:26, Michael Bayer mike...@zzzcomputing.com wrote:

 On Oct 20, 2014, at 1:14 PM, Peter Waller pe...@scraperwiki.com wrote:

 Hi All,

 I am wondering if it is possible to describe in sqlalchemy the
 following relationship with the new features introduced since 0.9.2.
 I've been reading this document and it looks close to what I'm trying
 to achieve:

 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#composite-secondary-joins

 As does this previous mailing list post:

 https://groups.google.com/d/topic/sqlalchemy/ov-mYWA7XAM/discussion

 There is a pre-existing structure to deal with. I would rather not
 change the structure if it can be avoided, but may have to resort to
 doing so. Here are the tables:

 TableName (column, ...)

 PersonAction (PersonID, ActionID)
 PersonUnit (PersonID, UnitID)
 DepartmentUnit (UnitID, DepartmentID)
 ActionMeaning (DepartmentID, ActionID)

 So as far as the patterns at 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html, this one 
 qualifies for 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#relationship-to-non-primary-mapper.
   The reason is because PersonAction has a direct FK to ActionMeaning 
 (ActionID) as well as towards elements on what would be “secondary” here, 
 PersonUnit/DepartmentUnit.  The criteria for that is: we seek to join from A 
 to B, making use of any number of C, D, etc. in between, however there are 
 also join conditions between A and B directly.”

 So you map ActionMeaning using a non-primary mapper to a join of 
 ActionMeaning, DepartmentUnit, and PersonUnit, then PersonAction can relate 
 to this mapper directly.




Re: [sqlalchemy] A complicated composite secondary join relationship() property involving four tables

2014-10-20 Thread Peter Waller
This is how far I got, before running into a wall:

 j = join(ActionMeaning, DepartmentUnit).join(PersonUnit)
 disambiguation = {
   "UnitID": [j.c.PersonUnit_UnitID,  j.c.DepartmentUnit_UnitID],
   "DepartmentID": [j.c.DepartmentUnit_DepartmentID,
                    j.c.ActionMeaning_DepartmentID],
 }
 ActionMeaningToPerson = mapper(ActionMeaning, j, non_primary=True,
properties=disambiguation)
 PersonAction.meaning = relationship(ActionMeaningToPerson)

ActionMeaning has a DepartmentID -> ForeignKey(DepartmentUnit.DepartmentID)
PersonUnit has UnitID -> ForeignKey(DepartmentUnit.UnitID)
PersonAction has PersonID -> ForeignKey(PersonUnit.PersonID)

This actually does something when I look at PersonAction.meaning, except
unfortunately I cannot yet figure out where to express the constraint
ActionMeaning.ActionID == PersonAction.ActionID, so I get back all
ActionMeanings with a matching DepartmentID, and ActionID of various sorts.

I've tried expressing it as the relationship(primaryjoin=), but this gave
me:


ArgumentError: Could not locate any relevant foreign key columns for
primary join condition 'ActionMeaning.ActionID =
PersonAction.ActionID' on relationship PersonAction.meaning.  Ensure
that referencing columns are associated with a ForeignKey or
ForeignKeyConstraint, or are annotated in the join condition with the
foreign() annotation.


Any hints?

Thanks again!
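For the archives, a sketch of the foreign() annotation that error is asking for; the j.c.* keys are guessed from the labels in the snippet above, so treat this as a direction rather than working code:

    from sqlalchemy import and_
    from sqlalchemy.orm import foreign, relationship

    PersonAction.meaning = relationship(
        ActionMeaningToPerson,
        primaryjoin=and_(
            # foreign() marks the PersonAction columns as the "foreign key" side
            foreign(PersonAction.ActionID) == j.c.ActionMeaning_ActionID,
            foreign(PersonAction.PersonID) == j.c.PersonUnit_PersonID,
        ),
        viewonly=True,
    )

viewonly=True keeps the ORM from trying to persist changes through the multi-table target.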

On 20 October 2014 18:26, Michael Bayer mike...@zzzcomputing.com wrote:


  On Oct 20, 2014, at 1:14 PM, Peter Waller pe...@scraperwiki.com wrote:
 
  Hi All,
 
  I am wondering if it is possible to describe in sqlalchemy the
  following relationship with the new features introduced since 0.9.2.
  I've been reading this document and it looks close to what I'm trying
  to achieve:
 
 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#composite-secondary-joins
 
  As does this previous mailing list post:
 
  https://groups.google.com/d/topic/sqlalchemy/ov-mYWA7XAM/discussion
 
  There is a pre-existing structure to deal with. I would rather not
  change the structure if it can be avoided, but may have to resort to
  doing so. Here are the tables:
 
  TableName (column, ...)
 
  PersonAction (PersonID, ActionID)
  PersonUnit (PersonID, UnitID)
  DepartmentUnit (UnitID, DepartmentID)
  ActionMeaning (DepartmentID, ActionID)

 So as far as the patterns at
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html, this one
 qualifies for
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#relationship-to-non-primary-mapper.
 The reason is because PersonAction has a direct FK to ActionMeaning
 (ActionID) as well as towards elements on what would be “secondary” here,
 PersonUnit/DepartmentUnit.  The criteria for that is: we seek to join from
 A to B, making use of any number of C, D, etc. in between, however there
 are also join conditions between A and B directly.”

 So you map ActionMeaning using a non-primary mapper to a join of
 ActionMeaning, DepartmentUnit, and PersonUnit, then PersonAction can relate
 to this mapper directly.





[sqlalchemy] Search/Filter using M2M relationships

2014-10-16 Thread Peter Petersky
Hi! 

I'm currently using these models:

conn_tags_table = sql.Table('conn_tags', Base.metadata,
    sql.Column('conn_id', sql.Integer, ForeignKey('connection.id')),
    sql.Column('tag_id', sql.Integer, ForeignKey('tag.id'))
)


class Connection(Base):
    """ Connection tables """
    __tablename__ = "connection"
    #__table_args__ = (sql.UniqueConstraint('url', 'port',
    #                                        name='unique_url_port'),)

    id = sql.Column(sql.Integer, primary_key=True)
    conn_type = sql.Column(sql.String(30))
    url = sql.Column(sql.Text)
    ip = sql.Column(sql.Text)
    port = sql.Column(sql.Integer)
    answer = sql.Column(sql.String(50))
    redirect = sql.Column(sql.Text)
    tags = relationship('Tag', secondary=lambda: conn_tags_table)

    def __unicode__(self):
        return "%d - %s" % (self.id, self.url)

    def __str__(self):
        return "%d - %s - %d" % (self.id, self.url, self.port)

class Tag(Base):
    """ Table for tags """
    __tablename__ = "tag"

    id = sql.Column(sql.Integer, primary_key=True)
    name = sql.Column(sql.Text, unique=True)
    desc = sql.Column(sql.Text)

    def __unicode__(self):
        return self.name

    def __str__(self):
        return self.name


And these are my views:

class ConnectionAdmin(sqla.ModelView):
    column_display_pk = False
    form_columns = ['conn_type', 'url', 'port', 'answer', 'redirect', 'tags']
    column_searchable_list = ('conn_type', 'url', 'answer', 'redirect', 'ip', Tag.name)
    column_filters = ('conn_type', 'url', 'port', 'answer', 'redirect', 'ip', Tag.name)

    # Define which fields should be preloaded by Ajax
    form_ajax_refs = {
        'tags': {
            'fields': (Tag.name,)
        }
    }


I'm using *flask-admin* for all the magic stuff. All I want to do is to 
be able to search in Tag.name as well. Besides that I want to filter by 
Tag.name. However when I search for some tags, nothing happens. And 
filtering for some tag 
(http://127.0.0.1:5000/admin/connection/?flt1_24=Dead) leads to:


sqlalchemy.exc.InvalidRequestError: Could not find a FROM clause to join from.
Tried joining to tag, but got: Can't find any foreign key relationships
between 'connection' and 'tag'.


I've found a possible solution,
http://codeseekah.com/2013/08/04/flask-admin-hacks-for-many-to-many-relationships/,
but that didn't work for me. Any hints/ideas would be appreciated.



Cheers,

Peter
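For reference, the plain SQLAlchemy query that such a filter has to boil down to looks like this (a sketch using the models above; session is your own session object). The explicit join through the secondary table supplies exactly the FROM clause the generated filter is failing to find:

    dead_connections = (
        session.query(Connection)
        .join(Connection.tags)          # joins connection -> conn_tags -> tag
        .filter(Tag.name == 'Dead')
        .all()
    )

Getting flask-admin to emit that join is then a matter of its filter configuration rather than of the models themselves.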



[sqlalchemy] senseless warning messages escape python logging

2014-07-02 Thread Hans-Peter Jansen
Dear SQLAchemistas,

this is an issue that my apps choke on from time to time, which is merely _related_ to SQLA.

Although logging is set up correctly, some operations spit out senseless
warning messages like this:

/usr/lib/python2.6/site-packages/sqlalchemy/engine/default.py:324: Warning: 
Data truncated for column 'sendungref1' at row 1
  cursor.execute(statement, parameters)

FYI, this is the sole message, caught by cron, from code that is exercised
heavily.

Sure, I understand that this is most probably triggered from MySQL-python-1.2.3
under the covers, and I know that the value given for this column was too big,
but without any context it doesn't help in any way.

So strictly speaking, I'm barking up the wrong tree, but the question is: have
you figured out a way to either catch or suppress those warnings? Is there a
MySQL adapter that cooperates better in this respect?

Thanks for your insights,

Pete
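Those messages come out of Python's warnings machinery (MySQLdb calls warnings.warn() from inside cursor.execute()), so they can be rerouted or escalated without touching SQLAlchemy. A sketch:

    import warnings

    # escalate just the truncation warnings to exceptions, so they surface
    # with a full traceback (and hence the offending statement) at the call site
    warnings.filterwarnings('error', message='Data truncated')

    # or, on Python 2.7+, route all warnings through logging instead of stderr
    # (they then appear on the 'py.warnings' logger):
    # import logging
    # logging.captureWarnings(True)

Escalating to an exception is the option that actually gives you context, since it raises inside the very execute() call that produced the warning.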



Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-29 Thread Hans-Peter Jansen
Dear Jonathan,

thank you for your cool remix of recommendations. Much appreciated.

On Thursday, 26 June 2014 15:09:03 Jonathan Vanasco wrote:
 In case this helps...
 
 This reminds me slightly of some RFID work I did years ago.  We had a lot
 of reads coming in from different units, several reads per unit per second.

Fun.

 I found the best way to handle writing was to just access the db directly,
 but kept the ORM on the read side.

I'm doing this in other projects, but writing SQL feels so '80s, doesn't it...
I even wrote my own ORM with a small 'r' (who didn't?) in about 150 lines of
code years ago.

 I recall a few things from that experience.  some of this might be
 applicable to you:
 
 • doing a Select else Insert pattern was faster than Insert All and
 just letting the db constraint catch errors.  there were still a lot of
 exceptions from different transactions.  this is probably dependent on the
 database/version/size/etc.  but it's worth profiling.

Yes, that's what I found, too. In the rollback part of it.

 • we had a low number of devices, so just cached their data into the
 app's memory.  losing the extra select helped a bit.

 * our transaction got pared down to looking like this:
 
  begin;
  select record ;
  insert record;
  commit;

 * i think someone eventually migrated the db interaction to use a stored
 procedure.

An example of that would be interesting.
 
 * we kept an in-memory array of recently seen transactions.  it was just a
 simple list that would have elements inserted at 0 and then truncated.  we
 checked that list first, then hit the database if it wasn't seen .

I'm doing that, but using a dict with the unique index members as the key.

 our performance greatly increased, and frustrations decreased, as we moved
 more logic out of the database and into python.
 
 on another RFID project, the lead engineer i worked with had a similar
 approach... however to keep performance better, he used an in-memory
 database to record data and then did periodic flushes to the real database
 after the data was smoothed out ( to compensate for known hardware issues
 ).

Oh, that case sounds pretty pathological...

After working through Mike's notes, I ran into some bottlenecks on the sending
side (plain Raspberry Pis) and in the socket interface. Disabling Nagle helped
a lot, as did avoiding datetime.datetime.strptime(). I've learned that some
unavoidable datetime and timezone operations are still the most expensive
ones... Oh, well.
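(Since disabling Nagle is mentioned but not shown, a sketch for a plain TCP sender socket; the endpoint is hypothetical and not taken from the thread's code:)

    import socket

    sock = socket.create_connection(('collector.example.org', 9999))
    # disable Nagle's algorithm so small records go out immediately
    # instead of being coalesced while waiting for ACKs
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)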

Net result of this optimization quest: down to 1/10th compared to the starting 
point, and all that without getting too dirty. ;)

Cheers,
Pete



Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Hans-Peter Jansen
Dear Mike,

sorry for not complying with the preferred reply behavior...

On Thursday, 26 June 2014 15:26:02 Mike Bayer wrote:
 On 6/26/14, 3:07 PM, Hans-Peter Jansen wrote:
  
  Obviously, some operation triggers the flush method with about the
  same consequences..
 
 OK, turn off autoflush - either Session(autoflush=False), or within the
 critical block, do with session.no_autoflush:

Okay, that makes a difference, shaving off about 40% of total runtime,
after dealing with identical records in one session.

Still, attributes.py is highest in the profiles. I guess this is what you
called overkill in your first response. A helper for dealing with this
situation sounds interesting, though.

Thu Jun 26 20:41:50 2014srelay.pstats

 55993702 function calls (54767154 primitive calls) in 533.479 seconds

   Ordered by: internal time
   List reduced from 1277 to 30 due to restriction 30

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
42160  292.5360.007  292.9100.007 
attributes.py:1321(from_collection)
63209   39.1850.001   39.2940.001 attributes.py:1255(as_state)
   39   39.0371.001   39.0371.001 {time.sleep}
45816   30.9490.001   30.9490.001 {method 'query' of 
'_mysql.connection' objects}
  392   30.0390.077   30.0390.077 {method 'recv' of 
'_socket.socket' objects}
422409.8700.000  302.8040.007 attributes.py:871(get_history)
210834.2840.0006.5000.000 attributes.py:868(__copy)
421582.2300.0002.2520.000 collections.py:711(__len__)
210431.4660.0001.8510.000 topological.py:51(find_cycles)
  41594301.3690.0001.3690.000 {isinstance}
246831.3300.000  391.9840.016 session.py:1790(flush)
642077/247121.2920.000   12.8400.001 
visitors.py:74(_compiler_dispatch)
   1975701.2850.0004.2150.000 compiler.py:389(visit_label)
457351.2350.0003.6150.000 default.py:391(_init_compiled)
   2468471.1450.0001.6370.000 compiler.py:421(visit_column)
245581.0600.000  458.6330.019 srelay.py:74(store_enos_rec)
246361.0480.000   12.7090.001 compiler.py:1136(visit_select)
   1975521.0300.0006.0000.000 
compiler.py:1019(_label_select_column)
246361.0150.0001.7930.000 result.py:171(__init__)
631260.9390.0003.2020.000 
persistence.py:275(_collect_update_commands)
49200/491720.9370.0003.5390.000 db_scheme.py:91(__repr__)
   1472400.9040.0001.2460.000 topological.py:15(sort_as_subsets)
   1265140.8140.000  348.4870.003 unitofwork.py:411(execute)
631910.7710.0001.9840.000 
dependency.py:67(per_property_flush_actions)
   6229250.7510.0000.9730.000 attributes.py:308(__get__)
   6996820.7450.0000.7450.000 state.py:185(dict)
   1898090.7260.000  343.3410.002 
unitofwork.py:177(get_attribute_history)
739280.6530.0000.7810.000 expression.py:3538(__init__)
421700.6480.0000.6480.000 {method 'clear' of 'dict' objects}
   1262520.6390.0001.7950.000 persistence.py:802(_sort_states)


Cheers,
Pete



[sqlalchemy] Mapping lots of similar tables / database design

2014-04-01 Thread Peter Stensmyr


I posted this question on Stack Overflow a few days ago, and got some 
response but nothing that really solves my problem. I'm hoping that I can 
get some more input here. The initial recommendation was to keep all the 
data in two tables (one meta and one data table), but this might become 
unwieldy with the number of rows (I estimate it will be about 6 billion 
rows in total).

I have many (~2000) locations with time series data. Each time series has 
millions of rows. I would like to store these in a Postgres database. My 
current approach is to have a table for each location time series, and a 
meta table which stores information about each location (coordinates, 
elevation etc). I am using SQLAlchemy to create and query the tables. I 
would like to have a relationship between the meta table and each time 
series table to do queries like select all locations that have data 
between date A and date B and select all data for date A and export a csv 
with coordinates. What is the best way to create many tables with the same 
structure (only the name is different) and have a relationship with a meta 
table? Or should I use a different database design?

Currently I am using this type of approach to generate a lot of similar 
mappings:

from sqlalchemy import create_engine, MetaData
from sqlalchemy.types import Float, String, DateTime, Integer
from sqlalchemy import Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref
Base = declarative_base()

def make_timeseries(name):
class TimeSeries(Base):

__tablename__ = name
table_name = Column(String(50), ForeignKey('locations.table_name'))
datetime = Column(DateTime, primary_key=True)
value = Column(Float)

location = relationship('Location', backref=backref('timeseries',
lazy='dynamic'))

def __init__(self, table_name, datetime, value):
self.table_name = table_name
self.datetime = datetime
self.value = value

def __repr__(self):
return "{}: {}".format(self.datetime, self.value)

return TimeSeries

class Location(Base):

__tablename__ = 'locations'
id = Column(Integer, primary_key=True)
table_name = Column(String(50), unique=True)
lon = Column(Float)
lat = Column(Float)
if __name__ == '__main__':
connection_string = 'postgresql://user:pw@localhost/location_test'
engine = create_engine(connection_string)
metadata = MetaData(bind=engine)
Session = sessionmaker(bind=engine)
session = Session()

TS1 = make_timeseries('ts1')
# TS2 = make_timeseries('ts2')   # this breaks because of the foreign key
Base.metadata.create_all(engine)
session.add(TS1("ts1", "2001-01-01", 999))
session.add(TS1("ts1", "2001-01-02", -555))

qs = session.query(Location).first()
print qs.timeseries.all()


This approach has some problems, most notably that if I create more than 
one TimeSeries object the foreign key doesn't work. Previously I've used 
some work-arounds (such as not defining a foreign key), but it all seems 
like a big hack and I feel that there must be a better way of doing this. 
How should I organise and access my data?

Any advice appreciated, 

Peter
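For later readers: the two-table layout mentioned at the top of the post would look roughly like this (a sketch; splitting the big data table by time or location then becomes a partitioning concern on the PostgreSQL side rather than a mapping concern):

    from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import backref, relationship

    Base = declarative_base()

    class Location(Base):
        __tablename__ = 'locations'
        id = Column(Integer, primary_key=True)
        lon = Column(Float)
        lat = Column(Float)

    class TimeSeries(Base):
        __tablename__ = 'timeseries'
        # composite primary key: one row per location per timestamp
        location_id = Column(Integer, ForeignKey('locations.id'), primary_key=True)
        datetime = Column(DateTime, primary_key=True)
        value = Column(Float)

        location = relationship('Location',
                                backref=backref('timeseries', lazy='dynamic'))

A single foreign key then works for every location, and queries like "all locations with data between A and B" become one join instead of a loop over ~2000 table classes.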



Re: [sqlalchemy] Dynamically map/reference to another table based on tablename

2013-11-05 Thread Peter Lim
Dear Michael,

Wow, thank you very, very much for taking the time to clarify my
problem; this is exactly what I'm looking for. I had problems understanding
polymorphic associations, so I'll try to go through it again with the examples
and the test code provided.

I probably should have mentioned from the start, for clarity, that Entity
Systems have very large similarities with AOP, where Components are
similar to Aspects. It would have saved the guesswork around my
problem :( You are spot on when you said that each row of any Component's
data is specific to a specific Entity.

Regards,
Peter Lim


On Wed, Nov 6, 2013 at 8:16 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Nov 3, 2013, at 5:41 AM, Peter madp...@gmail.com wrote:

 Hi,

 I'm currently learning the how to set up a basic Entity System with
 sqlalchemy. The entity system requirements are described within this link:
 http://t-machine.org/index.php/2009/10/26/entity-systems-are-the-future-of-mmos-part-5/


 Basically, an Entity that lives with the world is defined by the
 Components that are attached to it. The nature of the Entity is described
 by the combinations of Components attached to the entity. An Entity can
 change it's nature by removing or adding a Component to the entity. Every
 Component is backed by a table with it's associated data. The Components
 are indexed within the 'components_dir' table. As you can see by the schema
 below, each Entity can be mapped differently to different Components based
 of EntityComponentMap and this can change anytime with the state of the
 object.



 OK I tried to scan through that post a bit to get a handle on what schema
 it was proposing, but it wasn’t entirely clear.  The schema you’ve
 presented here gives me a better idea.   These kinds of schemas are very
 common, but I would note that they are troublesome, both in terms of
 scalability as well as querying ability, due to being overly generic.
 Like if you have three types of Entity, you can’t easily query for Entity
 objects of just one particular type as you have all types of entities
 jumbled into the same set of tables.   If an entity of type Q has
 attributes A, B and C, you can’t easily produce a query that’s equivalent
 to “SELECT * FROM Q where A=1 AND B=2 AND C=3”, because you don’t have “A,
 B, C” columns - you have to produce joins out to the Component tables in
 order to figure these things out, producing complex and inefficient
 queries.  If your system has many different types of objects, some of which
 there’s only a handful and others of which there are millions, they all get
 stuffed into the same set of tables, turning those tables into bottlenecks
 for all object access.



 In order to grab an Entity's component data, I'll retrieve the Component's
 tablename via ComponentsDirectory. I'll then manually retrieve the data row
 with component_id from it's individual component table.


 so the next part here, is I didn’t see the term “tablename” in that blog
 post at all, but I didn’t read it fully.   The practice of joining out from
 a single foreign key out to the primary key of table A, B, or C based on a
 string tablename in another column is sometimes called a “generic foreign
 key”.   In relational algebra and SQL, there is no such thing - it’s an
 artificial concept created in application space (and is very popular in the
 Rails world) that works against the spirit and common practice of
 relational databases.I wrote an old post about it here:
 http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-sqlalchemy/
  and very recently I’ve updated the examples in SQLAlchemy itself to
 include modern versions of each.   In SQLAlchemy 0.9 the examples include a
 pure “generic foreign key” where the “tablename” (more specifically a
 “discriminator”) is stored and used as a means of selecting the related
 table (this is in examples/generic_associations).

 The key here is not so much having a string that says “this object is of
 type X”, SQLAlchemy does have a system called “table inheritance” which
 uses this, it’s the foreign key part.   When a column can point to any
 number of other columns elsewhere, it works against standard relational
 techniques and also makes proper data constraints more difficult to produce.

 In the case of your specific model, I’ve made the observation that a
 Component seems to have data fields on it, which, though I’m not sure, seem
 to be specific to a specific Entity?   In which case we can safely make a
 row in Component a foreign key out to Entity directly.   I’ve adapted this
 into a joined inheritance scheme attached which is one way of achieving the
 basic idea here, though the specifics might not match what you’re looking
 for.   Using the separate ComponentsDirectory entity in order to get at
 “discriminator” values, rather than having a simple column on
 ComponentBase, makes the model much more complicated, but I’ve tried to
 work that out here as well just

[sqlalchemy] Dynamically map/reference to another table based on tablename

2013-11-03 Thread Peter
Hi,

I'm currently learning the how to set up a basic Entity System with 
sqlalchemy. The entity system requirements are described within this link: 
http://t-machine.org/index.php/2009/10/26/entity-systems-are-the-future-of-mmos-part-5/

Basically, an Entity that lives within the world is defined by the Components
that are attached to it. The nature of the Entity is described by the
combination of Components attached to the entity. An Entity can change
its nature by removing or adding a Component. Every
Component is backed by a table with its associated data. The Components
are indexed within the 'components_dir' table. As you can see from the schema
below, each Entity can be mapped differently to different Components based
on EntityComponentMap, and this can change at any time with the state of the
object.

In order to grab an Entity's component data, I'll retrieve the Component's
tablename via ComponentsDirectory. I'll then manually retrieve the data row
with component_id from its individual component table.

I'm sorry, I'm really new to SQLAlchemy and database-based
programming, but is there any way for SQLAlchemy to dynamically map
or join the entity directly to the individual component data tables?

I've setup the following tables:
class Entity(Base):
__tablename__ = 'entities'
id = Column(Integer, primary_key=True)
name = Column(String)

class ComponentsDirectory(Base):
__tablename__ = 'components_dir'
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False, unique=True)
cls_name = Column(String, nullable=False, unique=True)
table_name = Column(String, nullable=False, unique=True)

class EntityComponentMap(Base):
__tablename__ = 'entity_component_map'
id = Column(Integer, primary_key=True)
entity_id = Column(Integer, ForeignKey('entities.id'))
component_type_id = Column(Integer, ForeignKey("components_dir.id"))
component_id = Column(Integer)
entity = relationship('Entity', 
backref=backref('component_types_assoc'))
component_type = relationship('ComponentsDirectory', 
backref=backref('entities_assoc'))

class ComponentA(Base):
__tablename__ = 'component_a'
id = Column(Integer, primary_key=True)
data_field1 = Column(Integer)

class ComponentB(Base):
__tablename__ = 'component_b'
id = Column(Integer, primary_key=True)
data_field1 = Column(Integer)
data_field2 = Column(String)

class ComponentC(Base):
__tablename__ = 'component_c'
id = Column(Integer, primary_key=True)
data_field1 = Column(Integer)
data_field2 = Column(String)
data_field3 = Column(Boolean)
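For comparison, a rough sketch of the joined-inheritance direction suggested in the reply above (the actual attachment isn't reproduced in this archive): every component row carries a real foreign key to its Entity, and per-type fields live in subtype tables, so no generic tablename lookup is needed. Base and the Entity class are the ones defined above:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import relationship

    class Component(Base):
        __tablename__ = 'components'
        id = Column(Integer, primary_key=True)
        entity_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
        kind = Column(String(50), nullable=False)

        entity = relationship('Entity', backref='components')

        __mapper_args__ = {'polymorphic_on': kind,
                           'polymorphic_identity': 'component'}

    class ComponentA(Component):
        __tablename__ = 'component_a'
        id = Column(Integer, ForeignKey('components.id'), primary_key=True)
        data_field1 = Column(Integer)

        __mapper_args__ = {'polymorphic_identity': 'component_a'}

some_entity.components then returns the various component subclasses polymorphically, with ordinary foreign key constraints throughout.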





Re: [sqlalchemy] Modification tracking

2013-08-22 Thread Hans-Peter Jansen
Dear Michael,

thanks for the detailed response.

On Wednesday, 21 August 2013 16:55:18 Michael Bayer wrote:
 On Aug 21, 2013, at 12:40 PM, Hans-Peter Jansen h...@urpla.net wrote:
  Hi,
  
  being new to SQLAlchemy, I try to get my way through it.
  
  In an application, I have rather elaborate needs to track changes.
  
  I've defined 3 classes with declarative, where the main class has
  relationships with two auxiliary classes, that refer to the main class
  with foreign references. All pretty basic stuff.
  
  Now I need to track all modifications to all fields, including the
  relationship list objects.
  
  What is the best way to accomplish this task with SQLAlchemy?
  
  Is there some boilerplate available to support this, or do I have to carry
  around two objects and compare them item by item?
 you can intercept changes on attributes using the attribute event system: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/events.html#attribute-events
 
 otherwise you can get at the changes on an attribute after the fact (but
 before a flush) using the history interface:
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html?highlight=history#sq
 lalchemy.orm.attributes.History

Hmm, it looks like that needs to be applied on every column attribute..

 you can get a list of all attributes mapped using mapper.attrs:
 http://docs.sqlalchemy.org/en/rel_0_8/orm/mapper_config.html?highlight=mapp
 er.attrs#sqlalchemy.orm.mapper.Mapper.attrs

from here. Since I need to track all 'net' changes, that's rather impractical.

I've changed my code to follow best practices, hopefully (from what I can
extract from the overwhelming docs).

That is: just change instance attributes in place, and check for dirtiness 
later on, with session.is_modified(obj, passive = True). I've chosen this 
interface, because the fact of *really* being modified is essential here.

This scheme would do just fine, but doesn't work as expected.

See attached code. Depending on modification state of parent (line 51), the 
modification state of the child is detected (parent changed) or not (parent 
unchanged).

In my real code, it's the other way around: modifications to relations are
detected (as I iterate through all of them), but changes to the parent
aren't, although the correct SQL UPDATE is executed after commit(). Since
the change isn't detected correctly, my app falls flat on its face...

Do you have any idea on this one?

Cheers,
Pete

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

ERR = 1

class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable = False, index = True)
children = relationship('Child', backref = 'parent',
single_parent = True, # lazy = 'joined',
cascade = 'all, delete-orphan')


def __repr__(self):
cl = [repr(c) for c in self.children]
return 'Parent(%r, children: %s)' % (self.name, ', '.join(cl))

class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable = False, index = True)
parent_id = Column(Integer, ForeignKey('parent.id'), nullable = False)

def __repr__(self):
return 'Child(%r)' % (self.name)

if __name__ == '__main__':
engine = create_engine('sqlite://', echo = True)
Base.metadata.create_all(engine)
session = sessionmaker(engine, expire_on_commit=False)()

def pr(obj, exp, msg):
res = session.is_modified(obj, passive = True)
print msg + ',', 'expected: %s,' % exp, 'outcome: %s,' % res, exp == res and 'okay' or 'FAILED'

p = Parent(name = 'pa')
c1 = Child(name = 'li')
p.children.append(c1)

print 'Starting with:', p

session.add(p)
session.commit()

pr(p, False, 'initial session committed')

if ERR:
pr(p, False, 'parent not renamed')
else:
p.name = 'po'
pr(p, True, 'parent renamed to po')

c1.name = 'lo'
pr(c1, True, 'child renamed to lo, testing child')
pr(p, True, 'child renamed to lo, testing parent')

session.commit()
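For reference, a minimal sketch of the attribute-event approach mentioned above, wired onto the two classes from this script (a 'set' listener on every column-mapped attribute):

    from sqlalchemy import event, inspect

    def log_change(target, value, oldvalue, initiator):
        # fires on every assignment; oldvalue is a NO_VALUE/NEVER_SET symbol
        # when the attribute had not been loaded or set before
        print '%s.%s: %r -> %r' % (type(target).__name__,
                                   initiator.key, oldvalue, value)

    # attach the listener to every column-mapped attribute of both classes
    for cls in (Parent, Child):
        for attr in inspect(cls).column_attrs:
            event.listen(getattr(cls, attr.key), 'set', log_change)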




Re: [sqlalchemy] Modification tracking

2013-08-22 Thread Hans-Peter Jansen
On Thursday, 22 August 2013 23:58:17 Hans-Peter Jansen wrote:
 Dear Michael,
 
Pardon, I'm using 0.8.2 ATM.
 
 Cheers,
 Pete



[sqlalchemy] Modification tracking

2013-08-21 Thread Hans-Peter Jansen
Hi,

being new to SQLAlchemy, I'm trying to find my way through it.

In an application, I have rather elaborate needs to track changes.

I've defined 3 classes with declarative, where the main class has relationships
with two auxiliary classes that refer to the main class with foreign key
references. All pretty basic stuff.

Now I need to track all modifications to all fields, including the relationship 
list objects. 

What is the best way to accomplish this task with SQLAlchemy?

Is there some boilerplate available to support this, or do I have to carry 
around two 
objects and compare them item by item?

TIA,
Pete



Re: [sqlalchemy] 0.7.10 build test failures

2013-07-15 Thread Hans-Peter Jansen
On Monday, 15 July 2013 13:19:55 Michael Bayer wrote:
 issue http://www.sqlalchemy.org/trac/ticket/2782 is added to merge this to
 all three branches

patch for 0.7 attached to the ticket.

tested-by: Hans-Peter Jansen h...@urpla.net

Thanks, Mike. This nice experience will encourage me to report some more 
issues, and tackle upgrading the package to 0.8.xx.

Cheers,
Pete

 On Jul 15, 2013, at 1:14 PM, Michael Bayer mike...@zzzcomputing.com wrote:
  oh I know, you're running python setup.py test. I'd strongly
  recommend running with the sqla_nose.py runner directly, however this
  issue is a setuptools one, adding this to setup.py will fix:
  
  
  # Hack to prevent TypeError: 'NoneType' object is not callable error
  # in multiprocessing/util.py _exit_function when running `python
  # setup.py test` (see
  # http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
  
   try:
       import multiprocessing
   except ImportError:
       pass
  
  Seems like this might not be in the current master so this can be added.
  
  On Jul 15, 2013, at 1:08 PM, Michael Bayer mike...@zzzcomputing.com 
wrote:
  im trying to recall if I've seen that one before.   how are tests being
  run?   this error does not occur in any environment for me. 
  On Jul 14, 2013, at 6:18 PM, Hans-Peter Jansen h...@urpla.net wrote:
   On Sunday, 14 July 2013 17:38:12 Michael Bayer wrote:
  you can get that right here:
  
  www.sqlalchemy.org/changelog/CHANGES_0_7_11
  
  Thanks, great. That fixed all tests, but this  is left still:
  
  [  156s]
  --
  [  156s] Ran 4075 tests in 148.973s
  [  156s]
  [  156s] OK (SKIP=140)
  [  156s] Error in atexit._run_exitfuncs:
  [  156s] Traceback (most recent call last):
  [  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in
  _run_exitfuncs [  156s] func(*targs, **kargs)
  [  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line
  284, in _exit_function [  156s] info('process shutting down')
  [  156s] TypeError: 'NoneType' object is not callable
  [  156s] Error in sys.exitfunc:
  [  156s] Traceback (most recent call last):
  [  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in
  _run_exitfuncs [  156s] func(*targs, **kargs)
  [  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line
  284, in _exit_function [  156s] info('process shutting down')
  [  156s] TypeError: 'NoneType' object is not callable
  
  Pete





[sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
Dear Mike,

while building version 0.7.10 on the openSUSE Build Service, I noticed that there
are a bunch of test failures to look into.

My SQLAlchemy project:

https://build.opensuse.org/package/show/home:frispete:python/python-SQLAlchemy

The build logs are located here:

https://build.opensuse.org/package/live_build_log/home:frispete:python/python-SQLAlchemy/openSUSE_12.2/x86_64

Looking from 248s downwards, you will find this:

[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.test_echo_flag_independence
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.test_levels
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.testing_engine
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.LogParamsTest.test_error_large_dict
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 545, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'


Given that many of those seem to suffer from a similar issue, there might
be an easy fix. For the time being, I don't fail the build, but it would
be nice to enable those tests and bail out on failures in order to raise
confidence for this package/project.
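
The follow-ups above identify this as a missing import; concretely, the
tracebacks point at the logging.handlers submodule never having been imported
on that code path (importing logging alone does not pull it in). A minimal
illustration:

import logging
import logging.handlers   # without this, logging.handlers is unresolved

buf = logging.handlers.BufferingHandler(100)   # now resolves without the AttributeError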

Some advice is appreciated.

Cheers,
Pete 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
Hi Mike,

On Sonntag, 14. Juli 2013 17:03:14 Michael Bayer wrote:
 hi Hans -
 
 this issue, a missing import that only triggers on certain platforms,  has
 been fixed in all branches since 0.7.   But there's no 0.7.11 released
 planned at this time.

Thanks for the quick answer. Mind pointing me to the changeset of 0.7?

Cheers,
Pete

 
 - mike

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
On Sonntag, 14. Juli 2013 17:38:12 Michael Bayer wrote:
 you can get that right here:
 
 www.sqlalchemy.org/changelog/CHANGES_0_7_11

Thanks, great. That fixed all tests, but this  is left still:

[  156s] --
[  156s] Ran 4075 tests in 148.973s
[  156s] 
[  156s] OK (SKIP=140)
[  156s] Error in atexit._run_exitfuncs:
[  156s] Traceback (most recent call last):
[  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in _run_exitfuncs
[  156s] func(*targs, **kargs)
[  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line 284, in 
_exit_function
[  156s] info('process shutting down')
[  156s] TypeError: 'NoneType' object is not callable
[  156s] Error in sys.exitfunc:
[  156s] Traceback (most recent call last):
[  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in _run_exitfuncs
[  156s] func(*targs, **kargs)
[  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line 284, in 
_exit_function
[  156s] info('process shutting down')
[  156s] TypeError: 'NoneType' object is not callable

Pete

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Re: Advice: Best practice for working with an existing database

2013-04-03 Thread Peter Herndon
On Friday, March 22, 2013 9:31:59 AM UTC-4, Jason wrote:



 Hi Peter,
 I think using a the declarative reflected style is a great way to 
 integrate an existing database into your application. It sounds like you 
 are doing this already, but in case you are not the reflection setup is 
 documented at 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/extensions/declarative.html#using-reflection-with-declarative.
  
 It's very easy to have it populate all of your models with the correct 
 table attributes using the DeferredReflection base class. Then you just 
 have to make (almost empty) class definitions for each model. In theory you 
 could even make it so that the table name is inferred from the name of the 
 Model class, but in practice I found this to be annoying (I would 
 constantly forget how the table names compared to the class names).

 --
 Jason


Thanks, Jason! 
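
For later readers, a minimal sketch of the DeferredReflection setup Jason
describes (table names and the connection URL are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base, DeferredReflection

Base = declarative_base(cls=DeferredReflection)

class Customer(Base):
    __tablename__ = 'customer'      # existing table, placeholder name

class Invoice(Base):
    __tablename__ = 'invoice'       # existing table, placeholder name

engine = create_engine('postgresql://user:pw@localhost/existing_db')
Base.prepare(engine)   # reflects columns into the (almost empty) classes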

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Advice: Best practice for working with an existing database

2013-03-18 Thread Peter Herndon
Hi all,

I'm new to SQLAlchemy, and looking for some advice on how to approach 
working with an existing database. The database will not be managed by 
Python, and I will need to maintain whatever library I write to keep up 
with the occasional schema change. I am looking to write a more-or-less 
definitive database access layer for myself and others to use if and when 
we write applications in Python -- the company is not primarily a Python 
shop. 

What's my best approach for building this library? Should I use 
Declarative, or should I use classical mapping -- which would be better, 
easier to maintain, easier for others to use? Should I be writing out the 
classes, or should I rely on reflection? Speaking of reflection, I've been 
using it to analyze what SQLAlchemy thinks of the schema. For a given table 
created by reflection, I'm seeing _autoincrement_column, columns, 
foreign_keys, indexes, and primary_key -- are there other table attributes 
I should be inspecting?

Thanks for your advice!

---Peter

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] How to identify that an InstrumentedAttribute or its ColumnProperty are read-only?

2012-12-12 Thread Peter Bunyan
Much like FormAlchemy, I am trying to update my model from a post in a 
generic manner. I'd like to find out if an Attribute(InstrumentedAttribute) 
or its associated ColumnProperty is read-only. (please excuse ridiculous 
example)


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    title = column_property(
        select([func.UPPER(name)]).\
            where(id == id)
    )

Although setting 'title' will not cause an insert or update in the db 
transaction, as it is a column_property, it will be settable. I'd like to 
not set the attribute by using class_mapper(Foo).iterate_properties and 
being able to determine that it was read_only. I do not ask lightly. I've 
read extensively and looked through the source code and come up blank. I 
also acknowledge that I wrote the model so I know it's read only - but I'd 
rather be lazy and get my update to do the work - after all, that is why 
I'm using such an excellent ORM.


 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/kFNDPpm3J9EJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] How to identify that an InstrumentedAttribute or its ColumnProperty are read-only?

2012-12-12 Thread Peter Bunyan
Thank you so much. Wonderful!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/wcjW6WJRVccJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Object delegation and SA inheritance

2012-03-05 Thread Peter Erickson
Is it possible to set the polymorphic_on attribute on an object that is
not directly tied to a db table, but has access to the db attribute via
delegation? 

I have a generic Product class that processes an XML to obtain its
generic attributes (uuid, type, etc). Afterwards, the product is
encapsulated within one of several proxy objects that perform additional
processing based on the enclosed product type. Each of the various
proxies are derived from a BaseProxy and multiple types can use
the same proxy. Due to this, the BaseProxy has a poly_type to specify
which of the proxies was used for additional processing.

Base = declarative_base()

class Product(Base):
    __tablename__ = 'products'

    id = Column(Integer, Sequence('id_seq'), primary_key=True)
    type = Column(String(16))

class BaseProxy(Base):
    __tablename__ = 'products'
    __table_args__ = {'extend_existing': True}

    poly_type = Column(String(16))

    __mapper_args__ = {'polymorphic_on': poly_type}

    product = relationship(Product, uselist=False)

    def __init__(self, product):
        self.product = product

    def __getattr__(self, attrib):
        return getattr(self.product, attrib)

class HardwareProduct(BaseProxy, Base):
    __tablename__ = 'hardware'
    __mapper_args__ = {'polymorphic_identity': 'hardware'}

    id = Column(Integer, ForeignKey('products.id'), primary_key=True)
    serial = Column(String(16))

class SoftwareProduct(BaseProxy, Base):
    __tablename__ = 'software'
    __mapper_args__ = {'polymorphic_identity': 'software'}

    id = Column(Integer, ForeignKey('products.id'), primary_key=True)
    product_key = Column(String(16))

However:

>>> p = Product()
sqlalchemy.exc.ArgumentError: Could not determine join condition between
parent/child tables on relationship ProductProxy.product.  Specify a
'primaryjoin' expression.  If 'secondary' is present, 'secondaryjoin' is
needed as well.


In anticipation of one response, the reason that I didn't have Hardware
and SoftwareProduct inherit directly from Product was because the
product's type wasn't known until after product creation and, due to the
number of products created, I didn't want to waste the time recreating
the underlying product again and again. However, I'm willing to change
it if there is a better approach. Product creation looks something like:

p = Product('product.xml')
p = select_proxy_based_on_type(p.type)(p)

I realize this is a drawn out question, but any help is appreciated.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] declarative and late reflection?

2011-12-27 Thread peter sabaini
Cool -- works nicely, thanks again!

On Fri, Dec 23, 2011 at 2:56 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Dec 22, 2011, at 7:28 PM, Michael Bayer wrote:

  this could work really nicely with extend_existing, which has been
 enhanced in 0.7.4, but there seem to be some glitches preventing it from
 being super nice, so I can't get that to work right now.  Just send in
 those columns via your own means:

 Those glitches have a pending patch in ticket 2356:
 http://www.sqlalchemy.org/trac/ticket/2356  See the modified example case
 there.  In 0.7.5 your use case will work exactly as you intend, where
 you'll be able to add Column objects to your declared class that will take
 precedence over what's autoloaded.


 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.




-- 
http://sabaini.at

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] declarative and late reflection?

2011-12-22 Thread peter sabaini
Hey list,

this sounds like it should be a FAQ, didn't find anything though:

I want to use the ORM in a declarative style and have the table
definition reflected, e.g. something like:

class A(declarative_base()):
    __tablename__ = 'A'
    __table_args__ = {'autoload' : True}

However to do this SA (quite reasonably, really) already needs an
engine. For various reasons I can only construct one after import time
however. Is there a way to do a kind of late reflection, ie. have
the above class definition but trigger the reflection part later? I
could of course generate the class object later in a function when the
engine is already available but maybe there's something more
elegant...

Many thanks,
peter.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] declarative and late reflection?

2011-12-22 Thread peter sabaini
Hey!

This works for me -- almost :-)

In my use case I need to override a column (to provide an artificial FK --
some *erm old school mysql db) which seems to trigger SA into trying to
reflect early

Observe:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base, declared_attr

class DeclarativeReflectedBase(object):
    _mapper_args = []

    @classmethod
    def __mapper_cls__(cls, *args, **kw):
        """Declarative will use this function in lieu of
        calling mapper() directly.

        Collect each series of arguments and invoke
        them when prepare() is called.
        """

        cls._mapper_args.append((args, kw))

    @classmethod
    def prepare(cls, engine):
        """Reflect all the tables and map !"""
        for args, kw in cls._mapper_args:
            klass = args[0]
            klass.__table__ = table = Table(
                klass.__tablename__,
                cls.metadata,
                autoload=True,
                autoload_with=engine)
            klass.__mapper__ = mapper(klass, table, **kw)

    @declared_attr
    def __table__(cls):
        """Return a placeholder to lull declarative into complacency"""
        return object()

Base = declarative_base(cls=DeclarativeReflectedBase)

class Foo(Base):
    __tablename__ = 'foo'
    quux = Column(String)
    bars = relationship("Bar")

class Bar(Base):
    __tablename__ = 'bar'


Gives:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/tmp/Python6244Mux.py", line 40, in <module>
    class Foo(Base):
  File "/usr/lib/python2.6/site-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/ext/declarative.py", line 1273, in __init__
    _as_declarative(cls, classname, cls.__dict__)
  File "/usr/lib/python2.6/site-packages/SQLAlchemy-0.7.4-py2.6-linux-x86_64.egg/sqlalchemy/ext/declarative.py", line 1177, in _as_declarative
    if not table.c.contains_column(c):
AttributeError: 'object' object has no attribute 'c'

Hm, maybe I can try to add the column override later?

Thanks again

peter.

On Thu, Dec 22, 2011 at 4:41 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Dec 22, 2011, at 9:37 AM, peter sabaini wrote:

  Hey list,
 
  this sounds like it should be a FAQ, didn't find anything though:
 
  I want to use the ORM in a declarative style and have the table
  definition reflected, eg sth like:
 
  class A(declarative_base()):
 __tablename__ = 'A'
 __table_args__ = {'autoload' : True}
 
  However to do this SA (quite reasonably, really) already needs an
  engine. For various reasons I can only construct one after import time
  however. Is there a way to do a kind of late reflection, ie. have
  the above class definition but trigger the reflection part later? I
  could of course generate the class object later in a function when the
  engine is already available but maybe there's something more
  elegant...

 I was about to type up this recipe on the wiki and then most awesomely I
 already did it for someone !   hooray.  The current technique for this is
 at
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DeclarativeReflectedBase.


 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.




-- 
http://sabaini.at

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] eager/joined loading a whole graph

2010-10-05 Thread Peter Waller
Hi All,

I have also created a post for this question on StackOverflow:
http://stackoverflow.com/questions/3863508/joinedload-eager-loading-whole-sub-graphs-in-sqlalchemy

Let's say I have a Task object which can be dependent on other Tasks. Is
there a way to sensibly eager/joinedload all of a given set of task's
subtasks?

Example code:

class Task(DeclarativeBase):
    __tablename__ = 'task'

    task_id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def add_dependencies(self, *tasks):
        for task in tasks:
            TaskDependency(dependent=self, dependency=task)
        return self

    @property
    def dependencies(self):
        return [x.dependency for x in self.dependency_edges]

    @dependencies.setter
    def dependencies(self, what):
        """Note: adds dependencies, doesn't remove them"""
        self.add_dependencies(*what)

    @property
    def dependents(self):
        return [x.dependent for x in self.dependent_edges]

class TaskDependency(DeclarativeBase):
    __tablename__ = 'task_dependency'

    dependent_id = Column(Integer, ForeignKey(Task.task_id),
                          primary_key=True)
    dependency_id = Column(Integer, ForeignKey(Task.task_id),
                           primary_key=True)

    dependent = relationship(Task, primaryjoin=dependent_id == Task.task_id,
                             backref='dependent_edges')

    dependency = relationship(Task, primaryjoin=dependency_id == Task.task_id,
                              backref='dependency_edges')

def example_task_maker():
    make_cheese = Task(
        name="MAKE_CHEESE",
        dependencies=[
            Task(name="MILK_COWS",
                 dependencies=[
                     Task(name="BUY_COWS")
                 ]),
        ]
    )


def load_task():
    # How to eagerly load the whole task tree here?
    DBSession.query(Task).filter_by(name="MAKE_CHEESE").all()

Thanks in advance,

- Peter

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: eager/joined loading a whole graph

2010-10-05 Thread Peter Waller
Ah. To answer my own question, what I was actually after was an adjacency
list, of which there are examples to be found here, which I missed:

http://www.sqlalchemy.org/docs/05/mappers.html#adjacency-list-relationships

I believe this will do what I was after.
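
For concreteness, a sketch of that adjacency-list shape with eager loading
(illustrative names, and assuming the same declarative base as in the original
post; join_depth controls how many self-referential levels get JOINed eagerly):

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

class Node(DeclarativeBase):
    __tablename__ = 'node'

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('node.id'))
    name = Column(String)

    children = relationship("Node", lazy='joined', join_depth=2)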

Sorry for the noise, and thanks anyway.

- Peter

On 5 October 2010 14:39, Peter Waller peter.wal...@cern.ch wrote:

 Hi All,

 I have also created a post for this question on StackOverflow:

 http://stackoverflow.com/questions/3863508/joinedload-eager-loading-whole-sub-graphs-in-sqlalchemy

 Let's say I have a Task object which can be dependent on other Tasks. Is
 there a way to sensibly eager/joinedload all of a given set of task's
 subtasks?

 Example code:

 class Task(DeclarativeBase):
 __tablename__ = 'task'

 task_id = Column(Integer, primary_key=True)
 name = Column(String, unique=True)

 def add_dependencies(self, *tasks):
 for task in tasks:
 TaskDependency(dependent=self, dependency=task)
 return self

 @property
 def dependencies(self):
 return [x.dependency for x in self.dependency_edges]

 @dependencies.setter
 def dependencies(self, what):
 Note: adds dependencies, doesn't remove them
 self.add_dependencies(*what)

 @property
 def dependents(self):
 return [x.dependent for x in self.dependent_edges]

 class TaskDependency(DeclarativeBase):
 __tablename__ = 'task_dependency'

 dependent_id = Column(Integer, ForeignKey(Task.task_id),
 primary_key=True)
 dependency_id = Column(Integer, ForeignKey(Task.task_id),
 primary_key=True)

 dependent = relationship(Task, primaryjoin=dependent_id ==
 Task.task_id,
  backref='dependent_edges')

 dependency = relationship(Task, primaryjoin=dependency_id ==
 Task.task_id,
   backref='dependency_edges')

 def example_task_maker():
 make_cheese = Task(
 name=MAKE_CHEESE,
 dependencies=[
 Task(name=MILK_COWS,
 dependencies=[
 Task(name=BUY_COWS)
 ]),
 ]
 )


 def load_task()
 # How to eagerly load the whole task tree here?
 DBSession.query(Task).filter(name=MAKE_CHEESE).all()

 Thanks in advance,

 - Peter


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Expunge

2010-09-25 Thread Peter Hansen

On 2010-09-25 2:29 PM, Mark Erbaugh wrote:

If I retrieve data strictly for reporting or other read-only use,
e.g. the session will not be used to update data, should I expunge
the objects returned by the query from the session?  If I just let
the session object go out of scope is that sufficient?


If you're not modifying the objects, then you don't have to do anything 
at all.  If you are modifying them, then as long as you don't call 
commit() on the session, the changes will be discarded when the session 
is deleted.  (That's assuming you have autocommit==False).


(Also, I think go out of scope is ambiguous in Python, where what 
really matters is whether there are other references to the session. 
Only when the last reference is removed is the session garbage-collected 
and the changes will discarded/rolled-back.  Having a local variable 
referencing the session go out of scope does nothing if there are 
other non-local references to the same session.  But you probably know 
all that. :) )


--
Peter Hansen

--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: SQLite: Rolling back DDL requests

2010-08-14 Thread Peter Hansen
On Aug 14, 11:07 am, Michael Bayer mike...@zzzcomputing.com wrote:
 In addition, isolation_level=None with pysqlite disables the DBAPIs
 entire transactional system.  SQLAlchemy relies upon this system to
 handle proper transactional behavior.

 So I consider this a pysqlite bug, and they should offer a mode
 by which there is no implicit commit for CREATE TABLE.

Michael, I don't doubt that you're right, but the puzzling thing (for
me and, I think, Torsten) is that (if you set echo='debug') you see a
BEGIN and ROLLBACK statement apparently being issued through the DBAPI
layer, and if there's an implicit COMMIT going on we don't actually
see it and don't understand why isolation_level=None isn't preventing
it as it apparently does in the other case (test02).

I suspect this is because with the logging on, we aren't actually
seeing the operations performed, but merely sqlalchemy's report of
what it is about to ask for.  If the DBAPI is doing something under
the covers, we don't know what and therefore can't find a workaround.

To be clear, in test02 I believe we're effectively telling sqlite3
BEGIN; CREATE TABLE ...; ROLLBACK, and in test03 (with the same
isolation_level setting) that's exactly what the logging reports is
happening, yet the behaviour is different.

(I was hoping to put some debugging in the DBAPI layer but
unfortunately that's a C extension so harder to deal with.  Maybe
there's also some feature in sqlite3 itself (not the Python module)
which can be configured for debug purposes to show what's really
happening here.)

--
Peter Hansen
Engenuity Corporation

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: SQLite: Rolling back DDL requests

2010-08-13 Thread Peter Hansen
On Jun 24, 1:23 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 24, 2010, at 12:35 PM, Torsten Landschoff wrote:
  That's what I thought but it does not cure my problem.
  e.raw_connect().isolation_levelis in fact None, but the rollback is not
  done anyway. :-(

 its passing it through.dont know what else we can do there

I ran into this myself today and after struggling for a few hours I
came across this thread.  I then ended up creating a test case to
reproduce the problem and it suggests the problem stems from some
behaviour of transactions or of the engine.base.Connection class.  I
don't quite know what to make of it yet, but I think it shows that the
effect of passing it through is being counteracted by something
else.  I ran this on Python 2.6.5 with SA 0.6.3 on Windows.

'''Test showing unexpected behaviour with sqlalchemy and sqlite3 and
isolation_level settings.  The test creates one table outside of
a transaction (to detect potential problems with the test not
executing
properly) then creates a second table inside a transaction which it
immediately rolls back.

test01() fails basically as expected, since the sqlite3 DBAPI layer
appears to need isolation_level==None to properly roll back DDL
statements.

test02() succeeds because isolation_level==None now.  This test and
test01()
both use a connection from engine.raw_connection(), which is a
sqlalchemy.pool._ConnectionFairy() object.

test03() tries again with isolation_level==None but using a
transaction
created from a connection returned by engine.connect(), which is a
sqlalchemy.engine.base.Connection() object.  This test fails in spite
of the isolation_level setting.
'''

import unittest
from sqlalchemy import create_engine

DBPATH = 'sqlite://'
DDL = 'create table %s (id integer primary key)'

class TestCase(unittest.TestCase):
    def setup(self, **kwargs):
        self.engine = create_engine(DBPATH, **kwargs)
        self.engine.execute(DDL % 'foo')

    def rollback_raw(self):
        conn = self.engine.raw_connection()
        try:
            conn.execute('begin')
            conn.execute(DDL % 'bar')
            raise ValueError
        except ValueError:
            conn.rollback()

    def rollback_txn(self):
        conn = self.engine.connect()
        try:
            txn = conn.begin()
            conn.execute(DDL % 'bar')
            raise ValueError
        except ValueError:
            txn.rollback()

    def get_table_names(self):
        conn = self.engine.raw_connection()
        return [x[0] for x in conn.execute('select name from sqlite_master')]

    def test01(self):
        '''use engine with default isolation_level'''
        self.setup()
        self.rollback_raw()
        self.assertEqual(['foo'], self.get_table_names())

    def test02(self):
        '''use raw_connection with isolation_level None'''
        self.setup(connect_args={'isolation_level': None})
        self.rollback_raw()
        self.assertEqual(['foo'], self.get_table_names())

    def test03(self):
        '''use transaction with isolation_level None'''
        self.setup(connect_args={'isolation_level': None})
        self.rollback_txn()
        self.assertEqual(['foo'], self.get_table_names())

if __name__ == '__main__':
    unittest.main()


--
Peter Hansen
Engenuity Corporation

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Filtering based on Composite() value

2010-04-30 Thread Peter Erickson
Is it possible to use like() on a composite object attribute to filter  
a particular query? I'm trying to pull records in from a postgres db  
based on a the value of a single attribute within a composite object.  
For example, with the setup below, I want to search for records that  
have a url_domain like '%google.com' regardless of the other values.  
Looking through the docs and Google, I'm guessing that I need to use  
the comparator factory, but I wasn't quite sure how to accomplish my  
goal.


Thanks in advance.

(Since I had to re-type all this instead of cut-and-paste, I left out  
a lot of the other column names, etc.)


class URL(object):
    def __init__(self, url):
        url = urlparse(url)

        self.scheme = url.scheme
        self.domain = url.netloc
        self.path   = url.path
        self.query  = url.query

    def __composite_values__(self):
        return [self.scheme, self.domain, self.path, self.query]

class Product(Base):
    ...
    url = composite(URL,
                    Column('url_scheme', String),
                    Column('url_domain', String),
                    Column('url_path', String),
                    Column('url_query', String))

recs = session.query(Product).filter(Product.url.url_domain.like('%google.com'))
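
One workaround that avoids a comparator_factory altogether is to filter on the
underlying mapped column instead of the composite attribute; a sketch, assuming
the declarative Product class above:

# the individual columns are still present on the mapped table
domain_col = Product.__table__.c.url_domain
recs = session.query(Product).filter(domain_col.like('%google.com')).all()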


--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re-using labels

2010-04-26 Thread Peter Waller
Let's say I write something like:

fixed_column = cast(my_column.op("/")(2**32), Integer).label("fixed_column")

print select([fixed_column]).where(fixed_column > 100)

I get:

SELECT CAST(my_column / :my_column_1 AS INTEGER) AS fixed_column
FROM table
WHERE CAST(my_column / :my_column_1 AS INTEGER) > :param_1

Is there any reason fixed_column is not re-used in the where statement? Is
it possible to make this happen? It would make my statements much more
readable.

Thanks,

- Peter

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Beware of DateTime and sqlite [Problem solved]

2010-01-11 Thread Peter

Hi

This is to warn people working with sqlalchemy and sqlite and date/time 
columns.
I know, the topic has already been discussed, but I found it difficult 
to relate this topic with the error messages I got when trying the 
following:


transactions_table = Table('transactions', metadata,
    ...
    Column('enter_date', DateTime))

on a sqlite database created originally by gnucash, a free accounting 
application. This means I don't use sqlalchemy to put data in but only 
to take data out.


The above definition worked on mysql but failed on sqlite, with the 
following exception:


Traceback (most recent call last):
  File /p/python/exp/of_sqlalchemy/minimal-gc.py, line 50, in module
filter(Transaction.description.like(unicode(desc))).all()
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py, 
line 1267, in all

return list(self)
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py, 
line 1422, in instances

rows = [process[0](context, row) for row in fetch]
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py, 
line 2032, in main

return _instance(row, None)
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, 
line 1711, in _instance

populate_state(state, dict_, row, isnew, only_load_props)
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, 
line 1596, in populate_state

populator(state, dict_, row, isnew=isnew, **flags)
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/strategies.py, 
line 120, in new_execute

dict_[key] = row[col]
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py, 
line 1348, in __getitem__

return self.__parent._get_col(self.__row, key)
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py, 
line 1620, in _get_col

return processor(row[index])
  File 
/usr/local/python2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/databases/sqlite.py, 
line 183, in process

return fn(*[int(x or 0) for x in regexp.match(value).groups()])
AttributeError: 'NoneType' object has no attribute 'groups'

I had difficulties relating this message to a datetime problem. It was 
fixed by changing the column type to unicode ( sqlite stores dates as 
strings ):


transactions_table = Table('transactions', metadata,
    ...
    Column('enter_date', Unicode(50)))

Conclusion: Be careful with dates and sqlite
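
If the column really has to come back as a datetime on the Python side, a
TypeDecorator that parses the stored strings is another option. A sketch,
assuming (unverified) that the stored values look like '20100111093000':

from datetime import datetime
from sqlalchemy.types import TypeDecorator, Unicode

class StoredDateTime(TypeDecorator):
    """Parse date strings as stored by the source application."""
    impl = Unicode

    def process_result_value(self, value, dialect):
        # the format string is an assumption -- check the real column contents
        if value is None:
            return None
        return datetime.strptime(value, '%Y%m%d%H%M%S')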

Peter
-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] Differences in trivial queries between mysql and sqlite backend

2009-12-11 Thread Peter
Hi

I discovered some differences in the results of the following simple 
queries, when switching from a mysql to sqlite backend:


1) for t in of_sqlalchemy.session.query(Transaction).all():

Works as expected with mysql (returns all rows), but fails to return 
any data with sqlite. This could be fixed (or perhaps: hacked) with:

for t in of_sqlalchemy.session.query(Transaction).filter_by(desc='%').all():

2) case sensitivity

**supposing the dataset is effectively uppercase (type == 'ROOT')
The following returns the dataset with uppercase in mysql (which is 
therefore case insensitive), but not in sqlite (the latter seems to 
consider case):

   accs = of_sqlalchemy.session.query(Account).filter_by(type='root').all()

Is this behavior due to sqlalchemy ? From what I found on typical 
differences between mysql and sqlite, those above are not mentioned.

Any ideas ?
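
One portable way to make the second case behave the same on both backends is to
normalise the case in the query itself; a sketch, assuming the Account mapping
used above:

from sqlalchemy import func

accs = (of_sqlalchemy.session.query(Account)
        .filter(func.lower(Account.type) == 'root')
        .all())
# or equivalently: .filter(Account.type.ilike('root'))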

Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] clear_mappers does not really clear

2009-12-10 Thread Peter
Michael Bayer wrote:
 On Dec 9, 2009, at 6:56 PM, Peter wrote:

   
 Hi

 I am sorry I misinterpreted the error message that I mentioned in my 
 previous mail:

 InvalidRequestError: Table 'accounts' is already defined for this
 MetaData instance.  Specify 'useexisting=True' to redefine options and
 columns on an existing Table object.

 Apparently, the error is not raised by the mapper command , but by the 
 Table creation command :

  accounts_table = Table('accounts', metadata,
  Column('guid', String(36),
  autoload=False )

 So how do I get rid of the Table instances ( without dropping the tables 
 in the database ) ?


 I tried this one:

   for table in metadata.sorted_tables:
 metadata.remove(table)

 but same problem.
 

 easiest is to start with a new MetaData object, or call clear() on the 
 MetaData() you have.   Surgically removing individual tables from a MetaData 
 is not really supported since it would involve removing related foreign key 
 information on other Table objects (and there's not much use for such a 
 feature).

 --

 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.


   
Thanks a lot, I settled for the new-metadata-object approach and it works fine
(this is for independent unit tests).
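
A sketch of a per-test teardown along those lines (session and metadata being
the module-level objects from the snippets earlier in the thread):

from sqlalchemy.orm import clear_mappers

def teardown():
    session.close()
    clear_mappers()     # forget class <-> table mappings
    metadata.clear()    # forget the Table objects; the database is untouched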

Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] clear_mappers does not really clear

2009-12-09 Thread Peter
Hi

Is there a situation where the commands:

session.close()
clear_mappers()

fail to clear the current mappers ?
I get the error :

InvalidRequestError: Table 'accounts' is already defined for this 
MetaData instance.  Specify 'useexisting=True' to redefine options and 
columns on an existing Table object.

when I try to map a class to a table that has already been mapped , 
although I execute the two commands above before remapping.

Thanks for your advice
Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] clear_mappers does not really clear

2009-12-09 Thread Peter
Hi

I am sorry I misinterpreted the error message that I mentioned in my 
previous mail:

InvalidRequestError: Table 'accounts' is already defined for this
MetaData instance.  Specify 'useexisting=True' to redefine options and
columns on an existing Table object.

Apparently, the error is not raised by the mapper command , but by the 
Table creation command :

accounts_table = Table('accounts', metadata,
    Column('guid', String(36)),
    autoload=False)

So how do I get rid of the Table instances ( without dropping the tables 
in the database ) ?


I tried this one:

for table in metadata.sorted_tables:
    metadata.remove(table)

but same problem.

Any ideas ?
Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] Re: how to change a database

2009-12-07 Thread Peter
chris e wrote:
 As far as the creation I'm no help, but I have done something similar
 by connecting to different schemas in oracle. By setting the schema
 argument on your table objects, you can bind them to a particular
 database(schema), by changing the value of the schema argument, you
 can switch from one database to another. I normally have a table
 initializer method that I can pass the schema argument to to handle
 this functionality.


 On Dec 3, 4:37 am, Peter vm...@mycircuit.org wrote:
   
 Hi

 Lets suppose I created an engine on database 'foo' and I want to create
 a database 'bar' and then do my work on 'bar'.
 What is the recommended way to do this ?

 connection.execute('CREATE DATABASE IF NOT EXISTS bar')
 connection.execute('USE bar')

 The former command succeeds with a warning ( already discussed on this
 list ) but the latter seems to be the wrong approach:

 ...
 2009-12-03 13:28:39,221 INFO sqlalchemy.engine.base.Engine.0x...b0ec COMMIT
 TypeError: 'NoneType' object is not callable in function lambda at
 0x8821bc4 ignored

 Thanks a lot for your advice
 Peter
 

 --

 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.


   
Thanks a lot, I have to check out if a similar function is available in 
mysql, the db I am working with.
The easiest solution seems to be the use of distinct engines and 
sessions, though.
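
For the record, the two suggestions in this thread look roughly like this
(credentials and names are placeholders):

from sqlalchemy import create_engine, MetaData, Table, Column, String

# option 1: one engine (and Session) per database
engine_foo = create_engine('mysql://user:pw@localhost/foo')
engine_bar = create_engine('mysql://user:pw@localhost/bar')

# option 2: keep one engine and qualify the Table with the target database;
# for MySQL, the schema argument is simply the database name
metadata = MetaData()
accounts = Table('accounts', metadata,
                 Column('guid', String(36), primary_key=True),
                 schema='bar')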

Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] how to change a database

2009-12-04 Thread Peter
chaouche yacine wrote:
 Why not create a different engine for that database ?

 --- On Thu, 12/3/09, Peter vm...@mycircuit.org wrote:

   
Thanks a lot, I guess I still have to abstract away my standard sql 
behavior ..
Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] how to change a database

2009-12-03 Thread Peter
Hi

Lets suppose I created an engine on database 'foo' and I want to create 
a database 'bar' and then do my work on 'bar'.
What is the recommended way to do this ?

connection.execute('CREATE DATABASE IF NOT EXISTS bar')
connection.execute('USE bar')

The former command succeeds with a warning ( already discussed on this 
list ) but the latter seems to be the wrong approach:

...
2009-12-03 13:28:39,221 INFO sqlalchemy.engine.base.Engine.0x...b0ec COMMIT
TypeError: 'NoneType' object is not callable in function lambda at 
0x8821bc4 ignored


Thanks a lot for your advice
Peter

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] Warning: Can't create database 'TEST'; database exists

2009-11-23 Thread Peter
Hi
I do:

engine = create_engine('mysql://r...@localhost/tmp')
Session = sessionmaker()
Session.configure(bind=engine)
connection = engine.connect()
session = Session()

# suppose database TEST exists
connection.execute('CREATE DATABASE IF NOT EXISTS TEST')

Why does this give a warning:

/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.6-py2.5.egg/sqlalchemy/engine/default.py:123:
 
Warning: Can't create database 'TEST'; database exists
  cursor.execute(statement, parameters)

Thanks for your help
Peter
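
As far as I can tell, MySQL reports the IF NOT EXISTS case as a warning rather
than an error, and the MySQLdb driver passes it through Python's warnings
machinery, so it can be silenced explicitly when it is expected; a sketch (the
filter pattern is illustrative):

import warnings

warnings.filterwarnings('ignore', "Can't create database")
connection.execute('CREATE DATABASE IF NOT EXISTS TEST')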

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=.




[sqlalchemy] Re: MySQL error

2009-03-23 Thread Peter Douma

Here is the code I am using MySQL_Python 1.2.3 with Python 2.6


db = create_engine('mysql://root:passw...@localhost/dbase', echo = True)

metadata = MetaData()

user = Table('user', metadata,
    Column('id',         Integer, primary_key = True),
    Column('password',   String, nullable = True),
    Column('user_name',  String, nullable = True, unique = True),
    Column('real_name',  String, nullable = True),
    Column('email_addr', String, nullable = True, unique = True),
    Column('address',    String, nullable = True),
)

try:
    metadata.create_all(db)
except Exception, err:
    print err
    exit(1)

morecowbell wrote:
 it should. how/where are you setting up the tables? can you post the
 essential parts of your code?

 On Mar 22, 9:11 pm, pete.do...@gmail.com wrote:
   
 I am having trouble connecting to mysql database. I keep getting
 Programming error 1146 after trying to do a metadata.create_all(). I
 created the schema, but it has no tables I thought the create_all
 would make the tables
 
 

   


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: app memory leak - execute() with SA 0.5.2, PG 8.3, Psycopg2

2009-02-25 Thread Peter Hansen

Allen Bierbaum wrote:
 On Tue, Feb 24, 2009 at 4:44 AM, Chris Miles miles.ch...@gmail.com wrote:
 On Feb 22, 6:08 am, Allen Bierbaum abierb...@gmail.com wrote:
 Python 2.5 and later will free up garbage collected memory, handing it
 back to the system.  Previous versions of Python would never free up
 memory (hence never shrink in size).

 Are you using Python 2.4?
 
 I am using Python 2.5.  But now that I understand the issue better I
 have come up with a workaround.  The biggest issue was that I didn't
 understand what I should be seeing as far as memory usage.

Although your workaround may not be generally useful, it would still be 
nice for posterity (i.e. those searching through this thread in future) 
if you could summarize how you've actually addressed this issue to your 
satisfaction, however crude or unusual that might be.  Thanks. :)

-- 
Peter Hansen

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Cannot abort wxPython thread with SQLAlchemy

2008-06-11 Thread Peter Hansen

Dominique wrote:
 On 10 juin, 02:38, Peter Hansen [EMAIL PROTECTED] wrote:
 As Python has no way to actually terminate a thread, can you explain
 what you mean by stop this thread?  Are you simply cloning the code
 from the wxPython example, with the delayedresult.AbortEvent() object,
 and calling .set() on it?
 
 That's exactly what I do.
 My Abort button is linked to an abort function which calls
 abortEvent.set(), like in the demo.
 In the producer function, I launch the query.
 What I'd like to do  is to be able to stop the thread, while the query
 is being done.
 Is it possible or am I trying to do something impossible ?

As no one else has chimed in, I'll go out on a limb a bit and say that 
it's impossible.  Python itself definitely doesn't have any way to 
forcibly kill a thread, at least not one that is buried in an external 
call (e.g. in the sqlite library).

There is a mechanism that's been added in recent versions that can 
terminate (under certain conditions) pure Python code in another thread 
by asynchronously raising an exception: search for python asynchronous 
exception and make sure you understand the issues before trying to use it.

If you could restructure your application so the long-running query 
occurs in a separate process, you could kill the process using 
operating system support for that, though perhaps not in a clean fashion.

Aside from that, you don't have many options.  What about changing the 
query so that it will return its results in increments, rather than all 
at once?  If it's a long-running query but you can break it up that way, 
then the check event flag approach you're using would be able to work.

-Peter

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Cannot abort wxPython thread with SQLAlchemy

2008-06-09 Thread Peter Hansen

Dominique wrote:
 I am using delayedresult (which is a class to do threading in
 wxPython) for a query with SQLAlchemy, using SQLite.
 
 I have an 'opened' session in the main App thread.
 
 I create another session under the delayedresult thread.
 When I try to stop this thread with a dedicated button, the thread
 doesn't abort and goes on till it sends the result.

As Python has no way to actually terminate a thread, can you explain 
what you mean by stop this thread?  Are you simply cloning the code 
from the wxPython example, with the delayedresult.AbortEvent() object, 
and calling .set() on it?

-Peter

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] howto map python-class and stored procedures

2007-07-29 Thread peter

Hello,

recently i started working with SQLAlchemy and i have one question
regarding the stored procedures.
On the following website, there is a quide of how to execute the
stored procedures:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/d0b3ad0379606e81/eb1447a0fd3129cc?lnk=gstq=func+graphrnum=1
My question is how to map Python-class and stored procedures.
For example:

class Person:
    givenName
    surName
    id

and the procedures:
getPerson( id )
setPerson( id, gName, sName )
createPerson( )
removePerson( id )

Can you please tell me if it is possible to use getPerson(...) instead of a
SELECT, and setPerson(...) instead of an UPDATE?

Thank you for your help.

Best regards,
Peter


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Sqlalchemy and py2exe...

2007-05-30 Thread Peter Damoc
PROBLEM SOLVED

Thank you very much!
Peter.

On 5/30/07, Expo [EMAIL PROTECTED] wrote:




 On 29 Mag, 18:07, pdamoc [EMAIL PROTECTED] wrote:
  Hello,
 
  I've tried packing an app I've made with py2exe and run into
  trouble...
  first it was an EGG issue, I've fixed that installing everything as a
  directory
  next came the sqlite issue... I simply cannot get it to work
 
  after I build the .exe I keep getting:
 
  Traceback (most recent call last):
File patch.py, line 53, in module
File model.pyc, line 87, in __init__
File model.pyc, line 56, in createDB
File sqlalchemy\schema.pyc, line 1190, in __init__
File sqlalchemy\schema.pyc, line 1095, in __init__
File sqlalchemy\schema.pyc, line 1114, in connect
File sqlalchemy\engine\__init__.pyc, line 154, in create_engine
File sqlalchemy\engine\strategies.pyc, line 44, in create
File sqlalchemy\engine\url.pyc, line 92, in get_dialect
  ImportError: unknown database 'sqlite'
 
  The app works fine when I start it as a script.
 
  any idea is welcomed!


 Tell py2exe to load sqlite database from sqlalchemy. This setup.py
 show how:

 #!/usr/bin/env python

 from distutils.core import setup
 import glob
 import py2exe


 def main():
     setup(
         windows=[dict(
             script='myapp.py',
             icon_resources=[(1, "myicon.ico")]
         )],
         options=dict(
             py2exe=dict(
                 compressed=1,
                 optimize=2,
                 packages=['sqlalchemy.databases.sqlite', 'other package']
             )
         )
     )

 if __name__ == '__main__':
     main()


 



-- 
There is NO FATE, we are the creators.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: KeyError: 'inet'

2007-01-24 Thread Peter Nixon
On Wed 24 Jan 2007 00:37, Michael Bayer wrote:
 no, youre testing it for me first :)

Well it seems to work ok :-)

Thanks
-- 

Peter Nixon
http://www.peternixon.net/
PGP Key: http://www.peternixon.net/public.asc


pgpAi1IpJWujj.pgp
Description: PGP signature


[sqlalchemy] Re: KeyError: 'inet'

2007-01-23 Thread Peter Nixon
Hi There

Thanks for the patch. It seems to fix the error. I will test it more over the 
following days.

I see that you just released a new SA, but I dont see this patch in the 
changelog. Did it go in?

Thanks

Peter

On Tue 16 Jan 2007 21:44, Michael Bayer wrote:
 its quite easy to add this type in.  im not exactly sure what python
 type it should accept/return so you can start with just this patch:

 Index: lib/sqlalchemy/databases/postgres.py
 ===================================================================
 --- lib/sqlalchemy/databases/postgres.py    (revision 2199)
 +++ lib/sqlalchemy/databases/postgres.py    (working copy)
 @@ -32,6 +32,10 @@
  except:
      psycopg = None
 
 +class Inet(sqltypes.TypeEngine):
 +    def get_col_spec(self):
 +        return "INET"
 +
  class PGNumeric(sqltypes.Numeric):
      def get_col_spec(self):
          if not self.precision:
 @@ -161,6 +165,7 @@
      'numeric' : PGNumeric,
      'float' : PGFloat,
      'real' : PGFloat,
 +    'inet': Inet,
      'double precision' : PGFloat,
      'timestamp' : PG2DateTime,
      'timestamp with time zone' : PG2DateTime,

 Additionally, you can skip the reflection step and/or override the Inet
 columns with any old type explicitly to avoid the failing dictionary
 lookup (if you don't feel like using the above patch).
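
[Editor's note: a minimal sketch of the second workaround mentioned above, overriding the reflected inet column with an explicit type so the ischema_names lookup never happens. It follows the 0.3-era API from the original post; the connection URL and the framedipaddress column name are placeholders and assumptions, not details from this thread.]

from sqlalchemy import BoundMetaData, Table, Column, String, create_engine

db = create_engine('postgres://user:password@localhost:5432/radiusdb')  # placeholder URL
metadata = BoundMetaData(db)

# Declaring the inet column explicitly means reflection does not have to
# look 'inet' up in ischema_names, so the KeyError is avoided.
radacct_table = Table('radacct', metadata,
                      Column('framedipaddress', String(15)),  # assumed column name
                      autoload=True)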



-- 

Peter Nixon
http://www.peternixon.net/
PGP Key: http://www.peternixon.net/public.asc




[sqlalchemy] KeyError: 'inet'

2007-01-16 Thread Peter Nixon
Hi Guys

I am a Python newbie, so as my first project I decided to port a relatively 
simple (320 lines) Perl script I wrote to Python to see how it compared. I 
asked on the #python IRC channel and they recommended I use sqlalchemy, which 
is how I ended up here. I am trying to do the following (which appears to be 
correct as far as I can tell):

db = create_engine('postgres://' + Dbuser + ':' + Dbpass + '@'
                   + Dbhost + ':' + Dbport + '/' + database)
db.echo = True
metadata = BoundMetaData(db)
radacct_table = Table('radacct', metadata, autoload=True)

However I end up with the following ugly error:

2007-01-16 12:52:24,366 INFO sqlalchemy.engine.base.Engine.0x..cc select 
current_schema()
2007-01-16 12:52:24,367 INFO sqlalchemy.engine.base.Engine.0x..cc None
2007-01-16 12:52:24,374 INFO sqlalchemy.engine.base.Engine.0x..cc
SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT substring(d.adsrc for 128) FROM 
pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND 
a.atthasdef)
  AS DEFAULT,
  a.attnotnull, a.attnum
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = (
SELECT c.oid
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = 
c.relnamespace
WHERE (n.nspname = %(schema)s OR 
pg_catalog.pg_table_is_visible(c.oid))
  AND c.relname = %(table_name)s AND (c.relkind = 'r' 
OR c.relkind = 'v')
) AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum

2007-01-16 12:52:24,377 INFO sqlalchemy.engine.base.Engine.0x..cc 
{'table_name': 'radacct', 'schema': 'public'}
Traceback (most recent call last):
  File "./radsqlkill.py", line 145, in <module>
    db_connect('localhost');
  File "./radsqlkill.py", line 80, in db_connect
    radacct_table = Table('radacct', metadata, autoload=True)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/schema.py", line 143, in __call__
    metadata.get_engine().reflecttable(table)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 505, in reflecttable
    self.dialect.reflecttable(conn, table)
  File "/usr/lib/python2.5/site-packages/sqlalchemy/databases/postgres.py", line 385, in reflecttable
    coltype = ischema_names[attype]
KeyError: 'inet'

Now this seems to be related to the fact that I am using INET field types in 
my table. While I could convert my data, it's a 2GB+ table and the INET field 
types make a BIG difference in speed and usability (I have extensive 
reporting based on them).

I have the following:
# rpm -q python
python-2.5-19.2
# rpm -q python-sqlalchemy
python-sqlalchemy-0.3.3-0.pm.1
# rpm -q python-psycopg2
python-psycopg2-2.0.2-10.1

Is there some way to make this work, or do I need to go and find another 
database library to use PostgreSQL INET types with Python?

Cheers
-- 

Peter Nixon
http://www.peternixon.net/
PGP Key: http://www.peternixon.net/public.asc

