Re: [sqlalchemy] Syntax error from sqlalchemy generated sql to sybase ase

2018-11-28 Thread Ken MacKenzie
That makes sense, and as much as I would volunteer to maintain the 
dialect, I can't justify using work resources (where I have access to 
Sybase) to maintain an open-source component for a DB we are working to 
migrate away from.

Since the metadata is collecting OK, minus some type catches I have to do 
to write to other DBs, I can maybe just write raw SQL through SQLAlchemy, 
or worst case through pyodbc, to deal with collecting the records.
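
A minimal sketch of the raw-SQL fallback mentioned above, issued through SQLAlchemy's text() construct (the in-memory SQLite URL and the table are stand-ins for the real Sybase ASE connection and data):

```python
# Hedged sketch: textual SQL through SQLAlchemy instead of ORM-generated SQL.
# The SQLite URL and table are placeholders for the real Sybase ASE source.
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
with engine.connect() as conn:
    conn.exec_driver_sql("CREATE TABLE t (x INTEGER)")
    conn.exec_driver_sql("INSERT INTO t (x) VALUES (1)")
    # the textual statement bypasses the dialect's SQL generation entirely
    rows = conn.execute(sa.text("SELECT x FROM t")).fetchall()
    print(rows)  # [(1,)]
```

Dropping to plain pyodbc would look much the same, just with a cursor instead of a Connection.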

Thank you for the help.

Ken

On Wednesday, November 28, 2018 at 6:12:41 PM UTC-5, Mike Bayer wrote:
>
> Hi there - 
>
> Unfortunately, the Sybase dialect is unmaintained and will likely be 
> removed from a future SQLAlchemy release. For this database to be 
> usable, a dedicated maintainer would need to volunteer, and we can set 
> them up with a new Sybase ASE dialect available as a third-party 
> download. The demand for this database is extremely low, the Python 
> drivers are poorly maintained, and the database itself is extremely 
> difficult to run for testing purposes. 
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Syntax error from sqlalchemy generated sql to sybase ase

2018-11-28 Thread Ken MacKenzie
Error message:

sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', 
"[42000] [FreeTDS][SQL Server]Incorrect syntax near 
'dbo.APPLICATION_STATUS_DIM.'.\n (102) (SQLExecDirectW)") [SQL: 'SELECT 
dbo."APPLICATION_STATUS_DIM"."APPLICATION_STATUS_SKEY" AS 
"dbo_APPLICATION_STATUS_DIM_APPLICATION_STATUS_SKEY", 
dbo."APPLICATION_STATUS_DIM"."APPLICATION_STATUS_CD" AS 
"dbo_APPLICATION_STATUS_DIM_APPLICATION_STATUS_CD", 
dbo."APPLICATION_STATUS_DIM"."APPLICATION_STATUS_DESC" AS 
"dbo_APPLICATION_STATUS_DIM_APPLICATION_STATUS_DESC" \nFROM 
dbo."APPLICATION_STATUS_DIM"'] (Background on this error at: 
http://sqlalche.me/e/f405)

Code triggering the error:

for record in source.query(table).all():
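
For context, a hedged sketch of the kind of setup implied by that one-liner; the connection URL, table, and columns are illustrative placeholders (the real code reflects the Sybase ASE table), and `source` is assumed to be an ORM Session:

```python
# Hedged sketch: querying a reflected-style Table through an ORM Session.
# SQLite stands in for the Sybase ASE connection; names are placeholders.
import sqlalchemy as sa
from sqlalchemy.orm import Session

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()
table = sa.Table(
    "application_status_dim", metadata,
    sa.Column("application_status_skey", sa.Integer, primary_key=True),
    sa.Column("application_status_cd", sa.String(10)),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(table.insert(),
                 [{"application_status_skey": 1,
                   "application_status_cd": "OK"}])

source = Session(engine)
# querying a Table object selects all of its columns
records = source.query(table).all()
print(records)
```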

pip freeze output:

pkg-resources==0.0.0
psycopg2==2.7.6.1
pyodbc==4.0.24
SQLAlchemy==1.2.14

Purpose of the code:

Trying a quick stab at migrating a DB from Sybase ASE to either Postgres or 
SQL Server; evaluating both targets. The connection works fine, pulling table 
metadata works fine, and writing the target DB structure from that metadata 
works fine, but selecting the records produces the error above.

If I take that statement as-is and paste it into DBeaver (replacing the \n 
with an actual line break), everything works.

This fails on both Linux (using the FreeTDS version set to 5) and Windows 
(using the DSN name for the actual ASE driver set up in my ODBC).

I am confused about what is going on here that causes Sybase to see this as a 
syntax error.

Ken



Re: [sqlalchemy] How to convert character set of the column with sqlalchemy?

2017-12-20 Thread Ken Liu
Very good, it works fine. Thank you, Mike Bayer!

On Wednesday, December 20, 2017 at 10:08:49 PM UTC+8, Mike Bayer wrote:
>
> On Wed, Dec 20, 2017 at 4:36 AM, Ken Liu <liuji...@gmail.com > 
> wrote: 
> > I want to convert query.order_by(cast(Channel.cname, String)) to 
> > cast(cname AS CHAR CHARACTER SET gbk); how do I do that? Please help me. 
>
> this looks like MySQL so the character set option is present in the 
> MySQL CHAR type: 
>
> from sqlalchemy.dialects.mysql import CHAR 
>
> cast(Channel.cname, CHAR(charset="gbk")) 
>
>
>
>
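
For reference, a minimal sketch of how the suggested cast compiles against the MySQL dialect; the table definition here is an illustrative stand-in for the real Channel model:

```python
# Hedged sketch: compiling a CAST with an explicit character set to MySQL SQL.
# The "channel" table is a placeholder for the real Channel model.
from sqlalchemy import Column, Integer, MetaData, String, Table, cast, select
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects.mysql import CHAR

channel = Table(
    "channel", MetaData(),
    Column("id", Integer, primary_key=True),
    Column("cname", String(50)),
)

stmt = select(channel.c.cname).order_by(
    cast(channel.c.cname, CHAR(charset="gbk"))
)
# render against the MySQL dialect to see the CHARACTER SET clause
sql = str(stmt.compile(dialect=mysql.dialect()))
print(sql)
```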



[sqlalchemy] How to convert character set of the column with sqlalchemy?

2017-12-20 Thread Ken Liu


I want to convert query.order_by(cast(Channel.cname, String)) to cast(cname AS 
CHAR CHARACTER SET gbk); how do I do that? Please help me. 



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken MacKenzie
On a side note, the literal_column change worked fine performance-wise, 
allowing me to remove the literal_binds/compile step.

I guess I am 50/50 on which is cleaner. Is either one more advantageous 
from a SQL-injection perspective?
SQL injection is not a primary concern at the moment for this process, as 
what we are dealing with is a data replication service pulling from a U2 
data store to SQL Server. The delete is part of batch cleanup testing.

On Thursday, August 31, 2017 at 5:13:58 PM UTC-4, Ken MacKenzie wrote:
>
> Tried doing various conversions on the pk values as they enter the 
> statement:
> 1.  to bytes
> 2.  to ascii
> 3. to latin1 (technically the same encoding as the extract source before 
> entering the db)
>
> None of which yielded a performance improvement for the non-compiled 
> version.
>
> I have read that this can be an issue with pyodbc and that there are 
> engine settings related to it.  Also perhaps I should try using the pymssql 
> driver to see if that changes anything.
>
> On Thursday, August 31, 2017 at 5:00:47 PM UTC-4, Ken MacKenzie wrote:
>>
>> So inspecting the elements of the tuple, they are both str, so hence 
>> unicode.
>>
>> Are you saying that if I convert those values to bytes it could improve 
>> performance?
>>
>>  
>>
>>> I'd not bother with the literal_binds and just use a literal value: 
>>>
>>> pkf = [(col == literal_column("'%s'" % v)) for (col, v) in zip(cols, x)] 
>>>
>>> but also I'd look to see what the nature of "v" is, if it's like a 
>>> Unicode object or something, you might be getting bogged down on the 
>>> decode/encode or something like that.   Sending as bytes() perhaps 
>>> might change that. 
>>>
>>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken MacKenzie
Tried doing various conversions on the pk values as they enter the 
statement:
1. to bytes
2. to ASCII
3. to latin1 (technically the same encoding as the extract source before 
entering the db)

None of which yielded a performance improvement for the non-compiled 
version.

I have read that this can be an issue with pyodbc and that there are engine 
settings related to it.  Also perhaps I should try using the pymssql driver 
to see if that changes anything.

On Thursday, August 31, 2017 at 5:00:47 PM UTC-4, Ken MacKenzie wrote:
>
> So inspecting the elements of the tuple, they are both str, so hence 
> unicode.
>
> Are you saying that if I convert those values to bytes it could improve 
> performance?
>
>  
>
>> I'd not bother with the literal_binds and just use a literal value: 
>>
>> pkf = [(col == literal_column("'%s'" % v)) for (col, v) in zip(cols, x)] 
>>
>> but also I'd look to see what the nature of "v" is, if it's like a 
>> Unicode object or something, you might be getting bogged down on the 
>> decode/encode or something like that.   Sending as bytes() perhaps 
>> might change that. 
>>
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken MacKenzie
So inspecting the elements of the tuple: they are both str, hence 
unicode.

Are you saying that if I convert those values to bytes it could improve 
performance?

 

> I'd not bother with the literal_binds and just use a literal value: 
>
> pkf = [(col == literal_column("'%s'" % v)) for (col, v) in zip(cols, x)] 
>
> but also I'd look to see what the nature of "v" is, if it's like a 
> Unicode object or something, you might be getting bogged down on the 
> decode/encode or something like that.   Sending as bytes() perhaps 
> might change that. 
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken MacKenzie
So here is my solution as of now. As far as I can tell it hits the marks of:

1. DB neutral
2. Performance (9k records deleted in 5 seconds)
3. Generic

(Excuse the print statements still there for testing.)

I would welcome comments on this proposed solution. I have learned a lot 
from the advice in this thread and can always stand to learn more for the 
sake of my code.

@classmethod
def dict_delete(cls, dict_in):
    """
    Delete records from the database
    based on a dictionary keyed by
    PK tuple
    """
    s = dbc.get_session()

    d_start = datetime.datetime.now()
    keys_in = [k for k in dict_in.keys()]
    batch_size = 1000
    cols = [getattr(cls, colname) for colname in cls.SQL_PK]
    while len(keys_in):
        id_batch = keys_in[:batch_size]
        del keys_in[:batch_size]

        or_cond = []
        for x in id_batch:
            pkf = [(col == v) for (col, v) in zip(cols, x)]
            and_cond = and_(*pkf)
            or_cond.append(and_cond)

        stmt = cls.__table__.delete(or_(*or_cond))
        stmt_comp = str(stmt.compile(dialect=s.bind.dialect,
                                     compile_kwargs={"literal_binds": True}))

        print(stmt_comp)
        print(len(keys_in), "-", datetime.datetime.now() - d_start)

        s.execute(stmt_comp)

    s.commit()
    s.close()




Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken MacKenzie
Ending output from batch size of 10:

501 - 0:24:35.558103
491 - 0:24:37.197976
481 - 0:24:38.909318
471 - 0:24:40.645801
461 - 0:24:41.954807
451 - 0:24:43.628955
441 - 0:24:45.246469
431 - 0:24:46.662228
421 - 0:24:48.192343
411 - 0:24:49.839395
401 - 0:24:51.513340
391 - 0:24:53.153668
381 - 0:24:54.836501
371 - 0:24:56.372331
361 - 0:24:58.034510
351 - 0:24:59.714655
341 - 0:25:01.401695
331 - 0:25:03.026082
321 - 0:25:04.657724
311 - 0:25:06.324783
301 - 0:25:07.946133
291 - 0:25:09.347907
281 - 0:25:11.000936
271 - 0:25:12.640046
261 - 0:25:14.281258
251 - 0:25:15.939942
241 - 0:25:17.651838
231 - 0:25:19.307522
221 - 0:25:20.931354
211 - 0:25:22.589152
201 - 0:25:24.223949
191 - 0:25:25.868513
181 - 0:25:27.540003
171 - 0:25:29.198259
161 - 0:25:30.492486
151 - 0:25:31.778539
141 - 0:25:33.292928
131 - 0:25:34.966223
121 - 0:25:36.623493
111 - 0:25:38.276205
101 - 0:25:39.996527
91 - 0:25:41.374338
81 - 0:25:43.011192
71 - 0:25:44.639656
61 - 0:25:46.293672
51 - 0:25:47.954129
41 - 0:25:49.663463
31 - 0:25:51.333620
21 - 0:25:52.842479
11 - 0:25:54.489094
1 - 0:25:56.109371
0 - 0:25:56.431443


Basically the only conclusion I can draw is that it is a case of the bind 
params.

So I know this is not recommended, but to retain DB neutrality can I use the 
output from query.statement.compile(dialect=s.bind.dialect, 
compile_kwargs={"literal_binds": True}) and feed that output back in directly 
to execute?
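
That compile step can be sketched in isolation; here the lightweight table and column names are illustrative, not the real schema:

```python
# Hedged sketch: rendering a statement with inline literals via literal_binds.
# Table and column names are illustrative placeholders.
import sqlalchemy as sa

t = sa.table("t", sa.column("college"), sa.column("check_no"))
stmt = sa.delete(t).where(
    sa.and_(t.c.college == "888", t.c.check_no == "1234567"))

# literal_binds asks the compiler to render bound values inline as literals
compiled = str(stmt.compile(compile_kwargs={"literal_binds": True}))
print(compiled)
# DELETE FROM t WHERE t.college = '888' AND t.check_no = '1234567'
```

The trade-off is the one discussed above: the string executes without any bind-parameter overhead, but the values are embedded in the SQL text.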


On Thursday, August 31, 2017 at 6:41:17 AM UTC-4, Ken M wrote:
>
> I tried a batch size as small as ten and each step took about 2 seconds. I 
> set it up to show me the time between each batch for that run.  Lowering 
> the batch size actually seems to worsen performance. Which is not what I 
> expected. 
>
> Sent from my iPad 
>
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken M
I tried a batch size as small as ten and each step took about 2 seconds. I set 
it up to show me the time between each batch for that run. Lowering the batch 
size actually seems to worsen performance, which is not what I expected.

Sent from my iPad

> On Aug 31, 2017, at 5:42 AM, Simon King <si...@simonking.org.uk> wrote:
> 
> I'm pretty sure the bad performance is due to pyodbc (or maybe SQL
> Server) not handling *huge* numbers of bind parameters gracefully. You
> are generating a query with (batch_size * n_primary_key_columns) bind
> parameters, which even in your original version meant 2000 bind
> parameters.
> 
> Try with a batch sizes of 10, 50 and 100. You'll run more queries but
> I suspect each query will perform better.
> 
> Simon
> 
>> On Wed, Aug 30, 2017 at 10:02 PM, Ken MacKenzie <deviloc...@gmail.com> wrote:
>> With a batch size of 1 based on the original selection, no improvement.
>> 
>> I am going to see what sql this is creating.
>> 
>> 2328928 function calls (2085124 primitive calls) in 1084.925 seconds
>> 
>>   Ordered by: internal time
>> 
>>   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>>   26  1074.733  41.336  1074.733  41.336  {method 'execute' of 'pyodbc.Cursor' objects}
>>   2  0.871  0.436  0.872  0.436  {built-in method connect}
>>   349128/348523  0.556  0.000  0.573  0.000  {built-in method isinstance}
>>   26  0.502  0.019  0.502  0.019  {method 'close' of 'pyodbc.Cursor' objects}
>>   20  0.485  0.024  0.485  0.024  {method 'commit' of 'pyodbc.Connection' objects}
>>   161062/161060  0.317  0.000  0.318  0.000  {built-in method getattr}
>>   19123  0.259  0.000  0.789  0.000  elements.py:861(__init__)
>>   9567  0.258  0.000  0.258  0.000  {method 'fetchone' of 'pyodbc.Cursor' objects}
>>   76492/19123  0.248  0.000  2.842  0.000  operators.py:297(__eq__)
>>   67012/24  0.236  0.000  2.601  0.108  visitors.py:75(_compiler_dispatch)
>>   19123  0.234  0.000  0.424  0.000  elements.py:2927(__init__)
>>   9582  0.213  0.000  1.104  0.000  elements.py:1860(_construct)
>>   19132  0.205  0.000  0.435  0.000  compiler.py:676(visit_column)
>>   19123  0.189  0.000  1.806  0.000  compiler.py:1040(_generate_generic_binary)
>>   19123  0.183  0.000  1.263  0.000  default_comparator.py:290(_check_literal)
>>   57389/19143  0.173  0.000  2.732  0.000  {built-in method eq}
>>   19123  0.171  0.000  1.912  0.000  default_comparator.py:22(_boolean_compare)
>>   28684  0.169  0.000  0.346  0.000  operators.py:1007(is_precedent)
>>   19126  0.143  0.000  0.352  0.000  elements.py:3962(__new__)
>>   19123  0.129  0.000  0.795  0.000  compiler.py:1164(visit_bindparam)
>>   19123  0.128  0.000  0.918  0.000  elements.py:3818(_bind_param)
>>   19153  0.126  0.000  0.581  0.000  base.py:1128(decorate)
>>   1  0.125  0.125  1084.925  1084.925  del_test.py:1()
>>   116845  0.121  0.000  0.121  0.000  {method 'get' of 'dict' objects}
>>   38264/9581  0.115  0.000  2.567  0.000  compiler.py:806()
>>   19123  0.113  0.000  2.048  0.000  compiler.py:1006(visit_binary)
>>   19125  0.113  0.000  0.174  0.000  compiler.py:1265(_process_anon)
>>   19125  0.109  0.000  0.451  0.000  compiler.py:1246(_truncated_identifier)
>>   19123  0.108  0.000  2.457  0.000  annotation.py:100(__eq__)
>>   19123  0.107  0.000  2.190  0.000  base.py:1306(visit_binary)
>>   19123  0.104  0.000  2.015  0.000  type_api.py:60(operate)
>>   38283  0.101  0.000  0.167  0.000  compiler.py:2901(quote)
>>   19123  0.095  0.000  0.577  0.000  compiler.py:1233(_truncate_bindparam)
>>   19206  0.094  0.000  0.163  0.000  elements.py:3918(__new__)
>>   19125  0.093  0.000  0.323  0.000  elements.py:4073(apply_map)
>>   38246  0.091  0.000  0.091  0.000  elements.py:640(self_group)
>>   88460  0.088  0.000  0.088  0.000  {method 'append' of 'list' objects}
>>   19123  0.084  0.000  2.621  0.000  properties.py:269(operate)
>>   38264/9581  0.084  0.000  2.586  0.000  compiler.py:804()
>>   19123  0.082  0.000  0.732  0.000  annotation.py:78(_compiler_dispatch)
>>   12703/3142  0.076  0.000  2.605  0.001  {method 'join' of 'str' objects}
>>   19123  0.075  0.00

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-31 Thread Ken M
Yes, the double equals was a typo, sorry about that.

Your point about a non-varchar PK: that is why I will double-check other DBs. 
SQL Server lets it fly, but I think you are right about Postgres.



Sent from my iPad

> On Aug 31, 2017, at 5:36 AM, Simon King <si...@simonking.org.uk> wrote:
> 
> The generic version would look something like this:
> 
> table = cls.__table__
> pkconditions = []
> for pk in cls.SQL_PK:
> pkconditions.append(table.c[pk] == sa.bindparam(pk))
> condition = sa.and_(*pkconditions)
> statement = sa.delete(table, whereclause=condition)
> batchparams = [dict(zip(cls.SQL_PK, v)) for v in id_batch]
> session.execute(statement, batchparams)
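
That generic executemany pattern can be fleshed out into a runnable sketch; the in-memory SQLite table and the pk1/pk2 names are illustrative, and the bind-parameter names are prefixed to keep them distinct from the column names:

```python
# Hedged sketch: executemany-style batched DELETE over a composite PK.
# SQLite stands in for SQL Server; table and column names are placeholders.
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
md = sa.MetaData()
t = sa.Table("t", md,
             sa.Column("pk1", sa.String(3), primary_key=True),
             sa.Column("pk2", sa.String(10), primary_key=True))
md.create_all(engine)

with engine.begin() as conn:
    conn.execute(t.insert(),
                 [{"pk1": "a", "pk2": str(i)} for i in range(5)])

    # one DELETE statement with bind parameters, executed once per PK tuple
    condition = sa.and_(t.c.pk1 == sa.bindparam("b_pk1"),
                        t.c.pk2 == sa.bindparam("b_pk2"))
    stmt = sa.delete(t).where(condition)
    batchparams = [{"b_pk1": "a", "b_pk2": str(i)} for i in range(3)]
    conn.execute(stmt, batchparams)

    remaining = conn.execute(
        sa.select(sa.func.count()).select_from(t)).scalar()
    print(remaining)  # 2
```

Passing a list of parameter dicts makes the driver use its executemany path, so the statement is prepared once instead of compiling a giant OR-of-ANDs per batch.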
> 
> As for the portability of the textual version: your original code
> produced fragments like this:
> 
>  field == 'value'
> 
> a) Is the double-equals a typo, or is it syntax that SQL Server
> expects? I've never used double-equals in MySQL.
> b) Are the primary key fields always strings? You later said that they
> were, but if you ever had an integer primary key column, comparing it
> against a quoted value would be an error on postgres (I believe).
> c) Can the values that you are comparing against ever contain single
> quotes? That would break your query and potentially leave you open to
> SQL injection.
> d) Are your values unicode strings or byte strings? If unicode, does
> the encoding match what the database expects?
> 
> If none of those are issues for you, the textual version is probably
> pretty safe.
> 
> Simon
> 
>> On Wed, Aug 30, 2017 at 5:30 PM, Ken MacKenzie <deviloc...@gmail.com> wrote:
>> Not sure how I would iterate through a non predetermined number of primary
>> keys.
>> 
>> I guess part of me is wondering that although textual sql is not inherently
>> db neutral how different between the db targets is the where field = 'value'
>> syntax?
>> 
>>> On Wednesday, August 30, 2017 at 12:07:52 PM UTC-4, Simon King wrote:
>>> 
>>> You could also try using executemany:
>>> 
>>> 
>>> http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing-multiple-statements
>>> 
>>> I think it would look something like this:
>>> 
>>> table = cls.__table__
>>> condition = sa.and_(
>>>table.c.pk1 == sa.bindparam('pk1'),
>>>table.c.pk2 == sa.bindparam('pk2'),
>>> )
>>> statement = sa.delete(table, whereclause=condition)
>>> batchparams = [{'pk1': v[0], 'pk2': v[1]} for v in id_batch]
>>> session.execute(statement, batchparams)
>>> 
>>> Simon
>>> 
>>>> On Wed, Aug 30, 2017 at 4:28 PM, Ken MacKenzie <devil...@gmail.com> wrote:
>>>> After the current sorted profile finishes I will revert to the textual
>>>> version and run a profile on that.  I expect another 10-15 minutes for
>>>> this
>>>> to finish right now.
>>>> 
>>>> At present the batch size is set to 1000, total record count is just
>>>> over
>>>> 9000 in these tests.
>>>> 
>>>> The reason for 1000 was at first I was looking at doing this as a
>>>> tuple_(fld, fld).in_((val, val),(val,val)) format.  The 1000 should keep
>>>> me
>>>> under most DB restrictions on the in statement.
>>>> 
>>>> However since SQL Server does not seem to support the tuple_ usage I
>>>> reverted to this method.
>>>> 
>>>> I technically have one more method and that is a concat_ in_ where I
>>>> concat
>>>> the fields.
>>>> 
>>>> Other specifics, the table in question has 2 fields for the PK, both are
>>>> varchar, one length 3, the other length 10.  There are 5 non key fields,
>>>> 3
>>>> short varchars, one decimal at 14,2 precision and one varchar(800) which
>>>> contains description text.
>>>> 
>>>> Total record count of the table before any deletion is about 1.05
>>>> million.
>>>> 
>>>> Python version is 3.4.5, running on a modest CentOS desktop and to be
>>>> fair
>>>> the SQL Server instance is sub optimal for development.
>>>> 
>>>>> On Wednesday, August 30, 2017 at 11:18:13 AM UTC-4, Simon King wrote:
>>>>> 
>>>>> It would be interesting to see the profile of the textual SQL version.
>>>>> It looks like most of the time is being spent inside pyodbc, rather
>>>>> than SQLAlchemy, so I guess it must be something to do with the
>>>>> processing of bind p

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken M
When I get in tomorrow I will take a look at some of these options. I also 
intend to spin up a quick MySQL or postgresql db to test if my textual solution 
can work against an alternate target.

Sent from my iPad

> On Aug 30, 2017, at 6:20 PM, Jonathan Vanasco  wrote:
> 
> Looking at your code and considering the size of your database, I've run into 
> performance issues under PostgreSQL a few times that may be related... 
> they're all usually because very slight text changes to a query can 
> drastically alter the server's query planner. 
> 
> The cause/fixes were:
> 
> 1. Indexes often got ignored, ending up with lots of row scans.  To fix, I 
> would wrap the "deletion" work with some sql to setup & tear down a new index 
> with all the columns present in the query.  That usually tricked it into 
> using the new index.
> 2. Too many fkey checks onto the table for deleted items.  Deferring 
> constraint checks often helped.
> 3. The position of the 'commit' ended up clearing the db's internal cache 
> each loop, so a reread of rows/indexes would happen each iteration.  
> Expanding the memory allocation for caching on the server helped handle that.
> 4. Sometimes there was a mix of all the above and/or an issue with the size 
> of the query (so many values).  What often helped was replacing the DELETE 
> with INSERTS of ids to delete into a new temporary table, then deleting from 
> that table.
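
Tip 4 above can be sketched as follows; SQLite stands in for PostgreSQL here, and the table and id values are illustrative:

```python
# Hedged sketch: stage ids in a temp table, then delete with a subquery,
# instead of one huge DELETE with thousands of inline values.
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
md = sa.MetaData()
items = sa.Table("items", md, sa.Column("id", sa.Integer, primary_key=True))
md.create_all(engine)

with engine.begin() as conn:
    conn.execute(items.insert(), [{"id": i} for i in range(10)])

    # temp table holding the ids to delete
    conn.exec_driver_sql("CREATE TEMP TABLE to_delete (id INTEGER)")
    conn.exec_driver_sql("INSERT INTO to_delete (id) VALUES (1), (2), (3)")
    conn.exec_driver_sql(
        "DELETE FROM items WHERE id IN (SELECT id FROM to_delete)")

    left = conn.execute(
        sa.select(sa.func.count()).select_from(items)).scalar()
    print(left)  # 7
```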
> 



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
So the big difference between the methods when seen with print(q):

Textual SQL is building a statement with the values inline, 
such as where college=888 and check=1234567.

The methods using the column object are building with parameters: 
where college = (?) and check = (?)

That is all I can see that differs.


On Wednesday, August 30, 2017 at 4:41:38 PM UTC-4, Ken MacKenzie wrote:
>
> I am noticing the 16 execute calls.  There should only be 10 with a batch 
> size of 1000 and a record size of 9,000 and some change.
>
> Are you just wanting to see the sql via a print(q)?
>
> On Wednesday, August 30, 2017 at 2:01:31 PM UTC-4, Mike Bayer wrote:
>>
>> it looks like you have 16 SQL statements that take a very different 
>> amount of time depending on format.  Can you please paste the output 
>> with echo=True on create_engine() so that the difference in SQL 
>> statements can be seen? 
>>
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
With a batch size of 1 based on the original selection, no improvement.

I am going to see what sql this is creating.

 2328928 function calls (2085124 primitive calls) in 1084.925 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   26  1074.733  41.336  1074.733  41.336  {method 'execute' of 'pyodbc.Cursor' objects}
   2  0.871  0.436  0.872  0.436  {built-in method connect}
   349128/348523  0.556  0.000  0.573  0.000  {built-in method isinstance}
   26  0.502  0.019  0.502  0.019  {method 'close' of 'pyodbc.Cursor' objects}
   20  0.485  0.024  0.485  0.024  {method 'commit' of 'pyodbc.Connection' objects}
   161062/161060  0.317  0.000  0.318  0.000  {built-in method getattr}
   19123  0.259  0.000  0.789  0.000  elements.py:861(__init__)
   9567  0.258  0.000  0.258  0.000  {method 'fetchone' of 'pyodbc.Cursor' objects}
   76492/19123  0.248  0.000  2.842  0.000  operators.py:297(__eq__)
   67012/24  0.236  0.000  2.601  0.108  visitors.py:75(_compiler_dispatch)
   19123  0.234  0.000  0.424  0.000  elements.py:2927(__init__)
   9582  0.213  0.000  1.104  0.000  elements.py:1860(_construct)
   19132  0.205  0.000  0.435  0.000  compiler.py:676(visit_column)
   19123  0.189  0.000  1.806  0.000  compiler.py:1040(_generate_generic_binary)
   19123  0.183  0.000  1.263  0.000  default_comparator.py:290(_check_literal)
   57389/19143  0.173  0.000  2.732  0.000  {built-in method eq}
   19123  0.171  0.000  1.912  0.000  default_comparator.py:22(_boolean_compare)
   28684  0.169  0.000  0.346  0.000  operators.py:1007(is_precedent)
   19126  0.143  0.000  0.352  0.000  elements.py:3962(__new__)
   19123  0.129  0.000  0.795  0.000  compiler.py:1164(visit_bindparam)
   19123  0.128  0.000  0.918  0.000  elements.py:3818(_bind_param)
   19153  0.126  0.000  0.581  0.000  base.py:1128(decorate)
   1  0.125  0.125  1084.925  1084.925  del_test.py:1()
   116845  0.121  0.000  0.121  0.000  {method 'get' of 'dict' objects}
   38264/9581  0.115  0.000  2.567  0.000  compiler.py:806()
   19123  0.113  0.000  2.048  0.000  compiler.py:1006(visit_binary)
   19125  0.113  0.000  0.174  0.000  compiler.py:1265(_process_anon)
   19125  0.109  0.000  0.451  0.000  compiler.py:1246(_truncated_identifier)
   19123  0.108  0.000  2.457  0.000  annotation.py:100(__eq__)
   19123  0.107  0.000  2.190  0.000  base.py:1306(visit_binary)
   19123  0.104  0.000  2.015  0.000  type_api.py:60(operate)
   38283  0.101  0.000  0.167  0.000  compiler.py:2901(quote)
   19123  0.095  0.000  0.577  0.000  compiler.py:1233(_truncate_bindparam)
   19206  0.094  0.000  0.163  0.000  elements.py:3918(__new__)
   19125  0.093  0.000  0.323  0.000  elements.py:4073(apply_map)
   38246  0.091  0.000  0.091  0.000  elements.py:640(self_group)
   88460  0.088  0.000  0.088  0.000  {method 'append' of 'list' objects}
   19123  0.084  0.000  2.621  0.000  properties.py:269(operate)
   38264/9581  0.084  0.000  2.586  0.000  compiler.py:804()
   19123  0.082  0.000  0.732  0.000  annotation.py:78(_compiler_dispatch)
   12703/3142  0.076  0.000  2.605  0.001  {method 'join' of 'str' objects}
   19123  0.075  0.000  0.129  0.000  compiler.py:959(_get_operator_dispatch)
   1  0.071  0.071  1082.250  1082.250  db_base.py:172(dict_delete)
   19123  0.070  0.000  0.089  0.000  compiler.py:1271(bindparam_string)
   19123  0.070  0.000  2.085  0.000  :1()
   9581  0.067  0.000  0.520  0.000  elements.py:1883()
   9561  0.064  0.000  2.905  0.000  db_base.py:208()
   23611/23323  0.062  0.000  0.067  0.000  {built-in method hasattr}
   28705  0.061  0.000  0.159  0.000  elements.py:4216(_expression_literal_as_text)
   28746  0.058  0.000  0.098  0.000  elements.py:4220(_literal_as_text)
   19123  0.058  0.000  0.130  0.000  type_api.py:504(coerce_compared_value)
   9562  0.056  0.000  0.416  0.000  result.py:1156(fetchone)
   19125  0.056  0.000  0.230  0.000  _collections.py:728(__missing__)
   19123  0.054  0.000  0.296  0.000  elements.py:2986(self_group)



-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email 

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
I am noticing the 16 execute calls.  There should only be 10 with a batch 
size of 1,000 and a record count of just over 9,000.

Are you just wanting to see the sql via a print(q)?

On Wednesday, August 30, 2017 at 2:01:31 PM UTC-4, Mike Bayer wrote:
>
> it looks like you have 16 SQL statements that take a very different 
> amount of time depending on format.  Can you please paste the output 
> with echo=True on create_engine() so that the difference in SQL 
> statements can be seen? 
>
> On Wed, Aug 30, 2017 at 12:30 PM, Ken MacKenzie <devil...@gmail.com 
> > wrote: 
> > Not sure how I would iterate through a non predetermined number of 
> primary 
> > keys. 
> > 
> > I guess part of me is wondering that although textual sql is not 
> inherently 
> > db neutral how different between the db targets is the where field = 
> 'value' 
> > syntax? 
> > 
> > On Wednesday, August 30, 2017 at 12:07:52 PM UTC-4, Simon King wrote: 
> >> 
> >> You could also try using executemany: 
> >> 
> >> 
> >> 
> http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing-multiple-statements
>  
> >> 
> >> I think it would look something like this: 
> >> 
> >> table = cls.__table__ 
> >> condition = sa.and_( 
> >> table.c.pk1 == sa.bindparam('pk1'), 
> >> table.c.pk2 == sa.bindparam('pk2'), 
> >> ) 
> >> statement = sa.delete(table, whereclause=condition) 
> >> batchparams = [{'pk1': v[0], 'pk2': v[1]} for v in id_batch] 
> >> session.execute(statement, batchparams) 
> >> 
> >> Simon 
> >> 
> >> On Wed, Aug 30, 2017 at 4:28 PM, Ken MacKenzie <devil...@gmail.com> 
> wrote: 
> >> > After the current sorted profile finishes I will revert to the 
> textual 
> >> > version and run a profile on that.  I expect another 10-15 minutes 
> for 
> >> > this 
> >> > to finish right now. 
> >> > 
> >> > At present the batch size is set to 1000, total record count is just 
> >> > over 
> >> > 9000 in these tests. 
> >> > 
> >> > The reason for 1000 was at first I was looking at doing this as a 
> >> > tuple_(fld, fld).in_((val, val),(val,val)) format.  The 1000 should 
> keep 
> >> > me 
> >> > under most DB restrictions on the in statement. 
> >> > 
> >> > However since SQL Server does not seem to support the tuple_ usage I 
> >> > reverted to this method. 
> >> > 
> >> > I technically have one more method and that is a concat_ in_ where I 
> >> > concat 
> >> > the fields. 
> >> > 
> >> > Other specifics, the table in question has 2 fields for the PK, both 
> are 
> >> > varchar, one length 3, the other length 10.  There are 5 non key 
> fields, 
> >> > 3 
> >> > short varchars, one decimal at 14,2 precision and one varchar(800) 
> which 
> >> > contains description text. 
> >> > 
> >> > Total record count of the table before any deletion is about 1.05 
> >> > million. 
> >> > 
> >> > Python version is 3.4.5, running on a modest CentOS desktop and to be 
> >> > fair 
> >> > the SQL Server instance is sub optimal for development. 
> >> > 
> >> > On Wednesday, August 30, 2017 at 11:18:13 AM UTC-4, Simon King wrote: 
> >> >> 
> >> >> It would be interesting to see the profile of the textual SQL 
> version. 
> >> >> It looks like most of the time is being spent inside pyodbc, rather 
> >> >> than SQLAlchemy, so I guess it must be something to do with the 
> >> >> processing of bind parameters. How many parameters are being sent in 
> >> >> per query? ie. what is len(id_batch) * len(cls.SQL_PK)? 
> >> >> 
> >> >> You could try playing with your batch sizes to see what sort of 
> effect 
> >> >> that has. 
> >> >> 
> >> >> Simon 
> >> > 

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
I tried an alternate version:

@classmethod
def dict_delete2(cls, dict_in):
    """
    Delete records from the database
    based on a dictionary keyed by
    PK tuple
    """
    s = dbc.get_session()
    keys_in = ["-".join(k) for k in dict_in.keys()]
    batch_size = 1000
    cols = [getattr(cls, colname) for colname in cls.SQL_PK]
    cols_dash = []
    for col in cols:
        cols_dash.append(col)
        cols_dash.append('-')

    cols_dash = cols_dash[:-1]

    print(cols_dash)
    while len(keys_in):
        id_batch = keys_in[:batch_size]
        # print(id_batch)
        del keys_in[:batch_size]
        q = s.query(cls)

        q = q.filter(func.concat(*cols_dash).in_(id_batch)).delete(synchronize_session=False)

        s.commit()

    s.close()


Performance was even worse?

 673960 function calls (659542 primitive calls) in 2315.534 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       16 2312.810  144.551 2312.810  144.551 {method 'execute' of 'pyodbc.Cursor' objects}
     9567    0.186    0.000    0.186    0.000 {method 'fetchone' of 'pyodbc.Cursor' objects}
     9572    0.166    0.000    0.433    0.000 elements.py:861(__init__)
        1    0.119    0.119 2315.536 2315.536 del_test.py:1()
62442/61837  0.101    0.000    0.118    0.000 {built-in method isinstance}
     9572    0.068    0.000    0.407    0.000 compiler.py:1164(visit_bindparam)
     9575    0.062    0.000    0.171    0.000 elements.py:3962(__new__)
     9561    0.060    0.000    0.492    0.000 functions.py:436(_bind_param)
     9574    0.059    0.000    0.090    0.000 compiler.py:1265(_process_anon)
     9574    0.054    0.000    0.227    0.000 compiler.py:1246(_truncated_identifier)
       10    0.052    0.005    0.738    0.074 default_comparator.py:110(_in_impl)
     9562    0.051    0.000    0.319    0.000 result.py:1156(fetchone)
     9655    0.049    0.000    0.084    0.000 elements.py:3918(__new__)
     9572    0.047    0.000    0.291    0.000 compiler.py:1233(_truncate_bindparam)
        2    0.047    0.023    0.048    0.024 {built-in method connect}
     9574    0.047    0.000    0.163    0.000 elements.py:4073(apply_map)
       10    0.042    0.004    0.042    0.004 {method 'commit' of 'pyodbc.Connection' objects}
    40649    0.040    0.000    0.040    0.000 {method 'append' of 'list' objects}
     9572    0.039    0.000    0.049    0.000 compiler.py:1271(bindparam_string)
  9696/14    0.039    0.000    0.521    0.037 visitors.py:75(_compiler_dispatch)
     9611    0.039    0.000    0.485    0.000 compiler.py:806()
14090/13802  0.037    0.000    0.042    0.000 {built-in method hasattr}
17697/17695  0.037    0.000    0.038    0.000 {built-in method getattr}
     9562    0.037    0.000    0.058    0.000 result.py:1085(process_rows)
     9561    0.036    0.000    0.080    0.000 elements.py:4254(_is_literal)
    28683    0.036    0.000    0.036    0.000 del_test.py:14()
     9562    0.030    0.000    0.068    0.000 type_api.py:504(coerce_compared_value)
       22    0.029    0.001    0.103    0.005 elements.py:1784()
    12713    0.027    0.000    0.536    0.000 {method 'join' of 'str' objects}
     9572    0.026    0.000    0.038    0.000 sqltypes.py:2608(_resolve_value_to_type)
       16    0.026    0.002    0.026    0.002 {method 'close' of 'pyodbc.Cursor' objects}
     9574    0.026    0.000    0.116    0.000 _collections.py:728(__missing__)
     9584    0.026    0.000    0.067    0.000 compiler.py:494()
    21325    0.025    0.000    0.025    0.000 {method 'get' of 'dict' objects}
      125    0.025    0.000    0.025    0.000 {built-in method loads}
  750/724    0.025    0.000    0.338    0.000 {built-in method __build_class__}
     9572    0.025    0.000    0.041    0.000 type_api.py:452(_cached_bind_processor)
     9562    0.024    0.000    0.343    0.000 result.py:868(__iter__)
     9564    0.022    0.000    0.209    0.000 result.py:1053(_fetchone_impl)
     9603    0.022    0.000    0.022    0.000 elements.py:640(self_group)
    10206    0.021    0.000    0.021    0.000 {method 'split' of 'str' objects}
     9562    0.021    0.000    0.021    0.000 result.py:1098()
     9611    0.021    0.000    0.506    0.000 compiler.py:804()
    10421    0.020    0.000    0.020    0.000 {built-in method __new__ of type object at 0x7fc6ee607e40}
     9603    0.020    0.000    0.053    0.000 elements.py:4216(_expression_literal_as_text)
     9624    0.019    0.000    0.033    0.000 elements.py:4220(_literal_as_text)
18953/18838  0.019    0.000    0.019    0.000 {built-in method len}
    10209    0.016    0.000    0.016    0.000 weakref.py:364(__getitem__)
    290/1    0.016    0.000 2315.538 2315.538 {built-in method exec}
       68    0.016    0.000    0.042    0.001 inspect.py:264(getmembers)
      361    0.014    0.000

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
Not sure how I would iterate through a non-predetermined number of primary 
keys.

I guess part of me is wondering: although textual SQL is not inherently 
DB neutral, how different is the where field = 'value' syntax between the 
DB targets?
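On that point, a hedged illustration (not from the thread; the table and values are made up): the part of textual SQL that diverges most across backends is literal quoting and escaping, while DBAPI bind parameters leave quoting to the driver. pyodbc, like the stdlib sqlite3 used here, accepts qmark-style placeholders:

```python
import sqlite3

# Hypothetical table with a two-column composite primary key,
# standing in for the real model.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (pk1 TEXT, pk2 TEXT, PRIMARY KEY (pk1, pk2))")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("A", "1"), ("A", "2"), ("B", "1")])

# The WHERE clause itself is portable ANSI SQL; only the placeholder
# style (qmark here) varies from driver to driver.
cur = conn.execute("DELETE FROM t WHERE pk1 = ? AND pk2 = ?", ("A", "2"))
print(cur.rowcount)  # 1 row deleted
```

Binding the values also sidesteps escaping problems when a key value itself contains a quote character, which inlined `field = 'value'` strings do not.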

On Wednesday, August 30, 2017 at 12:07:52 PM UTC-4, Simon King wrote:
>
> You could also try using executemany: 
>
>
> http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing-multiple-statements
>  
>
> I think it would look something like this: 
>
> table = cls.__table__ 
> condition = sa.and_( 
> table.c.pk1 == sa.bindparam('pk1'), 
> table.c.pk2 == sa.bindparam('pk2'), 
> ) 
> statement = sa.delete(table, whereclause=condition) 
> batchparams = [{'pk1': v[0], 'pk2': v[1]} for v in id_batch] 
> session.execute(statement, batchparams) 
>
> Simon 
>
> On Wed, Aug 30, 2017 at 4:28 PM, Ken MacKenzie <devil...@gmail.com 
> > wrote: 
> > After the current sorted profile finishes I will revert to the textual 
> > version and run a profile on that.  I expect another 10-15 minutes for 
> this 
> > to finish right now. 
> > 
> > At present the batch size is set to 1000, total record count is just 
> over 
> > 9000 in these tests. 
> > 
> > The reason for 1000 was at first I was looking at doing this as a 
> > tuple_(fld, fld).in_((val, val),(val,val)) format.  The 1000 should keep 
> me 
> > under most DB restrictions on the in statement. 
> > 
> > However since SQL Server does not seem to support the tuple_ usage I 
> > reverted to this method. 
> > 
> > I technically have one more method and that is a concat_ in_ where I 
> concat 
> > the fields. 
> > 
> > Other specifics, the table in question has 2 fields for the PK, both are 
> > varchar, one length 3, the other length 10.  There are 5 non key fields, 
> 3 
> > short varchars, one decimal at 14,2 precision and one varchar(800) which 
> > contains description text. 
> > 
> > Total record count of the table before any deletion is about 1.05 
> million. 
> > 
> > Python version is 3.4.5, running on a modest CentOS desktop and to be 
> fair 
> > the SQL Server instance is sub optimal for development. 
> > 
> > On Wednesday, August 30, 2017 at 11:18:13 AM UTC-4, Simon King wrote: 
> >> 
> >> It would be interesting to see the profile of the textual SQL version. 
> >> It looks like most of the time is being spent inside pyodbc, rather 
> >> than SQLAlchemy, so I guess it must be something to do with the 
> >> processing of bind parameters. How many parameters are being sent in 
> >> per query? ie. what is len(id_batch) * len(cls.SQL_PK)? 
> >> 
> >> You could try playing with your batch sizes to see what sort of effect 
> >> that has. 
> >> 
> >> Simon 
> > 
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
Sorted Textual SQL profile

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       16    1.840    0.115    1.840    0.115 {method 'execute' of 'pyodbc.Cursor' objects}
     9572    0.196    0.000    0.788    0.000 elements.py:1860(_construct)
    57700    0.189    0.000    0.189    0.000 {method 'sub' of '_sre.SRE_Pattern' objects}
     9567    0.185    0.000    0.185    0.000 {method 'fetchone' of 'pyodbc.Cursor' objects}
100329/99724 0.125    0.000    0.142    0.000 {built-in method isinstance}
    19123    0.120    0.000    0.200    0.000 elements.py:1225(__init__)
        1    0.117    0.117    5.042    5.042 del_test.py:1()
    19123    0.100    0.000    0.222    0.000 compiler.py:737(visit_textclause)
38254/9571   0.099    0.000    0.566    0.000 compiler.py:806()
 28738/14    0.096    0.000    0.598    0.043 visitors.py:75(_compiler_dispatch)
    19123    0.086    0.000    0.286    0.000 elements.py:1240(_create_text)
38254/9571   0.079    0.000    0.585    0.000 compiler.py:804()
12693/3132   0.068    0.000    0.603    0.000 {method 'join' of 'str' objects}
        1    0.065    0.065    3.811    3.811 db_base.py:138(dict_delete)
     9561    0.062    0.000    0.398    0.000 db_base.py:178()
     9571    0.060    0.000    0.240    0.000 elements.py:1883()
    28695    0.060    0.000    0.156    0.000 elements.py:4216(_expression_literal_as_text)
    28716    0.058    0.000    0.096    0.000 elements.py:4220(_literal_as_text)
     9562    0.052    0.000    0.317    0.000 result.py:1156(fetchone)
     9562    0.051    0.000    0.105    0.000 operators.py:1007(is_precedent)
    50154    0.050    0.000    0.050    0.000 {method 'append' of 'list' objects}
    19123    0.050    0.000    0.336    0.000 :1(text)
     9572    0.048    0.000    0.203    0.000 elements.py:1865()
27063/27061  0.047    0.000    0.048    0.000 {built-in method getattr}
        2    0.044    0.022    0.045    0.022 {built-in method connect}
  9571/10    0.037    0.000    0.595    0.059 compiler.py:797(visit_clauselist)
     9562    0.037    0.000    0.534    0.000 elements.py:1894(and_)
    28683    0.036    0.000    0.036    0.000 del_test.py:14()
     9562    0.036    0.000    0.057    0.000 result.py:1085(process_rows)
     9561    0.029    0.000    0.159    0.000 elements.py:1971(self_group)
28504/28389  0.028    0.000    0.028    0.000 {built-in method len}
      125    0.026    0.000    0.026    0.000 {built-in method loads}
  750/724    0.026    0.000    0.340    0.000 {built-in method __build_class__}
     9561    0.025    0.000    0.560    0.000 :1(and_)
     9561    0.024    0.000    0.130    0.000 elements.py:1818(self_group)
     9562    0.024    0.000    0.341    0.000 result.py:868(__iter__)
     9564    0.022    0.000    0.207    0.000 result.py:1053(_fetchone_impl)
     9572    0.022    0.000    0.032    0.000 _collections.py:788(coerce_generator_arg)
     9562    0.021    0.000    0.021    0.000 result.py:1098()
    19122    0.021    0.000    0.021    0.000 elements.py:1601(self_group)
    21232    0.021    0.000    0.021    0.000 {method 'get' of 'dict' objects}
       10    0.019    0.002    0.019    0.002 {method 'commit' of 'pyodbc.Connection' objects}
    19123    0.019    0.000    0.019    0.000 compiler.py:734(post_process_text)
    290/1    0.016    0.000    5.044    5.044 {built-in method exec}
       68    0.015    0.000    0.042    0.001 inspect.py:264(getmembers)
      361    0.014    0.000    0.103    0.000 inspect.py:943(getfullargspec)
     2884    0.014    0.000    0.020    0.000 sre_parse.py:197(__next)
    10420    0.013    0.000    0.013    0.000 {built-in method __new__ of type object at 0x7f9abdca4e40}
     9561    0.012    0.000    0.012    0.000 result.py:168(keys)
      361    0.012    0.000    0.065    0.000 inspect.py:2383(from_function)
      714    0.010    0.000    0.018    0.000 inspect.py:1056(formatargspec)
      296    0.010    0.000    0.041    0.000 :2016(find_spec)
4486/4198    0.010    0.000    0.015    0.000 {built-in method hasattr}
   130/38    0.010    0.000    0.044    0.001 sre_parse.py:448(_parse)
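Listings like the one above come from IPython's %prun; the same report can be produced with the stdlib directly. A minimal sketch (the profiled function is a made-up stand-in for the delete-batch construction, not code from this thread):

```python
import cProfile
import io
import pstats

def build_predicates(n):
    # Stand-in workload; the real code built one AND clause per PK tuple.
    return [("pk1", str(i)) for i in range(n)]

pr = cProfile.Profile()
pr.enable()
build_predicates(10000)
pr.disable()

buf = io.StringIO()
# sort_stats("tottime") gives the same "Ordered by: internal time" view.
pstats.Stats(pr, stream=buf).sort_stats("tottime").print_stats(10)
print("build_predicates" in buf.getvalue())  # True
```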




Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
Sorted:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       16 1053.794   65.862 1053.794   65.862 {method 'execute' of 'pyodbc.Cursor' objects}
348916/348311 0.539   0.000    0.556    0.000 {built-in method isinstance}
160920/160918 0.313   0.000    0.313    0.000 {built-in method getattr}
    19123    0.282    0.000    0.471    0.000 elements.py:2927(__init__)
 66982/14    0.234    0.000    2.561    0.183 visitors.py:75(_compiler_dispatch)
    19123    0.231    0.000    0.702    0.000 elements.py:861(__init__)
76492/19123  0.228    0.000    2.740    0.000 operators.py:297(__eq__)
     9572    0.213    0.000    1.092    0.000 elements.py:1860(_construct)
    19123    0.201    0.000    1.878    0.000 default_comparator.py:22(_boolean_compare)
    19132    0.194    0.000    0.424    0.000 compiler.py:676(visit_column)
     9567    0.190    0.000    0.190    0.000 {method 'fetchone' of 'pyodbc.Cursor' objects}
    19123    0.183    0.000    1.761    0.000 compiler.py:1040(_generate_generic_binary)
    19123    0.180    0.000    1.155    0.000 default_comparator.py:290(_check_literal)
    28684    0.171    0.000    0.347    0.000 operators.py:1007(is_precedent)
57379/19133  0.157    0.000    2.634    0.000 {built-in method eq}
    19123    0.133    0.000    0.786    0.000 compiler.py:1164(visit_bindparam)
    19143    0.123    0.000    0.566    0.000 base.py:1128(decorate)
   116835    0.120    0.000    0.120    0.000 {method 'get' of 'dict' objects}
        1    0.117    0.117 1061.906 1061.906 del_test.py:1()
    19123    0.116    0.000    2.005    0.000 compiler.py:1006(visit_binary)
    19123    0.114    0.000    0.816    0.000 elements.py:3818(_bind_param)
    19123    0.113    0.000    2.154    0.000 base.py:1306(visit_binary)
    19125    0.111    0.000    0.173    0.000 compiler.py:1265(_process_anon)
38254/9571   0.111    0.000    2.528    0.000 compiler.py:806()
    38273    0.102    0.000    0.167    0.000 compiler.py:2901(quote)
    19123    0.099    0.000    1.976    0.000 type_api.py:60(operate)
    19123    0.098    0.000    0.565    0.000 compiler.py:1233(_truncate_bindparam)
    19125    0.097    0.000    0.438    0.000 compiler.py:1246(_truncated_identifier)
    19126    0.097    0.000    0.295    0.000 elements.py:3962(__new__)
    38246    0.093    0.000    0.093    0.000 elements.py:640(self_group)
    19123    0.092    0.000    2.376    0.000 annotation.py:100(__eq__)
    19125    0.092    0.000    0.321    0.000 elements.py:4073(apply_map)
    19206    0.089    0.000    0.155    0.000 elements.py:3918(__new__)
    88398    0.087    0.000    0.087    0.000 {method 'append' of 'list' objects}
38254/9571   0.084    0.000    2.547    0.000 compiler.py:804()
    19123    0.082    0.000    2.526    0.000 properties.py:269(operate)
    19123    0.075    0.000    0.128    0.000 compiler.py:959(_get_operator_dispatch)
12693/3132   0.073    0.000    2.566    0.001 {method 'join' of 'str' objects}
    19123    0.069    0.000    0.704    0.000 annotation.py:78(_compiler_dispatch)
    19123    0.068    0.000    0.087    0.000 compiler.py:1271(bindparam_string)
        1    0.068    0.068 1060.665 1060.665 db_base.py:138(dict_delete)
     9571    0.065    0.000    0.520    0.000 elements.py:1883()
    19123    0.065    0.000    2.041    0.000 :1()
     9561    0.062    0.000    2.802    0.000 db_base.py:174()
23610/23322  0.062    0.000    0.068    0.000 {built-in method hasattr}
    28695    0.061    0.000    0.156    0.000 elements.py:4216(_expression_literal_as_text)
    19123    0.059    0.000    0.129    0.000 type_api.py:504(coerce_compared_value)
    28716    0.057    0.000    0.095    0.000 elements.py:4220(_literal_as_text)
    19125    0.057    0.000    0.229    0.000 _collections.py:728(__missing__)
    19123    0.053    0.000    0.294    0.000 elements.py:2986(self_group)
     9562    0.053    0.000    0.325    0.000 result.py:1156(fetchone)
    19135    0.053    0.000    0.134    0.000 compiler.py:494()
    19123    0.050    0.000    0.070    0.000 sqltypes.py:2608(_resolve_value_to_type)
    19123    0.050    0.000    0.082    0.000 type_api.py:452(_cached_bind_processor)





Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
After the current sorted profile finishes I will revert to the textual 
version and run a profile on that.  I expect another 10-15 minutes for this 
to finish right now.

At present the batch size is set to 1000, total record count is just over 
9000 in these tests.

The reason for 1000 was at first I was looking at doing this as a 
tuple_(fld, fld).in_((val, val),(val,val)) format.  The 1000 should keep me 
under most DB restrictions on the in statement.

However since SQL Server does not seem to support the tuple_ usage I 
reverted to this method.

I technically have one more method and that is a concat_ in_ where I concat 
the fields.

Other specifics, the table in question has 2 fields for the PK, both are 
varchar, one length 3, the other length 10.  There are 5 non key fields, 3 
short varchars, one decimal at 14,2 precision and one varchar(800) which 
contains description text.

Total record count of the table before any deletion is about 1.05 million.

Python version is 3.4.5, running on a modest CentOS desktop, and to be fair 
the SQL Server instance is suboptimal for development.
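The batching described here (slice off 1,000 keys at a time so the IN / OR list stays under backend limits) can be isolated into a small helper; a sketch with made-up key data matching the shapes above:

```python
def batches(items, batch_size=1000):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Composite (3-char, up-to-10-char) keys, just over 9,000 of them.
keys = [("%03d" % (i % 5), str(i)) for i in range(9123)]
chunks = list(batches(keys))
print(len(chunks))      # 10
print(len(chunks[-1]))  # 123
```

Each chunk would then drive one DELETE round trip, which matches the roughly ten execute calls expected for a 9,000-row run.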

On Wednesday, August 30, 2017 at 11:18:13 AM UTC-4, Simon King wrote:
>
> It would be interesting to see the profile of the textual SQL version. 
> It looks like most of the time is being spent inside pyodbc, rather 
> than SQLAlchemy, so I guess it must be something to do with the 
> processing of bind parameters. How many parameters are being sent in 
> per query? ie. what is len(id_batch) * len(cls.SQL_PK)? 
>
> You could try playing with your batch sizes to see what sort of effect 
> that has. 
>
> Simon 
>



Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
After I finish the current profile run I will revert and profile that one 
for comparison.  I am expecting another 10-15 minutes for this to finish.

On Wednesday, August 30, 2017 at 11:18:13 AM UTC-4, Simon King wrote:
>
> It would be interesting to see the profile of the textual SQL version. 
> It looks like most of the time is being spent inside pyodbc, rather 
> than SQLAlchemy, so I guess it must be something to do with the 
> processing of bind parameters. How many parameters are being sent in 
> per query? ie. what is len(id_batch) * len(cls.SQL_PK)? 
>
> You could try playing with your batch sizes to see what sort of effect 
> that has. 
>
> Simon 
>
> On Wed, Aug 30, 2017 at 4:01 PM, Ken MacKenzie <devil...@gmail.com 
> > wrote: 
> > So I implemented this version with one change.  I moved this line: 
> > 
> > cols = [getattr(cls, colname) for colname in cls.SQL_PK] 
> > 
> > To the beginning before the loop, actually before the outer loop that 
> parses 
> > the batch. 
> > 
> > However, the new version is MUCH slower than the original, so I profiled 
> it 
> > and this is what I get: 
> > 
> >  %prun import db_api_lib.del_test 
> >  1421296 function calls (1277536 primitive calls) in 620.434 
> seconds 
> > 
> >Ordered by: internal time 
> > 
> >ncalls  tottime  percall  cumtime  percall filename:lineno(function) 
> >        12  615.512   51.293  615.512   51.293 {method 'execute' of 'pyodbc.Cursor' objects} 
> > 208876/208271  0.316    0.000    0.332    0.000 {built-in method isinstance} 
> > 96876/96874    0.184    0.000    0.185    0.000 {built-in method getattr} 
> >     11123      0.137    0.000    0.242    0.000 elements.py:2927(__init__) 
> >  38970/10      0.136    0.000    1.502    0.150 visitors.py:75(_compiler_dispatch) 
> >     11123      0.128    0.000    0.397    0.000 elements.py:861(__init__) 
> > 44492/11123    0.127    0.000    1.504    0.000 operators.py:297(__eq__) 
> >      5568      0.116    0.000    0.617    0.000 elements.py:1860(_construct) 
> >     11132      0.113    0.000    0.246    0.000 compiler.py:676(visit_column) 
> >     11123      0.111    0.000    1.039    0.000 compiler.py:1040(_generate_generic_binary) 
> >      5567      0.108    0.000    0.108    0.000 {method 'fetchone' of 'pyodbc.Cursor' objects} 
> >     11123      0.099    0.000    0.646    0.000 default_comparator.py:290(_check_literal) 
> >     16684      0.095    0.000    0.195    0.000 operators.py:1007(is_precedent) 
> >     11123      0.094    0.000    1.010    0.000 default_comparator.py:22(_boolean_compare) 
> > 33375/11129    0.090    0.000    1.445    0.000 {built-in method eq} 
> >     11123      0.075    0.000    0.465    0.000 compiler.py:1164(visit_bindparam) 
> >         1      0.075    0.075  619.467  619.467 db_base.py:138(dict_delete) 
> >     11139      0.073    0.000    0.331    0.000 base.py:1128(decorate) 
> >     68831      0.070    0.000    0.070    0.000 {method 'get' of 'dict' objects} 
> >         1      0.068    0.068  620.471  620.471 del_test.py:1() 
> > 22250/5567     0.067    0.000    1.482    0.000 compiler.py:806() 
> >     11125      0.066    0.000    0.102    0.000 compiler.py:1265(_process_anon) 
> >     11123      0.065    0.000    1.179    0.000 compiler.py:1006(visit_binary) 
> >     11123      0.063    0.000    1.263    0.000 base.py:1306(visit_binary) 
> >     11123      0.061    0.000    0.458    0.000 elements.py:3818(_bind_param) 
> >     22269      0.058    0.000    0.097    0.000 compiler.py:2901(quote) 
> >     11125      0.058    0.000    0.261    0.000 compiler.py:1246(_truncated_identifier) 
> >     11123      0.058    0.000    1.299    0.000 annotation.py:100(__eq__) 
> >     11123      0.056    0.000    1.066    0.000 type_api.py:60(operate) 
> >     56386      0.055    0.000    0.055    0.000 {method 'append' of 'list' objects} 
> >     11126      0.055    0.000    0.168    0.000 elements.py:3962(__new__) 
> >     11123      0.055    0.000    0.334    0.000 compiler.py:1233(_truncate_bindparam) 
> >     11125      0.054    0.000    0.192    0.000 elements.py:4073(apply_map) 
> >     11206      0.051    0.000    0.089    0.000 elements.py:3918(__new__) 
> >     22246      0.051    0.000    0.051    0.000 elements.py:640(self_group) 
> > 22250/5567     0.049    0.000    1.493    0.000 compiler.py:804() 
> >         2      0.048    0.024    0.049    0.024 {built-in method connect} 
> >     11123      0.0

Re: [sqlalchemy] Is Textual SQL DB neutral

2017-08-30 Thread Ken MacKenzie
So I implemented this version with one change.  I moved this line:

cols = [getattr(cls, colname) for colname in cls.SQL_PK] 

to the beginning, before the loop (actually before the outer loop that 
processes the batches).

However, the new version is MUCH slower than the original, so I profiled it 
and this is what I get:

 %prun import db_api_lib.del_test
 1421296 function calls (1277536 primitive calls) in 620.434 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       12  615.512   51.293  615.512   51.293 {method 'execute' of 'pyodbc.Cursor' objects}
208876/208271 0.316   0.000    0.332    0.000 {built-in method isinstance}
96876/96874  0.184    0.000    0.185    0.000 {built-in method getattr}
    11123    0.137    0.000    0.242    0.000 elements.py:2927(__init__)
 38970/10    0.136    0.000    1.502    0.150 visitors.py:75(_compiler_dispatch)
    11123    0.128    0.000    0.397    0.000 elements.py:861(__init__)
44492/11123  0.127    0.000    1.504    0.000 operators.py:297(__eq__)
     5568    0.116    0.000    0.617    0.000 elements.py:1860(_construct)
    11132    0.113    0.000    0.246    0.000 compiler.py:676(visit_column)
    11123    0.111    0.000    1.039    0.000 compiler.py:1040(_generate_generic_binary)
     5567    0.108    0.000    0.108    0.000 {method 'fetchone' of 'pyodbc.Cursor' objects}
    11123    0.099    0.000    0.646    0.000 default_comparator.py:290(_check_literal)
    16684    0.095    0.000    0.195    0.000 operators.py:1007(is_precedent)
    11123    0.094    0.000    1.010    0.000 default_comparator.py:22(_boolean_compare)
33375/11129  0.090    0.000    1.445    0.000 {built-in method eq}
    11123    0.075    0.000    0.465    0.000 compiler.py:1164(visit_bindparam)
        1    0.075    0.075  619.467  619.467 db_base.py:138(dict_delete)
    11139    0.073    0.000    0.331    0.000 base.py:1128(decorate)
    68831    0.070    0.000    0.070    0.000 {method 'get' of 'dict' objects}
        1    0.068    0.068  620.471  620.471 del_test.py:1()
22250/5567   0.067    0.000    1.482    0.000 compiler.py:806()
    11125    0.066    0.000    0.102    0.000 compiler.py:1265(_process_anon)
    11123    0.065    0.000    1.179    0.000 compiler.py:1006(visit_binary)
    11123    0.063    0.000    1.263    0.000 base.py:1306(visit_binary)
    11123    0.061    0.000    0.458    0.000 elements.py:3818(_bind_param)
    22269    0.058    0.000    0.097    0.000 compiler.py:2901(quote)
    11125    0.058    0.000    0.261    0.000 compiler.py:1246(_truncated_identifier)
    11123    0.058    0.000    1.299    0.000 annotation.py:100(__eq__)
    11123    0.056    0.000    1.066    0.000 type_api.py:60(operate)
    56386    0.055    0.000    0.055    0.000 {method 'append' of 'list' objects}
    11126    0.055    0.000    0.168    0.000 elements.py:3962(__new__)
    11123    0.055    0.000    0.334    0.000 compiler.py:1233(_truncate_bindparam)
    11125    0.054    0.000    0.192    0.000 elements.py:4073(apply_map)
    11206    0.051    0.000    0.089    0.000 elements.py:3918(__new__)
    22246    0.051    0.000    0.051    0.000 elements.py:640(self_group)
22250/5567   0.049    0.000    1.493    0.000 compiler.py:804()
        2    0.048    0.024    0.049    0.024 {built-in method connect}
    11123    0.047    0.000    1.386    0.000 properties.py:269(operate)
8689/3128    0.045    0.000    1.508    0.000 {method 'join' of 'str' objects}
    11123    0.045    0.000    0.076    0.000 compiler.py:959(_get_operator_dispatch)
    11123    0.045    0.000    0.056    0.000 compiler.py:1271(bindparam_string)
    11123    0.040    0.000    0.411    0.000 annotation.py:78(_compiler_dispatch)
15610/15322  0.038    0.000    0.043    0.000 {built-in method hasattr}
     5567    0.038    0.000    0.294    0.000 elements.py:1883()
    11125    0.036    0.000    0.138    0.000 _collections.py:728(__missing__)
    16691    0.035    0.000    0.090    0.000 elements.py:4216(_expression_literal_as_text)
     5561    0.035    0.000    1.539    0.000 db_base.py:174()
    16704    0.033    0.000    0.055    0.000 elements.py:4220(_literal_as_text)
    11123    0.033    0.000    0.073    0.000 type_api.py:504(coerce_compared_value)
31584/31469  0.031    0.000    0.032    0.000 {built-in method len}
    11123    0.030    0.000    0.165    0.000 elements.py:2986(self_group)
     5562    0.030    0.000    0.187    0.000 result.py:1156(fetchone)
    11131    0.030    0.000    0.077    0.000 compiler.py:494()
    11123    0.029    0.000    0.047    0.000 type_api.py:452(_cached_bind_processor)

Any guidance on how I can improve the performance of this solution while 
remaining DB neutral?

On Wednesday, August 30, 2017 at 4:43:01 AM UTC-4, Simon King wrote:
>
> On Tue, Aug 29, 2017 at 9:49 PM, Ken MacKenzie <devil...@gmail.com 
> > wrote: 
> > I have a query I have construc

[sqlalchemy] Is Textual SQL DB neutral

2017-08-29 Thread Ken MacKenzie
I have a query I constructed in which I had to deal with a composite 
primary key to select items:

q = s.query(cls)
or_cond = []
for x in id_batch:
    pkf = [text(f + "=='" + v + "'") for f, v in zip(cls.SQL_PK, x)]
    and_cond = and_(*pkf)
    or_cond.append(and_cond)

q = q.filter(or_(*or_cond)).delete(synchronize_session=False)

cls.SQL_PK is a tuple of the primary key fields for the model described by 
the class.  This is a class method defined on a base class that the model 
inherits from.

The current target is SQL Server.  My concern is with using 
text("field = 'value'"): is that going to work for other DB targets, like 
say Postgres?

The first round of doing this I tried using 
tuple_(*cls.SQL_PK).in_(id_batch), but that did not work, and the resulting 
SQL did not work in SSMS either, leading me to believe that SQL Server (or 
at least the version we are using) does not support tuples.

Ken

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] A story of a field named object... and is it a big deal

2017-08-11 Thread Ken MacKenzie
Thank you everyone for the responses.

I think it is not as big a deal as I might have been expecting.  The
purist in me does not like redefining object, but from an API perspective
there is no cleaner, purpose-built name for the field in question.

Ken



Re: [sqlalchemy] A story of a field named object... and is it a big deal

2017-08-09 Thread Ken MacKenzie
yeah but I have it in the model as

class gltable(Base):
    ...
    object = Column(String(6))
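
If the rename ever happens, declarative also lets the Python attribute and the database column name differ, so DB consumers see no change (a sketch; the String(6) type is taken from the snippet above, the rest is invented):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class gltable(Base):
    __tablename__ = "gltable"
    id = Column(Integer, primary_key=True)  # placeholder PK for the sketch
    # Python-side attribute is object_, but the DB column stays named "object".
    object_ = Column("object", String(6))

row = gltable(id=1, object_="ABC123")
print(row.object_)                       # ABC123
print("object" in gltable.__table__.c)   # True
```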

On Wed, Aug 9, 2017 at 6:33 PM, Mike Bayer <mike...@zzzcomputing.com> wrote:

> On Wed, Aug 9, 2017 at 2:12 PM, Ken MacKenzie <deviloc...@gmail.com>
> wrote:
> > So I have been using SQLAlchemy to convert some unidata data stores
> > into ms sql data.
> >
> > One of the GL components in our system is called object, well object
> > code.
> >
> > Most refer to it as object so when I defined my model for the table
> > including it I named it object.
> >
> > It all works fine, but object is technically something else in
> > Python.  I guess in theory within the lexical scope of that class I am
> > redefining what object means.
> >
> > Is this a big deal?  I am viewing it as a big deal and I want to get it
> > changed, which requires some coordination because what was an experiment
> > turned into an in use prototype (ain't that always the way).
> >
> > I just wanted to get some more experienced feedback in case any of the
> > data consumers start asking why I am wanting to change something that
> > works to rename this field.
>
> the column can be named object in the database, that's just a string name.
>
> Python side, you can name a field "object", Python doesn't complain:
>
> >>> class Foo(object):
> ... def __init__(self, object):
> ... self.object = object
> ...
> >>> f1 = Foo(object='hi')
> >>> print f1.object
> hi
>
> if you wanted to be perfect you'd name it "object_" or something else
> totally but it isn't essential.
>
>
>
> >
> > Ken
> >



[sqlalchemy] A story of a field named object... and is it a big deal

2017-08-09 Thread Ken MacKenzie
So I have been using SQLAlchemy to convert some unidata data stores into 
ms sql data.

One of the GL components in our system is called object, well object code.

Most refer to it as object so when I defined my model for the table 
including it I named it object.

It all works fine, but object is technically something else in Python. 
I guess in theory, within the lexical scope of that class, I am redefining 
what object means.

Is this a big deal?  I am viewing it as a big deal and I want to get it 
changed, which requires some coordination because what was an experiment 
turned into an in use prototype (ain't that always the way).

I just wanted to get some more experienced feedback in case any of the data 
consumers start asking why I am wanting to change something that works to 
rename this field.

Ken



[sqlalchemy] Question about eagerloading and depth of inheritance with respect to polymorphic mappings

2016-06-14 Thread Ken Linehan
Hello,

I'm working with a class inheritance structure which can be illustrated by 
this example:
class Entity(object):

    def __init__(self,
                 entity_type=EntityTypeEntity,
                 id=None):

        self.entity_type = entity_type
        self.id = id

class Person(Entity):

    def __init__(self,
                 entity_class=None,
                 entity_type=EntityTypePerson,
                 id=None,
                 name_first=None,
                 name_last=None,
                 name_middle=None,
                 name_prefix=None,
                 name_suffix=None):

        super(Person, self).__init__(entity_type,
                                     id)

        self.entity_class = entity_class
        self.name_first = name_first
        self.name_last = name_last
        self.name_middle = name_middle
        self.name_prefix = name_prefix
        self.name_suffix = name_suffix

class SupporterPerson(Person):

    def __init__(self,
                 entity_class=EntityClassSupporterPerson,
                 entity_type=EntityTypePerson,
                 id=None,
                 name_first=None,
                 name_last=None,
                 name_middle=None,
                 name_prefix=None,
                 name_suffix=None,
                 relationship=None):

        super(SupporterPerson, self).__init__(entity_class,
                                              entity_type,
                                              id,
                                              name_first,
                                              name_last,
                                              name_middle,
                                              name_prefix,
                                              name_suffix)

        self.relationship = relationship

My current mappings look like this:
# Mappers
self.entity_table_mapper = mapper(
  Entity,
  self.entities_table,
  polymorphic_on=self.entities_table.c.entity_type,
  polymorphic_identity=EntityTypeEntity,
  with_polymorphic='*',
  properties={
'entity_type': self.entities_table.c.entity_type,
'id': self.entities_table.c.id,
  }
)

self.persons_table_mapper = mapper(
  Person,
  self.persons_table,
  polymorphic_on=self.persons_table.c.entity_class,
  polymorphic_identity=EntityTypePerson,
  with_polymorphic='*',
  inherits=Entity,
)

self.supporter_persons_table_mapper = mapper(
  SupporterPerson,
  self.supporter_persons_table,
  polymorphic_identity=EntityClassSupporterPerson,
  with_polymorphic='*',
  inherits=Person,
)

I am able to instantiate a SupporterPerson() and successfully store all 
attributes correctly in their respective tables. So the mapping seems to 
work well. However...

In the broader context of my application, a query to load Causes() 
(sketched out below) will eagerload a list of that cause's supporters.  When 
supporters are eagerloaded, the supporter is reliably loaded to the level 
of Person(); however, the attributes specific to SupporterPerson() are not 
loaded.  In other words, the attributes specific to the youngest child in 
my class inheritance structure are NOT loaded by the eagerload.

class Cause(object):

    def __init__(self,
                 id=None,
                 supporters=None):

        self.id = id
        self.supporters = [] if supporters is None else supporters

So my question is, Is there a problem with my mapping? Or is this depth of 
inheritance not supported by the eagerload? Or perhaps there is something 
else I'm missing altogether ( unrelated to the eagerload ) ? Any advice or 
assistance would be greatly appreciated. Sorry if I'm missing something 
that is already clearly stated in the documentation.

Best,

Ken



Re: [sqlalchemy] SQLAlchemy: Database Access Using Python - Developer's Library book

2015-11-08 Thread Ken Lareau
On Thu, Nov 5, 2015 at 7:34 AM, Mike Bayer <mike...@zzzcomputing.com> wrote:

>
>
> On 11/03/2015 06:55 PM, Ken Lareau wrote:
>
>> I came across this during a search, and in the four different sites
>> I've checked, I've seen four different release dates varying from
>> 2008 to 2018, for example:
>>
>>
>> http://www.amazon.com/SQLAlchemy-Database-Access-Developers-Library/dp/0132364670
>>
>> (which has 2018 for the release date).  So I'm curious...
>>
>> Is this a very old book (2008) or a book that has several years
>> to be released (2018)?  Does anyone actually know which of
>> the various dates are most likely correct? :)  (I've seen a few
>> around May-June of 2016, which seems the most likely, but...)
>>
>> Given one of the authors of the book, I'm hoping I might be able
>> to find out more here. :)
>>
>
>
> it's unfortunately a non-existent book for the time being and I'm not able
> to make any predictions when that might change.


Understood, and thanks for the response.

-- 
Ken Lareau



[sqlalchemy] SQLAlchemy: Database Access Using Python - Developer's Library book

2015-11-03 Thread Ken Lareau
I came across this during a search, and in the four different sites
I've checked, I've seen four different release dates varying from
2008 to 2018, for example:

http://www.amazon.com/SQLAlchemy-Database-Access-Developers-Library/dp/0132364670

(which has 2018 for the release date).  So I'm curious...

Is this a very old book (2008) or a book that has several years
to be released (2018)?  Does anyone actually know which of
the various dates are most likely correct? :)  (I've seen a few
around May-June of 2016, which seems the most likely, but...)

Given one of the authors of the book, I'm hoping I might be able
to find out more here. :)

-- 
Ken Lareau



Re: [sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Ken Lareau
Dave,

Thanks for the response.  Unfortunately the switch to PostgreSQL, while not
a long ways away, probably will not occur until at least the beginning of
next year and this rewrite is needed now.  If there are no other options
available then I'll just need to brute force it. :)

- Ken


On Wed, Sep 2, 2015 at 6:32 PM, Dave Vitek <dvi...@grammatech.com> wrote:

> At least one database (postgres) has a pub/sub messaging facility
> (NOTIFY/LISTEN) that you can use to do this.  See the postgres docs.  We
> use this extensively.
>
> On the listen end, you basically want to get down to the psycopg layer,
> because sqlalchemy's layers aren't going to be helpful.
>
> 1.  Get it using engine.raw_connection()
> 2.  Detach it from the thread pool (c.detach())
> 3.  Change the isolation level to autocommit
> c.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
> (note that if you return the connection to the connection pool, this
> change may survive after the connection is returned to the pool, causing
> subtle havoc later)
> 4. Call c.poll() to wait for events (see psycopg2 docs)
> 5. Use c.notifies to get messages
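> The listen-side steps above, sketched as a minimal loop (assumes psycopg2,
> an existing Engine named engine, and a channel name of our choosing --
> "deployments" here is made up; not tested against a live server):

```python
import select

def listen_for_changes(engine, channel="deployments", timeout=5.0):
    """Yield (channel, payload) tuples for Postgres NOTIFY events.

    Sketch of the numbered steps above; assumes a psycopg2-backed Engine
    and a hypothetical NOTIFY channel named "deployments".
    """
    import psycopg2.extensions  # deferred so the sketch imports cleanly

    fairy = engine.raw_connection()        # 1. get the pooled DBAPI connection
    fairy.detach()                         # 2. detach it from the pool
    conn = fairy.connection                # the underlying psycopg2 connection
    conn.set_isolation_level(              # 3. autocommit so LISTEN takes effect
        psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    conn.cursor().execute("LISTEN %s;" % channel)
    while True:
        # 4. block until the socket is readable (or the timeout elapses)
        if select.select([conn], [], [], timeout) == ([], [], []):
            continue                       # timed out; loop (or check a stop flag)
        conn.poll()
        while conn.notifies:               # 5. drain queued notifications
            note = conn.notifies.pop(0)
            yield note.channel, note.payload
```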
>
> On the notify end, you don't need to do these special things to the
> connection and can issue messages using raw sql text on sqlalchemy
> connection objects.
>
> I would assume other backends will be completely different.  You can use
> select/epoll/whatever to do async IO if needed.  You may find postgres'
> advisory locks useful if any synchronization needs arise.
>
> References:
> http://www.postgresql.org/docs/9.4/static/sql-listen.html
> http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications
>
> - Dave
>
>
> On 9/2/2015 8:48 PM, Ken Lareau wrote:
>
> I'm going to try to see if I can give enough detail here to allow folks to
> make sense, but I will be simplifying things a bit to prevent a 1,000+ line
> email, too...
>
> So I have an in-house application that handles software deployments.  It
> uses a database backend to keep track of current deployments and maintain
> history (actually, the database is used more generally for our Site
> Operations site management with specific tables created just for the
> application).  I am working on a major refactoring of the deployment code
> itself where the actually installation of the software is handled by a
> daemon that runs constantly in the background looking for new deployments
> to perform.  The key tables look like this:
>
>
> class Deployment(Base):
> __tablename__ = 'deployments'
>
> id = Column(u'DeploymentID', INTEGER(), primary_key=True)
> package_id = Column(
> INTEGER(),
> ForeignKey('packages.package_id', ondelete='cascade'),
> nullable=False
> )
>
> package = relationship(
> "Package",
> uselist=False,
> back_populates='deployments'
> )
>
> user = Column(String(length=32), nullable=False)
> status = Column(
> Enum('pending', 'queued', 'inprogress', 'complete', 'failed',
>  'canceled', 'stopped'),
> server_default='pending',
> nullable=False,
> )
> declared = Column(
> TIMESTAMP(),
> nullable=False,
> server_default=func.current_timestamp()
> )
> created_at = synonym('declared')
> app_deployments = relationship(
> 'AppDeployment', order_by="AppDeployment.created_at,
> AppDeployment.id"
> )
> host_deployments = relationship(
> 'HostDeployment', order_by="HostDeployment.created_at,
> HostDeployment.id"
> )
>
>
> class AppDeployment(Base):
> __tablename__ = 'app_deployments'
>
> id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
> deployment_id = Column(
> u'DeploymentID',
> INTEGER(),
> ForeignKey('deployments.DeploymentID', ondelete='cascade'),
> nullable=False
> )
> app_id = Column(
> u'AppID',
> SMALLINT(display_width=6),
> ForeignKey('app_definitions.AppID', ondelete='cascade'),
> nullable=False
> )
>
> application = relationship("AppDefinition", uselist=False)
> target = synonym('application')
> deployment = relationship("Deployment", uselist=False)
>
> user = Column(String(length=32), nullable=False)
> status = Column(
> Enum(
> 'complete',
> 'incomplete',
> 'inprogress',
> 'invalidated',
> 'validated',
> ),
> nullable=False
> )
> environment_id = Column(
> u'environment_id',
> 

[sqlalchemy] Sane way to monitor external DB changes from application?

2015-09-02 Thread Ken Lareau
e relevant entries
2) Check against the previous state to see which have changed; if there are
changes, notify the user
3) Update the tracked state for all
4) Sleep for a given amount of time, then go back to (1)

This is of course very database intensive, so to finally get to the
point... I'm wondering if there's a way to do this such that the
application can 'listen' for changes and be notified when they occur.  I've
tried doing a bit of reading on events, but not having used them before I'm
unsure if this will work, especially since multiple deployments can be
occurring so the 'listener' will need to be only paying attention to the
entries related to that one given deployment (basically just the
AppDeployment and HostDeployment entries that are tied to the specific
Deployment entry).  If it makes a difference, the database server itself is
MySQL currently, though there are plans to migrate to PostgreSQL in the
not-too-distant future.  This also leads to the possibly using triggers on
the DB side, but I'm unsure if that's a valid possibility here?

Hopefully this made sense, but if any clarification is needed, please let
me know.

-- 
Ken Lareau
klar...@ifwe.co



[sqlalchemy] Multiple delete attempts of same rows with a star schema configuration

2014-09-04 Thread Ken Lareau
So I have a few tables as follows (abbreviated for unnecessary columns):

class Project(Base):
__tablename__ = 'projects'

id = Column(u'project_id', INTEGER(), primary_key=True)

applications = relationship(
'AppDefinition',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
lazy=False,
)

package_definitions = relationship(
'PackageDefinition',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
)


class PackageDefinition(Base):
__tablename__ = 'package_definitions'

id = Column(u'pkg_def_id', INTEGER(), primary_key=True)

applications = relationship(
'AppDefinition',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
)

projects = relationship(
'Project',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
)


class AppDefinition(Base):
__tablename__ = 'app_definitions'

id = Column(u'AppID', SMALLINT(display_width=2), primary_key=True)

package_definitions = relationship(
'PackageDefinition',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
)

projects = relationship(
'Project',
secondary=lambda: Base.metadata.tables['project_package'],
passive_deletes=True,
)


class ProjectPackage(Base):
__tablename__ = 'project_package'

project_id = Column(
INTEGER(),
ForeignKey('projects.project_id', ondelete='cascade'),
primary_key=True
)
pkg_def_id = Column(
INTEGER(),
ForeignKey('package_definitions.pkg_def_id', ondelete='cascade'),
primary_key=True
)
app_id = Column(
SMALLINT(display_width=6),
ForeignKey('app_definitions.AppID', ondelete='cascade'),
primary_key=True
)

app_definition = relationship('AppDefinition', uselist=False)
package_definition = relationship('PackageDefinition', uselist=False)
project = relationship('Project', uselist=False)


If I select a row from the projects table and try to delete it, when I try
to commit
the session I get something like this:

[INFO] DELETE FROM project_package WHERE project_package.project_id = ? AND
project_package.app_id = ?
[INFO] (1L, 1L)
[INFO] DELETE FROM project_package WHERE project_package.project_id = ? AND
project_package.pkg_def_id = ?
[INFO] (1L, 1L)
[INFO] ROLLBACK

At the time of the delete, the project_package table simply has this:

[('project_id', 1L), ('pkg_def_id', 1L), ('app_id', 1L)]

Obviously the combination of the ondelete='cascade' settings in the
project_package table and the 'passive_deletes' settings in the
relationships
is what's triggering this, but after try many different searches on Google,
I'm
not finding the right solution to fix this; obviously I only want it to
attempt to
delete the row once, but each relationship in project is triggering its own
delete which is bad.

So... help?  Am I missing something simple here, or is my model not good
enough to do the right thing?

-- 
- Ken Lareau



Re: [sqlalchemy] Multiple delete attempts of same rows with a star schema configuration

2014-09-04 Thread Ken Lareau
On Thu, Sep 4, 2014 at 3:30 PM, Michael Bayer mike...@zzzcomputing.com
wrote:


 On Sep 4, 2014, at 5:37 PM, Ken Lareau klar...@tagged.com wrote:

 So I have a few tables as follows (abbreviated for unnecessary columns):

 class Project(Base):
 __tablename__ = 'projects'

 id = Column(u'project_id', INTEGER(), primary_key=True)

 applications = relationship(
 'AppDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 lazy=False,
 )

 package_definitions = relationship(
 'PackageDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class PackageDefinition(Base):
 __tablename__ = 'package_definitions'

 id = Column(u'pkg_def_id', INTEGER(), primary_key=True)

 applications = relationship(
 'AppDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )

 projects = relationship(
 'Project',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class AppDefinition(Base):
 __tablename__ = 'app_definitions'

 id = Column(u'AppID', SMALLINT(display_width=2), primary_key=True)

 package_definitions = relationship(
 'PackageDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )

 projects = relationship(
 'Project',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class ProjectPackage(Base):
 __tablename__ = 'project_package'

 project_id = Column(
 INTEGER(),
 ForeignKey('projects.project_id', ondelete='cascade'),
 primary_key=True
 )
 pkg_def_id = Column(
 INTEGER(),
 ForeignKey('package_definitions.pkg_def_id', ondelete='cascade'),
 primary_key=True
 )
 app_id = Column(
 SMALLINT(display_width=6),
 ForeignKey('app_definitions.AppID', ondelete='cascade'),
 primary_key=True
 )

 app_definition = relationship('AppDefinition', uselist=False)
 package_definition = relationship('PackageDefinition', uselist=False)
 project = relationship('Project', uselist=False)


 this model has serious issues, though it can be made to hobble along with
 some small changes.  The “three way association table” is not a great
 pattern in the first place.  Additionally, using that same table as
 “secondary” in all those places is clearly the issue here, and they don’t
 even have back_populates between them, which are used in many-to-many in
 order to try avoid “double deleting”.   But that wouldn’t solve the issue
 here anyway.


As I feared, I figured this model would be frowned upon. :)  Note that this
model (or the underlying database)
is not set in stone; if you have suggestions on a better way to link the
three main tables, I would love to hear
it (and we did add back_populates (somehow missed them during the
restructuring of the models), but as
you said, didn't help in this case).



 In this case, you have mapped to the project_package table directly;
 you’re using the association object pattern.  The note at the end of
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#association-object
 applies here - you absolutely should have viewonly=True on all of those
 relationship() directives that refer to this association table as the
 “secondary” table.  The ORM otherwise has no way to know what changes
 should result in what deletions where.


Will add the 'viewonly' option as suggested and see if that helps, but as
mentioned above, if you can suggest
a better model for this case, please let me know!
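
For anyone finding this later, a cut-down, self-contained sketch of the change (simplified to a two-table version rather than the full three-way model above): viewonly=True keeps the relationship readable while removing it from the flush entirely, so deleting a Project no longer emits competing DELETEs against the association table.

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class ProjectPackage(Base):  # the association row, mapped directly
    __tablename__ = "project_package"
    project_id = Column(Integer,
                        ForeignKey("projects.id", ondelete="cascade"),
                        primary_key=True)
    pkg_def_id = Column(Integer, primary_key=True)
    app_id = Column(Integer, primary_key=True)

class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    # viewonly=True: the ORM reads through this relationship but never
    # flushes changes through it; cleanup is left to ON DELETE CASCADE
    # or explicit ProjectPackage operations.
    packages = relationship(ProjectPackage, viewonly=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Project(id=1),
                 ProjectPackage(project_id=1, pkg_def_id=1, app_id=1)])
session.commit()

project = session.query(Project).one()
print(len(project.packages))   # 1
session.delete(project)
session.commit()               # the ORM deletes only the projects row
```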

-- 
- Ken Lareau



Re: [sqlalchemy] Multiple delete attempts of same rows with a star schema configuration

2014-09-04 Thread Ken Lareau
On Thu, Sep 4, 2014 at 4:35 PM, Michael Bayer mike...@zzzcomputing.com
wrote:


 On Sep 4, 2014, at 6:58 PM, Ken Lareau klar...@tagged.com wrote:

 On Thu, Sep 4, 2014 at 3:30 PM, Michael Bayer mike...@zzzcomputing.com
 wrote:


 On Sep 4, 2014, at 5:37 PM, Ken Lareau klar...@tagged.com wrote:

 So I have a few tables as follows (abbreviated for unnecessary columns):

 class Project(Base):
 __tablename__ = 'projects'

 id = Column(u'project_id', INTEGER(), primary_key=True)

 applications = relationship(
 'AppDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 lazy=False,
 )

 package_definitions = relationship(
 'PackageDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class PackageDefinition(Base):
 __tablename__ = 'package_definitions'

 id = Column(u'pkg_def_id', INTEGER(), primary_key=True)

 applications = relationship(
 'AppDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )

 projects = relationship(
 'Project',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class AppDefinition(Base):
 __tablename__ = 'app_definitions'

 id = Column(u'AppID', SMALLINT(display_width=2), primary_key=True)

 package_definitions = relationship(
 'PackageDefinition',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )

 projects = relationship(
 'Project',
 secondary=lambda: Base.metadata.tables['project_package'],
 passive_deletes=True,
 )


 class ProjectPackage(Base):
 __tablename__ = 'project_package'

 project_id = Column(
 INTEGER(),
 ForeignKey('projects.project_id', ondelete='cascade'),
 primary_key=True
 )
 pkg_def_id = Column(
 INTEGER(),
 ForeignKey('package_definitions.pkg_def_id', ondelete='cascade'),
 primary_key=True
 )
 app_id = Column(
 SMALLINT(display_width=6),
 ForeignKey('app_definitions.AppID', ondelete='cascade'),
 primary_key=True
 )

 app_definition = relationship('AppDefinition', uselist=False)
 package_definition = relationship('PackageDefinition', uselist=False)
 project = relationship('Project', uselist=False)


 this model has serious issues, though it can be made to hobble along with
 some small changes.  The “three way association table” is not a great
 pattern in the first place.  Additionally, using that same table as
 “secondary” in all those places is clearly the issue here, and they don’t
 even have back_populates between them, which are used in many-to-many in
 order to try avoid “double deleting”.   But that wouldn’t solve the issue
 here anyway.


 As I feared, I figured this model would be frowned upon. :)  Note that
 this model (or the underlying database)
 is not set in stone; if you have suggestions on a better way to link the
 three main tables, I would love to hear
 it (and we did add back_populates (somehow missed them during the
 restructuring of the models), but as
 you said, didn't help in this case).


 A row in the 3-way table means that for this project_id, and pkg_def_id,
 and app_id, that’s a single allowed value, so that’s star schema, OK
 (though they call star schema “denormalized” - i don’t exactly understand
 why, if you make all the columns part of the primary key).  But then you
 have functions that give you lists of things based on two-column slices of
 that table, in order to relate different dimensions to each other;
 projects-app_definitions, means you can have Project (X) -
 AppDefnition(Y) any number of times, for every value of pkg_def_id.I
 guess that’s the awkward part here.   Those aren’t really collection-based
 “relationships” so much as analytical queries across dimension tables.
 They are valid queries but aren’t really “object/relational” - when we map
 a class to another with a many-to-many collection in between, that means
 something more specific.

 So maybe its just, object relational mapping assumes certain patterns are
 mappable to objects and collections, and 3-way associations aren’t quite in
 that pattern :).


Hmm, understood, though the main reason for these relationships was to
avoid having to make
DB queries within the main application code (pushing them out into the
models) and simplify the
logic; if you had seen what our application code looked like before, you
might shudder even more
than over what we're doing here. :)

Is there a way to essentially allow something like 'for app in
project.applications:' without having
to make an explicit query to the DB first, but avoiding the awkwardness of
the current setup?

-- 
- Ken Lareau


Re: [sqlalchemy] Multiple delete attempts of same rows with a star schema configuration

2014-09-04 Thread Ken Lareau
On Thu, Sep 4, 2014 at 4:54 PM, Michael Bayer mike...@zzzcomputing.com
wrote:


 On Sep 4, 2014, at 7:50 PM, Ken Lareau klar...@tagged.com wrote:


 Is there a way to essentially allow something like 'for app in
 project.applications:' without having
 to make an explicit query to the DB first, but avoiding the awkwardness of
 the current setup?


 just use the viewonly=True and we can all go home :)  I’m not too upset
 about it


Got it... the change worked fine, thanks for all the help. :)

-- 
- Ken Lareau
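For readers following along, the viewonly=True fix can be sketched against a stripped-down, hypothetical version of the three-way association discussed in this thread; the table and column names below are illustrative, not the poster's actual schema:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

# Hypothetical three-way association row: project + application + package
# definition (the extra pkg_def_id key is what made this awkward to map).
class ProjectPackage(Base):
    __tablename__ = 'project_package'
    project_id = Column(Integer, ForeignKey('project.id'), primary_key=True)
    app_id = Column(Integer, ForeignKey('app_definition.id'), primary_key=True)
    pkg_def_id = Column(Integer, primary_key=True)

class AppDefinition(Base):
    __tablename__ = 'app_definition'
    id = Column(Integer, primary_key=True)

class Project(Base):
    __tablename__ = 'project'
    id = Column(Integer, primary_key=True)
    # viewonly=True makes this a read-only collection: the ORM will never
    # try to insert or delete association rows through it.
    applications = relationship(
        'AppDefinition',
        secondary='project_package',
        viewonly=True,
    )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

app = AppDefinition(id=1)
proj = Project(id=1)
session.add_all([app, proj,
                 ProjectPackage(project_id=1, app_id=1, pkg_def_id=7)])
session.commit()

# 'for app in project.applications:' now works without an explicit query
print([a.id for a in proj.applications])
```

Because the relationship is read-only, the unit of work never flushes changes through it, which is what was producing the multiple delete attempts.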

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Multiple tables and foreignkey constraints

2014-07-25 Thread Ken Roberts
This is probably violating some normal form, but after playing with it 
(Thanks Michael and Jonathan), this seems to work for what I'm using. 
Suggestions welcome on improving it.

#!/usr/bin/env python

from sqlalchemy import Column, ForeignKey, Integer, MetaData, Sequence, \
    String, create_engine
from sqlalchemy.orm import backref, joinedload, relationship, \
    scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.pool import NullPool

engine = create_engine(u"sqlite:///./test.sql", poolclass=NullPool)
metadata = MetaData(bind=engine)
session = scoped_session(sessionmaker(autoflush=True,
                                      autocommit=False,
                                      bind=engine))
Base = declarative_base(metadata=metadata)

# Table base class to assign table name based on class name and
# add id column
class CommonBase(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()
    id = Column(Integer, primary_key=True)

# Table class to keep track of manufacturers
class Manufacturer(CommonBase, Base):
    def __repr__(self):
        return "Manufacturer(name='%s')" % self.name

    name = Column(String(30))
    models = relationship("Model",
                          order_by="Model.id",
                          backref="manufacturer",
                          cascade="all, delete-orphan",
                          primaryjoin="Manufacturer.id==Model.manufacturer_id",
                          lazy='joined')

# Table class to keep track of models related to Manufacturer.
class Model(CommonBase, Base):
    def __repr__(self):
        return "Model(name='%s')" % self.name
    manufacturer_id = Column(Integer, ForeignKey("manufacturer.id"))
    name = Column(String(20))
    sources = relationship("Source",
                           order_by="Source.id",
                           backref="model",
                           cascade="all, delete-orphan",
                           primaryjoin="Model.id==Source.model_id",
                           lazy='joined')

# Table class to keep track of sources related to Model.
class Source(CommonBase, Base):
    def __repr__(self):
        return "Source(pjlink_name='%s', pjlink_code='%s', text='%s')" % \
            (self.pjlink_name, self.pjlink_code, self.text)
    model_id = Column(Integer, ForeignKey('model.id'))
    pjlink_name = Column(String(15))
    pjlink_code = Column(String(2))
    text = Column(String(30))

# Table class to keep track of installed projectors.
# Use a separate query to get manufacturer/model/sources rather than
# using a relationship (one-time at program start so minimal impact).
class Installed(CommonBase, Base):
    name = Column(String(20))
    location = Column(String(30))
    ip = Column(String(50))

# Class for projector instances.
class Projector(object):
    def __init__(self, name=None, location=None, ip=None):
        self.name = name
        self.location = location
        self.ip = ip
        # Following variables will be filled in after projector connected.
        self.make = None
        self.model = None
        # Following variable will be filled in after projector connected
        # and db queried.
        # List of pjlink dictionary items: [{name='', code='', text=''}, ...]
        self.sources = None

metadata.create_all()

p = session.query(Manufacturer, Model).\
    options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
    filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
    all()

# Only add the sample data if it isn't already in the database.
if len(p) == 0:
    m = Manufacturer(name='eiki')
    m.models = [Model(name='lc/xl200')]
    m.models[0].sources = [
        Source(pjlink_name='RGB1', pjlink_code='11', text='HDMI RGB'),
        Source(pjlink_name='RGB2', pjlink_code='12', text='VGA RGB')
    ]
    session.add(m)
    session.commit()

    p = session.query(Manufacturer, Model).\
        options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
        filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
        all()

m = p[0][0]

print m.name
print m.models[0].name
for i in m.models[0].sources:
    print "PJLink name: %s  code: %s  Text: %s" % \
        (i.pjlink_name, i.pjlink_code, i.text)





[sqlalchemy] Re: Multiple tables and foreignkey constraints

2014-07-25 Thread Ken Roberts
Thanks for the help Jonathan and Michael.

The below seems to work for what I'm doing. Let me know what normal forms 
I'm violating with it :)


from sqlalchemy import Column, ForeignKey, Integer, MetaData, Sequence, \
    String, create_engine
from sqlalchemy.orm import backref, joinedload, relationship, \
    scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.pool import NullPool

engine = create_engine(u"sqlite:///./test.sql", poolclass=NullPool)
metadata = MetaData(bind=engine)
session = scoped_session(sessionmaker(autoflush=True,
                                      autocommit=False,
                                      bind=engine))
Base = declarative_base(metadata=metadata)

# Table base class to assign table name based on class name and
# add id column
class CommonBase(object):
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()
    id = Column(Integer, primary_key=True)

# Table class to keep track of manufacturers
class Manufacturer(CommonBase, Base):
    def __repr__(self):
        return "Manufacturer(name='%s')" % self.name

    name = Column(String(30))
    models = relationship("Model",
                          order_by="Model.id",
                          backref="manufacturer",
                          cascade="all, delete-orphan",
                          primaryjoin="Manufacturer.id==Model.manufacturer_id",
                          lazy='joined')

# table class to keep track of models related to Manufacturer.
class Model(CommonBase, Base):
    def __repr__(self):
        return "Model(name='%s')" % self.name
    manufacturer_id = Column(Integer, ForeignKey("manufacturer.id"))
    name = Column(String(20))
    sources = relationship("Source",
                           order_by="Source.id",
                           backref="model",
                           cascade="all, delete-orphan",
                           primaryjoin="Model.id==Source.model_id",
                           lazy='joined')

# Table class to keep track of sources related to Model.
class Source(CommonBase, Base):
    def __repr__(self):
        return "Source(pjlink_name='%s', pjlink_code='%s', text='%s')" % \
            (self.pjlink_name, self.pjlink_code, self.text)
    model_id = Column(Integer, ForeignKey('model.id'))
    pjlink_name = Column(String(15))
    pjlink_code = Column(String(2))
    text = Column(String(30))

# Table class to keep track of installed projectors.
# Use a separate query to get manufacturer/model/sources rather than
# using a relationship (one-time at program start so minimal impact).
class Installed(CommonBase, Base):
    name = Column(String(20))
    location = Column(String(30))
    ip = Column(String(50))

# Class for projector instances.
class Projector(object):
    def __init__(self, name=None, location=None, ip=None):
        self.name = name
        self.location = location
        self.ip = ip
        # Following variables will be filled in after projector connected.
        self.make = None
        self.model = None
        # Following variable will be filled in after projector connected
        # and db queried.
        # List of pjlink dictionary items: [{name='', code='', text=''}, ...]
        self.sources = None

metadata.create_all()

p = session.query(Manufacturer, Model).\
    options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
    filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
    all()

# Only add the sample data if it isn't already in the database.
if len(p) == 0:
    m = Manufacturer(name='eiki')
    m.models = [Model(name='lc/xl200')]
    m.models[0].sources = [
        Source(pjlink_name='RGB1', pjlink_code='11', text='HDMI RGB'),
        Source(pjlink_name='RGB2', pjlink_code='12', text='VGA RGB')
    ]
    session.add(m)
    session.commit()

    p = session.query(Manufacturer, Model).\
        options(joinedload(Manufacturer.models), joinedload(Model.sources)).\
        filter(Manufacturer.name == 'eiki', Model.name == 'lc/xl200').\
        all()

m = p[0][0]

print m.name
print m.models[0].name
for i in m.models[0].sources:
    print "PJLink name: %s  code: %s  Text: %s" % \
        (i.pjlink_name, i.pjlink_code, i.text)






Re: [sqlalchemy] Multiple tables and foreignkey constraints

2014-07-22 Thread Ken Roberts
If I'm understanding correctly, something like the following:

# Keep track of manufacturer names
manufacturers_table = Table(u'manufacturer', metadata,
    Column(u'id', Integer, primary_key=True),
    Column(u'name', String(20))
)

# Keep track of model names
models_table = Table(u'model', metadata,
    Column(u'id', Integer, primary_key=True),
    Column(u'name', String(20))
)

# Keep track of inputs available to models
sources_table = Table(u'source', metadata,
    Column(u'id', Integer, primary_key=True),
    Column(u'pjlink', String(2)),
    Column(u'text', String(20))
)

# Foreign key table for mfgr/model/sources
projector_table = Table(u'projector', metadata,
    Column(u'manufacturer_id', Integer, ForeignKey(u'manufacturer.id'),
           primary_key=True),
    Column(u'model_id', Integer, ForeignKey(u'model.id'), primary_key=True),
    Column(u'source_id', Integer, ForeignKey(u'source.id'), primary_key=True),
)

# Persistent storage for installed projectors
# model_id would be an index for model_table.id
installed_table = Table(u'installed', metadata,
    Column(u'id', Integer, primary_key=True),
    Column(u'name', String(20)),
    Column(u'model_id', Integer)
)


Or am I still off in left field?



Re: [sqlalchemy] Multiple tables and foreignkey constraints

2014-07-21 Thread Ken Roberts


On Monday, July 21, 2014 8:38:54 AM UTC-7, Jonathan Vanasco wrote:

 I agree with what Mike said, but I would just suggest renaming 
 projector_table to something like purchased_table or inventory_table. 
  Everything in models is a different model of a projector, so the table 
 names are a bit confusing. 
 snip


Short answer:

manufacturer/model tables are projectors, and sources are the video inputs 
available for that particular model. The sources table is going to be used 
to keep track of what the manufacturer has listed in their documentation 
for selecting a particular video source. The projector_table is persistent 
storage for the projectors that the user can control via networking. 
Basic information is some text columns that the end-user can assign for 
their own (short) notes, but the model_id field is used so that the text 
can be matched with the projector that they have control over.

Longer answer.

Column 1 is PJLink code for selecting that input, the rest of the line is 
text.

Example for Eiki model LC/XL200 projector: (Text is from Eiki webpage 
control)
  11 RGB (pc analog)
  12 RGB (Scart)
  13 RGB (PC Digital)


Example Hitachi CP-X2514: (Text is from PJLink user manual from Hitachi)

  11 Computer IN 1
  12 Computer IN 2
  13 Component


As noted, different manufacturers may have different text for the same 
inputs, so the sources table is just keeping track of the text for the 
input source - hopefully text that the end-user does not find too confusing 
:)

This is not an inventory program. The part I'm looking to add is basic 
projector control to a program that will send some output via a second 
computer video output to a projector. One point is that there may be 
multiple computers connected to a single projector via multiple inputs 
(sources). I was thinking of having manufacturer/model/source tables so the 
end-user doesn't have to re-enter the information if they just happen to 
have multiple projectors with the same model - as a side possibility, also 
having an xml file with this information available that can be imported 
into those tables.

When the end-user adds a projector to the program, they can select the 
projector by manufacturer (Eiki) - model (LC/XL200) - then the sources 
(video inputs) would be added to the projector class so they can then 
select the video source to display. Since using PJLink codes would be 
confusing (What does input 11 select?), the text pulled from the sources 
table would then let them use the familiar text (documented in their 
projector user manual - like RGB (pc analog) ) to select the source.

An example xml file for importing would look something like:

  <projector manufacturer='Eiki'>
    <model name='LC/XL200'>
      <source pjlink='11'>RGB (PC analog)</source>
      <source pjlink='12'>RGB (Scart)</source>
      <source pjlink='13'>RGB (PC digital)</source>
    </model>
    <model name='...'>
       ...
    </model>
  </projector>

With the importing in mind, there still has to be the option for the 
end-user to manually add an entry for projector manufacturer/model/sources 
(technical note, with PJLink, I can retrieve the manufacturer name, model 
name, and the available sources via the network, just not the text for the 
sources).

With that, then Jonathan's suggestion of removing the foreign key on the 
sources table and creating a 4th table that keeps track of the 
model-sources constraints.

As for the projectors_table, instead of a foreign key, just use an integer 
column as an index into the models_table - would that be the suggestion?

projector_table = Table(u'projector', metadata,
Column(u'id', Integer, primary_key=True),
Column(u'model_2_source_id', Integer)
)

The way things are looking, it looks like I'm going to have multiple 
selects. Not an issue, since they will only be used at program startup, not 
during normal operations.
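The "4th table" idea discussed above can be sketched as a plain association table; the table and column names here are illustrative, not a definitive design:

```python
from sqlalchemy import Table, Column, Integer, ForeignKey, MetaData, create_engine

metadata = MetaData()

# Existing tables, reduced to just their keys for the sketch
model_table = Table('model', metadata,
                    Column('id', Integer, primary_key=True))
source_table = Table('source', metadata,
                     Column('id', Integer, primary_key=True))

# The suggested 4th table: each row records that a given source is
# valid for a given model, replacing the ForeignKey on the sources table.
model_source_table = Table(
    'model_source', metadata,
    Column('model_id', Integer, ForeignKey('model.id'), primary_key=True),
    Column('source_id', Integer, ForeignKey('source.id'), primary_key=True),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)
```

With this layout a single source row (e.g. PJLink code '11') can be shared by any number of models, and the model-sources lookup becomes one join at program startup.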



Re: [sqlalchemy] Multiple tables and foreignkey constraints

2014-07-21 Thread Ken Roberts
Forgot to mention that during typical operation, the only time the database 
will be accessed would be during down time (add/delete) or program startup 
(retrieve list of projectors to control) - not during a presentation.



[sqlalchemy] (Semi-)automated way to adjust constraint names via Alembic?

2014-07-02 Thread Ken Lareau
So, in my ongoing quest to make my team's operations database far more
sane than it currently is, I want to fix all the constraint naming in the
database
to match the naming convention setting I have added to my SQLAlchemy
configuration for the database.  I could of course go through each table and
determine each by hand, but I was wondering if there was a less manual
(and error-prone) way to approach this, possibly via the autogeneration
feature?  In case it matters, the database server is MySQL.

-- 
- Ken Lareau
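The naming-convention setting referred to above can be sketched as follows; the convention dict is a hypothetical example (the %(...)s tokens are MetaData naming_convention keys), and the generated names show up in the emitted DDL:

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey
from sqlalchemy.schema import CreateTable

# Hypothetical convention - adjust the patterns to taste.
convention = {
    "ix": "ix_%(column_0_label)s",
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s",
}
metadata = MetaData(naming_convention=convention)

parent = Table('parent', metadata,
               Column('id', Integer, primary_key=True))
child = Table(
    'child', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer, ForeignKey('parent.id')),
)

# Constraint names in the DDL now follow the convention,
# e.g. pk_child and fk_child_parent_id_parent.
ddl = str(CreateTable(child))
print(ddl)
```

Renaming the existing MySQL constraints to match is then a matter of comparing these generated names against what reflection reports.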



Re: [sqlalchemy] (Semi-)automated way to adjust constraint names via Alembic?

2014-07-02 Thread Ken Lareau
On Wed, Jul 2, 2014 at 6:44 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 Well you can get at the names that were used in the DB (using Inspector,
 or reflection) as well as the names that are in your metadata ([constraint
 for table in metadata.tables.values() for constraint in
 table.constraints]), but as far as matching them up I'm not sure, it
 depends on what patterns you can find in the existing DB that you can
 use.  Maybe you can write a script that guesses, then it spits out a list
 of oldname-newname, then you can manually correct it.


Heh, very few patterns to be found, sadly.  I could easily create a tabular
set of data that would allow me to map names to the type of constraint and
hopefully that would be enough for me to run through them all; the biggest
issues I suspect will be the multi-column constraints along with the
primary keys (since it seems that MySQL uses 'PRIMARY' for the name of the
constraint and I'm not even sure that's changeable (been putting together a
test database to try it out on)).  But I'll see what I can hack together. :)

- Ken
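Mike's two-sided suggestion - the Inspector for the database-side names, a comprehension over metadata.tables for the metadata-side constraints - can be sketched with a throwaway SQLite schema (table names here are illustrative):

```python
from sqlalchemy import (MetaData, Table, Column, Integer, ForeignKey,
                        create_engine, inspect)

engine = create_engine('sqlite://')
metadata = MetaData()
Table('parent', metadata, Column('id', Integer, primary_key=True))
Table('child', metadata,
      Column('id', Integer, primary_key=True),
      Column('parent_id', Integer, ForeignKey('parent.id')))
metadata.create_all(engine)

# Database side: reflect constraint info per table with the Inspector
insp = inspect(engine)
db_fks = {t: insp.get_foreign_keys(t) for t in insp.get_table_names()}

# Metadata side: the constraints SQLAlchemy knows about (note the loop
# order - outer loop over tables, inner loop over each table's constraints)
meta_constraints = [
    constraint
    for table in metadata.tables.values()
    for constraint in table.constraints
]

print(db_fks['child'][0]['referred_table'])
print(len(meta_constraints))
```

A migration script could diff the two lists and emit the oldname-newname pairs Mike describes for manual review.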





 On 7/2/14, 6:08 PM, Ken Lareau wrote:

   So, in my ongoing quest to make my team's operations database far more
 sane than it currently is, I want to fix all the constraint naming in the
 database
  to match the naming convention setting I have added to my SQLAlchemy
  configuration for the database.  I could of course go through each table
 and
  determine each by hand, but I was wondering if there was a less manual
  (and error-prone) way to approach this, possibly via the autogeneration
 feature?  In case it matters, the database server is MySQL.

 --
 - Ken Lareau





-- 
- Ken Lareau



[sqlalchemy] Regarding the use of reciprocal relationships

2014-07-01 Thread Ken Lareau
Related to one of my recent posted threads here, I'm recalling a certain
conversation at PyCon where I was mentioning how a friend would define
a many-to-many relationship by defining a relationship on both declarative
classes involved, each pointing to the other, and the look of abject horror
I received gave a good indication that this is very often not a good thing.
However, I've run into a situation where this has proven useful:

class Deployment(Base):
__tablename__ = 'deployments'

id = Column(u'DeploymentID', INTEGER(), primary_key=True)
package_id = Column(
INTEGER(),
ForeignKey('packages.package_id', ondelete='cascade'),
nullable=False
)

package = relationship('Package')


class Package(Base):
__tablename__ = 'packages'

id = Column(u'package_id', INTEGER(), primary_key=True)
pkg_name = Column(String(length=255), nullable=False)
version = Column(String(length=63), nullable=False)
revision = Column(String(length=63), nullable=False)

deployments = relationship('Deployment')

In this case, most of the time I'm looking to determine which deployments
a given package belongs to, but there are times when I have a given
deployment and am curious as to what package(s) are related to it, and
unless I'm misunderstanding something (which I won't rule out I could
be), a backref won't easily help in this instance.

Of course, one of the gotchas in using this (and we did hit it in one of
our queries) is if not careful, one can trigger a nice 'maximum recursion
depth exceeded' exception (in our particular case, a 'joinedload' within
the query was able to resolve that issue), and I'm sure there are other
potential gotchas, so I guess this leads back to the main question at hand:

Are there times when using reciprocal relationships is okay, and are
there certain things that should be done to mitigate potential issues that
can be caused by doing so, or are there better ways to accomplish the
same thing?

-- 
- Ken Lareau



Re: [sqlalchemy] Regarding the use of reciprocal relationships

2014-07-01 Thread Ken Lareau
On Tue, Jul 1, 2014 at 2:17 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 7/1/14, 4:54 PM, Ken Lareau wrote:

  Related to one of my recent posted threads here, I'm recalling a certain
 conversation at PyCon where I was mentioning how a friend would define
 a many-to-many relationship by defining a relationship on both declarative
 classes involved, each pointing to the other, and the look of abject horror
  I received gave a good indication that this is very often not a good
 thing.
  However, I've run into a situation where this has proven useful:

 class Deployment(Base):
 __tablename__ = 'deployments'

 id = Column(u'DeploymentID', INTEGER(), primary_key=True)
 package_id = Column(
 INTEGER(),
 ForeignKey('packages.package_id', ondelete='cascade'),
 nullable=False
 )

 package = relationship('Package')


 class Package(Base):
 __tablename__ = 'packages'

 id = Column(u'package_id', INTEGER(), primary_key=True)
 pkg_name = Column(String(length=255), nullable=False)
 version = Column(String(length=63), nullable=False)
 revision = Column(String(length=63), nullable=False)

 deployments = relationship('Deployment')

  In this case, most of the time I'm looking to determine which deployments
  a given package belongs to, but there are times when I have a given
  deployment and am curious as to what package(s) are related to it, and
  unless I'm misunderstanding something (which I won't rule out I could
 be), a backref won't easily help in this instance.

 I'm not actually familiar with what you mean by a backref won't easily
 help.   Above, Package.deployments and Deployment.package refer to the
 same linkage between the two classes, just in different directions.   A
 backref above is equivalent to what you have, plus:

 class Deployment(Base):
# ...

   package = relationship(Package, back_populates='deployments')

 class Package(Base):
   # ...

   deployments = relationship(Deployment, back_populates='package')

 the importance of back_populates (or backref, essentially a shortcut to
 the same thing) is mostly in how the collection or attribute responds to
 in-Python changes and to a lesser (but sometimes important) extent how the
 attributes are treated during the flush process.   The two relationships
 don't have to be related to each other at the Python level like this, but
 there's usually no reason to omit this information as it only allows the
 ORM to better keep the two sides in sync.


Oy, leave it to me to misinterpret once again (and I have read that part
of the docs before, so I should have known better).  For some reason
I had in my head that the backref could only be used from the class that
the relationship was defined on. :-/





   Of course, one of the gotchas in using this (and we did hit it in one of
  our queries) is if not careful, one can trigger a nice 'maximum recursion
   depth exceeded' exception (in our particular case, a 'joinedload' within
   the query was able to resolve that issue) and I'm sure there are other
  potential gotchas, so I guess this leads back to the main question at hand:

 I don't doubt that this can happen but I'm not familiar at the moment what
 the nature of this recursion issue would be.


Talking with my coworker again, I believe I may have misspoke on the
'joinedload' solving the issue (we used that for performance improvements
I think), but it was while we were trying things out that we caused the
recursion issue, and sadly neither of us can remember exactly what
triggered that!  So please forget that part; my memory is terrible as of
late. *smile*


   Are there times when using reciprocal relationships is okay, and are
 there certain things that should be done to mitigate potential issues that
  can be caused by doing so, or are there better ways to accomplish the
 same thing?


 It's not clear to me what the thing is that you want to accomplish
 here.  If it's just, you want to set up the two relationships as explicit
 code for readability, that's great, use back_populates.   This is probably
 how apps should be doing it anyway, in the early SQLAlchemy days there was
 a lot of pressure to not require too much boilerplate, hence backref.
 These days, the community has moved well past the whole notion of super
 minimal declaration / magic == good, thankfully.


We've gone ahead and utilized the 'back_populates' for this case, and
I thank you for the assistance and clarification here!

-- 
- Ken Lareau
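The back_populates behavior described in this thread can be demonstrated with a minimal sketch of the Package/Deployment mapping (trimmed to the columns needed; the in-Python synchronization happens before any flush):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Package(Base):
    __tablename__ = 'packages'
    id = Column(Integer, primary_key=True)
    pkg_name = Column(String(255))
    deployments = relationship('Deployment', back_populates='package')

class Deployment(Base):
    __tablename__ = 'deployments'
    id = Column(Integer, primary_key=True)
    package_id = Column(Integer, ForeignKey('packages.id'))
    package = relationship('Package', back_populates='deployments')

pkg = Package(pkg_name='demo-package')
dep = Deployment(package=pkg)

# Setting Deployment.package populated Package.deployments automatically,
# purely in Python - no session or flush involved.
print(dep in pkg.deployments)
```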



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-07-01 Thread Ken Lareau
On Thu, Jun 26, 2014 at 7:47 PM, Ken Lareau klar...@tagged.com wrote:

 On Jun 26, 2014 7:40 PM, Mike Bayer mike...@zzzcomputing.com wrote:
 
  right, so a few emails ago I said:
 
you need to put .label('environment') on that column before it finds
 its way into subq.  I dont have the mappings here to review.
 
  here's that:
 
 
  @environment.expression
  def environment(cls):
  return select(
  [Environment.environment]
  ).where(
  Environment.id == cls.environment_id
  ).correlate(cls).label('environment')
 
 Aha... thanks.  Now I'm afraid to ask if you looked at the after file,
 and if so, considered it insane. :)

 - Ken

 
 
 
 
 
  On 6/26/14, 9:50 PM, Ken Lareau wrote:
 
  Done, new file attached (this gives the same error message as the one I
 showed initially, at least on my system).
 
  - Ken

Okay, trying the updated code worked with queries, but apparently not
with an insert.  Here's the basic code involved:

class AppDeployment(Base):
__tablename__ = 'app_deployments'

id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)
app_id = Column(
u'AppID',
SMALLINT(display_width=6),
ForeignKey('app_definitions.AppID', ondelete='cascade'),
nullable=False
)
user = Column(String(length=32), nullable=False)
status = Column(
Enum(
'complete',
'incomplete',
'inprogress',
'invalidated',
'validated',
),
nullable=False
)
environment_id = Column(
u'environment_id',
INTEGER(),
ForeignKey('environments.environmentID', ondelete='cascade'),
nullable=False
)
realized = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)
environment_obj = relationship('Environment')

@hybrid_property
def environment(self):
return self.environment_obj.environment

@environment.expression
def environment(cls):
return select([Environment.environment]).\
where(Environment.id == cls.environment_id).correlate(cls).\
label('environment')

and

def _calculate_environment_id(environment):
return (Session.query(Environment.id)
   .filter_by(environment=environment)
   .one())[0]

def add_app_deployment(dep_id, app_id, user, status, environment):
    """Add a tier deployment for a given deployment ID"""

environment_id = _calculate_environment_id(environment)

app_dep = AppDeployment(
deployment_id=dep_id,
app_id=app_id,
user=user,
status=status,
environment_id=environment_id,
realized=func.current_timestamp()
)

# Commit to DB immediately
Session.add(app_dep)
Session.commit()

return app_dep

An actual call to add_app_deployment() results in this exception:

File "util.pyx", line 91, in oursql._do_warnings_query
(oursqlx/oursql.c:3969)
sqlalchemy.exc.DBAPIError: (CollatedWarningsError) (None, 'query caused
warnings', [(<class 'oursql.Warning'>, (u"Field 'environment' doesn't have
a default value", 1364L))]) u'INSERT INTO app_deployments (`DeploymentID`,
`AppID`, user, status, environment_id, realized) VALUES (?, ?, ?, ?, ?,
CURRENT_TIMESTAMP)' (4162L, 405L, 'klareau', 'inprogress', 1L)

I'm guessing something's still not quite right with the 'environment'
hybrid property
in the class? :-/


-- 
- Ken Lareau
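For reference, the hybrid pattern from this thread reduces to the runnable sketch below. Note it uses the modern scalar_subquery() spelling, where the 0.9-era code above calls .label() directly on the select(); column names are trimmed to the essentials:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String,
                        create_engine, select)
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Environment(Base):
    __tablename__ = 'environments'
    id = Column(Integer, primary_key=True)
    environment = Column(String(32))

class AppDeployment(Base):
    __tablename__ = 'app_deployments'
    id = Column(Integer, primary_key=True)
    environment_id = Column(Integer, ForeignKey('environments.id'))
    environment_obj = relationship('Environment')

    @hybrid_property
    def environment(self):
        # Instance level: follow the relationship in Python
        return self.environment_obj.environment

    @environment.expression
    def environment(cls):
        # Class level: a correlated scalar subquery, labelled so it can
        # appear in a SELECT list under the name 'environment'
        return (
            select(Environment.environment)
            .where(Environment.id == cls.environment_id)
            .correlate(cls)
            .scalar_subquery()
            .label('environment')
        )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add(Environment(id=1, environment='production'))
s.add(AppDeployment(id=1, environment_id=1))
s.commit()

# The hybrid works both in SQL expressions...
n = s.query(AppDeployment).\
    filter(AppDeployment.environment == 'production').count()
# ...and as plain Python attribute access
dep = s.query(AppDeployment).first()
print(n, dep.environment)
```

As the follow-up message notes, the original INSERT failure came from the live table still having a real 'environment' column, not from the hybrid itself.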



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-07-01 Thread Ken Lareau
**SIGH** Please ignore last message... I had forgotten to actually
update the database schema itself. :(  Pardon me while I go shoot
myself...

- Ken


On Tue, Jul 1, 2014 at 4:19 PM, Ken Lareau klar...@tagged.com wrote:

 On Thu, Jun 26, 2014 at 7:47 PM, Ken Lareau klar...@tagged.com wrote:

 On Jun 26, 2014 7:40 PM, Mike Bayer mike...@zzzcomputing.com wrote:
 
  right, so a few emails ago I said:
 
you need to put .label('environment') on that column before it finds
 its way into subq.  I dont have the mappings here to review.
 
  here's that:
 
 
  @environment.expression
  def environment(cls):
  return select(
  [Environment.environment]
  ).where(
  Environment.id == cls.environment_id
  ).correlate(cls).label('environment')
 
 Aha... thanks.  Now I'm afraid to ask if you looked at the after file,
 and if so, considered it insane. :)

 - Ken

 
 
 
 
 
  On 6/26/14, 9:50 PM, Ken Lareau wrote:
 
  Done, new file attached (this gives the same error message as the one
 I showed initially, at least on my system).
 
  - Ken
 
 

 Okay, trying the updated code worked with queries, but apparently not
 with an insert.  Here's the basic code involved:

 class AppDeployment(Base):
 __tablename__ = 'app_deployments'

 id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
 deployment_id = Column(
 u'DeploymentID',
 INTEGER(),
 ForeignKey('deployments.DeploymentID', ondelete='cascade'),
 nullable=False
 )
 app_id = Column(
 u'AppID',
 SMALLINT(display_width=6),
 ForeignKey('app_definitions.AppID', ondelete='cascade'),
 nullable=False
 )
 user = Column(String(length=32), nullable=False)
 status = Column(
 Enum(
 'complete',
 'incomplete',
 'inprogress',
 'invalidated',
 'validated',
 ),
 nullable=False
 )
 environment_id = Column(
 u'environment_id',
 INTEGER(),
 ForeignKey('environments.environmentID', ondelete='cascade'),
 nullable=False
 )
 realized = Column(
 TIMESTAMP(),
 nullable=False,
 server_default=func.current_timestamp()

 )
 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

 @environment.expression
 def environment(cls):
 return select([Environment.environment]).\
 where(Environment.id ==
 cls.environment_id).correlate(cls).\
 label('environment')

 and

 def _calculate_environment_id(environment):
 return (Session.query(Environment.id)
.filter_by(environment=environment)
.one())[0]

 def add_app_deployment(dep_id, app_id, user, status, environment):
 """Add a tier deployment for a given deployment ID"""

 environment_id = _calculate_environment_id(environment)

 app_dep = AppDeployment(
 deployment_id=dep_id,
 app_id=app_id,
 user=user,
 status=status,
 environment_id=environment_id,
 realized=func.current_timestamp()
 )

 # Commit to DB immediately
 Session.add(app_dep)
 Session.commit()

 return app_dep

 An actual call to add_app_deployment() results in this exception:

 File "util.pyx", line 91, in oursql._do_warnings_query
 (oursqlx/oursql.c:3969)
 sqlalchemy.exc.DBAPIError: (CollatedWarningsError) (None, 'query caused
 warnings', [(<class 'oursql.Warning'>, (u"Field 'environment' doesn't have
 a default value", 1364L))]) u'INSERT INTO app_deployments (`DeploymentID`,
 `AppID`, user, status, environment_id, realized) VALUES (?, ?, ?, ?, ?,
 CURRENT_TIMESTAMP)' (4162L, 405L, 'klareau', 'inprogress', 1L)

 I'm guessing something's still not quite right with the 'environment'
 hybrid property
 in the class? :-/


 --
 - Ken Lareau




-- 
- Ken Lareau


Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Ken Lareau
On Thu, Jun 26, 2014 at 5:57 AM, Mike Bayer mike...@zzzcomputing.com
wrote:


 On 6/25/14, 8:06 PM, Ken Lareau wrote:

 On Wed, Jun 25, 2014 at 6:28 AM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/25/14, 2:26 AM, Ken Lareau wrote:

 On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
  if apptier:
  subq = (
  Session.query(
  Package.pkg_name,
  Package.version,
  Package.revision,
  AppDefinition.app_type,
  AppDeployment.environment
  ).join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(Package.pkg_name == package_name)
   .filter(AppDeployment.environment == env)
   .filter(AppDeployment.status != 'invalidated'))
 
  [...]
 
  # The actual column name must be used in the subquery
  # usage below; DB itself should be corrected
  versions = (Session.query(subq.c.appType,
  subq.c.version,
  subq.c.revision)
  .group_by(subq.c.appType, subq.c.environment)
  .all())
 
  The parts in the subquery work fine, but the access of the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py",
  line 234, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
    File
 "/home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py",
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment
 
  This is not completely surprising, but I'm uncertain as the best way
  to fix this... help? :)
  nothing is obviously wrong, assuming you are calling subquery() on
 subq at the end.  It will be a select() construct which will have a
 column called .environment on the .c. collection because
 AppDeployment.environment is present.   Can't say why this attribute is
 not here without full details.


 Oops there were some additional filters I left out, but this is added
 before the final query:

  subq = (subq.order_by(AppDeployment.realized.desc())
 .subquery(name='t_ordered'))

  Not sure why it isn't working, but today my coworker and I massively
 rewrote one of the other methods to avoid this same issue, so maybe
 we should try the same for this one... though it would be nice to know
 what I goofed here, but not sure what additional information I can
 give that would help offhand...


  what are the names that are actually present on subq.c ?   print
 list(subq.c) should give an indication.


 Adding that in to the method gave this:

 >>> deploy.find_deployed_version('tdstest', 'development', version='9',
 apptier=True)
 [Column('pkg_name', String(length=255), table=<t_ordered>,
 nullable=False), Column('version', String(length=63), table=<t_ordered>,
 nullable=False), Column('revision', String(length=63), table=<t_ordered>,
 nullable=False), Column('appType', String(length=100), table=<t_ordered>,
 nullable=False), <sqlalchemy.sql.elements.ColumnClause at 0x1d20e10;
 %(30521360 anon)s>]
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py", line
 237, in find_deployed_version
 .group_by(subq.c.appType, subq.c.environment)
   File
 "/home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py",
 line 174, in __getattr__
 raise AttributeError(key)
 AttributeError: environment

 you need to put .label('environment') on that column before it finds its
 way into subq.  I dont have the mappings here to review.

 for the next go around please just provide a one-file example of the
 mapping plus the query, thanks.

 Do you desire the example to be fully functional, or just have the relevant
mapping and query?  We ran into the same issue with another query and
ended up completely reworking the code to avoid the subquery, but it
involves quite a bit of various mappings and I'm uncertain if it's the proper
way to move forward.  I could try to post the before/after here, but if it
needs to be able to to run standalone it may take me a bit of time to get
it into a fully workable state (given how extensive it is).


-- 
- Ken Lareau



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Ken Lareau
On Thu, Jun 26, 2014 at 1:45 PM, Mike Bayer mike...@zzzcomputing.com
wrote:

  minimal mapping + the query against that mapping.

 Okay, for the other method I needed to change, I think I've put together
complete enough examples and have attached the files.  The 'before'
file is what we have currently (and doesn't quite do the right thing as
it's only using the environment_id), and the 'after' file is what seemingly
worked after my coworker and I tried a different route to avoid having
to use a subquery.  One thing to note is there's a reciprocal relationship
added on the Deployment class to the Package class which
initially caused a lovely stack overflow in Python until the joinedload
options were added; this may be a bad idea, but it was the only way
we knew how to get things working the way we needed. :)

I tried to add all the imports as well in case there were any questions
that might come from the code itself; hopefully I didn't miss any, but
if there's still confusion, let me know.

- Ken




 On 6/26/14, 4:11 PM, Ken Lareau wrote:

 On Thu, Jun 26, 2014 at 5:57 AM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/25/14, 8:06 PM, Ken Lareau wrote:

 On Wed, Jun 25, 2014 at 6:28 AM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/25/14, 2:26 AM, Ken Lareau wrote:

 On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
  if apptier:
  subq = (
  Session.query(
  Package.pkg_name,
  Package.version,
  Package.revision,
  AppDefinition.app_type,
  AppDeployment.environment
  ).join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(Package.pkg_name == package_name)
   .filter(AppDeployment.environment == env)
   .filter(AppDeployment.status != 'invalidated'))
 
  [...]
 
  # The actual column name must be used in the subquery
  # usage below; DB itself should be corrected
  versions = (Session.query(subq.c.appType,
  subq.c.version,
  subq.c.revision)
  .group_by(subq.c.appType, subq.c.environment)
  .all())
 
  The parts in the subquery work fine, but the access of the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py",
  line 234, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
    File
 "/home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py",
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment
 
  This is not completely surprising, but I'm uncertain as the best way
  to fix this... help? :)
  nothing is obviously wrong, assuming you are calling subquery() on
 subq at the end.  It will be a select() construct which will have a
 column called .environment on the .c. collection because
 AppDeployment.environment is present.   Can't say why this attribute is
 not here without full details.


 Oops there were some additional filters I left out, but this is added
 before the final query:

  subq = (subq.order_by(AppDeployment.realized.desc())
 .subquery(name='t_ordered'))

 Not sure why it isn't working, but today my coworker and I massively
 rewrote one of the other methods to avoid this same issue, so maybe
 we should try the same for this one... though it would be nice to know
 what I goofed here, but not sure what additional information I can
 give that would help offhand...


  what are the names that are actually present on subq.c ?   print
 list(subq.c) should give an indication.


 Adding that in to the method gave this:

  deploy.find_deployed_version('tdstest', 'development', version='9',
 apptier=True)
  [Column('pkg_name', String(length=255), table=<t_ordered>,
  nullable=False), Column('version', String(length=63), table=<t_ordered>,
  nullable=False), Column('revision', String(length=63), table=<t_ordered>,
  nullable=False), Column('appType', String(length=100), table=<t_ordered>,
  nullable=False), <sqlalchemy.sql.elements.ColumnClause at 0x1d20e10;
  %(30521360 anon)s>]
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py", line
  237, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
    File
  "/home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py",
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment

  you need to put .label('environment') on that column before it finds
 its way into subq.  I dont have the mappings here to review.

 for the next go around please just provide a one-file example

Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Ken Lareau
Done, new file attached (this gives the same error message as the one I
showed initially, at least on my system).

- Ken

from sqlalchemy import *
from sqlalchemy.dialects.mysql import INTEGER, SMALLINT, TIMESTAMP
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
import datetime

Base = declarative_base()

class AppDefinition(Base):
__tablename__ = 'app_definitions'

id = Column(u'AppID', SMALLINT(display_width=2), primary_key=True)
app_type = Column(u'appType', String(length=100), nullable=False)
status = Column(
Enum('active', 'inactive'),
nullable=False,
server_default='active'
)

app_deployments = relationship('AppDeployment')


class AppDeployment(Base):
__tablename__ = 'app_deployments'

id = Column(u'AppDeploymentID', INTEGER(), primary_key=True)
deployment_id = Column(
u'DeploymentID',
INTEGER(),
ForeignKey('deployments.DeploymentID', ondelete='cascade'),
nullable=False
)
app_id = Column(
u'AppID',
SMALLINT(display_width=6),
ForeignKey('app_definitions.AppID', ondelete='cascade'),
nullable=False
)
user = Column(String(length=32), nullable=False)
status = Column(
Enum(
'complete',
'incomplete',
'inprogress',
'invalidated',
'validated',
),
nullable=False
)
environment_id = Column(
u'environment_id',
INTEGER(),
ForeignKey('environments.environmentID', ondelete='cascade'),
nullable=False
)
realized = Column(
TIMESTAMP(),
nullable=False,
server_default=func.current_timestamp()
)

application = relationship('AppDefinition')
deployment = relationship('Deployment')
environment_obj = relationship('Environment')

@hybrid_property
def environment(self):
return self.environment_obj.environment

@environment.expression
def environment(cls):
return select(
[Environment.environment]
).where(
Environment.id == cls.environment_id
).correlate(cls).as_scalar()


class Deployment(Base):
__tablename__ = 'deployments'

id = Column(u'DeploymentID', INTEGER(), primary_key=True)
package_id = Column(
INTEGER(),
ForeignKey('packages.package_id', ondelete='cascade'),
nullable=False
)

app_deployments = relationship('AppDeployment')


class Environment(Base):
__tablename__ = 'environments'

id = Column(u'environmentID', INTEGER(), primary_key=True)
environment = Column(String(length=15), nullable=False, unique=True)


class Package(Base):
__tablename__ = 'packages'

id = Column(u'package_id', INTEGER(), primary_key=True)
pkg_name = Column(String(length=255), nullable=False)
version = Column(String(length=63), nullable=False)
revision = Column(String(length=63), nullable=False)

deployments = relationship('Deployment')

Session = scoped_session(sessionmaker())

subq = (Session.query(Package.pkg_name, Package.version,
  Package.revision, AppDefinition.app_type,
  AppDeployment.environment,
  AppDeployment.realized, AppDeployment.user,
  AppDeployment.status)
   .join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(AppDeployment.status != 'invalidated')
   .filter(AppDeployment.environment == 'some environment')
   .order_by(AppDeployment.realized.desc())
   .subquery(name='t_ordered'))

q = (Session.query(subq.c.pkg_name, subq.c.version, subq.c.revision,
  subq.c.appType, subq.c.environment,
  subq.c.realized, subq.c.user, subq.c.status)
   .group_by(subq.c.appType, subq.c.environment_id,
 subq.c.pkg_name)
   .having(and_(subq.c.status.like('%complete'),
func.unix_timestamp(subq.c.realized) >
func.unix_timestamp(func.now()) -
datetime.timedelta(seconds=60)))
   )

print q
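[Editorial aside: the `as_scalar()` form in the attached file produces an anonymous column, which is why `subq.c.environment` raises `AttributeError`; `label('environment')` is what gives the column a key, as the reply in this thread points out. A self-contained Core-only sketch of the difference (illustrative table names; `scalar_subquery()` is the modern spelling of the 0.9-era `as_scalar()`):]

```python
# Sketch: an unlabeled scalar subquery gets an anonymous key on the
# enclosing subquery's .c collection; label() gives it a real name.
import sqlalchemy as sa

meta = sa.MetaData()
envs = sa.Table(
    "environments", meta,
    sa.Column("environmentID", sa.Integer, primary_key=True),
    sa.Column("environment", sa.String(15)),
)
deps = sa.Table(
    "app_deployments", meta,
    sa.Column("AppDeploymentID", sa.Integer, primary_key=True),
    sa.Column("environment_id", sa.Integer),
)

# the correlated scalar subquery from the hybrid's @expression
scalar = (
    sa.select(envs.c.environment)
    .where(envs.c.environmentID == deps.c.environment_id)
    .scalar_subquery()  # spelled as_scalar() in 0.9-era SQLAlchemy
)

anon = sa.select(deps.c.AppDeploymentID, scalar).subquery("t1")
named = sa.select(
    deps.c.AppDeploymentID, scalar.label("environment")
).subquery("t2")

print("environment" in anon.c)   # False: anonymous key only
print("environment" in named.c)  # True:  label() named the column
```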



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Ken Lareau
On Jun 26, 2014 7:40 PM, Mike Bayer mike...@zzzcomputing.com wrote:

 right, so a few emails ago I said:

   you need to put .label('environment') on that column before it finds
its way into subq.  I dont have the mappings here to review.

 here's that:


 @environment.expression
 def environment(cls):
 return select(
 [Environment.environment]
 ).where(
 Environment.id == cls.environment_id
 ).correlate(cls).label('environment')

Aha... thanks.  Now I'm afraid to ask if you looked at the after file,
and if so, considered it insane. :)

- Ken






 On 6/26/14, 9:50 PM, Ken Lareau wrote:

 Done, new file attached (this gives the same error message as the one I
showed initially, at least on my system).

 - Ken




Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-25 Thread Ken Lareau
On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer mike...@zzzcomputing.com
wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
  if apptier:
  subq = (
  Session.query(
  Package.pkg_name,
  Package.version,
  Package.revision,
  AppDefinition.app_type,
  AppDeployment.environment
  ).join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(Package.pkg_name == package_name)
   .filter(AppDeployment.environment == env)
   .filter(AppDeployment.status != 'invalidated'))
 
  [...]
 
  # The actual column name must be used in the subquery
  # usage below; DB itself should be corrected
  versions = (Session.query(subq.c.appType,
  subq.c.version,
  subq.c.revision)
  .group_by(subq.c.appType, subq.c.environment)
  .all())
 
  The parts in the subquery work fine, but the access of the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py",
  line 234, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
    File
 "/home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py",
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment
 
  This is not completely surprising, but I'm uncertain as the best way
  to fix this... help? :)
 nothing is obviously wrong, assuming you are calling subquery() on
 subq at the end.  It will be a select() construct which will have a
 column called .environment on the .c. collection because
 AppDeployment.environment is present.   Can't say why this attribute is
 not here without full details.


Oops there were some additional filters I left out, but this is added
before the final query:

subq = (subq.order_by(AppDeployment.realized.desc())
.subquery(name='t_ordered'))

Not sure why it isn't working, but today my coworker and I massively
rewrote one of the other methods to avoid this same issue, so maybe
we should try the same for this one... though it would be nice to know
what I goofed here, but not sure what additional information I can
give that would help offhand...
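[Editorial aside: the debugging step suggested elsewhere in this thread, `print list(subq.c)`, can be sketched in isolation. The keys of a subquery's `.c` collection are exactly the labels of what was selected, so an unlabeled expression will not appear under the name you expect. Table and column names below are illustrative only:]

```python
# Sketch: inspecting the column keys a subquery actually exposes.
import sqlalchemy as sa

meta = sa.MetaData()
t = sa.Table(
    "packages", meta,
    sa.Column("appType", sa.String(100)),
    sa.Column("version", sa.String(63)),
)

subq = sa.select(t.c.appType, t.c.version.label("ver")).subquery("t_ordered")
print(subq.c.keys())  # ['appType', 'ver']
```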

-- 
- Ken Lareau



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-24 Thread Ken Lareau
Anyone?  This has me dead in the water and nothing in the documentation
has been helping.

- Ken


On Mon, Jun 23, 2014 at 5:09 PM, Ken Lareau klar...@tagged.com wrote:

 On Fri, Jun 20, 2014 at 3:46 PM, Ken Lareau klar...@tagged.com wrote:

 On Fri, Jun 20, 2014 at 1:23 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/20/14, 3:38 PM, Ken Lareau wrote:

  So in the ongoing improvement of one of our internal databases, we
 created
  a new table named 'environments' whose SQLA code looks something like
  this:

 class Environment(Base):
 __tablename__ = 'environments'

 id = Column(u'environmentID', INTEGER(), primary_key=True)
 environment = Column(String(length=15), nullable=False, unique=True)
 env = Column(String(length=12), nullable=False, unique=True)
 domain = Column(String(length=32), nullable=False, unique=True)
 prefix = Column(String(length=1), nullable=False)

  Two of our tables recently needed conversion to stop using their own
 local
  'environment' column to using this table.  The first part's been put
 in place
  (a new foreign key for 'environment_id'), but to prevent large swaths
 of code
  from needing changes, a thought of using a hybrid property might allow
 the
  change to be hidden (until the code using it had been rewritten at
 least).

  My naive attempt was the following (just the relevant snippet):

 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

  Unfortunately (and in hindsight for obvious reasons), this code
 doesn't work,

 what does "doesn't work" mean here?   This will work at the instance
 level.  At the query level, not so much, that's true, if you truly want no
 code to change you'd need to implement an @expression here that's a little
 inefficient, as it needs to do a correlated subq:


 Yes, the "doesn't work" was specifically related to an attempt to use it
 in
 a query, which of course failed miserably. :)


  class HasEnv(Base):
 __tablename__ = 'has_env'

 id = Column(INTEGER, primary_key=True)
 environment_id = Column(ForeignKey('environments.environmentID'))


 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

 @environment.expression
 def environment(cls):
 return select([Environment.environment]).\
 where(Environment.id ==
 cls.environment_id).correlate(cls).\
 as_scalar()


 s = Session()

 print s.query(HasEnv).filter(HasEnv.environment == 'some env')

 output:

 SELECT has_env.id AS has_env_id, has_env.environment_id AS
 has_env_environment_id
 FROM has_env
 WHERE (SELECT environments.environment
 FROM environments
 WHERE environments.environmentID = has_env.environment_id) = :param_1

 won't perform well from a SQL perspective but will do the job...



 This worked perfectly, thank you!  This is honestly a stop-gap measure
 to allow much of the code to be rewritten (after which it can be removed),
  and for what it's being used for at the moment, it won't be too bad
 regarding
 performance.
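[Editorial aside: if the per-row correlated subquery ever does become a real bottleneck, one alternative not discussed in the thread is to keep the hybrid for instance-level access but filter through an explicit join, which lets the database use the foreign key directly. A runnable sketch with trimmed-down stand-in models (SQLAlchemy 1.4+ spelling assumed):]

```python
# Sketch: filtering via a join on the FK instead of a correlated subquery.
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Environment(Base):
    __tablename__ = "environments"
    id = sa.Column("environmentID", sa.Integer, primary_key=True)
    environment = sa.Column(sa.String(15), unique=True)

class HasEnv(Base):
    __tablename__ = "has_env"
    id = sa.Column(sa.Integer, primary_key=True)
    environment_id = sa.Column(sa.ForeignKey("environments.environmentID"))
    environment_obj = relationship(Environment)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    dev = Environment(environment="development")
    session.add_all([dev, HasEnv(environment_obj=dev)])
    session.commit()

    # join on the FK instead of a per-row correlated subquery
    rows = (
        session.query(HasEnv)
        .join(Environment, HasEnv.environment_id == Environment.id)
        .filter(Environment.environment == "development")
        .all()
    )

print(len(rows))  # 1
```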



but a very brief conversation with someone on the #sqlalchemy
 channel on
  Freenode indicated there was no way to do this and all the relevant
 code
 must be reworked.  While it's only a few dozen places this occurs, I can
 see
 this coming up again in the future as further schema refactorings occur,
 so
  I turn to those with more expertise to find out if there is a way to
 accomplish
  what I desire, or if there's really no hope. :)  Any insight would be
 greatly
 appreciated.

 I don't know how to fix this issue with IRC and stackoverflow that
 people constantly are getting bad information.

 Heh, Sometimes I think I should know better about asking for help
 on IRC, but sometimes I get lucky.  In this case, I decided I might
 have better luck on the mailing list after the answer I got on IRC
 was very unsatisfactory. :)

 Thanks for the help!


 D'oh, seems like I spoke too soon; there's a specific instance where
 this fails due to some subquery stuff I'm doing:

 if apptier:
 subq = (
 Session.query(
 Package.pkg_name,
 Package.version,
 Package.revision,
 AppDefinition.app_type,
 AppDeployment.environment
 ).join(Deployment)
  .join(AppDeployment)
  .join(AppDefinition)
  .filter(Package.pkg_name == package_name)
  .filter(AppDeployment.environment == env)
  .filter(AppDeployment.status != 'invalidated'))

 [...]

 # The actual column name must be used in the subquery
 # usage below; DB itself should be corrected
 versions = (Session.query(subq.c.appType,
 subq.c.version,
 subq.c.revision)
 .group_by

Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-23 Thread Ken Lareau
On Fri, Jun 20, 2014 at 3:46 PM, Ken Lareau klar...@tagged.com wrote:

 On Fri, Jun 20, 2014 at 1:23 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 6/20/14, 3:38 PM, Ken Lareau wrote:

  So in the ongoing improvement of one of our internal databases, we
 created
  a new table named 'environments' whose SQLA code looks something like
  this:

 class Environment(Base):
 __tablename__ = 'environments'

 id = Column(u'environmentID', INTEGER(), primary_key=True)
 environment = Column(String(length=15), nullable=False, unique=True)
 env = Column(String(length=12), nullable=False, unique=True)
 domain = Column(String(length=32), nullable=False, unique=True)
 prefix = Column(String(length=1), nullable=False)

  Two of our tables recently needed conversion to stop using their own
 local
  'environment' column to using this table.  The first part's been put in
 place
  (a new foreign key for 'environment_id'), but to prevent large swaths
 of code
  from needing changes, a thought of using a hybrid property might allow
 the
  change to be hidden (until the code using it had been rewritten at
 least).

  My naive attempt was the following (just the relevant snippet):

 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

  Unfortunately (and in hindsight for obvious reasons), this code doesn't
 work,

  what does "doesn't work" mean here?   This will work at the instance
 level.  At the query level, not so much, that's true, if you truly want no
 code to change you'd need to implement an @expression here that's a little
 inefficient, as it needs to do a correlated subq:


 Yes, the "doesn't work" was specifically related to an attempt to use it in
 a query, which of course failed miserably. :)


  class HasEnv(Base):
 __tablename__ = 'has_env'

 id = Column(INTEGER, primary_key=True)
 environment_id = Column(ForeignKey('environments.environmentID'))


 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

 @environment.expression
 def environment(cls):
 return select([Environment.environment]).\
 where(Environment.id ==
 cls.environment_id).correlate(cls).\
 as_scalar()


 s = Session()

 print s.query(HasEnv).filter(HasEnv.environment == 'some env')

 output:

 SELECT has_env.id AS has_env_id, has_env.environment_id AS
 has_env_environment_id
 FROM has_env
 WHERE (SELECT environments.environment
 FROM environments
 WHERE environments.environmentID = has_env.environment_id) = :param_1

 won't perform well from a SQL perspective but will do the job...



 This worked perfectly, thank you!  This is honestly a stop-gap measure
 to allow much of the code to be rewritten (after which it can be removed),
 and for what it's being used for at the moment, it won't be too bad
 regarding
 performance.



but a very brief conversation with someone on the #sqlalchemy channel
 on
  Freenode indicated there was no way to do this and all the relevant code
 must be reworked.  While it's only a few dozen places this occurs, I can
 see
 this coming up again in the future as further schema refactorings occur,
 so
  I turn to those with more expertise to find out if there is a way to
 accomplish
  what I desire, or if there's really no hope. :)  Any insight would be
 greatly
 appreciated.

 I don't know how to fix this issue with IRC and stackoverflow that people
 constantly are getting bad information.

 Heh, Sometimes I think I should know better about asking for help
 on IRC, but sometimes I get lucky.  In this case, I decided I might
 have better luck on the mailing list after the answer I got on IRC
 was very unsatisfactory. :)

 Thanks for the help!


D'oh, seems like I spoke too soon; there's a specific instance where
this fails due to some subquery stuff I'm doing:

if apptier:
subq = (
Session.query(
Package.pkg_name,
Package.version,
Package.revision,
AppDefinition.app_type,
AppDeployment.environment
).join(Deployment)
 .join(AppDeployment)
 .join(AppDefinition)
 .filter(Package.pkg_name == package_name)
 .filter(AppDeployment.environment == env)
 .filter(AppDeployment.status != 'invalidated'))

[...]

# The actual column name must be used in the subquery
# usage below; DB itself should be corrected
versions = (Session.query(subq.c.appType,
subq.c.version,
subq.c.revision)
.group_by(subq.c.appType, subq.c.environment)
.all())

The parts in the subquery work fine, but the access of the 'column'
in the final query leads to this:

Traceback (most recent call last
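The thread cuts off before a resolution is posted, but the usual cause of this failure is treating the Query object itself as a selectable: a Query has no `.c` collection until it is converted with `.subquery()`. The sketch below is a minimal, self-contained illustration of that fix (the `Package` model here is illustrative, not the thread's real schema; the explicit `.label()` mirrors the thread's legacy mixed-case column name):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Package(Base):
    """Illustrative stand-in for the thread's Package model."""
    __tablename__ = 'packages'
    id = Column(Integer, primary_key=True)
    pkg_name = Column(String(32))
    version = Column(String(8))
    app_type = Column('appType', String(16))  # legacy mixed-case column name

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add_all([
    Package(pkg_name='web', version='1.0', app_type='frontend'),
    Package(pkg_name='web', version='1.1', app_type='frontend'),
])
session.commit()

# A Query has no .c collection; .subquery() turns it into a selectable
# whose columns can be addressed from an enclosing query.
subq = (
    session.query(
        Package.pkg_name,
        Package.version,
        Package.app_type.label('appType'),  # explicit label for the legacy name
    )
    .filter(Package.pkg_name == 'web')
    .subquery()
)

rows = session.query(subq.c.appType, subq.c.version).all()
```

With the `.subquery()` call in place, `subq.c.appType` resolves instead of raising.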

[sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-20 Thread Ken Lareau
So in the ongoing improvement of one of our internal databases, we created
a new table named 'environments' whose SQLA code looks something like
this:

class Environment(Base):
    __tablename__ = 'environments'

    id = Column(u'environmentID', INTEGER(), primary_key=True)
    environment = Column(String(length=15), nullable=False, unique=True)
    env = Column(String(length=12), nullable=False, unique=True)
    domain = Column(String(length=32), nullable=False, unique=True)
    prefix = Column(String(length=1), nullable=False)

Two of our tables recently needed conversion to stop using their own local
'environment' column to using this table.  The first part's been put in
place
(a new foreign key for 'environment_id'), but to prevent large swaths of
code
from needing changes, a thought of using a hybrid property might allow the
change to be hidden (until the code using it had been rewritten at least).

My naive attempt was the following (just the relevant snippet):

environment_obj = relationship('Environment')

@hybrid_property
def environment(self):
    return self.environment_obj.environment

Unfortunately (and in hindsight for obvious reasons), this code doesn't
work,
but a very brief conversation with someone on the #sqlalchemy channel on
Freenode indicated there was no way to do this and all the relevant code
must be reworked.  While it's only a few dozen places this occurs, I can see
this coming up again in the future as further schema refactorings occur, so
I turn to those with more expertise to find out if there is a way to
accomplish
what I desire, or if there's really no hope. :)  Any insight would be
greatly
appreciated.

-- 
- Ken Lareau

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-20 Thread Ken Lareau
On Fri, Jun 20, 2014 at 1:23 PM, Mike Bayer mike...@zzzcomputing.com
wrote:


 On 6/20/14, 3:38 PM, Ken Lareau wrote:

  So in the ongoing improvement of one of our internal databases, we
 created
  a new table named 'environments' whose SQLA code looks something like
  this:

 class Environment(Base):
 __tablename__ = 'environments'

 id = Column(u'environmentID', INTEGER(), primary_key=True)
 environment = Column(String(length=15), nullable=False, unique=True)
 env = Column(String(length=12), nullable=False, unique=True)
 domain = Column(String(length=32), nullable=False, unique=True)
 prefix = Column(String(length=1), nullable=False)

  Two of our tables recently needed conversion to stop using their own
 local
  'environment' column to using this table.  The first part's been put in
 place
  (a new foreign key for 'environment_id'), but to prevent large swaths of
 code
  from needing changes, a thought of using a hybrid property might allow
 the
  change to be hidden (until the code using it had been rewritten at
 least).

  My naive attempt was the following (just the relevant snippet):

 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

  Unfortunately (and in hindsight for obvious reasons), this code doesn't
 work,

 what does doesn't work mean here?   This will work at the instance
 level.  At the query level, not so much, that's true, if you truly want no
 code to change you'd need to implement an @expression here that's a little
 inefficient, as it needs to do a correlated subq:


Yes, the doesn't work was specifically related to an attempt to use it in
a query, which of course failed miserably. :)


 class HasEnv(Base):
 __tablename__ = 'has_env'

 id = Column(INTEGER, primary_key=True)
 environment_id = Column(ForeignKey('environments.environmentID'))


 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

 @environment.expression
 def environment(cls):
 return select([Environment.environment]).\
 where(Environment.id ==
 cls.environment_id).correlate(cls).\
 as_scalar()


 s = Session()

 print s.query(HasEnv).filter(HasEnv.environment == 'some env')

 output:

 SELECT has_env.id AS has_env_id, has_env.environment_id AS
 has_env_environment_id
 FROM has_env
 WHERE (SELECT environments.environment
 FROM environments
 WHERE environments.environmentID = has_env.environment_id) = :param_1

 won't perform well from a SQL perspective but will do the job...



 This worked perfectly, thank you!  This is honestly a stop-gap measure
to allow much of the code to be rewritten (after which it can be removed),
and for what it's being used for at the moment, it won't be too bad
regarding performance.



but a very brief conversation with someone on the #sqlalchemy channel
 on
  Freenode indicated there was no way to do this and all the relevant code
 must be reworked.  While it's only a few dozen places this occurs, I can
 see
 this coming up again in the future as further schema refactorings occur, so
  I turn to those with more expertise to find out if there is a way to
 accomplish
  what I desire, or if there's really no hope. :)  Any insight would be
 greatly
 appreciated.

 I don't know how to fix the problem of people constantly getting bad
 information from IRC and Stack Overflow.

 Heh, Sometimes I think I should know better about asking for help
on IRC, but sometimes I get lucky.  In this case, I decided I might
have better luck on the mailing list after the answer I got on IRC
was very unsatisfactory. :)

Thanks for the help!


-- 
- Ken Lareau

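The correlated-subquery hybrid sketched in this thread can be exercised end to end. The following self-contained version restates Mike's example in SQLAlchemy 1.4+/2.0 idioms (`select()` without a list, `scalar_subquery()` instead of `as_scalar()`) against an in-memory SQLite database:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, select
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Environment(Base):
    __tablename__ = 'environments'
    id = Column('environmentID', Integer, primary_key=True)
    environment = Column(String(15), nullable=False, unique=True)

class HasEnv(Base):
    __tablename__ = 'has_env'
    id = Column(Integer, primary_key=True)
    environment_id = Column(ForeignKey('environments.environmentID'))
    environment_obj = relationship('Environment')

    @hybrid_property
    def environment(self):
        # instance level: plain attribute traversal
        return self.environment_obj.environment

    @environment.expression
    def environment(cls):
        # query level: correlated scalar subquery against environments
        return (
            select(Environment.environment)
            .where(Environment.id == cls.environment_id)
            .correlate(cls.__table__)
            .scalar_subquery()
        )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add(Environment(id=1, environment='production'))
session.add(HasEnv(environment_id=1))
session.commit()

row = session.query(HasEnv).filter(HasEnv.environment == 'production').one()
```

At the instance level `row.environment` traverses the relationship; in the filter it renders the correlated `(SELECT environments.environment ...)` comparison shown in the thread.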


[sqlalchemy] Trying to get a 'global' session to work for SQLAlchemy database library tests

2013-10-11 Thread Ken Lareau
In the process of trying to find an efficient way to manage a test database
for a large set of tests for a database library I'm writing that uses
SQLAlchemy,
I came across this page:


http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html

This is definitely what I want to do, with one catch: I already have a
session
management system in place for the library that seems to conflict with the
sample code given on the webpage, and I'm not having luck reconciling it.

Basically I have an 'init_session' I use that looks as follows:

def init_session(db_user, db_password, **kwargs):
    """Initialize database session"""

    if 'hostname' not in kwargs or 'db_name' not in kwargs:
        db_host, db_name = load_db_config()
        kwargs.setdefault('hostname', db_host)
        kwargs.setdefault('db_name', db_name)

    dbconn_string = create_dbconn_string(db_user, db_password, **kwargs)
    engine = create_engine(dbconn_string)

    # Ensure connection information is valid
    try:
        engine.execute('select 1').scalar()
    except sqlalchemy.exc.DBAPIError, e:
        raise PermissionsException(e)

    Session.configure(bind=engine)

Session is defined as follows (in another file at module-level):

Session = scoped_session(sessionmaker())

From all other modules, I simply do a:

from db-meta-module import Session

and I am able to use Session to manage all my access to the ORM.
My attempt to modify the code on the webpage currently looks like this:

import unittest2 as unittest

from tagopsdb.database import init_database, init_session
from tagopsdb.database.meta import Session


def setup_module():
    global transaction, connection, engine

    # Connect to the database and create the schema within a transaction
    engine = init_session('dbtest', 'dbtestpasswd', hostname='localhost',
                          db_name='TagOpsDB_Testing')
    init_database()

    # If you want to insert fixtures to the DB, do it here


def teardown_module():
    # Roll back the top level transaction and disconnect from the database
    Session.rollback()
    Session.close()
    engine.dispose()


class DatabaseTest(unittest.TestCase):
    """Base unittest class to manage nested transactions for testing"""

    def setup(self):
        self.__transaction = Session.begin_nested()

    def teardown(self):
        self.__transaction.rollback()

I modified my test classes to subclass DatabaseTest, but an attempt
to run the tests results in:

UnboundExecutionError: Could not locate a bind configured on mapper
Mapper|Environments|environments or this Session

This does not completely surprise me as my understanding about sessions
in SQLAlchemy is still not solid enough to always understand why my
current code works (and I have a full application using this without issue
at the moment).

So I guess the question is: am I missing something really obvious here,
or will I need to rethink how I deal with sessions in my library code?

Thanks in advance.

-- 
- Ken Lareau



Re: [sqlalchemy] Trying to get a 'global' session to work for SQLAlchemy database library tests

2013-10-11 Thread Ken Lareau
On Fri, Oct 11, 2013 at 5:53 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Oct 11, 2013, at 7:14 PM, Ken Lareau klar...@tagged.com wrote:

 In the process of trying to find an efficient way to manage a test database
 for a large set of tests for a database library I'm writing that uses
 SQLAlchemy,
 I came across this page:


 http://alextechrants.blogspot.fi/2013/08/unit-testing-sqlalchemy-apps.html

 This is definitely what I want to do, with one catch: I already have a
 session
 management system in place for the library that seems to conflict with the
 sample code given on the webpage, and I'm not having luck reconciling it.


 I find the approach on that page a little awkward - it's using new globals
 for no good reason and also the begin_nested() seems strange.   The test
 itself then has a self.session, so the test itself is using a test-bound
 session, the choice of globals for connection and engine seems even
 more weird.


Yeah, after looking at it a bit, it also seemed awkward to me, but the
reason I was looking is that it takes a good full 7 seconds to recreate
an empty database, and doing that per test or even per module might get
a bit painful (eventually I suspect this library will have quite a few
test modules to match all the tables and uses of the library), but maybe
I'll just pursue that for now (I mostly had that working, at least).


 The way this works is:

 1. test fixture gets at an engine, from configurational system, locally,
 whereever.

 2. test fixture gets a connection, holds onto it locally.

 3. test fixture gets a transaction from connection - this is a top level
 transaction, using connection.begin()

 4. test fixture then does whatever the test needs to get at a session.  if
 the code being tested relies upon a global registry, it injects the
 connection.  below is using a traditional scoped session:

  def setUp(self):
      self.conn = engine.connect()
      self.trans = self.conn.begin()

      from application.model import the_scoped_session
      self.session = the_scoped_session(bind=self.conn)

 now above, the test fixture has a hold on self.session.  but - this is
 the *same* session that's in the registry (the registry here being
 application.model.the_scoped_session).   if some other code somewhere
 calls upon the_scoped_session(), they get the *same* session.   it's a
 registry, that's the point of it.

 if you have some other kind of registration thing in place, you'd need to
 figure out how to load it up with a new Session bound to that local
 connection.


 5. test fixture releases the session:

  def tearDown(self):
      the_scoped_session.remove()
      self.trans.rollback()
      self.conn.close()

 so note, we don't have to bother doing anything special with the Session
 at teardown time, we just dump it.   we roll back the transaction that
 we've created externally to it.

 an example of running through this is in the docs at:


 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#joining-a-session-into-an-external-transaction


 and I am able to use Session to manage all my access to the ORM.
 My attempt to modify the code on the webpage currently looks like this:

 import unittest2 as unittest

 from tagopsdb.database import init_database, init_session
 from tagopsdb.database.meta import Session


 def setup_module():
 global transaction, connection, engine

 # Connect to the database and create the schema within a transaction
 engine = init_session('dbtest', 'dbtestpasswd', hostname='localhost',
   db_name='TagOpsDB_Testing')
 init_database()

 # If you want to insert fixtures to the DB, do it here


 def teardown_module():
 # Roll back the top level transaction and disconnect from the database
 Session.rollback()
 Session.close()
 engine.dispose()


 I tend to organize things such that the scope of this transaction is *per
 test*, not per module as you're doing.  the engine, that can be per module,
 or preferably per-application.   But i'd be linking the lifespan of the
 Session to that of the transaction (which again begin_nested() should be a
 begin()).


My only concern with that is the enormous time to recreate/reset the
database for each test as mentioned above, but once again, I might
not be fully understanding if this is actually a concern or not.


I modified my test classes to subclass DatabaseTest, but an attempt
to run the tests results in:

UnboundExecutionError: Could not locate a bind configured on mapper
Mapper|Environments|environments or this Session


make sure the Session is created with an explicit bind to the connection.


I have this at the end of my init_session():

Session.configure(bind=engine)

but I'm assuming that's not enough?


-- 
- Ken Lareau
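The fixture Mike describes (one connection, an external top-level transaction, and the registry's session pointed at that connection) can be reduced to a runnable sketch. This version uses `scoped_session.configure()` rather than passing `bind=` at call time, and an in-memory SQLite database; the `Widget` model and registry name are illustrative, not from the thread:

```python
import unittest
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(String(32))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# stands in for the application's module-level session registry
the_scoped_session = scoped_session(sessionmaker())

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        # hold a connection and an external, top-level transaction
        self.conn = engine.connect()
        self.trans = self.conn.begin()
        # point the registry at that connection, so code importing
        # the registry gets the test-bound session
        the_scoped_session.configure(bind=self.conn)
        self.session = the_scoped_session()

    def tearDown(self):
        the_scoped_session.remove()
        self.trans.rollback()   # undo everything the test did
        self.conn.close()

class WidgetTest(DatabaseTest):
    # each test sees an empty table, showing the rollback isolates them
    def test_insert_a(self):
        self.session.add(Widget(name='a'))
        self.session.flush()
        self.assertEqual(self.session.query(Widget).count(), 1)

    def test_insert_b(self):
        self.session.add(Widget(name='b'))
        self.session.flush()
        self.assertEqual(self.session.query(Widget).count(), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WidgetTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the schema is created once and each test only rolls back its own transaction, there is no per-test database rebuild, addressing the 7-second recreate concern raised above.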


Re: [sqlalchemy] Re: Duplicating primary key value into another column upon insert?

2013-09-25 Thread Ken Lareau
On Tue, Sep 24, 2013 at 5:40 PM, Nicholas Long nick.studioc...@gmail.comwrote:

My mind keeps going back to events but of course there's the limitation to
 modifying Session state while handling various events. (
 http://docs.sqlalchemy.org/en/rel_0_7/orm/events.html#sqlalchemy.orm.events.MapperEvents.after_update
 )

 But what about using the SQL Expression API?

 *Relevant code snippet:*

 class MyModel(Base):
     __tablename__ = 'MyTable'
     priid = Column(Integer(), primary_key=True)
     secid = Column(Integer())


 @listens_for(MyModel, 'after_insert')
 def mymodel_after_insert(mapper, connection, target):
     mytable = MyModel.__table__
     priid = target.priid
     statement = (mytable.update()
                  .where(mytable.c.priid == priid)
                  .values(secid=priid))
     connection.execute(statement)


 Full code for reference:
 http://paste.pound-python.org/show/WVciGm4jCxgvz84jKrZy/
 SQLA echo: http://paste.pound-python.org/show/hSyCGisr0X5eupVdyk3f/

 The echo looks sane, though I can't be sure how the SQLAchemy internals
 are affected by this.

 Thank you for your input, Michael.


While I'm not using the 'two-step' approach this time, there will be future
column renames that are far more insidious and may require it, so I would
also like to know if the above will work as well. :)

-- 
- Ken Lareau
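For what it's worth, the `after_insert` approach can be checked in isolation. Below is a self-contained sketch (in-memory SQLite, modern SQLAlchemy; the model mirrors the snippet above) in which the event's UPDATE runs on the flush connection, so it joins the same transaction as the INSERT:

```python
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyModel(Base):
    __tablename__ = 'my_table'
    priid = Column(Integer, primary_key=True)  # autoincrement PK
    secid = Column(Integer)                    # mirror column

@event.listens_for(MyModel, 'after_insert')
def mymodel_after_insert(mapper, connection, target):
    # Use the connection the flush is running on: the UPDATE then
    # commits or rolls back together with the INSERT.
    table = MyModel.__table__
    connection.execute(
        table.update()
        .where(table.c.priid == target.priid)
        .values(secid=target.priid)
    )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)
obj = MyModel()
session.add(obj)
session.commit()   # flush fires the event, then both statements commit
```

After the commit, re-reading `obj.secid` shows the mirrored primary key value.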



Re: [sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-24 Thread Ken Lareau
On Tue, Sep 24, 2013 at 7:25 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Sep 23, 2013, at 10:04 PM, Ken Lareau klar...@tagged.com wrote:

  Hopefully this will make sense...
 
  I have a database which is in need of some normalization of the column
  naming in various tables.  In an effort to minimize disruption (since
 this
  is a live database used by many applications), I'm trying to use a
 two-step
  approach:
 
  1) Add a new column that will 'mirror' the current primary key column,
 then update code to utilize that column once in place
  2) Once all code is converted, remove the original primary key column
 and make the new column the primary key instead.
 
  In an effort to minimize change to the code I have currently using my
  SQLAlchemy model, I'm trying to find a way to deal with this from within
  the declarative classes themselves.  So the questions are:
 
   - Is this even possible, and if so, how can it be done?
   - If not possible, is there a good yet minimally intrusive external
 change that can be done?
 
  My searching through the docs so far hasn't turned up anything useful,
  I'm afraid...



 it depends a lot on how you are generating primary key values.   how are
 the new values being generated?   (things like database platform affect
 this dramatically).   if using something like MySQL's autoincrement, you
 probably want to write a trigger that will populate this new column after
 an INSERT.overall a trigger is probably the best way to get this extra
 value in the table efficiently.

 OTOH if your applications know the new primary key value beforehand, then
 SQLAlchemy events can be used to copy the value into the new column.

 you also want to be mindful of indexing on this column, as applications
 move to use the new column, you probably want queries to still use indexes.



The primary key values are indeed using autoincrement in MySQL.

The reason for doing this is the database is an existing one over which
I've been building a library to be used to programmatically access it,
but it's not the only library accessing the database, so to prevent
potential breakage with the other libraries, we're taking the safe approach.

The normalization is due to lovely decisions like having all columns
be in lowercase except for the primary key (though sometimes other
keys also have the same problem) where we have something like
'PackageID'.  The new column is meant to replace the original column,
but there will be a small period of time that both columns will exist,
so they must remain in sync until all of the current libraries and appli-
cations are using the new column.  At that point, the new column
will take over and hopefully no one will be the wiser that things were
wrong before. *smile*

As far as I know, none of the applications know the primary key
value beforehand, so that isn't a concern.  Indexing might be, though
the hope is the changeover will be relatively short (matter of a day
or so), but I can still index the new column if the impact is great enough
(the database isn't _too_ large at this point).

Of course, this leads back to the original question... how exactly
does one accomplish this? :)  Hopefully the clarification above helps
some in determining this.


-- 
- Ken Lareau



Re: [sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-24 Thread Ken Lareau
On Tue, Sep 24, 2013 at 10:32 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Sep 24, 2013, at 12:00 PM, Ken Lareau klar...@tagged.com wrote:

 
 
 
  Of course, this leads back to the original question... how exactly
  does one accomplish this? :)  Hopefully the clarification above helps
  some in determining this.

 as I said, build a trigger:

 http://dev.mysql.com/doc/refman/5.0/en/triggers.html



Oh, you meant on the database side... d'oh.  *slaps hand to forehead*
Thanks, will look at this.

-- 
- Ken Lareau



Re: [sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-24 Thread Ken Lareau
On Tue, Sep 24, 2013 at 11:10 AM, Ken Lareau klar...@tagged.com wrote:


 On Tue, Sep 24, 2013 at 10:32 AM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Sep 24, 2013, at 12:00 PM, Ken Lareau klar...@tagged.com wrote:

 
 
 
  Of course, this leads back to the original question... how exactly
  does one accomplish this? :)  Hopefully the clarification above helps
  some in determining this.

 as I said, build a trigger:

 http://dev.mysql.com/doc/refman/5.0/en/triggers.html



 Oh, you meant on the database side... d'oh.  *slaps hand to forehead*
 Thanks, will look at this.


Sadly after over an hour of fighting with MySQL triggers and being
unsuccessful at getting any variation to work, I've decided that I'm
just going to need to shut down key infrastructure pieces during
the column renaming and avoid the temporary new column as it
would just complicate my current codebase more than it already
is... once again, I do appreciate the help, however.

-- 
- Ken Lareau



Re: [sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-24 Thread Ken Lareau
On Tue, Sep 24, 2013 at 2:53 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Sep 24, 2013, at 5:42 PM, Ken Lareau klar...@tagged.com wrote:

 On Tue, Sep 24, 2013 at 11:10 AM, Ken Lareau klar...@tagged.com wrote:


 On Tue, Sep 24, 2013 at 10:32 AM, Michael Bayer mike...@zzzcomputing.com
  wrote:


 On Sep 24, 2013, at 12:00 PM, Ken Lareau klar...@tagged.com wrote:

 
 
 
  Of course, this leads back to the original question... how exactly
  does one accomplish this? :)  Hopefully the clarification above helps
  some in determining this.

 as I said, build a trigger:

 http://dev.mysql.com/doc/refman/5.0/en/triggers.html



 Oh, you meant on the database side... d'oh.  *slaps hand to forehead*
 Thanks, will look at this.


 Sadly after over an hour of fighting with MySQL triggers and being
 unsuccessful at getting any variation to work, I've decided that I'm
 just going to need to shut down key infrastructure pieces during
 the column renaming and avoid the temporary new column as it
 would just complicate my current codebase more than it already
 is... once again, I do appreciate the help, however.


 I googled for mysql trigger copy autoincrement and got this:

 Copy auto-increment value to another column on insert:

 http://forums.mysql.com/read.php?99,186171,186241#msg-186241

 the trigger here makes use of LAST_INSERT_ID() and is worth a shot.


Closer, but it requires the main tables to not already have any
values in them (if I'm understanding the SQL correctly), and I'm
working with existing tables already populated with plenty of
data.  Honestly, at this point simply renaming the columns is
far less painful. :)  (And thankfully only needs to be done once.)
But thanks for the info, much appreciated.

-- 
- Ken Lareau



[sqlalchemy] Duplicating primary key value into another column upon insert?

2013-09-23 Thread Ken Lareau
Hopefully this will make sense...

I have a database which is in need of some normalization of the column
naming in various tables.  In an effort to minimize disruption (since this
is a live database used by many applications), I'm trying to use a two-step
approach:

1) Add a new column that will 'mirror' the current primary key column,
   then update code to utilize that column once in place
2) Once all code is converted, remove the original primary key column
   and make the new column the primary key instead.

In an effort to minimize change to the code I have currently using my
SQLAlchemy model, I'm trying to find a way to deal with this from within
the declarative classes themselves.  So the questions are:

 - Is this even possible, and if so, how can it be done?
 - If not possible, is there a good yet minimally intrusive external
   change that can be done?

My searching through the docs so far hasn't turned up anything useful,
I'm afraid...

Thanks in advance.

-- 
- Ken Lareau



[sqlalchemy] SQLAlchemy introspection of relationships

2013-04-16 Thread ken
This has also been posted here: 
http://stackoverflow.com/questions/15950375/sqlalchemy-introspection-of-relationships,
 
shoulda asked here first.

I have a use case for modelling data in a SQL schema rather than through 
defining SQLAlchemy objects. However, i require a way to construct the 
corresponding SQLAlchemy objects automatically from the SQL schema.

Currently i have:

from sqlalchemy import MetaData
from sqlalchemy.ext.declarative import declarative_base

meta = MetaData(bind=your_engine)
meta.reflect()

# get the table which we would like to represent as a Python object
table = meta.tables['your_table_name']

# define the object which we will use as the base class
base = declarative_base()

# create db object
MyObject = type(str(table.fullname), (base,), {'__table__': table})


however, the relationships between objects are not created. Any ideas how 
to go about doing this?

Thanks!
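Later SQLAlchemy versions ship an extension built for exactly this: `sqlalchemy.ext.automap` reflects a schema and generates mapped classes with relationships inferred from foreign keys. A minimal sketch (SQLAlchemy 1.4+ API; the `users`/`addresses` tables are illustrative):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text(
        "CREATE TABLE addresses (id INTEGER PRIMARY KEY, "
        "user_id INTEGER REFERENCES users(id), email TEXT)"))

Base = automap_base()
Base.prepare(autoload_with=engine)  # reflect tables, generate classes + relationships

User = Base.classes.users
Address = Base.classes.addresses

session = Session(engine)
u = User(name='ken')
# automap names the one-to-many collection '<tablename>_collection'
u.addresses_collection.append(Address(email='ken@example.com'))
session.add(u)
session.commit()

addr = session.query(Address).one()
```

The many-to-one attribute on the generated `Address` class is named after the related class (`addr.users`), following automap's default naming scheme.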





Re: [sqlalchemy] How to handle 'sub-commits'?

2013-01-17 Thread Ken Lareau
Sadly, it looks like when I try the code in my full application, it is
failing miserably.
I'm seeing errors like this:

sqlalchemy.orm.exc.DetachedInstanceError: Instance Deployments at
0x3367910 is not bound to a Session; attribute refresh operation cannot
proceed

and

sqlalchemy.exc.InvalidRequestError: Object 'HostDeployments at 0x21cd050'
is already attached to session '1' (this is '3')

Sadly it's not immediately obvious as to what's going on... not even
certain how to start debugging this problem.

- Ken


On Tue, Jan 8, 2013 at 2:59 PM, Ken Lareau klar...@tagged.com wrote:

 On Tue, Jan 8, 2013 at 2:16 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Jan 8, 2013, at 4:00 PM, Ken Lareau wrote:


 Given this and your previous comments, and after some conversations with
 a coworker, I decided to expose the 'engine' variable from the database
 library and create a context manager as so:

 @contextlib.contextmanager
 def isolated_transaction(NewSession=None):
     """Manage a new transaction (session) within an existing session"""

     needed_session = True
     existing = Session.registry()

     if NewSession is None:
         NewSession = scoped_session(sessionmaker(bind=engine))
     else:
         needed_session = False

     Session.registry.set(NewSession)

     try:
         yield
     finally:
         Session.registry.set(existing)

         if needed_session:
             NewSession.close()


 (This code hasn't quite been tested yet and may need some tweaking)
 With this, whenever I need an 'isolated' transaction I can simply do
 from my program:

 with isolated_transaction():
     do stuff with Session

Re: [sqlalchemy] How to handle 'sub-commits'?

2013-01-17 Thread Ken Lareau
On Thu, Jan 17, 2013 at 4:13 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Jan 17, 2013, at 7:01 PM, Ken Lareau wrote:

 On Thu, Jan 17, 2013 at 3:54 PM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Jan 17, 2013, at 6:45 PM, Ken Lareau wrote:

 Sadly, it looks like when I try the code in my full application, it is
 failing miserably.
 I'm seeing errors like this:

 sqlalchemy.orm.exc.DetachedInstanceError: Instance Deployments at
 0x3367910 is not bound to a Session; attribute refresh operation cannot
 proceed

 and

 sqlalchemy.exc.InvalidRequestError: Object 'HostDeployments at
 0x21cd050' is already attached to session '1' (this is '3')

 Sadly it's not immediately obvious as to what's going on... not even
 certain how
 to start debugging this problem.


 well an object can only be attached to one Session at a time.   so the
 tricky thing is that it's kind of easy to get objects in the Session
 sometimes when you don't want them to.  like if you have A and B, and
 there's a relationship between both like A.bs and B.a, putting an A into a
 Session will pull in the B and vice versa.

 So if you're putting some kind of object into this other Session to be
 committed immediately, you'd have to make sure it isn't being pulled into
 your primary session.   if you call object_session(someobj) on any object
 it will show you what Session it belongs to.

 The other option is to merge() the state of the object into another
 Session.  merge() makes a copy of an object from one Session to another.
  Though here you're looking to commit those objects in the second session
 and not at all in the first so you probably should just make sure those
 objects are only in that one Session.
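
As an illustrative aside, both suggestions can be sketched in a few runnable lines. The Host model, the SessionFactory name, and the in-memory SQLite URL are invented for the demo; they are not from the original application:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker, object_session

Base = declarative_base()

class Host(Base):
    __tablename__ = "hosts"
    id = sa.Column(sa.Integer, primary_key=True)
    status = sa.Column(sa.String(20))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

main = SessionFactory()
host = Host(id=1, status="pending")
main.add(host)
main.commit()

# object_session() reports which Session an instance currently belongs to
assert object_session(host) is main

# merge() copies the instance's state into a second, independent Session,
# so the urgent change can be committed without touching `main`
side = SessionFactory()
merged = side.merge(host)
merged.status = "ok"
side.commit()
side.close()
```

A fresh session reading the row afterward sees status 'ok' even while `main` stays open.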


 That would explain things, since I have things like:

  with isolated_transaction():
      host_dep.status = 'ok'
      Session.commit()

 Obviously the 'host_dep' is from the original session, so this isn't going
 to work.  I know about merge(),
 though in this case I'm not fully certain how to use it, or if it can even
 solve issues like this. :(


 OK you could merge in a situation like that:

  with isolated_transaction():
      merge_host_dep = Session.merge(host_dep)
      merge_host_dep.status = 'ok'
      Session.commit()

 but now your host_dep is in a different state as merge_host_dep, and
 that row has a different value in the isolated transaction than the one
 locally.  if you updated it locally, then you'd see that same UPDATE
 statement happen again and it might even hit upon a conflict when you try
 to commit the main transaction.

 the using two transactions pattern is sort of best if you can isolate
 rows that are intended for one side or the other.


Sadly in this case... I can't isolate, at least not very easily.  The code
is involved enough
that extracting the necessary parts to isolate a given object for change
would be very,
very difficult, if not impossible.  Unfortunately this may require me to
completely rewrite
large parts of my application. :(

I do thank you for all the help you've given so far, though.  It's been
very much appreciated.

- Ken


 - Ken





 - Ken


 On Tue, Jan 8, 2013 at 2:59 PM, Ken Lareau klar...@tagged.com wrote:

 On Tue, Jan 8, 2013 at 2:16 PM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Jan 8, 2013, at 4:00 PM, Ken Lareau wrote:


 Given this and your previous comments, and after some conversations with
 a coworker, I decided to expose the 'engine' variable from the database
 library and create a context manager as so:

 @contextlib.contextmanager
 def isolated_transaction(NewSession=None):
     """Manage a new transaction (session) within an existing session"""

     needed_session = True
     existing = Session.registry()

     if NewSession is None:
         NewSession = scoped_session(sessionmaker(bind=engine))
     else:
         needed_session = False

     Session.registry.set(NewSession)

     try:
         yield
     finally:
         Session.registry.set(existing)

         if needed_session:
             NewSession.close()


 (This code hasn't quite been tested yet and may need some tweaking)
 With this, whenever I need an 'isolated' transaction I can simply do
 from my program:

 with isolated_transaction():
     do stuff with Session

 And things should 'just work'... at least that's my hope!  I can also
 pass
 an existing new session if I want to use one in multiple places (not
 sure I'll need that just yet, but it was easy enough to write into the
 method, so...)

 Hopefully this seems like a sane solution to my problem. :)


 its good, though you can save yourself all the trouble with
 scoped_session/sessionmaker by just saying this:

 from sqlalchemy.orm import Session
  new_session = Session(bind=engine)

 or which might be even more portable, use the sessionmaker from your
 existing registry:

 new_session = Session.session_factory()

 session_factory is the sessionmaker() you stuck onto scoped_session in
 the first place

Re: [sqlalchemy] How to handle 'sub-commits'?

2013-01-17 Thread Ken Lareau
On Thu, Jan 17, 2013 at 4:26 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Jan 17, 2013, at 7:24 PM, Ken Lareau wrote:



 On Thu, Jan 17, 2013 at 4:13 PM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Jan 17, 2013, at 7:01 PM, Ken Lareau wrote:

 On Thu, Jan 17, 2013 at 3:54 PM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Jan 17, 2013, at 6:45 PM, Ken Lareau wrote:

 Sadly, it looks like when I try the code in my full application, it is
 failing miserably.
 I'm seeing errors like this:

 sqlalchemy.orm.exc.DetachedInstanceError: Instance Deployments at
 0x3367910 is not bound to a Session; attribute refresh operation cannot
 proceed

 and

 sqlalchemy.exc.InvalidRequestError: Object 'HostDeployments at
 0x21cd050' is already attached to session '1' (this is '3')

 Sadly it's not immediately obvious as to what's going on... not even
 certain how
 to start debugging this problem.


 well an object can only be attached to one Session at a time.   so the
 tricky thing is that it's kind of easy to get objects in the Session
 sometimes when you don't want them to.  like if you have A and B, and
 there's a relationship between both like A.bs and B.a, putting an A into a
 Session will pull in the B and vice versa.

 So if you're putting some kind of object into this other Session to be
 committed immediately, you'd have to make sure it isn't being pulled into
 your primary session.   if you call object_session(someobj) on any object
 it will show you what Session it belongs to.

 The other option is to merge() the state of the object into another
 Session.  merge() makes a copy of an object from one Session to another.
  Though here you're looking to commit those objects in the second session
 and not at all in the first so you probably should just make sure those
 objects are only in that one Session.


 That would explain things, since I have things like:

  with isolated_transaction():
      host_dep.status = 'ok'
      Session.commit()

 Obviously the 'host_dep' is from the original session, so this isn't
 going to work.  I know about merge(),
 though in this case I'm not fully certain how to use it, or if it can
 even solve issues like this. :(


 OK you could merge in a situation like that:

  with isolated_transaction():
      merge_host_dep = Session.merge(host_dep)
      merge_host_dep.status = 'ok'
      Session.commit()

 but now your host_dep is in a different state as merge_host_dep, and
 that row has a different value in the isolated transaction than the one
 locally.  if you updated it locally, then you'd see that same UPDATE
 statement happen again and it might even hit upon a conflict when you try
 to commit the main transaction.

 the using two transactions pattern is sort of best if you can isolate
 rows that are intended for one side or the other.


 Sadly in this case... I can't isolate, at least not very easily.  The code
 is involved enough
 that extracting the necessary parts to isolate a given object for change
 would be very,
 very difficult, if not impossible.  Unfortunately this may require me to
 completely rewrite
 large parts of my application. :(

 I do thank you for all the help you've given so far, though.  It's been
  very much appreciated.



 since you're refactoring, perhaps there's a way to break out the things
  that need to be visible into new rows of their own?


With the current base design, that may not be possible.  I could get into
the gory details
about what the application is doing and how it's interacting with the
database, but I fear
it may be more than the rest of the people on the mailing list would like
to hear.


- Ken


 - Ken


 - Ken





 - Ken


 On Tue, Jan 8, 2013 at 2:59 PM, Ken Lareau klar...@tagged.com wrote:

 On Tue, Jan 8, 2013 at 2:16 PM, Michael Bayer mike...@zzzcomputing.com
  wrote:


 On Jan 8, 2013, at 4:00 PM, Ken Lareau wrote:


 Given this and your previous comments, and after some conversations
 with
 a coworker, I decided to expose the 'engine' variable from the database
 library and create a context manager as so:

 @contextlib.contextmanager
 def isolated_transaction(NewSession=None):
     """Manage a new transaction (session) within an existing session"""

     needed_session = True
     existing = Session.registry()

     if NewSession is None:
         NewSession = scoped_session(sessionmaker(bind=engine))
     else:
         needed_session = False

     Session.registry.set(NewSession)

     try:
         yield
     finally:
         Session.registry.set(existing)

         if needed_session:
             NewSession.close()


 (This code hasn't quite been tested yet and may need some tweaking)
 With this, whenever I need an 'isolated' transaction I can simply do
 from my program:

 with isolated_transaction():
     do stuff with Session

 And things should 'just work'... at least that's my hope!  I can also
 pass
 an existing new session if I want to use one in multiple places (not
 sure I'll need

Re: [sqlalchemy] How to handle 'sub-commits'?

2013-01-08 Thread Ken Lareau
On Tue, Jan 8, 2013 at 2:16 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Jan 8, 2013, at 4:00 PM, Ken Lareau wrote:


 Given this and your previous comments, and after some conversations with
 a coworker, I decided to expose the 'engine' variable from the database
 library and create a context manager as so:

 @contextlib.contextmanager
 def isolated_transaction(NewSession=None):
     """Manage a new transaction (session) within an existing session"""

     needed_session = True
     existing = Session.registry()

     if NewSession is None:
         NewSession = scoped_session(sessionmaker(bind=engine))
     else:
         needed_session = False

     Session.registry.set(NewSession)

     try:
         yield
     finally:
         Session.registry.set(existing)

         if needed_session:
             NewSession.close()


 (This code hasn't quite been tested yet and may need some tweaking)
 With this, whenever I need an 'isolated' transaction I can simply do
 from my program:

 with isolated_transaction():
     do stuff with Session

 And things should 'just work'... at least that's my hope!  I can also pass
 an existing new session if I want to use one in multiple places (not
 sure I'll need that just yet, but it was easy enough to write into the
 method, so...)

 Hopefully this seems like a sane solution to my problem. :)


 its good, though you can save yourself all the trouble with
 scoped_session/sessionmaker by just saying this:

 from sqlalchemy.orm import Session
 new_session = Session(bind=engine)

 or which might be even more portable, use the sessionmaker from your
 existing registry:

 new_session = Session.session_factory()

 session_factory is the sessionmaker() you stuck onto scoped_session in the
 first place.  In that case you don't need to refer to the Engine explicitly.


Nice, the second one definitely works and I no longer need to export
'engine'
and only need to import Session from my library where I have the context
manager place. :)

Thanks again!

- Ken


 Depending on the database in use, using low isolation levels can have
 the effect that other transactions can view dirty reads as the
 transaction proceeds, but this is obviously an all-or-nothing thing.
  When I need certain resources exposed during a long running transaction, I
 transfer that data to a different Session and commit() those changes
 distinctly.

 My current needs would tend to use the short transactions for things
 that are mostly
 isolated from anything going on in the longer running (main) transaction,
 though I do
 suspect I might need what you mention in your last sentence, but might
 you be able
 to refer me to an example of how it would work, perchance?


 I'd advise against going this route, you'd pretty much need to use MySQL
 MyISAM tables to get guaranteed dirty reads, that is, there's no
 transaction at all, and it's not really how transactions were meant to be
 used.Lowering the isolation level is usually just a means to get more
 transaction throughput.


 Okay, I suspect I misunderstood what was being mentioned here, which is
 no problem; I'll avoid it. :)  Thanks once again for all the help.

 -Ken







 --
 - Ken Lareau








-- 
- Ken Lareau




Re: [sqlalchemy] How to handle 'sub-commits'?

2013-01-06 Thread Ken Lareau
Michael,

Thanks for the response, see further questions/issues below...

On Fri, Jan 4, 2013 at 8:41 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Jan 3, 2013, at 10:18 PM, Ken Lareau wrote:

 I recently (today) ran into an issue that has me perplexed as to how to
 resolve it,
 so I'm asking here to see if anyone can shed some insight.  Hopefully I
 can explain it clearly enough to make me not sound completely incompetent...

 I currently have an application that during its run starts a session via
 SQLAlchemy
 to one of our databases and keeps it available until the program exits.
 During this
 time it does multiple changes (primarily inserts and updates) to the
 database, but
 of course nothing is actually written to the database until a commit() is
 done.  The
 problem is that there are times when I have a single change that must be
 available
 in the database immediately due to external resources needing access to
 that
 updated/new information.



 for this use case you use a distinct transaction, which means a different
 Session object.  You commit() that Session when you need this short-term
 data to be exposed.


Okay, this is what I suspected and feared. :)  Creating new sessions isn't
much of an
issue, and I came up with a class to manage this for me before realizing my
problem
is going to end up being much deeper...

My current library that uses SQLAlchemy was based off a single session...
so in a
file called meta.py I had the following line:

Session = scoped_session(sessionmaker())

(swiped from an old Pylons example).  In my __init__.py file, I did:

from tagopsdb.database.meta import Session

and then in when initializing my connection to the database, simply did:

Session.configure(bind=engine)

From all other areas of the library, I simply re-used the aforementioned
import, then
actually directly used Session, such as:

Session.add(obj)

or

Session.commit()

Now... this may be very poor usage of it, though I'm trying to improve my
understanding and utilize SQLAlchemy better.  Of course, with the sudden need
for more
than one session, I'm finding that I'm running into an issue.  Specifically:

1) To make it easy to find the correct session, I'm using a dictionary
   which I pass around instead of Session... but referencing a given
   session is a bit clunky - 'sessions.current[name]' is quite a bit less
   succinct than the use of just 'Session', and while I could just assign
   a given entry to a shorter name, that just seems to add to the mistake. :)

2) All the methods in my library currently expect the session to be
   'Session'; that changes with the need for multiple sessions, and it
   means either I will now need to explicitly pass the session into every
   method, or find a way to have the session automatically determined...
   which may not be possible or reasonable.

Given the above, I am open to alternative suggestions, especially if someone
has solved this problem already. :)  It looks like I will need the multiple
session
solution sooner rather than later, which means I have to find some way to
solve
this quandary of mine.


 So this leads to the question: is there any way to do an 'isolated' commit
 from
 within a session and if so, how is it done?  As an alternative, is there a
 way to
 use temporary new sessions to accomplish the same thing?  My current use
 in my application is I have a 'Session = scoped_session(sessionmaker())'
 line
 in a module which I import wherever I need it (essentially as a singleton)
 to be
 able to access the same session throughout the code.  This would of course
 need to change, at least with an application requiring such 'sub commits'.


 Depending on the database in use, using low isolation levels can have the
 effect that other transactions can view dirty reads as the transaction
 proceeds, but this is obviously an all-or-nothing thing.When I need
 certain resources exposed during a long running transaction, I transfer
 that data to a different Session and commit() those changes distinctly.
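
A minimal sketch of the pattern described above, assuming a scoped_session registry named Session; the Deployment model and the in-memory SQLite URL are invented for illustration:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Deployment(Base):
    __tablename__ = "deployments"
    id = Column(Integer, primary_key=True)
    status = Column(String(20))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

# ... the long-running transaction does its work through Session ...

side = Session.session_factory()      # independent, short-lived transaction
side.add(Deployment(id=1, status="ok"))
side.commit()                         # this row becomes visible immediately
side.close()
```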

 My current needs would tend to use the short transactions for things that
are mostly
isolated from anything going on in the longer running (main) transaction,
though I do
suspect I might need what you mention in your last sentence, but might you
be able
to refer me to an example of how it would work, perchance?

Once again, thanks for the help.

- Ken







-- 
- Ken Lareau


[sqlalchemy] How to handle 'sub-commits'?

2013-01-03 Thread Ken Lareau
I recently (today) ran into an issue that has me perplexed as to how to
resolve it,
so I'm asking here to see if anyone can shed some insight.  Hopefully I can
explain it clearly enough to make me not sound completely incompetent...

I currently have an application that during its run starts a session via
SQLAlchemy
to one of our databases and keeps it available until the program exits.
During this
time it does multiple changes (primarily inserts and updates) to the
database, but
of course nothing is actually written to the database until a commit() is
done.  The
problem is that there are times when I have a single change that must be
available
in the database immediately due to external resources needing access to
that
updated/new information.

Initially I thought that using 'begin_nested()' .. 'commit()' would
accomplish this
(possibly naively), but found out today in a very bad way that it did not;
after re-
reading the documentation this became abundantly clear.  Further analysis
of my
current code revealed that I actually do not need to have 'isolated'
commits from
within a transaction, but I have this feeling as I expand my usage of the
library
I've created to deal with the database access I may find a need for this;
the only
other option is to keep track of every single change in any application
that requires this and create a 'reverse' set of changes if I need to rollback.

So this leads to the question: is there any way to do an 'isolated' commit
from
within a session and if so, how is it done?  As an alternative, is there a
way to
use temporary new sessions to accomplish the same thing?  My current use
in my application is I have a 'Session = scoped_session(sessionmaker())'
line
in a module which I import wherever I need it (essentially as a singleton)
to be
able to access the same session throughout the code.  This would of course
need to change, at least with an application requiring such 'sub commits'.

If any further clarification is needed on the above, please don't hesitate
to ask,
and I thank folks in advance for any assistance they can give.

-- 
- Ken Lareau




[sqlalchemy] Specifying descending order_by on a backref

2009-02-17 Thread Ken

I'm using the declarative extension. When I specify a backref on a
relation, I see the order_by argument and I'm pointing it to the
column object. That works, but I need to sort the backref
descendingly, not ascendingly. I don't see an option for that in the
Relation()/backref() arguments. What am I missing?

Thanks ahead of time.

-Ken



[sqlalchemy] Re: Specifying descending order_by on a backref

2009-02-17 Thread Ken

On Feb 17, 3:13 pm, Michael Trier mtr...@gmail.com wrote:
 backref supports the same arguments as relation(), which include an order_by
 argument.

 http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html?highligh...

Michael, I'm using the order_by argument. The problem is that it's
sorting my values in ascending order instead of descending.



[sqlalchemy] Re: Specifying descending order_by on a backref

2009-02-17 Thread Ken

On Feb 17, 3:29 pm, Michael Trier mtr...@gmail.com wrote:

 Sorry, I misunderstood. You should be able to do a order_by=field.desc().

Ah, that answers my question. Thanks!
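
For the archives, a small runnable sketch of a descending backref; Parent, Child, and the rank column are made-up names for illustration:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, backref, sessionmaker

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))
    rank = sa.Column(sa.Integer)
    # order_by=rank.desc() sorts the backref collection highest-first
    parent = relationship(
        "Parent", backref=backref("children", order_by=rank.desc())
    )

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Parent(id=1)
session.add(p)
session.add_all([Child(id=i, parent_id=1, rank=i) for i in (1, 2, 3)])
session.commit()
```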




[sqlalchemy] Re: New problem with synonym (getter/setter) use in 0.5.2

2009-02-04 Thread Ken

On Feb 4, 6:53 am, Michael Bayer mike...@zzzcomputing.com wrote:
 its not really a bug.   use real column objects for your order by  
 expression, i.e.:

 order_by=_username,

Oh, that makes sense. Thanks for the explanation.

-Ken




[sqlalchemy] Re: New problem with synonym (getter/setter) use in 0.5.2

2009-02-03 Thread Ken

Oh, sorry, forgot to mention:

Python 2.5.2
SQLAlchemy 0.5.2




[sqlalchemy] Re: Vague InterfaceError (threading issue?) running normal query

2008-12-19 Thread Ken

I've created a full test case that should reproduce the error for you.
You'll need to create a database called 'test' on your local machine.
I think I've isolated the problem to the use of the creator keyword
argument, which I use in my application for various reasons.

http://rafb.net/p/8Ayjxc63.html

Results in:

http://rafb.net/p/QPoesQ74.html

Here are the versions I used to cause this bug:

mysqld  Ver 5.0.67-0ubuntu6 for debian-linux-gnu on i486 ((Ubuntu))
MySQLdb 1.2.2

For sqlalchemy, I've tried 0.5rc4 and 0.4.8; it happens with either
version.

On Dec 18, 10:11 am, Michael Bayer mike...@zzzcomputing.com wrote:

 Would need to see how your create_engine() is configured, what  
 versions of MySQL/MySQLdb are in use.



[sqlalchemy] Re: Vague InterfaceError (threading issue?) running normal query

2008-12-19 Thread Ken

Oh, I see. I was unaware that lambda evaluated the result only once,
not each use. Thanks so much for your help.

On Dec 19, 4:36 pm, Michael Bayer mike...@zzzcomputing.com wrote:
def setup(**kwargs):
    connection = MySQLdb.connections.Connection(**kwargs)
    engine = create_engine('mysql://', creator=lambda: connection,
                           pool_recycle=2)

 the creator argument is a callable that returns a new connection  
 when the pool needs one.  Above, you are pre-connecting a single MySQL  
 connection and returning it from the lambda.  Besides producing non-
 mutexed multithreaded access to the single Connection itself (which  
 may or may not be OK for MySQLdb), it also prevents a new connection  
 from being created once it has been recycled.  The previous  
 connection, now closed, is all that's available.

 The correct form is:

def setup(**kwargs):
    def connect():
        return MySQLdb.connections.Connection(**kwargs)
    engine = create_engine('mysql://', creator=connect,
                           pool_recycle=2)
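
The same rule is easy to demonstrate with a driver that needs no server: creator must be a callable that builds a new DBAPI connection on each invocation. Here sqlite3 stands in for MySQLdb purely for illustration:

```python
import sqlite3
from sqlalchemy import create_engine, text

def connect():
    # called by the pool every time it needs a NEW DBAPI connection,
    # so recycled connections can actually be replaced
    return sqlite3.connect(":memory:")

engine = create_engine("sqlite://", creator=connect)
with engine.connect() as conn:
    assert conn.execute(text("SELECT 1")).scalar() == 1
```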





[sqlalchemy] Re: Vague InterfaceError (threading issue?) running normal query

2008-12-18 Thread Ken

On Dec 17, 9:24 am, Michael Bayer mike...@zzzcomputing.com wrote:

 yes but pool_recycle is documented as a number of seconds, and a  
 typical value is 3600 for one hour.  setting it to True means it will  
 recycle the connection every second.   Technically, that shouldn't  
 cause any broken behavior either, but that seems to be the source  
 here.

Right, 3600 just causes it to break every hour instead of every
second. I get the same error, but only after an hour.

In other words, this error seems to happen on recycled connections.




[sqlalchemy] Re: Vague InterfaceError (threading issue?) running normal query

2008-12-17 Thread Ken

Actually, I found that the error goes away if I don't set
pool_recycle=True. However, I'm told pool_recycle is necessary to
prevent 'MySQL has gone away' errors. My guess then is that SQLAlchemy is
preparing a statement; it works right the first time, and thereafter
there is an issue with it.

So you're right, it is deterministic. Ideas on how to address it
though?

On Dec 16, 5:02 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 I've never seen that error before.  threading issues are usually not
 very deterministic either ('happens on the 2nd request guaranteed' is
 deterministic).

 On Dec 16, 2008, at 1:09 PM, Ken wrote:






[sqlalchemy] Vague InterfaceError (threading issue?) running normal query

2008-12-16 Thread Ken

I'm using CherryPy with SQLAlchemy. I've tried this with both
SQLAlchemy's scoped_session thread-local sessions, as well as (also
local-to-thread) sessions I create in my web code manually.

2008-12-16 11:51:14,524 DEBUG cherrypy.error.140068141119376: [16/Dec/
2008:11:51:14] HTTP Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py",
line 1027, in first
    ret = list(self[0:1])
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py",
line 952, in __getitem__
    return list(res)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py",
line 1088, in __iter__
    return self._execute_and_instances(context)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/query.py",
line 1091, in _execute_and_instances
    result = self.session.execute(querycontext.statement,
params=self._params, mapper=self._mapper_zero_or_none(),
_state=self._refresh_state)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/session.py",
line 749, in execute
    return self.__connection(engine, close_with_result=True).execute(
  File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py",
line 806, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py",
line 856, in execute_clauseelement
    return self.__execute_context(context)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py",
line 878, in __execute_context
    self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py",
line 927, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor)
  File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py",
line 909, in _handle_dbapi_exception
    raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
InterfaceError: (InterfaceError) (0, '') u'SELECT login.username AS
login_username, login.password AS login_password, login.`loginId` AS
`login_loginId`, login.salt AS login_salt, login.email AS login_email,
login.status AS login_status, login.message AS login_message,
login.`accountId` AS `login_accountId` \nFROM login \nWHERE
login.`loginId` = %s \n LIMIT 0, 1' [36L]

The first time this query runs, it works fine. Thereafter I get this
error message. Any ideas?




[sqlalchemy] Re: Access to AS/400 data

2008-05-20 Thread Ken Kuhlman

On Fri, May 16, 2008 at 3:18 PM, Carlos Hanson [EMAIL PROTECTED] wrote:
 On Fri, May 16, 2008 at 10:49 AM, Jim Steil [EMAIL PROTECTED] wrote:
 Carlos Hanson wrote:
 On Fri, May 16, 2008 at 8:14 AM, Michael Bayer [EMAIL PROTECTED]
 wrote:
 On May 16, 2008, at 10:55 AM, Carlos Hanson wrote:
 On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote:

 Hi:

 Can anyone tell me if it is possible to access data on an AS/400
 through
 SQLAlchemy?

   -Jim


It's possible if you use DB2 Connect to make your connection to the
iSeries.  See http://code.google.com/p/ibm-db/.  The support group is
[EMAIL PROTECTED]

Right now, the AS/400 is listed as a "future supported" database [1],
but there have been some reports from people getting it working.  You
might want to make your needs known on the support list and also follow
up with your IBM rep.

[1] http://code.google.com/p/ibm-db/wiki/README



 I'm connecting to an AS/400 using pyodbc, so I am sure that I can do
 it through SQLAlchemy.  If I have a chance to test it, I'll post my
 success.  But if you get an ODBC connection set up, there should be
 no problem.


 well, connecting is just the beginning.  to take advantage of SQLA,
 you would also want an AS/400 dialect that knows how to render SQL in
 the way an AS/400 likes.  I'm not familiar with anyone working on an
 AS/400 dialect at the moment.   I only know of the DB2 dialect which is a
 separate project (but maybe ask on their list since they work for IBM).


 This is a good point. I have to create aliases to a file/member
 combination to select data. I guess I wouldn't expect SQLAlchemy to
 implement that by default, since most every other database uses
 tables.


Right... but there's no reason you can't drop down to the db-api level
for one-off things like partitioning data with members.   Once the
alias is created, it can be treated like any other table.

-ken
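The drop-to-db-api idea can be illustrated with the stdlib sqlite3 module standing in for an AS/400 driver; the table and view names here are made up for the example. The one-off vendor-specific SQL runs directly at the db-api level, and the resulting alias then behaves like any ordinary table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders_2008 (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders_2008 VALUES (1, 9.5)")

# One-off setup at the db-api level: create an alias-like view with raw SQL.
# On the AS/400 this would be the CREATE ALIAS for a file/member combination.
conn.execute("CREATE VIEW current_orders AS SELECT * FROM orders_2008")

# From here on, the alias can be queried like any other table.
rows = conn.execute("SELECT id, total FROM current_orders").fetchall()
print(rows)  # [(1, 9.5)]
```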




[sqlalchemy] one-to-many access and modification

2007-09-28 Thread Ken Pierce

Hello all,

I just got into using sqlalchemy today and I have a question about
accessing data in a one-to-many relationship.

I've got a simplified example below. Basically the system is for a
group of us voting on DVDs we want to rent. There are users and films,
and for each film a user makes a yes / no choice (where '?' is
undecided).

# create tables
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(40)),
    mysql_engine='InnoDB')

films_table = Table('films', metadata,
    Column('fid', Integer, primary_key=True),
    Column('title', String(128)),
    mysql_engine='InnoDB')

choices_table = Table('choices', metadata,
    Column('fid', Integer, ForeignKey('films.fid'), primary_key=True),
    Column('id', Integer, ForeignKey('users.id'), primary_key=True),
    Column('choice', MSEnum('?', 'Y', 'N')),
    mysql_engine='InnoDB')

# classes here
class User(object):
    ...
class Film(object):
    ...
class Choice(object):
    ...

# mappers
mapper(User, users_table)
mapper(Film, films_table, properties={'choices':relation(Choice)})
mapper(Choice, choices_table)

So, if I retrieve a Film object f, f.choices gives me a list of Choice
objects. From that Film object, I want to look up (and possibly
modify) a user's choice.

My question is: is there a way to set up this relationship so that I
could access a user's choice like a dictionary (either with a user id
or User object), or do I have to write a method to search the list and
return the object -- in which case, would the changes be reflected in
the session / database?
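For what it's worth, SQLAlchemy's mapped collections (the `collection_class` argument to `relation()`, with a column- or attribute-keyed dictionary collection) are the built-in route to dict-style access in later releases. The search-the-list fallback, sketched below with plain classes and made-up attribute names, also keeps session tracking intact, because the helper returns the very same Choice instance the mapped collection holds:

```python
class Choice(object):
    def __init__(self, user_id, choice='?'):
        self.id = user_id          # user id column on the choices table
        self.choice = choice

class Film(object):
    def __init__(self):
        self.choices = []          # stands in for the mapped 'choices' relation

    def choice_for(self, user_id):
        # Linear search; returns the same instance held by the collection,
        # so in-place edits are the ones a session would flush.
        for c in self.choices:
            if c.id == user_id:
                return c
        return None

f = Film()
f.choices.append(Choice(user_id=1))
f.choice_for(1).choice = 'Y'       # mutate through the helper
print(f.choices[0].choice)         # Y
```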

Thanks in advance,
Most impressed so far,

Ken Pierce.





[sqlalchemy] django + sqlalchemy = ?

2007-09-12 Thread Ken Kuhlman
Cross-posted to the django and sqlalchemy lists

What is the state of SQLAlchemy integration with Django?  I saw somewhere
that it was possible to use them together, but the author didn't provide any
details, and the SQLAlchemy branch of Django hasn't had any commits since
late last year.  Was the original goal of the branch [1] too aggressive?

Thanks in advance.
-Ken

[1] Integrating Django and SQLAlchemy, Aug 29, 2006:
http://groups.google.com/group/django-developers/browse_thread/thread/5149e1c60dc65bff




[sqlalchemy] Re: save_or_update() with a unique but not primary key

2007-03-14 Thread Ken Kuhlman
Perhaps he's looking for an upsert function?  That's sometimes handy, and to
be truly useful would have to be able to use any given key on the table.

I hacked up an upsert for SQLObject once, but it was so ugly I never
contributed it.   It did make the poor man's replication system that I was
working on simpler, though.
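The shape of such an upsert, keyed on an arbitrary unique column, can be sketched with the stdlib sqlite3 module (table and column names are invented for the example): try an UPDATE against the unique key first, and INSERT only when no row matched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users "
             "(id INTEGER PRIMARY KEY, email TEXT UNIQUE, name TEXT)")

def upsert_user(conn, email, name):
    # UPDATE keyed on the unique column; rowcount tells us if a row matched.
    cur = conn.execute("UPDATE users SET name = ? WHERE email = ?",
                       (name, email))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO users (email, name) VALUES (?, ?)",
                     (email, name))

upsert_user(conn, "a@example.com", "Alice")    # no match: inserts
upsert_user(conn, "a@example.com", "Alicia")   # match: updates in place
rows = conn.execute("SELECT email, name FROM users").fetchall()
print(rows)  # [('a@example.com', 'Alicia')]
```

Note the two-step pattern is not concurrency-safe without a transaction or a retry on the unique constraint; it only illustrates the shape.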


On 3/13/07, Michael Bayer [EMAIL PROTECTED] wrote:



 save_or_update() doesn't take any kind of primary key or unique key
 argument.  no specification of anything is needed.

 Sean Davis wrote:
 
  On Tuesday 13 March 2007 07:35, Sean Davis wrote:
  We are creating a database that will have a set of autoincrement
 primary
  keys on the tables.  However, many of the tables also have one or more
  unique keys associated with them.  Can we use save_or_update() (and, by
  extension, cascade='save_or_update', etc.) by specifying one of the
  unique
  keys rather than specifying the primary key directly?
 
  Tried it.  Looks like not.  Sorry for the noise on the list.
 
  Sean
 
  
 


 





[sqlalchemy] assign_mapper breaks polymorphic multi-table inheritance in 3.1?

2006-12-08 Thread Ken Kuhlman

I'm trying to use assign_mapper with polymorphic multiple table
inheritance, and running into problems with the primary key sequencing.
 Is this a supported use of sqlalchemy?  I'm running version 0.3.1.

Sample code below.  If use_assign_mapper is false, then the script
works with both postgres and sqlite.  If it's true, then sqlite fails on
the assertion that the manager and employee ids be different, and
postgres throws an exception that 'managers_person_id_seq does not
exist'.

Thanks in advance for any help, and thanks for an otherwise awesome
product!
-Ken


#!/usr/bin/env python
# The setup here is identical to the example at:
# http://sqlalchemy.org/docs/adv_datamapping.myt#advdatamapping_inheritance_multiple
# except that it conditionally uses assign_mapper instead of the default mapper.
from sqlalchemy import *
from sqlalchemy.ext.assignmapper import assign_mapper
from sqlalchemy.ext.sessioncontext import SessionContext

use_postgres = False
use_assign_mapper = True

if not use_postgres:
    url = 'sqlite:///:memory:'
else:
    url = 'postgres://ken:[EMAIL PROTECTED]/tutorial'

engine = create_engine(url)
metadata = BoundMetaData(engine)
context = SessionContext(create_session)

employees = Table('employees', metadata,
    Column('person_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('type', String(30)))

engineers = Table('engineers', metadata,
    Column('person_id', Integer, ForeignKey('employees.person_id'),
           primary_key=True),
    Column('engineer_info', String(50)),
    )

managers = Table('managers', metadata,
    Column('person_id', Integer, ForeignKey('employees.person_id'),
           primary_key=True),
    Column('manager_data', String(50)),
    )

class Employee(object):
    pass
class Engineer(Employee):
    pass
class Manager(Employee):
    pass

person_join = polymorphic_union(
    {'engineer': employees.join(engineers),
     'manager': employees.join(managers),
     'person': employees.select(employees.c.type == 'person'),
    }, None, 'pjoin')

metadata.create_all()

###
if not use_assign_mapper:
    # This works:
    person_mapper = mapper(Employee, employees, select_table=person_join,
                           polymorphic_on=person_join.c.type,
                           polymorphic_identity='person')
    mapper(Engineer, engineers, inherits=person_mapper,
           polymorphic_identity='engineer')
    mapper(Manager, managers, inherits=person_mapper,
           polymorphic_identity='manager')
else:
    # This doesn't:
    person_mapper = assign_mapper(context, Employee, employees,
                                  select_table=person_join,
                                  polymorphic_on=person_join.c.type,
                                  polymorphic_identity='person')
    assign_mapper(context, Engineer, engineers, inherits=person_mapper,
                  polymorphic_identity='engineer')
    assign_mapper(context, Manager, managers, inherits=person_mapper,
                  polymorphic_identity='manager')

context.current.echo_uow=True

eng = Engineer()
mgr = Manager()
context.current.save(eng)
context.current.save(mgr)
context.current.flush()

print "eng: %d mgr: %d" % (eng.person_id, mgr.person_id)
assert(eng.person_id != mgr.person_id)

