Re: [sqlalchemy] Re: Working with func in dates, No response, no error message

2023-04-05 Thread James Paul Chibole
Sorry, it is a typo, it should actually be  .filter(func.strftime('%m',
Event.event_date) == datetime.today().strftime('%m'))

Let me go through your latest response and I will get back to you. Thank you
for the prompt response.

On Wed, Apr 5, 2023 at 10:16 AM Lele Gaifax  wrote:

> Nancy Andeyo  writes:
>
> > However, the part with problems is this one: .filter(func.strftime('%m',
> > Event.event_date = datetime.today().strftime('%m'))) where the aim is to
> > filter out events that will take place in the current month. This is the
> > section that I posted, yet I needed to post the entire query so that what
> > I am intending to achieve is understood.
>
> I can't say if the typos are due to you rewriting the code in these
> messages, or if instead they are effectively present in the real code,
> but the above is also not correct:
>
>   .filter(func.strftime('%m', Event.event_date =
> datetime.today().strftime('%m')))
>
> This should raise a syntax error when evaluated by Python...
>
> For comparison, the following complete script works for me:
>
>   from datetime import date
>   from pprint import pprint
>
>   from sqlalchemy import create_engine
>   from sqlalchemy import func
>   from sqlalchemy.orm import Session
>
>   from sol.models import Tourney
>
>
>   engine = create_engine('sqlite:///development.db')
>   session = Session(engine)
>
>   q = session.query(Tourney)
>   q = q.filter(func.strftime('%m', Tourney.date) ==
> date.today().strftime('%m'))
>
>   pprint([(t.description, t.date) for t in q.limit(3).all()])
>
> and emits
>
>   [('2° Torneo', datetime.date(2001, 4, 1)),
>('7° Torneo', datetime.date(2004, 4, 24)),
>('7° Torneo', datetime.date(2005, 4, 30))]
>
> Hope this helps,
> ciao, lele.
> --
> nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
> real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
> l...@metapensiero.it  | -- Fortunato Depero, 1929.
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example.  See  http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "sqlalchemy" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/sqlalchemy/4oPfuzAjw48/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> sqlalchemy+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sqlalchemy/87pm8ier3i.fsf%40metapensiero.it
> .
>


-- 
*James Paul Chibole*
   -In the name of God-

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CA%2B9bh8dc1ph6UC49pqjVeVfg-DEo%2BE2vFxu0NCXoB9AonsWXtQ%40mail.gmail.com.


Re: [sqlalchemy] Working with func in dates, No response, no error message

2023-03-30 Thread James Paul Chibole

Thank you Philip for your suggestion.
On Thursday, March 30, 2023 at 9:38:08 PM UTC+3 Philip Semanchuk wrote:

>
>
> On Mar 30, 2023, at 2:32 PM, James Paul Chibole wrote:
> >
> > Hi everyone, I am trying to retrieve deceased persons who died in the
> > current month but the output gives no result. Here is my code, with the
> > query done in Python Flask:
> >
> >     from datetime import datetime
> >     from sqlalchemy import func
> >
> >     @app.route('/user/<username>')
> >     @login_required
> >     def user(username):
> >         current_month = datetime.today().date().strftime("%B")
> >         monthly_anniversaries = current_user.followed_deaths().filter(
> >             func.strftime("%B", Deceased.burial_cremation_date_of_death) == current_month
> >         ).order_by(Deceased.timestamp.desc())
> >         return render_template("user.html", monthly_anniversaries=monthly_anniversaries)
>
>
> Flask is an extra layer of complication here that’s getting in the way of 
> what you’re trying to debug. That’s not a knock on Flask (I use it too), 
> it’s just not germane to a SQLAlchemy problem. My suggestion is that you 
> try putting a breakpoint in your flask app so you can play with the query 
> inside the debugger. Ensure that current_month is what you think it is, 
> hardcode query params instead of passing variables to see if that changes 
> results, remove some of the filter clauses to see if the results change the 
> way you expect, etc. And of course having a look at the SQL that’s being 
> sent to the server will give you some clues too, although that can be hard 
> to get to depending on your environment.
>
>
> Hope this helps
> Philip
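
For reference, a runnable sketch of that kind of inspection (the model here
is a stand-in for the one in the thread; echo=True and str(query) are the
stock SQLAlchemy facilities for seeing the emitted SQL):

    from datetime import date
    from sqlalchemy import Column, Date, Integer, create_engine, func
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Deceased(Base):  # stand-in for the model in the thread
        __tablename__ = "deceased"
        id = Column(Integer, primary_key=True)
        burial_cremation_date_of_death = Column(Date)

    engine = create_engine("sqlite://", echo=True)  # echo=True logs every statement
    Base.metadata.create_all(engine)
    session = Session(engine)

    q = session.query(Deceased).filter(
        func.strftime("%m", Deceased.burial_cremation_date_of_death)
        == date.today().strftime("%m"))

    print(str(q))   # render the SELECT without executing it
    print(q.all())  # executing it also logs the statement and its parameters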

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/f6a17840-cdfd-4f26-bdc5-0a75af82fa91n%40googlegroups.com.


[sqlalchemy] Working with func in dates, No response, no error message

2023-03-30 Thread James Paul Chibole


Hi everyone, I am trying to retrieve deceased persons who died in the 
current month but the output gives no result. Here is my code, with the 
query done in Python Flask:

    from datetime import datetime
    from sqlalchemy import func

    @app.route('/user/<username>')
    @login_required
    def user(username):
        current_month = datetime.today().date().strftime("%B")
        monthly_anniversaries = current_user.followed_deaths().filter(
            func.strftime("%B", Deceased.burial_cremation_date_of_death) == current_month
        ).order_by(Deceased.timestamp.desc())
        return render_template("user.html", monthly_anniversaries=monthly_anniversaries)

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/3b972e00-e172-402a-983d-30ef7e22aef7n%40googlegroups.com.


Re: [sqlalchemy] SQL Server, OPENJSON and large blobs of JSON as placeholder bind values

2020-11-30 Thread Paul Harrington
No worries! I will have a shot at some of them myself. I think it will be 
much easier than my first Python project in 2008, which was trying to write 
a Sybase backend for SQLAlchemy!

On Monday, November 30, 2020 at 6:19:50 PM UTC-5 Mike Bayer wrote:

> I had a notion of writing a server side ORM likely for PostgreSQL where 
> we'd leverage PostgreSQL's Python scripting platform to do something 
> interesting along those lines.
>
> sorry my answers were so negative, those are just all very 
> SQL-Server-esque patterns which are fine but don't have much applicability 
> elsewhere! 
>
>
>
> On Mon, Nov 30, 2020, at 2:22 PM, Paul Harrington wrote:
>
> OPENJSON is awesome! I think you may find it useful as a performance 
> optimization for persisting a session with  thousands of dirty objects from 
> the same class: you could serialize the state as a JSON object and send it 
> over to the server as a single scalar and 'inflate' it back into rowsets at 
> the server side.
>
> Negative as your answers were, they are still very useful, as now I know 
> what needs to be done.
>
> pjjH
>
>
> From 
> https://docs.microsoft.com/en-us/sql/t-sql/functions/openjson-transact-sql?view=sql-server-ver15
> *OPENJSON* is a table-valued function that parses JSON text and returns 
> objects and properties from the JSON input as rows and columns. In other 
> words, *OPENJSON* provides a rowset view over a JSON document. You can 
> explicitly specify the columns in the rowset and the JSON property paths 
> used to populate the columns. Since *OPENJSON* returns a set of rows, you 
> can use *OPENJSON* in the FROM clause of a Transact-SQL statement just as 
> you can use any other table, view, or table-valued function.  
>
> On Monday, November 30, 2020 at 1:52:03 PM UTC-5 Mike Bayer wrote:
>
>
>
> On Mon, Nov 30, 2020, at 1:39 PM, Paul Harrington wrote:
>
> Hi Mike!
> I guess that I am looking for advice on a 'reasonably good way' to do each 
> of the different parts of the solution. What I have at the moment is all 
> hand-rolled SQL and I guess I am looking at some way of moving some of this 
> to sqlalchemy so that the technique is more composable.
>
>1. Is there any SQL Server dialect support for CROSS APPLY OPENJSON? 
>any tie-ins with json schema? How do you suggest making a CTE given a 
>table-like json schema?
>
>
> I don't know anything about OPENJSON so I don't have any insight on how 
> that might work.
>
>
>
>1. What is the best way to emulate multi-statement batches? 
>
>
> it depends on what you are using them for.   in your case it looks like 
> you need them to support SQL-side variables, which we have no support for; 
> this is imperative programming, and SQLAlchemy's expression language draws 
> the line at where declarative becomes imperative.
>
>
>1. Make individual statements in SA and then compile them to text and 
>reassemble them with sql.text? This is the equivalent to what is referred 
>to as 'inner_sql' in my example program.
>
>
> yeah I saw that you were doing that, more robustly you'd use a @compiles 
> recipe, see <https://docs.sqlalchemy.org/en/13/core/compiler.html>
> https://docs.sqlalchemy.org/en/13/core/compiler.html .
>
>
>
>1. Is there a SQL Alchemy block for EXEC sproc? how might I rewrite 
>the equivalent of what is assigned to outer_sql in my example program?
>
>
> the Python DBAPI only guarantees that EXEC is available via the callproc() 
> interface: <https://www.python.org/dev/peps/pep-0249/#callproc>
> https://www.python.org/dev/peps/pep-0249/#callproc  which SQLAlchemy does 
> not include in its facade, as there's not much point.   Our guidelines for 
> using callproc are at 
> <https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures>
>  
> <https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures>
> https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures
>  .   
> to the degree that your Python driver (pyodbc?) supports EXEC passed into 
> cursor.execute(), you could send it to regular connection.execute() as well.
>
>
>
>
>1. is dropping down to dbapi/pyodbc still the best way to handle 
>multiple result-sets? 
>
>
> yes, there was a proposal for this long ago but multiple result sets are 
> really not commonly used outside of SQL Server except in very special 
> cases. it's not worth the extra complexity and luckily I was able 
> to avoid dealing with it when I just did the 1.4 refactoring of the 
> result object.
>
>
> Hope you are doing well and looking forward to getting back up to speed 
> with sqlalchemy and seeing what has been incorporated since 0.8!
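
The stored-procedure guidelines Mike links above boil down to dropping to
the raw DBAPI connection; a minimal sketch (assuming an engine from
create_engine; the procedure name and arguments are illustrative, and this
assumes a DBAPI that implements callproc(), e.g. pymssql, whereas pyodbc
instead takes an EXEC string via cursor.execute()):

    # get the DBAPI connection underneath the SQLAlchemy engine
    connection = engine.raw_connection()
    try:
        cursor = connection.cursor()
        cursor.callproc("my_procedure", ["x", "y", "z"])  # hypothetical proc
        results = list(cursor.fetchall())
        cursor.nextset()  # advance to the next result set, if the driver returns several
        cursor.close()
        connection.commit()
    finally:
        connection.close()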

Re: [sqlalchemy] SQL Server, OPENJSON and large blobs of JSON as placeholder bind values

2020-11-30 Thread Paul Harrington
OPENJSON is awesome! I think you may find it useful as a performance 
optimization for persisting a session with  thousands of dirty objects from 
the same class: you could serialize the state as a JSON object and send it 
over to the server as a single scalar and 'inflate' it back into rowsets at 
the server side.

Negative as your answers were, they are still very useful, as now I know 
what needs to be done.

pjjH


>From 
>https://docs.microsoft.com/en-us/sql/t-sql/functions/openjson-transact-sql?view=sql-server-ver15
*OPENJSON* is a table-valued function that parses JSON text and returns 
objects and properties from the JSON input as rows and columns. In other 
words, *OPENJSON* provides a rowset view over a JSON document. You can 
explicitly specify the columns in the rowset and the JSON property paths 
used to populate the columns. Since *OPENJSON* returns a set of rows, you 
can use *OPENJSON* in the FROM clause of a Transact-SQL statement just as 
you can use any other table, view, or table-valued function.  

On Monday, November 30, 2020 at 1:52:03 PM UTC-5 Mike Bayer wrote:

>
>
> On Mon, Nov 30, 2020, at 1:39 PM, Paul Harrington wrote:
>
> Hi Mike!
> I guess that I am looking for advice on a 'reasonably good way' to do each 
> of the different parts of the solution. What I have at the moment is all 
> hand-rolled SQL and I guess I am looking at some way of moving some of this 
> to sqlalchemy so that the technique is more composable.
>
>1. Is there any SQL Server dialect support for CROSS APPLY OPENJSON? 
>any tie-ins with json schema? How do you suggest making a CTE given a 
>table-like json schema?
>
>
> I don't know anything about OPENJSON so I don't have any insight on how 
> that might work.
>
>
>
>1. What is the best way to emulate multi-statement batches? 
>
>
> it depends on what you are using them for.   in your case it looks like 
> you need them to support SQL-side variables, which we have no support for; 
> this is imperative programming, and SQLAlchemy's expression language draws 
> the line at where declarative becomes imperative.
>
>
>1. Make individual statements in SA and then compile them to text and 
>reassemble them with sql.text? This is the equivalent to what is referred 
>to as 'inner_sql' in my example program.
>
>
> yeah I saw that you were doing that, more robustly you'd use a @compiles 
> recipe, see <https://docs.sqlalchemy.org/en/13/core/compiler.html>
> https://docs.sqlalchemy.org/en/13/core/compiler.html .
>
>
>
>1. Is there a SQL Alchemy block for EXEC sproc? how might I rewrite 
>the equivalent of what is assigned to outer_sql in my example program?
>
>
> the Python DBAPI only guarantees that EXEC is available via the callproc() 
> interface: <https://www.python.org/dev/peps/pep-0249/#callproc>
> https://www.python.org/dev/peps/pep-0249/#callproc  which SQLAlchemy does 
> not include in its facade, as there's not much point.   Our guidelines for 
> using callproc are at 
> <https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures>
>  
> <https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures>
> https://docs.sqlalchemy.org/en/13/core/connections.html#calling-stored-procedures
>  .   
> to the degree that your Python driver (pyodbc?) supports EXEC passed into 
> cursor.execute(), you could send it to regular connection.execute() as well.
>
>
>
>
>1. is dropping down to dbapi/pyodbc still the best way to handle 
>multiple result-sets? 
>
>
> yes, there was a proposal for this long ago but multiple result sets are 
> really not commonly used outside of SQL Server except in very special 
> cases. it's not worth the extra complexity and luckily I was able 
> to avoid dealing with it when I just did the 1.4 refactoring of the 
> result object.
>
>
> Hope you are doing well and looking forward to getting back up to speed 
> with sqlalchemy and seeing what has been incorporated since 0.8!
>
>
> oh quite a lot!  just none of what you're working on above :(  
>
>
>
>
>
>
> pjjH
>
> On Monday, November 30, 2020 at 1:08:14 PM UTC-5 Mike Bayer wrote:
>
>
> hey there -
>
> took a quick look and we don't support anything with variable 
> declarations, multiple statements in a single string, or multiple result 
> sets.  All of that is outside of the SQLAlchemy expression language's scope.
>
> Since what you're doing is extremely specific to a certain database, if 
> your approach works what is the problem you are seeking to solve ?
>
>
>
> On Mon, Nov 30, 2020, at 10:03 AM, Paul Harrington wrote:
>
> Hello Community!
> I have no

[sqlalchemy] SQL Server, OPENJSON and large blobs of JSON as placeholder bind values

2020-11-30 Thread Paul Harrington
Hello Community!
I have not posted here for several years as I have been getting along just 
fine with the excellent SQLAlchemy toolkit and the excellent documentation. 
However, I am trying to do something a bit fancy since all the SQL Server 
dataservers I use are running versions that support OPENJSON. A number of 
our applications have used staging techniques to copy tables of data to the 
dataserver so that that can be used in JOINs (e.g. send up 10,000 
identifiers into the table rather than doing RBAR or batching into IN 
clauses). This technique is IO intensive and has quite a bit of overhead. 
An alternative approach is to serialize the data into a big blob of JSON, 
send it over as a single scalar and then expand it out to a table via a CTE 
on the dataserver. We have found the performance of this to be excellent 
from scripting language, presumably because we are avoiding so many API 
calls. However, I have not found a way to do this cleanly from SQLAlchemy 
and end up using sql.text a lot. Also, because it does not seem possible to 
re-use the same placeholder multiple times in a statement, if we want to 
expand out the JSON multiple times (e.g. if the JSON is nested and has 
multiple tables in it), we work around that using [sys].[sp_executesql]. 
Please see snippet below and a complete script (with create_engine 
redacted) attached.

What I am looking for is some guidance on how to use SQLAlchemy to produce 
CTEs that reference the same variable (@blob_o_json) multiple times without 
having to pass in the same value multiple times.

thanks in advance!
pjjH



# TODO: formulate this as a SQLAlchemy expression (is there an exec() element?)
outer_sql = r'''
EXEC [sys].[sp_executesql] @statement = :sql,
   @params = N'@bind_value_blob_o_json varchar(max)',
   @bind_value_blob_o_json = :json_blob
'''

# we prepare the *outer* statement that has a single command in it
statement = sql.text(outer_sql)

# now execute that, passing in the unicode value of the inner SQL

logging.basicConfig(format='%(asctime)s %(message)s')
logging.getLogger().setLevel(logging.DEBUG)
nrows = [1000, 1, 10, 20, 50]
for n in nrows:

    # we cons up a list of dicts to represent a single 'inline' table with
    # lots of rows
    logging.debug("passing %d items" % (n))
    blob_o_json = json.dumps([dict(rn=x, foo=x*2, bar=x-1, fruit='banana',
                                   flavor='yummy') for x in range(0, n)])

    # Note: the named arguments in this next line correspond to the :sql and
    # :json_blob placeholders in outer_sql
    result = engine.execute(statement, sql=six.text_type(inner_sql),
                            json_blob=blob_o_json)
    rs = result.fetchall()
    #logging.debug((len(rs), rs[]))

    # drop down to DBAPI to process each result-set. The application should
    # know how many result-sets will be present.

    # I think it is likely that the vast majority of queries will just have a
    # single result-set.

    # I did not know how to get to anything other than the first result-set ...
    # I thought there was probably some pyodbc magic to make it work with
    # SQLAlchemy and -- sure enough! -- there was a ticket with a workaround:
    # https://github.com/sqlalchemy/sqlalchemy/issues/1635#issuecomment-441907416

    #cursor = result.cursor
    #rs1 = cursor.fetchall()
    #cursor.nextset()
    #rs2 = cursor.fetchall()

    #logging.debug((len(rs1), rs1[-4:], rs2[-4:]))
    logging.debug("finished %d" % (n))

logging.debug("done")

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/345f67ce-a4b9-4575-87bc-81653868908en%40googlegroups.com.
import six
import json
import logging

import urllib

import sqlalchemy
from sqlalchemy import create_engine, sql


# we are using the SQL Express 2017 'metatweaker' instance as that is a readily accessible
# engine that is running a recent enough revision of SQL Server ... OPENJSON is not in SQL 2014
# Also, if you want to see what the performance characteristics of this look like on the server side, you need to 
# have some roles such as VIEW SERVER STATE (e.g. to look at query plans)

# engine = create_engine("mssql+pyodbc://@dbreins.deshaw.com\DBREINS/master?driver=SQL+Server")
# engine = create_engine(r'mssql+pyodbc://@fretsaw.deshaw.com\sqlexpress/metatweaker?driver=SQL+Server')


#engine = create_engine(r"mssql+pyodbc://@frosthawk.deshaw.com\sqlexpress2017/metatweaker?driver=SQL+Server")
#engine = 

[sqlalchemy] RE: [sachem] Continuing support for Firebird database

2019-12-24 Thread Paul Graves-DesLauriers
Fantastic!  I could use some help getting over some hurdles in getting things 
up and running.

 

I’ve already used your project as my example for getting going with 
sqlalchemy-firebird.  Let me commit what I have so far, along with some 
questions about testing.  I’ll probably be doing this after Christmas – I’ll 
let you know.

 

Thanks again, I’m looking forward to working with you.

 

Paul

 

From: sqlalchemy@googlegroups.com  On Behalf Of 
Gord Thompson
Sent: Saturday, December 21, 2019 10:50 AM
To: sqlalchemy 
Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

Hi Paul.

 

I'm the one who volunteered to revive the sqlalchemy-access dialect and 
maintain it. Feel free to have a look at

 

https://github.com/sqlalchemy/sqlalchemy-access

 

to get an idea of the project structure. If you need a hand getting things up 
and running I'd be happy to help.

 

Cheers,

Gord



On Monday, November 11, 2019 at 10:21:51 PM UTC-7, Paul Graves-DesLauriers 
wrote:

I’m reviewing the README.dialects and README.unittests files.  It certainly 
appears that this is a task that I can take on.

 

How do we go about splitting the Firebird support into a separate plugin?  Or 
perhaps, how was the sqlalchemy-access project created from the sqlalchemy 
master?

 

Thanks,

Paul

 

From: sqlal...@googlegroups.com On Behalf Of Mike Bayer
Sent: Monday, November 11, 2019 12:05 PM
To: noreply-spamdigest via sqlalchemy
Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

 

 

On Fri, Nov 8, 2019, at 11:05 PM, Paul Graves-DesLauriers wrote:

This is unexpected…

 

I can do this if we can get things set up similarly to the sqlalchemy-access 
dialect.

 

Is there a test set for confirming the dialect is acceptable for SQLAlchemy?  
Is there a sample test set I can use to confirm Firebird will work as expected 
(Like you use for SQL Server, Oracle, PostgreSQL, etc.)?  If so, I’m willing to 
give it a shot.  I am extremely happy with Firebird as a database and will 
probably end up doing this kind of work anyway…

 

 

SQLAlchemy contains a test suite that is explicitly for third party dialects to 
aim towards. The vast majority of SQLAlchemy tests aren't needed for third 
party dialects to support, so the third party suite contains a much more 
carefully maintained set of "exportable" tests that every dialect should 
address, either by ensuring support is present or by marking a certain 
capability as skipped.

 

The background for this system is discussed at 
https://github.com/sqlalchemy/sqlalchemy/blob/master/README.dialects.rst. 
The sqlalchemy-access work we just did allowed us to fix a few cases, so if 
Firebird can target this, that would be how it does its tests once on the 
outside.
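
For context, the layout that README.dialects.rst describes centers on
re-exporting SQLAlchemy's pytest plugin from the external dialect's own
conftest.py; a minimal sketch (the module and class names for a Firebird
dialect are assumptions here, not the eventual package's real ones):

    # conftest.py at the root of a sqlalchemy-firebird checkout (sketch)
    from sqlalchemy.dialects import registry

    # make the out-of-tree dialect importable as "firebird+fdb://..."
    registry.register("firebird.fdb", "sqlalchemy_firebird.fdb", "FBDialect_fdb")

    # pull in SQLAlchemy's exportable third-party test suite machinery
    from sqlalchemy.testing.plugin.pytestplugin import *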

 

 

 

 

 

Paul

 

From: sqlal...@googlegroups.com On Behalf Of Mike Bayer

Sent: Friday, November 08, 2019 12:36 PM

To: noreply-spamdigest via sqlalchemy

Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

Would you be interested in maintaining / owning / releasing Firebird 
separately, under github.com/sqlalchemy/sqlalchemy-firebird?  Because this 
is not a widely used dialect and I can't maintain CI for it under my own 
umbrella, it would benefit by being maintained and released separately.  We 
have recently modernized sqlalchemy-access with a new maintainer in a 
similar way.

 

 

 

On Fri, Nov 8, 2019, at 1:57 PM, Paul Graves-DesLauriers wrote:

I saw in the SQLAlchemy 1.3 Documentation page for Firebird 
<https://docs.sqlalchemy.org/en/13/dialects/firebird.html>  that the 'Firebird 
dialect within SQLAlchemy is not currently supported.  The dialect is not 
tested within continuous integration and is likely to have many issues and 
caveats not currently handled.'

 

I'd like to see the Firebird database have ongoing support within SQLAlchemy.  
Firebird has been my database of choice since version 1.0, am currently using 
it, and plan to continue using it in the future.  It is still in active 
development.

 

For Python, I use the fdb DBAPI (official driver) along with Firebird dialect3 
(current).  I would suggest removing support for the old dialect1 and 
kinterbasdb driver to simplify maintenance and testing.

 

 

Can I help with getting the dialect tested within the continuous integration 
and with resolving issues and caveats?  I use Jenkins on a daily basis and have 
been using Python for about 8 years. I'm now a performance test engineer, but 
have many years of experience as a software developer as well.  I am also 
willing to help with documentation and support.

 

Are there any other issues causing the Firebird database to lose ongoing 
support?

 

Thanks,

Paul

 

 

--

SQLAlchemy -

The Python SQL Toolkit and Object Relational Mapper

 

http:/

RE: [sqlalchemy] Continuing support for Firebird database

2019-11-11 Thread Paul Graves-DesLauriers
I’m reviewing the README.dialects and README.unittests files.  It certainly 
appears that this is a task that I can take on.

 

How do we go about splitting the Firebird support into a separate plugin?  Or 
perhaps, how was the sqlalchemy-access project created from the sqlalchemy 
master?

 

Thanks,

Paul

 

From: sqlalchemy@googlegroups.com  On Behalf Of 
Mike Bayer
Sent: Monday, November 11, 2019 12:05 PM
To: noreply-spamdigest via sqlalchemy 
Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

 

 

On Fri, Nov 8, 2019, at 11:05 PM, Paul Graves-DesLauriers wrote:

This is unexpected…

 

I can do this if we can get things set up similarly to the sqlalchemy-access 
dialect.

 

Is there a test set for confirming the dialect is acceptable for SQLAlchemy?  
Is there a sample test set I can use to confirm Firebird will work as expected 
(Like you use for SQL Server, Oracle, PostgreSQL, etc.)?  If so, I’m willing to 
give it a shot.  I am extremely happy with Firebird as a database and will 
probably end up doing this kind of work anyway…

 

 

SQLAlchemy contains a test suite that is explicitly for third party dialects to 
aim towards. The vast majority of SQLAlchemy tests aren't needed for third 
party dialects to support, so the third party suite contains a much more 
carefully maintained set of "exportable" tests that every dialect should 
address, either by ensuring support is present or by marking a certain 
capability as skipped.

 

The background for this system is discussed at 
https://github.com/sqlalchemy/sqlalchemy/blob/master/README.dialects.rst. 
The sqlalchemy-access work we just did allowed us to fix a few cases, so if 
Firebird can target this, that would be how it does its tests once on the 
outside.

 

 

 

 

 

Paul

 

From: sqlalchemy@googlegroups.com On Behalf Of Mike Bayer

Sent: Friday, November 08, 2019 12:36 PM

To: noreply-spamdigest via sqlalchemy <sqlalchemy@googlegroups.com>

Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

Would you be interested in maintaining / owning / releasing Firebird 
separately, under github.com/sqlalchemy/sqlalchemy-firebird?  Because this 
is not a widely used dialect and I can't maintain CI for it under my own 
umbrella, it would benefit by being maintained and released separately.  We 
have recently modernized sqlalchemy-access with a new maintainer in a 
similar way.

 

 

 

On Fri, Nov 8, 2019, at 1:57 PM, Paul Graves-DesLauriers wrote:

I saw in the SQLAlchemy 1.3 Documentation page for Firebird 
<https://docs.sqlalchemy.org/en/13/dialects/firebird.html>  that the 'Firebird 
dialect within SQLAlchemy is not currently supported.  The dialect is not 
tested within continuous integration and is likely to have many issues and 
caveats not currently handled.'

 

I'd like to see the Firebird database have ongoing support within SQLAlchemy.  
Firebird has been my database of choice since version 1.0, am currently using 
it, and plan to continue using it in the future.  It is still in active 
development.

 

For Python, I use the fdb DBAPI (official driver) along with Firebird dialect3 
(current).  I would suggest removing support for the old dialect1 and 
kinterbasdb driver to simplify maintenance and testing.

 

 

Can I help with getting the dialect tested within the continuous integration 
and with resolving issues and caveats?  I use Jenkins on a daily basis and have 
been using Python for about 8 years. I'm now a performance test engineer, but 
have many years of experience as a software developer as well.  I am also 
willing to help with documentation and support.

 

Are there any other issues causing the Firebird database to lose ongoing 
support?

 

Thanks,

Paul

 

 

--

SQLAlchemy -

The Python SQL Toolkit and Object Relational Mapper

 

http://www.sqlalchemy.org/

 

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example. See http://stackoverflow.com/help/mcve for a full description.

---

You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/fe77d2d8-a1a0-4082-bf10-6e78e0ed75c0%40googlegroups.com.

 

-- 

SQLAlchemy - 

The Python SQL Toolkit and Object Relational Mapper

 

http://www.sqlalchemy.org/

 

To post exa

RE: [sqlalchemy] Continuing support for Firebird database

2019-11-08 Thread Paul Graves-DesLauriers
This is unexpected.

 

I can do this if we can get things set up similarly to the sqlalchemy-access
dialect.

 

Is there a test set for confirming the dialect is acceptable for SQLAlchemy?
Is there a sample test set I can use to confirm Firebird will work as
expected (Like you use for SQL Server, Oracle, PostgreSQL, etc.)?  If so,
I'm willing to give it a shot.  I am extremely happy with Firebird as a
database and will probably end up doing this kind of work anyway.

 

Paul

 

From: sqlalchemy@googlegroups.com  On Behalf Of
Mike Bayer
Sent: Friday, November 08, 2019 12:36 PM
To: noreply-spamdigest via sqlalchemy 
Subject: Re: [sqlalchemy] Continuing support for Firebird database

 

Would you be interested in maintaining / owning / releasing Firebird
separately, under github.com/sqlalchemy/sqlalchemy-firebird?  Because this
is not a widely used dialect and I can't maintain CI for it under my own
umbrella, it would benefit by being maintained and released separately.  We
have recently modernized sqlalchemy-access with a new maintainer in a
similar way.

 

 

 

On Fri, Nov 8, 2019, at 1:57 PM, Paul Graves-DesLauriers wrote:

I saw in the SQLAlchemy 1.3 Documentation page for Firebird
<https://docs.sqlalchemy.org/en/13/dialects/firebird.html>  that the
'Firebird dialect within SQLAlchemy is not currently supported.  The dialect
is not tested within continuous integration and is likely to have many
issues and caveats not currently handled.'

 

I'd like to see the Firebird database have ongoing support within
SQLAlchemy.  Firebird has been my database of choice since version 1.0, am
currently using it, and plan to continue using it in the future.  It is
still in active development.

 

For Python, I use the fdb DBAPI (official driver) along with Firebird
dialect3 (current).  I would suggest removing support for the old dialect1
and kinterbasdb driver to simplify maintenance and testing.

 

 

Can I help with getting the dialect tested within the continuous integration
and with resolving issues and caveats?  I use Jenkins on a daily basis and
have been using Python for about 8 years. I'm now a performance test
engineer, but have many years of experience as a software developer as well.
I am also willing to help with documentation and support.

 

Are there any other issues causing the Firebird database to lose ongoing
support?

 

Thanks,

Paul

 

 

--

SQLAlchemy - 

The Python SQL Toolkit and Object Relational Mapper

 

http://www.sqlalchemy.org/

 

To post example code, please provide an MCVE: Minimal, Complete, and
Verifiable Example. See http://stackoverflow.com/help/mcve for a full
description.

--- 

You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.

To unsubscribe from this group and stop receiving emails from it, send an
email to sqlalchemy+unsubscr...@googlegroups.com.

To view this discussion on the web visit
https://groups.google.com/d/msgid/sqlalchemy/fe77d2d8-a1a0-4082-bf10-6e78e0ed75c0%40googlegroups.com.

 

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper
 
http://www.sqlalchemy.org/
 
To post example code, please provide an MCVE: Minimal, Complete, and
Verifiable Example. See http://stackoverflow.com/help/mcve for a full
description.
--- 
You received this message because you are subscribed to a topic in the
Google Groups "sqlalchemy" group.
To unsubscribe from this topic, visit
https://groups.google.com/d/topic/sqlalchemy/yvsgiNtVYSk/unsubscribe.
To unsubscribe from this group and all its topics, send an email to
sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/sqlalchemy/2932583d-e98e-4dea-9f0a-efb200c4648e%40www.fastmail.com.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/012c01d596b2%24dda46020%2498ed2060%24%40dexmicro.com.


[sqlalchemy] Continuing support for Firebird database

2019-11-08 Thread Paul Graves-DesLauriers
I saw in the SQLAlchemy 1.3 Documentation page for Firebird 
<https://docs.sqlalchemy.org/en/13/dialects/firebird.html> that the 
'Firebird dialect within SQLAlchemy is not currently supported.  The 
dialect is not tested within continuous integration and is likely to have 
many issues and caveats not currently handled.'

I'd like to see the Firebird database have ongoing support within 
SQLAlchemy.  Firebird has been my database of choice since version 1.0, am 
currently using it, and plan to continue using it in the future.  It is 
still in active development.

For Python, I use the fdb DBAPI (official driver) along with Firebird 
dialect3 (current).  I would suggest removing support for the old dialect1 
and kinterbasdb driver to simplify maintenance and testing.


Can I help with getting the dialect tested within the continuous 
integration and with resolving issues and caveats?  I use Jenkins on a 
daily basis and have been using Python for about 8 years. I'm now a 
performance test engineer, but have many years of experience as a software 
developer as well.  I am also willing to help with documentation and 
support.

Are there any other issues causing the Firebird database to lose ongoing 
support?

Thanks,
Paul

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/fe77d2d8-a1a0-4082-bf10-6e78e0ed75c0%40googlegroups.com.


[sqlalchemy] SELECT ... INTO Contribution

2018-11-06 Thread Paul Becotte
Hi!

I recently needed to use the select ... into ... construct to build a temp 
table in Redshift. I used the recipe at 
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/SelectInto 
to get a good idea of how to go about this, and was wondering if I 
should contribute the final working version back to the project. The 
biggest thing that gives me pause is that the recipe says that this would 
be added in 2014, and it hasn't been, so I feel like you probably have a 
good reason that I am ignorant about! The other, less of a blocker, is that 
I am not actually sure of a comprehensive list of database engines that do 
and don't support this - I know Postgres, Snowflake, Redshift, and MSSQL 
do... but I am out of expertise at that point. For something like this that 
may or may not be supported in all database engines, are there good 
examples of how that should be handled in the existing code base?
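
For reference, the wiki recipe referenced above builds the construct as a
Select subclass plus a @compiles hook; a minimal sketch along those lines
(SQLAlchemy 1.x style; the "INTO ... FROM" splice shown is the SQL
Server/Redshift form, and the usage lines assume a hypothetical users table):

    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.sql.expression import Select

    class SelectInto(Select):
        # carries the target table name alongside an ordinary SELECT
        def __init__(self, columns, into, *arg, **kw):
            super(SelectInto, self).__init__(columns, *arg, **kw)
            self.into = into

    @compiles(SelectInto)
    def _compile_select_into(element, compiler, **kw):
        # render the ordinary SELECT, then splice INTO before the first FROM
        text = compiler.visit_select(element)
        return text.replace("FROM", "INTO %s \nFROM" % element.into, 1)

    # usage (hypothetical table):
    # sel = SelectInto([users.c.id, users.c.name], "#temp_users")
    # engine.execute(sel)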

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] How to customize base declarative class to add naming conventions

2018-09-05 Thread René-paul Debroize
It would have been great to be able to do it via a mixin; I have several DBs
using a declarative base constructed with this Base mixin, and I'd like to
have the same naming_convention for all the DBs without repeating myself.
If it's not possible, I guess I can still manage to find an acceptable way
of doing it using the declarative_base arg.

Thanks.

On Tue, Sep 4, 2018 at 17:10, Simon King wrote:

> On Tue, Sep 4, 2018 at 3:48 PM  wrote:
> >
> > I'd like to create a mixin to specify naming conventions.
> >
> > I tried both:
> >
> > class Base:
> > metadata = MetaData(naming_convention={
> > "ix": "ix_%(column_0_label)s",
> > "uq": "uq_%(table_name)s_%(column_0_name)s",
> > "ck": "ck_%(table_name)s_%(constraint_name)s",
> > "fk":
> "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
> > "pk": "pk_%(table_name)",
> > })
> >
> >
> > and
> >
> > class Base:
> > @declared_attr
> > def metadata(cls):
> > return MetaData(naming_convention={
> > "ix": "ix_%(column_0_label)s",
> > "uq": "uq_%(table_name)s_%(column_0_name)s",
> > "ck": "ck_%(table_name)s_%(constraint_name)s",
> > "fk":
> "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
> > "pk": "pk_%(table_name)",
> > })
> >
> > But if I inspect a model created using this base I always got:
> >
> > >>> Test.metadata.naming_convention
> > immutabledict({'ix': 'ix_%(column_0_label)s'})
> >
> > while I correctly have:
> >
> > >>> Base.metadata.naming_convention
> > {'ix': 'ix_%(column_0_label)s',
> >  'uq': 'uq_%(table_name)s_%(column_0_name)s',
> >  'ck': 'ck_%(table_name)s_%(constraint_name)s',
> >  'fk': 'fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s',
> >  'pk': 'pk_%(table_name)'}
> >
> > What is the correct way to do it? What am I doing wrong? Should I do
> > this in my migration tool (alembic)?
> > Also, would it work for a unique constraint on multiple columns, or do we
> > have to name them explicitly?
> >
>
> Is it important to you to do this via a mixin? declarative_base
> accepts a "metadata" parameter, so something like this should work:
>
> metadata = MetaData(naming_convention={...})
> Base = declarative_base(metadata=metadata)
>
> Hope that helps,
>
> Simon
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example.  See  http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
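
For reference, a runnable version of Simon's suggestion (a sketch; the Test
model is illustrative, and note the "pk" token needs a trailing s, i.e.
%(table_name)s):

    from sqlalchemy import Column, Integer, MetaData, String
    from sqlalchemy.ext.declarative import declarative_base

    naming_convention = {
        "ix": "ix_%(column_0_label)s",
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s",
    }
    metadata = MetaData(naming_convention=naming_convention)
    Base = declarative_base(metadata=metadata)

    class Test(Base):
        __tablename__ = "test"
        id = Column(Integer, primary_key=True)
        name = Column(String(50), unique=True)

    # the model's MetaData is the one passed in, with the full convention
    print(Test.metadata.naming_convention)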


Re: autogenerate revision fails in a multi-db project generated with: `alembic init --template multidb alembic`

2018-08-13 Thread René-paul Debroize
Hello

Thank you for your answer.

The problem was that the "databases" entry was missing from the alembic.ini
file, as that is not how I configure the engines. However, adding something like:

config.set_main_option("databases", db_names)

rather than just filling the db_names var in the env.py file fixed the problem.

Thanks again!
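
For reference, the multidb template's revision script template splits that
option while rendering, which is why the missing entry surfaced as re.split()
receiving None; a sketch of the failure mode and the fix (paraphrased, not
the template verbatim):

    import re
    from alembic import context

    config = context.config

    # the fix from this thread: ensure the option exists, either via
    # "databases = engine1, engine2" in alembic.ini or programmatically:
    config.set_main_option("databases", "engine1, engine2")

    # what the multidb template effectively does; with no entry at all,
    # get_main_option() returns None and re.split() raises the TypeError above
    db_names = config.get_main_option("databases")
    for db_name in re.split(r",\s*", db_names):
        print(db_name)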





2018-08-10 17:16 GMT+02:00 Mike Bayer :

> On Fri, Aug 10, 2018 at 6:47 AM,   wrote:
> > Hello
> >
> > I'm trying to use alembic to generate a migration file in a multi-db
> > environment. I first created the project using `alembic init --template
> > multidb alembic`
> > and then hack the `env.py` script (didn't touch `run_migration_online`)
> > to suit my needs.
> >
> > Here is the error I got after invoking `alembic --raiseerr revision
> > --autogenerate -m 'First revision'`:
>
> have you also received the alembic.ini from that run, or was there a
> previous alembic.ini present that prevented the new one from being
> genreated?   the alembic.ini for multidb requires a "databases"
> configuration:
>
> databases = engine1, engine2
>
> [engine1]
> sqlalchemy.url = driver://user:pass@localhost/dbname
>
> [engine2]
> sqlalchemy.url = driver://user:pass@localhost/dbname2
>
>
> I can reproduce your error if that section is missing.   Restore it,
> and also put valid URLs in there, and then it works.  If this is not
> how you are configuring, then you need to alter the env.py to locate
> your database URLs somehow.
>
>
>
>
>
> >
> > [...]
> > Generating
> > /home/rdebroiz/Quantivly/services/alembic/versions/201808-10-T12-30-48-first_revision-cb8c6da27574.py
> > ... FAILED
> > Traceback (most recent call last):
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 15, in template_to_file
> > output = template.render_unicode(**kw).encode(output_encoding)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/template.py", line 471, in render_unicode
> > as_unicode=True)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py", line 838, in _render
> > **_kwargs_for_callable(callable_, data))
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py", line 873, in _render_context
> > _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py", line 899, in _exec_template
> > callable_(context, *args, **kwargs)
> > File "alembic_script_py_mako", line 62, in render_body
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/re.py", line 212, in split
> > return _compile(pattern, flags).split(string, maxsplit)
> > TypeError: expected string or bytes-like object
> >
> > During handling of the above exception, another exception occurred:
> >
> > Traceback (most recent call last):
> > File "/home/rdebroiz/.virtualenvs/quantivly/bin/alembic", line 11, in <module>
> > sys.exit(main())
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py", line 486, in main
> > CommandLine(prog=prog).main(argv=argv)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py", line 480, in main
> > self.run_cmd(cfg, options)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py", line 463, in run_cmd
> > **dict((k, getattr(options, k, None)) for k in kwarg)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/command.py", line 180, in revision
> > revision_context.generate_scripts()
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/command.py", line 179, in
> > script for script in
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/autogenerate/api.py", line 480, in generate_scripts
> > yield self._to_script(generated_revision)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/autogenerate/api.py", line 412, in _to_script
> > **template_args)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/script/base.py", line 574, in generate_revision
> > **kw
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/script/base.py", line 439, in _generate_template
> > **kw
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/messaging.py", line 47, in status
> > ret = fn(*arg, **kw)
> > File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 24, in template_to_file
> > "template-oriented traceback." % fname)
> > alembic.util.exc.CommandError: Template rendering failed; see
> > /tmp/tmph4dvwm5j.txt for a template-oriented traceback.

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-05 Thread Paul Morel
It is possible that in the end that was the case; however, the reason I 
even attempted the pip uninstall/install originally was because of the 
error I was getting.  I found some SO posts that recommended rolling back 
to an earlier version of SQLAlchemy.  Rolling back had no impact.  I then 
rolled forward to the latest, also with no impact.  Presumably those 
roll-back/roll-forward attempts didn't work because they were still using 
the 1.1.9 version C extensions that ship with Anaconda.

https://docs.continuum.io/anaconda/packages/pkg-docs

I am trying to get my hands on another similar environment but it appears I 
don't have installation rights on that machine at the moment.  I will see 
if I can reproduce this problem if I can get install privileges.
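
For reference, one way to check which result-processing implementation is
actually in use (a sketch, assuming the 1.x layout where the compiled
extensions ship as sqlalchemy/cprocessors.pyd and sqlalchemy/cresultproxy.pyd):

    try:
        from sqlalchemy import cresultproxy
        print("C extensions in use:", cresultproxy.__file__)
    except ImportError:
        print("pure-Python result processing in use")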




On Wednesday, July 5, 2017 at 7:47:25 AM UTC-5, Mike Bayer wrote:
>
>
>
> On Jul 5, 2017 12:59 AM, "Paul Morel" <paul@tartansolutions.com> wrote:
>
> It appears that the Anaconda installer placed the .pyd files there.  I 
> discovered that pip uninstall doesn't remove those files like I had 
> assumed.  Therefore, when I pip uninstalled/installed the original .pyd 
> files were still there.
>
>
>
> Ok, would this be a simple issue of old version of .pyd files not matching a 
> pip updated version of the python code ?   That is, you manually updated 
> SQLAlchemy from 1.0.x to 1.1.x.   the c extension from 1.0 would not be 
> compatible as there were some changes.
>
>
>
>
> It looks like deleting the .pyd files and pip installing again *does not* 
> recreate the pyd files.  So it appears the Anaconda-included distribution of 
> SQLAlchemy may be the culprit here.
>
>
> On Tuesday, July 4, 2017 at 8:52:22 PM UTC-5, Mike Bayer wrote:
>
>> Ok how did the pyd file get there in the first place, was that pre-built 
>> as part of anaconda or did it compile on your local machine?   This is kind 
>> of a serious issue that a broken shared library got created 
>>
>> On Jul 4, 2017 7:55 PM, "Paul Morel" <paul@tartansolutions.com> 
>> wrote:
>>
>> BAM!  That was it.  There must be some type of incompatibility between 
>> the c extensions and the Windows 2008 Server configuration.  I pip 
>> uninstalled/installed and found the c extensions were rebuilt.  After 
>> removing *cresultproxy.pyd* from the sqlalchemy site-packages directory 
>> the queries started working again.
>>
>> Many thanks...
>>
>>
>> On Tuesday, July 4, 2017 at 4:34:38 PM UTC-5, Mike Bayer wrote:
>>
>>> it's the select schema_name() query that's failing and that 
>>> cursor.description is fine. 
>>>
>>> did you build the SQLAlchemy C extensions on windows?   or does your 
>>> anaconda build somehow deliver sqlalchemy/cresultproxy.dll to your 
>>> installation ?my last theory here is that you have C extensions 
>>> set up and somehow they built such that PyLong_CheckExact(0) is 
>>> returning false.if you have any .dll's (or whatever Windows uses 
>>> for native Python extensions these days), try blowing those away for 
>>> SQLAlchemy. 
>>>
>>>
>>> On Tue, Jul 4, 2017 at 2:24 PM, Paul Morel 
>>> <paul@tartansolutions.com> wrote: 
>>> > Sorry, I gave you the cursor.description of the schema_name() query.  This 
>>> > is the output for the cursor.description for the SELECT * FROM EPO_MODELS 
>>> > query: 
>>> > 
>>> > pymssql Cursor Description is: 
>>> >> 
>>> >> ((u'ID', 3, None, None, None, None, None),
>>> >>  (u'MODELTYPE', 3, None, None, None, None, None),
>>> >>  (u'MODELNAME', 1, None, None, None, None, None),
>>> >>  (u'MEMO', 1, None, None, None, None, None),
>>> >>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
>>> >>  (u'MODELSYNC', 3, None, None, None, None, None),
>>> >>  (u'MODELSTATUS', 3, None, None, None, None, None),
>>> >>  (u'AUDITUSERID', 3, None, None, None, None, None),
>>> >>  (u'DATEALTERED', 4, None, None, None, None, None),
>>> >>  (u'CREATIONDATE', 4, None, None, None, None, None))
>>> > 
>>> > 
>>> > 
>>> > pyodbc Cursor Description is: 
>>> >> 
>>> >> ((u'ID', 3, None, None, None, None, None),
>>> >>  (u'MODELTYPE', 3, None, None, None, None, None),
>>> >>  (u'MODELNAME', 1, None, None, No

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
It appears that the Anaconda installer placed the .pyd files there.  I 
discovered that pip uninstall doesn't remove those files like I had 
assumed.  Therefore, when I pip uninstalled/installed the original .pyd 
files were still there.

It looks like deleting the .pyd files and pip installing again *does not* 
recreate the pyd files.  So it appears the Anaconda-included distribution of 
SQLAlchemy may be the culprit here.


On Tuesday, July 4, 2017 at 8:52:22 PM UTC-5, Mike Bayer wrote:
>
> Ok how did the pyd file get there in the first place, was that pre-built 
> as part of anaconda or did it compile on your local machine?   This is kind 
> of a serious issue that a broken shared library got created 
>
> On Jul 4, 2017 7:55 PM, "Paul Morel" <paul@tartansolutions.com 
> > wrote:
>
> BAM!  That was it.  There must be some type of incompatibility between the 
> c extensions and the Windows 2008 Server configuration.  I pip 
> uninstalled/installed and found the c extensions were rebuilt.  After 
> removing *cresultproxy.pyd* from the sqlalchemy site-packages directory 
> the queries started working again.
>
> Many thanks...
>
>
> On Tuesday, July 4, 2017 at 4:34:38 PM UTC-5, Mike Bayer wrote:
>
>> it's the select schema_name() query that's failing and that 
>> cursor.description is fine. 
>>
>> did you build the SQLAlchemy C extensions on windows?   or does your 
>> anaconda build somehow deliver sqlalchemy/cresultproxy.dll to your 
>> installation ?my last theory here is that you have C extensions 
>> set up and somehow they built such that PyLong_CheckExact(0) is 
>> returning false.if you have any .dll's (or whatever Windows uses 
>> for native Python extensions these days), try blowing those away for 
>> SQLAlchemy. 
>>
>>
>> On Tue, Jul 4, 2017 at 2:24 PM, Paul Morel 
>> <paul@tartansolutions.com> wrote: 
>> > Sorry, I gave you the cursor.description of the schema_name() query.  This 
>> > is the output for the cursor.description for the SELECT * FROM EPO_MODELS 
>> > query: 
>> > 
>> > pymssql Cursor Description is: 
>> >> 
>> >> ((u'ID', 3, None, None, None, None, None),
>> >>  (u'MODELTYPE', 3, None, None, None, None, None),
>> >>  (u'MODELNAME', 1, None, None, None, None, None),
>> >>  (u'MEMO', 1, None, None, None, None, None),
>> >>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
>> >>  (u'MODELSYNC', 3, None, None, None, None, None),
>> >>  (u'MODELSTATUS', 3, None, None, None, None, None),
>> >>  (u'AUDITUSERID', 3, None, None, None, None, None),
>> >>  (u'DATEALTERED', 4, None, None, None, None, None),
>> >>  (u'CREATIONDATE', 4, None, None, None, None, None))
>> > 
>> > 
>> > 
>> > pyodbc Cursor Description is: 
>> >> 
>> >> ((u'ID', 3, None, None, None, None, None),
>> >>  (u'MODELTYPE', 3, None, None, None, None, None),
>> >>  (u'MODELNAME', 1, None, None, None, None, None),
>> >>  (u'MEMO', 1, None, None, None, None, None),
>> >>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
>> >>  (u'MODELSYNC', 3, None, None, None, None, None),
>> >>  (u'MODELSTATUS', 3, None, None, None, None, None),
>> >>  (u'AUDITUSERID', 3, None, None, None, None, None),
>> >>  (u'DATEALTERED', 4, None, None, None, None, None),
>> >>  (u'CREATIONDATE', 4, None, None, None, None, None))
>> > 
>> > 
>> > 
>> > On Tuesday, July 4, 2017 at 1:13:12 PM UTC-5, Mike Bayer wrote: 
>> >> 
>> >> in your direct examples can you show me the output of 
>> >> "cursor.description" as well once you execute the query? 
>> >> 
>> >> On Tue, Jul 4, 2017 at 11:17 AM, Paul Morel 
>> >> <paul@tartansolutions.com> wrote: 
>> >> > I ran that select statement under both the pyodbc and pymssql direct 
>> >> > connections and it appears to have returned a legitimate row of data.  This 
>> >> > is the excerpt from the test output: 
>> >> > 
>> >> >>  Testing pymssql Directly 
>> >> >>  
>> >> >>  Complete  
>> >> >>  Testing Get Schema Name with pymssql 
>> >> >> (u'dbo',

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
BAM!  That was it.  There must be some type of incompatibility between the 
c extensions and the Windows 2008 Server configuration.  I pip 
uninstalled/installed and found the C extensions were rebuilt.  After 
removing *cresultproxy.pyd* from the sqlalchemy site-packages directory the 
queries started working again.

Many thanks...

On Tuesday, July 4, 2017 at 4:34:38 PM UTC-5, Mike Bayer wrote:
>
> it's the select schema_name() query that's failing and that 
> cursor.description is fine. 
>
> did you build the SQLAlchemy C extensions on windows?   or does your 
> anaconda build somehow deliver sqlalchemy/cresultproxy.dll to your 
> installation ?my last theory here is that you have C extensions 
> set up and somehow they built such that PyLong_CheckExact(0) is 
> returning false.if you have any .dll's (or whatever Windows uses 
> for native Python extensions these days), try blowing those away for 
> SQLAlchemy. 
>
>
> On Tue, Jul 4, 2017 at 2:24 PM, Paul Morel 
> <paul@tartansolutions.com > wrote: 
> > Sorry, I gave you the cursor.description of the schema_name() query. 
>  This 
> > is the output for the cursor.description for the SELECT * FROM 
> EPO_MODELS 
> > query: 
> > 
> > pymssql Cursor Description is: 
> >> 
> >> ((u'ID', 3, None, None, None, None, None),
> >>  (u'MODELTYPE', 3, None, None, None, None, None),
> >>  (u'MODELNAME', 1, None, None, None, None, None),
> >>  (u'MEMO', 1, None, None, None, None, None),
> >>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
> >>  (u'MODELSYNC', 3, None, None, None, None, None),
> >>  (u'MODELSTATUS', 3, None, None, None, None, None),
> >>  (u'AUDITUSERID', 3, None, None, None, None, None),
> >>  (u'DATEALTERED', 4, None, None, None, None, None),
> >>  (u'CREATIONDATE', 4, None, None, None, None, None))
> > 
> > 
> > 
> > pyodbc Cursor Description is: 
> >> 
> >> ((u'ID', 3, None, None, None, None, None),
> >>  (u'MODELTYPE', 3, None, None, None, None, None),
> >>  (u'MODELNAME', 1, None, None, None, None, None),
> >>  (u'MEMO', 1, None, None, None, None, None),
> >>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
> >>  (u'MODELSYNC', 3, None, None, None, None, None),
> >>  (u'MODELSTATUS', 3, None, None, None, None, None),
> >>  (u'AUDITUSERID', 3, None, None, None, None, None),
> >>  (u'DATEALTERED', 4, None, None, None, None, None),
> >>  (u'CREATIONDATE', 4, None, None, None, None, None))
> > 
> > 
> > 
> > On Tuesday, July 4, 2017 at 1:13:12 PM UTC-5, Mike Bayer wrote: 
> >> 
> >> in your direct examples can you show me the output of 
> >> "cursor.description" as well once you execute the query? 
> >> 
> >> On Tue, Jul 4, 2017 at 11:17 AM, Paul Morel 
> >> <paul@tartansolutions.com> wrote: 
> >> > I ran that select statement under both the pyodbc and pymssql direct 
> >> > connections and it appears to have returned a legitimate row of data. 
> >> > This 
> >> > is the excerpt from the test output: 
> >> > 
> >> >>  Testing pymssql Directly 
> >> >>  
> >> >>  Complete  
> >> >>  Testing Get Schema Name with pymssql 
> >> >> (u'dbo',) 
> >> >>  Complete  
> >> >>  Testing pyodbc Directly 
> >> >>  
> >> >>  Complete  
> >> >>  Testing Get Schema Name with pyodbc 
> >> >> (u'dbo', ) 
> >> >>  Complete  
> >> >>  Testing SQLAlchemy Connection using pymssql 
> >> >>  
> >> >>  Complete  
> >> > 
> >> > 
> >> > I switched over the SQLAlchemy connection string to use pymssql.  It 
> now 
> >> > looks like this: 
> >> > 
> >> > 'mssql+pymssql://:@CMPDSQL01:1433/CMP' 
> >> > 
> >> > The same error is produced.  I attempted to run the SELECT 
> schema_name() 
> >> > query through the SQLAlchemy connection to see what was coming back 
> but 
> >> > it 
> >> > failed in the same way because it is attempting its internal 
> >> > schema_name() 
> >> > query first. 
> >> > 
> >> > This is indeed quite strange since both the pyodbc and pymssql direct 
> >

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
Sorry, I gave you the cursor.description of the schema_name() query.  This 
is the output for the cursor.description for the SELECT * FROM EPO_MODELS 
query:

*pymssql Cursor Description is:*

> ((u'ID', 3, None, None, None, None, None),
>  (u'MODELTYPE', 3, None, None, None, None, None),
>  (u'MODELNAME', 1, None, None, None, None, None),
>  (u'MEMO', 1, None, None, None, None, None),
>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
>  (u'MODELSYNC', 3, None, None, None, None, None),
>  (u'MODELSTATUS', 3, None, None, None, None, None),
>  (u'AUDITUSERID', 3, None, None, None, None, None),
>  (u'DATEALTERED', 4, None, None, None, None, None),
>  (u'CREATIONDATE', 4, None, None, None, None, None))



*pyodbc Cursor Description is:*

> ((u'ID', 3, None, None, None, None, None),
>  (u'MODELTYPE', 3, None, None, None, None, None),
>  (u'MODELNAME', 1, None, None, None, None, None),
>  (u'MEMO', 1, None, None, None, None, None),
>  (u'NEXTUNIQUEID', 3, None, None, None, None, None),
>  (u'MODELSYNC', 3, None, None, None, None, None),
>  (u'MODELSTATUS', 3, None, None, None, None, None),
>  (u'AUDITUSERID', 3, None, None, None, None, None),
>  (u'DATEALTERED', 4, None, None, None, None, None),
>  (u'CREATIONDATE', 4, None, None, None, None, None))



On Tuesday, July 4, 2017 at 1:13:12 PM UTC-5, Mike Bayer wrote:
>
> in your direct examples can you show me the output of 
> "cursor.description" as well once you execute the query? 
>
> On Tue, Jul 4, 2017 at 11:17 AM, Paul Morel 
> <paul@tartansolutions.com > wrote: 
> > I ran that select statement under both the pyodbc and pymssql direct 
> > connections and it appears to have returned a legitimate row of data. 
>  This 
> > is the excerpt from the test output: 
> > 
> >>  Testing pymssql Directly 
> >>  
> >>  Complete  
> >>  Testing Get Schema Name with pymssql 
> >> (u'dbo',) 
> >>  Complete  
> >>  Testing pyodbc Directly 
> >>  
> >>  Complete  
> >>  Testing Get Schema Name with pyodbc 
> >> (u'dbo', ) 
> >>  Complete  
> >>  Testing SQLAlchemy Connection using pymssql 
> >>  
> >>  Complete  
> > 
> > 
> > I switched over the SQLAlchemy connection string to use pymssql.  It now 
> > looks like this: 
> > 
> > 'mssql+pymssql://:@CMPDSQL01:1433/CMP' 
> > 
> > The same error is produced.  I attempted to run the SELECT schema_name() 
> > query through the SQLAlchemy connection to see what was coming back but 
> it 
> > failed in the same way because it is attempting its internal 
> schema_name() 
> > query first. 
> > 
> > This is indeed quite strange since both the pyodbc and pymssql direct 
> > connections did return a legit row when asking for the schema name. 
> > 
> > On Tuesday, July 4, 2017 at 9:27:09 AM UTC-5, Mike Bayer wrote: 
> >> 
> >> can you run this query please? 
> >> 
> >> SELECT schema_name() 
> >> 
> >> the issue is, that query is returning a result, there is a row, but it 
> >> has no columns, which is nonsensical.  Did you try running with the 
> >> mssql+pymssql:// driver?   Looks like a pyodbc bug so far but need 
> >> more info. 
> >> 
> >> 
> >> 
> >> On Tue, Jul 4, 2017 at 9:59 AM, Simon King <si...@simonking.org.uk> 
> wrote: 
> >> > The key part of the stack trace is: 
> >> > 
> >> > File "c:\Program 
> >> > Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", 
> >> > line 1773, in _get_default_schema_name default_schema_name = 
> >> > connection.scalar(query) 
> >> > 
> >> > ...which is in this function: 
> >> > 
> >> > 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/dialects/mssql/base.py?at=rel_1_1_11=file-view-default#base.py-1768
>  
> >> > 
> >> > It's failing to fetch the single value that ought to come back from 
> >> > the query "SELECT schema_name()". 
> >> > 
> >> > I don't know anything about MSSQL or ODBC, but you could try poking 
> >> > around with pdb in the scalar() function: 
> >> > 
> >> > 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/engine/result.py?at=rel_1_1_11=file-view-default#result.py-1212
>  
> >> > 
> &

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel

>
>  Testing *pymssql* Directly
> 
>  Complete 
>  Testing Get Schema Name with pymssql
> (u'dbo',)
>
> *Cursor Description is:*
> ((u'', 1, None, None, None, None, None),)
>  Complete 
>  Testing *pyodbc* Directly
> 
>  Complete 
>  Testing Get Schema Name with pyodbc
> (u'dbo', )
>
> *Cursor Description is:*
> ((u'', , None, 128, 128, 0, True),)
>  Complete 


On Tuesday, July 4, 2017 at 1:13:12 PM UTC-5, Mike Bayer wrote:
>
> in your direct examples can you show me the output of 
> "cursor.description" as well once you execute the query? 
>
> On Tue, Jul 4, 2017 at 11:17 AM, Paul Morel 
> <paul@tartansolutions.com > wrote: 
> > I ran that select statement under both the pyodbc and pymssql direct 
> > connections and it appears to have returned a legitimate row of data. 
>  This 
> > is the excerpt from the test output: 
> > 
> >>  Testing pymssql Directly 
> >>  
> >>  Complete  
> >>  Testing Get Schema Name with pymssql 
> >> (u'dbo',) 
> >>  Complete  
> >>  Testing pyodbc Directly 
> >>  
> >>  Complete  
> >>  Testing Get Schema Name with pyodbc 
> >> (u'dbo', ) 
> >>  Complete  
> >>  Testing SQLAlchemy Connection using pymssql 
> >>  
> >>  Complete  
> > 
> > 
> > I switched over the SQLAlchemy connection string to use pymssql.  It now 
> > looks like this: 
> > 
> > 'mssql+pymssql://:@CMPDSQL01:1433/CMP' 
> > 
> > The same error is produced.  I attempted to run the SELECT schema_name() 
> > query through the SQLAlchemy connection to see what was coming back but 
> it 
> > failed in the same way because it is attempting its internal 
> schema_name() 
> > query first. 
> > 
> > This is indeed quite strange since both the pyodbc and pymssql direct 
> > connections did return a legit row when asking for the schema name. 
> > 
> > On Tuesday, July 4, 2017 at 9:27:09 AM UTC-5, Mike Bayer wrote: 
> >> 
> >> can you run this query please? 
> >> 
> >> SELECT schema_name() 
> >> 
> >> the issue is, that query is returning a result, there is a row, but it 
> >> has no columns, which is nonsensical.  Did you try running with the 
> >> mssql+pymssql:// driver?   Looks like a pyodbc bug so far but need 
> >> more info. 
> >> 
> >> 
> >> 
> >> On Tue, Jul 4, 2017 at 9:59 AM, Simon King <si...@simonking.org.uk> 
> wrote: 
> >> > The key part of the stack trace is: 
> >> > 
> >> > File "c:\Program 
> >> > Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", 
> >> > line 1773, in _get_default_schema_name default_schema_name = 
> >> > connection.scalar(query) 
> >> > 
> >> > ...which is in this function: 
> >> > 
> >> > 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/dialects/mssql/base.py?at=rel_1_1_11=file-view-default#base.py-1768
>  
> >> > 
> >> > It's failing to fetch the single value that ought to come back from 
> >> > the query "SELECT schema_name()". 
> >> > 
> >> > I don't know anything about MSSQL or ODBC, but you could try poking 
> >> > around with pdb in the scalar() function: 
> >> > 
> >> > 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/engine/result.py?at=rel_1_1_11=file-view-default#result.py-1212
>  
> >> > 
> >> > Simon 
> >> > 
> >> > 
> >> > On Tue, Jul 4, 2017 at 2:39 PM, Paul Morel 
> >> > <paul@tartansolutions.com> wrote: 
> >> >> Mike, 
> >> >> 
> >> >> Sorry for the lack of information.  Please find the rest of what you 
> >> >> wanted 
> >> >> below. 
> >> >> 
> >> >> Full Stack Trace: 
> >> >> 
> >> >>>  Running Direct SQL Query 
> >> >>> Traceback (most recent call last): 
> >> >>>   File "test.py", line 45, in  result = con.execute('SELECT 
> * 
> >> >>> FROM 
> >> >>> EPO_MODELS') 
> >> >>>   File "c:\Program 
> >> >>> Files\Anaconda2\lib\site-packages\sqlalc

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
I ran that select statement under both the pyodbc and pymssql direct 
connections and it appears to have returned a legitimate row of data.  This 
is the excerpt from the test output:

 Testing pymssql Directly
> 
>  Complete 
>  Testing Get Schema Name with pymssql
> (u'dbo',)
>  Complete 
>  Testing pyodbc Directly
> 
>  Complete 
>  Testing Get Schema Name with pyodbc
> (u'dbo', )
>  Complete 
>  Testing SQLAlchemy Connection using pymssql
> 
>  Complete 


I switched over the SQLAlchemy connection string to use pymssql.  It now 
looks like this:

'mssql+pymssql://:@CMPDSQL01:1433/CMP'

The same error is produced.  I attempted to run the SELECT schema_name() 
query through the SQLAlchemy connection to see what was coming back but it 
failed in the same way because it is attempting its internal schema_name() 
query first.

This is indeed quite strange since both the pyodbc and pymssql direct 
connections did return a legit row when asking for the schema name.
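
For reference, the check Mike asks for in the reply quoted below can be
reproduced against the raw driver in a few lines (a sketch reusing the
connection parameters from this thread):

import pymssql

cnxn = pymssql.connect(server='CMPDSQL01', port='1433', database='CMP')
cursor = cnxn.cursor()
cursor.execute('SELECT schema_name()')
print(cursor.description)   # SQLAlchemy expects exactly one column here
print(cursor.fetchone())

The interesting part is whether cursor.description reports a usable column
for this query, since that is what SQLAlchemy's scalar() relies on.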

On Tuesday, July 4, 2017 at 9:27:09 AM UTC-5, Mike Bayer wrote:
>
> can you run this query please? 
>
> SELECT schema_name() 
>
> the issue is, that query is returning a result, there is a row, but it 
> has no columns, which is nonsensical.  Did you try running with the 
> mssql+pymssql:// driver?   Looks like a pyodbc bug so far but need 
> more info. 
>
>
>
> On Tue, Jul 4, 2017 at 9:59 AM, Simon King <si...@simonking.org.uk 
> > wrote: 
> > The key part of the stack trace is: 
> > 
> > File "c:\Program 
> > Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", 
> > line 1773, in _get_default_schema_name default_schema_name = 
> > connection.scalar(query) 
> > 
> > ...which is in this function: 
> > 
> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/dialects/mssql/base.py?at=rel_1_1_11=file-view-default#base.py-1768
>  
> > 
> > It's failing to fetch the single value that ought to come back from 
> > the query "SELECT schema_name()". 
> > 
> > I don't know anything about MSSQL or ODBC, but you could try poking 
> > around with pdb in the scalar() function: 
> > 
> > 
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/engine/result.py?at=rel_1_1_11=file-view-default#result.py-1212
>  
> > 
> > Simon 
> > 
> > 
> > On Tue, Jul 4, 2017 at 2:39 PM, Paul Morel 
> > <paul@tartansolutions.com > wrote: 
> >> Mike, 
> >> 
> >> Sorry for the lack of information.  Please find the rest of what you 
> wanted 
> >> below. 
> >> 
> >> Full Stack Trace: 
> >> 
> >>>  Running Direct SQL Query 
> >>> Traceback (most recent call last): 
> >>>   File "test.py", line 45, in  result = con.execute('SELECT * 
> FROM 
> >>> EPO_MODELS') 
> >>>   File "c:\Program 
> >>> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 
> 1139, in 
> >>> execute bind, close_with_result=True).execute(clause, params or {}) 
> >>>   File "c:\Program 
> >>> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 
> 1003, in 
> >>> _connection_for_bind engine, execution_options) 
> >>>   File "c:\Program 
> >>> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 403, 
> in 
> >>> _connection_for_bind conn = bind.contextual_connect() 
> >>>   File "c:\Program 
> >>> Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py",line 
> 2112, in 
> >>> contextual_connect self._wrap_pool_connect(self.pool.connect, None), 
> >>>   File "c:\Program 
> >>> Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py",line 
> 2147, in 
> >>> _wrap_pool_connect return fn() 
> >>>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >>> line 387, in connect return _ConnectionFairy._checkout(self) 
> >>>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >>> line 766, in _checkout fairy = _ConnectionRecord.checkout(pool) 
> >>>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >>> line 516, in checkout rec = pool._do_get() 
> >>>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >>> lin

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
I don't know the downstream implications of doing this, but would it make 
sense to wrap lines 1773-1777 in a try/except, with the except block 
returning self.schema_name?

Like this:

def _get_default_schema_name(self, connection):
    if self.server_version_info < MS_2005_VERSION:
        return self.schema_name
    else:
        query = sql.text("SELECT schema_name()")
        try:
            default_schema_name = connection.scalar(query)
            if default_schema_name is not None:
                return util.text_type(default_schema_name)
            else:
                return self.schema_name
        except:
            return self.schema_name


On Tuesday, July 4, 2017 at 9:00:22 AM UTC-5, Simon King wrote:
>
> The key part of the stack trace is: 
>
> File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", 
> line 1773, in _get_default_schema_name default_schema_name = 
> connection.scalar(query) 
>
> ...which is in this function: 
>
>
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/dialects/mssql/base.py?at=rel_1_1_11=file-view-default#base.py-1768
>  
>
> It's failing to fetch the single value that ought to come back from 
> the query "SELECT schema_name()". 
>
> I don't know anything about MSSQL or ODBC, but you could try poking 
> around with pdb in the scalar() function: 
>
>
> https://bitbucket.org/zzzeek/sqlalchemy/src/8d740d6bd6b8bcc061713443120c67e611cdcb34/lib/sqlalchemy/engine/result.py?at=rel_1_1_11=file-view-default#result.py-1212
>  
>
> Simon 
>
>
> On Tue, Jul 4, 2017 at 2:39 PM, Paul Morel 
> <paul@tartansolutions.com > wrote: 
> > Mike, 
> > 
> > Sorry for the lack of information.  Please find the rest of what you 
> wanted 
> > below. 
> > 
> > Full Stack Trace: 
> > 
> >>  Running Direct SQL Query 
> >> Traceback (most recent call last): 
> >>   File "test.py", line 45, in  result = con.execute('SELECT * 
> FROM 
> >> EPO_MODELS') 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 1139, 
> in 
> >> execute bind, close_with_result=True).execute(clause, params or {}) 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 1003, 
> in 
> >> _connection_for_bind engine, execution_options) 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py",line 403, 
> in 
> >> _connection_for_bind conn = bind.contextual_connect() 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py",line 2112, 
> in 
> >> contextual_connect self._wrap_pool_connect(self.pool.connect, None), 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py",line 2147, 
> in 
> >> _wrap_pool_connect return fn() 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 387, in connect return _ConnectionFairy._checkout(self) 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 766, in _checkout fairy = _ConnectionRecord.checkout(pool) 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 516, in checkout rec = pool._do_get() 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 1138, in _do_get self._dec_overflow() 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 
> 66, 
> >> in __exit__ compat.reraise(exc_type, exc_value, exc_tb) 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 1135, in _do_get return self._create_connection() 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 333, in _create_connection return _ConnectionRecord(self) 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 461, in __init__ self.__connect(first_connect_check=True) 
> >>   File "c:\Program 
> Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", 
> >> line 661, in __connect exec_once(self.connection, self) 
> >>   File "c:\Program 
> >> Files\Anaconda2\lib\site-packages\sqlalchemy\event\attr.py",line 2

Re: [sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-04 Thread Paul Morel
Mike,

Sorry for the lack of information.  Please find the rest of what you wanted 
below.

*Full Stack Trace:*

>  Running Direct SQL Query
> Traceback (most recent call last):
>   File "test.py", line 45, in <module>
>     result = con.execute('SELECT * FROM EPO_MODELS')
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py", line 1139, in execute
>     bind, close_with_result=True).execute(clause, params or {})
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py", line 1003, in _connection_for_bind
>     engine, execution_options)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\orm\session.py", line 403, in _connection_for_bind
>     conn = bind.contextual_connect()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py", line 2112, in contextual_connect
>     self._wrap_pool_connect(self.pool.connect, None),
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py", line 2147, in _wrap_pool_connect
>     return fn()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 387, in connect
>     return _ConnectionFairy._checkout(self)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 766, in _checkout
>     fairy = _ConnectionRecord.checkout(pool)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 516, in checkout
>     rec = pool._do_get()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 1138, in _do_get
>     self._dec_overflow()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 66, in __exit__
>     compat.reraise(exc_type, exc_value, exc_tb)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 1135, in _do_get
>     return self._create_connection()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 333, in _create_connection
>     return _ConnectionRecord(self)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 461, in __init__
>     self.__connect(first_connect_check=True)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\pool.py", line 661, in __connect
>     exec_once(self.connection, self)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\event\attr.py", line 246, in exec_once
>     self(*args, **kw)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\event\attr.py", line 256, in __call__
>     fn(*args, **kw)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1331, in go
>     return once_fn(*arg, **kw)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\strategies.py", line 181, in first_connect
>     dialect.initialize(c)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 165, in initialize
>     super(PyODBCConnector, self).initialize(connection)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", line 1742, in initialize
>     super(MSDialect, self).initialize(connection)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\default.py", line 250, in initialize
>     self._get_default_schema_name(connection)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\dialects\mssql\base.py", line 1773, in _get_default_schema_name
>     default_schema_name = connection.scalar(query)
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\base.py", line 877, in scalar
>     return self.execute(object, *multiparams, **params).scalar()
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\result.py", line 1223, in scalar
>     return row[0]
>   File "c:\Program Files\Anaconda2\lib\site-packages\sqlalchemy\engine\result.py", line 563, in _key_fallback
>     expression._string_or_unprintable(key))
> sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column '0'"


*ODBC Driver Versions:*

> SQL Server (SQLSRV32.DLL) = 6.00.6002.18005
> SQL Server Native Client 10.0 (SQLNCLI10.DLL) = 2007.100.2531.00


*SQL Server Version*

> SQL Server 2008
> Microsoft SQL Server Management Studio = 10.0.6000.29
> Microsoft Analysis Services Client Tools = 10.0.6000.29
> Microsoft Data Access Components (MDAC) = 6.0.6002.18005
> Microsoft MSXML = 3.0 6.0 
> Microsoft Internet Explorer = 9.0

[sqlalchemy] SQLAlchemy won't connect correctly but pyodbc and pymssql will

2017-07-03 Thread Paul Morel
Hi,

I have been trying to diagnose this issue in a Windows Python 2.7 (Anaconda 
installed) environment running SQLAlchemy=1.1.11, pyodbc=4.0.17, and 
pymssql=2.1.3.

Both pyodbc and pymssql connections will successfully connect and query a 
table correctly.  However, when I attempt the same connection and query 
through SQLAlchemy either using an ORM or direct SQL, it fails with the 
following error:

sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for 
> column '0'"


The connection string I'm using is the following:


'mssql+pyodbc://:@CMPDSQL01:1433/CMP?driver=SQL+Server+Native+Client+10.0'


The connection and simple query through pyodbc uses the following:

print " Testing pyodbc Directly"
cnxn = pyodbc.connect(
    r'Trusted_Connection=yes;'
    r'DRIVER={SQL Server Native Client 10.0};'
    r'SERVER=CMPDSQL01;'
    r'DATABASE=CMP;'
)

print cnxn
print " Complete "

print " Running Direct SQL Query on pyodbc Direct Connection"
cursor = cnxn.cursor()
cursor.execute('SELECT * FROM EPO_MODELS')
for r in cursor:
    print r
print " Complete "


The connection and simple query through pymssql uses the following:

print " Testing pymssql Directly"
cnxn = pymssql.connect(server='CMPDSQL01', port='1433', database='CMP')
print cnxn
print " Complete "

print " Running Direct SQL Query on pymssql Direct Connection"
cursor = cnxn.cursor()
cursor.execute('SELECT * FROM EPO_MODELS')
for r in cursor:
    print r
print " Complete "


What is even more perplexing is that the SQLAlchemy connection used to work 
but now no longer works.  Unfortunately I don't know what broke it due to a 
clean start install.

I don't think the EPO_MODELS object model comes into play with this error 
because even a direct SQL query fails in the same way.  However, for 
completeness the EPO_MODELS object model is very simple and looks like the 
following:

class EPO_MODELS(Base):
    __tablename__ = 'EPO_MODELS'

    ID = Column(Integer, primary_key=True, autoincrement=False)
    MODELTYPE = Column(Integer, autoincrement=False)
    MODELNAME = Column(NVARCHAR(255))
    MEMO = Column(NVARCHAR(2000))
    NEXTUNIQUEID = Column(Integer, autoincrement=False)
    MODELSYNC = Column(Integer, autoincrement=False)
    MODELSTATUS = Column(Integer, autoincrement=False)
    AUDITUSERID = Column(Integer, autoincrement=False)
    DATEALTERED = Column(DateTime)
    CREATIONDATE = Column(DateTime)


The direct SQLAlchemy query looks like the following after getting the 
session using the connection string above:

print " Running Direct SQL Query Through SQLAlchemy Connection"
result = con.execute('SELECT * FROM EPO_MODELS')
for r in result:
    print r
print " Complete "

Very much appreciate any insight into what is going on here.  I can't seem 
to find the disconnect. Thanks in advance.

Stack Overflow Post is here:  https://stackoverflow.com/q/44893049/227542

-Paul



[sqlalchemy] IN() Relationship Eager Loading Strategy

2017-03-14 Thread Paul Brown


tl;dr: I'm looking for an elegant way to eager load related rows using 
separate IN() queries, because MySQL is "Using temporary; Using filesort" 
when I try subquery eager loading.

My first attempt used joined eager loading, but that was causing 
performance issues on my queries with nested relationships. This is because 
joined eager loading uses a LEFT OUTER JOIN which loads the full set of 
columns represented by the parents on each row of the results. The "What 
Kind of Loading to Use?" 

 
part of the docs explains the issue pretty well.

Next, I tried switching to subquery eager loading to fix the problem. But, 
MySQL 5.6 says "Using temporary; Using filesort" when I run an explain on 
the query. It did seem to make the query run ~20% faster, but I'm worried 
about the load that would put on my database. Example: 
https://gist.github.com/pawl/bde2737c4d217b468eba1107a03fbcb5

Next, I was hoping I could get SQLAlchemy to use its own internal storage 
in the session to find related rows if I already queried for them. However, 
this only seems to work one way by default. It will still run queries (if 
you’re not doing eager loading) if you access relations from the “one” side 
of a one to many relationship. Example: 
https://gist.github.com/pawl/abc0e536219144e569c728c8590b0d39

At this point, I'm really wishing there was an eager loading strategy that 
queries for the related rows using IN(). So, I read the "Creating Custom 
Load Rules" 

 
section of the docs and came up with an example that added the related rows 
with "set_committed_value" after I queried for them separately: 
https://gist.github.com/pawl/df5ba8923d9929dd1f4fc4e683eced40

Is there a more elegant way to do this? And, has adding an IN() eager 
loading strategy to SQLAlchemy been considered?
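
For readers skimming the archive, the gist linked above boils down to
something like the following sketch (Parent/Child are illustrative names;
an IN() strategy along these lines later shipped in SQLAlchemy 1.2 as
selectinload()):

from sqlalchemy.orm.attributes import set_committed_value

parents = session.query(Parent).all()
children = (
    session.query(Child)
    .filter(Child.parent_id.in_([p.id for p in parents]))
    .all()
)
by_parent = {}
for child in children:
    by_parent.setdefault(child.parent_id, []).append(child)
# mark each collection as loaded so no lazy load fires later
for parent in parents:
    set_committed_value(parent, 'children', by_parent.get(parent.id, []))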



[sqlalchemy] Libreoffice base into sqlalchemy

2017-02-18 Thread Paul Henry
I have a LibreOffice Base database consisting of 104 tables and the 
relationships developed between them.  I would like to use these tables in 
sqlalchemy but cannot find any way to do this.  I have the original csv 
files for the tables but do not really want to take a step back.  Is there 
a way to get the odb files into sqlalchemy?
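
One practical route is to go through the CSV exports after all; a sketch,
assuming each Base table is exported to a CSV file first (file and table
names here are illustrative, and every column is loaded as text):

import csv
from sqlalchemy import create_engine, MetaData, Table, Column, Text

engine = create_engine('sqlite:///converted.db')
metadata = MetaData()

with open('customers.csv') as f:
    reader = csv.DictReader(f)
    table = Table('customers', metadata,
                  *(Column(name, Text) for name in reader.fieldnames))
    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(table.insert(), list(reader))

The relationships would still have to be re-declared by hand as ForeignKey
constraints; the odb container itself (an embedded HSQLDB) has no standard
SQLAlchemy dialect.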



[sqlalchemy] Re: Selecting distinct entries based on max timestamp

2016-11-21 Thread Paul Giralt
Thank you Mike and Jonathan for your replies. I will experiment with this 
and let you know how it works out. 



[sqlalchemy] Selecting distinct entries based on max timestamp

2016-11-18 Thread Paul Giralt
I'm having trouble figuring out how to accomplish this task using 
SQLAlchemy. Basically I have a table that maps a user's skill levels as 
follows: 

class Skillmap(db.Model):
    __tablename__ = 'skillmap'
    id = db.Column(db.UUID(), primary_key=True)
    user_id = db.Column(db.UUID(), db.ForeignKey('user.id'))
    skill_id = db.Column(db.UUID(), db.ForeignKey('skill.id'))
    level_id = db.Column(db.UUID(), db.ForeignKey('level.id'))
    # note: pass the callable itself, not datetime.utcnow(), so the
    # timestamp is computed per row rather than once at import time
    timestamp = db.Column(db.DateTime, default=datetime.utcnow,
                          onupdate=datetime.utcnow)

A particular user will be associated with a variety of skills and those 
skills are at a certain level (say 1 - 10 for simplicity). 

So for example, I could insert into the DB an entry that says: 

user1   skill1   level 5    timestamp 1
user1   skill2   level 4    timestamp 2
user1   skill3   level 7    timestamp 3

then later on, the user updates their skill: 

user1   skill1   level 10   timestamp 4
user1   skill2   level 10   timestamp 5

When the user updates the skill, I don't update the DB record, Rather, I 
insert a new record into the DB so that I can store the history of the 
skill level over time. 

When I want to query for the current skill levels for a user (latest 
timestamp), I want to get back the latest entries for any given skill. In 
other words, for the above example I'd like to get the results: 

user1   skill1   level 10   timestamp 4
user1   skill2   level 10   timestamp 5
user1   skill3   level 7    timestamp 3

I've tried playing around with func.max() but I haven't been able to get it 
to do what I want. 

I found this article that describes a solution for a similar question but 
the solution is just SQL syntax: 

http://stackoverflow.com/questions/612231/how-can-i-select-rows-with-maxcolumn-value-distinct-by-another-column-in-sql

Is there an easy way to accomplish this using SQLAlchemy? Any help would be 
greatly appreciated. 
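
The usual shape of the answer is the same as in that Stack Overflow post:
group by (user_id, skill_id), take max(timestamp) in a subquery, and join
back to the full rows. A sketch against the model above (some_user_id is a
placeholder):

from sqlalchemy import func

latest = (
    db.session.query(
        Skillmap.user_id,
        Skillmap.skill_id,
        func.max(Skillmap.timestamp).label('max_ts'),
    )
    .group_by(Skillmap.user_id, Skillmap.skill_id)
    .subquery()
)

current_skills = (
    db.session.query(Skillmap)
    .join(latest, (Skillmap.user_id == latest.c.user_id)
                  & (Skillmap.skill_id == latest.c.skill_id)
                  & (Skillmap.timestamp == latest.c.max_ts))
    .filter(Skillmap.user_id == some_user_id)
    .all()
)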



 



Re: [sqlalchemy] similar to OrderingList but order in code instead of via sql?

2016-10-27 Thread Paul Winkler
Wow, thanks for the very detailed reply Mike!


This is a rather anticlimactic followup, I'm afraid :)
But we mulled this over a bit and just slapped an index on the column in 
question instead.

- Paul

On Wednesday, October 26, 2016 at 9:19:23 PM UTC-4, Mike Bayer wrote:
>
>
>
> On 10/26/2016 05:50 PM, Paul Winkler wrote: 
> > So I currently have this: 
> > 
> > thingies = relationship( 
> > "Thingy", 
> > order_by="Thingy.sort_order", 
> > collection_class=ordering_list("sort_order"), 
> > ) 
> > 
> > 
> > But could maybe change it to: 
> > 
> > 
> > thingies = relationship( 
> > "Thingy", 
> > collection_class=ordering_list( 
> > "sort_order", reorder_on_append=True) 
> > ) 
> > 
> > I *think* that would do what I want, and the related items would be put 
> > in place by reordering the list as each is appended during load, 
> > but a) that seems like a rather inefficient sort algorithm (I'd rather 
> > load the full list 
> > and then implicitly call reorder() exactly once, but I don't see a hook 
> > for doing something 
> > like that on object load) and b) `reorder_on_append` comes with big 
> > warnings about 
> > dangers of concurrent modification. 
>
>
> ordering list has lots of weird edges to it that I've never had the 
> means to figure out the solution towards.  In this case, sorting by 
> one-append-at-a-time isn't that efficient, but otherwise I don't think 
> you'd have the concurrent modification issue here, I'm pretty sure 
> ordering list is meant to work with the ORM's normal append to the 
> collection as the source.I think the edges have more to do with when 
> the sort key is the primary key and people are trying to change the sort 
> keys and stuff like that. 
>
> But the idea to hit the list after population and sort it just once, and 
> not really worrying about ordering list being involved at all, 
> is a good one. Looking around though, we have a lot of ways to get a 
> hold of an object event for when we first build it from a row, but, 
> there's no hook that corresponds to, "we've eagerly loaded all the 
> collections and we're about to yield the object", nor is there a "we've 
> just lazy loaded this collection and stuffed it all into the list" 
> event.   These are events we probably should have, and it might be good 
> as a "collection_loaded" event that works equally across 
> lazy/eager/whatever loads, and is oriented towards "I want to mutate the 
> collection in a certain way", which I think right now we have via the 
> attributes.set_committed_value method (you'd just set a new list into it 
> with the ordering you want). 
>
> right now, without that "collection_loaded" event, to do this approach 
> you'd be stuck subclassing Query to intercept results as they are 
> emitted via __iter__.  You'd only have to override the __iter__ method 
> though with a filter and it can be programmatically determined what 
> should be done with the objects.  It would be either: 1. detect all the 
> objects in the result that are of type Foo and have a collection ".bar" 
> that you know was eagerly loaded and needs to be sorted, or 2. detect 
> that we just lazy loaded the .bar collection on a Foo, sort the result 
> (the event hook would be applied to "Foo.bar" in the general sense and 
> fire off for any "here's a newly populated Foo.bar for you to mutate in 
> place"). 
>
> We could even write an interim Query subclass that overrides __iter__ 
> using this technique and then produces the "collection loaded" event 
> itself.   You could in theory code to that, we could write tests for it, 
> then when SQLAlchemy adds the actual "collection_loaded" event hook 
> you're all set. 
>
> Of course this all sounds like I'm looking for free contributions to my 
> software, and you'd be right :).  TLDR, sort of doable-ish right now 
> with ugliness, should be doable nicely with a new API feature, and a 
> path can probably be drawn between those points as well so that the 
> feature can live first as an external hack and later as a supported API. 
>
>
> > 
> > I could of course wrap the relationship in a property that does the 
> sorting, 
> > but then would have to update a bunch of existing query code that 
> assumes 
> > this attribute is a relationship that can be joined on. 
>
> rightwell right there's hooks that could automate that too (in 
> fact the "attribute_instrument" event 
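
A rough sketch of the Query-subclass interception described in the quoted
reply (illustrative only, against the 1.x-era Query API; Foo and its
"thingies" collection are the names used in this thread):

from sqlalchemy.orm import Query

class SortingQuery(Query):
    def __iter__(self):
        for obj in super(SortingQuery, self).__iter__():
            # only touch collections that are already loaded, so this
            # never triggers an extra lazy load
            if isinstance(obj, Foo) and 'thingies' in obj.__dict__:
                obj.__dict__['thingies'].sort(key=lambda t: t.sort_order)
            yield obj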

[sqlalchemy] similar to OrderingList but order in code instead of via sql?

2016-10-26 Thread Paul Winkler
Hi all,

I've been happily using OrderingList as a collection_class for some time.
Recently discovered a slow query due to multiple JOINs where the
order_by on the relationship ends up causing a table scan
because the field in question is not indexed.

One solution of course would be to add an index, but I'm looking
into other options because alters of the (large) mysql table in 
question are getting quite slow, and the number of related objects is 
always small
(< 20).


I had the thought that if I could figure out the right sqlalchemy hooks
to leverage, I could have a collection class that behaves like OrderingList,
but where the list representation would be created by ordering in memory in
python code when loaded from the db.  I've been trying to wrap my head
around how to make a CollectionClass do this and don't think I grok things
yet.  One thought would be to simply keep using OrderingList, remove the
order_by on the relationship, and use reorder_on_append.

So I currently have this:

thingies = relationship(
    "Thingy",
    order_by="Thingy.sort_order",
    collection_class=ordering_list("sort_order"),
)


But could maybe change it to:


thingies = relationship(
    "Thingy",
    collection_class=ordering_list(
        "sort_order", reorder_on_append=True)
)

I *think* that would do what I want, and the related items would be put
in place by reordering the list as each is appended during load, but a)
that seems like a rather inefficient sort algorithm (I'd rather load the
full list and then implicitly call reorder() exactly once, but I don't see
a hook for doing something like that on object load) and b)
`reorder_on_append` comes with big warnings about dangers of concurrent
modification.

I could of course wrap the relationship in a property that does the sorting,
but then would have to update a bunch of existing query code that assumes
this attribute is a relationship that can be joined on.

I feel like I'm barking up the wrong tree. Any pointers on where I should 
be looking?

- Paul



[sqlalchemy] gevent.monkey.patch_all() breaks my Sqlalchemy connection...

2016-07-19 Thread Paul Becotte
Hi!  So, I have a traditional Flask app using flask-sqlalchemy and 
sqlalchemy ORM, served with uwsgi.  Uwsgi is set with processes=4 and 
threads=4.  Yesterday I added grequests to improve a single area of the app 
a bit (I had to do a couple hundred external API calls to build a single 
data structure).  No problem- except that I started getting errors like 

  File "/usr/local/lib/python2.7/dist-packages/flask_sqlalchemy/__init__.py", line 423, in first_or_404
    rv = self.first()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2634, in first
    ret = list(self[0:1])
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2457, in __getitem__
    return list(res)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2736, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2751, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1078, in _execute_context
    None, None)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1073, in _execute_context
    context = constructor(dialect, self, conn, *args)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 558, in _init_compiled
    self.cursor = self.create_cursor()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 748, in create_cursor
    return self._dbapi_connection.cursor()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 852, in cursor
    return self.connection.cursor(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/mysql/connector/connection.py", line 807, in cursor
    raise errors.OperationalError("MySQL Connection not available.")
OperationalError: (mysql.connector.errors.OperationalError) MySQL Connection not available.


From playing around with it, importing grequests was the line of code that 
made the error appear/disappear.  Further experimenting shows that `import 
gevent.monkey; gevent.monkey.patch_all()` anywhere in the code breaks the 
app as well.  This doesn't happen on every request, and I am not actually 
using greenlets or the gevent loop anywhere in the endpoints I am 
accessing.  For now this is easy enough for me to just rip out grequests, 
but I was wondering if anyone had any ideas as to why this is happening?
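
One commonly suggested mitigation, though not a verified fix for this exact
report: patch selectively, so the real threads that uwsgi creates (and the
pooled DB connections they share) keep the unpatched thread primitives:

import gevent.monkey
gevent.monkey.patch_all(thread=False)   # leave threading untouched

import grequests   # import only after patching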



[sqlalchemy] Composite primary key with nullable columns

2016-01-05 Thread Paul Johnston
Hi,

I have a situation where I'm trying to use a composite primary key, where 
one of the columns can be null.

However, when I try to update an object I get the following error:
sqlalchemy.orm.exc.FlushError: Can't update table test using NULL for 
primary key value on column test.id2

Sample code here: http://dpaste.com/3Q8T09T

Is this something SQLAlchemy can do? From some quick tests, it seems SQLite 
can do it, not sure about other databases.

Paul
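
For the archive: standard SQL requires primary key columns to be NOT NULL
(SQLite is historically lax about this), and the ORM needs a fully non-null
identity to track rows. A common workaround, sketched below, is a surrogate
key plus a unique constraint over the old composite key; most backends
allow NULLs inside a unique constraint:

from sqlalchemy import Column, Integer, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Test(Base):
    __tablename__ = 'test'
    pk = Column(Integer, primary_key=True)   # surrogate identity
    id1 = Column(Integer, nullable=False)
    id2 = Column(Integer)                    # may be NULL
    __table_args__ = (UniqueConstraint('id1', 'id2'),)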



[sqlalchemy] 10 Reasons to love SQLAlchemy

2015-09-04 Thread Paul Johnston
Hi guys,

I decided to express my love of SQLAlchemy through a blog post:
http://pajhome.org.uk/blog/10_reasons_to_love_sqlalchemy.html

Enjoy,

Paul



[sqlalchemy] Declarative: defining relationship and column in one line

2015-05-19 Thread Paul Johnston
Hi,

Sorry if this is a FAQ, but is it possible to define a relationship and its 
column all at once? E.g. instead of:

type_id = db.Column(db.Integer, db.ForeignKey('linktype.id'))
type = db.relationship('LinkType')

Something like:

type = db.relationship('LinkType', colname='type_id')

In fact, it'd be good for the colname to default to xxx_id - although 
allow overriding.

This certainly was possible with Elixir.

Paul
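
There's no built-in one-liner in the declarative layer, but a small helper
gets close, because declarative picks up class attributes created by tuple
unpacking; a sketch, untested against Flask-SQLAlchemy specifically:

def fk_relationship(target, fk_col):
    """Return a (Column, relationship) pair for a simple many-to-one."""
    return (db.Column(db.Integer, db.ForeignKey(fk_col)),
            db.relationship(target))

class Link(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    type_id, type = fk_relationship('LinkType', 'linktype.id')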



[sqlalchemy] Dangers of setting a session's expire_on_commit=False?

2014-07-25 Thread Paul Molodowitch
Hi all - so I'm thinking of disabling the expire_on_commit property for my
default sessionmaker object, and I was wondering what the potential issues
with this were.  Is it simply that the next access of the data on it could
be using out-of-date information?  Don't objects potentially have this
problem anyway, in the sense that if they are accessed TWICE after a
commit, the second access will use the data cached from the first, and
could again be out of date?

To give some background - we're in the middle of converting an existing
codebase to use sqlalchemy, and there are number of classes that act both
as database wrappers, AND data structures.  That is, when first
constructed, they are populated with data from the database; but from then
on out, they just keep the cached data.  So they would behave similarly to
ORM-mapped objects, if expire_on_commit is False.  The thinking here is
that for most of these classes, the data changes fairly infrequently, and
it's not catastrophic if it's somewhat out of date. Also we don't want to
keep hitting the database more than necessary...  and, finally, we might
need to have access to the cached data for a long time (ie, as long as the
user has a ui window open).
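
For reference, the setting under discussion is passed straight to the
sessionmaker (standard API; the URL is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('mysql://user:password@host/dbname')
Session = sessionmaker(bind=engine, expire_on_commit=False)

The trade-off is exactly the one described above: after a commit, objects
keep whatever state they had at their last load or flush, and nothing
forces a re-fetch.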



Re: [sqlalchemy] problems with mysql reflect

2014-07-11 Thread Paul Molodowitch
Done:

https://bitbucket.org/zzzeek/sqlalchemy/issue/3123/mysql-reflection-on-python-26-causes-error

FYI, for now we're patching sqlalchemy/engine/reflection.py, changing
Inspector.reflecttable after it gets tbl_opts (line 450 in my code):

# reflect table options, like mysql_engine
tbl_opts = self.get_table_options(table_name, schema,
                                  **table.dialect_kwargs)
# Python 2.6 doesn't except using dicts with unicode keys for kwargs,
# ie, myFunc(**{u'foo':1}) will raise an error
if sys.version_info < (2, 7):
    for key, val in tbl_opts.items():
        if isinstance(key, unicode):
            del tbl_opts[key]
            tbl_opts[str(key)] = val

There's likely a much better / more elegant way to handle this, but it
seems to do the trick for us...

Let me know anything's not clear, or you're having troubles replicating, or
there's anything else I can do to help!

- Paul


On Thu, Jul 10, 2014 at 6:06 PM, Mike Bayer mike...@zzzcomputing.com
wrote:


 On 7/10/14, 3:49 PM, Paul Molodowitch wrote:

 Whoops! Just noticed this was the totally wrong traceback!

  Here's the correct trace:

  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "test.py", line 155, in <module>
      metadata.reflect(db.engine, only=tables)
    File "./sqlalchemy/sql/schema.py", line 3277, in reflect
      Table(name, self, **reflect_opts)
    File "./sqlalchemy/sql/schema.py", line 352, in __new__
      table._init(name, metadata, *args, **kw)
    File "./sqlalchemy/sql/schema.py", line 425, in _init
      self._autoload(metadata, autoload_with, include_columns)
    File "./sqlalchemy/sql/schema.py", line 437, in _autoload
      self, include_columns, exclude_columns
    File "./sqlalchemy/engine/base.py", line 1198, in run_callable
      return callable_(self, *args, **kwargs)
    File "./sqlalchemy/engine/default.py", line 355, in reflecttable
      return insp.reflecttable(table, include_columns, exclude_columns)
    File "./sqlalchemy/engine/reflection.py", line 463, in reflecttable
      for col_d in self.get_columns(table_name, schema,
                                    **table.dialect_kwargs):
  TypeError: get_columns() keywords must be strings


 with metadata.reflect(), OK.  Can you please make a very short and
 self-contained test case and post a bug report?  thanks.


  --
 You received this message because you are subscribed to a topic in the
 Google Groups sqlalchemy group.
 To unsubscribe from this topic, visit
 https://groups.google.com/d/topic/sqlalchemy/E3MhX1m8QqQ/unsubscribe.
 To unsubscribe from this group and all its topics, send an email to
 sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




Re: [sqlalchemy] problems with mysql reflect

2014-07-11 Thread Paul Molodowitch
oops, that line should read "Python 2.6 doesn't accept", not "except".

My hands are too used to typing except SomeError:...


On Fri, Jul 11, 2014 at 11:38 AM, Paul Molodowitch elron...@gmail.com
wrote:

 Done:


 https://bitbucket.org/zzzeek/sqlalchemy/issue/3123/mysql-reflection-on-python-26-causes-error

 FYI, for now we're patching sqlalchemy/engine/reflection.py, changing
 Inspector.reflecttable after it gets tbl_opts (line 450 in my code):

 # reflect table options, like mysql_engine
 tbl_opts = self.get_table_options(table_name, schema,
                                   **table.dialect_kwargs)
 # Python 2.6 doesn't except using dicts with unicode keys for kwargs,
 # ie, myFunc(**{u'foo':1}) will raise an error
 if sys.version_info < (2, 7):
     for key, val in tbl_opts.items():
         if isinstance(key, unicode):
             del tbl_opts[key]
             tbl_opts[str(key)] = val

 There's likely a much better / more elegant way to handle this, but it
 seems to do the trick for us...

 Let me know anything's not clear, or you're having troubles replicating,
 or there's anything else I can do to help!

 - Paul


 On Thu, Jul 10, 2014 at 6:06 PM, Mike Bayer mike...@zzzcomputing.com
 wrote:


 On 7/10/14, 3:49 PM, Paul Molodowitch wrote:

 Whoops! Just noticed this was the totally wrong traceback!

  Here's the correct trace:

  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "test.py", line 155, in <module>
      metadata.reflect(db.engine, only=tables)
    File "./sqlalchemy/sql/schema.py", line 3277, in reflect
      Table(name, self, **reflect_opts)
    File "./sqlalchemy/sql/schema.py", line 352, in __new__
      table._init(name, metadata, *args, **kw)
    File "./sqlalchemy/sql/schema.py", line 425, in _init
      self._autoload(metadata, autoload_with, include_columns)
    File "./sqlalchemy/sql/schema.py", line 437, in _autoload
      self, include_columns, exclude_columns
    File "./sqlalchemy/engine/base.py", line 1198, in run_callable
      return callable_(self, *args, **kwargs)
    File "./sqlalchemy/engine/default.py", line 355, in reflecttable
      return insp.reflecttable(table, include_columns, exclude_columns)
    File "./sqlalchemy/engine/reflection.py", line 463, in reflecttable
      for col_d in self.get_columns(table_name, schema,
                                    **table.dialect_kwargs):
  TypeError: get_columns() keywords must be strings


 with metadata.reflect(), OK.  Can you please make a very short and
 self-contained test case and post a bug report?  thanks.








Re: [sqlalchemy] problems with mysql reflect

2014-07-10 Thread Paul Molodowitch
Whoops! Just noticed this was the totally wrong traceback!

Here's the correct trace:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 155, in <module>
    metadata.reflect(db.engine, only=tables)
  File "./sqlalchemy/sql/schema.py", line 3277, in reflect
    Table(name, self, **reflect_opts)
  File "./sqlalchemy/sql/schema.py", line 352, in __new__
    table._init(name, metadata, *args, **kw)
  File "./sqlalchemy/sql/schema.py", line 425, in _init
    self._autoload(metadata, autoload_with, include_columns)
  File "./sqlalchemy/sql/schema.py", line 437, in _autoload
    self, include_columns, exclude_columns
  File "./sqlalchemy/engine/base.py", line 1198, in run_callable
    return callable_(self, *args, **kwargs)
  File "./sqlalchemy/engine/default.py", line 355, in reflecttable
    return insp.reflecttable(table, include_columns, exclude_columns)
  File "./sqlalchemy/engine/reflection.py", line 463, in reflecttable
    for col_d in self.get_columns(table_name, schema,
                                  **table.dialect_kwargs):
TypeError: get_columns() keywords must be strings



Re: [sqlalchemy] problems with mysql reflect

2014-07-09 Thread Paul Molodowitch
I just ran into the same problem, using python 2.6 + sqlalchemy 0.9.4 / 
0.9.6 + MySQL.

The problem in my case IS definitely related to python 2.6 - basically, 
python 2.6 doesn't allow unicode keywords, while 2.7 does. Ie, if you do 
this:

def foo(**kwargs):
    print kwargs

foo(**{u'thing': 1})


...it will work in 2.7, but give this error in 2.6:

TypeError: foo() keywords must be strings


For reference, these were the table.dialect_kwargs.keys() that were making 
trouble in 2.6:

[u'mysql_comment', u'mysql_engine', u'mysql_default charset']


Fine, except for the fact that they're unicode...
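
For a self-contained illustration of the 2.6 failure and the str() key
workaround (foo is a stand-in function, as above; the dict contents are
illustrative):

    import sys

    def foo(**kwargs):
        print kwargs

    opts = {u'mysql_engine': 'InnoDB'}
    if sys.version_info < (2, 7):
        # Python 2.6 rejects unicode keyword names, so coerce them to str
        opts = dict((str(k), v) for k, v in opts.items())
    foo(**opts)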



Re: [sqlalchemy] problems with mysql reflect

2014-07-09 Thread Paul Molodowitch
Sure - I think it's the same as the original poster's, but the traceback
I'm getting is:

>>> inspect(Project).relationships
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./sqlalchemy/util/langhelpers.py", line 712, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "./sqlalchemy/orm/mapper.py", line 2037, in relationships
    return self._filter_properties(properties.RelationshipProperty)
  File "./sqlalchemy/orm/mapper.py", line 2054, in _filter_properties
    configure_mappers()
  File "./sqlalchemy/orm/mapper.py", line 2560, in configure_mappers
    mapper._post_configure_properties()
  File "./sqlalchemy/orm/mapper.py", line 1673, in _post_configure_properties
    prop.init()
  File "./sqlalchemy/orm/interfaces.py", line 143, in init
    self.do_init()
  File "./sqlalchemy/orm/relationships.py", line 1510, in do_init
    self._setup_join_conditions()
  File "./sqlalchemy/orm/relationships.py", line 1586, in _setup_join_conditions
    can_be_synced_fn=self._columns_are_mapped
  File "./sqlalchemy/orm/relationships.py", line 1849, in __init__
    self._determine_joins()
  File "./sqlalchemy/orm/relationships.py", line 1915, in _determine_joins
    consider_as_foreign_keys=consider_as_foreign_keys
  File "<string>", line 2, in join_condition
  File "./sqlalchemy/sql/selectable.py", line 692, in _join_condition
    b.foreign_keys,
AttributeError: 'tuple' object has no attribute 'foreign_keys'



On Wed, Jul 9, 2014 at 2:04 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 7/9/14, 3:41 PM, Paul Molodowitch wrote:

 I just ran into the same problem, using python 2.6 + sqlalchemy 0.9.4 /
 0.9.6 + MySQL.

  The problem in my case IS definitely related to python 2.6 - basically,
 python 2.6 doesn't allow unicode keywords, while 2.7 does. Ie, if you do
 this:

   def foo(**kwargs):
       print kwargs

   foo(**{u'thing': 1})


  ...it will work in 2.7, but give this error in 2.6:

  TypeError: foo() keywords must be strings


  For reference, these were the table.dialect_kwargs.keys() that were
 making trouble in 2.6:

  [u'mysql_comment', u'mysql_engine', u'mysql_default charset']


  Fine, except for the fact that they're unicode...

 OK but this is not a codepath within SQLAlchemy's MySQL reflection code.
 I'm PDBing right now into 0.9, using py2.6 + use_unicode=1; the reflected
 table options are sent directly into table.kwargs, not using the
 constructor or any **kw system.  the tests pass, and the keys are coming
 back as u''.

 if you can show me where table.kwargs gets used implicitly as a
 constructor arg i can fix that.





Re: [sqlalchemy] Automatically set primary key to None when deleted?

2014-07-03 Thread Paul Molodowitch
On Wed, Jul 2, 2014 at 9:22 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 7/2/14, 10:05 PM, Paul Molodowitch wrote:

  Suppose I have a super simple table like this:

  class Dinosaur(Base):
      __tablename__ = 'dinosaurs'
      id = Column(Integer, primary_key=True)
      name = Column(String(255))


  We assume that the id is set up in such a way that by default it always
 gets a unique value - ie, it uses autoincrement in MySQL, or a sequence in
 postgres, etc.

  Now, suppose I get an instance of this class, and then delete it:

  steggy = session.query(Dinosaur).filter_by(name='Steggy').one()
  print steggy.id
  session.delete(steggy)
  session.commit()
  print steggy.id


  What I'd ideally like to see is that it first print the id of the row
 that it pulled from the database, and then print 'None':

  30
 None


  Is there any way that I can configure the id column / property so that
 it is automatically cleared on delete like this?


 the steggy object is a proxy for a database row.  when you delete that
 row, then commit the transaction, the object is detached from the session,
 and everything on it is expired.  there is no row.  check
 inspect(steggy).deleted, it will say True - that means in your system, the
 object is meaningless.  ideally no part of your program would be looking at
 that proxy any more, you should throw it away.  it means nothing.
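
For reference, a minimal sketch of the inspection approach described above,
using the Dinosaur model from the original post (the flush is added so the
deleted state is observable before the commit detaches the instance):

    from sqlalchemy import inspect

    steggy = session.query(Dinosaur).filter_by(name='Steggy').one()
    session.delete(steggy)
    session.flush()
    print inspect(steggy).deleted   # True: the DELETE has been flushed
    session.commit()
    print inspect(steggy).detached  # True: commit evicted the instance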


That makes sense... but if it really means nothing, and we shouldn't be
looking at it, then why keep its attributes around at all?  Particularly
since sqlalchemy has already established that it's willing to expire dict
members when they may not be valid anymore - ie, what it does to clear any
cached values from a row proxy after the session is committed.

Of course, you could make the case that other pieces of the program may
want to inspect the data that was on there, after the fact... maybe you're
going to print out something that says "RIP Steggy", or something - but in
that case, the one field that really DOESN'T make any sense to keep
(and it seems like it would be a common pattern!) is the one that exists
solely as a means to look it up in the database: its auto-incremented id
column.  Which is what prompted this question...


If not, as a consolation prize, I'd also be interested in the easiest way
 to query if a given instance exists in the database - ie, I could do
 something like:


  session.exists(steggy)


  OR

  steggy.exists()



 from sqlalchemy import inspect

 def exists(session, obj):
     state = inspect(obj)
     return session.query(state.mapper).get(state.identity) is not None

 print exists(sess, a1)


Hmm... very interesting.  I'll have to read up on what exactly this is
doing (ie, what is state.identity?)... It's possible that
inspect(steggy).deleted may just give me what I need though. Thanks for
both those tips! (In case you couldn't tell, I'm still new to / exploring
sqlalchemy...)


 ...which, in this case, would simply run a query to see if any dinosaurs
 exist with the name Steggy.

 that's totally different.  That's a WHERE criterion on the name field,
 which is not the primary key.  that's something specific to your class
 there.


True.  There's really no way for a generic exists function to know what
conditions you want to query a generic class on to determine existence.
 Which is why I was suggesting the uniqueness constraint...

 Needing to set up some extra parameters to make this possible - such as
adding a unique constraint on the name column -

 OK, so you want a function that a. receives an object b. looks for UNIQUE
 constraints on it c. queries by those unique constraints (I guess you want
 the first one present?  not clear.  a table can have a lot of unique
 constraints on it) that would be:


Sort of - the thinking here was that you could just ask, "If I tried to
insert this object into the table, would it violate any uniqueness
constraints?", and get back a boolean result... and you could use that as a
reasonable heuristic for determining existence, in a fairly generic way.


 from sqlalchemy import and_, inspect, UniqueConstraint

 def exists(session, obj):
     state = inspect(obj)
     table = state.mapper.local_table
     for const in table.constraints:
         if isinstance(const, UniqueConstraint):
             crit = and_(*[col == getattr(obj, col.key) for col in const])
             return session.query(state.mapper).filter(crit).count() > 0
     else:
         return False
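
Hypothetical usage of the sketch above, assuming the Dinosaur model from
earlier in the thread with a unique constraint added on its name column:

    steggy = Dinosaur(name='Steggy')
    print exists(session, steggy)  # True if a 'Steggy' row already exists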


Yep, it looks like that's doing basically what I was thinking of.  Thanks!

the unique constraints are a set though.   not necessarily deterministic
 which one it would locate first.  I'd use more of some kind of declared
 system on the class:


Not clear on why this matters - if we're iterating through all the
constraints, and returning True if any of them is matched, what difference
does it make which one is evaluated first?  Except potentially from a
performance standpoint, I suppose...


 class X(Base

Re: [sqlalchemy] Automatically set primary key to None when deleted?

2014-07-03 Thread Paul Molodowitch

 Particularly since sqlalchemy has already established that it's willing to
 expire dict members when they may not be valid anymore - ie, what it does
 to clear any cached values from a row proxy after the session is
 committed.

 well it doesn't expire the deleted object right now because it's been
 evicted from the Session by the time the commit goes to expire things.
 Changing that behavior now would definitely bite a lot of people who depend
 on how it is right now (other people who are also looking at their deleted
 objects against my recommendations... :) )


Makes sense - maybe it could be a configurable option? Dunno how many
people (besides me!) would be interested in such behavior, though... =P

 from sqlalchemy import inspect

 def exists(session, obj):
     state = inspect(obj)
     return session.query(state.mapper).get(state.identity) is not None

 print exists(sess, a1)


  Hmm... very interesting.  I'll have to read up on what exactly this is
 doing (ie, what is state.identity?)...

 it's documented here:
 http://docs.sqlalchemy.org/en/rel_0_9/orm/internals.html?highlight=instancestate#sqlalchemy.orm.state.InstanceState.identity


Thanks for the link!

 the unique constraints are a set though.   not necessarily deterministic
which one it would locate first.  I'd use more of some kind of declared
system on the class:


  Not clear on why this matters - if we're iterating through all the
 constraints, and returning True if any of them is matched, what difference
 does it make which one is evaluated first?  Except potentially from a
 performance standpoint, I suppose...

 what if there are two constraints, and only one is satisfied for a given
 object's values (e.g. that constraint is now satisfied by some other row),
 while the other one is not?  Is the answer True or False?


In the scenario I was envisioning, True (ie, it exists).  Basically, "Would
it violate ANY unique constraints if I tried to insert it?" Yes.

Of course, I see your point: that in some situations, this might not fit
conceptually with the answer to the question, "Does THIS object exist in
the database?"  But I guess that's likely your point... that there isn't
really a good universal way to answer that question.



[sqlalchemy] Automatically set cascade settings based on ON DELETE / ON UPDATE when reflecting?

2014-07-03 Thread Paul Molodowitch
I noticed that sqlalchemy now properly sets the onupdate / ondelete 
properties of foreign keys when reflecting tables:

https://bitbucket.org/zzzeek/sqlalchemy/issue/2183/support-on-delete-update-in-foreign-key

However, it doesn't seem to set the cascade properties of relationships to 
reflect these properties. I.e., if the Child table references the Parent 
table with a foreign key that has ON DELETE CASCADE, and the referencing 
column does not allow NULL, then when you delete a parent row that has 
children, you will get an error, because sqlalchemy will try to set the 
child's ref to NULL.

Ideally we should add "delete" to the relationship's cascade properties 
(and probably "delete-orphan" as well), and then set passive_updates=True.

Or am I missing something obvious  / doing something wrong / etc?
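
For concreteness, a hedged sketch of the manual configuration being asked
about - mirroring an ON DELETE CASCADE foreign key on the ORM side. The
Parent/Child names are illustrative, not from a real schema in this thread,
and it uses passive_deletes, the flag that defers row deletion to the
database's own ON DELETE behavior:

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        children = relationship(
            "Child",
            cascade="all, delete-orphan",  # ORM-level delete cascade
            passive_deletes=True)          # trust the DB's ON DELETE CASCADE

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer,
                           ForeignKey('parent.id', ondelete='CASCADE'),
                           nullable=False)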

- Paul



Re: [sqlalchemy] Automatically set cascade settings based on ON DELETE / ON UPDATE when reflecting?

2014-07-03 Thread Paul Molodowitch
I wasn't advocating making this connection in general (though I like the
autoconfigure option!), but only specifically for the case of reflection -
in this case, we know the DB supports it, and it would result in a better
python interface to the already existing tables.


On Thu, Jul 3, 2014 at 3:20 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 7/3/14, 6:15 PM, Mike Bayer wrote:
  On 7/3/14, 5:45 PM, Paul Molodowitch wrote:
   I noticed that sqlalchemy now properly sets the onupdate / ondelete
   properties of foreign keys when reflecting tables:
 
 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/2183/support-on-delete-update-in-foreign-key
 
  However, it doesn't seem to set the cascade properties of
  relationships to reflect these properties. ie, if the Child table
  references the Parent table with a foreign key that has ON DELETE
  CASCADE, and the referencing column does not allow NULL, when you
  delete a parent row that has children, you will get an error,
  because sqlalchemy will try to set the child's ref to NULL.
 
  ideally we should add delete in the relationship's cascade
  properties (and probably delete-orphan as well), and then set
  passive_updates=True.
 
  Or am I missing something obvious  / doing something wrong / etc?
  the configuration of a Column or ForeignKey has never been directly
  linked to how relationship() gets configured.   passive_updates in
  particular is a thorny one as not every database supports ON UPDATE
  CASCADE, but for that matter not every database even supports ON DELETE
  CASCADE.   There's also lots of variants to ON UPDATE and ON DELETE and
  SQLAlchemy has no awareness of any of these directly.
 
  If we were to explore some automatic configuration of relationship based
  on these attributes of ForeignKey, it would take place within the
  automap extension: see
  http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html.
 
  There are also recipes such that both relationship() and ForeignKey()
  are generated at once, these are also good places for this kind of thing
  to happen.  See
 
 https://bitbucket.org/zzzeek/pycon2014_atmcraft/src/a6d96575bc497ce0c952bb81db9c05d054c98bb5/atmcraft/model/meta/orm.py?at=master
  for an example of this, I still am thinking of a way recipes like this
  could also be integrated into SQLAlchemy, possibly as an enhancement to
  declarative.

 or a flag like autoconfigure=True on relationship().   this would also
 set up innerjoin=True for joined eager loading if the FK is not null.
 if the primaryjoin condition is too complex (has multiple FKs),
 autoconfigure would raise an exception.






[sqlalchemy] Automatically set primary key to None when deleted?

2014-07-02 Thread Paul Molodowitch
Suppose I have a super simple table like this:

class Dinosaur(Base):
    __tablename__ = 'dinosaurs'
    id = Column(Integer, primary_key=True)
    name = Column(String(255))


We assume that the id is set up in such a way that by default it always 
gets a unique value - ie, it uses autoincrement in MySQL, or a sequence in 
postgres, etc.

Now, suppose I get an instance of this class, and then delete it:

steggy = session.query(Dinosaur).filter_by(name='Steggy').one()
print steggy.id
session.delete(steggy)
session.commit()
print steggy.id


What I'd ideally like to see is that it first print the id of the row that 
it pulled from the database, and then print 'None':

30
None


Is there any way that I can configure the id column / property so that it 
is automatically cleared on delete like this?


If not, as a consolation prize, I'd also be interested in the easiest way 
to query if a given instance exists in the database - ie, I could do 
something like:

session.exists(steggy)


OR

steggy.exists()


...which, in this case, would simply run a query to see if any dinosaurs 
exist with the name Steggy.  Needing to set up some extra parameters to 
make this possible - such as adding a unique constraint on the name column 
- would be potentially possible.  And yes, I know I can always fall back on 
just manually constructing a query against the name field myself...



[sqlalchemy] Is there a good ORM tutorial? And advice on parent/child management

2014-01-10 Thread Paul Moore
I'm developing an application using the ORM, and I am getting into all 
sorts of trouble with what I think should be a pretty simple data model. 
I've tried following the ORM tutorial from the docs, but it seems to get me 
confused every time I try. So I'm looking for something else that maybe 
takes a different approach. Or maybe someone can point out what's wrong for 
me in the following - but even then, pointers to other tutorial material 
would be useful, as I don't really want to end up pestering the list every 
time I hit an issue :-)

My data model is fairly straightforward. I have a Package class, with a 
name. I then have a Release class - each Release is associated with a 
single package, and has a unique version. Releases have a few children - 
Classifiers, Dependencies, and URLs. All of these are multi-valued with no 
natural key (at least, not one worth worrying about). There is some extra 
data associated with Releases and URLs, but that's not too important. 
(People may recognise this as the PyPI data model). This is a pretty 
trivial parent/child one-to-many hierarchy, and I didn't expect it to be 
hard to model.

The obvious (to me!) model is basically (trimmed down a bit):

class Package(Base):
    __tablename__ = 'packages'
    # Use a synthetic key, as package names can change
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    releases = relationship("Release", backref="package",
                            cascade="all, delete-orphan")

class Release(Base):
    __tablename__ = 'releases'
    id = Column(Integer, primary_key=True)
    package_id = Column(Integer, ForeignKey('packages.id'), nullable=False)
    version = Column(String, nullable=False)  # Unique within package
    classifiers = relationship("Classifier", backref="release",
                               cascade="all, delete-orphan")

class Classifier(Base):
    __tablename__ = 'classifiers'
    id = Column(Integer, primary_key=True)
    release_id = Column(Integer, ForeignKey('releases.id'), nullable=False)
    classifier = Column(String, nullable=False)

So far, so good. But if I want to create a new Release, things get messy. 
This is my basic function:

def new_release(package, version, data):
    r = Release(version)
    r.package = Package(package)
    # Populate the data in r, and create child items
    return r

It's that package reference that messes me up. If the release is for a new 
package, then when I merge the release into the session, the package is 
created. But if it's for an existing package, a new package is created 
(which gives a constraint error if the package name is unique, and 
duplicates if it's not) rather than the session recognising that it's an 
existing package and linking the release to it.

If I was working at the SQL core level, I'd expect to have to query for the 
package and create it if it wasn't there. But I'm not sure I can do that 
with a session, as things get cached in memory by the unit of work stuff, 
and I don't know if a query for the release could miss a package that's 
pending insertion, or if the query could cause auto-flushing which might 
cause other issues (either with performance or integrity). Because the 
session is managing the in-memory caching and the transaction management 
by magic, I don't want to interfere with its mechanisms any more than I 
have to. If I have to keep track of what's gone to the database, and query 
for existing instances and manage the transactions, I probably should just 
use the SQL layer directly (I have a lot of experience with databases, but 
very little with ORMs, so pure DB code isn't too scary for me, but on the 
other hand I don't know what benefits the ORM should be giving me that I'm 
not seeing).

Is this an application that doesn't actually benefit from the ORM? Or am I 
using it wrongly, and my problems come from misunderstanding the way it 
should be used? As I say, I've a lot of database experience but very little 
with ORMs, so maybe I have an unbalanced view of how much data management 
the ORM should be able to handle for me.

The particular problem here is what's affecting me right now - but I'd be 
even more interested in a good "ORM for experienced SQL developers" 
tutorial that tells me how the ORM differs from the core level (and where 
its benefits lie).

Thanks,
Paul



Re: [sqlalchemy] Is there a good ORM tutorial? And advice on parent/child management

2014-01-10 Thread Paul Moore
On Friday, 10 January 2014 16:27:12 UTC, Kevin H wrote:

 It sounds to me like the problem you're having has to do with how you are 
 getting the reference to the package, which isn't shown in your example. 
  How are you getting it?


The new_release() function is what I do - I create a new Package object. I 
was under the impression that when I did a session.merge() that would match 
up existing objects. I'm probably being naive in that, though...
 

 The session doesn't do anything by magic, even if it seems that way 
 sometimes.  It just manages things behind the scenes.


Hmm - managing things behind the scenes is what I was meaning by magic. 
The problem is that I can't find any clear reference about what is, and 
what is not, managed behind the scenes. As I said, it's more about my 
understanding (and hence about me knowing what code I need to write) than 
about SQLAlchemy doing anything wrong.
 

 If you want a new Package, create a new Package object.  If you want an 
 exisiting package, query for it.  Just like you would in bare SQL code.

 If you don't know which you need, try searching and create it if it isn't 
 found.  I usually do something like:

 try:
     pkg = session.query(Package).filter(condition-goes-here).one()
 except sa.orm.exc.NoResultFound:
     pkg = Package()
     # populate attributes and add to session


OK. That's essentially what I was hoping to avoid. Largely because of that 
query - I may not yet have committed the package to the database.

For a more extended example, suppose I do the following:

p = Package("pip")
session.merge(p)  # I could do session.add here, but I'm not 100% clear why
                  # just doing merge isn't better in case I'm not sure if
                  # "pip" is already present

Somewhere a little later, in other code where I haven't committed yet, but 
I don't have a reference to p available:

r = Release("1.5")
r.package = Package("pip")

Can I query for pip here? There's been no commit yet, and there may not 
even have been a flush (I've had problems with null foreign keys so I've 
had to force autoflush off in a few places). Essentially, will a query 
locate an object that's in session.new but which hasn't been flushed to the 
database yet?
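
As a hedged illustration of the autoflush behavior in question, assuming
the Package model above and a session left at the default autoflush=True:

    p = Package(name="pip")
    session.add(p)    # pending; no INSERT has been emitted yet

    # The query autoflushes the session first, emitting the INSERT, so the
    # pending Package is visible to the SQL and is returned:
    found = session.query(Package).filter_by(name="pip").first()
    assert found is p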

This is the crux of my issue. I really don't understand why I'm getting 
null foreign keys on autoflush, but switching autoflush off seems to fix 
it. But maybe that's what's giving me these issues, so maybe I need to turn 
autoflush back on. But then what do I do about my null FKs?

Unfortunately (fortunately?) the SQLAlchemy docs are good enough that there 
 isn't a huge impetus for people outside the project to write tutorials, 
 blog posts, etc.  I'd like to see more of that kind of thing as well, but 
 everyone has limited time, I guess.


Without wishing to seem critical, I find the ORM docs pretty difficult to 
follow. They seem to jump around between schema design (DDL) and usage 
(select and DML) in a pretty haphazard fashion, and the information about 
transactional control and session management seems to be spread around the 
various sections. That's basically just my perspective, and may reflect my 
experience, but it is frustrating. Agreed entirely about people having 
limited time, and the docs are certainly far better than a lot that's 
around.

Maybe I'll just have to have a thorough read of the docs before I carry on 
coding. Feels like I'm making more problems than I'm solving right now. Or 
maybe stick to what I know and drop back to the SQL core stuff.

Thanks for the help,
Paul



Re: [sqlalchemy] Is there a good ORM tutorial? And advice on parent/child management

2014-01-10 Thread Paul Moore
On Friday, 10 January 2014 17:52:45 UTC, Michael Bayer wrote:

 there’s various patterns for dealing with the very common issue of “create 
 unique object if not exists, else use the current one”.  One that I 
 frequently point to is the unique object recipe: 


OK, so looking at that seems to imply that

pkg = Package(name="pip")
session.add(pkg)
is_it_there = session.query(Package).filter(Package.name == 'pip').first()
assert is_it_there is pkg

will fail the assertion (unless there is a flush). Otherwise, I don't see 
why the unique cache is needed in the recipe you reference.

I think I'm still missing something here...
Paul.



[sqlalchemy] Integrity errors in foreign key with nullable=False in a large hierarchy

2014-01-09 Thread Paul Moore
The following is a stripped down example of my app, that does NOT show the 
problem:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref, sessionmaker
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy import create_engine

Base = declarative_base()

class Release(Base):
    __tablename__ = 'releases'
    id = Column(Integer, primary_key=True)
    version = Column(String, nullable=False)
    def __init__(self, version):
        self.version = version
    def __repr__(self):
        return "Release(version={})".format(self.version)

class URL(Base):
    __tablename__ = 'urls'
    id = Column(Integer, primary_key=True)
    release_id = Column(Integer, ForeignKey('releases.id'), nullable=False)
    url = Column(String, nullable=False)
    release = relationship(Release,
                           cascade="all, delete-orphan",
                           single_parent=True,
                           backref=backref('urls', order_by=url))
    def __repr__(self):
        return "URL(url={})".format(self.url)

def new_release(version, urls):
    r = Release(version)
    l = []
    for url in urls:
        u = URL()
        u.url = url
        l.append(u)
    r.urls = l

    return r

if __name__ == '__main__':
    db = create_engine('sqlite://')
    Base.metadata.create_all(db)
    Session = sessionmaker(db)
    s = Session()
    s.merge(new_release('1.0', ['http://myapp.net/1.0']))
    s.commit()

The real app is basically just a lot more complex (releases have 4 child 
lists like urls, and my example adds multiple objects in each list). The app 
fails with an integrity error because a url has a null request_id. In 
actual fact, the hierarchy all gets built up properly in the end, but it 
appears that something is trying to save the children before all of the 
links are set up.

If I remove nullable=False from the foreign keys, it works fine.

I think one of the error tracebacks (sorry, I lost them in other output) 
mentioned autoflush - could the ORM be trying to flush bits of the 
hierarchy before it's complete? Is there a better fix than removing the 
nullable=False constraints?

Thanks for any help,
Paul



Re: [sqlalchemy] Integrity errors in foreign key with nullable=False in a large hierarchy

2014-01-09 Thread Paul Moore
On Thursday, 9 January 2014 21:27:06 UTC, Michael Bayer wrote:

  I think one of the error tracebacks (sorry, I lost them in other output) 
 mentioned autoflush - could the ORM be trying to flush bits of the 
 hierarchy before it's complete? Is there a better fix than removing the 
 nullable=False constraints? 

 absolutely, which is why the “no_autoflush” contextmanager exists: 


 http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html?highlight=no_autoflush#sqlalchemy.orm.session.Session.no_autoflush
  

 the stack trace you’re getting will give you clues to where this context 
 manager should go.  You’ll see the word “autoflush” in the stack trace, and 
 from there you can trace up to see the specific operation in your app that 
 is triggering it. 
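
A minimal sketch of that context manager in use, reusing the new_release()
helper and session from the original post (the flush is deferred until the
whole hierarchy is wired up):

    with s.no_autoflush:
        r = new_release('1.1', ['http://myapp.net/1.1'])
        s.add(r)
    s.commit()  # the flush happens here, after the release_id links are set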

 
Thanks. That's exactly what I wanted :-)

Paul



Re: [sqlalchemy] Insert from JSON data (coping with unneeded fields and varying types)

2013-12-19 Thread Paul Moore
On Tuesday, 17 December 2013 20:43:33 UTC, Michael Bayer wrote:

 typically you should write marshaling code here using a schema library - 
 though I’ve not used it, I’d recommend colander for this: 
 http://docs.pylonsproject.org/projects/colander/en/latest/ 


Nice, thanks! I hadn't heard of colander, and didn't know the term "schema 
library", so I wouldn't have got very far with Google. I'll take a look at 
this.

Paul 



[sqlalchemy] Insert from JSON data (coping with unneeded fields and varying types)

2013-12-17 Thread Paul Moore
I'm trying to load data into a database from a JSON data source. In theory, 
my data (mostly) matches my database table schema - the data is a list of 
dictionaries, with keys for the various columns. But, as the source data is 
JSON, I can't be 100% certain it doesn't contain oddities. Also, some 
fields contain string representations where I actually want particular data 
types (datetime values stored as ISO format strings, for example).

My current prototype code simply picks out the various fields by hand, 
using rec.get(name, default) to deal with possibly missing fields, and then 
does a conversion where needed. But it all seems very repetitive, given 
that I have a database schema with all the column names and types available 
to me. I could write some code to introspect the table definition, strip 
out the unneeded fields, apply type conversions and default values, etc, 
but it's going to be fiddly to get right and it seems to me to be a fairly 
common requirement for loading data into databases. So I wondered if I'd 
missed something built into SQLAlchemy that already did something like this.
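
For illustration, a hedged sketch of the introspection approach described
above; the marshal name, the DateTime-only check, and the ISO timestamp
format are assumptions, and a real version would map more types:

    from datetime import datetime
    from sqlalchemy import DateTime

    def marshal(table, record):
        # Filter a JSON dict down to the table's columns, coercing strings.
        row = {}
        for col in table.columns:
            if col.name not in record:
                continue  # absent field: let the database apply its default
            value = record[col.name]
            if isinstance(col.type, DateTime) and isinstance(value, basestring):
                value = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
            row[col.name] = value
        return row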

Does anyone know of an existing solution, before I end up writing one of my 
own?

Thanks,
Paul



Re: [sqlalchemy] attribute_mapped_collection use as a key-list dictionary

2013-09-04 Thread Paul Balomiri
Hi,
First off and again, thanks for the support here. I think I can work for
the moment with your proposed implementation.

My problem is related to the fact that the list values of the GBK
defaultdict are plain list types, and thus cannot fire events for
operations on them. The testcase below does not work, and, as you
mentioned, no other operation on GBK's list values will fire the
corresponding events.

Now my attempt (admittedly without enough knowledge of sqlalchemy
internals) was to create a list which *forwards* append/remove events
to the GBK Collection, which could in turn add/remove them in its capacity
as a true InstrumentedAttribute (thus handling the DB part). So more
specifically i used prepare_instrumentation(), hoping to be able to
instantiate an InstrumentedList with event capabilities. The
InstrumentedLists would not need to be first-class InstrumentedAttributes
(perhaps they could not be, because they appear after reflection time?).

I see now that it was a far off longshot.

This approach would also remove the immutability constraint on keyfunc's
return value. Keyfunc would be a read/writable attribute instead.

I hope to find time to get more accustomed to sqlalchemy's internals and to
implement this idea somewhere in Jan. or Feb..

By the way, I'm implementing Rails-style DB binding (table people -> class
Person, etc.) using the SQLA inspector interface. I'll post the lib on
github as soon as I'm eating my own dog food (= it's usable).

--
Paul

Here is a testcase where i would like to have the list
p1._addresses_by_role["home"] fire a remove event which removes the person
from the GBK:

def test_del_item(self):
    sess = self.sess
    p1 = Person()
    a1 = Address(name="Bucharest")

    # here, p2a already refers to a1/p1, the _addresses_by_role
    # will be set up when it loads after a commit
    p2a = PersonToAddress(address=a1, person=p1, role="home")
    sess.add(p1)

    sess.commit()
    self._assertAssociated(p1, a1, p2a)

    del p1._addresses_by_role["home"][0]
    sess.commit()
    import pdb
    pdb.set_trace()
    self.assertFalse("home" in p1._addresses_by_role)




2013/9/4 Michael Bayer mike...@zzzcomputing.com


 On Sep 3, 2013, at 8:47 AM, Paul Balomiri paulbalom...@gmail.com wrote:

  I would like to install
 
  event.listen(list, 'append', append_listener)
  event.listen(list, 'remove', rm_listener)
 
  on those lists, such that the GroupByKeyCollection can modify added
 objects according to the relationship it implements:
  * set the appropriate foreign key constraints
  * insert a removed object with its new value for the key attribute
 after a change (announced by append_listener)
  * reset the fks upon item removal.

 using event.listen with GBK doesn't make sense.  events can only be used
 with specific target types; the remove/append events only apply to an
 ORM-produced InstrumentedAttribute, such as Person._addresses_by_role here
 (note, we mean the class-bound attribute, not the collection on an
 instance).  There is no need to use event.listen with the collection
 itself, as remove/append are produced originally by the add()/remove()
 methods on GBK itself; any extra logic which should take place would be
 invoked directly from there (and in fact my original example fails to fire
 off the event with remove()).

 Additionally, all the usage of prepare_instrumentation() etc. should not
 be necessary, that's all internal stuff which is called automatically.

 As mentioned before, the behavior of this collection is completely outside
 the realm of a normal collection so it needs to implement the
 append/remove events directly, which isn't something a new user to
 SQLAlchemy would typically be able to handle without a much deeper
 understanding of how the attribute system works.

 I've implemented your test case as below as well as some other variants in
 association with the original code I gave you - for the remove case I've
 added the necessary code to the custom collection. All foreign key
 constraints are set correctly as a function of the ORM's normal operation,
 and as far as reset, when an association between Person and Address is
 removed, we want to just delete the association so cascade is used for
 that.   I'm not sure what "insert a removed object with its new value for
 the key attribute after a change" means; add a test to the TestPA class
 illustrating the behavior you want and I'll add it.

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 import collections
 from sqlalchemy.orm.collections import collection, collection_adapter
 from sqlalchemy.ext.associationproxy import association_proxy, \
     _AssociationCollection

 Base = declarative_base()

 class GroupByKeyCollection(collections.defaultdict):
     def __init__(self, keyfunc):
         super(GroupByKeyCollection, self).__init__(list)
         self.keyfunc = keyfunc

Re: [sqlalchemy] attribute_mapped_collection use as a key-list dictionary

2013-09-03 Thread Paul Balomiri
Hi,

Trying to advance on this issue i wrote an InstrumentedList which shall:
* hold only values sharing the same key as defined by a property on the
values
* change that property to the list value upon insertion
* set the property to whatever null value is defined  when the value is
removed from the list
* listen for changes on the property of the value object, so as to ensure that
the value is removed when its key property does not match the list key
any more

The GroupByKeyCollection is thus a collection of GroupedByKeyList objects
with the key being the constant key of each such list.

I would like to install

event.listen(list, 'append', append_listener)
event.listen(list, 'remove', rm_listener)

on those lists, such that the GroupByKeyCollection can modify added objects
according to the relationship it implements:
* set the appropriate foreign key constraints
* insert a removed object with its new value for the key attribute after a
change (announced by append_listener)
* reset the fks upon item removal.

My current problem is that i have not successfully instrumented the event
dispatcher mechanism for GroupedByKeyList.
I have done the following (full code is attached. I refrained from
including it here):

class GroupedByKeyList(InstrumentedList):
    def init(key):
        ...
    def append(self, obj):
        ...
    def remove(self, obj):
        ...

__instance_for_instrumentation = GroupedByKeyList(
    "Just for the sake of prepare_instrumentation", key_attribute="blah")
'''Accommodate for no zero-argument initialization'''

prepare_instrumentation(lambda: __instance_for_instrumentation)
del __instance_for_instrumentation
_instrument_class(GroupedByKeyList)

Now when i try to append, and GroupByKeyCollection installs the event
listeners, i get:

 Traceback (most recent call last):
   File "/home/paul/maivic-server/libs/sqlalchemy_keyed_relationship/test/test_sqla_grouped_collection2.py", line 60, in test_append_to_list
     p1._addresses_by_role.add(p2a)
   File "/home/paul/maivic-server/env/local/lib/python2.7/site-packages/sqlalchemy/orm/collections.py", line 1008, in wrapper
     return method(*args, **kw)
   File "/home/paul/maivic-server/libs/sqlalchemy_keyed_relationship/python/sqla_keyed_relationship/sqla_grouped_collection2.py", line 129, in add
     self[key].append(value)
   File "/home/paul/maivic-server/libs/sqlalchemy_keyed_relationship/python/sqla_keyed_relationship/sqla_grouped_collection2.py", line 122, in __missing__
     event.listen(l, 'remove', remove_listener)
   File "/home/paul/maivic-server/env/local/lib/python2.7/site-packages/sqlalchemy/event.py", line 40, in listen
     tgt.dispatch._listen(tgt, identifier, fn, *args, **kw)
 AttributeError: 'GroupedByKeyList' object has no attribute 'dispatch'



Seemingly, instrumenting the list is not enough to install the event
dispatch mechanism.

Where is relationship(collection_class=...) implemented?
I could not quite follow the code. collection_class is saved on the
RelationshipProperty object, but i could not figure out where it is picked
up again, in order to see how instrumentation is done for standard
collections.

Should i use my own instrumentation outside of sqlalchemy?

I have attached the code as i have it now.

To run it i created a test case based on your previous answer, which is
included in this mail:

Thank you,
Paul


from sqlalchemy.ext.declarative.api import declarative_base
from sqlalchemy.schema import Column, ForeignKey
from sqlalchemy.types import Integer, String
from sqlalchemy.orm import relationship
from sqla_keyed_relationship.sqla_grouped_collection2 import \
    GroupByKeyCollection, AssociationGBK
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.engine import create_engine
from sqlalchemy.orm.session import Session
from unittest.case import TestCase

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'

    id = Column(Integer, primary_key=True)

    _addresses_by_role = relationship("PersonToAddress",
                                      collection_class=
                                          lambda: GroupByKeyCollection(
                                              key_attribute="role"
                                          )
                                      )
    addresses_by_role = association_proxy(
        "_addresses_by_role",
        "address",
        proxy_factory=AssociationGBK,
        creator=lambda k, v: PersonToAddress(role=k,
                                             address=v))

class PersonToAddress(Base):
    __tablename__ = 'person_to_address'

    id = Column(Integer, primary_key=True)
    person_id = Column(Integer, ForeignKey('person.id'))
    address_id = Column(Integer, ForeignKey('address.id'))
    role = Column(String)
    person = relationship("Person", backref="p2as")
    address = relationship("Address")

class Address(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    name = Column(String)

class Test(TestCase):
    def setUp(self

Re: [sqlalchemy] attribute_mapped_collection use as a key-list dictionary

2013-08-15 Thread Paul Balomiri
Hi,

Thank you for the elaborated Answer !

I am trying to implement a general solution for the key-list problem
using events.

basically i want to instrument for GroupByKeyCollection any changes
relevant to the keyfunc.

say we have
p = Person()
p._address_by_role['r1'] = [PersonToAddress(address=Address(name='a1'),
                                            role='r1')]

My problem is that i cannot access the parent object (PersonToAddress)
from the ScalarAttributeImpl supplied by the events framework as the
initiator parameter of the set callback. What i want is to remove an
object from a key-associated list when its keying function result
mutates. For this i have to fetch the PersonToAddress from the
PersonToAddress.role 'set' event. Can you hint me at a way to fetch a
mapped object from its attribute 'set' event?
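
For what it's worth, a hedged sketch of hooking the 'set' event on the
keying attribute: in an attribute event the first argument is the parent
instance itself, so the PersonToAddress is available as target (the
listener body is illustrative):

    from sqlalchemy import event

    @event.listens_for(PersonToAddress.role, 'set')
    def role_changed(target, value, oldvalue, initiator):
        # target is the PersonToAddress whose 'role' is being set;
        # reorganize the owning collection here when the key mutates
        print target, value, oldvalue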

The following describes how i see the path to the solution. Do you think i
am on the right track?

The keying function shall be reapplied whenever keying Attributes are
mutated on PersonToAddress. Upon detecting a changed value i want to
reorganize the _address_by_role structure.

The second step would be to implement callbacks on the instrumented
lists which form the values of the GroupByKeyCollection.

The behavior i target is such that:

p._address_by_role.append(PersonToAddress(address=Address(name='a1'),
                                          role='r1'))  # OK
p._address_by_role['r2'].append(PersonToAddress(address=Address(name='a1'),
                                                role='r1'))
# OK, but PersonToAddress.role is changed to 'r2'
p._address_by_role['r2'].append(PersonToAddress(address=Address(name='a1')))
# OK, PersonToAddress.role is set to 'r2'
del p._address_by_role['r2'][0]
# OK, the first element is removed, and its role value is set to the
# default value

p._address_by_role['r2'][1] = p._address_by_role['r1'][0]
# OK, but many steps should happen here:
#   - p._address_by_role['r1'][0] is put into p._address_by_role['r2']
#   - this changes the attr. value p._address_by_role['r1'][0].role to 'r2'
#     this triggers the removal from p._address_by_role['r1']

Thank you
Paul

2013/8/13 Michael Bayer mike...@zzzcomputing.com:

 On Aug 13, 2013, at 11:44 AM, Paul Balomiri paulbalom...@gmail.com wrote:

 I would like to get a list as value for the dict, such that i can
 assign more than one entity to any one key. The output should look
 like this:
 {u'home': [<Address object at 0x29568d0>, <Address object at ...>],
  u'work': [<Address object at 0x2a3eb90>]}

 Now in the database whenever i set a new value for a key(=role), the
 entry in PersonToAddress' table is replaced (not added). This is
 consistent with having a 1-key to 1-value mapping. Can I however
 change the behaviour in such a way that more than one Addresses are
 allowed for one Person using the same key(=role in this example)?


 OK, an attribute_mapped_collection is just an adapter for what is basically a 
 sequence.  Instead of a sequence of objects, it's a sequence of (key, 
 object).   So by itself, attribute_mapped_collection can only store mapped 
 objects, not collections as values.

 When using the association proxy, there is a way to get a dictionary of 
 values, but the association proxy only knows how to close two hops into 
 one.  So to achieve that directly, you'd need one relationship that is a 
 key/value mapping to a middle object, then that middle object has a 
 collection of things.So here PersonToAddress would be more like 
 PersonAddressCollection, and then each Address object would have a 
 person_address_collection_id.   That's obviously not the traditional 
 association object pattern - instead of a collection of associations to 
 scalars, it's a collection of collections, since that's really the structure 
 you're looking for here.

 To approximate the collection of collections on top of a traditional 
 association pattern is tricky.  The simplest way is probably to make a 
 read-only @property that just fabricates a dictionary of collections on the 
 fly, reading from the pure collection of PersonToAddress objects.  If you 
 want just a quick read-only system, I'd go with that.
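
A hedged sketch of that read-only approach, written as a plain helper; it
assumes the Person/PersonToAddress models shown elsewhere in the thread,
where the plain association collection is the backref named p2as:

    import collections

    def addresses_by_role(person):
        # group the plain PersonToAddress collection by role, on the fly
        grouped = collections.defaultdict(list)
        for p2a in person.p2as:
            grouped[p2a.role].append(p2a.address)
        return dict(grouped)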

 Otherwise, we need to crack open the collection mechanics completely, and 
 since you want association proxying, we need to crack that open as well.  
 I've worked up a proof of concept for this idea which is below, and it was 
 not at all trivial to come up with.  In particular I stopped at getting 
 Person.addresses_by_role['role'].append(Address()) to work, since that means 
 we'd need two distinctly instrumented collections, it's doable but is more 
 complex.Below I adapted collections.defaultdict() to provide us with a 
 collection of collections over a single collection and also the association 
 proxy's base collection adapter in order to reduce the hops:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 import collections
 from sqlalchemy.orm.collections import collection, collection_adapter
 from sqlalchemy.ext.associationproxy import association_proxy, \
     _AssociationCollection
 Base

[sqlalchemy]

2013-08-13 Thread Paul Balomiri
Hi,

I am trying to build an attribute_mapped_collection reference from
table people (Mapped class is called Person). However, I would like to
get a list of entities for each key.

I have the following tables with the relevant PK and FK listed
Person:
  - id

PersonToAddress:
  - id
  - person_id
  - address_id
  - role  # this is the mapped special key

Address:
  - id

to establish a relationship i do the following (only the relationships
are included in the listing)
class PersonToAddress:
    person = relationship(__table_to_classnames__['people'],
                          backref=backref('people_to_addresses',
                                          collection_class=attribute_mapped_collection("role")))

class Person:
    addresses_by_role = association_proxy('people_to_addresses', 'address',
                                          creator=lambda k, v:
                                              PeopleToAddress(role=k, address=v))

Now querying yields this result:
p = Session.query(Person).get(1)
print p.addresses_by_role
{u'home': <Address object at 0x29568d0>, u'work': <Address object at 0x2a3eb90>}

I would like to get a list as value for the dict, such that i can
assign more than one entity to any one key. The output should look
like this:
{u'home': [<Address object at 0x29568d0>, <Address object at ...>],
 u'work': [<Address object at 0x2a3eb90>]}

Now in the database whenever i set a new value for a key(=role), the
entry in PersonToAddress' table is replaced (not added). This is
consistent with having a 1-key to 1-value mapping. Can I however
change the behaviour in such a way that more than one Addresses are
allowed for one Person using the same key(=role in this example)?

I should note that i tried supplying the uselist=True parameter in the
backref argument to PersonToAddress.person. This, however does
nothing.
Adding uselist=True to the parameters of the relationship (as opposed
to the backref) does create a list in both the backref and
addresses_by_role's values. The list, however only contains an
element, and if a new one is added, the entry in the db is
changed. Still, only one element of the list is ever present in
PersonToAddress' table.

Am i overlooking something in the way attribute_mapped_collection
should be used?

Paul

-- 
paulbalom...@gmail.com



[sqlalchemy] attribute_mapped_collection use as a key-list dictionary

2013-08-13 Thread Paul Balomiri
sorry for having forgot to add a subject

2013/8/13 Paul Balomiri paulbalom...@gmail.com:
 Hi,

 I am trying to build an attribute_mapped_collection reference from
 table people (Mapped class is called Person). However, I would like to
 get a list of entities for each key.

 I have the following tables with the relevant PK and FK listed
 Person:
   - id

 PersonToAddress:
 - id
 - person_id
 - address_id
   role # this is the mapped special key

 Address:
 - id

 to establish a relationship i do the following (only the relationships
 are included in the listing)
 class PersonToAddress:
     person = relationship(__table_to_classnames__['people'],
                           backref=backref('people_to_addresses',
                                           collection_class=attribute_mapped_collection("role")))

 class Person:
 addresses_by_role =
 association_proxy('people_to_addresses','address',creator = lambda
 k,v:PeopleToAddress(role=k,address=v))

 Now querying yields this result:
 p = Session.query(Person).get(id=1)
 print p.addresses_by_role
 {u'home': Address object at 0x29568d0, u'work': Address object at 
 0x2a3eb90}

 I would like to get a list as value for the dict, such that i can
 assign more than one entity to any one key. The output should look
 like this:
 {u'home': [Address object at 0x29568d0,Address object at ...] ,
 u'work': [Address object at 0x2a3eb90]}

 Now in the database whenever i set a new value for a key(=role), the
 entry in PersonToAddress' table is replaced (not added). This is
 consistent with having a 1-key to 1-value mapping. Can I however
 change the behaviour in such a way that more than one Addresses are
 allowed for one Person using the same key(=role in this example)?

 I should note that i tried supplying the uselist=True parameter in the
 backref argument to PersonToAddress.person. This, however does
 nothing.
 Adding uselist=True to the parameters of the relationship (as opposed
 to the backref) does create a list in both the backref and
 addresses_by_role's values. The list, however only contains an
 element, and if a new one is added, the entry in the db is
 changed.Still only 1 element of the list is ever present in
 PersonToAddress' table.

 Am i overlooking something in the way attribute_mapped_collection
 should be used ?

 Paul

 --
 paulbalom...@gmail.com

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.



-- 
paulbalom...@gmail.com



[sqlalchemy] Joinedload and duplicate relations

2013-07-14 Thread Paul Johnston
Hi,

I've just been debugging a slow running query:

products = db.Product.query.options(
    sao.joinedload_all('variations.channels'),
    sao.joinedload_all('variations.specifics')).all()

The second joinedload_all should just have been joinedload. It was causing 
variations to be included twice in the joinedload, presumably causing a 
cross product that made the query very slow.
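
For reference, a sketch of the corrected options (assuming the
0.8-era behaviour that joinedload_all applies the joined strategy to
every link in a dotted path, while joinedload applies it only to the
last link, so 'variations' is not eagerly joined a second time):

products = db.Product.query.options(
    sao.joinedload_all('variations.channels'),
    sao.joinedload('variations.specifics')).all()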

I can't imagine there's any legitimate need to include a relation twice, so 
it would be helpful if SQLAlchemy issued an error for this condition.

Many thanks,

Paul





[sqlalchemy] Recipe for text search across multiple fields

2013-05-31 Thread Paul Johnston
Hi,

Often you want to offer the user a text box which will search through 
multiple fields. If the user is looking at the list of orders, they want a 
search box that will search: order id, customer name, product names, etc. 
I'm trying to put together a recipe for this, although it's becoming more 
complicated than I planned.

The recipe will take three inputs: mapped class, list of fields, search 
term. The list of fields will be like ['id', 'customer.name', 
'products.name'] - where there is a dot in the field name, that indicates 
the search should walk a relation. For starters the matching will be an 
ilike with % characters put around the search term.

This is what I came up with so far:

def text_search(cls, fields, search):
    queries = []
    for field in fields:
        query = cls.query.order_by(None)
        parts = field.split('.')
        cur_cls = cls
        for part in parts[:-1]:
            attr = getattr(cur_cls, part)
            cur_cls = attr.property.mapper.class_
            query = query.outerjoin(attr)
        queries.append(query.filter(
            getattr(cur_cls, parts[-1]).ilike('%' + search + '%')))
    return queries[0].union(*queries[1:])
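
For reference, a call under the conventions described above might look
like this (Order is a hypothetical mapped class with the same query
property; the search term is arbitrary):

matches = text_search(Order, ['id', 'customer.name', 'products.name'],
                      'acme').all()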

The problem is I'm getting PostgreSQL syntax errors, because some order_by 
clauses are still appearing in the queries, which don't play nice with the 
union. Any suggestions for fixing this would be welcome!

Once that's fixed, and with a few more refinements, I think this would be a 
very handy recipe to keep around, or even put in the SQLAlchemy core.

Paul





Re: [sqlalchemy] Recipe for text search across multiple fields

2013-05-31 Thread Paul Johnston
Hi,

That's fixed it! I was so close :-) I was using mapper.order_by, which I'd 
hoped order_by(None) would cancel. No worries - it works now.

This is in tw2.sqla DbListPage now, and I'll be putting updates in as I get 
round to it.

Paul

 

 what's the purpose of cls.query.order_by(None) ?   you're not using 
 mapper.order_by i hope ?   





[sqlalchemy] Re: Problem with merging related objects with surrogate keys

2013-04-05 Thread Paul Harrington
This is now working for us in 0.8.0

Thanks!
pjjH







[sqlalchemy] zope.sqlalchemy - commit some objects on failure

2013-01-06 Thread Paul Johnston
Hi,

I'm using zope.sqlalchemy in a web application (actually ToscaWidgets not 
Zope) so each request is wrapped in a transaction. If the request succeeds 
the transaction is committed; if there is an error it is rolled back. This 
works great.

I have a log table where I log incoming XML to web callback methods (from 
eBay, 3DCart, etc.) Now, if there is an error what I want to happen is most 
things to be rolled back, but the log table still committed.

This has left me scratching my head a bit. Any ideas on an elegant way to 
do this?
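
A sketch of one common approach (not from the thread): give the log
writes their own session, bound to the same engine but outside the
request's managed transaction, so they commit even when the request
rolls back. CallbackLog is a hypothetical mapped class for the log
table, and `engine` is assumed to be the app's engine:

from sqlalchemy.orm import sessionmaker

LogSession = sessionmaker(bind=engine)  # separate, unmanaged sessions

def log_callback(xml_text):
    s = LogSession()
    try:
        s.add(CallbackLog(body=xml_text))
        s.commit()  # independent of the request's transaction
    finally:
        s.close()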

Many thanks,

Paul





Re: [sqlalchemy] Automatic data partitioning using a custom Session class

2012-12-07 Thread Paul Johnston
Hi Mike,

we have a recipe that's all about the built in filter which also 
 illustrates how to work around that existing criterion thing: 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/PreFilteredQuery 


Thanks for that - exactly what I needed. I'm still tweaking my app to use 
this approach, but it seems to be working really well.
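
For the archive, a sketch in the spirit of that PreFilteredQuery
recipe, adapted to the merchant_id case (it assumes the thread-local
twa.get_user() helper from the original post and 0.7-era Query
internals; hook it up with sessionmaker(query_cls=MerchantQuery)):

from sqlalchemy.orm import Query

class MerchantQuery(Query):
    def __iter__(self):
        return Query.__iter__(self._filtered())

    def _filtered(self):
        user = twa.get_user()
        mzero = self._mapper_zero()
        if user is not None and mzero is not None \
                and hasattr(mzero.class_, 'merchant_id'):
            # enable_assertions(False) lifts the "existing criterion"
            # check that otherwise makes Query.get() raise
            return self.enable_assertions(False).filter(
                mzero.class_.merchant_id == user.merchant_id)
        return self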

When (if) I get round to writing a tutorial I'll let you know. Hope you're 
keeping well,

Paul




[sqlalchemy] Automatic data partitioning using a custom Session class

2012-12-06 Thread Paul Johnston
Hi,

I hope everyone's keeping well. It's been ages since I've been on the list. 
I do use SQLAlchemy from time to time, but now it generally works so well, 
that I don't have any questions to ask!

But I would appreciate some thoughts on the approach I've taken with a 
multi-tennant SaaS web app. It's a multichannel stock management system for 
online retailers. All the user data is attached to a merchant - products, 
variations, categories, orders, etc. It's important that one merchant 
cannot access data belonging to another merchant. When handling a request, 
the active merchant can be determined from the logged-in user, which is 
kept in thread local storage.

So I started with lots of code like:
db.Order.query.filter_by(merchant_id = twa.get_user().merchant_id)

Now, this is fine, but it's repetitive, and it's risky for security - it 
just takes me to forget one filter_by merchant_id and we've got a security 
vulnerability.

So, what I wanted to do is create a custom session that will do this 
automatically. It needs to do two things:
 1) Any query object against an entity that has a merchant_id property is 
filtered on that
 2) Any new object that has a merchant_id property has the property 
automatically set
 
I don't think a session extension can do (1), so I created MySession 
subclassing Session, and passed this as class_ to sessionmaker. Here's my 
initial attempt at MySession:

class MySession(sa.orm.Session):
    def query(self, *entities, **kwargs):
        query = super(MySession, self).query(*entities, **kwargs)
        for e in entities:
            if e.tables[0].name == 'user':
                continue
            if e.has_property('merchant_id') and twa.get_user():
                query = query.filter(
                    e.class_.merchant_id == twa.get_user().merchant_id)
        return query

Now, I faced one major problem - seeing these errors:

InvalidRequestError: Query.get() being called on a Query with existing 
criterion.

As a temporary workaround, I edited query.py and disabled the check that 
causes this. That's got me going for now, although obviously a proper fix 
is needed. I haven't actually attempted (2) yet, but I will be trying that 
shortly.

I'd really appreciate some feedback on this, particularly ideas to fix the 
InvalidRequestError. I think this is a very powerful technique that would 
be useful to many developers. Once my app is working I will see about 
writing a tutorial on the matter.

Many thanks,

Paul




[sqlalchemy] Possible race condition using subqueryload?

2012-10-30 Thread Paul Rodger

Hi All.

I have a select query that uses subqueryload and looks like this:

completed_imports = self.ra_import_file.visible() \
.filter(ImportFile.lock_date == None) \
.filter(ImportFile.process_date != None) \
.order_by(ImportFile.process_date.desc()) \
.options(subqueryload('user')) \
.all()

However, intermittently I am finding results from the query where
process_date is None (NULL) in production, which seems weird, since I
am using  .filter(ImportFile.process_date != None).

This table is quite busy, and there can be quite a few rows that turn
into complete imports by changing their process_date from NULL to an
actual date.

Because I am using subqueryload(), two SQL statements are issued:

First query:

  SELECT import_file.create_date ...
  FROM import_file
  WHERE import_file.import_file_id IN (SELECT
import_file.import_file_id AS import_file_import_file_id
  FROM import_file
  WHERE import_file.lock_date IS NULL AND import_file.process_date IS
NOT NULL ORDER BY import_file.process_date DESC

Second query:

  SELECT user.password AS user_password... ,
anon_1.import_file_user_id AS anon_1_import_file_user_id
  FROM (SELECT import_file.user_id AS import_file_user_id
  FROM import_file
  WHERE import_file.import_file_id IN (SELECT
import_file.import_file_id AS import_file_import_file_id
  FROM import_file
  WHERE import_file.lock_date IS NULL AND import_file.process_date IS
NOT NULL) AS anon_1 JOIN user
  ON anon_1.import_file_user_id = user.user_id ORDER BY
anon_1.import_file_user_id

My [unconfirmed] theory why I occasionally get rows returned that have
NULL process_dates is that race conditions like this happen when under
high load:

1) Row 'foo' in the file_import table has process_date = NULL
2) First SQL query runs (SELECT import_file.create_date ...) and the
'foo' row is not returned in the result set because it does not match
the AND import_file.process_date IS NOT NULL where clause.
3) External process completes import and sets process_date =
current_date for row 'foo'
4) Second SQL query runs (SELECT user.password AS user_password...)
and the 'foo' row is now in the result set because it matches the AND
import_file.process_date IS NOT NULL where clause.
5) subqueryload joins both SQL queries together in results in python
(sorry, I got a bit lost trying to trace this in sqlalchemy code),
resulting in row 'foo' being in the results, but missing values such
as 'process_date', which are only queried in the first SQL query.

Am I completely wrong? Does subqueryload() only return results if each
row was returned in both SQL queries?
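
One way to take the two-statement race out of the picture (a sketch,
not from the thread) is to collapse the load into a single statement
with joinedload, which emits one SELECT and therefore sees one
snapshot of the table:

from sqlalchemy.orm import joinedload

completed_imports = self.ra_import_file.visible() \
    .filter(ImportFile.lock_date == None) \
    .filter(ImportFile.process_date != None) \
    .order_by(ImportFile.process_date.desc()) \
    .options(joinedload('user')) \
    .all()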

Thanks.

- Paul






[sqlalchemy] Problem with merging related objects with surrogate keys

2012-10-08 Thread Paul Harrington
I am helping a colleague with a SQL Alchemy problem that I have not 
encountered before. What we are trying to do seems reasonable enough: merge 
in a bunch of related objects. However, we run into difficulty when using 
physical data-models that have surrogate PKs. In this example Bar has a FK 
to Foo. We want to add a Bar *and* a Foo FK target in one merge. This seems 
reasonable and I seem to recall working code that operates in the other 
direction (i.e. assigning objects to a list-based mapped attribute on the 
PK table).

o = Bar(barpar='Corona', foo=Foo(foopar='WD-40'))

# expectation: this merge should put in the Foo object, flush to
# obtain foo_id, then use that foo_id to construct the Bar object
S.merge(o)


2012-10-08 11:51:32,200 INFO sqlalchemy.engine.base.Engine SELECT 
bar.bar_id AS
bar_bar_id, bar.barpar AS bar_barpar, bar.foo_id AS bar_foo_id
FROM bar
WHERE bar.barpar = ? AND bar.foo_id IS NULL

In order for this to work, we have to do this very awkward-looking merge + 
flush + merge.
f = Foo(foopar='WD-40')
f = S.merge(f)
S.flush()  # we need this to get the surrogates
# have to call the constructor with the value of the surrogate;
# would prefer to call with foo=f. Is this possible?
x = S.merge(Bar(barpar='Corona', _foo=f.foo_id))

What are our options to get this working? Are we missing something with the 
mapper configuration?  I thought that SA was doing a topological sort of 
the objects and would persist things in the correct order. I am very 
surprised to see the bar.foo_id IS NULL in the SQL logs.

thanks in advance, as always.

pjjH


from __future__ import absolute_import, division, with_statement

from sqlalchemy import (Column, ForeignKey, Integer,
                        create_engine, String)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.schema import UniqueConstraint


Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'

    foo_id = Column(Integer, primary_key=True, autoincrement=True)
    foopar = Column(String(1000))

    __mapper_args__ = {'primary_key': [foopar]}
    __table_args__ = (UniqueConstraint(foopar),)


class Bar(Base):
    __tablename__ = 'bar'

    bar_id = Column(Integer, primary_key=True, autoincrement=True)
    barpar = Column(String(1000))
    foo_id = Column(ForeignKey(Foo.foo_id))

    __mapper_args__ = {'primary_key': [barpar, foo_id]}
    __table_args__ = (UniqueConstraint(barpar, foo_id),)

    foo = relationship(Foo)


filename = 'tester.db'

Session = sessionmaker()

S = Session()

S.bind = create_engine('sqlite:///%s' % filename)

Base.metadata.create_all(S.bind)

S.bind.echo = True

o = Bar(barpar='Corona', foo=Foo(foopar='WD-40'))

# expectation: this merge should put in the Foo object, flush to
# obtain foo_id, then use that foo_id to construct the Bar object
S.merge(o)

S.commit()




[sqlalchemy] New to sqlalchemy multiple files one to one and one to many - circular relationship issues

2012-01-06 Thread Paul Kraus
I have an address class that I would like to use in several locations; for 
instance, vendors need addresses and customers need addresses.

I have my model split between 2 files, globals and ar. A customer can have a 
default bill-to address, a default ship-to address, and multiple addresses to 
choose from for ship-tos beyond the default. I have the bill_to and ship_to 
defaults working fine, but for the life of me I can't figure out how to 
create the locations reference. I know how to do it if I put it on the 
Address class, but then I have Python import issues, obviously. I could put 
them in the same file, but then I lose the versatility of having the same 
kind of address setup for vendors (defaults and multiple locations also). 
How can I define locations as a list of addresses on the Customer class? 
(See the sketch after the listings below.)

Hope this makes sense. TIA

AR model ..
from erp.model.globals import Address

class Customer(DeclarativeBase):
    __tablename__ = 'customers'
    customer_id = Column(Integer, primary_key=True)
    customer_name = Column(Unicode(100))
    discount = Column(Float)
    #bill_to_id = Column(Integer, ForeignKey('addresses.address_id'))
    #bill_to = relation(Address,
    #                   primaryjoin=bill_to_id==Address.address_id,
    #                   uselist=False)
    ship_to_id = Column(Integer, ForeignKey('addresses.address_id'))
    ship_to = relation(Address,
                       primaryjoin=ship_to_id==Address.address_id,
                       uselist=False)

globals ...
from erp.model import DeclarativeBase, metadata, DBSession

class Address(DeclarativeBase):
    __tablename__ = 'addresses'
    address_id = Column(Integer, primary_key=True)
    name = Column(Unicode(100))
    address_one = Column(Unicode(100))
    address_two = Column(Unicode(100))
    address_three = Column(Unicode(100))
    city = Column(Unicode(100))
    state = Column(Unicode(100))
    zip_code = Column(Unicode(100))
    phone = Column(Unicode(100))
    fax = Column(Unicode(100))
    contact = Column(Unicode(100))
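
A sketch (not from the thread) of one way to get the locations list
without a circular import: give the relation a string target name,
which declarative resolves at mapper-configuration time, so ar.py
never needs the Address class at class-definition time. The
association table and column names here are assumptions:

from sqlalchemy import Table, Column, Integer, ForeignKey
from sqlalchemy.orm import relation

customer_locations = Table(
    'customer_locations', metadata,
    Column('customer_id', Integer, ForeignKey('customers.customer_id')),
    Column('address_id', Integer, ForeignKey('addresses.address_id')),
)

class Customer(DeclarativeBase):
    __tablename__ = 'customers'
    customer_id = Column(Integer, primary_key=True)
    locations = relation('Address', secondary=customer_locations)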




[sqlalchemy] Select like but the other way around

2011-11-07 Thread Paul
I'm trying to do a like statement in a query filter. I'm fine doing it one
way, for instance

session.query(Table).filter(Table.path.like('C:\Test\%'))

which would hopefully return all folders and files in the folder Test.

But what if I want to do it the other way around and pass
'C:\Test\testfile.txt' and return all the folders? I want something like
'C:\Test\testfile.txt'.like(Table.path + '%'), but obviously a string won't
have the method like.

Is there a function I can import that takes 2 arguments, or another method I
can use to achieve this?
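
A sketch using literal(), which wraps the plain string in a SQL
expression so that .like() becomes available (the column + '%' part
renders as SQL string concatenation):

from sqlalchemy import literal

target = r'C:\Test\testfile.txt'
session.query(Table).filter(literal(target).like(Table.path + '%')).all()

One caveat: backslashes may need LIKE-escaping on some backends.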

Thanks!

Paul






[sqlalchemy] multiple insert or update

2011-11-01 Thread Paul
What's the cleanest way to perform a multiple insert-or-update? I.e., if the
record already exists an update operation is performed, or if not an insert
is performed.

I have seen merge being recommended for this insert-or-update, but I wondered
if this was available as a multiple call like add_all, to which I could
provide a list?

Is there a way to do this or should I just put the merge into a loop?

Cheers!

Paul




[sqlalchemy] Re: multiple insert or update

2011-11-01 Thread Paul
Michael Bayer mike_mp at zzzcomputing.com writes:

 
 
 On Nov 1, 2011, at 7:23 AM, Paul wrote:
 
  What's the cleanest way to perform a multiple insert or update? ie, if the 
record
  already exists an update operation is performed or if not an insert is 
performed.
 
 that's a funny usage of terminology - a multiple insert would normally be 
construed as meaning an INSERT
 that has many rows at once, which is a different issue.
 
 As I read this I wasn't clear what you were asking for, the MERGE SQL 
construct, or just asking about ORM
 merge.   After I wrote a whole reply here based on the SQL construct I 
realized you were talking about the ORM.
 
 If you have a bunch of objects, all of which you can populate with their 
primary key, and you'd like some to
 result in INSERTS and others in UPDATE, then yes you can use Session.merge() 
in a loop. I would try to
 pre-load all the existing rows, if possible, into a temporary collection - 
ideally limited to those rows
 you know you're operating upon.  That would make the operation of 
Session.merge() much faster as it can
 locate each object in the local identity map rather than emitting a SELECT 
 for 
each one.
 

Ah thanks, I'm not entirely sure how I would pre-load the rows. Would I just 
use 
a query to select the rows in the same session somewhere before? Do I need to 
store these returned instances somewhere?
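
A sketch of the pre-loading idea from the reply above (Thing and its
id column are hypothetical stand-ins for the mapped class):

ids = [o.id for o in incoming]
# loading the rows primes the session's identity map...
existing = session.query(Thing).filter(Thing.id.in_(ids)).all()
# ...so each merge() below finds its target locally instead of
# emitting a SELECT per object
for o in incoming:
    session.merge(o)
session.commit()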






[sqlalchemy] Re: Setup to best support multi-threads

2011-10-31 Thread Paul
I'm getting the error sqlalchemy.exc.ProgrammingError: (ProgrammingError) 
SQLite objects created in a thread can only be used in that same thread. The 
object was created in thread id 5808 and this is thread id 7936 None None

with my current setup; I'm not sure what I've done wrong.

I set up this little test to see if I could write to the same table from 
multiple threads. the table has 3 columns all of type int.


import threading

from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

db_engine = create_engine("sqlite:///database.db", echo=True)
Base = declarative_base(db_engine)

class Failures(Base):
  __tablename__ = "failures"
  __table_args__ = {"autoload": True}

  def __repr__(self):
    return "Failures('%s','%s','%s')" % (self.Id, self.action, self.reason)

metadata = Base.metadata
Session = scoped_session(sessionmaker(bind=db_engine))

class TestWriteToDB(threading.Thread):

  def __init__(self, start):
    threading.Thread.__init__(self)
    self.session = Session()
    self.insert_list = []
    for i in range(start, start + 10):
      f = Failures(resourceId=i, action=i, reason=i)
      self.insert_list.append(f)

  def run(self):
    self.session.add_all(self.insert_list)
    self.session.commit()


if __name__ == "__main__":
  for i in range(1, 40, 10):
    t = TestWriteToDB(i)
    t.start()
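
Two SQLite-specific knobs that commonly avoid this error (an
assumption-flagged sketch, not a guaranteed drop-in fix):
check_same_thread=False disables the driver's thread-ownership check,
and NullPool keeps pooled connections from being handed across
threads:

from sqlalchemy.pool import NullPool

db_engine = create_engine(
    "sqlite:///database.db",
    connect_args={"check_same_thread": False},
    poolclass=NullPool,
    echo=True,
)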
  








[sqlalchemy] Setup to best support multi-threads

2011-10-27 Thread Paul
I'm new to sqlalchemy and was looking for a good starting point. I currently
have an application which has multiple threads at certain parts, some of
which need access to the same sqlite database at the same time. Mostly only
one is writing, but sometimes more than one thread will need to write to the
database, so simply using sqlite3 won't cut it.

I was under the impression that this kind of setup can be achieved with
sqlalchemy, but I was wondering which sort of setup with engine, pools and a
design pattern would best achieve this?

Performance isn't a huge concern, as small amounts of data will be written
at a time, so blocking or queueing would be acceptable solutions.

Thanks!
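
A minimal sketch of the usual thread-local pattern (the database URL
is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite:///app.db")
Session = scoped_session(sessionmaker(bind=engine))
# each thread that calls Session() gets its own session bound to its
# own connection; writes then serialize at the SQLite file level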




[sqlalchemy] Entity KeyErrors under heavy load

2011-08-29 Thread Paul Andrew
Hi there,

We're running load tests against our Pylons application, which uses
SQLAlchemy to hit an Oracle DB.

Under load, we're starting to see errors pop up in our Apache logs:

[Mon Aug 29 13:39:59 2011] [error] Exception KeyError:
KeyError((<class 'api.entity.proj.ent1.Ent1'>, (631,)),) in <bound
method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState
object at 0x2aaabca94d90>> ignored
[Mon Aug 29 13:39:59 2011] [error] Exception KeyError:
KeyError((<class 'api.entity.shared_entities.Ent'>, (5,)),) in <bound
method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState
object at 0x2aaab6ea9090>> ignored
[Mon Aug 29 13:39:59 2011] [error] Exception KeyError:
KeyError((<class 'api.entity.proj.ent2.Ent2'>, (32,)),) in <bound
method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState
object at 0x2aaab7bf0410>> ignored


These errors only appear under load, I can't reproduce them by hitting
the same pages again manually. They also don't exactly correlate with
actual server errors; the requests that are happening at this time are
returning correctly.

Is anyone able to tell me how to follow up on these kinds of errors,
so I can figure out what's causing them?

Cheers,

Paul




[sqlalchemy] Problem with table reflection (version 0.6.6) with mysql database

2011-02-08 Thread Paul Rigor
Hello,

I have a table with the following schema:

+-+---+--+-+-++
| Field   | Type  | Null | Key | Default | Extra  |
+-+---+--+-+-++
| acc | varchar(1024) | YES  | | NULL||
| is_obsolete | int(11)   | YES  | | NULL||
| is_root | int(11)   | YES  | | NULL||
| term_type   | varchar(1024) | YES  | | NULL||
| id  | int(11)   | YES  | | NULL||
| cid | int(11)   | NO   | PRI | NULL| auto_increment |
| name| varchar(1024) | YES  | | NULL||
+-+---+--+-+-++


When attempting to run the following code to obtain column information
programmatically...

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.engine import reflection

dburi = "mysql://..."
engine = create_engine(dburi)
meta = MetaData(dburi)
user_table = Table('term', meta, useexisting=True)
engine.reflecttable(user_table, include_columns=True)  # more verbose error trace
insp = reflection.Inspector.from_engine(engine)
insp.reflecttable(user_table, include_columns=True)


I get the following problem:
python2.6/site-packages/SQLAlchemy-0.6.6-py2.6.egg/sqlalchemy/engine/reflection.pyc
in reflecttable(self, table, include_columns)
    383             found_table = True
    384             name = col_d['name']
--> 385             if include_columns and name not in include_columns:
    386                 continue
    387

TypeError: argument of type 'bool' is not iterable

Is there a better way of obtaining table schemas?
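
A note on the error itself: judging from the frame shown above,
include_columns expects a sequence of column names to restrict
reflection (or nothing for all columns), so passing True is what trips
the "not iterable" check. A sketch of the Inspector route, which
returns the column information directly without populating a Table:

insp = reflection.Inspector.from_engine(engine)
for col in insp.get_columns('term'):
    print col['name'], col['type']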

Thanks,
Paul




[sqlalchemy] Re: Problem with table reflection (version 0.6.6) with mysql database

2011-02-08 Thread Paul Rigor
Hello,

The same error happens with version 0.6.0 as well as 0.5.8. Although for
0.5.8, the error message is different (see below). Note also that the
version of the Mysql python driver is 1.2.3. Thanks!!!

SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/databases/mysql.pyc in reflect(self,
connection, table, show_create, charset, only)
   2133
   2134         if only:
--> 2135             only = set(only)
   2136
   2137         for line in re.split(r'\r?\n', show_create):

TypeError: 'bool' object is not iterable

Cheers,
Paul








[sqlalchemy] Efficiency of adding to M:M relations

2010-06-30 Thread Paul Johnston
Hi,

I hope everyone's well here. It's been some time since I posted. Great
to see it up to 0.6, and even more progress on MS-SQL (although I'm
now unlikely to be using that).

As always, I'm using SQLAlchemy as part of a web app. I have a set of
checkboxes, which I'm saving into an M:M relation. The app receives a
list of IDs from the client. To save these to the M:M, I need them as
database objects. So I'm doing (roughly):

myobj.relation = [OtherTable.get(i) for i in ids]

The problem with this is it's causing a database query for each id.
What I'd really like to do is somehow create a placeholder object
with just the id, that doesn't cost a database query to create. After
that, I'll trust flush() to do its magic as efficiently as possible.
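
A sketch (not from the thread) that avoids loading the OtherTable rows
altogether by writing the association table directly with Core; the
table and column names here are assumptions:

session.execute(
    my_association_table.insert(),
    [{'myobj_id': myobj.id, 'other_id': i} for i in ids]
)

This bypasses the ORM relation, so the collection on myobj would need
to be expired or refreshed afterwards to reflect the new rows.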

Paul




[sqlalchemy] Where can I learn more

2010-06-12 Thread Paul Hemans
Where can I learn more about creating a dialect for SQLAlchemy? I
could just copy one of the existing dialects, but I don't know what I
am looking for in what needs to be changed. I need a kind of high-level
view of how it all fits together.
Essential SQLAlchemy doesn't seem to cover it. Can anyone point me to
some info?

Thanks




[sqlalchemy] sql particle as bind parameter option:

2010-04-23 Thread Paul Balomiri

Hi,

Several days ago I had a similar question, which
was perhaps a bit too theoretical. Here is a simple case:

select(text("select * from f(:time)"))

Can I supply alternatively either
{"time": datetime.datetime(1999,9,9,9,9,9)}
or
{"time": "now()"}?
I cannot get around the fact that sqlalchemy interprets "now()" as a
string, while I am trying to use the postgres (or other backends')
now() function.
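
A sketch of the generative approach that comes up later in the thread:
decide the lexical form before building the statement, and keep the
bind parameter only for the literal-value case (use_db_now is an
assumption, not from the post):

from datetime import datetime
from sqlalchemy import text

def f_select(use_db_now):
    if use_db_now:
        return text("select * from f(now())")
    return text("select * from f(:time)")

# connection.execute(f_select(True))
# connection.execute(f_select(False), time=datetime(1999, 9, 9, 9, 9, 9))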




Paul Balomiri
paulbalom...@gmail.com






Re: [sqlalchemy] sql particle as bind parameter option:

2010-04-23 Thread Paul Balomiri

Hi,

Thanks for your reply. I was aware that it is not just a string
substitution, and that bindparam also maps the types of the
parameters, etc.

But is it possible to supply a type / set of types which are, or
represent, a lexical sub-structure? I might go into doing it, but
first I want to check that I'm not reinventing the wheel.

I noticed that the where clause can be supplied several times, so
select([my_table_metadata]).where().where(...) is valid; thus it is
possible, for this special case, to insert a logical expression after
the initial select() definition.

I am looking for a similar but more general case, not necessarily
using bindparam, where func.now() would be a valid substitution or
insertion element into a preexisting expression.


regards
Paul Balomiri
paulbalom...@gmail.com



On 23.04.2010, at 15:26, Michael Bayer wrote:



On Apr 23, 2010, at 6:58 AM, Paul Balomiri wrote:


Hi,

Several days ago i has a similar question, which
was, perhaps a bit too theoretical. Here is a simple case:

select(text("select * from f(:time)"))

can I supply alternatively either
{"time": datetime.datetime(1999,9,9,9,9,9)}
or
{"time": "now()"}
I cannot get around that sqlalchemy interprets "now()" as a
string, while I try to use the postgres (or other backends) now()
function.


bind parameters exist only for the purpose of specifying literal  
data to be injected as the values into a SQL statement.   They  
cannot be used to modify the lexical structure of a statement.   So  
a lexical unit like CURRENT TIMESTAMP or NOW()  is part of the  
statement's lexical source and cannot be injected as a bind parameter.


To put it another way, bind parameters are not just string  
substitutions the way something like foo %s % ('bar') is in  
Python, even though they may look that way.










Re: [sqlalchemy] sql particle as bind parameter option:

2010-04-23 Thread Paul Balomiri




On 23.04.2010, at 17:03, Michael Bayer wrote:






Why not just create a python function that generatively produces the  
desired statement based on arguments?  I don't see the advantage to  
something more magical than that.

Well, this is what I already do:

lazy_sql_partial = functools.partial(lambda x: select(
    [...], from_obj=func.a(bindparam('a'), bindparam('b'), x)))

and then I name the binding like so:
lazy_sql_unbound = (lazy_sql_partial, (funcarg))


At the time when I generate the sql expression I do not know the
parameters, not even a default. Those are generated based on user
input. In case parameters are missing for a whole expression, the
whole expression is skipped.

An even simpler expression would be a=1 or a=2 or ... At the moment I
cannot generate such a construct, which would allow me to define a
select in one place and then later add an or_(1,2,...) clause at a
certain point. As a special case I could use
select().where(1).where(2)... to get an and_(1,2,...).



An example:
Whenever the map window changes, I need to calculate the dataset which
falls out of the current window, and the ones which drop in.
Additionally I want to query a modified sql expression whenever the
table changes (modified = with an additional in_ constraining to pks
in the table).

To do both I need to regenerate parts of the sql, but not all of it.

The way I do it now seems rather ugly because:
1) (partial_bound_func, (funcarg)) is error prone (I know I could do
more reflection to find out argument numbers and names, but it would
in the end duplicate the bindparam mechanism)

2) to execute I have to:
    - first look for parameters which are arguments to the partial
      generative functions: myselect = lazy_sql_unbound(user input
      params for partial)
    - then use the remaining parameters in session.execute()

It just feels like all of this rather belongs in the lib, because it
could all be solved by allowing bindparam to have a value of type
SQLExpression.

Moreover, the expression substitutions could accept bindparams as
well.

As a last point, the compiler could check the validity, as it does
already.










regards
Paul Balomiri
paulbalom...@gmail.com

Re: [sqlalchemy] list filter

2010-04-23 Thread Paul Balomiri

Hi,

That would be table_metadata.c.column_name.in_([1,2,3]),
if you are looking for an IN relation. You could also use
MappedObjectClass.property instead of the column object.
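
Applied to the names in the quoted question below, that reads:

session.query(Some).filter(Some.value.in_([1, 2, 3])).all()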


Otherwise, if you are really testing for array equality, your
expression can be used. Also, in this latter case, make sure that
there is a DB array type for the db you are using.


look here for some examples and for API Docs
http://www.sqlalchemy.org/docs/_06/reference/sqlalchemy/expressions.html#functions

Paul Balomiri
paulbalom...@gmail.com



On 23.04.2010, at 16:33, Alexander Zhabotinskiy wrote:


Hello.

How to filter by list
may be like:

.filter(Some.value==[1,2,3])








[sqlalchemy] sql clauses as bound parameters

2010-04-21 Thread Paul Balomiri

Hi,
Is it possible to supply bindparam a clause type?

I want to execute a query of the form

select * from f(:a, :b, :c)

where :b might be either a String or a function.
If :b is a function, I'd like to do something like:

s = select([col1, col2, col3],
           from_obj=func.f(bindparam('a'), bindparam('b', type=Clause)))

and
clause_variant_1 = func.bbox(bindparam('x1'), bindparam('y1'),
                             bindparam('x2'), bindparam('y2'))

clause_variant_2 = text("some funny value")

When I execute, I'd like to do something like:

connection.execute(s, a=vala, c=valc, b=clause_variant_1,
                   x1=11, x2=12, y1=2, y2=3)

or, alternatively,
connection.execute(s, a=vala, c=valc, b=clause_variant_2)

I already realized that I could set parameters with params, and then
compile SQL expressions.

Would perhaps this be a more accurate approach?
connection.execute(s, a=vala, c=valc,
                   b=clause_variant_1.params(x1=11, x2=12, y1=2, y2=3).compile())


I'm kind of stuck here

cheers,
Paul







[sqlalchemy] Re: sql clauses as bound parameters

2010-04-21 Thread Paul Balomiri
Alternatively a possibility to replace a bindparam Expression with
some other Expression would do it too...









-- 
paulbalom...@gmail.com




[sqlalchemy] session.merge(obj, load=False) yields Parent instance <obj at 0x18817d0> is not bound to a Session; lazy load operation of attribute 'predecessor' cannot proceed

2010-04-16 Thread Paul Balomiri
Hi,

I was just trying to load a relation (obj.predecessor) for obj.
obj was created in a session, which was committed and closed afterwards.
I merged obj into the session using

session.merge(obj, load=False)

but doing so yields this exception:

Parent instance <obj at 0x18817d0> is not bound to a Session; lazy
load operation of attribute 'predecessor' cannot proceed

Without load=False everything is o.k.

I cannot quite understand this behaviour. I thought that load=False
does not update the object from the db, but merges it as-is into the
session.
But perhaps I'm missing some insight...

Paul

-- 
paulbalom...@gmail.com




[sqlalchemy] Re: session.merge(obj, load=False) yields Parent instance <obj at 0x18817d0> is not bound to a Session; lazy load operation of attribute 'predecessor' cannot proceed

2010-04-16 Thread Paul Balomiri
My fault:

session.merge(obj, load=False) should be

obj = session.merge(obj, load=False)







-- 
paulbalom...@gmail.com




[sqlalchemy] Notification framework for table changes

2010-03-28 Thread paul
Hi,

Sorry for the last mail, which I sent in error before finishing it.
If you (Michael) still have it in the filter, please just reject it.

I am working on a notification framework which saves the dirty, new
and deleted states of objects in sessions to an Eventlog at commit
time. The target is to have asynchronous agents listening to table
changes, and a sqla extension to notify them when data is written to
the db.

The notification sender core code is written as a SessionExtension at
the moment. The design uses the postgres NOTIFY / LISTEN statements,
and a table (event_log) for carrying the per-commit information of
primary ids which have been added/deleted/modified. I know that NOTIFY
is not sql standard, but many dbs have some sort of notification
capability, so I think a general, db-agnostic case can be abstracted
from this proposal (= pg's NOTIFY). It is even conceivable to send the
notification event through OS services, while still keeping the
notification data in the db.

I have already implemented this simple protocol:

before_commit(self, session):
    event_data = ...  # get all pks for session.dirty / .new / .deleted
    for each x of ['dirty', 'new', 'deleted']:
        session.add(Event(data=event_data[x]))

    session.add(Event(data=event_data))  # general event
    for each object type:
        for each notification type:  # one of 'dirty', 'new', 'deleted'
            self.pending_notifications = [self.get_table_name(class_)]
    self.pending_notifications += General Event for table

First question: how does SQLAlchemy map the call to
session.query(MappedObject) to a MetaData.tables[x]? Is there any way
to do this without using a custom mapper function? I would prefer
using the same mechanism as SQLAlchemy.


after_commit:
    send notifications on a separate, db-agnostic connection (in pg I
    need to set a different isolation level)


My second concern is about what session.execute("UPDATE table_x SET
pop = 'newval'") actually does. Is the sql statement parsed for table
names? My interest would be in marking the objects which are changed
by this update as dirty. So what I am really looking for is a method
to generate (note that I use pk = Primary Key)

  'SELECT pk1, pk2, change_field FROM tablename WHERE prop=x'
from an update:
  'UPDATE tablename SET change_field=... WHERE prop=x'

This would enable me to generate a dirty-pk list from any (text) sql
statement accepted by sqla.

Please let me know if you feel that this design is not the 'sqla way
to do things'. I want to OS the code, but I wanted to get a short
feedback on whether this is of any use for anyone out there. So please
comment :)

paul




Re: [sqlalchemy] Notification framework for table changes

2010-03-28 Thread Paul Balomiri





On 28.03.2010, at 17:58, Michael Bayer wrote:



some details here are unclear, but it appears you are attempting to  
save Event objects to the database in response to flushed changes,  
and also sending out notifications.   A SessionExtension is an  
appropriate place to do that.If I were writing a generic change  
event system, I would probably use rabbitmq instead of NOTIFY.  If  
I wanted to also have a log of changes in the database, I'd probably  
forego the usage of Session.add() and just do a series of INSERT  
statements using executemany() syntax, as it would have the minimal  
performance/complexity impact.

Thanks for the pointer to rabbitmq!


To get a Table from a mapped class you can use
class_mapper(MappedClass).mapped_table.  A Session.execute() given
a literal string-based statement does not parse the string - usually
it's a given that high-level ORM-enabled features, like notifying
listeners of newly changed objects, aren't going to work if the user
drops down to raw SQL.
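
For illustration, a sketch (MappedClass stands in for any mapped class):

from sqlalchemy.orm import class_mapper

table = class_mapper(MappedClass).mapped_table
print table.name   # the same Table object found in its MetaData.tables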


That last point leads naturally to the following: if you really want to
NOTIFY when any changes whatsoever occur on a table, then you should
be using triggers.
Yes, but I plan to introduce these at table creation time, and only to
write them to the db when the lib user adds a specific DDL extension.
DDLs would have to be written for all available dialects, so this is a
bit far-fetched.
At the moment I just try to stick with SQLA for change management and
just abstract the NOTIFY concept. The event log can also be omitted,
so basically there are 3 levels (in increasing order of intrusiveness
into the data model):


1) pure change notification = each listener must maintain its own
table state management; no modification to the db model whatsoever,
just a 'wake up' event is sent
2) change notification + event_log = the changeset is published
through an additional table, but the overall datamodel is not affected
3) trigger-based change notification = the changeset is maintained by
triggers, and their definition for each db is handled by SQLA DDL
extensions at creation time


  Postgresql also allows stored procedures to be written in many  
languages including Python so even using rabbitmq for notification  
is quite doable entirely

Re: [sqlalchemy] Notification framework for table changes

2010-03-28 Thread Paul Balomiri

Hi,
How do I get a list of class mappers from a metadata table?




On 28.03.2010, at 17:58, Michael Bayer wrote:


mapped_table
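
As far as I know there is no public reverse index for this; one workable
approach (a sketch of my own, not from the thread) is to build one from the
mapped classes the application already knows about:

from sqlalchemy.orm import class_mapper

def mappers_for_table(table, classes):
    # classes: an iterable of the application's mapped classes
    return [class_mapper(cls) for cls in classes
            if class_mapper(cls).mapped_table is table]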





[sqlalchemy] reflecting schema just on a single table

2010-02-23 Thread Paul Rigor (uci)
Hi,

Is there any way to use a MetaData object to obtain the schema of just a
single table? I have a database of hundreds of tables, and calling
MetaData.reflect() retrieves the schema for all of them. This unnecessarily
uses up memory and incurs additional I/O time.
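
For reference, a single table can be reflected on its own; a sketch, with a
placeholder URL and table name:

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine('mysql://user:passwd@host/dbname')  # placeholder
metadata = MetaData()
# autoload reflects just this one table rather than the whole database
users = Table('users', metadata, autoload=True, autoload_with=engine)
print users.primary_key.columns

If memory serves, MetaData.reflect() also accepts an only=[...] list of
table names, which restricts reflection the same way.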

Thanks,
Paul

-- 
Paul Rigor
Pre-doctoral BIT Fellow and Graduate Student
Institute for Genomics and Bioinformatics
Donald Bren School of Information and Computer Sciences
University of California, Irvine
http://www.ics.uci.edu/~prigor




[sqlalchemy] Re: Optimal table design for SQLAlchemy (Or One-to-many tables vs meta tables)

2009-09-08 Thread Paul Nesbit
Hi Michael,

Thanks for taking the time to provide this informative and helpful reply,
it's much appreciated.

Cheers,

  Paul

On Mon, Sep 7, 2009 at 4:08 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Sep 7, 2009, at 3:27 PM, Paul Nesbit wrote:

 Hi,

 I'm in the process of setting up SQLAlchemy classes after having defined my
 database tables.  I'm at the point where I'm mapping relationships, and
 I've realized now that I may have designed my tables in a sub-optimal way,
 or in a way that may not lend itself well to mapping relationships with
 SQLAlchemy.  Before asking any questions about how to map relationships in
 my model, I'd like to ask a more general question about database design.

 I don't know how to describe the design I've chosen, other than to say I've
 seen it used in other data models (WordPress and RT) in which relationships
 are defined in meta tables with key/value pairs, rather than one-to-many
 relationships between the entity tables and a designated relationship table.
 Here's an example of the two designs:

 Users
 id  name   email
 1   bob    b...@company
 2   nancy  na...@company

 Roles
 id  role           description
 1   administrator  access to all
 2   editor         can change doc content
 3   publisher      can create/move/delete docs

 The meta table design:

 UserMeta
 id  user_id  metakey  value
 1   1        role_id  1
 2   2        role_id  2
 3   2        role_id  3

 The one-to-many design:

 UserRoles
 id  user_id  role_id
 1   1        1
 2   2        2
 3   2        3

 I prefer the meta table design because a) I'm seeing clueful application
 developers use it (not a great reason, I know) and b) it means fewer tables
 (e.g. if I create a company table, I don't need to also create a
 usercompanies table).   I suspect there are other benefits, however I'm not
 aware enough to know them.

 Can anyone here speak to the benefits of one model over the other?

 If I proceeded with the meta-table option, can I expect mapping the
 relationships between the meta-table's key/value relationships to be
 painful, or not possible?


 SQLA can accommodate both versions, but the meta is more difficult to
 model, since it ultimately involves modeling any number of relations between
 any number of different mapped classes among one giant association table,
 and the conditions by which two remote tables are joined together are more
 complex, less performant, less safe, and less flexible from a relational
 database point of view.

 I located an old email I wrote to some colleagues about this subject, but
 it's perhaps a little too specific to what we were doing then.  The basic
 answer is that there's no advantage whatsoever to having fewer tables, and
 there are many disadvantages to using a model that batches lots of disparate
 data into one huge table and disallows proper constraints.   More tables
 allow data to be partitioned among fewer sets of rows.  Therefore lookups
 and joins are less costly for all relationships, the cost of one or two
 enormous relationships isn't passed onto all the other relationships in the
 database, and the distinct tables of relationship data are generally more
 easily managed and partitioned across disks and servers.  Allowing
 individual columns to represent exactly one relationship elsewhere allows
 efficient indexing (i.e. indexing of a single integer field, instead of a
 composite across the id plus metakey), greatly reduces the space required to
 store the relationships (no need to store metakey), and allows the
 construction of proper foreign key constraints.   Having to keep track of
 many association tables, the only issue I can think of that might be raised,
 is not an issue since you're using a SQL/ORM toolkit to transform that into
 an invisible detail.
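
 To make the constraint-bearing version concrete, here is a sketch of the
 UserRoles design above as table metadata (my illustration, not code from
 the thread):

 from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey

 metadata = MetaData()

 users = Table('users', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(50)),
     Column('email', String(100)))

 roles = Table('roles', metadata,
     Column('id', Integer, primary_key=True),
     Column('role', String(50)),
     Column('description', String(200)))

 # one relationship, one table: both columns are real foreign keys, so the
 # database itself rejects orphan rows and can cascade deletes
 user_roles = Table('user_roles', metadata,
     Column('user_id', Integer,
            ForeignKey('users.id', ondelete='CASCADE'), primary_key=True),
     Column('role_id', Integer,
            ForeignKey('roles.id', ondelete='CASCADE'), primary_key=True))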

 The meta model you have would require that "value" not have any proper
 constraints, which for any serious database designer is a total dealbreaker -
 any number of rows can be inserted into UserMeta, or created via deletions
 of related objects, that are entirely invalid, and your database has no way
 to prevent such corrupted data from being created, nor is there any
 capability to CASCADE deletes from remote objects.   The reason such models
 exist (primarily in the PHP and Ruby on Rails worlds) is because they
 ultimately derive from MySQL and its famous culture of "who needs foreign
 keys?".  While MySQL devs recanted that philosophy years ago, the awful
 habits they spawned among these communities persist.

 i wrote a blog post about something slightly more complex than this, called
 "polymorphic association", which illustrates Rails' incorrect methodology of
 implementing this (and how to map the identical thing in SQLAlchemy) and a
 better, constraint-bearing method, at http://techspot.zzzeek.org/?p=13 ...
 but that's just a related nugget.  The use case you have

[sqlalchemy] IronPython

2009-08-03 Thread Paul Hemans

Does anyone have any experience with SQLAlchemy on IronPython? I was
considering using it as part of a platform and would like to find out
if there are problems before I get in too deep.

Thanks



[sqlalchemy] Re: Basic Search Engine

2009-05-11 Thread Paul McGuire

On May 11, 6:20 am, fluence ndudfi...@gmail.com wrote:
 @Paul

 I have been having a play with [py]parsing. What a nifty little
 library!

 I read those 2 free tutes and liked what I saw so bought a
 subscription to safari just so I could read your short cut.


Glad to hear that pyparsing is giving you a jump start!  I downloaded
sqlalchemy, and eventually got the parameters straight to call your
code (passing the search string, the .c attribute of a Table,
followed by a list of column names).  Here's what I got:

from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey

metadata = MetaData()
prod_table = Table('product', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String),
    Column('color', String),
    Column('size', String),
    Column('material', String),
    )

print search_fields_like("RED OR GREEN AND NOT BLUE", prod_table.c,
                         "color size".split())

Gives:

product.color LIKE :color_1 ESCAPE '\\' OR product.size LIKE :size_1
ESCAPE '\\' OR (product.color LIKE :color_2 ESCAPE '\\' OR
product.size LIKE :size_2 ESCAPE '\\') AND NOT (product.color
LIKE :color_3 ESCAPE '\\' OR product.size LIKE :size_3 ESCAPE '\\')

(Where do the parsed values, like RED, GREEN, and BLUE, go?)


You may at some point need to go beyond just Word(alphas) for search
terms, such as Word(alphanums) (words made up of letters or digits),
or Word(alphas, alphanums) (words made up of letters or digits, but
starting with a letter).

Since your search string just takes search values, this is what makes
it necessary for you to qualify the call with a list of potential
search columns.  This is okay if you are searching fields of an
article (like, say, title, subject, abstract, and/or body).  But if the
columns are dissimilar, such as fields of the products in a catalog,
then you'll be searching fields like size and color with impossible or
even misleading options (size LIKE '%RED%' is a wasted search, but
color LIKE '%L%' will return YELLOW items, whether they are Large or
not).  For an application like the product catalog, you could
recast your query grammar to search for boolean combinations of
conditional expressions like "field like value".  Then you wouldn't
need the additional parameter listing the fields to search; you can
parse them out of the query string itself.  This would also support
conditional tests other than LIKE, such as "price > 200".

In any event, I hope pyparsing will help support your experimenting,
and let you try some different application ideas while pyparsing
offloads some of the dull parsing stuff.

-- Paul



[sqlalchemy] Re: Basic Search Engine

2009-05-10 Thread Paul McGuire

On May 10, 5:13 am, Nicholas Dudfield ndudfi...@gmail.com wrote:
 Greetings,

 I have been using SQLA for a few months.

 For admin CRUD index pages I have been using a naive search_keywords
 function as
 seen at end of message.

 Instead of using a primitive shlex.split, which incidentally is not unicode
 friendly, and one crude search_type (AND|OR) I'd like to use something
 that will
 lex/parse a search string and build the queries.

 eg.
   ((x or z or y) and q) or not h m

 I imagine this would be a fairly common requirement, however I can't seem
 to find any implementation anywhere.

 I used Google Code Search with the query "pyparsing sqlalchemy lang:python",
 however found no useful results.


Google for pyparsing query parser and you'll find some helpful
links:
http://pyparsing.wikispaces.com/file/view/searchparser.py
http://rephrase.net/days/07/04/pyparsing

The O'Reilly shortcut Getting Started with Pyparsing ends with the
development of a search query parser to search for recipes by querying
for matching ingredients.
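
As a starting point, a boolean query grammar in pyparsing can be quite
small.  A sketch of my own (using operatorPrecedence, which newer pyparsing
releases rename to infixNotation):

from pyparsing import (Word, alphanums, CaselessKeyword, opAssoc,
                       operatorPrecedence)

term = Word(alphanums)
NOT, AND, OR = map(CaselessKeyword, "not and or".split())
boolexpr = operatorPrecedence(term, [
    (NOT, 1, opAssoc.RIGHT),
    (AND, 2, opAssoc.LEFT),
    (OR,  2, opAssoc.LEFT),
])

# the nesting of the result mirrors the grouping in the search string
print boolexpr.parseString("((x or z or y) and q) or not h").asList()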

-- Paul
(author of Getting Started with Pyparsing)




[sqlalchemy] Re: obtaining a table schema

2009-04-29 Thread Paul Rigor (gmail)
Thanks Mike,
Like I mentioned, for my particular application I won't be using the usual
ORM but just the bare engine/connection. I'll just be provided with a table
name and a connection.  I did a little bit of research but was only able to
figure out how to obtain the primary key for a mysql database (i.e., through
the mysql dialect instance).

I was just wondering if there was a transparent interface regardless of the
database dialect.

Paul

On Wed, Apr 29, 2009 at 5:35 AM, Mike Conley mconl...@gmail.com wrote:

 Look at the primary_key attribute of the table instance.

 uu = Table('u', meta,
            Column('id', Integer, primary_key=True),
            Column('data', Integer))
 print uu.primary_key.columns
 ['u.id']


 Mike






-- 
Paul Rigor
Graduate Student
Institute for Genomics and Bioinformatics
Donald Bren School of Information and Computer Sciences
University of California in Irvine
248 ICS2 Bldg.
+1 (760) 536 - 6767 (skype)




[sqlalchemy] Re: obtaining a table schema

2009-04-29 Thread Paul Rigor (gmail)
Thanks,
FYI that link you sent was very useful.

For anyone else interested, here's my code snippet.  I've tested this with
both mysql and sqlite databases.

def get_primary_key(tablename, *args, **kwargs):
    from sqlalchemy import MetaData
    metadata = MetaData(*args, **kwargs)  # uri, similar to instantiating an engine
    metadata.reflect()
    try:
        table = metadata.tables[tablename]
    except KeyError:
        # fall back to the first reflected table, if any
        try:
            table = metadata.tables.values()[0]
        except IndexError:
            raise Exception("Error: The database does not contain any tables.")
    try:
        primary_key = table.primary_key.keys()[0]
    except IndexError:
        raise Exception("Error: The specified table has no primary key!")
    return primary_key


On Wed, Apr 29, 2009 at 12:33 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 the Table object as well as the primary_key attribute are transparent as
 far as what DBAPI and database is in use.  It's also not part of the ORM.
 There is a more fine-grained interface called the Inspector available in
 0.6, but you can get the same results by reflecting a Table.




-- 
Paul Rigor
Graduate Student
Institute for Genomics and Bioinformatics
Donald Bren School of Information and Computer Sciences
University of California in Irvine
248 ICS2 Bldg.
+1 (760) 536 - 6767 (skype)




[sqlalchemy] Re: query on a field from an expression

2009-04-29 Thread Paul Hemans

Resolved by moving the method into the schema procedure file.




[sqlalchemy] reflection with app server

2009-04-28 Thread paul

Hello,

I am looking for help and an explanation of how to reflect tables in the
context of an app server like CherryPy. I can't find a 'place' to auto-load
and map tables. If you do it as part of the CP start thread (e.g.
http://cherrypy.org/wiki/CustomPlugins,
http://tools.cherrypy.org/wiki/Databases),
we load/map often; I sure don't want to load/map for every app user.
With reflection, I need an engine, which I don't get with an app server
until the app server starts ... and then it looks too late. Can anybody
explain?
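
One common pattern (a sketch under my own assumptions, not CherryPy
specific) is to reflect once at module import / process start and share
the resulting MetaData across all requests:

from sqlalchemy import MetaData, create_engine

engine = create_engine('postgres://user:passwd@host/dbname')  # placeholder
metadata = MetaData()
metadata.reflect(bind=engine)      # runs once, when the module is imported

users = metadata.tables['users']   # hypothetical table name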

thx,
paul




[sqlalchemy] obtaining a table schema

2009-04-28 Thread Paul Rigor (gmail)
Hi gang,
I've recently started using sqlalchemy, so hopefully this isn't a stupid
question...

I was wondering whether there was an easy way to obtain a particular table's
schema if one is using just a bare connection (i.e., not using any special
ORMs).  Specifically, is there a utility method somewhere which allows one
to obtain the primary key of a table?

Thanks!!
paul

-- 
Paul Rigor
Graduate Student
Institute for Genomics and Bioinformatics
Donald Bren School of Information and Computer Sciences
University of California in Irvine
248 ICS2 Bldg.
+1 (760) 536 - 6767 (skype)




[sqlalchemy] query on a field from an expression

2009-04-27 Thread Paul Hemans

Hi, I am new to Python. I need to produce a simple query on the key
field:
exists = self.session.query(BILLS).filter(ID==1)

However, the process is running generically. That is, the declarative
table definitions are in the file schema.py, and the table, field and
value are determined at runtime. So what I have is the following (I
have omitted a bit of code for clarity):

class myClass():
    def import_data(self):
        import schema
        # self.tableName defined elsewhere
        TI = eval("schema." + self.tableName + "()")
        exists = None
        for node in tupleNode.childNodes:
            for dataNode in node.childNodes:
                cValue = dataNode.data
                if node.tagName == self.keyField:
                    # <<<< Prob. self.keyField is determined elsewhere
                    exists = self.session.query(TI).filter(
                        getattr(TI, self.keyField) == cValue)

I get to the query and get the following message:
  Invalid column expression '<schema.BILLS object at 0x29DB7330>'
Any help would be appreciated.
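
The error message suggests an instance of the mapped class is reaching
query()/filter() rather than the class itself. A sketch of a fix (my
suggestion, not from the thread): look the class up with getattr instead
of eval-ing a constructor call:

import schema

TI = getattr(schema, self.tableName)   # the mapped class itself, no ()
exists = self.session.query(TI).filter(
    getattr(TI, self.keyField) == cValue)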



[sqlalchemy] Create table from declarative class definition.

2009-04-26 Thread Paul Hemans

Hi, newbie here.
Using the declarative style of classes, I want to create my tables
separately from actually populating them. I thought I would use
something similar to the following:

###
from sqlalchemy import *
import time
from datetime import *
from sqlalchemy.ext.declarative import declarative_base

class CST_LEVEL(Base):
    __tablename__ = 'CST_LEVEL'

    REFNO = Column(String(length=10), primary_key=True)
    CODE = Column(String(length=10))
    DESCRIPT = Column(String(length=60))
    def __init__(self,
                 REFNO="",
                 CODE="",
                 DESCRIPT=""):
        self.REFNO = REFNO
        self.CODE = CODE
        self.DESCRIPT = DESCRIPT

engine = create_engine('sqlite:///tutorial.db', echo=False)
metadata = MetaData()
Base = declarative_base()
metadata.create_all(engine)
###

But this doesn't work, I guess because the class never gets
instantiated. If I instantiate the class with x = CST_LEVEL(), then I
would create a blank record as a by-product. How do you create the
tables, without records, using the declarative syntax?
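
For reference, a sketch of the usual declarative pattern (not a reply from
the thread): Base must exist before the class definition, and create_all()
is called on Base.metadata; no instances, and therefore no rows, are
created:

from sqlalchemy import create_engine, Column, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()          # define Base before any mapped class

class CST_LEVEL(Base):
    __tablename__ = 'CST_LEVEL'
    REFNO = Column(String(10), primary_key=True)
    CODE = Column(String(10))
    DESCRIPT = Column(String(60))

engine = create_engine('sqlite:///tutorial.db')
Base.metadata.create_all(engine)   # emits CREATE TABLE only, inserts nothing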




[sqlalchemy] OT: Travel Blog

2008-11-22 Thread Paul Johnston

Hi all,

Well, I finally set off on my travels, and it's turned out to be more
than I ever dreamed of. If you're interested, I'm keeping a blog here:
http://paj28.livejournal.com/

Hope everything's going well with SA. All the best,

Paul



[sqlalchemy] MSSQL Failing deletes with subquery + schema

2008-10-06 Thread Paul Johnston

Hi,

Is there any chance someone can look at ticket 973?
http://www.sqlalchemy.org/trac/ticket/973

This is quite important to me; I'm hitting the bug with a production app.
I've got a very hacky fix, which just disables the table aliasing, but
I don't think that's good enough to commit. I have looked at this a
few times and not figured it out, so any help would be appreciated.

Best wishes,

Paul


