Re: [sqlalchemy] How to SELECT computed data using SQLAlchemy?

2021-12-14 Thread 'Daniel Grindrod' via sqlalchemy
That worked brilliantly, thanks so much for your help!
Very much appreciated :)

On Tuesday, 14 December 2021 at 18:16:02 UTC Mike Bayer wrote:

>
>
> On Tue, Dec 14, 2021, at 12:26 PM, 'Daniel Grindrod' via sqlalchemy wrote:
>
> Hi Michael,
>
> Thanks for such a quick reply.
> I enjoyed reading it! I actually inherited this API (I swear I'm not just 
> making excuses!) from a colleague who left a few months earlier, so it's 
> very much been a case of 'Figuring it out as I go along'.
>
> Apologies for the incomplete code - despite it not being particularly 
> exciting code, I wanted to double check that I'm allowed to post it 
> publicly. 
> So the original (complete) code for this function is as follows:
>
> def similar_structure_matches(smiles, similarity_threshold):
>
>   struc_sim_query = db.select(
>       [structures_tbl,
>        text(":q_smiles as query_smiles, "
>             "jc_tanimoto(canonical_smiles, :q_smiles) as similarity"
>             ).bindparams(q_smiles=smiles)]
>   ).where(
>       text("jc_tanimoto(canonical_smiles, :q_smiles) >= :q_sim"
>            ).bindparams(q_smiles=smiles, q_sim=similarity_threshold))
>
>   struc_sim_res = struc_sim_query.execute().fetchall()
>
>   if len(struc_sim_res) == 0:
>     return '', 204
>
>   return MatchLimaSchema(many=True).dump(struc_sim_res)
> The above code is used to generate Tanimoto (similarity) scores for the 
> queried structure against each structure in the database (SMILES describe 
> chemical structures).
> As I understand it, the jc_tanimoto function comes from the Chemaxon 
> Cartridge which we have installed on our Oracle server (Cartridge API | 
> ChemAxon Docs 
> <https://docs.chemaxon.com/display/docs/cartridge-api.md#src-1803469-cartridgeapi-jc-tanimoto>
> ).
>
> I'm not entirely sure how to call this function without it being wrapped 
> by text().
> As I understand it, the code you sent across would be applying the 
> comparison (now jc_tanimoto) function in Python, not within Oracle 
> itself (of course, that was impossible for you to predict with the 
> incomplete code I sent across).
>
> Could you please advise on how to correctly structure this query?
>
>
> There's a construct in SQLAlchemy called func that renders a SQL 
> function-like syntax for any arbitrary word, like this:
>
> >>> from sqlalchemy import func
> >>> from sqlalchemy import select
> >>> print(select([func.jc_tanimoto('some data').label("my_label")]))
> SELECT jc_tanimoto(:jc_tanimoto_1) AS my_label
>
> so as long as there are no unusual SQL syntaxes in play, you can use 
> func.<name> to generate SQL for any SQL function with parameters.
>
>
>
>
>
>
> Thanks again,
> Dan
>
> On Tuesday, 14 December 2021 at 13:31:12 UTC Mike Bayer wrote:
>
>
>
> On Tue, Dec 14, 2021, at 5:40 AM, 'Daniel Grindrod' via sqlalchemy wrote:
>
> Hi all,
>
> I'm working on a REST API which is built using Flask-SQLAlchemy and 
> Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far 
> :) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c 
> 12.1.0.1.0 64bit). 
>
> I'm having an issue generating the correct SQL from a SQLAlchemy query. I 
> would really appreciate any help. The troublesome function is shown below.
> def similar_matches(input_descriptor, threshold, lim=None, offset=None):
>
>   query = db.select(
>       [tbl,
>        text(":q_descriptors as query_descriptors, "
>             "comparison(descriptors, :q_descriptors) as similarity"
>             ).bindparams(q_descriptors=input_descriptor)]
>   ).where(
>       text("comparison(descriptors, :q_descriptors) >= :q_threshold"
>            ).bindparams(q_descriptors=input_descriptor, q_threshold=threshold))
>
>
> heya -
>
> it's early here but I almost want to be able to tell a story about that 
> pattern above, which has select(text("cola, colb, colc, ..."))  in it.   
> It's kind of an "anti-unicorn" for me, as I've done many refactorings to 
> the result-processing part of SQLAlchemy's engine and each time I do so, 
> there's some internal handwringing over, "what if someone is SELECTING from 
> a text() that has multiple columns comma-separated in them?", which 
> specifically is a problem because it means we can't positionally link the 
> columns we get back from the cursor to the Python expressions that are in 
> the select() object, and each time it's like, "nah, nobody would do that", 
> or, "nah, nobody *should* do that", but yet, as there's not a 
> straightforward way to detect/warn for that, there's a whole set of code / 
> commentary at 
> https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/cursor.py#L325
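
A sketch of how the func advice above could apply to the original function 
(assumptions, not from the thread itself: structures_tbl exposes a 
canonical_smiles column via .c, and the bound-metadata execute() style from 
the thread is kept):

from sqlalchemy import func, literal

def similar_structure_matches(smiles, similarity_threshold):
    # func.jc_tanimoto renders jc_tanimoto(...) in the SQL itself, so the
    # scoring runs inside Oracle rather than in Python
    similarity = func.jc_tanimoto(structures_tbl.c.canonical_smiles, smiles)

    struc_sim_query = db.select([
        structures_tbl,
        literal(smiles).label("query_smiles"),
        similarity.label("similarity"),
    ]).where(similarity >= similarity_threshold)

    struc_sim_res = struc_sim_query.execute().fetchall()

    if len(struc_sim_res) == 0:
        return '', 204

    return MatchLimaSchema(many=True).dump(struc_sim_res)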

Re: [sqlalchemy] How to SELECT computed data using SQLAlchemy?

2021-12-14 Thread 'Daniel Grindrod' via sqlalchemy
Hi Michael,

Thanks for such a quick reply.
I enjoyed reading it! I actually inherited this API (I swear I'm not just 
making excuses!) from a colleague who left a few months earlier, so it's 
very much been a case of 'Figuring it out as I go along'.

Apologies for the incomplete code - despite it not being particularly 
exciting code, I wanted to double check that I'm allowed to post it 
publicly. 
So the original (complete) code for this function is as follows:

def similar_structure_matches(smiles, similarity_threshold):

  struc_sim_query = db.select(
      [structures_tbl,
       text(":q_smiles as query_smiles, "
            "jc_tanimoto(canonical_smiles, :q_smiles) as similarity"
            ).bindparams(q_smiles=smiles)]
  ).where(
      text("jc_tanimoto(canonical_smiles, :q_smiles) >= :q_sim"
           ).bindparams(q_smiles=smiles, q_sim=similarity_threshold))

  struc_sim_res = struc_sim_query.execute().fetchall()

  if len(struc_sim_res) == 0:
    return '', 204

  return MatchLimaSchema(many=True).dump(struc_sim_res)

The above code is used to generate Tanimoto (similarity) scores for the 
queried structure against each structure in the database (SMILES describe 
chemical structures).
As I understand it, the jc_tanimoto function comes from the Chemaxon 
Cartridge which we have installed on our Oracle server (Cartridge API | 
ChemAxon Docs 
<https://docs.chemaxon.com/display/docs/cartridge-api.md#src-1803469-cartridgeapi-jc-tanimoto>
).

I'm not entirely sure how to call this function without it being wrapped 
by text().
As I understand it, the code you sent across would be applying the 
comparison (now jc_tanimoto) function in Python, not within Oracle 
itself (of course, that was impossible for you to predict with the 
incomplete code I sent across).

Could you please advise on how to correctly structure this query?

Thanks again,
Dan

On Tuesday, 14 December 2021 at 13:31:12 UTC Mike Bayer wrote:

>
>
> On Tue, Dec 14, 2021, at 5:40 AM, 'Daniel Grindrod' via sqlalchemy wrote:
>
> Hi all,
>
> I'm working on a REST API which is built using Flask-SQLAlchemy and 
> Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far 
> :) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c 
> 12.1.0.1.0 64bit). 
>
> I'm having an issue generating the correct SQL from a SQLAlchemy query. I 
> would really appreciate any help. The troublesome function is shown below.
> def similar_matches(input_descriptor, threshold, lim=None, offset=None):
>
>   query = db.select(
>       [tbl,
>        text(":q_descriptors as query_descriptors, "
>             "comparison(descriptors, :q_descriptors) as similarity"
>             ).bindparams(q_descriptors=input_descriptor)]
>   ).where(
>       text("comparison(descriptors, :q_descriptors) >= :q_threshold"
>            ).bindparams(q_descriptors=input_descriptor, q_threshold=threshold))
>
>
> heya -
>
> it's early here but I almost want to be able to tell a story about that 
> pattern above, which has select(text("cola, colb, colc, ..."))  in it.   
> It's kind of an "anti-unicorn" for me, as I've done many refactorings to 
> the result-processing part of SQLAlchemy's engine and each time I do so, 
> there's some internal handwringing over, "what if someone is SELECTING from 
> a text() that has multiple columns comma-separated in them?", which 
> specifically is a problem because it means we can't positionally link the 
> columns we get back from the cursor to the Python expressions that are in 
> the select() object, and each time it's like, "nah, nobody would do that", 
> or, "nah, nobody *should* do that", but yet, as there's not a 
> straightforward way to detect/warn for that, there's a whole set of code / 
> commentary at 
> https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/cursor.py#L325
>  
> which wonders if we'd ever see this.   
>
> and here it is!  :)   the dark unicorn.   So, it's also the source of the 
> issue here, because the Oracle dialect has to restructure the query to 
> simulate limit/offset.   So, back into the barn w/ the unicorn, and 
> what we do here is make sure the select() has enough structure so that 
> SQLAlchemy knows what's going on, and here that would look like (note I'm 
> making some syntactical assumptions about the code above, which seems to be 
> incomplete):
>
> from sqlalchemy import literal, func
>
> query = db.select(
>     [
>         tbl,
>         literal(input_descriptor).label("query_descriptors"),
>         func.comparison(tbl.c.descriptors,
>                         input_descriptor).label("similarity")
>     ]
> ).where(
>     func.comparison(tbl.c.descriptors, input_descriptor) >= threshold
> )
>
> that way yo

[sqlalchemy] How to SELECT computed data using SQLAlchemy?

2021-12-14 Thread 'Daniel Grindrod' via sqlalchemy
Hi all,

I'm working on a REST API which is built using Flask-SQLAlchemy and 
Connexion. I'm fairly new to SQLAlchemy, but it's been brilliant so far 
:) This API uses SQLAlchemy 1.3.16, and connects to an Oracle Database (12c 
12.1.0.1.0 64bit). 


I'm having an issue generating the correct SQL from a SQLAlchemy query. I 
would really appreciate any help. The troublesome function is shown below.
def similar_matches(input_descriptor, threshold, lim=None, offset=None):

  query = db.select(
      [tbl,
       text(":q_descriptors as query_descriptors, "
            "comparison(descriptors, :q_descriptors) as similarity"
            ).bindparams(q_descriptors=input_descriptor)]
  ).where(
      text("comparison(descriptors, :q_descriptors) >= :q_threshold"
           ).bindparams(q_descriptors=input_descriptor, q_threshold=threshold))

  res = query.execute().fetchall()

  if len(res) == 0:
    return '', 204

  return MatchLimaSchema(many=True).dump(res)

This SQLAlchemy code takes two inputs (descriptor and threshold) and 
searches through each descriptor in an Oracle database, calculating a 
similarity measure between the queried descriptor and each stored 
descriptor. All rows where the similarity score >= threshold are returned 
as JSON.

The above code works fine, but returns all results - whereas I also want to 
be able to include an .offset() and a .limit() (for lazy loading). The code 
above generates SQL along these lines:

SELECT ID, last_modified, descriptors, :q_descriptors as query_descriptors, 
comparison(descriptors, :q_descriptors) as similarity FROM tbl WHERE 
comparison(descriptors, :q_descriptors) >= :q_threshold 

which works well. However, when I add .limit() or .offset() on the end of 
my query, i.e.

query = db.select(
    [tbl,
     text(":q_descriptors as query_descriptors, "
          "comparison(descriptors, :q_descriptors) as similarity"
          ).bindparams(q_descriptors=input_descriptor)]
).where(
    text("comparison(descriptors, :q_descriptors) >= :q_threshold"
         ).bindparams(q_descriptors=input_descriptor, q_threshold=threshold)
).limit(lim)

The SQL generated changes to be along these lines:

SELECT ID, last_modified, descriptors FROM
  (SELECT tbl.ID as ID, tbl.last_modified as last_modified,
          tbl.descriptors as descriptors, :q_descriptors as query_descriptors,
          comparison(descriptors, :q_descriptors) as similarity
   FROM tbl
   WHERE comparison(descriptors, :q_descriptors) >= :q_threshold)
WHERE ROWNUM <= :q_limit

As a raw SQL query this is fine, but I'm no longer including the 
query_descriptors and similarity columns in my SELECT clause, so I get a 
column-not-found error. How do I adjust the select() above so that my SQL 
looks more like:

SELECT ID, last_modified, descriptors, query_descriptors, similarity FROM
  (SELECT tbl.ID as ID, tbl.last_modified as last_modified,
          tbl.descriptors as descriptors, :q_descriptors as query_descriptors,
          comparison(descriptors, :q_descriptors) as similarity
   FROM tbl
   WHERE comparison(descriptors, :q_descriptors) >= :q_threshold)
WHERE ROWNUM <= :q_limit OFFSET :q_offset

Basically, I'm looking to explicitly tell SQLAlchemy that I want to SELECT 
tbl AND query_descriptors AND similarity.

I've also been informed that it's bad practice to omit an .order_by() 
in these queries, but I don't think that is what's causing the issue here. 
It is on my to-do list though.
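
For reference, a sketch of the fully structured form suggested in the 
replies above, with ordering and paging attached (not from the thread 
itself; it assumes tbl has an ID column to order on):

from sqlalchemy import func, literal

similarity = func.comparison(tbl.c.descriptors, input_descriptor)

query = db.select([
    tbl,
    literal(input_descriptor).label("query_descriptors"),
    similarity.label("similarity"),
]).where(
    similarity >= threshold
).order_by(
    tbl.c.ID  # deterministic ordering makes ROWNUM-based paging stable
).limit(lim).offset(offset)

Because the select() now knows about every column expression, the Oracle 
dialect can rewrite it into the ROWNUM subquery without dropping the 
computed columns.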

Please let me know if I need to provide more information.

Thanks for any help,

Dan

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/adf3ee50-d198-4193-bcfb-e866cad62e52n%40googlegroups.com.


Re: Stray changes detected only sometimes with revision --autogenerate

2020-09-30 Thread Daniel Krebs

Hi Mike,

I adapted our setup to the new cookbook recipe and it basically fixed 
both problems! I don't need to maintain my
patches anymore (thanks to setting 
`connection.dialect.default_schema_name`, this did the trick!) and the 
detected stray changes are also gone. I did some investigation with 
increased log level as you suggested and came to the conclusion that it 
must have been a bad idea to manipulate the `conn_table_names` object 
in-place. But since that's gone now, everything is working as expected \o/


Thank you so much for your help!

Cheers,
Daniel

On 29.09.20 19:04, Mike Bayer wrote:



Hi, so I added a quick recipe to the site just now, just so that the "set 
search path" idea is documented to some extent; that is at 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases


Re: autogenerate, if you have many schemas that all contain an exact 
copy of the same schema, then autogenerate implies you'd be running it 
exactly once, for just one of the schemas or a "master" schema, such 
that you generate the .py files that will then be applicable to all the 
schemas.  The recipe above discusses this, and in particular it works in 
a completely schema-agnostic fashion: you would set 
include_schemas=False in the config and allow the PostgreSQL search_path 
to handle all movement between schemas.


then for the actual migration "upgrade" run, you would run those files 
against each schema in sequence, again making use of search_path in 
order to select each schema.
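
A rough sketch of what that per-schema loop can look like in env.py 
(paraphrasing the cookbook recipe linked above, not quoting it; 
tenant_schemas and target_metadata are placeholders for your own list of 
schema names and model metadata):

from alembic import context
from sqlalchemy import engine_from_config, pool

config = context.config
target_metadata = None  # your models' MetaData in a real env.py

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        for schema in tenant_schemas:
            # point search_path at the tenant's schema and tell the
            # dialect about it, so the migration files stay schema-agnostic
            connection.execute('SET search_path TO "%s"' % schema)
            connection.dialect.default_schema_name = schema

            context.configure(
                connection=connection,
                target_metadata=target_metadata,
            )
            with context.begin_transaction():
                context.run_migrations()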


as far as how alembic is looking at current tables you would need to 
turn on SQL logging, using the "debug" level, such as in alembic.ini (if 
you're using that):


[logger_sqlalchemy]
level = DEBUG
handlers =
qualname = sqlalchemy.engine



this will emit a lot of queries and result sets.  You'd have to capture 
that logging and then grep through for the "nonexistent" schema - it 
will be present in a result set that Alembic is capturing, most 
fundamentally the one it uses to get all the schema names.





On Tue, Sep 29, 2020, at 4:01 PM, Daniel Krebs wrote:

Hi Mike,

thanks a bunch for the quick response!

>> This is supported by SQLAlchemy more robustly using the
>> schema_translate_map feature, see
>> https://docs.sqlalchemy.org/en/13/core/connections.html#schema-translating
>> - this feature allows schema names to render explicitly without the
>> need to manipulate search_path.
>
> this feature may not work completely with Alembic however, I'm
> evaluating that now.
>
> still doesnt shed any light on your sporadic problem, however.

It's been some time already since I implemented this, so I can't recall
the exact reason why I went down that road, but I know for sure that I
was trying to make it work with schema_translate_map and didn't succeed.
I would have very much preferred a solution without patching alembic; it
seemed to be my last resort at the time. I'd be more than happy to find a
solution here :)


> However, when using that approach you'd normally be running
> autogenerate from just one schema, since you are looking to generate
> migrations just once and then run them on every tenant.
>
> so it's not clear why autogenerate is being run on every tenant
> explicitly - you'd have just one "model" schema that's the one where you
> actually run autogenerate upon.   there would be no need to consider
> other schemas and include_schemas would be set to False.

Indeed that sounds like a good idea. So you'd suggest keeping one schema
just for autogeneration purposes and then applying the migrations to all
customer schemas? Is that possible from within alembic, or would you wrap
some tooling around alembic in order to apply migrations one-by-one to
every schema?


> As far as being able to filter out schemas across many, the current
> approach is the include_object hook, which is insufficient for many
> schemas as it does not block the reflection of all the tables.   a new
> hook include_name is being added in the coming weeks that allows for
> pre-emptive inclusion or exclusion of specific schema names.

I tried the include_object hook back then, but I was facing reflection
issues if I remember correctly. We're using TimescaleDB [1] in the same
database which adds some schemas of its own that alembic doesn't seem to
be able to handle.


> I don't have much insight onto this other than looking at concurrent
> activities on the database.   "sometimes" usually indicates a race
> condition of some sort, and the "autogenerate" process is strictly one
> of reading data.

Indeed I'm suspecting the same thing. However, I'm not the most
knowledgeable DBA, nor do I have an understanding of how alembic actually
compares the current state of the DB with the SA models. Could you maybe
give me a more concrete pointer on what to look for, and where?


Thank you for taking the time to help!

Cheers,
Daniel

Re: Stray changes detected only sometimes with revision --autogenerate

2020-09-29 Thread Daniel Krebs

Hi Mike,

thanks a bunch for the quick response!

> This is supported by SQLAlchemy more robustly using the 
> schema_translate_map feature, see 
> https://docs.sqlalchemy.org/en/13/core/connections.html#schema-translating 
> - this feature allows schema names to render explicitly without the 
> need to manipulate search_path.
>
> this feature may not work completely with Alembic however, I'm 
> evaluating that now.
>
> still doesn't shed any light on your sporadic problem, however.


It's been some time already since I implemented this, so I can't recall 
the exact reason why I went down that road, but I know for sure that I 
was trying to make it work with schema_translate_map and didn't succeed. 
I would have very much preferred a solution without patching alembic; it 
seemed to be my last resort at the time. I'd be more than happy to find a 
solution here :)



> However, when using that approach you'd normally be running 
> autogenerate from just one schema, since you are looking to generate 
> migrations just once and then run them on every tenant.
>
> so it's not clear why autogenerate is being run on every tenant 
> explicitly - you'd have just one "model" schema that's the one where you 
> actually run autogenerate upon.   there would be no need to consider 
> other schemas and include_schemas would be set to False.


Indeed that sounds like a good idea. So you'd suggest keeping one schema 
just for autogeneration purposes and then applying the migrations to all 
customer schemas? Is that possible from within alembic, or would you wrap 
some tooling around alembic in order to apply migrations one-by-one to 
every schema?



> As far as being able to filter out schemas across many, the current 
> approach is the include_object hook, which is insufficient for many 
> schemas as it does not block the reflection of all the tables.   a new 
> hook include_name is being added in the coming weeks that allows for 
> pre-emptive inclusion or exclusion of specific schema names.


I tried the include_object hook back then, but I was facing reflection 
issues if I remember correctly. We're using TimescaleDB [1] in the same 
database which adds some schemas of its own that alembic doesn't seem to 
be able to handle.



> I don't have much insight onto this other than looking at concurrent 
> activities on the database.   "sometimes" usually indicates a race 
> condition of some sort, and the "autogenerate" process is strictly one 
> of reading data.


Indeed I'm suspecting the same thing. However, I'm not the most 
knowledgeable DBA, nor do I have an understanding of how alembic actually 
compares the current state of the DB with the SA models. Could you maybe 
give me a more concrete pointer on what to look for, and where?



Thank you for taking the time to help!


Cheers,
Daniel

[1] https://www.timescale.com/



Stray changes detected only sometimes with revision --autogenerate

2020-09-29 Thread Daniel Krebs
Hi,

we're having rather strange problems with Alembic 1.4.2 and Postgres 12: it 
detects stray changes *sometimes*, but sometimes not. I already dug 
through the code, but I increasingly get the feeling that this is rooted 
somewhere in the interaction between alembic and Postgres.

But let me explain our setup first. We maintain a set of SQLAlchemy model 
definitions that we evolve over time and use alembic to migrate the 
database accordingly. For every customer, we add a new Postgres schema to 
the database identified by a corresponding UUID (e.g. 
a3d74dcc-1634-33a5-ff74-235f3a7c6322). See [1] for our (stripped) env.py.

Since we want all customers to use the same DDL, we want common migrations 
that disregard the schema name altogether. It seems that this is not a 
supported use-case of alembic, so I hacked around it such that alembic only 
sees schema-less objects and I run SET search_path TO "uuid" before each 
migration (see env.py [1]). The patch should be rather straight-forward and 
can be found at [2].

Now the issue that we're facing is, that *sometimes* autogenerate detects 
changes in one or two customer schemas that are not real [3]. Deleting the 
migration and creating a new one often doesn't detect the changes anymore 
or for a different customer/schema. The tables that are incorrectly found 
to have changed also change over time. 

My current workaround is to autogenerate migrations until alembic "does the 
right thing". I know that patching alembic is not the best base upon which 
to ask for support, but to be honest I am running out of theories about what 
is going wrong here. Hope someone can help or point me in the right 
direction :)

Cheers,
Daniel

[1] 
https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-env-py
[2] 
https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-alembic_patch-diff
[3] 
https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-log-txt





[sqlalchemy] MS-SQL/PyODBC/Linux: How to set a query timeout?

2020-06-15 Thread Daniel Haude
Occasionally my Flask/WSGI application gets stuck while executing a query 
against an SQL Server database. I have no control over that server, so 
maybe I'll never be able to fix the issue, but at least I'd like to be able 
to present my users with a meaningful error message rather than a "500 
Internal server error" caused by the web server's eventually giving up on 
the request. So I'd like to limit the SQL query execution to a reasonable 
timeout which, upon running out, would raise an exception which I could 
catch and deal with. 

It seems the database query goes through several layers: SQLAlchemy -> 
PyODBC -> unixodbc -> MS ODBC Driver -> database. I have no idea which of 
those elements would be responsible for enforcing a query timeout, nor how 
to specify one. My connection URI looks like this:

mssql+pyodbc://user@host/db?driver=ODBC+Driver+17+for+SQL+Server

I don't need to specify a per-query timeout. Just once for the whole 
application.

Thanks!
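
One plausible place to set this (a sketch, not a confirmed answer from this 
thread): pyodbc connections expose a timeout attribute for query execution, 
in seconds, which can be applied to every pooled connection via SQLAlchemy's 
connect event:

from sqlalchemy import create_engine, event

engine = create_engine(
    "mssql+pyodbc://user@host/db?driver=ODBC+Driver+17+for+SQL+Server"
)

@event.listens_for(engine, "connect")
def set_query_timeout(dbapi_connection, connection_record):
    # pyodbc-level query timeout in seconds; a query that exceeds it
    # raises an OperationalError the application can catch
    dbapi_connection.timeout = 30

Since the listener runs once per new connection, this acts as a single 
application-wide setting rather than a per-query one.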



Re: Invoke commands with custom env.py from library

2020-03-05 Thread Daniel Cardin
1. I always expected that, with any reasonably non-trivial app, I would 
inevitably need to fall back to using their env.py. My hope was that for 
simpler cases, since this is a testing tool, there would be a way to not 
require any changes to their env.py by default. Ultimately I can document 
what the recommended changes to a typical env.py would be, and that's not 
the end of the world.

2. I did try to change version_locations, but unless I'm mistaken, what I'm 
asking for maybe just isn't a supported use case. Attempting to target 
the library env with "script_location" and the local versions/ with 
"version_locations" still seems to look at the wrong location. Given how 
ScriptDirectory is constructed, and how `run_env` then directly looks for 
"env.py", it seems like what I'm asking for is just not an expected use case.

On Thursday, March 5, 2020 at 12:21:54 PM UTC-5, Mike Bayer wrote:
>
>
>
> On Thu, Mar 5, 2020, at 12:02 PM, Daniel Cardin wrote:
>
> So, yes I mean "commands" as in `alembic.command.upgrade()`.
>
> The idea would be that the library defines an env.py (e.g. the important 
> portion of which would look something like:)
> ...
>
> connectable = context.config.attributes.get("connection", None)
> with connectable.connect() as connection:
>     alembic.context.configure(connection=connection)
>
>     with alembic.context.begin_transaction():
>         alembic.context.run_migrations()
>
> which lives at `src/library/foo/env.py`. If the migrations were colocated 
> with the env.py, then afaik setting "script_location" to "library:foo" 
> would enable me to run `alembic.command.upgrade()` and it would just work. 
> However in this case (if possible), the migrations/versions/* would live at 
> an arbitrary other location (e.g. the code which has installed this 
> library).
>
>
> Ok two thoughts on that:
>
> 1. usually the way this goes is that the project that has the migrations 
> does everything normally, it has its own env.py, and inside that env.py, 
> that's where it imports *your* env.py.  that is, instead of having the 
> usual env.py it would only have:
>
> # env.py
>
> from daniel_cardins.library.env import run_migrations
>
> run_migrations()
>
> 2. the locations of the version files vs. the env.py script are actually 
> separate in the config.   there is a script_location that it uses to find 
> the home base where env.py is, however there is also a version_locations 
> config that can have any number of other directories in it and these 
> supersede script_location for the version files themselves.
>
>
>
> I think in general, if the end-user has a bunch of version files set up, 
> it's probably not a big deal that they have a stub "env.py" right there 
> that just calls out to your library.  that's something very clear that 
> people looking at a project that uses your library can understand quickly 
> if they already know alembic.
>
>
>
>
>
> General workflow:
> * person working on project "foo" invokes some cli/command/whatever which 
> requests an upgrade
> * library does whatever requisite setup
> * library invokes `alembic.command.upgrade()`
> * upgrade() ends up routing the code through the library's `env.py`
> * the context of the migration command is targeting the project "foo"'s 
> local migrations/versions folder
>
> The specifics of the above are just based on my knowledge of alembic, so 
> if there's another process i could be doing where env.py isn't "invoked" so 
> much as the above code block is just called normally, then that's ideal.
>
> On Thursday, March 5, 2020 at 9:36:09 AM UTC-5, Mike Bayer wrote:
>
>
>
> On Thu, Mar 5, 2020, at 8:08 AM, Daniel Cardin wrote:
>
> I am attempting to write a library which invokes alembic commands, while 
> referencing the migrations of a separate package which has installed said 
> library.
>
> The intent here, is for the library to invoke the alembic commands with an 
> env.py defined in that package. This seems to work through 
> config.get("script_location", "package_name:foldername")
> but then obviously expects the actual migrations to be colocated at the 
> same location.
>
> My guess would be, if this is possible at all, that there'd be something I 
> could put in the env.py which would reconfigure it to execute the 
> `context.run_migrations()` migration context (and therefore search path) 
> back at the original call site.
>
> I realize that this won't always work, given that env.py is often likely 
> customized enough such that a generic one wouldn't be able to execute them, 
> but per your cookbook suggestions about programmatic invocation of 
> commands, this sort of thing requires the user to opt into changing their 
> env.py to use
> connectable = context.config.attributes.get("connection", None)
> in order to make use of a connection handed to it.

Re: Invoke commands with custom env.py from library

2020-03-05 Thread Daniel Cardin
So, yes I mean "commands" as in `alembic.command.upgrade()`.

The idea would be that the library defines an env.py (e.g. the important 
portion of which would look something like:)
...

connectable = context.config.attributes.get("connection", None)
with connectable.connect() as connection:
    alembic.context.configure(connection=connection)

    with alembic.context.begin_transaction():
        alembic.context.run_migrations()

which lives at `src/library/foo/env.py`. If the migrations were colocated 
with the env.py, then afaik setting "script_location" to "library:foo" 
would enable me to run `alembic.command.upgrade()` and it would just work. 
However in this case (if possible), the migrations/versions/* would live at 
an arbitrary other location (e.g. the code which has installed this 
library).

General workflow:
* person working on project "foo" invokes some cli/command/whatever which 
requests an upgrade
* library does whatever requisite setup
* library invokes `alembic.command.upgrade()`
* upgrade() ends up routing the code through the library's `env.py`
* the context of the migration command is targeting the project "foo"'s 
local migrations/versions folder

The specifics of the above are just based on my knowledge of alembic, so if 
there's another process I could be doing where env.py isn't "invoked" so 
much as the above code block is just called normally, then that's ideal.

On Thursday, March 5, 2020 at 9:36:09 AM UTC-5, Mike Bayer wrote:
>
>
>
> On Thu, Mar 5, 2020, at 8:08 AM, Daniel Cardin wrote:
>
> I am attempting to write a library which invokes alembic commands, while 
> referencing the migrations of a separate package which has installed said 
> library.
>
> The intent here, is for the library to invoke the alembic commands with an 
> env.py defined in that package. This seems to work through 
> config.get("script_location", "package_name:foldername")
> but then obviously expects the actual migrations to be colocated at the 
> same location.
>
> My guess would be, if this is possible at all, that there'd be something I 
> could put in the env.py which would reconfigure it to execute the 
> `context.run_migrations()` migration context (and therefore search path) 
> back at the original call site.
>
> I realize that this won't always work, given that env.py is often likely 
> customized enough such that a generic one wouldn't be able to execute them, 
> but per your cookbook suggestions about programmatic invocation of 
> commands, this sort of thing requires the user to opt into changing their 
> env.py to use
> connectable = context.config.attributes.get("connection", None)
> in order to make use of a connection handed to it.
>
>
> I'm not following all the moving pieces here, like when you say 
> "commands", I assume you mean the commands in alembic.command, like 
> alembic.command.upgrade().   when you say "an env.py defined in that 
> package", I guess you mean in the separate package that is using your 
> library.  
>
> this doesn't seem different from what Alembic itself does, "that package" 
> would have a directory where the env.py and its migration files are 
> present?   So... I'm not really sure what you're trying to do beyond 
> calling an Alembic command against a particular directory.
>
> if OTOH by "commands" you mean migration operations, like you want to call 
> op.create_table() yourself, OK, but I don't really understand enough to 
> formulate an answer for you.
>
>
>
>
>



Invoke commands with custom env.py from library

2020-03-05 Thread Daniel Cardin
I am attempting to write a library which invokes alembic commands, while 
referencing the migrations of a separate package which has installed said 
library.

The intent here, is for the library to invoke the alembic commands with an 
env.py defined in that package. This seems to work through 
config.get("script_location", "package_name:foldername")
but then obviously expects the actual migrations to be colocated at the 
same location.

My guess would be, if this is possible at all, that there'd be something I 
could put in the env.py which would reconfigure it to execute the 
`context.run_migrations()` migration context (and therefore search path) 
back at the original call site.

I realize that this won't always work, given that env.py is often likely 
customized enough such that a generic one wouldn't be able to execute them, 
but per your cookbook suggestions about programmatic invocation of 
commands, this sort of thing requires the user to opt into changing their 
env.py to use
connectable = context.config.attributes.get("connection", None)
in order to make use of a connection handed to it.



[sqlalchemy] Don't want foreign key on child to be updated when parent is updated

2019-02-26 Thread Daniel Leon
Suppose I have a Parent and Child table, with Child having

parent_id = Column(Integer, ForeignKey('parent.id'), nullable=False)
parent = relationship('Parent', back_populates='children')

and Parent having

children = relationship('Child', back_populates='parent')

Then if I try to delete a Parent, since Child has parent_id non-nullable, 
I'd get the error *Cannot insert the value NULL into column parent_id*. I 
want Child to retain its parent_id after its Parent is deleted.

I didn't find a cascade option that accomplishes this.
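
One option that may come close (a sketch, not an answer from this thread): 
passive_deletes='all' tells the ORM to leave child rows entirely alone when 
a Parent is deleted - neither deleting them nor nulling out parent_id - 
though the database-side foreign key constraint must then tolerate the 
orphaned value:

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # passive_deletes='all': on delete of a Parent, the ORM emits no
    # UPDATE or DELETE for the children; parent_id is left as-is
    children = relationship('Child', back_populates='parent',
                            passive_deletes='all')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'), nullable=False)
    parent = relationship('Parent', back_populates='children')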



[sqlalchemy] Remove all objects of specified type from session

2019-01-29 Thread Daniel Leon
There is the method expunge, which removes a specified object from the 
session, and expunge_all, which removes all objects from the session. Is 
there a way I can expunge all objects of a certain type?

We're transitioning from SQLObject, which reveals its cache dictionary. 
There are many places where we 'expire' a table by removing all objects of 
that type from the cache.
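
A small sketch of one way to do this (iterating a Session yields every 
object it currently holds):

def expunge_all_of_type(session, cls):
    # snapshot into a list first, since expunge() mutates session state
    for obj in list(session):
        if isinstance(obj, cls):
            session.expunge(obj)

This mirrors the SQLObject-style "expire a table" pattern described above.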



[sqlalchemy] Same name for both Column and property

2019-01-24 Thread Daniel Leon
For some columns I want to process the value a bit before getting or setting 
it. For example, I have a column Status, but I have to call it _Status to 
avoid a name conflict with its property.

class Task(Base):
    __tablename__ = 'task'

    id = Column(Integer, primary_key=True)
    _Status = Column('Status', String)

    @property
    def Status(self):
        return f(self._Status)

    @Status.setter
    def Status(self, value):
        self._Status = value

Now whenever I write a query involving Status I must remember to use 
_Status. If I accidentally use Status in a query, it adds WHERE 0 = 1, thus 
returning nothing, so it fails and I don't know why.

I'm converting from SQLObject where you can call the column Status and 
define getter and setter by defining functions _get_Status and _set_Status. 
When I access Status in a query SQLObject knows to use the Column and 
outside a query it uses _get_Status.

How can I get this behaviour with SQLAlchemy?
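
A sketch of the usual SQLAlchemy answer to this shape of problem: 
hybrid_property gives a single name that behaves as the processed value on 
instances and as the real Column inside query expressions (f() stands in 
for the processing function from the snippet above):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class Task(Base):
    __tablename__ = 'task'

    id = Column(Integer, primary_key=True)
    _Status = Column('Status', String)

    @hybrid_property
    def Status(self):
        return f(self._Status)      # instance access: processed value

    @Status.setter
    def Status(self, value):
        self._Status = value

    @Status.expression
    def Status(cls):
        return cls._Status          # query access: the real column

With this, session.query(Task).filter(Task.Status == 'x') compares against 
the column instead of silently adding WHERE 0 = 1.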



Re: [sqlalchemy] Multitenancy with tenant_id column

2019-01-23 Thread Daniel
Hi Mike,

Thanks for your quick response. And quick patch! I look forward to the
1.2.17 release.

On Tue, Jan 22, 2019 at 10:03 PM Mike Bayer 
wrote:

> On Tue, Jan 22, 2019 at 4:49 PM Daniel Lister  wrote:
> > Not as much of an issue but inserts are also not handled.
>
> the Query object doesn't do INSERTs, INSERTs are controlled by
> creating an instance of your mapped class with the desired values upon
> it.   I would imagine if tenant_id is required, you simply make that a
> NOT NULL column in your table (which it would be anyway) then have
> tenant_id be the first argument positional argument on all __init__()
> methods.  Or use the "init" mapper event which is called alongside
> __init__().There are lots of other ways to get at INSERTs too as
> well as UPDATE/DELETE within the normal ORM flush process (e.g. not
> bulk update() or delete()) using the mapper or session hooks..   How
> you organize these depends on the coding patterns you seek to use
> (e.g. explicit call everywhere, pulling from a thread local registry,
> etc.)
>
>


-- 
-Daniel



[sqlalchemy] Multitenancy with tenant_id column

2019-01-22 Thread Daniel Lister
Hi all,

I'm working on a multi-tenant web app with SQLAlchemy and I would like to 
use a tenant_id column on some tables to isolate data per tenant. Ideally, 
filtering on and adding this tenant_id would be automatic. I found a talk 
by the creator of Flask that briefly mentions a way to do this with 
SQLAlchemy. A recipe goes into more detail on this approach, but also 
points out some flaws - primarily that it will not handle update or delete 
queries. To me this feels like a major flaw: a developer, used to having 
tenant_id handled automatically, may easily forget when performing an 
update or delete, thus creating a vulnerability. It proposes to solve the 
update and delete issue by using a before_cursor_execute() event handler. 
However, as far as I can tell, this approach would involve editing the SQL 
text directly, defeating much of the point of using SQLAlchemy. Not as much 
of an issue, but inserts are also not handled.

I also found a library that tries to solve the problem; however, it is 
described as experimental and hasn't been updated in 5 years. It seems to 
have the problems of the above solution as well.

It seems to me that this is a common pattern and I'm surprised there aren't 
good, solid solutions out there for SQLAlchemy. There is a library that 
supports this approach for Django.

Ideally I would love to have a session that is scoped to a tenant and 
automatically applies a tenant_id on every select, update, delete, and 
insert, so that queries can be written without the need to consider tenant 
at all. Is this feasible? I would greatly appreciate it if anyone could 
help or point me in the right direction. Thanks!
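
For the INSERT side specifically, a sketch along the lines of Mike's reply 
above (current_tenant_id() is a hypothetical thread-local lookup, and 
MyModel stands for any mapped class with a tenant_id column):

from sqlalchemy import event

@event.listens_for(MyModel, "before_insert", propagate=True)
def set_tenant_id(mapper, connection, target):
    # fill in tenant_id at flush time if the caller didn't set it;
    # the NOT NULL constraint still backstops anything missed
    if target.tenant_id is None:
        target.tenant_id = current_tenant_id()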



[sqlalchemy] Query every time when accessing many-to-one field

2019-01-15 Thread Daniel Leon
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative.base import _declarative_constructor
from sqlalchemy.orm import relationship, sessionmaker


class BaseBase(object):

    def __init__(self, **kwargs):
        _declarative_constructor(self, **kwargs)
        session.add(self)
        session.commit()


Base = declarative_base(cls=BaseBase, constructor=BaseBase.__init__)


class Node(Base):

    __tablename__ = 'node'

    id = Column(Integer, primary_key=True)
    label = Column(String)
    parent_id = Column(Integer, ForeignKey('node.id'))
    parent = relationship('Node', remote_side=id, back_populates='children')
    children = relationship('Node', back_populates='parent')

    def __repr__(self):
        return self.label


if __name__ == '__main__':
    engine = create_engine('sqlite://', echo=True)
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine, expire_on_commit=False)
    session = Session()

    node = Node(label='root')
    # Accessing children for the first time, so a query is made
    print node.children
    child1 = Node(label='child1', parent_id=node.id)
    # Accessing children again: no query is made and child1 is not there
    print node.children
    child2 = Node(label='child2', parent=node)
    # Accessing children again: no query is made and child1 is still not
    # there, but child2 is there
    print node.children

In my MCVE I have expire_on_commit=False. When I access node.children for 
the first time, it makes the query. When I create child1 specifying 
parent_id, it doesn't update node.children.

I am converting a codebase from SQLObject, which queries every time you 
access a many-to-one field. With SQLAlchemy I know I can specify 
lazy='dynamic' in the children relationship, but that returns a query 
object, so I'd have to do node.children.all().
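
A sketch of one middle ground (not from the thread): keep the default 
loader and expire the collection whenever a fresh read is needed, which 
forces a new SELECT on the next access:

    # mark the collection stale; the next node.children access re-queries,
    # so child1 (created via raw parent_id) now shows up
    session.expire(node, ['children'])
    print node.children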



[sqlalchemy] New object on foreign key after refresh

2019-01-02 Thread Daniel Leon
Sometimes when I do session.refresh(self), the foreign field 
self.ProductionOrder is a different object than before. I have 
non-database fields on ProductionOrder that are then lost. I have 
expire_on_commit=False, so I expect an object corresponding to a database 
row to persist. Debugging, I see that this object replacement occurs after 
session.refresh(self). I haven't determined the conditions under which it 
occurs. Any thoughts?



Re: [sqlalchemy] General on set event handler

2018-12-14 Thread Daniel Leon
I want to be able to refresh the object immediately before setting its 
attribute and commit immediately after. I notice that the set event fires 
before the set. Is there an after-set event?



[sqlalchemy] General on set event handler

2018-12-13 Thread Daniel Leon
I know that you can define an on set event handler for an attribute using 
the decorator @event.listens_for(Class.Attribute, 'set'). I'd like a single 
event handler to handle every attribute on every class. How can I do this?

In the event handler I want to refresh the object before setting the value 
and commit after. I know this is strange, but I'm converting a project from 
SQLObject and to make the conversion as seamless as possible I want to make 
SQLAlchemy mimic SQLObject's behaviour.
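
A sketch of one way to attach a single handler everywhere (not from the 
thread): hook mapper_configured and register a 'set' listener on each 
column attribute as each class is mapped:

from sqlalchemy import event
from sqlalchemy.orm import mapper

def receive_set(target, value, oldvalue, initiator):
    # fires just before the attribute is assigned, for every column
    # attribute of every mapped class
    ...

@event.listens_for(mapper, "mapper_configured")
def attach_set_listeners(mapper_, cls):
    for attr in mapper_.column_attrs:
        event.listen(getattr(cls, attr.key), "set", receive_set)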



Re: [sqlalchemy] Global pre-filter of queries containing a given model.

2018-10-18 Thread Daniel Cardin
Fair enough, I was expecting something like that to be the case anyhow. 
Thank you very much!

On Thursday, October 18, 2018 at 2:33:20 PM UTC-4, Mike Bayer wrote:
>
> On Thu, Oct 18, 2018 at 12:31 PM Daniel Cardin  > wrote: 
> > 
> > Ooh, very nice, thanks! This appears to work much more reliably and 
> requires far less code! 
> > 
> > We do have some relationships which are using joined loading. And one 
> which is doing Parent -> mapping-table --joinedload--> Child 
> --selectin--> Parent, which also appears not to work, but I could 
> (naively) see the joinedload in the middle causing that to be the problem. 
> Is there a way to ensure this works for joined relationships as well? 
>
> you would have to do something crazy like make the joinedload() 
> against a new relationship that has the modified join condition. 
> maybe if there were an option for joinedload to not use aliases, which 
> was discussed in a different thread, that might make things easier. 
> But if you're dealing with collections, the "selectin" strategy, 
> especially the new enhancements coming in 1.3 which omit the JOIN 
> entirely, is probably a better eager loader in any case. 
>
>
> > 
> > On Wednesday, October 17, 2018 at 10:24:50 PM UTC-4, Mike Bayer wrote: 
> >> 
> >> On Wed, Oct 17, 2018 at 11:57 AM Daniel Cardin  
> wrote: 
> >> > 
> >> > I would like add a global filter to all queries emitted against a 
> particular table. In particular, its important that it doesn't require 
> changes to existing queries; and it would be nice if there was an escape 
> hatch. 
> >> > 
> >> > I'm aware of the following 2 recipes that googling this returns: 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery 
> >> > 
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/GlobalFilter 
> >> > 
> >> > GlobalFilter seems to be the way to implement the escape hatch, e.g. 
> `.options(OmitGlobalFilter())`, or somesuch 
> >> > 
> >> > PreFilteredQuery at first seems to be exactly what I want for 
> applying the filter without changing existing queries, but only appears to 
> work when it is directly queried against, and not for relationships. 
> >> 
> >> the "select" loader strategy should be applying the criteria for 
> >> PreFilteredQuery at least, but for other kinds of "lazy" it probably 
> >> won't. You'd need to write an event handler that intercepts all 
> >> Query objects unconditionally and carefully tests them to detect the 
> >> various kinds of loading to apply the criteria you want, this would be 
> >> using the before_compile event: 
> >> 
> https://docs.sqlalchemy.org/en/latest/orm/events.html?highlight=before_compile#sqlalchemy.orm.events.QueryEvents.before_compile.
>  
>
> >> The "prefilteredquery" example should really be retired in favor of 
> >> before_compile in any case. The only loader strategy this wouldn't 
> >> work with would be "joined", since that loader is not creating a new 
> >> query, only augmenting an existing one, and that is driven strictly 
> >> off the relationship loader pattern. 
> >> 
> >> 
> >> 
> >> > 
> >> > class Parent(Base): 
> >> > __tablename__ = 'parent' 
> >> > id = Column(types.Integer, primary_key=True) 
> >> > children = relationship('Child', lazy='selectin')  # or 
> lazy=joined, or whatever 
> >> > 
> >> > class Child(Base): 
> >> > __tablename__ = 'child' 
> >> > id = Column(types.Integer, primary_key=True) 
> >> > parent_id = Column(ForeignKey('parent.id'), nullable=False) 
> >> > 
> >> > class OmitCertainIds(Query): 
> >> > # ... the beginning portion of PreFilterQuery 
> >> > def _apply_filter(self): 
> >> > mzero = self._mapper_zero() 
> >> > if mzero is None or mzero.class_ != Child: 
> >> > return self 
> >> > return self.enable_assertions(False).filter(Parent.id.in_([1, 
> 2])) 
> >> > 
> >> > # ... 
> >> > 
> >> > # This works great! 
> >> > db.query(Child).all() 
> >> > 
> >> > # This does not apply that filter 
> >> > db.query(Parent).all()[0].children 
> >> > 
> >> > Is there a way to globally filter queries which return Child rows 
> (such as through that relationship, where it's not an explicit `.query` 
> call)? Or does this require adjusting all such relationships, or other 
> means by which SQLAlchemy may emit queries without a direct query.
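
A minimal sketch of the before_compile interception discussed in this 
thread, using the Parent/Child models quoted above (retval=True lets the 
listener return a replacement Query; the escape-hatch option mentioned 
earlier is omitted for brevity):

from sqlalchemy import event
from sqlalchemy.orm import Query

@event.listens_for(Query, "before_compile", retval=True)
def restrict_child_rows(query):
    # runs for every Query, including relationship lazy loads that go
    # through the "select" strategy; joined eager loads are not covered
    for desc in query.column_descriptions:
        if desc['entity'] is Child:
            query = query.enable_assertions(False).filter(
                Child.parent_id.in_([1, 2]))
    return query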

Re: [sqlalchemy] Global pre-filter of queries containing a given model.

2018-10-18 Thread Daniel Cardin
Ooh, very nice, thanks! This appears to work much more reliably and 
requires far less code!

We do have some relationships which are using joined loading. And one which 
is doing Parent -> mapping-table --joinedload--> Child --selectin--> 
Parent, which also appears not to work, but I could (naively) see the 
joinedload in the middle causing that to be the problem. Is there a way to 
ensure this works for joined relationships as well?

On Wednesday, October 17, 2018 at 10:24:50 PM UTC-4, Mike Bayer wrote:
>
> On Wed, Oct 17, 2018 at 11:57 AM Daniel Cardin  > wrote: 
> > 
> > I would like to add a global filter to all queries emitted against a 
> particular table. In particular, it's important that it doesn't require 
> changes to existing queries; and it would be nice if there were an escape 
> hatch. 
> > 
> > I'm aware of the following 2 recipes that googling this returns: 
> > 
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery 
> > https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/GlobalFilter 
> > 
> > GlobalFilter seems to be the way to implement the escape hatch, e.g. 
> `.options(OmitGlobalFilter())`, or somesuch 
> > 
> > PreFilteredQuery at first seems to be exactly what I want for applying 
> the filter without changing existing queries, but only appears to work when 
> it is directly queried against, and not for relationships. 
>
> the "select" loader strategy should be applying the criteria for 
> PreFilteredQuery at least, but for other kinds of "lazy" it probably 
> won't. You'd need to write an event handler that intercepts all 
> Query objects unconditionally and carefully tests them to detect the 
> various kinds of loading to apply the criteria you want, this would be 
> using the before_compile event: 
>
> https://docs.sqlalchemy.org/en/latest/orm/events.html?highlight=before_compile#sqlalchemy.orm.events.QueryEvents.before_compile.
>  
>
> The "prefilteredquery" example should really be retired in favor of 
> before_compile in any case. The only loader strategy this wouldn't 
> work with would be "joined", since that loader is not creating a new 
> query, only augmenting an existing one, and that is driven strictly 
> off the relationship loader pattern. 
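
For reference, a minimal sketch of the before_compile approach, following the 
documented QueryEvents.before_compile recipe (retval=True makes the event use 
the returned, modified Query; the Child/id criteria are just this thread's 
example):

from sqlalchemy import event
from sqlalchemy.orm import Query

@event.listens_for(Query, "before_compile", retval=True)
def apply_global_child_filter(query):
    # check each entity the Query returns; when Child is among them,
    # tack the global criteria onto the Query before it compiles
    for desc in query.column_descriptions:
        if desc['type'] is Child:
            entity = desc['entity']
            query = query.filter(entity.id.in_([1, 2]))
    return query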



[sqlalchemy] Global pre-filter of queries containing a given model.

2018-10-17 Thread Daniel Cardin
I would like to add a global filter to all queries emitted against a 
particular table. In particular, it's important that it doesn't *require* 
changes to existing queries; and it would be nice if there were an escape 
hatch.

I'm aware of the following 2 recipes that googling this returns:
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/GlobalFilter

GlobalFilter seems to be the way to implement the escape hatch, e.g. 
`.options(OmitGlobalFilter())`, or somesuch

PreFilteredQuery at first seems to be exactly what I want for applying the 
filter without changing existing queries, but only appears to work when it 
is directly queried against, and not for relationships.

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(types.Integer, primary_key=True)
    children = relationship('Child', lazy='selectin')  # or lazy='joined', or whatever

class Child(Base):
    __tablename__ = 'child'
    id = Column(types.Integer, primary_key=True)
    parent_id = Column(ForeignKey('parent.id'), nullable=False)

class OmitCertainIds(Query):
    # ... the beginning portion of PreFilteredQuery
    def _apply_filter(self):
        mzero = self._mapper_zero()
        if mzero is None or mzero.class_ != Child:
            return self
        return self.enable_assertions(False).filter(Parent.id.in_([1, 2]))

# ...

# This works great!
db.query(Child).all()

# This does not apply that filter
db.query(Parent).all()[0].children

Is there a way to globally filter queries which return Child rows (such as 
through that relationship, where it's not an explicit `.query` call)? Or does 
this require adjusting all such relationships, and any other means by which 
SQLAlchemy may emit queries without a direct query?



Re: [sqlalchemy] Conditionally ordered nested relationships

2018-06-11 Thread Daniel Cardin

>
> the fact that you are changing the order of things_check seems to 
> imply you want the order of parent_feature.things to change also 
> but you're not changing the query. 
>
 
Ugh, whoops again. Yes, that's exactly what I meant!

> in which case you would use contains_eager()

Wow okay, that exactly works.

So I guess I didn't/don't understand at all what contains_eager is 
meant to do, even after reading the documentation. The join beforehand is 
what generates the SQL which ultimately allows contains_eager to load the 
relationship. But how is that different from how it would have worked with 
a normal joinedload (I mean, I can see the difference in the generated SQL, 
but how was I to know that I should have been using contains_eager in the 
first place)?
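
For reference, a sketch of the contrast using this thread's models: with 
contains_eager you write the JOIN (and ordering) yourself and the ORM fills 
the collection from those rows, whereas joinedload emits its own aliased 
JOIN that your criteria can't touch:

from sqlalchemy import desc
from sqlalchemy.orm import contains_eager, joinedload

# joinedload: the ORM adds its own LEFT OUTER JOIN against an anonymous
# alias; .order_by(Thing.name) cannot influence the loaded collection
q1 = session.query(ParentFeature).options(
    joinedload(ParentFeature.unordered_things))

# contains_eager: the explicit join supplies the rows, so ordering applied
# to Thing also determines the order of the loaded collection
q2 = (
    session.query(ParentFeature)
    .join(ParentFeature.unordered_things)
    .options(contains_eager(ParentFeature.unordered_things))
    .order_by(desc(Thing.name))
)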

Also (not that I actually want to in this case), would it be possible 
to do the same thing with a different relationship loading mechanism? 
Suppose this were typically a `selectin`-loaded relationship with a default 
order_by on it. Is it possible to have it do the same thing it would have 
done for selectin anyway, order *that* query configurably, and get the same 
end result?

Anyways, thanks so much. I was banging my head against this for far too 
long!



Re: [sqlalchemy] Conditionally ordered nested relationships

2018-06-08 Thread Daniel Cardin
Aha, I didn't realize I had a mismatch of relationship names.

Below I included a full example that fails an assert where I'm having the 
issue.

   - The only reason I have both "unordered_things" and "things" relationships 
   is that I wasn't sure how to get it to apply only the query's sort
   - The only "requirement" I have of the 2nd query is that it is the same for 
   both asc and desc sorting. Everything else was just my best attempt at 
   getting it to work
   

from sqlalchemy import asc, Column, create_engine, desc, ForeignKey, types
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import joinedload, relationship, sessionmaker

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(types.Integer, primary_key=True, autoincrement=True)

    parent_feature = relationship('ParentFeature', lazy='joined', uselist=False)

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(types.Integer, primary_key=True, autoincrement=True)
    name = Column(types.Unicode, nullable=False)

class ParentFeature(Base):
    __tablename__ = 'parent_feature'
    id = Column(types.Integer, primary_key=True, autoincrement=True)
    parent_id = Column(ForeignKey('parent.id'), nullable=False, index=True)

    parent = relationship('Parent', uselist=False)
    unordered_things = relationship('Thing', secondary='parent_feature_thing')
    things = relationship('Thing', secondary='parent_feature_thing',
                          order_by=Thing.name, viewonly=True)

class ParentFeatureThingPivot(Base):
    __tablename__ = 'parent_feature_thing'
    parent_feature_id = Column(ForeignKey('parent_feature.id'), primary_key=True)
    thing_id = Column(ForeignKey('thing.id'), primary_key=True)

engine = create_engine('sqlite:///')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

models = [
    Parent(id=1),
    Parent(id=2),
    Thing(id=1, name=''),
    Thing(id=2, name=''),
    Thing(id=3, name=''),
    ParentFeature(id=1, parent_id=1),
    ParentFeature(id=2, parent_id=2),
    ParentFeatureThingPivot(parent_feature_id=2, thing_id=1),
    ParentFeatureThingPivot(parent_feature_id=2, thing_id=2),
    ParentFeatureThingPivot(parent_feature_id=2, thing_id=3),
]
for model in models:
    session.add(model)
session.flush()

parent = session.query(Parent).all()[1]
things = [thing.name for thing in parent.parent_feature.things]
things_check = ['', '', '']
print('{} == {}'.format(things, things_check))
assert things == things_check

for direction in (asc, desc):
    things_check = ['', '', '']
    if direction == desc:
        things_check = list(reversed(things_check))

    parent_feature = (
        session.query(ParentFeature)
        .join(ParentFeature.parent, ParentFeature.unordered_things)
        .options(
            joinedload(ParentFeature.parent),
            joinedload(ParentFeature.unordered_things),
        )
        .filter(Parent.id == 2)
        .order_by(direction(Thing.name))
        .one()
    )
    things = [thing.name for thing in parent_feature.things]
    print('{} == {}'.format(things, things_check))
    assert things == things_check




Re: [sqlalchemy] Conditionally ordered nested relationships

2018-06-08 Thread Daniel Cardin
So "ParentFeature.shows" is there for the case where I query 
`pg.query(Parent)`, I want it to be ordered ascending. 

The only reason i have "ParentFeature.unordered_shows" is because if I try 
to apply an `order_by` to `Thing.name` in a query (like my query example), 
it emits "ORDER BY thing.name DESC, thing.name" (e.g. the order_by on the 
relationship is still applied, despite my sort). If I can override that, i 
have no need for both relationships

To answer your other question: I only have the join+joinedloads from following 
https://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html?highlight=joinedload#the-zen-of-joined-eager-loading.

The query I included was only to show my attempt at writing a query which:

   1. queries ParentFeature
   2. gets only the one ParentFeature with a specific Parent
   3. applies the sort to ParentFeature.things, so "result.one().things" 
   is sorted ascending/descending



[sqlalchemy] Conditionally ordered nested relationships

2018-06-08 Thread Daniel Cardin
I'm attempting a somewhat complex relationship filter/sort operation. I 
need to filter results by the id of a parent relationship, and then sort a 
nested relationship by one of its attributes.

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(types.Integer, primary_key=True, autoincrement=True)
    name = Column(types.Unicode, nullable=False)

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(types.Integer, primary_key=True, autoincrement=True)
    name = Column(types.Unicode, nullable=False)

class ParentFeature(Base):
    __tablename__ = 'parent_feature'
    id = Column(types.Integer, primary_key=True, autoincrement=True)
    parent_id = Column(ForeignKey('parent.id', ondelete='CASCADE'),
                       nullable=False, index=True)

    parent = relationship('Parent')
    unordered_things = relationship('Thing', secondary='parent_feature_thing')
    things = relationship('Thing', secondary='parent_feature_thing',
                          order_by=Thing.name, viewonly=True)

class ParentFeatureThingPivot(Base):
    __tablename__ = 'parent_feature_thing'
    parent_feature_id = Column(ForeignKey('parent_feature.id', ondelete='CASCADE'),
                               primary_key=True)
    thing_id = Column(ForeignKey('thing.id', ondelete='CASCADE'),
                      primary_key=True)

Ideally on ParentFeature I would only need "things" and not 
"unordered_things", but sometimes I query
directly against Parent, in which case it should always order_by ascending. 
Anyway, that works.

The actual issue is that sometimes I need to be able to conditionally 
order "things" on queries against
ParentFeature, ascending or descending, and apparently(?) I can't use the 
"things" relationship because
its order_by is unconditionally applied to uses of that relationship.

The query that I think should be working, but is not is like this:

direction = desc  # or asc
result = (
    pg.query(ParentFeature)
    .join(ParentFeature.parent, ParentFeature.unordered_shows)
    .options(
        joinedload(ParentFeature.parent),
        joinedload(ParentFeature.unordered_shows),
    )
    .filter(Parent.id == 3)
    .order_by(direction(Thing.name))
)


except it still orders the final query ascending (or perhaps unordered, but 
definitely not descending)

any ideas?



Re: [sqlalchemy] Load sqlalchemy orm model from dict

2017-01-06 Thread Daniel Kraus

Jonathan Vanasco <jvana...@gmail.com> writes:
> On Thursday, January 5, 2017 at 8:34:52 PM UTC-5, Daniel Kraus wrote:
>>
>> The use-case is that I have a big model with lots of complex
>> relationships but 90% of the time I don't need the data from those.
>
> If I'm reading your question correctly, most of what sqlalchemy does (and
> excels at) is specifically keeping people from doing what you're trying to
> do.

I think you got me wrong then.

> It seems like you're trying to avoid all the work that is done to
> ensure data integrity across sessions and transactions.  (Which is a common
> need)

Nope,
I just want to use a cache where I store only the DB row for my
model and not all the data from related tables as well.

> Read up on the `merge` session
> method 
> (http://docs.sqlalchemy.org/en/latest/orm/session_state_management.html#merging)
>  The dogpile caching section is largely based on that (though goes beyond
> it).
>
> You would do something like this:
>
> user = User(**userdata)
> user = session.merge(user)
>
> That will merge the user object into the session (and return the merged
> object).

Thanks.


> You will run into problems if your cached data is incomplete though --
> AFAIK, there is no way to tell sqlalchemy that you've only loaded data for
> certain columns.  If you don't populate all the columns in your cache, but
> have it in the db, I have no idea how to get that info from the db.

`session.merge` populates the missing attributes from the db.
But in my case, where I have all the data and know it's up to date,
I can even skip that one SELECT and use the `load=False` parameter.
So for my example I can do (thanks to Mike's reply):

  make_transient_to_detached(user)
  user = session.merge(user, load=False)

and after that I can do my `user.big_n_to_m_relation_data` and it will
correctly query the db.
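
Spelled out, assuming the User model from earlier in the thread, that's:

from sqlalchemy.orm import make_transient_to_detached

user = User(**user_dict)                # plain instance from cached column data
make_transient_to_detached(user)        # mark it as if loaded, then detached
user = session.merge(user, load=False)  # attach to the session with no SELECT
user.big_n_to_m_relation_data           # lazy-loads from the db as usual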


Thanks,
  Daniel



Re: [sqlalchemy] Load sqlalchemy orm model from dict

2017-01-06 Thread Daniel Kraus
Hi!

mike bayer <mike...@zzzcomputing.com> writes:
> you're looking for session.merge() but if you're looking to save on a
> SELECT you might also want to send in load=False - and if you are
> starting with a fresh (non-pickled) object you probably need to call
> make_transient_to_detached first so that it acts like it was loaded from
> the database first.

Nice. Works like I wanted :)

Thanks,
  Daniel



Re: [sqlalchemy] Load sqlalchemy orm model from dict

2017-01-05 Thread Daniel Kraus
Hi!

I don't even quote my old message since it's just confusing.
In my head the question made sense ;)

So I try again with a code example:

I have a class `User`:
#+BEGIN_SRC python
class User(Base):
    __tablename__ = 'users'  # assumed; needed for declarative
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    email = Column(String(64))

    languages = relationship('Language', secondary='user_languages')
#+END_SRC

I already have a lot of users stored in my DB.
And I know that I have, for example, this user in my DB:
#+BEGIN_SRC python
user_dict = {
    'id': 23,
    'name': 'foo',
    'email': 'foo@bar',
}
#+END_SRC

So I have all the attributes but the relations.

Now I want to make a sqlalchemy `User` instance
and kind of register it in sqlalchemy's system
so I can get the `languages` if needed.

#+BEGIN_SRC python
user = User(**user_dict)

# Now I can access the id, name email attributes
assert user.id == 23

# but since sqlalchemy thinks it's a new model it doesn't
# lazy load any relationships
assert len(user.languages) == 0
# I want here that the languages for the user with id 23 appear

# So I want that `user` is the same as when I would have done
user_from_db =  DBSession.query(User).get(23)
assert user == user_from_db
#+END_SRC


The use-case is that I have a big model with lots of complex
relationships but 90% of the time I don't need the data from those.
So I only want to cache the direct attributes plus what else I need
and then load those from the cache like above and be able to
use the sqlalchemy model like I would have queried it from the db.


Hope it makes a bit more sense now.
Sorry for the confusing first question and wasting your time.

Thanks,
  Daniel



[sqlalchemy] Load sqlalchemy orm model from dict

2017-01-03 Thread Daniel Kraus
Hi,

how can I load a sqlalchemy orm model from a dictionary?

Let's say I have a `User` model with attributes `id`, `name`, `email`
and a relationship `languages`.

I have a dict with `id`, `name`, `email` from users in my cache,
but not `languages` that's rarely used.

Is it possible to create a `User` model from a `user_dict` that behaves
like I would have queried it with dbsession.query(User).get(42)?
What I mean in particular is that I want an access to
`user.languages` to emit a query and populate the attribute.

I know about the dogpile caching example, which is similar but looks
like way too much for what I want.

PS, I asked the same question on stackoverflow:
http://stackoverflow.com/questions/41158307/load-sqlalchemy-orm-model-from-dict

Maybe I didn't make clear what I want.
Please tell me if I should rephrase my question.

Thanks in advance,
  Daniel



[sqlalchemy] Documented URL string for Google Cloud SQL confusing

2016-05-11 Thread Daniel Sank
The documentation on connection url strings [a] indicates that connections
to Google Cloud SQL should use

mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<projectid>:<instancename>

There are several problems:

1. The given string seems to implicitly assume use of the local proxy
provided by Google. This is fine, but it would be better to state that
explicitly.

2. The string assumes the user has configured the proxy to put its sockets
at /cloudsql. This should be stated explicitly or the string should
parametrize that path.

3. The parametrization <projectid>:<instancename> is misleading, or
possibly incorrect. It might be clearer to write

mysql+mysqldb://root@/<dbname>?unix_socket=/cloudsql/<instance connection name>

In the Cloud SQL dashboard, the "instance connection name" is something like

google.com:<projectid>:<instancename>

so the present documentation would likely lead to the user omitting the
necessary "google.com:" at the beginning (at least, it did for me).

I'd like to fix this up, but I haven't found any documentation on how to
build the documentation either on github or on the sqlalchemy website. Any
clues?


[a]
http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#module-sqlalchemy.dialects.mysql.gaerdbms


-- 
Daniel Sank



Re: [sqlalchemy] prefix_with for queries with that eager load relationships

2016-02-25 Thread Daniel Kraus


On Tuesday, 23 February 2016 23:16:25 UTC+8, Mike Bayer wrote:
>
>
> On 02/23/2016 04:00 AM, Daniel Kraus wrote: 
> > Here is a simple script to demonstrate the error: 
> > 
> > https://gist.github.com/dakra/0424086f5837d722bc58 
>
> the joinedload() case "works", as long as you don't use LIMIT or OFFSET, 
> as there's no subquery: 
>
> SELECT SQL_CALC_FOUND_ROWS users.id AS users_id, users.name AS 
> users_name, addresses_1.id AS addresses_1_id, addresses_1.email_address 
> AS addresses_1_email_address, addresses_1.user_id AS addresses_1_user_id 
> FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = 
> addresses_1.user_id 
>
> but looking at the docs for the purpose of found_rows, it would only be 
> used with a LIMIT.   Therefore it's not really valid to use this 
> function with joined eager loading of a collection because it only works 
> at the top level of the query and a joined eager load is going to return 
> more rows than there are actual entities.   If OTOH you are only 
> retrieving a many-to-one via joined eager load, this should all work 
> totally fine, and even in the case of LIMIT I don't think a subquery is 
> applied for simple many-to-one relationships. 
>
> So subqueryload is the only practical option when you need collection 
> eager loading plus the found rows feature with limit.   In this case you 
> definitely don't want this emitted in the subquery because even if it 
> were accepted it would mess up your found_rows().   Longer term solution 
> here would be to provide flags to the query.prefix_with() method to 
> indicate prefixes that should always move to the outside of the query as 
> well as prefixes that should not be passed along to subqueryloaders and 
> other transformations. 
>
> Here's a found_rows modifier that will anticipate a subqueryload and 
> erase any _prefixes() nested: 
>
> from sqlalchemy.orm.interfaces import MapperOption 
> from sqlalchemy.sql import visitors 
>
>
> class FoundRows(MapperOption): 
>  def process_query(self, query): 
>  query._prefixes = "SQL_CALC_FOUND_ROWS", 
>
>  def process_query_conditionally(self, query): 
>  # when subqueryload calls upon loader options, it is passing 
>  # the fully constructed query w/ the original query already 
>  # embedded as a core select() object.  So we will modify the 
>  # select() after the fact. 
>
>  def visit_select(select): 
>  select._prefixes = () 
>
>  # this can be more hardcoded, but here we're searching throughout 
>  # all select() objects and erasing their _prefixes 
>  for from_ in query._from_obj: 
>  visitors.traverse( 
>  from_, {}, {"select": visit_select}) 
>
>
> users = s.query(User).options(FoundRows(), 
> subqueryload(User.addresses)).limit(3).all() 
> row_count = s.execute('SELECT FOUND_ROWS()').scalar() 
>
> print(users, row_count) 
>
>

Thanks, this does remove the 'SQL_CALC_FOUND_ROWS' from the subquery,
but when I query like you did above with the subqueryload option, sqlalchemy 
fires _2_ queries,
the first one having SQL_CALC_FOUND_ROWS and the second one not, so
s.execute('SELECT FOUND_ROWS()') only returns the found rows for that 
second query:

--- cut ---
[...INSERTs...]
2016-02-25 17:20:02,625 INFO sqlalchemy.engine.base.Engine SELECT 
SQL_CALC_FOUND_ROWS users.id AS users_id, users.name AS users_name 
FROM users 
 LIMIT %(param_1)s
2016-02-25 17:20:02,625 INFO sqlalchemy.engine.base.Engine {'param_1': 1}
2016-02-25 17:20:02,627 INFO sqlalchemy.engine.base.Engine SELECT 
addresses.id AS addresses_id, addresses.email_address AS 
addresses_email_address, addresses.user_id AS addresses_user_id, 
anon_1.users_id AS anon_1_users_id 
FROM (SELECT users.id AS users_id 
FROM users 
 LIMIT %(param_1)s) AS anon_1 INNER JOIN addresses ON anon_1.users_id = 
addresses.user_id ORDER BY anon_1.users_id
2016-02-25 17:20:02,627 INFO sqlalchemy.engine.base.Engine {'param_1': 1}
2016-02-25 17:20:02,628 INFO sqlalchemy.engine.base.Engine SELECT 
FOUND_ROWS()
2016-02-25 17:20:02,628 INFO sqlalchemy.engine.base.Engine {}
[<__main__.User object at 0x7f66e19a61d0>] 2
--- cut ---


But looking more into it, SQL_CALC_FOUND_ROWS seems to be slower in most 
cases anyway. See:
https://www.percona.com/blog/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
https://mariadb.atlassian.net/browse/MDEV-4592  (The related issue is open 
since 2009).


So now I generate the query, and to get the row count I have:
--- cut ---
if model_query.statement._group_by_clause.clauses:
# if there's a GROUP BY we count the slow way:
# SELECT count(*) FROM (SELECT ... FROM Model ... )
row_count = 
model_query
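
For completeness, a minimal sketch of that "slow way" count (assuming 
model_query is an ORM Query and s is the Session, as in the earlier 
snippets):

from sqlalchemy import func

# SELECT count(*) FROM (SELECT ... FROM Model ...) AS anon_1;
# order_by(None) strips any ORDER BY, which the count doesn't need
row_count = (
    s.query(func.count())
    .select_from(model_query.order_by(None).subquery())
    .scalar()
)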

[sqlalchemy] prefix_with for queries with that eager load relationships

2016-02-23 Thread Daniel Kraus
Hi,

I want to use MySQL's `SQL_CALC_FOUND_ROWS`, but when I use 
`query.prefix_with(...)` it fails when the query eager-loads a relationship,
because sqlalchemy does not put the prefix at the beginning.

I'm not sure if I should file a bug report or if it's intended behaviour.
If I'm doing something wrong, how can I prefix the query only once in the 
beginning?

Here is a simple script to demonstrate the error:

https://gist.github.com/dakra/0424086f5837d722bc58

--- cut ---
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import Session, relationship, subqueryload, joinedload
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

e = create_engine("mysql+mysqlconnector://scott:tiger@localhost/test", echo=True)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String(64))
    user_id = Column(Integer, ForeignKey('users.id'))

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    addresses = relationship(Address, backref="user")

Base.metadata.drop_all(e)
Base.metadata.create_all(e)

s = Session(e)

u = User(name='test')
s.add_all([u, Address(email_address='email1', user=u),
           Address(email_address='email2', user=u)])
s.commit()

# this works like expected
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)

# with eager loading (subqueryload or joinedload) it fails
users = s.query(User).prefix_with('SQL_CALC_FOUND_ROWS').options(subqueryload(User.addresses)).all()
row_count = s.execute('SELECT FOUND_ROWS()').scalar()

print(users, row_count)
--- cut ---


If I execute, the relevant error message is:
sqlalchemy.exc.ProgrammingError: (mysql.connector.errors.ProgrammingError) 
1234 (42000): Incorrect usage/placement of 'SQL_CALC_FOUND_ROWS' [SQL: 
'SELECT addresses.id AS addresses_id, addresses.email_address AS 
addresses_email_address, addresses.user_id AS addresses_user_id, 
anon_1.users_id AS anon_1_users_id \nFROM (SELECT SQL_CALC_FOUND_ROWS 
users.id AS users_id \nFROM users) AS anon_1 INNER JOIN addresses ON 
anon_1.users_id = addresses.user_id ORDER BY anon_1.users_id']

Thanks,
  Daniel



[sqlalchemy] How do I start a transaction without regard to whether it is nested or not?

2016-02-19 Thread Daniel Fortunov
Context

I would like to make use of nested transactions using SQLAlchemy (in 
postgres), but I want to write an independent function that doesn't care if 
the session already has a transaction or not -- it can start a nested 
transaction or outer transaction appropriately.


Question

What is the most elegant syntax to begin a transaction, which will be 
either an outer or nested transaction, as appropriate?

So far the best I have seen is: session.begin(nested=session.is_active)

Is there something better than this?


Discussion

I am using postgres, which does not natively support nested transactions, 
so nested transactions are implemented using the SAVEPOINT command.

Specifically, what the above code does (against postgres, at least) is:

* If there is no transaction in progress, start a (non-nested) transaction (BEGIN)
* If there is already a transaction in progress, begin a nested transaction (SAVEPOINT)

Is there something better than this? Ideally I'd like to just call 
session.begin() and have it internally work out if it needs to be an outer 
transaction (BEGIN) or a nested transaction (SAVEPOINT) without me having 
to be explicit about it.
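
A minimal helper wrapping that idiom (a sketch; it assumes an 
autocommit-style Session where calling begin() on an idle session is valid, 
as in the snippet above -- begin_nested() emits SAVEPOINT):

def begin_any(session):
    """Begin a SAVEPOINT if a transaction is already active, else a real BEGIN."""
    if session.is_active:
        return session.begin_nested()
    return session.begin()

# both return a SessionTransaction usable as a context manager:
with begin_any(session):
    do_changes(session)  # hypothetical work; rolls back to the right point on error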


Motivation

I would like to write a function f(session) which takes a SQLAlchemy 
session and makes some changes within a transaction. Also, I want the 
ability to roll back the changes that f() has made (and *only* the changes 
that f() has made).

If the calling code has begun a transaction, and made changes, then I don't 
want a rollback within f() to discard changes that were made by the calling 
code.


Thanks in advance,

Dani



Re: [sqlalchemy] SQL connections rising into the hundreds / thousands

2016-02-04 Thread Daniel Cochran
Hi Michael -- thank you for the reply.

everything looks fine, except what does get_sqlAlchemy() do?   If that 
> creates a new SqlAlchemy instance each time, then you're making new 
> connection pools on every request.   even then, when these are garbage 
> collected the underlying database connections would be closed. 
>
> Using a single engine with your settings you won't get more than 12 
> connection at a time per Python process (note this does *not* include a 
> child subprocess, e.g. via fork() or python multiprocessing). 
>

The get_sql code uses the webapp2 registry, which is basically a store to 
put data that should live across the lifetime of an app's deployment, to 
instantiate what I believe should be a singleton instance of the SqlAlchemy 
class above:

def get_sql():
  """Retrieves a singleton SQL class across HTTP requests."""
  sql = webapp2.get_app().registry.get('sql')
  if not sql:
    connection = 'mysql+mysqldb://{user}@/{db}?charset=utf8&unix_socket={socket}'.format(
        user='root',
        db='logs',
        socket='/cloudsql/myapp.com:myapp')
    sql = sql_alchemy.SqlAlchemy(connection)
    webapp2.get_app().registry['sql'] = sql

  return sql

I'll double check to make sure this is only instantiated once, but I 
believe this is correct:
https://webapp-improved.appspot.com/guide/app.html#registry
 

> not enough information here to tell.  Your program may be creating 
> multiple engines or it may be sharing the engine across process 
> boundaries, either of which will work against the limits within the pool 
> itself.
>

Hmm - I thought that my "ping_connection" method in the Sql Alchemy 
constructor was meant to test for potential failed connections that were 
being shared across process boundaries. Any other ways to diagnose?



[sqlalchemy] SQL connections rising into the hundreds / thousands

2016-02-03 Thread Daniel Cochran
I recently switched to SQL Alchemy to take advantage of connection pooling, 
but I notice that my SQL server will have anywhere from 30-60 open 
connections throughout the day ("threads_connected"), and sometimes it 
randomly climbs up to the max of 2000 and just completely falls over. This 
climb has nothing to do with queries per second, though; sometimes it will 
happen at 2am when traffic is very low.

The obvious culprit to the high # of connections is that one of my API 
endpoints that connects to the SQL instance is not closing connections, 
though I'm not sure where that would be happening. Here is my configuration:

class SqlAlchemy(object):
  """A light abstraction for SQL Alchemy."""

  def __init__(self, connection):
    base = automap_base()
    self.engine = sql.create_engine(connection,
                                    max_overflow=2,
                                    pool_size=10,
                                    pool_recycle=120,
                                    pool_timeout=15)

    base.prepare(self.engine, reflect=True)

    # These models reflect the tables in your database.
    self.models = base.classes
    self.Session = scoped_session(sessionmaker(bind=self.engine))

    # Ping the connection to make sure it's valid. Per:
    # http://docs.sqlalchemy.org/en/latest/core/pooling.html
    @sql.event.listens_for(self.engine, 'engine_connect')
    def ping_connection(connection, branch):  # pylint: disable=unused-variable
      if branch:
        # "branch" refers to a sub-connection of a connection,
        # we don't want to bother pinging on these.
        return

      try:
        # run a SELECT 1.   use a core select() so that
        # the SELECT of a scalar value without a table is
        # appropriately formatted for the backend
        connection.scalar(sql.select([1]))
      except sql.exc.DBAPIError as err:
        # catch SQLAlchemy's DBAPIError, which is a wrapper
        # for the DBAPI's exception.  It includes a .connection_invalidated
        # attribute which specifies if this connection is a "disconnect"
        # condition, which is based on inspection of the original exception
        # by the dialect in use.
        if err.connection_invalidated:
          # run the same SELECT again - the connection will re-validate
          # itself and establish a new connection.  The disconnect detection
          # here also causes the whole connection pool to be invalidated
          # so that all stale connections are discarded.
          connection.scalar(sql.select([1]))
        else:
          raise

  def insert(self, table, data):
    """Insert one record into a SQL table.

    Args:
      table: A string representing a table name that exists in self.engine.
      data: A dict of key/value pairs representing the inserted record.

    Returns:
      A dict of the newly entered log.
    """
    session = self.get_session()
    record_as_dict = {}

    try:
      model = getattr(self.models, table)
      record = model(**data)
      session.add(record)
      session.commit()
      record_as_dict = to_dict(record)
    except Exception, err:
      raise Exception(err)
    finally:
      self.close_session()

    return record_as_dict

  def close_session(self):
    """Close a SQL Alchemy Session."""
    return self.Session.remove()

  def get_session(self):
    """Returns a SQL Alchemy Session."""
    return self.Session()


In my API, I then would do something like this on POST, where 
get_sqlAlchemy returns a singleton instance of the class above via the 
webapp2 registry:

  def post(self):
    record = get_sqlAlchemy().insert('MyTable', self.request.body)
    self.response.set_status(201)
    self.response.out.write(json.EncodeJSON(record))

I may just be wildly misunderstanding how connection pooling works, but my 
desire would be that no more than 12 connections are ever open with the 
server, as I'm using Google App Engine and I believe the max concurrent 
connections is 12:

https://cloud.google.com/appengine/docs/python/cloud-sql/#Python_Connect_to_your_database

How does my configuration above fail to limit the connections?



[sqlalchemy] Using bindparams in a lazily loaded relationship

2015-12-04 Thread Daniel Thul
I configured a relationship to use a bindparam as part of its primaryjoin 
condition.
This works as long as the relationship is loaded eagerly. As soon as I 
switch to lazy loading it won't resolve the relationship properly.
I guess this is because the lazy load "forgets" the params that have been 
specified in the original query.
Is this by design and is there something I can do about it?



[sqlalchemy] Lazier one-to-many relationships

2015-11-29 Thread Daniel Grace
I'm working on developing an API that, among other things, allows for 
partial updates or insertions to a parent's children.  One minor problem 
that I've found is that any access to parent.children loads the entire 
collection -- when in many cases it's not necessary to do so.  

For instance, statements like these -- in a one-to-many relationship (or 
potentially some forms of many-to-many relationships, but that's outside my 
current use-case):

# Add a new child to parent.children.  All this really does is set
# child.parent_id, so it doesn't need to know anything about other children
parent.children.append(new_child)  # lists
parent.children.add(new_child)  # sets

# Determine if a particular child belongs to this parent... basically
# returning child.parent is parent or child.parent_id == parent.id
child in parent.children

# Remove a particular child from a parent
parent.children.remove(child)  # Raise KeyError if child not in parent.children

I'd like to find a way to mimic this in a way that still allows for 
'normal' load strategies -- i.e. still allowing "for child in 
parent.children" to work as normal (triggering the load at that time.) 
 Ideally, there'd be a way to get at just the modifications as well.

Any thoughts on how I might accomplish this?
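
One existing option that covers part of this is the "dynamic" loader, which 
gives you a Query-like collection instead of loading it (a sketch; note it 
changes iteration into a SELECT-on-demand, so it isn't a drop-in for plain 
list/set collections):

children = relationship('Child', lazy='dynamic')

# append/remove are tracked without loading the whole collection
parent.children.append(new_child)
parent.children.remove(old_child)

# membership via a targeted query instead of loading everything
is_member = parent.children.filter(Child.id == some_id).count() > 0

# iteration still works, emitting the SELECT at that point
for child in parent.children:
    print(child)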



Re: [sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Daniel Kerkow
On 2015-04-05 01:58, Mike Bayer mike...@zzzcomputing.com wrote:



 On 4/4/15 7:47 PM, Daniel Kerkow wrote:

  Seems I am doing something wrong here:

  select(Record).where(Record.c.properties['key'] == 'value')

  TypeError: '_BoundDeclarativeMeta' object is not iterable

  Did I forget to import something? Any preparations?

 First off, if you are using the select() construct, it accepts a list of
 things to SELECT from, so select([Record]).

 Secondly, Record looks a whole lot like a declarative ORM class due to
 the _BoundDeclarativeMeta line; there's a difference between using table
 metadata and an ORM class; in this case, the ORM class doesn't use the
 .c. attribute.  If you are using ORM classes you'd likely want to use
 session.query(), which is generally more appropriate.

Yes, that is exactly what I am looking for. Sorry for missing that. How
would that actually look in ORM style?
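
For reference, a sketch of the ORM form against the Record model (`.astext` 
compares the JSON value as text; JSONB's contains() renders the @> 
containment operator):

# roughly: SELECT * FROM records WHERE properties->>'color' = 'black'
session.query(Record).filter(
    Record.properties['color'].astext == 'black'
).all()

# containment form, using the JSONB @> operator
session.query(Record).filter(
    Record.properties.contains({'color': 'black'})
).all()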











[sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Daniel Kerkow
Hi,
I am new to SQLAlchemy, doing my first steps with Flask. 
I have the following model using JSONB data type in PostgreSQL. 

The JSON data looks like

{'key1': 'value1', 'key2': 'value2'}

The Docs are relatively sparse regarding this topic.

How can I query the properties column for containing a specific key or 
key=value combination? 
How can I update these? 
Can I get a list of unique keys in all records?

Any help is welcome!

Daniel

class Record(Base):
    """represents single stratigraphic units"""

    # ID of corresponding site:
    site_id = db.Column(db.Integer, db.ForeignKey('public.sites.id'))

    # depth values:
    depth           = db.Column(db.Numeric, nullable=True)
    upper_boundary  = db.Column(db.Numeric, nullable=True)
    lower_boundary  = db.Column(db.Numeric, nullable=True)

    # stratigraphic properties, represented as key/value store
    properties      = db.Column(JSONB)



Re: [sqlalchemy] How to query postgresql JSONB columns?

2015-04-04 Thread Daniel Kerkow
2015-04-05 1:29 GMT+02:00 Mike Bayer mike...@zzzcomputing.com:



 On 4/4/15 7:22 PM, Daniel Kerkow wrote:

 Hi,
 I am new to SQLAlchemy, doing my first steps with Flask.
 I have the following model using JSONB data type in PostgreSQL.

 The JSON data looks like

  {'key1': 'value1', 'key2': 'value2'}

 The Docs are relatively sparse regarding this topic.


 the cases you ask for are all described at :


 http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSON

  How can I query the properties column for containing a specific key or
 key=value combination?


 select([table]).where(table.c.col['key'] == 'somevalue')


Seems I am doing something wrong here:

select(Record).where(Record.c.properties['key']  == 'value')

TypeError: '_BoundDeclarativeMeta' object is not iterable

Did I forget to import something? Any preparations?



  How can I update these?


 update(table).values({table.c.col['key']: 'value'})

  Can I get a list of unique keys in all records?


 I dunno.  What's the Postgresql query you'd like to emit?


In raw psql, the following works:
select *
from records
where properties->>'color' = 'black';






 Any help is welcome!

 Daniel



Re: [sqlalchemy] sql server expression negation -- Statement(s) could not be prepared.

2014-08-07 Thread Daniel Weitzenfeld
Ah ok, I didn't know that SQL Server lacked falseness. Thanks.



On August 7, 2014 at 11:01:50 AM, Michael Bayer (mike...@zzzcomputing.com) 
wrote:


On Aug 7, 2014, at 11:34 AM, dweitzenfeld dweitzenf...@gmail.com wrote:

\nFROM [MemberFacts] \nWHERE ([MemberFacts].[Rtid] LIKE ? + '%%') = 0) AS 
anon_1 LEFT OUTER JOIN [AffinionCodes] AS [AffinionCodes_1] ON 
anon_1.[MemberFacts_Rtid] = [AffinionCodes_1].[Rtid] ('',)


It seems to be the  = 0  that sql server is complaining about.  Is this a 
known issue? 


Well SQL Server doesn’t have a “False” construct, so the best we can do for 
“false” is 0 in SQL server, which isn’t going to work in more elaborate 
situations such as the above.  

If you want to negate things, just use sqlalchemy.not_() or ~:


from sqlalchemy.sql import column
from sqlalchemy import String
from sqlalchemy.dialects import mssql

c = column('x', String)

print (~c.startswith("foo")).compile(dialect=mssql.dialect())


classic$ python test.py
x NOT LIKE :x_1 + '%%'



I've created a silly workaround, by baking the negation in: 

    @hybrid_property
    def is_not_legacy(self):
        return self.Rtid[:4] != ''

    @is_not_legacy.expression
    def is_not_legacy(cls):
        return cls.Rtid.notlike('%')
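
Following that advice, a sketch of negating a positive hybrid at query time 
instead of baking the NOT into the attribute (`is_legacy` here is a 
hypothetical positive counterpart of the property above):

from sqlalchemy import not_

# assuming a hybrid `is_legacy` whose expression returns cls.Rtid.like(...),
# the negation renders as NOT LIKE rather than the unsupported "(expr) = 0"
query = session.query(MemberFacts).filter(not_(MemberFacts.is_legacy))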
  







[sqlalchemy] Lookup table referenced by multiple tables in parent query

2014-05-29 Thread Daniel Grace
I have a table structure something like this:

* Two tables Foo and Bar
* FooBar, a many-to-many association table between Foo and Bar
* A County lookup table referenced by both Foo and Bar
* A Region lookup table referenced by County

I'm frequently running queries that involve both Foo and Bar, and the
County/Region tables joined to both of them.  Since I need to reference the 
same table twice, and the join condition itself is ambiguous, I'm having to 
do something like this:

FooCounty = orm.aliased(County, name='FooCounty')
FooRegion = orm.aliased(Region, name='FooRegion')
BarCounty = orm.aliased(County, name='BarCounty')
BarRegion = orm.aliased(Region, name='BarRegion')

session.query(Foo).join(FooBar).join(Bar).join(FooCounty, Foo.county_id == 
FooCounty.id).join(BarCounty, Bar.county_id == BarCounty.id)   # etc. 

Is there a way I can set something up so that FooCounty and BarCounty are 
essentially permanent aliases of County that will each follow the correct 
relationship, and likewise for FooRegion/BarRegion?  In short, I'd like to 
be able to let SQLAlchemy automatically handle the Join condition and such, 
so I can get the above snippet down to a much simpler...

session.query(Foo).join(FooBar).join(Bar).join(FooCounty).join(BarCounty) 
 # etc.
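(A hedged sketch of one way to get close to this: pass the relationship
attribute as the ON clause when joining to an alias, so SQLAlchemy derives
the join condition itself.  This assumes Foo.county and Bar.county
relationships to County are configured; the names are from the snippet above.)

FooCounty = orm.aliased(County)
BarCounty = orm.aliased(County)

# the relationship attribute supplies the ON clause for the aliased target
session.query(Foo).join(FooBar).join(Bar) \
    .join(FooCounty, Foo.county) \
    .join(BarCounty, Bar.county)   # etc.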

Thanks in advance!



Re: [sqlalchemy] Mixing ORM and set-returning-functions (e.g. generate_series)

2013-11-14 Thread Daniel Grace
On Wednesday, November 13, 2013 4:48:47 PM UTC-8, Michael Bayer wrote:
[...]

 see attached for demo, should get you started


Thanks!  That was a tremendous help.  I've attached the version I ended up 
working with, using your code as a base.

The main changes, besides some names, are:

   - Changed 'output' to 'columns', which makes more sense.
   - If 'columns' contains strings, produce ColumnClause(name) for each 
   string as convenient shorthand.
   - Added column_map(self, iterable) as a shorthand to repeating 
   column_alias(self, from_, to) multiple times.
   - Add anonymous aliasing support like other aliases have.

This still doesn't entirely support everything that Postgres considers legal
syntax -- e.g. these are both legal but can't currently be represented:

SELECT generate_series FROM generate_series(1, 100)
SELECT number FROM generate_series(1, 100) AS number

I'd kind of like to figure out how to support that just for
completeness' sake (my idea was to trigger that behavior if no columns were
specified), but I think it's a bad convention and would never use it in my 
own SQL anyways -- so it's not a priority for me.

The more interesting problem, I think, is to be able to determine the 
output column type based on input -- generate_series() can accept 
timestamps (and timestamp with time zones) in addition to ints/bigints, and 
the return value will be the same as the input parameters.  That said, I'm 
not entirely certain what benefit SQLAlchemy knowing the output column type 
provides.

I'm also not entirely happy that sqlalchemy.sql.expression.Alias() doesn't 
handle this correctly, but I'm under the impression that Aliases (as 
opposed to lowercase-a aliases) shouldn't be constructed directly anyways.

-- Daniel



Re: [sqlalchemy] Mixing ORM and set-returning-functions (e.g. generate_series)

2013-11-14 Thread Daniel Grace
Gmail warns you before posting something with "I've attached" and no
attachment, but apparently Google Groups does not.

from sqlalchemy.sql.expression import FromClause, ColumnClause, _anonymous_label, _truncated_label, func
from sqlalchemy.ext.compiler import compiles

__all__ = ['PGSetReturningFunction', 'srfunc', 'compile_srf']


class PGSetReturningFunction(FromClause):
    named_with_column = True

    def __init__(self, func, columns, alias=None):
        self.func = func
        self._column_args = [ColumnClause(c) if isinstance(c, str) else c for c in columns]
        self.name = alias or func.name
        self._from_objects = [self]

    def alias(self, name=None):
        if name is None:
            if self.named_with_column:
                name = getattr(self, 'name', None)
            name = _anonymous_label('%%(%d %s)s' % (id(self), name or 'anon'))
        return self.__class__(self.func, self.c, name)

    def column_alias(self, from_, to):
        pg = self.__class__(self.func, self.c, self.name)
        pg.c  # touch .c to populate the column collection
        pg._columns[to] = pg._columns[from_]._make_proxy(pg, to)
        pg._columns.remove(pg.c[from_])
        return pg

    def column_map(self, iterable):
        pg = self.__class__(self.func, self.c, self.name)
        pg.c  # touch .c to populate the column collection
        _new_columns = dict()

        for from_, to in iterable:
            _new_columns[to] = pg._columns[from_]._make_proxy(pg, to)
            pg._columns.remove(pg.c[from_])
        pg._columns.update(_new_columns)
        return pg

    def _populate_column_collection(self):
        for c in self._column_args:
            c._make_proxy(self)

    class Generator(object):
        def __getattr__(self, key):
            def create_function(*args, **kw):
                columns = kw['columns']
                return PGSetReturningFunction(getattr(func, key)(*args), columns)
            return create_function


srfunc = PGSetReturningFunction.Generator()


@compiles(PGSetReturningFunction)
def compile_srf(element, compiler, **kw):
    col_kwargs = kw.copy()
    col_kwargs['include_table'] = False
    alias = element.name
    if isinstance(alias, _truncated_label):
        alias = compiler._truncated_identifier(alias, alias)

    return "{function} AS {alias}({columns})".format(
        function=compiler.process(element.func, **kw),
        alias=alias,
        columns=", ".join(compiler.process(expr, **col_kwargs) for expr in element.c))


series = srfunc.generate_series(1, 10, columns=['value']).alias().column_map({'value': 'v'}.items())
print(str(series))


Re: [sqlalchemy] Mixing ORM and set-returning-functions (e.g. generate_series)

2013-11-13 Thread Daniel Grace
I know it's been a few days, but I've finally had time to actually get back
to this.

On Thursday, November 7, 2013 6:18:57 PM UTC-8, Michael Bayer wrote:

 On Nov 7, 2013, at 6:46 PM, Daniel Grace thisgen...@gmail.com wrote:

[...]

 that’s basically equivalent.   If you want the identical syntax with that 
 function, the @compiles construct would be a start, but you’d be probably 
 making a custom FromClause subclass which is a little involved.


I'd think that a generic form of this might actually be a way to go, but 
I'm not (yet) familiar enough with sqlalchemy's internals to make a lot of 
headway in designing any 'sane' solution.

It's worth noting PostgreSQL supports column-level aliasing on anything, 
not just a set-returning-function, such that the following is valid:

CREATE TABLE foo ( id SERIAL NOT NULL);
INSERT INTO foo ( id ) VALUES (1), (2), (3);
SELECT bar.baz FROM foo AS bar(baz);

That said, it's not cleanly usable in most cases -- you can't alias a 
column by name, so you have to know the exact order columns appear in 
(which you might not know if you didn't reflect nor create the table -- a 
column definition that doesn't appear in the model might be in the table, 
for instance).

That said, it'd seem like a generic Set Returning Function implementation 
would need to do the following:
* Subclass from GenericFunction to track data types of input values and 
whatever voodoo is required for bind parameters.  (I think this also gives 
us the behavior of SELECT generate_series(...) being treated like
SELECT ... FROM generate_series(...) for free)
* Subclass from FromClause or perhaps even Alias to track types and names 
of output values in self.c
* Have an alternate/extended 'alias' implementation capable of defining 
aliases on a per-column level, so AS alias becomes AS alias(column_alias, 
...)

So usage might be something like:

series = sql.srfunc.generate_series(1, 10, output=[Column('value', Integer())])
foo = series.alias('foo').column_alias('value', 'v')
bar = foo.alias('bar')  # I'm assuming that aliasing an existing alias Does
The Right Thing(tm), I've never tried it.

session.query(series.c.value) 
# SELECT generate_series.value FROM generate_series(1, 10) AS 
generate_series(value)

session.query(foo.c.v)
# SELECT foo.v FROM generate_series(1, 10) AS foo(v)

session.query(foo.c.v + bar.c.v).select_from(foo).join(bar, foo.c.v < bar.c.v)
# SELECT foo.v + bar.v FROM generate_series(1, 10) AS foo(v) JOIN
# generate_series(1, 10) AS bar(v) ON foo.v < bar.v;



Thoughts?

-- Daniel 



[sqlalchemy] Mixing ORM and set-returning-functions (e.g. generate_series)

2013-11-07 Thread Daniel Grace
I've been using the ORM exclusively for my project and I've run into a 
particular case that has me stumped:

I'm trying to find a way to generate SQL that essentially looks like this, 
using session.query() as a base:  (Running on PostgreSQL)

SELECT model.*, series.number
FROM model, generate_series(1, 100) AS series(number)
WHERE some_where_clauses AND series.number < some_condition

The closest solutions I've found all have one or more of the following 
problems:

   - They use non-ORM methods (e.g. select() instead of query() and a lot of
   text())
   - They end up wrapping generate_series in a subselect -- SELECT ...
   FROM (SELECT generate_series(...)).  This is technically legal(*)
   - They don't give me a way to properly alias both the table and its
   columns (in the case of
   session.query(...).select_from(sql.func.generate_series(...)))

Any ideas what I should be doing here?  FWIW, it seems like this similar 
situation could potentially crop up if using a RDBMS that supported 
parameterized views (Postgres doesn't, yet) -- though at least that 
situation would (probably) know what the output columns are from reflection.
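
(Archive note, hedged: much later SQLAlchemy releases (1.4+) grew first-class
support for exactly this via table-valued functions; a sketch, with
illustrative names:)

from sqlalchemy import func, select

series = func.generate_series(1, 100).table_valued("number").render_derived()
# render_derived() emits the AS alias(number) column-aliasing form
stmt = select(Model, series.c.number).where(series.c.number < 50)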

-- Daniel



Re: [sqlalchemy] Getting the unmodified version of an object without losing changes

2013-10-24 Thread Daniel Grace
The problem I was encountering is that history doesn't make it easy to 
differentiate between collections and non-collections very well:

inspect(obj).attrs.intcolumn.history.unchanged returns [intval], rather
than intval.

inspect(obj).attrs.children.history.unchanged returns [childobj]

It'd be handy if I could just do something like:

original = get_original(self)
if original.value != self.value:
    pass

With some sufficiently-robust-but-should-be-simple implementation of 
get_history.

The main problem with Jonathan's approach is the results will be 
inconsistent if any of the attributes are collections.

In my actual case at the moment, I know every attribute I'm going to look
at is a scalar, so inspect(self).attrs.target_attribute.history.non_added()[0]
works (provided this was loaded from the database at all and isn't a new
instance of the model).  The problem is that I can't easily write a
general case for this, because the [0] is required for scalars, but will
completely break when applied to any attribute that's a collection.

Michael: By "losing changes", I mean that I know it's easily possible to
get to the unmodified version of an object via session.rollback() or
session.expire() -- but then I lose the pending (not yet committed) changes
that I'm trying to compare against.  I wasn't referring to autoflush.
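
(A hedged sketch of the kind of helper being wished for here, for the
scalar case only; History fields are tuples, so the pre-change value, if
any, sits in unchanged or deleted:)

from sqlalchemy import inspect

def original_value(obj, attr, default=None):
    # sketch: pre-change value of a scalar attribute, or default
    hist = inspect(obj).attrs[attr].history
    non_added = tuple(hist.unchanged) + tuple(hist.deleted)
    return non_added[0] if non_added else default

# usage sketch: delta = self.value - original_value(self, 'value', 0)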



[sqlalchemy] Getting the unmodified version of an object without losing changes

2013-10-23 Thread Daniel Grace
I have a situation where an object in the ORM performs some calculations 
and other assorted checks.  One of these situations is checking to see the 
difference between the original value of an attribute and the current value 
-- in particular, some of the validation going on should prohibit a change 
that causes a particular check to fail, but only if that check wasn't 
already failing (e.g. because of some legacy data).

I see that I can use inspect() to get the history of attributes, but that 
quickly gets unwieldy when working with a large number of attributes.  What 
I'd love to be able to do is something like:

def do_something(self):
    orig = get_original_version(self)
    delta = self.value - orig.value
    # ...

Is there a simple way to accomplish this?  Right now I'm doing something 
like:

orig_value, = inspect(self).attrs.value.history.non_added() or
(default_value, )

which seems messy and only works for scalar values.

-- Daniel



[sqlalchemy] Running SQL Alchemy on a QNX RTOS Machine

2013-09-18 Thread Daniel Weissberger


I am considering using SQLAlchemy for ORM. I have a couple of concerns up
front I wanted to share with you.

The machine running Python will be a QNX RTOS machine.

Currently I have 32-bit MS Access installed on a 64-bit machine, giving me a
setup error with the pyodbc DBAPI (I had to use pypyodbc; this was just a
temporary setup to test the architecture on my PC -- eventually I need to
migrate to the QNX machine). Is this DBAPI (pypyodbc) supported/recommended?

It seems that there is no SQLAlchemy support (dialect documentation) for MS
Access, is this correct? Is it not possible to use an MS Access DB with
SQLAlchemy?

Given my system (ideally I would like to host the database on the QNX
machine), could anyone suggest the following:

1)  A recommended database platform (if there is a supported DBAPI for
MS Access, this is preferred)
2)  Given this database platform, a recommended DBAPI
3)  Any other tips and help you can offer in getting this set up would
be GREATLY appreciated.



[sqlalchemy] Having __mapper_args__ order by columns defined in a superclass?

2013-06-10 Thread Daniel Grace
I'm converting some existing code from reflecting tables in the database to 
actually defining all of the columns.  One of my primary reasons for doing 
this is being able to apply mixins and such to table definitions -- for 
instance, there's numerous lookup tables which all have id and name 
columns.

My problems arise when trying to declare an order_by that references 
columns defined in a superclass -- e.g., wanting to do something like this:

class LookupTableMixin():
    def __str__(self):
        return self.name

class LookupTable(Model, LookupTableMixin):
    __abstract__ = True
    id = Column(INTEGER(), primary_key=True)
    name = Column(TEXT(), nullable=False)

class SortedLookupTable(LookupTable):
    __abstract__ = True
    __mapper_args__ = { 'order_by': [ name ] }


I'm trying to determine the correct way to do this:

1. As written, it doesn't work since 'name' is not defined yet.
2. Using 'name' in quotes works, up until the point where the
automatic sort causes a conflict, since it refers to any column named 'name'
(ORDER BY name, rather than ORDER BY __tablename__.name)
3. No form of callable that I've tried seems to work 
4. Simply redefining 'id' and 'name' (e.g. copy-pasting the column 
definitions from LookupTable to SortedLookupTable) ultimately causes the 
following:

CompileError("Cannot compile Column object until its 'name' is assigned.",)


Oddly enough, this only happens if __mapper_args__ is also present (and 
ordering by the column)

5. __mapper_args__ does not appear to work with declared_attr, such that:

@declared_attr
def __mapper_args__(cls):
    return { 'order_by': [ cls.name ] }

causes the same error as above.

What should I be doing here to get this to work?

Thanks in advance,

-- Daniel





Re: [sqlalchemy] Having __mapper_args__ order by columns defined in a superclass?

2013-06-10 Thread Daniel Grace


On Monday, June 10, 2013 11:13:09 AM UTC-7, Michael Bayer wrote:


 [...]


 Using 0.8 you can pull the actual copied column name from the __table__:

 class SortedLookupTable(LookupTable):
     __abstract__ = True

     @declared_attr
     def __mapper_args__(cls):
         return {'order_by': [cls.__table__.c.name]}

 cls.name is the un-copied Column object still associated with the 
 LookupTable abstract class, ready for use on other mapped classes.This 
 pattern is discussed in this section: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/extensions/declarative.html#resolving-column-conflicts.


I don't know why I didn't think to try that (one of my attempts did involve 
__table__, but not there).  That fixed it, thanks! 

I probably overlooked the Resolving Column Conflicts bit because I wasn't 
trying to redeclare a column at all originally, so didn't think it applied 
(plus I wasn't yet sure if @declared_attr worked on __mapper_args__ or not 
at that point)


 Also as far as the order_by mapper argument, there's no plans to remove 
 it, but it's a legacy argument that doesn't work well/at all with many 
 queries (such as queries against multiple entities, queries against 
 individual columns).

 It's handy in cases of lookup tables and other similar places -- but I
have run into cases where it breaks as well (unions, joins, etc.).  It
would be handy if it could be quietly discarded in some places where its
presence causes things to break anyway, but if it's legacy I should
probably try to phase out of using it and not expect any changes on that
front.

-- Daniel
 





Re: [sqlalchemy] textcolumn.contains() escaping, the lack of icontains(), and possible bug?

2013-05-13 Thread Daniel Grace
Good to hear!

I took a look at #2694 and it seems that using column.contains(other,
autoescape=True) might get wordy fairly quickly when -- at least in new
applications -- it would be a handy default.  While it's probably not
particularly feasible, it'd be handy if the default for autoescape could
somehow be set at an engine/metadata/etc. level.
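
(For the archive, hedged: the autoescape flag from #2694 landed in
SQLAlchemy 1.2.  A sketch of what it renders; the exact SQL varies by
dialect:)

from sqlalchemy.sql import column
from sqlalchemy import String

c = column('x', String)
# autoescape escapes %, _ and the escape char inside the bound value
# and appends an ESCAPE clause, roughly:
#   x LIKE '%' || :x_1 || '%' ESCAPE '/'
print(c.contains('5% off', autoescape=True))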

-- Daniel





[sqlalchemy] Avoid adding transient object to collection?

2013-04-26 Thread Daniel Grace
I'm still fairly new at sqlalchemy, and am still occasionally being 
surprised by how sometimes-too-clever it is.  I ran into one of those 
moments today.

I have something that looks like this.  (All the tables are reflected.)

class Parent(Model, SimpleLookupTable):
    __table__ = Table('parent')
    # ...

class Child(Model):
    __table__ = Table('child')
    parent = relationship(Parent, lazy='joined',
                          backref=backref('children', lazy='lazy'))
    # ...


As part of my program design, I'm sometimes creating a partially-populated 
Child() that serves as a 'template' for a child that might be created 
later, but isn't at this point.

template = Child(parent = parent)
do_lots_of_stuff()

I was a bit surprised to find that at this point, parent.children already 
contains the new Child().  This makes sense when thinking about it under 
normal circumstances... but in this particular case, I don't want this to 
be part of the collection until it's actually added to the session (which I
can confirm it's not; I've tried both expunging and make_transient()).

What's the best way to accomplish this?  The documented behavior of 
cascade_backrefs=False is almost, but not quite, what I need.  
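
(One hedged workaround sketch: assign the foreign-key column rather than the
relationship attribute, so no backref event fires and parent.children stays
untouched.  This assumes the FK is mapped as Child.parent_id and that parent
already has its id:)

template = Child()
template.parent_id = parent.id   # plain column assignment, no backref append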

-- Daniel Grace





[sqlalchemy] Default table aliasing (or: avoiding conflicts between foo.type_id and foo_type.id)

2013-04-12 Thread Daniel Grace
I ran into an issue today due to the following scenario:

table foo (model class Foo) has a column type_id, which refers to the 
id column of table foo_type (model FooType)

session.query(Foo, FooType) doesn't work, since the generated labels alias
both foo.type_id and foo_type.id to foo_type_id, and thus the generated
SQL duplicates the column labels and causes an exception similar to:

InvalidRequestError: Ambiguous column name 'foo_type.id' in result set! try
'use_labels' option on select statement


There's two possible workarounds for this that I can see:

   1. Alias one of the tables to something else so there is no conflict
   (e.g. foo_type to ft; sketched below).  Drawback: this needs to happen in
   all situations where this might happen.
   2. Refactor the database in some form such that the resulting column
   name combinations don't happen.  Drawback: altering the existing db schema
   may not be feasible.
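
(A sketch of workaround 1, hedged; naming the alias moves its columns into a
separate label namespace, so foo_type_id no longer collides:)

from sqlalchemy.orm import aliased

ft = aliased(FooType, name='ft')
# labels now come out as foo_type_id vs. ft_id
session.query(Foo, ft).join(ft, Foo.type_id == ft.id)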

It would be handy, however, to be able to do one of these options instead 
to avoid the problem from even happening:

   1. Set a 'default alias' for a table or column at the mapper/model 
   level.  Maybe class FooType can be configured with a default alias of ft, 
   or Foo.id can have a default alias of foo_type_id (which would then make 
   it foo_foo_type_id after applying the table labelling)
   2. Set/change the default separator between a table name and a column 
   name when generating column aliases.  I could work around this if, as an 
   application-level default, all column labels were table__column (two 
   underscores) instead of table_column.  Maybe the separator could be a
   callable -- (lambda tablename, columnname: tablename + "__" + columnname)

Is there some other way to solve this that I might be missing?

If not, might I humbly submit this as a feature request for an upcoming 
sqlalchemy version? ;)

-- Daniel





[sqlalchemy] Re: Default table aliasing (or: avoiding conflicts between foo.type_id and foo_type.id)

2013-04-12 Thread Daniel Grace
I don't know how I managed to mangle that and thought I proofread my post, 
but that second bit should have read:

session.query(Foo, FooType) doesn't work, since the label generation 
aliases both foo.type_id and foo_type.id to foo_type_id and thus 
causes an exception similar to : ...





Re: [sqlalchemy] Explicit CROSS JOIN; or, left joining the cartesian product of two unrelated tables to a third

2013-04-05 Thread Daniel Grace
I wasn't having any luck doing the equivalent of "inner join ... on true"
earlier, but apparently I was just doing it wrong.  That works.  (In fact,
I can just use literal(True) instead of literal(1) == 1.)

It still feels a little hackish, but at least it works.  Thanks!
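
(Pulling the thread together, a hedged sketch of the final shape; model_a,
model_b and model_c are the placeholders from the original post:)

from sqlalchemy import literal, and_

q = (session.query(model_a, model_b)
     # literal(True) as the ON clause turns the INNER JOIN into a
     # cartesian product, i.e. the CROSS JOIN we wanted
     .join(model_b, literal(True))
     .outerjoin(model_c, and_(model_c.a_id == model_a.id,
                              model_c.b_id == model_b.id)))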

-- Daniel

On Friday, April 5, 2013 10:07:55 AM UTC-7, Michael Bayer wrote:

 according to wikipedia, a CROSS JOIN is just a straight up cartesian 
 product.  So why not do a JOIN b on 1 == 1 ?

 query(model_a).join(model_b, literal(1) == 1).outerjoin(model_c, ...)




 On Apr 5, 2013, at 12:52 PM, Daniel Grace thisgen...@gmail.com wrote:

 I've been banging my head against the wall for hours trying to figure out 
 how to do this in sqlalchemy.  After discussing on #sqlalchemy, it was 
 suggested I ask here so here goes.

 I'm trying to get SQL roughly equivalent to the below, but using the ORM:

 SELECT ... FROM a CROSS JOIN b LEFT JOIN c ON c.a_id=a.id AND c.b_id=b.id



 What I can't do is something like this:

 # session.query(model_a, model_b).outerjoin(model_c, (model_c.a_id ==
 model_a.id) & (model_c.b_id == model_b.id))

 because the resultant SQL becomes this:

 SELECT ... FROM a, b LEFT JOIN c ON c.a_id=a.id AND c.b_id=b.id

 which fails (in Postgres and, IIRC, newer MySQL versions) due to explicit 
 joins' binding tighter than the comma operator:

 ERROR: invalid reference to FROM-clause entry for table "a"
 SQL state: 42P01
 Hint: There is an entry for table "a", but it cannot be referenced from
 this part of the query.
 Character: 342



 A few notes:


 I'm using the ORM, but I can't actually use the Many to Many/Association 
 Object patterns here because 'c' actually depends on three tables, not two.  
 (In this particular case, I'm only interested in cases of c related to one 
 particular entry in the third table, so there's no need to query on that 
 particular relationship).


 I can't merely rewrite the query to "c INNER JOIN a ... INNER JOIN b" because
 I'm also interested in the combinations of (a, b) for which there are no c.
 (I could technically union it against another query that uses NOT EXISTS, but
 this smells of bad hackery.)


 If I could get parentheses around the "a, b", the query would work as intended:
 SELECT ... FROM (a, b) LEFT JOIN c ON c.a_id=a.id AND c.b_id=b.id


 Any ideas on how to solve this?


 -- Daniel Grace











[sqlalchemy] Connecting to ms sql db on Windows Server 2008 with pyodbc and python 3.3

2013-03-07 Thread Daniel Kraus
Hi,

when I try to connect with sqlalchemy and mssql+pyodbc I get this exeption:
TypeError: The first argument to execute must be a string or unicode 
query.

It works if I only use pyodbc.

E.g.
>>> conn = pyodbc.connect('DRIVER={SQL Server};Server=127.0.0.1;Database=BOM;UID=guest;PWD=guest')
>>> cursor = conn.cursor()
>>> cursor.execute("select * from Objects")
>>> result = cursor.fetchone()

-> now `result` is a result from the database, as it should be.

When I try to connect with sqlalchemy:

>>> engine = sqlalchemy.create_engine('mssql+pyodbc://guest:guest@127.0.0.1/BOM')
>>> engine.execute("select * from Objects")

-> the above TypeError exception (complete
traceback: http://pastebin.com/PHxbynFt )

Not sure if this bug is related: http://www.sqlalchemy.org/trac/ticket/2355

Any ideas what could be wrong and how to fix it?
I would really like to stick with python3 ;)
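
(One hedged archive note: with hostname-style URLs, mssql+pyodbc accepts the
ODBC driver name explicitly in the query string, which removes one source of
driver-lookup surprises; the driver name below is whatever your ODBC
configuration calls it:)

import sqlalchemy

engine = sqlalchemy.create_engine(
    'mssql+pyodbc://guest:guest@127.0.0.1/BOM?driver=SQL+Server'
)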

Thanks,
  Daniel





[sqlalchemy] SQLAlchemy generates multiple connections

2012-11-27 Thread Daniel Rentsch
 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py, 
line 224, in connect
return _ConnectionFairy(self).checkout()
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py, 
line 387, in __init__
rec = self._connection_record = pool._do_get()
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py, 
line 911, in _do_get
raise AssertionError(connection is already checked out + suffix)
sqlalchemy.exc.StatementError: connection is already checked out at:
  File sql_bug.py, line 97, in module
session.query(User).filter_by(country=spain, id=1).one()
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2184, in one
ret = list(self)
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2227, in __iter__
return self._execute_and_instances(context)
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2241, in _execute_and_instances
close_with_result=True)
  File sql_bug.py, line 18, in debug_connection_from_session
conn = m(*args, **kwargs)
 (original cause: AssertionError: connection is already checked out at:
  File sql_bug.py, line 97, in module
session.query(User).filter_by(country=spain, id=1).one()
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2184, in one
ret = list(self)
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2227, in __iter__
return self._execute_and_instances(context)
  File 
/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 
line 2241, in _execute_and_instances
close_with_result=True)
  File sql_bug.py, line 18, in debug_connection_from_session
conn = m(*args, **kwargs)) 'SELECT tf_user.id AS tf_user_id, 
tf_user.tf_country.id AS tf_user_tf_country.id \nFROM tf_user \nWHERE ? 
= tf_user.tf_country.id AND tf_user.id = ?' [immutabledict({})]






The following code (see also the attachment) generates the 
described behavior:

# -*- coding: utf-8 -*-
from pprint import pprint
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm.query import Query
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.pool import AssertionPool
from sqlalchemy.schema import MetaData, Table, Column, ForeignKey
from sqlalchemy.types import Integer
import os


if __name__ == "__main__":
    # boilerplate code start
    # just to debug the connections returned by Query
    m = Query._connection_from_session
    def debug_connection_from_session(*args, **kwargs):
        conn = m(*args, **kwargs)
        pprint(conn.connection.connection)
        return conn
    Query._connection_from_session = debug_connection_from_session

    path = "/tmp/test.db"
    try:
        os.remove(path)
    except:
        pass
    engine = create_engine("sqlite:///" + path)
    conn = engine.connect()
    metadata = MetaData()
    metadata.bind = engine

    user_table = Table(
        'tf_user', metadata,
        Column('id', Integer, primary_key=True),
        Column('tf_country.id', None, ForeignKey('tf_country.id')))

    country_table = Table(
        'tf_country', metadata,
        Column('id', Integer, primary_key=True))

    class User(object): pass
    class Country(object): pass

    mapper(Country, country_table)
    mapper(User, user_table, properties=dict(country=relation(Country,
        uselist=False)))

    metadata.create_all(engine)

    Session = sessionmaker(bind=engine, autocommit=True)
    session = Session()

    peter = User()
    peter.id = 1

    spain = Country()
    spain.id = 2

    peter.country = spain

    session.add(peter)
    session.add(spain)

    session.flush()

    # close and clean everything
    session.close()
    conn.close()
    del session
    del engine
    del metadata
    del peter
    del spain
    del conn

    # /boilerplate code end ### ... now the actual problem starts ...

    # this works without throwing an exception but generates multiple
    # connections
    #engine = create_engine("sqlite:///" + path)

    # this will generate an exception at LINE 97, because it generates
    # multiple connections
    engine = create_engine("sqlite:///" + path, poolclass=AssertionPool)

    Session = sessionmaker(bind=engine, autocommit=True)
    session = Session()

    # this works fine
    session.expire_all()
    session.query(User).filter_by(id=1).one()  # this works fine

    spain = session.query(Country).get(2)

    # this results in a new connection
    session.expire_all()
    session.query(User).filter_by(country=spain, id=1).one()  # this
    # generates a new connection



Greetings, Daniel

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Daniel

[sqlalchemy] stucco_evolution 0.4 released

2012-04-19 Thread Daniel Holth
stucco_evolution 0.4 has been released. It is a migration tool for 
SQLAlchemy that attempts to deal with packaged dependencies having their 
own migration scripts. Reading "->" as "depends on":

web application -> openid package -> users/groups package
web application -> users/groups package

When asked to upgrade web application, stucco_evolution will topologically 
sort its dependencies, run all the migrations for the users/groups package, 
then run the migrations for the openid package, and finally run the 
migrations for the web application. If the dependency migrations are 
constrained in what they change, it works. Foreign key relationships can 
point in the direction of the "->" without problems.
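
(A toy sketch of the ordering idea only, not stucco_evolution's API; graphlib
needs Python 3.9+ and run_migrations_for is hypothetical:)

from graphlib import TopologicalSorter

# package -> the set of packages it depends on
deps = {
    "web application": {"openid package", "users/groups package"},
    "openid package": {"users/groups package"},
    "users/groups package": set(),
}

for pkg in TopologicalSorter(deps).static_order():
    run_migrations_for(pkg)   # dependencies come out first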

Let me know if you've tried it, or know of another package that attempts to 
deal with non-monolithic database migration.




Re: [sqlalchemy] stucco_evolution 0.4 released

2012-04-19 Thread Daniel Holth
On Thursday, April 19, 2012 1:43:59 PM UTC-4, Michael Bayer wrote:

 If you've seen my recent talks you saw that I'm a little skeptical of what 
 you're terming non-monolithic databases.Let's say this means, a 
 database with a set of tables maintained by entirely different packages, 
 but with the possibility of dependencies between those tables.If I 
 understand correctly, if we were dealing with sets of tables that didn't 
 have any dependency, you wouldn't need a distributed migration tool, each 
 package would handle migrations for its own set of tables independently, is 
 that right ?

 I think what I need to see here are, what exactly are these packages, 
 outside of the Django community, that actually create their own tables yet 
 encourage dependencies between those tables and your app's own tables ?   I 
 know people are working on them since I see people asking questions about 
 those use cases, but what are they ?  What's the openid and user/groups 
 package you're thinking of here ?


I think you are right, there isn't anything outside of the Django world 
that does this; stucco_evolution is my attempt to bring something like that 
kind of re-use to my non-Django-powered world, and as far as I can tell*, I 
am its only user.

* Koders code search

Admittedly so far the only use case is the users/groups schema where the 
application attaches a separate user profile table, just like Django. The 
relationships always go in only one direction: the dependent schema holds a 
foreign key referencing the dependency schema.

It really is possible to distribute the entire user management interface as 
a separately maintained package, while still being able to get at 
user.profile in your app, but you won't be able to perform migrations that 
change the user table's primary key. It's probably more useful that 
stucco_evolution makes sure the users table is simply created first.

In the openid case, an openid package manages a users_openids table instead 
of adding an openid column to the users table.

So there you have it. It very well may be that there is exactly one use 
case for this package, but who doesn't need to keep track of users and 
groups? Other than that it does a passable job of applying hand-written 
linear database upgrades, and it is short.




Re: [sqlalchemy] stucco_evolution 0.4 released

2012-04-19 Thread Daniel Holth


  So there you have it. It very well may be that there is exactly one use 
 case for this package, but who doesn't need to keep track of users and 
 groups? Other than that it does a passable job of applying hand-written 
 linear database upgrades, and it is short.

 that it is, and the surprise here is repoze.evolution!  yikes!

 so I guess with these two systems, writing the scripts is totally up to 
 the developer, is that right ?  

 There's a lot that alembic could bring into this.   There's all the 
 Alembic ops and dialect-agnostic directives (DDL abstraction).  There's 
 migration modes that either talk directly to a database or generate a SQL 
 Script.There's the tools to create new scripts and a more sophisticated 
 versioning scheme too (repoze.evolution seems to use an incrementing 
 integer).

 It almost seems like Alembic could integrate with repoze.evolution though 
 I'm not sure if that's useful.   You could certainly use Alembic's DDL 
 abstractions directly in this system with a couple of lines if nothing else?


My little project doesn't care about DDL, it just passes your script a 
connection. I didn't consider Alembic when I wrote stucco_evolution in 2010 
but I wouldn't mind using it now. At the time I just needed something that 
didn't scare me. repoze.evolution is fine, it is only 98 lines of code, 17 
of which I actually execute. Its design abstracts out the kind of thing 
that is being upgraded, so you could write another kind of 
EvolutionManager() to upgrade filesystems if you felt like it.




[sqlalchemy] Given a classname, how can I get tablename in my DeclarativeMeta

2012-03-26 Thread Daniel Nouri
I'm trying to do a few (maybe too) clever things to make SQLA
declarative relations less verbose for the most common cases,
especially for when two relations to the same type exist, and
therefore a 'primaryjoin' on the relationship is normally required.
So with kemi's DeclarativeMeta, you can write this and it'll add the
foreign keys and primaryjoins for you:

class Customer(Base):
    name = Column(String)
    billing_address = relationship(Address)
    shipping_address = relationship(Address)

My implementation for this works OK _until_ some class defines its own
table name, e.g.:

class Address(Base):
    __tablename__ = 'myaddresses'

By default, I convert the class name to lower case and use that for a
table name.  For this default case, I'm able to derive the column for
the ForeignKey that I need to create, so with a
relationship(Address) I would assume a ForeignKey("address.id").

What I'm looking for is a more robust way of getting the table name.
When the class itself is passed to relationship(Address), that's not
an issue, but what if I only have the name of the class and that class
is not inside the class registry yet?  Is there maybe an event that I
can use to finalize my ForeignKey target fullnames?

kemi's code has a test that illustrates the problem:
https://github.com/dnouri/kemi/blob/master/kemi/test.py#L101

(Also, any thoughts that you might have on kemi's approach as a whole
-- maybe someone already did this, please let me know.)


Cheers,
Daniel
-- 
http://danielnouri.org




Re: [sqlalchemy] Given a classname, how can I get tablename in my DeclarativeMeta

2012-03-26 Thread Daniel Nouri
On Mon, Mar 26, 2012 at 11:29 PM, Michael Bayer
mike...@zzzcomputing.com wrote:

 On Mar 26, 2012, at 1:53 PM, Daniel Nouri wrote:

 I'm trying to do a few (maybe too) clever things to make SQLA
 declarative relations less verbose for the most common cases,
 especially for when two relations to the same type exist, and
 therefore a 'primaryjoin' on the relationship is normally required.

 this will be simplified in 0.8, you'll be able to pass just foreign_keys in 
 this scenario, no primaryjoin will be needed

Sounds great.

 What I'm looking for is a more robust way of getting the table name.
 When the class itself is passed to relationship(Address), that's not
 an issue, but what if I only have the name of the class and that class
 is not inside the class registry yet?

 You should use the mapper_configured event - see the example at 
 http://techspot.zzzeek.org/2011/05/17/magic-a-new-orm/ .   This example 
 should also start to reveal why I'm -1 on using custom metaclasses - the 
 events should provide a cleaner way to do these things.

Thanks for the pointer.  Wasn't aware of that magic.py.  Quite a funny
blog post, too! :-)

I guess I'll think a bit more to see if I want to use this at all.  It
seems useful to only fill in the blanks, and allow people to
override using standard SQLA constructs (e.g. by specifying the
foreign key themselves).  I'm thinking maybe that's friendlier than
requiring users to learn about a different relationship function.  (Of
course it'll break expectations elsewhere and it's more limited.)

The primaryjoin is the reason I started this.  The system I'm working
with requires the primaryjoin basically everywhere where I make a new
relation (most classes derive from a Node), and it's a bit of a
stumbling block, so I'm happy to hear about the patch in 0.8.
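
(For later readers, a minimal sketch of the mapper_configured hook suggested
above; the handler body is purely illustrative:)

from sqlalchemy import event
from sqlalchemy.orm import mapper

@event.listens_for(mapper, "mapper_configured")
def on_mapper_configured(mapper_, cls):
    # by this point cls.__table__ exists, so the real table name
    # (including any custom __tablename__) is available
    print(cls.__name__, "maps to table", cls.__table__.name)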


Daniel




Re: [sqlalchemy] Only one record with parent_id of None; constraint possible?

2012-03-14 Thread Daniel Nouri
On Tue, Mar 13, 2012 at 5:03 PM, Conor conor.edward.da...@gmail.com wrote:
 On 03/13/2012 09:21 AM, Daniel Nouri wrote:

 I have a node with a parent_id, which may be None (for the root node).
  Can I make a SQL table constraint that says: 'there may only be one
 node with the parent_id of None' (while it's fine if many nodes share
 a parent_id that's not None)?

 Thanks,
 Daniel

 You can use a functional unique index that takes advantage of multiple NULLs
 being allowed in a unique index (beware: older MS SQL versions did not
 follow this behavior):

 CREATE UNIQUE INDEX mytable_parent_id_un ON mytable (CASE WHEN parent_id IS
 NULL THEN 1 ELSE NULL END)

Thanks very much.  This looks like what I want.  I tried this using
the event/DDL, but with both SQLite and Postgres, I'm getting this
error:

OperationalError: (OperationalError) near "CASE": syntax error
u'CREATE UNIQUE INDEX nodes_parent_id_un ON nodes (CASE WHEN parent_id
IS NULL THEN 1 ELSE NULL END)' ()

Did I get the syntax wrong?
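
(Hedged guess at the culprit: PostgreSQL requires an extra set of parentheses
around an expression in CREATE INDEX, and SQLite of this vintage doesn't
support expression indexes at all.  A DDL/event sketch for the Postgres side,
assuming a Table object named nodes:)

from sqlalchemy import DDL, event

event.listen(
    nodes, "after_create",
    DDL("CREATE UNIQUE INDEX nodes_parent_id_un ON nodes "
        "((CASE WHEN parent_id IS NULL THEN 1 ELSE NULL END))")
)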


Daniel




[sqlalchemy] Only one record with parent_id of None; constraint possible?

2012-03-13 Thread Daniel Nouri
I have a node with a parent_id, which may be None (for the root node).
 Can I make a SQL table constraint that says: 'there may only be one
node with the parent_id of None' (while it's fine if many nodes share
a parent_id that's not None)?

Thanks,
Daniel




Re: [sqlalchemy] A hybrid_property with the same name as the attribute

2012-03-02 Thread Daniel Nouri
On Fri, Mar 2, 2012 at 5:49 AM, Fayaz Yusuf Khan fa...@dexetra.com wrote:
 On Thursday 01 Mar 2012 1:27:30 PM Daniel Nouri wrote:
 Maybe I'll just make what seems to be the most common one to override,
 the 'title' attribute, underscore prefixed.  That'll solve the problem
 at hand.

 This choice doesn't seem so extensible. Maybe, you should stick to the
 Special cases aren't special enough to break the rules. rule?

FWIW, I didn't go that way, but instead implemented an update-event
handler that computes and sets the 'title' attribute.  Does the job,
and there's no breaking the rules.  :-)
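
(A hedged sketch of that pattern; the Document model and the title
computation are stand-ins:)

from sqlalchemy import event

@event.listens_for(Document, "before_insert")
@event.listens_for(Document, "before_update")
def compute_title(mapper, connection, target):
    # recompute the stored title from other attributes before the flush
    target.title = target.name.strip().title()   # stand-in computation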


Daniel




Re: [sqlalchemy] A hybrid_property with the same name as the attribute

2012-03-01 Thread Daniel Nouri
Thanks for your answer!

On Wed, Feb 29, 2012 at 9:26 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 29, 2012, at 2:23 PM, Daniel Nouri wrote:
 I don't totally buy that since you can just change the superclass as needed, 
 I'm not sure why that is a big deal.    If you're trying to build 
 my_special_library.MagicBaseClass, where people install my_special_library 
 somewhere and just extend from MagicBaseClass, I'm a little skeptical about 
 that approach.   I don't necessarily think it's a good idea that persistence 
 schemes should be imported by third party libraries.

Yes that's what I'm doing.  My magic base class is my CMS's Node
class.  I like that it's easy for add-on authors to derive from it,
and have the adjacency list etc. all set up already.

Maybe I'll just make what seems to be the most common one to override,
the 'title' attribute, underscore prefixed.  That'll solve the problem
at hand.

Daniel




[sqlalchemy] A hybrid_property with the same name as the attribute

2012-02-29 Thread Daniel Nouri
Hi!

My problem: I have a base class that maps a 'title' attribute.  In a
subclass I want to turn that title into a descriptor (just like the
'hybrid_property' examples really).

I think I want to avoid mapping the attribute as '_title' in the base
class.  Not sure I like the added complexity.  Also, if I were to
prefix, I might just as well make all my attributes start with '_',
because I can't possibly anticipate what deriving classes might want
to turn into a property and what not.  Using a 'column_prefix' might
be the right way, but I guess I'd then have to write hybrid properties
for all my attributes, and they all wouldn't really do anything
interesting.

Is there a way to do this in my deriving class without the need to
touch the super class?  I was thinking about implementing a custom
InstrumentedAttribute, but then I I'm not sure where to hook it in.

Thanks in advance for suggestions and pointers.


Daniel




Re: [sqlalchemy] Parent of the parent (relationship) as a hybrid property

2011-12-06 Thread Daniel Nouri
On Mon, Dec 5, 2011 at 11:30 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 On Dec 5, 2011, at 1:39 PM, Daniel Nouri wrote:
 Thanks for your explanation.  I haven't used hybrid a lot yet and this
 helped me understand what it does and what not.

 Your last example that uses a 'with_transformation', i.e. a protocol
 that I guess would amount to allowing to run the query through an
 arbitrary function, is quite nice to read, and I'd definitely be a
 user.  Then again, there's nice enough alternatives for what I'm
 trying to do, so I'll be fine without.  :-)

 OK, well with_transformation is literally:

 def with_transformation(self, fn):
    return fn(self)

 so there's not much controversy there.  I've added it and wrote up an example 
 of how to combine this with a hybrid comparator, to produce two approaches 
 for your problem (which just makes a nice example).   It looks *interesting*, 
 but I don't know how much this pattern would catch on - if it's one of those 
 patterns that can keep expanding out or not.   Check it out at:

 http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html#building-transformers

Awesome!  Thanks for not only giving me a way to do this, but
basically solve my homework too.  :-)

I have to admit I only understood Comparators now.  And why the need
for a 'with_transformation' on top of the Comparators.  The way
GrandparentTransformer's functionality is put together using
'with_transform' seems very elegant.

The GrandparentTransformer in the second example that separates 'join'
from 'filter' seems interesting and the implementation slightly
easier, but the query part is definitely harder in a way that exposes
maybe a lot of what's going on behind the scenes.

I imagine that novice users would also be tempted to think that they
can do query.filter(Node.grandparent == Node(id=5)).  Since the
join in your example doesn't affect the way that subsequent filters
work, I guess it's not useful for the user (doing the query) to know
if it's a transform or a filter, right?  (I don't pretend I understand
the full scope of this, so wouldn't be surprised if this were a choice
of explicit over implicit or some such.)


-- 
http://danielnouri.org




Re: [sqlalchemy] Parent of the parent (relationship) as a hybrid property

2011-12-06 Thread Daniel Nouri
On Tue, Dec 6, 2011 at 12:38 PM, Daniel Nouri daniel.no...@gmail.com wrote:
 On Mon, Dec 5, 2011 at 11:30 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 On Dec 5, 2011, at 1:39 PM, Daniel Nouri wrote:
 Thanks for your explanation.  I haven't used hybrid a lot yet and this
 helped me understand what it does and what not.

 Your last example that uses a 'with_transformation', i.e. a protocol
 that I guess would amount to allowing to run the query through an
 arbitrary function, is quite nice to read, and I'd definitely be a
 user.  Then again, there's nice enough alternatives for what I'm
 trying to do, so I'll be fine without.  :-)

 OK, well with_transformation is literally:

 def with_transformation(self, fn):
    return fn(self)

 so there's not much controversy there.  I've added it and wrote up an 
 example of how to combine this with a hybrid comparator, to produce two 
 approaches for your problem (which just makes a nice example).   It looks 
 *interesting*, but I don't know how much this pattern would catch on - if 
 it's one of those patterns that can keep expanding out or not.   Check it 
 out at:

 http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html#building-transformers

 Awesome!  Thanks for not only giving me a way to do this, but
 basically solving my homework too.  :-)

 I have to admit I only now understand Comparators, and why there's a
 need for 'with_transformation' on top of them.  The way
 GrandparentTransformer's functionality is put together using
 'with_transformation' seems very elegant.

 The GrandparentTransformer in the second example that separates 'join'
 from 'filter' seems interesting and the implementation slightly
 easier, but the query part is definitely harder in a way that exposes
 maybe a lot of what's going on behind the scenes.

 I imagine that novice users would also be tempted to think that they
 can do query.filter(Node.grandparent == Node(id=5)).  Since the
 join in your example doesn't affect the way that subsequent filters
 work, I guess it's not useful for the user (doing the query) to know
 if it's a transform or a filter, right?  (I don't pretend I understand
 the full scope of this, so wouldn't be surprised if this were a choice
 of explicit over implicit or some such.)

Here's a little generalization of that GrandparentTransformer that I
came up with.

class ParentalTransformer(Comparator):
    def __init__(self, expression, level):
        super(ParentalTransformer, self).__init__(expression)
        self.level = level

    def operate(self, op, other):
        def transform(query):
            cls = Node
            for i in range(self.level - 1):
                cls, prev_cls = aliased(cls), cls
                query = query.filter(cls.id == prev_cls.parent_id)
            return query.filter(op(cls.parent, other))
        return transform
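
A usage sketch (grandparent_node is an assumed Node instance, and the
comparator is constructed directly against the Node class): the ==
comparison returns the transform function, which with_transformation()
then applies to the query:

transform = ParentalTransformer(Node, level=2) == grandparent_node
query = session.query(Node).with_transformation(transform)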

This one tries to do arbitrary depths -- because in my real-life
example I really needed three levels (querying on the great-grandparent).  It
also turned out that when using polymorphic subclasses of Node (my
tree is really different types of Nodes), I had to use Node instead
of self.__clause_element__(), or else the generated query would
assume that all nodes along the way were of the same type as
self.__clause_element__().  (I *think* this would apply to the
example in the docs too if one were to use polymorphic loading.)

Does this look right?  :-)


-- 
http://danielnouri.org




Re: [sqlalchemy] Parent of the parent (relationship) as a hybrid property

2011-12-06 Thread Daniel Nouri
On Tue, Dec 6, 2011 at 3:33 PM, Daniel Nouri daniel.no...@gmail.com wrote:
 Here's a little generalization of that GrandparentTransformer that I
 came up with.

  class ParentalTransformer(Comparator):
    def __init__(self, expression, level):
        super(ParentalTransformer, self).__init__(expression)
        self.level = level

    def operate(self, op, other):
        def transform(query):
            cls = Node
            for i in range(self.level-1):
                cls, prev_cls = aliased(cls), cls
                query = query.filter(cls.id==prev_cls.parent_id)
            return query.filter(op(cls.parent, other))
        return transform

I experimented a little more with the idea of having parametrized
Transformers.  Here is one that expects a list of names instead of a
level.  At this point, my use is not really related to hybrids
anymore, so I may be using the wrong tools.

class FollowTransformer(Comparator):
    def __init__(self, expression, names):
        super(FollowTransformer, self).__init__(expression)
        self.names = names

    def operate(self, op, other):
        def transform(query):
            cls = self.__clause_element__()
            for name in self.names[:-1]:
                cls, prev_cls = aliased(cls), cls
                query = query.join(cls, getattr(prev_cls, name))
            return query.filter(op(getattr(cls, self.names[-1]), other))
        return transform

You can see how the transform is really similar to what I had before.

On the Node class I can now use it like so:

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('node.id'))
    parent = relationship('Node', remote_side=id)

    @classmethod
    def follow(cls, *names):
        return FollowTransformer(cls, names)

And it'll allow me to do queries like this:

session.query(Node).with_transformation(
    Node.follow('parent', 'parent') == grandparent)

session.query(Node).with_transformation(
    Node.follow('parent', 'id') % 2).filter(Node.id < 5)

which I think looks kinda neat.  It reminds me of the
DoubleUnderMagicalQuery: http://pastebin.com/31JAxnfZ


-- 
http://danielnouri.org




[sqlalchemy] Parent of the parent (relationship) as a hybrid property

2011-12-05 Thread Daniel Nouri
Say I have a class that derives from a Node class that has a 'parent'
relationship (using 'parent_id' and 'id' fields):

from sqlalchemy.ext.hybrid import hybrid_property

class MyClass(Node):
    @hybrid_property
    def grandparent(self):
        return self.parent.parent

    @grandparent.expression
    def grandparent(cls):
        pass  # hmm?

How would I go about implementing the grandparent hybrid
expression?  What I have is a boolean clause list with two comparisons
of 'parent_id' and 'id' using aliases (not included in the example).
I was hoping I could return that, but it's not what's expected.

Thanks for your help.
--
http://danielnouri.org




Re: [sqlalchemy] Parent of the parent (relationship) as a hybrid property

2011-12-05 Thread Daniel Nouri
On Mon, Dec 5, 2011 at 6:42 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Dec 5, 2011, at 11:58 AM, Daniel Nouri wrote:

 Say I have a class that derives from a Node class that has a 'parent'
 relationship (using 'parent_id' and 'id' fields):

    from sqlalchemy.ext.hybrid import hybrid_property

    class MyClass(Node):
        @hybrid_property
        def grandparent(self):
            return self.parent.parent

        @grandparent.expression
        def grandparent(cls):
            pass # hmm?

 How would I go about the implementing the grandparent hybrid
 expression?  What I have is a boolean clause list with two comparisons
 of 'parent_id' and 'id' using aliases (not included in the example).
 I was hoping I could return that but it's not what it expects.

 Thanks for your help.

 yeah so this is where @hybrid cannot really do the whole thing.    
 Basically when you use a hybrid on a relationship, the @expression version 
 returns whatever it is that's most useful, given the context that you'd be 
 using it within.  It of course cannot imply the automatic generation of 
 join() or anything like that, which is a manual thing...unless you had 
 @expression return a function that would process a given Query, which is 
 pretty nasty and not really helpful.

 So in this case, I'm not even sure @expression can return a concept that is 
 meaningfully the grandparent - usually the best it can do is return 
 cls.parent, but in this case that directly contradicts that it's called 
 grandparent.

 that would look like:

 q = session.query(MyClass).\
          join(parent_alias, MyClass.grandparent).\
          filter(parent_alias.grandparent == some_object)

 which is nasty because .grandparent is the same as .parent, so don't do that.

  If it returned an aliased(cls).parent, then that alias object would need to 
 be memoized so that you can refer to it multiple times in the query (which is 
 possible).  That would make the usage something like:

 parent_alias = aliased(MyClass)

 q = session.query(MyClass).\
        join(parent_alias, MyClass.parent).\
        join(MyClass.grandparent, parent_alias.parent).\
        filter(MyClass.grandparent == some_node)

 which also makes little sense, so probably don't do that either :).

 another awful thought I had, return a query option that runs it through 
 join(..., aliased=True):

 q = session.query(MyClass).\
         options(MyClass.grandparent).\
         filter_by(id=id_of_my_grandparent)

 I'd need to enhance options() a bit, or provide some other method to Query 
 like query.with_transformation(fn) that runs the query through a given 
 callable.

 the query above is completely opaque and pretty much can do just the one 
 thing it's been set up to do, not very useful :)

 or if one were to really go nuts with that approach and produce a comparator 
 with an __eq__() method, maybe even:

 session.query(MyClass).with_transformation(MyClass.grandparent==some_parent)

 but I consider that to be kind of a novelty.   Or maybe not, 
 with_transformation() might be a handy method to add in any case.

 so yeah @expression.grandparent is not really very useful given the explicit 
 nature of the Query object, unless you seek to build tricks that build up the 
 Query behind the scenes.

Thanks for your explanation.  I haven't used hybrid a lot yet and this
helped me understand what it does and what not.

Your last example that uses a 'with_transformation', i.e. a protocol
that I guess would amount to allowing to run the query through an
arbitrary function, is quite nice to read, and I'd definitely be a
user.  Then again, there's nice enough alternatives for what I'm
trying to do, so I'll be fine without.  :-)


-- 
http://danielnouri.org




[sqlalchemy] Do not check for truth value of instance in _get_from_identity

2011-10-28 Thread Daniel Nouri
The _get_from_identity method of sqlalchemy.orm.query.Query starts
like this:

@classmethod
def _get_from_identity(cls, session, key, passive):
    instance = session.identity_map.get(key)
    if instance:
        # ...

This is problematic for me since my model class is a dict-like node
(where dict access allows reaching down to children by name).  The
truth check on "instance" will therefore trigger instance.keys(), which
is expensive.  Rather than adding a __len__ method in my code, I'd
suggest changing the _get_from_identity implementation to check
"if instance is not None" instead:

@classmethod
def _get_from_identity(cls, session, key, passive):
    instance = session.identity_map.get(key)
    if instance is not None:
        # ...


Thanks
Daniel
-- 
http://danielnouri.org




[sqlalchemy] foreign_keys changing cascade behavior for relationships? users going away.

2011-08-25 Thread Daniel Robbins
Hi there.

I have an application that has three different databases in MySQL (InnoDB).
They are separated into distinct databases because sometimes I need to
reload a database from scratch without impacting the other databases. For
this reason, there are no ForeignKey() relationships defined at table
creation time between these three databases, so I can wipe and reload an
individual database at will.

There are still relationships between the databases, from a SQLAlchemy
perspective. Because I don't use ForeignKey(), I need to specify
foreign_keys in my relationship, as follows:

'user' : relationship(User, foreign_keys=[User.__table__.c.email],
                      primaryjoin=cls.__table__.c.user_email == User.__table__.c.email),

When I have this relationship in my AllocatedHardware object, and I delete
an AllocatedHardware record, this has the unfortunate side-effect of
deleting the user account from the users table. :( When I leave the
foreign_keys parameter out, then SQLAlchemy can't determine the join
condition.

The behavior I want is for the AllocatedHardware record to go away when I
delete it, without impacting my user accounts.

Can this be achieved with tweaking the cascade behavior? It seems like an
unexpected side-effect that specifying foreign_keys will result in cascading
deletes in tables that do not have any db-level foreign key relationships.
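
For illustration, one direction a fix could take (a sketch, not the
poster's actual schema): pointing foreign_keys at the *local* user_email
column rather than the remote User.email makes the relationship
many-to-one toward User, so deleting an AllocatedHardware row should
leave the users table alone:

'user' : relationship(
    User,
    primaryjoin=cls.__table__.c.user_email == User.__table__.c.email,
    # the local column is the one acting as the foreign key:
    foreign_keys=[cls.__table__.c.user_email],
),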

Thanks and Regards,

Daniel




Re: [sqlalchemy] Repetitive Fields in declarative

2011-08-19 Thread Daniel Robbins
On Fri, Aug 19, 2011 at 9:23 AM, Mark Erbaugh m...@microenh.com wrote:


 On Aug 19, 2011, at 10:41 AM, Michael Bayer wrote:

  I'd use a mixin so that a superclass can be generated in a data-driven
 manner:

  MyCols = type("MyCols", (object,), dict(("field%d" % i, Column(Integer))
                for i in xrange(1, 10)))


If you are going to do this a lot, it can be a pain with declarative. I
developed a framework, based on the ORM layer, which uses explicit
_makeTable() and _mapTable() methods that I call in order to create and map
the tables. Since these are Python methods, I can use any kind of Python
iteration or code I want to decide what columns to create. It is a pretty
flexible model. Example conceptual code:

class Database(object):

    table_args = {'mysql_engine': 'InnoDB'}
    schema = "database_name"

    def __init__(self, dbclasses=[]):
        self.metadata = MetaData()
        self.engine = ...  # (set up engine, etc.)
        self.dbclasses = dbclasses
        for c in self.dbclasses:
            c._makeTable(self, self.engine)
            c.__table__.create(bind=self.engine, checkfirst=True)
        for c in self.dbclasses:
            c._mapTable(self)

class FooRecord(object):

    @classmethod
    def _makeTable(cls, db, engine):
        cls.db = db
        cls.__table__ = Table('foo', db.metadata,
                              Column('x'),
                              Column('y'),
                              schema=cls.schema,
                              **cls.table_args)  # etc.

    @classmethod
    def _mapTable(cls, db):
        mapper(cls, cls.__table__, properties={...})

db = Database([FooRecord])

You may find a model like this easier to use for creating dynamically
generated tables. The point here is that SQLAlchemy is sufficiently flexible
that if declarative doesn't meet your needs or is a bit cumbersome for what
you want to do, you can just start at the ORM (below declarative) level and
build up a framework that works for you.

Regards,

Daniel




[sqlalchemy] mysql table creation errno 105 when using vertical partitioning with InnoDB

2011-08-11 Thread Daniel Robbins
Hi there,

I have been using ORM vertical partitioning for a while with MySQL 5.1.58
and MyISAM tables and SQLAlchemy 0.7.2.

I have recently switched over to using InnoDB tables and my vertical
partitioning table creation is failing part-way in with an errno 105 code.
I have confirmed that all the tables that *did* get created are using an
InnoDB engine, so having stray MyISAM tables does not appear to be the
problem.

I have found a fix for the SQL that SQLAlchemy generates (and that MySQL
barfs on) that allows the table creation to succeed, which involves simply
prefixing SQLAlchemy's CREATE TABLE  foreign key references with the
database name. SQLAlchemy generates this line below, which also fails with
errno 105 when I paste it into MySQL monitor, just like when executed by
SQLA directly:

FOREIGN KEY(project_id) REFERENCES projects (id)

When I change it to this, table creation succeeds in mySQL monitor:

FOREIGN KEY(project_id) REFERENCES car_res.projects (id)

Basically, ForeignKey("projects.id") seems sufficient for SQLA to define
the foreign key relationship, but MySQL seems to be wanting
"car_res.projects (id)" to appear in the FOREIGN KEY SQL, instead of
"projects (id)".

So I'm a bit confused. Is SQLA to blame for not including this prefix in
vertical partitioning table creation scenarios, or is there some bug in my
code somewhere that is causing MySQL to barf or SQLA to generate incorrect
SQL? I have some basic vertical partitioning test code that uses InnoDB with
two tables, and for my basic test, it seems like the "database." prefix is
*not* required by MySQL to successfully create tables.

Anyone have any idea about what might be going on?

Regards,

Daniel




Re: [sqlalchemy] mysql table creation errno 105 when using vertical partitioning with InnoDB

2011-08-11 Thread Daniel Robbins
On Thu, Aug 11, 2011 at 12:20 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 Vertical partitioning implies tables in separate databases or schemas.
  It seems like you have it set up such that a table in partition A can refer
 to a table in partition B using a schema qualifier.   So if this is the case
 you'd use the schema argument on each Table to establish these names, when
 you do your CREATE, as well as schema-qualify the ForeignKey:

 t1 = Table("table_a", metadata, Column('id', Integer, primary_key=True),
 schema="schema_a")
 t2 = Table("table_b", metadata, Column('table_a_id', Integer, ForeignKey(
 "schema_a.table_a.id")), schema="schema_b")


Your answers raise more questions :) So it looks like I can use schema
prefixing as above and avoid using vertical partitioning altogether.

Another option would be to forego the usage of ForeignKey for table
 relationships that span across two partitions, at least when CREATE TABLE is
 emitted.   This is effectively what you were doing when you were on MyISAM,
 since REFERENCES is a no-op on MyISAM - it just gets thrown away hence you
 had no error.

 This is total conjecture since I don't know the details here nor do I know
 MySQL's performance characteristics very deeply, but the existence of actual
 cross-schema foreign key constraints in the MySQL database may be a
 performance issue, if writing to table B means a disk operation on an
 entirely separate schema for table A must occur in order to insert or update
 a row.


OK, let me try to understand this.

The sample vertical partitioning code (this is my basic test that works,
shown below, based on the sample code in the O'Reilly book) works correctly
with InnoDB. It appears to emit ForeignKey for table relationships when
CREATE TABLE is emitted. Is this code supposed to fail? It's working.

from sqlalchemy import *
from sqlalchemy.orm import *

engine1 = create_engine('mysql://car:foo@localhost/car_fac')
engine2 = create_engine('mysql://car:foo@localhost/car_res')
engine1.echo = engine2.echo = True

metadata = MetaData()

product_table = Table('product', metadata,
                      Column('sku', String(20), primary_key=True),
                      Column('msrp', Numeric),
                      mysql_engine='InnoDB')

product_summary_table = Table('product_summary', metadata,
                              Column('sku', String(20), ForeignKey('product.sku'),
                                     primary_key=True),
                              Column('name', Unicode(255)),
                              Column('description', Unicode(255)),
                              mysql_engine='InnoDB')

product_table.create(bind=engine1, checkfirst=True)
product_summary_table.create(bind=engine2, checkfirst=True)

class Product(object):
    pass

class ProductSummary(object):
    pass

mapper(ProductSummary, product_summary_table, properties=dict(
    product=relation(Product, backref=backref('summary',
                                              uselist=False))))
mapper(Product, product_table)

Session = sessionmaker(twophase=True)
Session.configure(binds={Product: engine1, ProductSummary: engine2})
session = Session()

As you can see, I'm using the same mysql account, but with two different
engines in a vertical partitioning configuration. MySQL is happy with the
foreign key relationships and creates the tables.

So... I don't understand your suggestion of not emitting ForeignKey at table
creation time. It appears to work in my basic test.

Shouldn't SQLA detect that the tables are vertically partitioned, treat the
ForeignKey relationships as NO-OPs to MySQL (because the tables may not be
able to *see* each other, since you could be using different MySQL accounts
with different permissions), and just use the ForeignKey definitions to help
set up the mappers properly?

I guess I don't understand the limitations/capabilities of vertical
partitioning in SQLA.

I have a more complex application that is basically doing the same thing as
this example code, actually using three engines to connect to three
different MySQL databases on the same server with the same account. This is
probably not the best way to do things, as schema prefixing would be better.
But as the above sample code shows, this *can* work. But my more complex app
is failing with the errno 105, which is what is confusing me. I can't figure
out the difference between my large application and this simple example, and
why the simple example works but my application does not, when they are
essentially doing the same thing. The sample code above emits SQL to MySQL
that defines the ForeignKey relationship and does not need a "database."
prefix. But my big app seems to need that "database." prefix. Maybe I have
the binds messed up?

-Daniel




Re: [sqlalchemy] mysql table creation errno 105 when using vertical partitioning with InnoDB

2011-08-11 Thread Daniel Robbins
On Thu, Aug 11, 2011 at 2:12 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 Just FYI the Oreilly book is super, duper, extremely old and out of date.
  A good read through the documentation on the SQLAlchemy site should be more
 complete at this point  and up to date, also stressing a whole set of new
 techniques that weren't available when the OReilly book was written (it was
 written against version 0.3).


OK, I think I am getting the hang of this. My "working" sample code in fact
didn't work when you tried to do anything with it.

I got the sample code working with two SQLite engines by dropping twophase,
which is not supported in SQLite. And I got the sample vertical partitioning
code working with two MySQL engines by removing the ForeignKey()
relationship and specifying the relationship explicitly in the mapper.

At this point, I was feeling adventurous, so I decided to try one MySQL
engine and one SQLite engine. It seems to work. Pretty cool. Example code
with comments below for anyone who might want to do this in the future. Run
"python vertical_test.py init" to populate the databases with data. On
successive runs, just run "python vertical_test.py":

#!/usr/bin/python

import sys
from sqlalchemy import *
from sqlalchemy.orm import *

engine1 = create_engine('mysql://car:cargofast@localhost/car_res')
engine2 = create_engine('sqlite:///x.db')
engine1.echo = engine2.echo = True

metadata = MetaData()

class Product(object):

    __table__ = Table('product', metadata,
                      Column('sku', String(20), primary_key=True),
                      Column('msrp', Numeric))

class ProductSummary(object):

    __table__ = Table('product_summary', metadata,
                      Column('sku', String(20), primary_key=True),
                      Column('name', Unicode(255)),
                      Column('description', Unicode(255)),
                      mysql_engine='InnoDB')

# create tables in different databases:

Product.__table__.create(bind=engine1, checkfirst=True)
ProductSummary.__table__.create(bind=engine2, checkfirst=True)

# map tables to classes and define relationships between the tables:

mapper(ProductSummary, ProductSummary.__table__,
       properties=dict(
           product=relationship(
               Product,

               # Since there is no database-level foreign key relationship,
               # we need to define primaryjoin and foreign_keys explicitly
               # so SQLAlchemy understands how the tables are connected:

               primaryjoin=Product.__table__.c.sku ==
                   ProductSummary.__table__.c.sku,
               foreign_keys=[Product.__table__.c.sku],
               backref=backref('summary', uselist=False)
           )
       )
)

mapper(Product, Product.__table__)

# Create session, and bind each class to the appropriate engine:

Session = sessionmaker()
Session.configure(binds={Product: engine1, ProductSummary: engine2})
session = Session()

# Run with "init" as first argument to create tables and populate them
# with data:

if __name__ == "__main__" and len(sys.argv) == 2 and sys.argv[1] == "init":

    # create records using statements:

    stmt = Product.__table__.insert()
    engine1.execute(
        stmt,
        [
            dict(sku="123", msrp=12.34),
            dict(sku="456", msrp=22.12),
        ])
    stmt = ProductSummary.__table__.insert()
    engine2.execute(
        stmt,
        [
            dict(sku="123", name="Shoes", description="Some Shoes"),
            dict(sku="456", name="Pants", description="Some Pants"),
        ])

    # or create records using ORM:

    a = Product()
    a.sku = "blarg"
    session.add(a)

    b = ProductSummary()
    b.sku = a.sku
    b.name = "blarg"
    b.description = "some blarg"
    session.add(b)
    session.commit()

# Query records, and SQLAlchemy relationships will help you to grab related
# records from totally disparate database engines:

for p in session.query(Product):
    print("PRODUCT INFO:", p.sku, p.msrp, p.summary.name,
          p.summary.description)

Enjoy,

Daniel




[sqlalchemy] Re: Query results not Integers

2011-05-03 Thread Daniel Holth
The query is simply returning rows with one column. For example,
session.query(X.a, X.b).all() would return a potentially less surprising
list of rows with two columns. The rows can be indexed by name or number. The
'L' is just Python telling you it is a long integer.
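
A quick sketch of the row behavior (X is a hypothetical mapped class with
columns a and b):

rows = session.query(X.a, X.b).all()
row = rows[0]
row[0], row.a  # each row supports positional and name-based access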




Re: [sqlalchemy] sqlite transaction isolation, select for update, race condition

2011-04-27 Thread Daniel Holth
Is this pysqlite issue about SELECT not starting a transaction related? 
http://code.google.com/p/pysqlite/issues/detail?id=21





Re: [sqlalchemy] Re: replacing or altering a mapper?

2011-04-25 Thread Daniel Holth
I suppose you explicitly don't want to call User.morestuff.otherproperty? I 
like doing it that way, but it could be that I am underusing SQLAlchemy's 
inheritance features. The following works fine:

package 1:

Base1 = declarative_base()
class User(Base1): pass

package 2:

Base2 = declarative_base()
class MoreUserStuff(Base2):
    user_id = Column(Integer, ForeignKey(User.user_id), index=True)
    user = relationship(User, backref='morestuff')
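
A sketch of the resulting access pattern (session setup assumed;
otherproperty stands in for whatever extra attribute MoreUserStuff
defines):

user = session.query(User).first()
user.morestuff                   # related MoreUserStuff rows via the backref
user.morestuff[0].otherproperty  # the hypothetical extra attribute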





[sqlalchemy] Re: Migrating from 0.4.8 (yes 0.4.8)

2011-04-18 Thread Daniel Holth
You should expect better ORM performance in newer versions. You should go 
straight to 0.6 and see what happens, with an eye on the 0.5 and 0.6 release 
notes for the things that most people have to change (it shouldn't be a big 
deal).

RunSnakeRun is supposed to be a nifty profiler. Can you profile 
unit/integration tests for only the Python portion of the code to bypass the 
PyObjC bridge?




[sqlalchemy] stucco_evolution 0.33 released!

2011-02-28 Thread Daniel Holth
stucco_evolution is my entry into the crowded 'schema migration' space. For 
people that like writing their own ALTER TABLE statements, stucco_evolution 
only concerns itself with passing a SQLAlchemy connection to numbered Python 
scripts in an evolve/ package, maintaining a (name, number) tuple for each 
managed package.

stucco_evolution is 200 lines of code with automated tests that provide 100% 
statement coverage. It works well for me, but the functions in the API do 
not have very good names. Suggestions welcome.

Daniel Holth




Re: [sqlalchemy] Transactional DDL and SQLite?

2011-02-18 Thread Daniel Holth
Thanks Mike. I will have to edit the pysqlite C source code if I want to
prevent it from committing when the query does not contain any of the
strings "select", "insert", "update", "delete", or "replace".

if (!strcmp(buf, "select")) {
    return STATEMENT_SELECT;
} else if (!strcmp(buf, "insert")) {
    return STATEMENT_INSERT;
} else if (!strcmp(buf, "update")) {
    return STATEMENT_UPDATE;
} else if (!strcmp(buf, "delete")) {
    return STATEMENT_DELETE;
} else if (!strcmp(buf, "replace")) {
    return STATEMENT_REPLACE;
} else {
    return STATEMENT_OTHER;
}




Re: [sqlalchemy] Transactional DDL and SQLite?

2011-02-18 Thread Daniel Holth
It looks like SQLAlchemy 0.7's events make it a lot easier to prepend
"/* select */" to every statement.
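
A sketch of what that might look like (an existing engine is assumed;
retval=True tells SQLAlchemy to use the statement and parameters the
listener returns):

from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute", retval=True)
def prepend_comment(conn, cursor, statement, parameters, context,
                    executemany):
    # prepend a comment so pysqlite's keyword sniffing sees something
    # other than the bare DDL keyword:
    return "/* select */ " + statement, parameters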

Daniel




Re: [sqlalchemy] Transactional DDL and SQLite?

2011-02-18 Thread Daniel Holth
I thought I could prepend "/* update */" to every statement but that didn't
work. Instead, https://bitbucket.org/dholth/pysqlite/changeset/cdc3a85dcb49

Obviously it should be a flag. Something like 
pysqlite2.surprise_transactions(False)




[sqlalchemy] Transactional DDL and SQLite?

2011-02-17 Thread Daniel Holth
Can someone help me understand why DDL seems to not be transactional here:

import sqlalchemy
e = sqlalchemy.create_engine('sqlite://')
c = e.connect()
t = c.begin()
c.execute("CREATE TABLE foo (bar INTEGER)")
t.rollback()

assert u'foo' in e.table_names()  # True

But, if I start up `sqlite3 db.db` and type: BEGIN; CREATE TABLE foo (bar 
INTEGER); ROLLBACK;

then no `foo` table is created. I am using SQLite 3.7.2.

I am trying to write migration scripts that create a table and populate it,
or fail and roll back the entire transaction.

Thanks.




[sqlalchemy] Re: can some one give me sample on using max with session.query?

2011-02-01 Thread Daniel Holth
session.query(sqlalchemy.func.max(MappedClass.column)).scalar()
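
A slightly fuller sketch with an explicit import (MappedClass and a
hypothetical grouping column group_col are assumed):

from sqlalchemy import func

max_value = session.query(func.max(MappedClass.column)).scalar()

# per-group maxima:
session.query(MappedClass.group_col, func.max(MappedClass.column)).\
    group_by(MappedClass.group_col).all()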




[sqlalchemy] Re: autocommit on for DDL

2011-01-28 Thread Daniel Holth
You might be interested to know that the situation is more like "If you are
not using MySQL, you probably have transactional DDL." Even SQLite has it.

According to
http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis:

   - PostgreSQL - yes
   - MySQL - no; DDL causes an implicit commit
   - Oracle Database 11g Release 2 and above - yes (something called
     edition-based redefinition)
   - Older versions of Oracle - no; DDL causes an implicit commit
   - SQL Server - yes
   - Sybase Adaptive Server - yes
   - DB2 - yes
   - Informix - yes
   - Firebird (Interbase) - yes





[sqlalchemy] Re: How to specify the remote side in a self-referential relationship on one column only?

2011-01-06 Thread Daniel Gerber
Thanks for your answer.

Eventually I settled for a standard property to just access children.
And the tree iterator --the reason for trying to tweak this
relationship in the first place-- figures out the relations by itself,
as I couldn't extend your solution to work with lazy='subquery' and
join_depth > 1.
Just a detail: the idea for n1.id.like(n2.id + '_') was that the SQL
wildcard '_' means 'one level down the tree'.

On 5 jan, 21:47, Michael Bayer mike...@zzzcomputing.com wrote:
 It is barely possible to do.   The usual descriptor approach to an exotic
 join condition, in case you'd like to keep things simple, is:

 class Node(Base):
     @property
     def children(self):
         return object_session(self).query(Node).filter(
             Node.id.like(self.id + '_')).all()

 Otherwise, here's the recipe, which is the best we can do - if it has 
 problems beyond hello world, you might need to drop back to the usual 
 descriptor approach:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class Node(Base):
    __tablename__ = 'nodes'
    id = Column(String(8), primary_key=True)

 nodes = Node.__table__
 nalias = Node.__table__.alias()
 m2 = mapper(Node, nalias, non_primary=True)

 Node.children = relationship(m2,
                        primaryjoin=nalias.c.id.like(nodes.c.id + "_"),
                        _local_remote_pairs=[(nodes.c.id, nalias.c.id)],
                        foreign_keys=[nalias.c.id],
                        viewonly=True
                        )

 e = create_engine('sqlite://', echo='debug')
 Node.metadata.create_all(e)
 s = Session(e)

 s.add_all(
     [
         Node(id='node1'),
         Node(id='node1_'),
         Node(id='node2'),
         Node(id='node2_'),
     ]
 )

 n1, n2 = s.query(Node).\
             options(eagerload(Node.children)).\
             filter(Node.id.in_(['node1', 'node2'])).\
             all()

 assert [n.id for n in n1.children] == ['node1_']
 assert [n.id for n in n2.children] == ['node2_']

 On Jan 5, 2011, at 3:06 PM, Daniel Gerber wrote:

  Hi sqlalchemy-group,

  Is it possible to define a self-referential relationship on a join
  condition involving the same column on both sides? This:

  class Node(Base):
     __tablename__ = 'nodes'
     id = Column(String(8), primary_key=True)

  Node2 = aliased(Node)
  Node.children = relationship(Node2,
                         primaryjoin=Node.id.like(Node2.id+'_'),
                         remote_side=Node2.id
                         )

  will raise:
     ArgumentError: relationship 'children' expects a class or a mapper
  argument (received: class 'sqlalchemy.orm.util.AliasedClass')

  Am I missing some logical impossibility here, or it could in principle
  just work? The problem is how to specify which 'id' is the remote
  side...

  The id column represents a materialized path, there is no foreign key
  constraints involved, and a query would look like this:

     SELECT n1.id AS parent , n2.id AS child
         FROM nodes AS n1
             LEFT OUTER JOIN nodes AS n2
                 ON n2.id LIKE n1.id || '_' ;

  Thanks in advance for comments!





[sqlalchemy] How to specify the remote side in a self-referential relationship on one column only?

2011-01-05 Thread Daniel Gerber
Hi sqlalchemy-group,

Is it possible to define a self-referential relationship on a join
condition involving the same column on both sides? This:

class Node(Base):
    __tablename__ = 'nodes'
    id = Column(String(8), primary_key=True)

Node2 = aliased(Node)
Node.children = relationship(Node2,
                             primaryjoin=Node.id.like(Node2.id + '_'),
                             remote_side=Node2.id
                             )

will raise:
ArgumentError: relationship 'children' expects a class or a mapper
argument (received: class 'sqlalchemy.orm.util.AliasedClass')

Am I missing some logical impossibility here, or it could in principle
just work? The problem is how to specify which 'id' is the remote
side...

The id column represents a materialized path, there is no foreign key
constraints involved, and a query would look like this:

SELECT n1.id AS parent, n2.id AS child
    FROM nodes AS n1
        LEFT OUTER JOIN nodes AS n2
            ON n2.id LIKE n1.id || '_';


Thanks in advance for comments!




[sqlalchemy] subclass mapped class to add utility methods?

2010-12-06 Thread Daniel Holth
I am trying to subclass a mapped class from another package for no
other reason than to add a utility method. The polymorphic_on
condition, if it were defined, would have to be 'True'. Example:

class Mapped(declarative_base()):
    # columns

class Utility(Mapped):
    def is_something(self):
        return 'foo'

Encouraged by the fact that session.query(Utility).first() works, I
proceeded and got an exception.

The error is

FlushError: Attempting to flush an item of type Utility on
collection 'Something.mapped', which is not the expected type.
Configure mapper 'Mapper|User|user' to load this subtype
polymorphically, or set enable_typechecks=False to allow subtypes.
Mismatched typeloading may cause bi-directional relationships
(backrefs) to not function properly.

I'm starting to think I should just monkey patch the original class or
just pass the Mapped() into utility()... is that the best way?

Thanks!

Daniel




[sqlalchemy] Re: subclass mapped class to add utility methods?

2010-12-06 Thread Daniel Holth
Thank you! I never would have thought of appending a superclass as an
alternative to subclassing or monkeypatching. No wonder Python is so
hard to optimize. I wound up doing the monkey patch and it seems to
work just fine.




[sqlalchemy] Re: How to separate models into own modules?

2010-11-01 Thread Daniel Meier
Well thank you guys for your help.

I like the way Django (if you know it) separates the models into
modules. But I can also live with a single module containing all models.

I have played with sqlalchemy for some time now, and I created two classes in
my db.py file (see http://pastie.org/1264141). In the User class, I
added two functions, "create" and "delete". Is it best practice to
add such functions inside the respective classes? At least, I want the
code that deals with sqlalchemy in one place. I think my example is
bad because I would not be able to e.g. search for a user without defining
one first (as "search" would be in the User class).
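
One pattern worth considering (a sketch; it assumes a name column and an
available session) is to make operations that don't need an instance
classmethods on the model:

class User(Base):
    # ... columns ...

    @classmethod
    def search(cls, session, name):
        return session.query(cls).filter(cls.name == name).all()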

@Mark: Your example looks great. I'll try that. Thanks!

Any suggestions/best practices?
Daniel




[sqlalchemy] Re: How to separate models into own modules?

2010-11-01 Thread Daniel Meier
Thanks Michael

I must admit that I completely misunderstood the session. It was clear
to me that everything done until session.commit() is in a transaction,
but I somehow had no idea how to implement it and ended up creating
those "micro-sessions". Still not sure where to put the business
logic.

Daniel




[sqlalchemy] How to separate models into own modules?

2010-10-31 Thread Daniel Meier
Hi list

I have tried the examples in the ORM tutorial and I'm wondering how I
can separate different models into their own modules.

Let's say I have a BankAccount model and a BankCustomer model. My
idea would be to create two modules, bankaccount.py and
bankcustomer.py, and a database handler that imports the necessary
stuff from that modules.

This is what I came up with: http://pastie.org/1262474

Actually, it doesn't work because the database was not yet initialized
(metadata.create_all). Where should I put that? Or is my idea of
splitting things up a bad one?

Daniel




Re: [sqlalchemy] DB tables, check exists before creation? duplicate and rename table in sqlalchemy?

2010-08-23 Thread Daniel Kluev
On Tue, Aug 24, 2010 at 2:04 AM, keekychen.shared 
keekychen.sha...@gmail.com wrote:

  Dear All,

 I have questions in coding my application using sqlalchemy on database
 tables.

 1. how to create a table if it does not exist in metadata.create_all(engine)
   I googled and found this:

 http://www.mail-archive.com/sqlalchemy-us...@lists.sourceforge.net/msg02121.html

 but I found no keyword matching "checkexists=True" in the documentation for
 version 0.6.4


create_all() does this by default.
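
A sketch of the default behavior (engine and metadata are assumed):

metadata.create_all(engine)                   # skips tables that already exist
metadata.create_all(engine, checkfirst=True)  # explicit form of the default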

-- 
With best regards,
Daniel Kluev




Re: [sqlalchemy] Model factory

2010-08-20 Thread Daniel Kluev
On Fri, Aug 20, 2010 at 7:39 AM, Eduardo Robles Elvira edu...@gmail.comwrote:

 Now, these functions reimplemented in the inherited class models might
 be called by the jobs view. It would be very convenient if, when I get
 the jobs from the database with sqlalchemy, I could directly get them as
 CustomJobA, CustomJobB instances, instead of all being
 instances of the base class Job. There's a field in the base Job
 class which tells me which class type it is (basically, it's
 path.to.module and then ClassName).


SQLAlchemy supports this out of the box.
http://www.sqlalchemy.org/docs/reference/ext/declarative.html#single-table-inheritance

So it will be something like this:

class Job(Base):
    class_name = Column(Unicode(64))
    __mapper_args__ = {'polymorphic_on': class_name}

class CustomJobA(Job):
    __mapper_args__ = {'polymorphic_identity': 'CustomJobA'}

SQLAlchemy will then load the correct class instance for you on queries,
depending on the value of the field.
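
A sketch of the effect on queries (session assumed):

for job in session.query(Job):
    # rows whose class_name is 'CustomJobA' come back as CustomJobA
    # instances, as selected by the polymorphic_on column
    print(type(job))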

-- 
With best regards,
Daniel Kluev



