[sqlalchemy] Re: _compose_select_body

2018-01-13 Thread Russ Wilson
I figured it out... thanks

On Saturday, January 13, 2018 at 9:35:29 AM UTC-6, Russ Wilson wrote:
>
>
> In _compose_select_body within compiler.py, brackets ([]) are added around 
> the various parts of the select. I need to alter that so it puts quotes 
> instead. Is there a property I can set to change that behavior, or do I 
> need to override it?
>
> Thanks for the help!
>



[sqlalchemy] _compose_select_body

2018-01-13 Thread Russ Wilson

In _compose_select_body within compiler.py, brackets ([]) are added around 
the various parts of the select. I need to alter that so it puts quotes 
instead. Is there a property I can set to change that behavior, or do I 
need to override it?

Thanks for the help!
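
Since the bracketed identifiers come from the dialect's IdentifierPreparer 
rather than from _compose_select_body itself, here is a minimal hedged 
sketch of switching a dialect to double-quote quoting; using the MSSQL 
pyodbc dialect as a base and the class names here are illustrative 
assumptions, not the poster's actual solution:

from sqlalchemy.dialects.mssql.pyodbc import MSDialect_pyodbc
from sqlalchemy.sql.compiler import IdentifierPreparer

class DoubleQuotePreparer(IdentifierPreparer):
    def __init__(self, dialect):
        # Render identifiers as "name" instead of [name].
        super(DoubleQuotePreparer, self).__init__(
            dialect, initial_quote='"', final_quote='"')

class MyDialect(MSDialect_pyodbc):
    preparer = DoubleQuotePreparer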



Re: [sqlalchemy] dialect issue with pyodbc.Row type

2018-01-10 Thread Russ Wilson
Thanks for the insights

On Tue, Jan 9, 2018 at 10:23 PM Mike Bayer <mike...@zzzcomputing.com> wrote:

> On Tue, Jan 9, 2018 at 8:45 PM, Russ Wilson <rpwil...@gmail.com> wrote:
> > So I loaded and tested the mssql dialect and it gave the same results. It
> > returns a list of pyodbc.Row:
> >
> > from sqlalchemy import Column, Integer, String
> > from sqlalchemy.ext.declarative import declarative_base
> > from sqlalchemy import create_engine
> > from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
> > from sqlalchemy import inspect
> > from sqlalchemy.dialects import registry
> >
> >
> > engine = create_engine(
> >     "mssql+pyodbc://MYUSER:MYPASSWORD@IP:1433/corn?driver=FreeTDS")
> > connection = engine.raw_connection()
> > try:
> >     cursor = connection.cursor()
> >     cursor.execute("SELECT * FROM ADV.MYTABLE")
> >     results_one = cursor.fetchmany(100)
> >     for row in results_one:
> >         print(type(row))
> >
> >     cursor.close()
> > finally:
> >     connection.close()
>
> When you use raw_connection(), you are stating that you would like to
> use a raw DBAPI connection object, and you are no longer using the
> SQLAlchemy dialect.   You are using the plain pyodbc cursor directly,
> which is usually not necessary unless you need to work with stored
> procedures or special cursor methods.
>
> Using SQLAlchemy normally, your code above would be:
>
> with engine.connect() as conn:
> result = conn.execute("SELECT * FROM ADV.MYTABLE")
>
> result is then a ResultProxy object and returns RowProxy objects when
> you call fetchone(), fetchmany(), and fetchall().RowProxy then
> acts like a tuple.
>
>
>
>
> >
> > On Tuesday, January 9, 2018 at 4:38:54 PM UTC-6, Mike Bayer wrote:
> >>
> >> There's the README at
> >> https://github.com/zzzeek/sqlalchemy/blob/master/README.dialects.rst
> >> which also includes some links to an example dialect.
> >>
> >> On Jan 9, 2018 12:35 PM, "Russ Wilson" <rpwi...@gmail.com> wrote:
> >>
> >> Is there a good doc that covers what, at a minimum, needs to be
> >> extended to create a dialect?
> >>
> >> On Mon, Jan 8, 2018 at 3:15 PM Mike Bayer <mik...@zzzcomputing.com>
> wrote:
> >>>
> >>> On Sun, Jan 7, 2018 at 9:07 PM, Russ Wilson <rpwi...@gmail.com> wrote:
> >>> > I noticed if you use cursor.fetchmany it returns the pyodbc types. Is
> >>> > this an issue with the dialect?  If you use the connection execute
> >>> > you are correct, it returns a ResultRow. Thanks for the help.
> >>> >
> >>> > cursor = connection.cursor()
> >>> > cursor.execute("SELECT * FROM mytable")
> >>> > results_one = cursor.fetchmany(100)
> >>> > for row in results_one:
> >>> >     print(type(row))
> >>>
> >>> It's not an issue; when you call SQLAlchemy's fetchmany(), it is
> >>> internally retrieving the pyodbc row objects and converting them to
> >>> ResultRow objects.
> >>>
> >>> As I mentioned before, we have three dialects that use Pyodbc and two
> >>> are production quality, same API, same row objects, etc.
> >>>
> >>>
> >>>
> >>> >
> >>> >
> >>> > On Sunday, January 7, 2018 at 12:01:29 PM UTC-6, Mike Bayer wrote:
> >>> >>
> >>> >>
> >>> >>
> >>> >> On Jan 7, 2018 11:29 AM, "Russ Wilson" <rpwi...@gmail.com> wrote:
> >>> >>
> >>> >> When I attempt to create a pandas DataFrame from the results it
> >>> >> throws this error: "Shape of passed values is (1, 100), indices
> >>> >> imply (9, 100)", because it is seeing the results as 1 column vs a
> >>> >> list of columns.  I'll take a look at the SQL Server one.  Thanks
> >>> >>
> >>> >>
> >>> >> That has nothing to do with a SQLAlchemy dialect because all
> >>> >> SQLAlchemy
> >>> >> result sets come back as a ResultSet object and every row is a
> >>> >> ResultRow.
> >>> >> The pyodbc internals are not exposed.

Re: [sqlalchemy] dialect issue with pyodbc.Row type

2018-01-09 Thread Russ Wilson
So I loaded and tested the mssql dialect and it gave the same results. It 
returns a list of pyodbc.Row:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy import inspect
from sqlalchemy.dialects import registry


engine = create_engine(
    "mssql+pyodbc://MYUSER:MYPASSWORD@IP:1433/corn?driver=FreeTDS")
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM ADV.MYTABLE")
    results_one = cursor.fetchmany(100)
    for row in results_one:
        print(type(row))

    cursor.close()
finally:
    connection.close()

On Tuesday, January 9, 2018 at 4:38:54 PM UTC-6, Mike Bayer wrote:
>
> There's the README at 
> https://github.com/zzzeek/sqlalchemy/blob/master/README.dialects.rst 
> which also includes some links to an example dialect.
>
> On Jan 9, 2018 12:35 PM, "Russ Wilson" <rpwi...@gmail.com > 
> wrote:
>
> Is there a good doc that covers what, at a minimum, needs to be extended 
> to create a dialect? 
>
> On Mon, Jan 8, 2018 at 3:15 PM Mike Bayer <mik...@zzzcomputing.com 
> > wrote:
>
>> On Sun, Jan 7, 2018 at 9:07 PM, Russ Wilson <rpwi...@gmail.com 
>> > wrote:
>> > I noticed if you use cursor.fetchmany it returns the pyodbc types. Is
>> > this an issue with the dialect?  If you use the connection execute you
>> > are correct, it returns a ResultRow. Thanks for the help.
>> >
>> > cursor = connection.cursor()
>> > cursor.execute("SELECT * FROM mytable")
>> > results_one = cursor.fetchmany(100)
>> > for row in results_one:
>> >     print(type(row))
>>
>> It's not an issue; when you call SQLAlchemy's fetchmany(), it is
>> internally retrieving the pyodbc row objects and converting them to
>> ResultRow objects.
>>
>> As I mentioned before, we have three dialects that use Pyodbc and two
>> are production quality, same API, same row objects, etc.
>>
>>
>>
>> >
>> >
>> > On Sunday, January 7, 2018 at 12:01:29 PM UTC-6, Mike Bayer wrote:
>> >>
>> >>
>> >>
>> >> On Jan 7, 2018 11:29 AM, "Russ Wilson" <rpwi...@gmail.com> wrote:
>> >>
>> >> When I attempt to create a pandas DataFrame from the results it throws
>> >> this error: "Shape of passed values is (1, 100), indices imply
>> >> (9, 100)", because it is seeing the results as 1 column vs a list of
>> >> columns.  I'll take a look at the SQL Server one.  Thanks
>> >>
>> >>
>> >> That has nothing to do with a SQLAlchemy dialect because all SQLAlchemy
>> >> result sets come back as a ResultSet object and every row is a 
>> ResultRow.
>> >> The pyodbc internals are not exposed.
>> >>
>> >>
>> >>
>> >> pd.DataFrame(data=data, columns=column_names)
>> >>
>> >>
>> >>
>> >> On Saturday, January 6, 2018 at 11:57:57 PM UTC-6, Mike Bayer wrote:
>> >>>
>> >>> pyodbc.Row acts like a tuple so there is no special conversion needed.
>> >>>
>> >>> SQLAlchemy has three pyodbc dialects, for SQL Server (very stable),
>> >>> MySQL (sorta works), and Sybase (probably doesn't work), but you can
>> >>> use the first two as examples for the basics.   They base off of the
>> >>> PyODBCConnector in connectors/pyodbc.py.
>> >>>
>> >>>
>> >>> On Sun, Jan 7, 2018 at 12:40 AM, Russ Wilson <rpwi...@gmail.com> 
>> wrote:
>> >>> >
>> >>> > I was attempting to create a new dialect but hit an issue.  pyodbc
>> >>> > is returning a list of pyodbc.Row.  Is there a method I should be
>> >>> > implementing to convert the list to a list of tuples?
>> >>> >
>> >>> > Thanks
>> >>> >
Re: [sqlalchemy] dialect issue with pyodbc.Row type

2018-01-09 Thread Russ Wilson
Is there a good doc that covers what, at a minimum, needs to be extended to
create a dialect?

On Mon, Jan 8, 2018 at 3:15 PM Mike Bayer <mike...@zzzcomputing.com> wrote:

> On Sun, Jan 7, 2018 at 9:07 PM, Russ Wilson <rpwil...@gmail.com> wrote:
> > I noticed if you use cursor.fetchmany it returns the pyodbc types. Is
> > this an issue with the dialect?  If you use the connection execute you
> > are correct, it returns a ResultRow. Thanks for the help.
> >
> > cursor = connection.cursor()
> > cursor.execute("SELECT * FROM mytable")
> > results_one = cursor.fetchmany(100)
> > for row in results_one:
> >     print(type(row))
>
> It's not an issue; when you call SQLAlchemy's fetchmany(), it is
> internally retrieving the pyodbc row objects and converting them to
> ResultRow objects.
>
> As I mentioned before, we have three dialects that use Pyodbc and two
> are production quality, same API, same row objects, etc.
>
>
>
> >
> >
> > On Sunday, January 7, 2018 at 12:01:29 PM UTC-6, Mike Bayer wrote:
> >>
> >>
> >>
> >> On Jan 7, 2018 11:29 AM, "Russ Wilson" <rpwi...@gmail.com> wrote:
> >>
> >> When I attempt to create a pandas DataFrame from the results it throws
> >> this error: "Shape of passed values is (1, 100), indices imply
> >> (9, 100)", because it is seeing the results as 1 column vs a list of
> >> columns.  I'll take a look at the SQL Server one.  Thanks
> >>
> >>
> >> That has nothing to do with a SQLAlchemy dialect because all SQLAlchemy
> >> result sets come back as a ResultSet object and every row is a
> ResultRow.
> >> The pyodbc internals are not exposed.
> >>
> >>
> >>
> >> pd.DataFrame(data=data, columns=column_names)
> >>
> >>
> >>
> >> On Saturday, January 6, 2018 at 11:57:57 PM UTC-6, Mike Bayer wrote:
> >>>
> >>> pyodbc.Row acts like a tuple so there is no special conversion needed.
> >>>
> >>> SQLAlchemy has three pyodbc dialects, for SQL Server (very stable),
> >>> MySQL (sorta works), and Sybase (probably doesn't work), but you can
> >>> use the first two as examples for the basics.   They base off of the
> >>> PyODBCConnector in connectors/pyodbc.py.
> >>>
> >>>
> >>> On Sun, Jan 7, 2018 at 12:40 AM, Russ Wilson <rpwi...@gmail.com>
> wrote:
> >>> >
> >>> > I was attempting to create a new dialect but hit an issue.  pyodbc
> >>> > is returning a list of pyodbc.Row.  Is there a method I should be
> >>> > implementing to convert the list to a list of tuples?
> >>> >
> >>> > Thanks
> >>> >

Re: [sqlalchemy] dialect issue with pyodbc.Row type

2018-01-07 Thread Russ Wilson
I noticed if you use cursor.fetchmany it returns the pyodbc types. Is 
this an issue with the dialect?  If you use the connection execute you are 
correct, it returns a ResultRow. Thanks for the help. 

cursor = connection.cursor()
cursor.execute("SELECT * FROM mytable")
results_one = cursor.fetchmany(100)
for row in results_one:
    print(type(row))


On Sunday, January 7, 2018 at 12:01:29 PM UTC-6, Mike Bayer wrote:
>
>
>
> On Jan 7, 2018 11:29 AM, "Russ Wilson" <rpwi...@gmail.com > 
> wrote:
>
> When I attempt to create a pandas DataFrame from the results it throws this 
> error: "Shape of passed values is (1, 100), indices imply (9, 100)", because 
> it is seeing the results as 1 column vs a list of columns.  I'll take a look 
> at the SQL Server one.  Thanks
>
>
> That has nothing to do with a SQLAlchemy dialect because all SQLAlchemy 
> result sets come back as a ResultSet object and every row is a ResultRow.  
>  The pyodbc internals are not exposed.
>
>
>
> pd.DataFrame(data=data, columns=column_names)
>
>
>
> On Saturday, January 6, 2018 at 11:57:57 PM UTC-6, Mike Bayer wrote:
>
>> pyodbc.Row acts like a tuple so there is no special conversion needed. 
>>
>> SQLAlchemy has three pyodbc dialects, for SQL Server (very stable), 
>> MySQL (sorta works), and Sybase (probably doesn't work), but you can 
>> use the first two as examples for the basics.   They base off of the 
>> PyODBCConnector in connectors/pyodbc.py. 
>>
>>
>> On Sun, Jan 7, 2018 at 12:40 AM, Russ Wilson <rpwi...@gmail.com> wrote: 
>> > 
>> > I was attempting to create a new dialect but hit an issue.  pyodbc is 
>> > returning a list of pyodbc.Row.  Is there a method I should be 
>> > implementing to convert the list to a list of tuples? 
>> > 
>> > Thanks 
>> > 


Re: [sqlalchemy] dialect issue with pyodbc.Row type

2018-01-07 Thread Russ Wilson
When I attempt to create a pandas DataFrame from the results it throws this 
error: "Shape of passed values is (1, 100), indices imply (9, 100)", because 
it is seeing the results as 1 column vs a list of columns.  I'll take a look 
at the SQL Server one.  Thanks

pd.DataFrame(data=data, columns=column_names)
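
A hedged aside, not from the thread: pyodbc.Row is tuple-like, but an 
explicit tuple() per row makes the 2-D shape unambiguous to pandas; 
`results_one` and `column_names` follow the snippets in this thread.

import pandas as pd

# Convert each pyodbc.Row to a plain tuple before building the frame.
data = [tuple(row) for row in results_one]
df = pd.DataFrame(data=data, columns=column_names)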



On Saturday, January 6, 2018 at 11:57:57 PM UTC-6, Mike Bayer wrote:
>
> pyodbc.Row acts like a tuple so there is no special conversion needed. 
>
> SQLAlchemy has three pyodbc dialects, for SQL Server (very stable), 
> MySQL (sorta works), and Sybase (probably doesn't work), but you can 
> use the first two as examples for the basics.   They base off of the 
> PyODBCConnector in connectors/pyodbc.py. 
>
>
> On Sun, Jan 7, 2018 at 12:40 AM, Russ Wilson <rpwi...@gmail.com 
> > wrote: 
> > 
> > I was attempting to create a new dialect but hit an issue.  pyodbc is 
> > returning a list of pyodbc.Row.  Is there a method I should be 
> > implementing to convert the list to a list of tuples? 
> > 
> > Thanks 
> > 



[sqlalchemy] dialect issue with pyodbc.Row type

2018-01-06 Thread Russ Wilson

I was attempting to create a new dialect but hit an issue.  pyodbc is 
returning a list of pyodbc.Row.  Is there a method I should be implementing 
to convert the list to a list of tuples? 

Thanks
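
A hedged sketch related to the dialect-authoring side of this question, per 
the README.dialects.rst linked later in the thread; the module and class 
names here are placeholders:

from sqlalchemy.dialects import registry

# Make "mydb+pyodbc://" URLs resolve to an external dialect class.
registry.register("mydb.pyodbc", "mydb_sqlalchemy.pyodbc",
                  "MyDBDialect_pyodbc")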



Re: [sqlalchemy] Determining what, if anything, a Session.merge() does?

2016-03-10 Thread Russ
Excellent, thank you.  is_modified() works very well in this case, with 
caveats noted.  Also, a nice intro to the History API... hadn't seen that 
before!



[sqlalchemy] Determining what, if anything, a Session.merge() does?

2016-03-10 Thread Russ
Is there any way to tell what the outcome of a Session.merge() operation is?

The case of specific interest is when the instance to be merged *does* 
exist prior to the merge() call.  Is there a built in way to see if any 
attributes end up updated, or does this need to be checked manually?
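
A hedged sketch of the approach the reply above settles on (is_modified() 
plus the History API); `obj` and the "name" attribute are placeholders:

from sqlalchemy import inspect

merged = session.merge(obj)
if session.is_modified(merged, include_collections=False):
    # Per-attribute history shows what the merge changed.
    hist = inspect(merged).attrs.name.history
    print(hist.added, hist.deleted)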



Re: [sqlalchemy] Can't locate strategy for ... (('lazy', 'joined'),)

2015-05-21 Thread Russ
Yes, 'number' is a column, as you surmised.  When I drop that from the path 
it works fine.  The only remaining problem is/was that this ends up loading 
in every field in the child_product table, and this includes a potentially 
massive BSON column (and more).

After looking into this, I've now learned about deferred, defer, undefer, 
load_only, etc.  This seems to be the correct way to manage this, and it 
appears to be working fine:

q = q.options(sa.orm
              .joinedload("defined_items")
              .joinedload("child_product")
              .load_only("number")
              )

Thanks for pointing me in the right direction!  This page had the info I 
needed:
http://docs.sqlalchemy.org/en/latest/orm/loading_columns.html

Russ


On Thursday, May 21, 2015 at 4:01:16 PM UTC-4, Michael Bayer wrote:

  

 On 5/21/15 3:56 PM, Russ wrote:
  
  nope.  I'd need a complete, self-contained and succinct example I can 
 run, thanks


  Ok, thanks.  This is a beefy one so that will be extremely tricky to 
 extract.  I had hoped that the combo of lazy+joined would have been a clear 
 indicator since they are opposite loading strategies.
  

 the word "lazy" there is the name of the field. It is the internal 
 equivalent to the "lazy" argument on relationship:  

 http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_api.html?highlight=relationship#sqlalchemy.orm.relationship.params.lazy

 that said, here is exactly what will cause your error. Any of the 
 attribute names "defined_items", "child_product" or "number" are not in 
 fact bound to a relationship(), and instead refer to a Column-mapped 
 attribute.  Looks a lot like "number" here is a Column.  Is that the case?  
 You can't call joinedload for a column attribute.   


  



[sqlalchemy] Can't locate strategy for ... (('lazy', 'joined'),)

2015-05-21 Thread Russ
I have a query I am running where sqlalchemy is throwing this exception:

Exception: can't locate strategy for <class 
'sqlalchemy.orm.properties.ColumnProperty'> (('lazy', 'joined'),)

What causes this is the addition of this joinedload_all option to a query 
(q):

q = q.options(sa.orm.joinedload_all("defined_items.child_product.number"))

The intent of adding that is to try and avoid emitting sql when accessing 
foo.defined_items[x].child_product.number, as there are many defined_items.

There are several other joined loads on this query that don't have an 
issue.  Any ideas on what would be causing this particular problem?
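
A hedged sketch of the fix that emerges later in this thread: end the 
joined-load path at the relationship, and name the column via load_only 
rather than in the path itself.

q = q.options(sa.orm.joinedload("defined_items")
                    .joinedload("child_product")
                    .load_only("number"))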



Re: [sqlalchemy] Can't locate strategy for ... (('lazy', 'joined'),)

2015-05-21 Thread Russ


 nope.  I'd need a complete, self-contained and succinct example I can run, 
 thanks


Ok, thanks.  This is a beefy one so that will be extremely tricky to 
extract.  I had hoped that the combo of lazy+joined would have been a clear 
indicator since they are opposite loading strategies.

Digging through the relationship definitions I thought it might be because 
the child_product relationship is explicitly declared with lazy = True 
('select') and (since I don't do that on other properties) I thought 
perhaps that was conflicting with the later joinedload request and 
confusing the strategy lookup.  However, I see that True/select is the 
default, and I explicitly request joinedload at query time on other 
relationships/properties without issue, so that can't be it.  I'm stumped 
again.



 







[sqlalchemy] Deleting object attributes to avoid merge comparison no longer works in 1.0

2015-04-04 Thread Russ
I have some buried-in-the-codebase-since-0.7 upsert code that uses `merge`. 
 In order to prevent certain attributes from being used in the merge 
comparison, the attributes were deleted using delattr.
The code looks something like this:

db_obj = sess.query(obj_type).filter_by(**filters).one()
pk = obj.__mapper__.primary_key[0].name
setattr(obj, pk, getattr(db_obj, pk))  # convince merge they're comparable
for attr in merge_ignores:
    # See http://goo.gl/oBbpp for why we delete the attribute to avoid
    # comparison
    delattr(obj, attr)
obj = sess.merge(obj)

I've recently been trying 1.0.0b4, and now 1.0.0b5 and this code no longer 
works.

Specifically, the code throws a KeyError on the delattr line. Here's the 
clipped traceback:

File 
"/home/russ/code/bitbucket/sqlalchemy/lib/sqlalchemy/orm/attributes.py", 
line 227, in __delete__
  self.impl.delete(instance_state(instance), instance_dict(instance))
File 
"/home/russ/code/bitbucket/sqlalchemy/lib/sqlalchemy/orm/attributes.py", 
line 738, in delete
  del dict_[self.key]

KeyError: 'date_added_log'

The attribute *definitely* exists at the time (albeit with a value of 
`None`, but it's there).  I doubt it matters, but obj is an instance of a 
declarative_base'd class.  

I see that __slots__ has been introduced to `ScalarObjectAttributeImpl`, 
which is where this is tanking.  __slots__ seems like a likely candidate 
here, but I have to dig further.

For now I've reverted back to 0.9.9 so the code works again.

Is there some less sketchy way (that works in 1.0!) to force merge to not 
look at certain fields?

Russ
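
(A hedged workaround sketch, not from the thread: skip merge() for such 
objects entirely by copying column values onto the persistent instance 
directly; the names follow the snippet above.)

db_obj = sess.query(obj_type).filter_by(**filters).one()
for col in obj.__mapper__.columns:
    # Copy every mapped column except the ones merge should ignore.
    if col.key not in merge_ignores:
        setattr(db_obj, col.key, getattr(obj, col.key))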



Re: [sqlalchemy] Connection pooling strategy for a small/fixed number of db users

2015-03-03 Thread Russ


 well the engine is essentially a holder for a connection pool. If you use 
 a 
 pool like NullPool, it makes a new connection on every use, but in that 
 case 
 there is still not an official way to send in different connection 
 parameters. There’s no advantage to trying to make Engine work in a 
 different way, using two engines is just shorthand for using two different 
 sets of credentials. 


Ok.  That makes sense.  Thanks for clarifying.

Definitely don’t need two sessionmakers, the engine can be passed both to 
 the constructor of Session directly as well as to the sessionmaker 
 function 
 at the moment the new Session is created. But it shouldn’t be any less 
 convenient to have two sessionmakers vs. passing two different engines in, 
 vs. passing connection credentials in somewhere when you need to connect. 
 There’s “two of something” going on no matter how you try to organize 
 that. 


So I only need to select the relevant engine and pass it to my (scoped) 
sessionmaker, then.  Great.
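
A minimal sketch of that arrangement, with placeholder credentials:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

rw_engine = create_engine("postgresql://app:app_pw@localhost/mydb")
ro_engine = create_engine("postgresql://report:report_pw@localhost/mydb")
Session = sessionmaker()

def make_session(read_only=False):
    # Pick the engine (i.e. the credential set) at Session creation time.
    return Session(bind=ro_engine if read_only else rw_engine)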
 

 I think what’s odd here is that if this is a web app, why is it needing to 
 maintain two sets of credentials internally in a single process for what 
 is 
 apparently the same database. 


Some of the functions in the API allow arbitrary SQL strings to be passed 
for execution... but with restricted access/views, for reporting only.  The 
API path is used to control server access, rather than exposing the 
database (postgres) to direct external connections.  Postgres only listens 
on localhost.  These APIs then use separate credentials to ensure read-only 
access, whereas the vast majority of APIs have full access.

Still odd? :)

Thanks for the help, guys!

Russ



[sqlalchemy] Re: Old event listeners firing on the wrong session?

2015-02-09 Thread Russ
Problem solved.

Michael - you nailed it when you guessed:

if you have set an event listener on a single session that is definitely 
 the 
 actual Session object and not the thread local or the scoped_session or 
 anything


sa.event.listen was being passed the scoped_session instance, not an actual 
Session instance.  This scoped_session usage is all through the code, per 
what seems to be an old way of doing things... that being that the object 
returned by scoped_session() proxies all of the Session methods as a 
convenience, so they were pretty much interchangeable.  Clearly not with 
the event system!

Calling the object to get the actual session prevents any unexpected 
rollback handlers from firing shots from the Session graveyard.
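
A minimal sketch of that fix, assuming a module-level ScopedSession 
registry and the delete_file handler from the original post:

actual_session = ScopedSession()  # calling the registry returns the real Session
sa.event.listen(actual_session, "after_soft_rollback",
                delete_file, once=True)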

Thanks for the help!

Russ

PS: had to dig through some old docs to check my sanity on the 
scoped_session-as-a-Session confusion.  It was clearly a legit way to do it 
back in 0.6:
http://docs.sqlalchemy.org/en/rel_0_6/orm/session.html?highlight=remove#creating-a-thread-local-context

... but new docs have no such comments/recommendations.  I think the new 
method and docs are clearer.

I'm thinking I should scrub all code for this type of usage and fix it up 
so there's no blurry line between Sessions and 
sessionmakers/scoped_sessions.  Do you recommend that?


On Monday, February 9, 2015 at 9:47:23 PM UTC-5, Russ wrote:

 Thanks for the idea, Jonathan!  I was actually discussing such a fallback 
 watchdog on #postgresql earlier today.  Now that I've been having event 
 troubles and it is highlighting atomicity issues with the db/filesystem 
 split, I'm definitely going to implement such a safety net.

 I think the problem is slightly different than yours, though.  In my case 
 the file has been written mid-transaction, and I want to delete it 
 if/when the transaction rolls back.  If my rollback detector is failing (my 
 bug, I'm sure) I have no db flag to check.  The watchdog needs to check the 
 filesystem for dangling files (files with no db reference).  This will 
 likely be a two-phase check (with a big gap) to avoid race conditions, but 
 should be easy enough since file dirs+names are done by database id.  If 
 the id is 123456789 it stores to `/files/123/456/789`. The dir splitting 
 should help minimize requirements on filesystem awareness... glad I did 
 that.

 My bigger problem right now is that the late-firing zombie rollback 
 handler is actually sniping files it shouldn't be!  Rollbacks in 
 subsequent/unrelated sessions are sniping files from sessions that ran to 
 commit without issue.  Watchdog or not, I need to fix that!!

 I really wish databases handled hybrid db/filesystem storage better.  Any 
 errors in file deletion code are deadly.  SQL Server has that nice 
 FILESTREAM object for precisely this situation, but it doesn't seem that 
 others do yet. 





[sqlalchemy] Re: Old event listeners firing on the wrong session?

2015-02-09 Thread Russ
Thanks for the idea, Jonathan!  I was actually discussing such a fallback 
watchdog on #postgresql earlier today.  Now that I've been having event 
troubles and it is highlighting atomicity issues with the db/filesystem 
split, I'm definitely going to implement such a safety net.

I think the problem is slightly different than yours, though.  In my case 
the file has been written mid-transaction, and I want to delete it 
if/when the transaction rolls back.  If my rollback detector is failing (my 
bug, I'm sure) I have no db flag to check.  The watchdog needs to check the 
filesystem for dangling files (files with no db reference).  This will 
likely be a two-phase check (with a big gap) to avoid race conditions, but 
should be easy enough since file dirs+names are done by database id.  If 
the id is 123456789 it stores to `/files/123/456/789`. The dir splitting 
should help minimize requirements on filesystem awareness... glad I did 
that.

My bigger problem right now is that the late-firing zombie rollback handler 
is actually sniping files it shouldn't be!  Rollbacks in 
subsequent/unrelated sessions are sniping files from sessions that ran to 
commit without issue.  Watchdog or not, I need to fix that!!

I really wish databases handled hybrid db/filesystem storage better.  Any 
errors in file deletion code are deadly.  SQL Server has that nice 
FILESTREAM object for precisely this situation, but it doesn't seem that 
others do yet. 



[sqlalchemy] Old event listeners firing on the wrong session?

2015-02-09 Thread Russ
I have a case where I'm using threadlocal scoped_sessions and I'm storing 
a file during a transaction.  I want to delete the file if the transaction 
rolls back for any reason.  To handle this I use the event system, 
something like so:

def write_data(file_path, data):
    with open(file_path, "w") as fp:
        fp.write(data)

    # set up a rollback handler...
    def delete_file(session, previous_transaction):
        os.remove(file_path)    # <--- BREAKPOINT HERE

    ormSess = globalenv.LocalThreadData.OrmSess
    sa.event.listen(orm_session, "after_soft_rollback",
                    delete_file, once=True)

What I'm currently finding is that the delete_file handler is firing off 
long after the session it was hooked to is gone.  I'm definitely calling 
.remove() on the scoped_session.

When I put a breakpoint inside the handler, you can see that there are two 
entirely different sessions.

The rollback is being called (code not shown) on an object at 
0x7f18f8154910:

>>> orm_session.rollback
<bound method scoped_session.do of <sqlalchemy.orm.scoping.scoped_session 
object at 0x7f18f8154910>>

This fires off the handler for the dead-and-gone (or so I thought) session. 
Down the stack and inside the handler you can see that the session passed 
in does not match:

>>> session   # the listener arg
<sqlalchemy.orm.session.Session object at 0x5462b90>

Again, these are within the same call stack.

Why would the handler on an old session be getting called?  Am I 
misunderstanding something here?  Is there a proper way to get rid of all 
event handlers on a session?  I know that listeners can be removed 
individually, but I thought scoped_session.remove() would make this 
unnecessary.

Thanks,
Russ



Re: [sqlalchemy] Old event listeners firing on the wrong session?

2015-02-09 Thread Russ


 do you mean to say event.listen(ormSess) here ?   


Sorry... yes.  Poor transcription/reduction. 

if you have set an event listener on a single session that is definitely 
 the 
 actual Session object and not the thread local or the scoped_session or 
 anything, and that Session is hit by .remove(), it will be garbage 
 collected, and the events associated with it will also go.


Interesting comments on the possibility of registering a listener on the 
thread local or scoped_session.  Given that it *is* firing (albeit from the 
grave when some unrelated session rolls back) I'm pretty sure it is on the 
right one, but I'll look into it.  That is the only location I register a 
rollback handler, too.
 

 It’s not too far fetched that there might be some 
 issue here, but this would need to be demonstrated. Is this issue only 
 under 
 high concurrency, is it relatively infrequent otherwise


Regarding frequency and concurrency, it is definitely not a high 
concurrency thing.  This is happening repeatably in a large construct 
(unit/system testing a fresh system).  I'll look into it further and try 
and reduce it.

and what happens if 
 you actually remove() the event ahead of time? 


I looked at removing event handling at time of scoped_session.remove(), but 
your event removal interface wants the actual handler.  Not easy for me to 
get a hold of in the larger scope, given that the handler is a closure.  I 
can do a nasty hack to check it, once I've sorted the other aspects.

Russ






[sqlalchemy] How to properly declare a postgresql partial index?

2015-02-02 Thread Russ
What is the proper way to declare a postgresql partial index when using the 
@declared_attr decorator?

This form gives me "Cannot compile Column object until its 'name' is 
assigned":

track_type   = Column(SmallInteger, nullable = False)

@declared_attr
def __table_args__(cls):
    return (Index("idx_track_type", "track_type",
                  postgresql_where = (cls.track_type != 0)),
            )

and this form gives me "AttributeError: 'str' object has no attribute 
'_compiler_dispatch'":

track_type   = Column(SmallInteger, nullable = False)

@declared_attr
def __table_args__(cls):
    return (Index("idx_track_type", "track_type",
                  postgresql_where = "track_type != 0"),
            )

From [this post][1] I learned about the use of sqlalchemy.sql.text, so this 
is now working for me:
from sqlalchemy.sql import text as sql_text
# snip
@declared_attr
def __table_args__(cls):
    return (Index("idx_track_type", "track_type",
                  postgresql_where = sql_text("track_type != 0")),
            )

That post also indicated there may be a bug here, but that was almost 2 
years ago.  Is there a better way to do it now?  More importantly, will the 
working code above continue to work in the future?

[1]: http://goo.gl/Fmgynh



Re: [sqlalchemy] How to properly declare a postgresql partial index?

2015-02-02 Thread Russ
I should have also indicated that the addition of sqlalchemy.sql.text fixes 
the small mixin example.  The little script below works, but I don't know 
if it is a sketchy hack, or a safe long term solution:

from sqlalchemy import * 
from sqlalchemy.orm import * 
from sqlalchemy.ext.declarative import declarative_base, declared_attr 
from sqlalchemy.sql import text as sql_text

Base = declarative_base() 

class A_TableDef(object): 
    __tablename__ = 'a'

    id = Column(Integer, primary_key=True)
    track_type   = Column(SmallInteger, nullable = False)

    @declared_attr
    def __table_args__(cls):
        return (Index("idx_track_type2", "track_type",
                      postgresql_where = sql_text("track_type != 0")),
                )

class A_Model(Base, A_TableDef):
    pass

e = create_engine("postgresql://postgres:postgrespw@localhost:5433/edms", 
                  echo=True)

Base.metadata.drop_all(e)
Base.metadata.create_all(e)



Re: [sqlalchemy] How to properly declare a postgresql partial index?

2015-02-02 Thread Russ


  What is the proper way to declare a postgresql partial index when using 
 the @declared_attr decorator? 

 these two concepts aren’t really connected 


Sorry --  I described that poorly, then.  However, I only see the problem 
(in v0.9.8) when I am using @declared_attr as in the case of a mixin.

Your test script works for me, but not when I tweak it to have mixin 
behaviour.  Try this version:

from sqlalchemy import * 
from sqlalchemy.orm import * 
from sqlalchemy.ext.declarative import declarative_base, declared_attr 

Base = declarative_base() 

class A_TableDef(object): 
    __tablename__ = 'a' 
    id = Column(Integer, primary_key=True) 

    track_type   = Column(SmallInteger, nullable = False) 

    @declared_attr 
    def __table_args__(cls): 
        return (Index("idx_track_type", "track_type", 
                      postgresql_where = (cls.track_type != 0)), 
                ) 

class A_Model(Base, A_TableDef):
    pass

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True) 

Base.metadata.drop_all(e) 
Base.metadata.create_all(e) 



Re: [sqlalchemy] How to properly declare a postgresql partial index?

2015-02-02 Thread Russ
Thanks.  The name addition seems tidier to me so I switched to that for the 
moment.

On a somewhat unrelated note, I love the "what's new in 1.0" docs you 
linked.  I had not checked them out yet.  The Performance section is 
particularly awesome and well written.  Aspects of it bring me back to 
putting together my profiling talk [1] from a while ago (optimizing 
SQLAlchemy inserts was a perfect vehicle for the talk).  I'll have to 
update that thing now with the fancy new bulk operations... they look quite 
convenient for decent gain with little pain. Nice!

Russ

[1]: https://speakerdeck.com/rwarren/a-brief-intro-to-profiling-in-python


On Monday, February 2, 2015 at 7:55:03 PM UTC-5, Michael Bayer wrote:


 Russ <russand...@gmail.com> wrote: 

  I should have also indicated that the addition of sqlalchemy.sql.text 
 fixes the small mixin example.  The little script below works, but I don't 
 know if it is a sketchy hack, or a safe long term solution: 
  
  from sqlalchemy import * 
  from sqlalchemy.orm import * 
  from sqlalchemy.ext.declarative import declarative_base, declared_attr 
  from sqlalchemy.sql import text as sql_text 
  
  Base = declarative_base() 
  
  class A_TableDef(object): 
      __tablename__ = 'a' 
  
      id = Column(Integer, primary_key=True) 
      track_type   = Column(SmallInteger, nullable = False) 
  
      @declared_attr 
      def __table_args__(cls): 
          return (Index("idx_track_type2", "track_type", 
                        postgresql_where = sql_text("track_type != 0")), 
                  ) 
  
  class A_Model(Base, A_TableDef): 
      pass 
  
  e = create_engine("postgresql://postgres:postgrespw@localhost:5433/edms", 
                    echo=True) 
  
  Base.metadata.drop_all(e) 
  Base.metadata.create_all(e) 


 with mixins, this will work as is in latest master, see 

 http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html#improvements-to-declarative-mixins-declared-attr-and-related-features.
  


 In 0.9, the declared_attr here is called sooner than we’d like, though 
 this 
 particular example works if we just give the column a name (more complex 
 things will still not work very well with the mixins here though): 

 class A_TableDef(object): 
     __tablename__ = 'a' 
     id = Column(Integer, primary_key=True) 

     track_type = Column('track_type', SmallInteger, nullable = False) 

     @declared_attr 
     def __table_args__(cls): 
         return (Index("idx_track_type", "track_type", 
                       postgresql_where=(cls.track_type != 0)), 
                 ) 

 The version with text() is perfectly fine as postgresql_where isn’t 
 significant anywhere except in the DDL. 



  
  




[sqlalchemy] Re: Inserting Entry Fastest way

2013-03-11 Thread Russ
On Monday, March 11, 2013 4:56:11 PM UTC-4, Arkilic, Arman wrote:

  Hi,
 I am working on a database design in which I am required to use lots of 
 tables with one-to-many relationships. As a consequence of the design, I 
 need to insert thousands of entries. I tried session.add() and 
 session.merge(); however, none of them is fast enough for me to meet the 
 requirements. I was wondering if you can suggest an efficient way through 
 either ORM or ORM+Core.
 Thanks!


I recently did a presentation structured around getting fast bulk inserts 
with SQLAlchemy.  You may find it useful:

https://speakerdeck.com/rwarren/a-brief-intro-to-profiling-in-python

Please note that the focus was on profiling, and not on SQLAlchemy.   The 
SQLAlchemy example just worked out well (with a contrived step or two) as a 
vehicle for showing different profiling steps/gotchas.  Since the focus was 
on profiling, the example is quite simple (a single user table)... but you 
can easily extend on it for your one-to-many tables.

I also didn't 100% scrub the SQLAlchemy code (I threw this together in a 
hurry), so no yelling at me for bad code. :)

Russ
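
(A hedged aside, not from the talk itself: the usual fast path is a Core 
executemany-style insert; `engine`, `user_table`, and the rows here are 
placeholders.)

with engine.begin() as conn:
    # One prepared INSERT executed against a list of parameter dicts.
    conn.execute(
        user_table.insert(),
        [{"name": "user%d" % i} for i in range(10000)],
    )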





Re: [sqlalchemy] Inserting Entry Fastest way

2013-03-11 Thread Russ


 wow that is a great talk, I laughed my ass off and you really got in 
 there, nice job !


Thanks!  As long as you weren't laughing because I did the sqlalchemy all 
wrong!  :)

Russ








[sqlalchemy] How to tell in advance if session.flush() will do anything?

2012-12-19 Thread Russ
I've got some code where I want to assert that the ORM session is perfectly 
clean. ie: I want to know if a flush() will emit SQL.

What is the best way to determine this?

Right now I'm simply checking like this:

if session.new or session.dirty or session.deleted:
    print "flush() actions pending!"

I believe this is a complete way to check this, but I'm wondering if there 
is a more efficient, possibly single-point, way to check this.  One reason 
for asking is that the docs indicate for session.dirty that "this 
collection is now created on the fly each time the property is called"... 
but when all I want to know is "is flush() going to do anything?", it seems 
a waste to generate that collection.

Thanks,
Russ




Re: [sqlalchemy] How to tell in advance if session.flush() will do anything?

2012-12-19 Thread Russ
On Wednesday, December 19, 2012 12:29:50 PM UTC-5, Michael Bayer wrote:

 the Session uses the method session._is_clean() internally to check if 
 going through the flush() steps is warranted, which is roughly equivalent 
 to the check you're doing, though it is doing less work to make this 
 decision


_is_clean() looks perfect, thanks!
 

 There's another caveat, which is that even if _is_clean() is false, that's 
 not a total guarantee that SQL will be emitted on flush.   If you changed 
 one object attribute to be the same value that it already was, for example, 
 that's a dirty event, but when the flush runs through, it will see no net 
 change and not emit an UPDATE. 


No problem... this is actually perfect since it still means that someone 
*tried* to alter something in the session, which is what I'm really after.
 

 This is part of the reason _is_clean() at the moment still has an 
 underscore.   Easier to keep the underscore than to promise a completely 
 predictable usage pattern :). 


I'm fine with that, too... as long as the other part[s] of the reason for 
the underscore don't end up at "this method is totally not reliable for use 
in future versions since it could change at any time." :)

Russ




Re: [sqlalchemy] How to tell in advance if session.flush() will do anything?

2012-12-19 Thread Russ
Whoops - _is_clean() doesn't make it up to a ScopedSession...

Digging into why this is the case I see the registry pattern in there.  Is 
it a safe/valid cheat to access the registry directly and do this?

if not my_scoped_session.registry()._is_clean():
    print "an attempt to modify the session was made!"

Russ




Re: [sqlalchemy] Dropping all tables + more

2012-12-14 Thread Russ
I have just updated SQLAlchemy from 0.7.8 to 0.8.0b2 (the current pip 
default) and the DropEverything recipe 
(http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything) has 
stopped working.  The problem is on the DropTable line with this error:

sqlalchemy.exc.InternalError: (InternalError) cannot drop table 
 test_list_name because other objects depend on it


I'm currently trying to figure out how to fix it with the hint it gives 
(HINT:  Use DROP ... CASCADE to drop the dependent objects too.), but at 
the moment it is not clear to me how to do this.  I'm out of my depth when 
it comes to schema definition language changes.

Does anyone know a quick fix?
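
A hedged sketch of the recipe that usually addresses this on PostgreSQL: 
override how DropTable compiles so it appends CASCADE.

from sqlalchemy.schema import DropTable
from sqlalchemy.ext.compiler import compiles

@compiles(DropTable, "postgresql")
def _compile_drop_table(element, compiler, **kw):
    # Emit "DROP TABLE ... CASCADE" so dependent objects are dropped too.
    return compiler.visit_drop_table(element) + " CASCADE"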




Re: [sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-17 Thread Russ
For future reference, it is not actually a great idea to use @compiles to 
render with AT TIME ZONE as I did above.  When done this way, SQLAlchemy 
renders all references to that column using this, *including* any 
references in a WHERE clause.  eg: when looking for log events later than 
some date you would get:

SELECT
  log.blah
 ,log.time AT TIME ZONE 'EST' -- intended use
FROM
  log
WHERE
  log.time AT TIME ZONE 'EST' > foo   -- not intended

As said above, the problem here is that the custom compilation happened in 
both the column specification *and* the WHERE clause.  This is not 
surprising (with hindsight), but it prevents any index on log.time from 
being used (unless there is an appropriate functional index).

For this case, I only wanted it applied to the column spec, not the WHERE, 
but I don't think it is currently possible to differentiate the two and 
compile differently in each location...  or is it?  I looked into 
compiler.statement et al to figure out the compilation context, but could 
not.

Russ

PS: For what it's worth, for this specific case in PostgreSQL, this type of 
functionality is better suited to appropriate use of the timestamp with 
time zone data type, and correct session usage of SET TIME ZONE and/or 
PGTZ.  I'm currently wrestling with the use cases here instead.
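
For anyone pursuing that route, the session-level knob is roughly this (a 
sketch; "session" is whatever Session instance is in play, and the column 
must be timestamptz):

#Render timestamptz values in EST for everything on this connection...
session.execute("SET TIME ZONE 'EST'")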





[sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-12 Thread Russ
I currently define a custom column type for ensuring that dates stored to 
the DB are offset-aware UTC, and convert to the appropriate timezone on 
retrieval, using a class something like this:

import pytz
from pytz import UTC
import sqlalchemy as sa

class UTCEnforcedDateTime(sa.types.TypeDecorator):
    """DateTime type that ensures that stored datetime objects are
    offset-aware UTC.  Values retrieved from the database are converted to
    the local_tz (if the class var is set).
    """

    impl = sa.types.DateTime
    local_tz = None  #eg: pytz.timezone("EST")

    def process_bind_param(self, value, engine):
        if (value is not None) and (value.tzinfo != UTC):
            raise ValueError("only offset-aware UTC datetimes are allowed.")
        return value

    def process_result_value(self, value, engine):
        if value and self.local_tz:
            #Make the DB value UTC right away (because it is!)...
            utcDT = UTC.localize(value)
            value = utcDT.astimezone(self.local_tz)
        return value
###

This has worked extremely well to date, but I'm now finding that the 
process_result_value way of doing things is (relatively) slow for large 
numbers of records because the conversion happens in python after the data 
is retrieved.  ie: process_result_value is called for each appropriate 
value in the resulting records.  For example, a profile run showed 600 ms 
total SQLAlchemy query time, with ~500 ms spent in process_result_value 
doing tz conversions (!!).

I'm trying to figure out how to change this so that, rather than converting 
in python, the conversion happens at the database layer using AT TIME ZONE 
syntax.  I've tried using @compiles as follows:

#THIS CODE DOES NOT WORK...
from sqlalchemy.ext.compiler import compiles

@compiles(UTCEnforcedDateTime)
def compile_UTCEnforcedDateTime(element, compiler, **kwargs):
    tz = UTCEnforcedDateTime.local_tz
    if tz:
        ret = "%s AT TIME ZONE '%s'" % (element.name, str(tz))
    else:
        ret = compiler.process(element) #NOT RIGHT - what is?
    return ret

However, this is not right for a few reasons...

The first problem is that the special compilation is executed when the 
table is created that has this column type.  I only want the special 
compilation when the compilation is for rendering a query, but I can't 
figure out how to determine this.  The docs (http://goo.gl/N4igi) state 
that the compiler argument "can be inspected for any information about the 
in-progress compilation, including compiler.dialect, compiler.statement", 
but I can't figure it out.

The second problem is that I'm not certain how to pass through to the 
default compilation that would have been done without my interception.  Is 
there a generic passthrough?  compiler.visit_type_decorator seems 
promising, but I really want carry_on_like_I_never_intercepted_this(). 
 Or should I call visit_datetime?

How do I fix this?




Re: [sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-12 Thread Russ
On Thursday, July 12, 2012 3:12:26 PM UTC-4, Michael Bayer wrote:

 There is no functionality right now that allows special SQL to be 
 automatically associated with a type, as associated with an enclosing 
 column expression, at query time.  There's a ticket to begin making this 
 kind of thing possible which is http://www.sqlalchemy.org/trac/ticket/1534. 
   When using @compiles for a type, the only time that compilation takes 
 effect is when the type itself is rendered, not its containing column - 
 that means, only CREATE TABLE and CAST expressions.


I understand this better now, thanks.  Am I right in understanding that, 
after the initial type is passed to the Column constructor, there is no 
real connection between the ColumnClause specified in the table definition 
and the column type (TypeEngine)?

There might be some ways to use @compiles around Column expressions which 
 check the type too, but this might be a little tricky (or not).


I've got it mostly working using this method, but have hit a few 
roadblocks.  It was pretty easy to intercept Column definitions that were 
given the UTCEnforcedDateTime type and, with this done, then use a DTColumn 
wrapper type instead with an associated compiler, something like this:

class DTColumn(sa.Column):
    pass

@compiles(DTColumn)
def compile_DTColumn(element, compiler, **kw):
    tz = UTCEnforcedDateTime.local_tz
    elementName = '"%s"."%s"' % (element.table.name, element.name)
    ret = "%s AT TIME ZONE '%s'" % (elementName, str(tz))
    return ret

This does what I want (great!)... but this isn't quite right, yet.

At first I followed the example [1] for @compiles usage with ColumnClause 
classes.  This indicates to use element.name.  This worked for many 
queries, until I ran into a query where the column name became ambiguous 
and I needed to specify the table name as well.  Digging around the 
'element' properties, I then added in element.table.name (as per the code 
above) to handle this.  This worked for many more queries.

However... element.table.name doesn't seem like it can be used directly. 
 It occasionally comes up with strings like "%(79508240 log_event)s", which 
clearly is getting substituted in the guts somewhere when I don't do this.

What is the proper way to get the fully qualified element name?

Also, I put in the wrapping quotes on the column name since I needed to for 
a few columns, but is there a proper way to escape this in SQLAlchemy? 
I expect this is dialect-specific, since it only seems to get done when 
needed when left alone.

I'd also look into why the in-python TZ conversion is so slow, it seems a 
 little suspect that it is 5x slower than the rest of the entire operation 
 overall.   I'd check things like, the result processor isn't being called 
 repeatedly for the same value, stuff like that.


I was also quite surprised, but it is legit.  On re-profile just now it is 
not 80% of my operation, but more like 40%.  See the light green part to 
the left of this runsnakerun output:
http://static.inky.ws/image/2379/image.jpg

There is some other stuff happening there (which only increase the 
tz-conversion percentage when considered), but the upshot of it all is that 
I do want to ditch the process_result_value overhead for this particular 
query, and learn more about the SQLAlchemy internals in the process. :)

Russ

[1] http://docs.sqlalchemy.org/en/rel_0_7/core/compiler.html#synopsis




Re: [sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-12 Thread Russ


 However... element.table.name doesn't seem like it can be used directly. 
  It occasionally comes up with strings like "%(79508240 log_event)s", which 
 clearly is getting substituted in the guts somewhere when I don't do this.


To clarify, I've just determined that this is specifically happening when 
type(element.table) == sa.sql.expression.Alias.  I'm still stumped, but 
that is more info, anyway.





Re: [sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-12 Thread Russ
On Thursday, July 12, 2012 5:42:12 PM UTC-4, Michael Bayer wrote:

 let the compiler do it:

 elementName = compiler.process(element, **kw)


That causes an infinite loop since it tries to compile DTColumn itself.

I've tried stuff like super(DTColumn, element).compile(), but that doesn't 
help (and was a guess anyway).




Re: [sqlalchemy] Custom in-query-only rendering for TypeDecorator types?

2012-07-12 Thread Russ


 oh right.   yeah call compiler.visit_column(element, **kw).


Perfect... that did the trick, thanks!!!

Since the end result isn't much more than this example:
http://docs.sqlalchemy.org/en/rel_0_7/core/compiler.html#synopsis

it may be worth updating it to replace 'element.name' with 
'compiler.visit_column(element, **kw)', since it may help future people.
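
For reference, the working version of the earlier compile_DTColumn sketch 
ends up roughly like this (assembled from the thread):

from sqlalchemy.ext.compiler import compiles

@compiles(DTColumn)
def compile_DTColumn(element, compiler, **kw):
    #visit_column produces the properly quoted, alias-aware column name...
    elementName = compiler.visit_column(element, **kw)
    tz = UTCEnforcedDateTime.local_tz
    return "%s AT TIME ZONE '%s'" % (elementName, str(tz))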

Russ




[sqlalchemy] Aliasing a constant within a recursive CTE

2012-07-09 Thread Russ
I'm trying to use the new CTE support in SQLAlchemy in a way that will
allow me to reference the recursion level as a field in the query
result.  This is easy in a straight SQL CTE by aliasing a constant in
the non-recursive part, and then referencing the alias in the
recursive part.  The limited example below (tested in PostgreSQL)
demonstrates this with a single field, and yields 0-10 inclusive:

WITH RECURSIVE cte AS (
SELECT 0 as x
  UNION ALL
SELECT cte.x + 1 FROM cte WHERE cte.x < 10
)
SELECT * from cte;

I can't figure out how to replicate this in SQLAlchemy, though.
Specifically, I'm stumped on how to represent the "0 as x" part in
SQLAlchemy.  How do you do it?  I've tried variations of this:

select(0).alias(name="x")

as column specs, but with no luck so far.




[sqlalchemy] Re: Aliasing a constant within a recursive CTE

2012-07-09 Thread Russ
 select(literal(0).alias("x")) should do it, see the documentation at ...

Thanks... literal() gave me a location on which to attach a label I
can reference.  I'm closer, but still can't get this to work.

Here's my closest so far (with SQLAlchemy 0.7.8):

import sqlalchemy as sa
#set up the non-recursive part of the query (sort of?)...
cte = sa.select(
    [sa.literal(0).label("x"), ]
).cte(name = "cte", recursive = True)
#bring in the recursive part (sort of?)...
cte = cte.union_all(
    sa.select([cte.c.x + 1, ]).\
        where(cte.c.x < 10)
)
#select from the resulting CTE...
statement = sa.select([cte.c.x, ])
print statement

which yields...

WITH RECURSIVE cte(x) AS
(SELECT :param_1 AS x),
cte(x) AS
(SELECT :param_1 AS x UNION ALL SELECT cte.x + :x_1 AS anon_1
FROM cte
WHERE cte.x < :x_2)
 SELECT cte.x
FROM cte

which isn't right at all, and is confusing to me.  Especially odd is where
the UNION ALL ended up, and that the CTE wrapping brackets are totally
wrong.

I'll keep at it.  It is hard to assemble this piece by piece and check
as I go since adding the .cte seems to prevent printing of progress
until the very end.




[sqlalchemy] Re: How to keep ORM Sessions in sync when using the SQL expression language as well?

2011-10-19 Thread Russ
Another wrinkle to this is that if I already have relationship data within 
the ORM, but then add records outside of the ORM with the expression 
language, I can't figure out how to reconcile this efficiently.

As a specific example, if I add the snippet below to my original example, 
you can see the ORM falls out of sync.

#Add some new emails to Joe outside the ORM...
initial_mail_count = len(joe.addresses)
new_mail_count = 2
new_emails = ["newjoe%d@example.com" % i for i in range(new_mail_count)]
emailValues = [dict(FK_user_id = joe.id, email = addr) for addr in new_emails]
sess.execute(Address.__table__.insert(), emailValues)

#Joe's new emails are not in joe.addresses...
# - because the ORM has zero awareness of the direct execute call
assert len(joe.addresses) == initial_mail_count

#I can get them by expiring all of Joe...
# - is there a way to do this without expiring Joe?
# - Can the relation be expired directly?
# - Better would be to inform the ORM of new data (instead of expiring old
#   data), but I'm looking for workarounds.
sess.expire(joe)
assert len(joe.addresses) == (initial_mail_count + new_mail_count)

As per the code and comments, I can bring it back into sync by expiring the 
top level object, but this is clearly not very efficient since it results in 
a whole query for the top level object, as well as any additional eager 
loads that have been configured for that object.

Can a relation be expired directly?  I'm searching hard for how to do this 
and failing.  I'm also still very interested in whether there is some other 
efficient way to re-sync the ORM with transactions performed outside of it.

I have updated the pastebin sample code to include the above snippet as 
well:
http://pastebin.com/eCDSm0YW
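
For later readers: Session.expire() accepts an optional list of attribute 
names, which answers the question above.  A minimal sketch:

#Expire only the relationship; the next access emits one SELECT for the
#collection instead of refreshing all of joe...
sess.expire(joe, ['addresses'])
assert len(joe.addresses) == (initial_mail_count + new_mail_count)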

Russ




Re: [sqlalchemy] How to keep ORM Sessions in sync when using the SQL expression language as well?

2011-10-19 Thread Russ
Thanks very much for the response... lots to chew on here.

well pretty much being savvy about expiration is the primary approach.   The 
 rows you affect via an execute(), if they've been loaded in the session 
 they'd need to be expired from memory.


I understand this somewhat and had done that, but am still confused by one 
thing: in my pure ORM case (for joe), how did the identity map get a hold 
of the auto-increment ids when there was no apparent emitted SQL?  It seems 
there wasn't any, although I suppose it could have been done behind the 
scenes and not logged?  Either that, or the insert statements are returning 
the ids somehow... although I didn't see anything indicating that.  Whatever 
method the ORM uses to get the ids from the INSERT statements it emits is 
where I was wondering if I could get the data to stuff the identity-map 
without needing to expire anything and/or re-query.
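
For later readers: on PostgreSQL the ORM typically gets those ids back via 
INSERT ... RETURNING (implicit returning is on by default there), and the 
expression language can ask for the same thing explicitly.  A sketch for a 
single-row insert, reusing the names from the earlier snippet (executemany 
with RETURNING was not supported at the time):

stmt = Address.__table__.insert().returning(Address.__table__.c.id)
new_id = sess.execute(stmt, dict(FK_user_id = joe.id,
                                 email = "another@example.com")).scalar()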

Unless you can be coarse grained about it and expire everything (as occurs 
 normally after a commit()), or large chunks of things, you'd need to work 
 out specific situations as needed.  Of course the best scenario is when you 
 aren't intermingling ORM state with SQL-emitted UPDATE statements so much. 
  Query.update() and Query.delete() were introduced to help with this as they 
 can apply the same filter() criteria to the objects within the Session, 
 expiring those that match - it can evaluate simple SQL criterion in memory 
 for this purpose.


I had been looking at Query.update and it looked promising and dug around 
for an insert equivalent, but there wasn't one so I ended up dropping down 
to the SQL expression language.

 For example, with the ORM you can't really do a batch/bulk insert (that I 
 know of), but you *can* mix in some SQL expression language to achieve it. 
  Here is a complete example that shows this for the standard Users/Addresses 
 example:
 http://pastebin.com/BArU6hci

 so there, your options are:

 1. expire bob.addresses ( see below, it appears you weren't aware of this)
 2. use lazy=dynamic on bob.addresses, so that SQL is emitted each time 
 the relationship is accessed.


I'm not sure how those apply to doing a bulk insert with the ORM... I must 
be missing something?  "dynamic" isn't an option for me as I use the 
relation frequently... I just need to sync up after a few selected insert 
locations.  For the expiry of bob.addresses I don't know what to say... I 
had some sort of weird/idiotic mental block as I read and re-read the docs 
on expire and the docstring and still didn't get it for some reason.  I feel 
shame.  Thanks for the pointer to the obvious second argument. :(

 Although the latter behaviour isn't really surprising, can the query be 
 avoided somehow?  Is there a way to manually update the session/identity-map 
 with the info?

 You can put things straight into the identity map using Session.add(), 
 assuming the object has a database identity.   Session.merge(obj, load=False) 
 does it too and is normally used to merge in state from a cache into the 
 Session where the target identities may already be present.


Thanks - I'm not sure I 100% get it.  I think it is simply that I can make 
an ORM object instance not associated with a session (I think you call this 
detached), directly assign the PK (if I know it), and then when I call 
Session.add() the ORM will trust me and use the provided PK for integrating 
the object into the identity-map.  Is that right?

If this is the case, I still have the unfortunate problem of not knowing the 
autoincrement PK values without a fresh query.  And before it comes up, I do 
actually need autoincrement PKs versus a more natural candidate key in 
almost all cases.  Reasons here include space/time-efficient joining to many 
separate tables, as well as allowing renaming of the natural candidate keys 
without changing record identity (or the many FKs).  I'm continually 
evaluating whether I can ditch autoinc sequence PKs and it does not make 
sense in most cases.

But to get identity on the object and have it be clean as far as pending 
 changes you need to tinker.  The identity key part needs to be via 
 instance_state(obj).key = 
 object_mapper(obj).identity_key_from_instance(obj), so you can see there 
 you're starting to dive into private APIs.   The fixed attributes with no 
 history you can assign via set_committed_value: 
 http://www.sqlalchemy.org/docs/orm/session.html#sqlalchemy.orm.attributes.set_committed_value,
  this is more of a public API as it's there to help people write custom 
 loader logic for special cases.   There would be nothing controversial about 
 making a more public api to assign identity to an object but we don't 
 usually add things without a use case we see fit; otherwise people don't 
 know why such a method is there and if/when they should be using it.   The 
 use cases also drive how the API would look, as there are several ways a 
 put an object in the identity map API could look here.

 


[sqlalchemy] How to keep ORM Sessions in sync when using the SQL expression language as well?

2011-10-18 Thread Russ
I often mix up the SQL expression language with the use of an ORM session, 
and it is great that SQLAlchemy more than supports this.

But... what are the recommended ways to keep the session in sync with what 
you do with the SQL expression stuff?

For example, with the ORM you can't really do a batch/bulk insert (that I 
know of), but you *can* mix in some SQL expression language to achieve it. 
 Here is a complete example that shows this for the standard Users/Addresses 
example:
http://pastebin.com/BArU6hci

In there I do a semi-bulk insert of addresses using both the ORM way and 
the SQL Expression Language way.  With the former way, the inserted 
addresses are available *with their ids* without ever seeming to have a 
query emitted to get them.  With the latter way (SQLEL) the ORM needed to 
emit a query to get the addresses (even when I don't ask for id), presumably 
to update its identity map.

Although the latter behaviour isn't really surprising, can the query be 
avoided somehow?  Is there a way to manually update the session/identity-map 
with the info?

In general, is there a better way to do what I'm doing?  The example is 
obviously a simplified one... my actual use case is batch inserting 
thousands of records this way and I'd like to be able to keep the ORM and 
avoid the extra query if I could.




[sqlalchemy] Does the MRO matter when using declarative with mixins?

2011-10-17 Thread Russ
All the declarative examples have DeclarativeBase as the first/left base 
class.  Does it need to be?  I've swapped it in several code locations and 
experimented and it seems to be fine, but there's a lot going on with 
declarative and I'm vaguely paranoid about messing it up subtly by altering 
the MRO.

ie: This is normal, and is how all declarative examples are done:

class Foo(Base, MyMixin):
    #snip

but is this also ok, or will it cause issues somehow?

class Foo2(MyMixin, Base):
    #snip

The reason I'm looking at this is that I've got a case with multiple mixin 
classes: when the top mixin used doesn't have an __init__ definition, 
but a lower-level class does, having the declarative base first fails 
because its __init__ doesn't use a (*args, **kwargs) catchall.  I also 
wonder how super() will end up working out.

Since that last bit probably wasn't very clear, here is a small but full 
example of what I mean:
http://static.inky.ws/syn/335
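
In rough form, since that paste may not last, the situation is something 
like this (a simplified reconstruction with hypothetical names):

class MyMixin(object):
    def __init__(self, special, **kwargs):
        self.special = special
        super(MyMixin, self).__init__(**kwargs)  #cooperative: next in MRO

class Foo2(MyMixin, Base):  #mixin first, so MyMixin.__init__ is reached
    __tablename__ = "foo2"
    id = sa.Column(sa.Integer, primary_key = True)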

The __init__ args issue is resolved by putting the Mixin first.  I 
just want to make sure that is fine.




Re: [sqlalchemy] Does the MRO matter when using declarative with mixins?

2011-10-17 Thread Russ
Great - thanks for the response.   This was causing me more brain ache than 
I care to admit.  My paranoia was rooted in the fact that the docs did seem 
to go out of their way to put the Base first (without specifically saying 
so), which is awkward as you say.

Much appreciated.




[sqlalchemy] Dropping all tables + more

2011-09-23 Thread Russ
Is the DropEverything recipe still the best way to drop everything via 
SQLAlchemy?

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything

The recipe is ancient (almost a year old! :) ) and I just want to check if 
there is a better way now.

How I got here (for searchability)...

When doing unit testing on a primarily PostgreSQL application I routinely 
drop all tables in a database with this:

def DropAllTables(EngineURL):
    import sqlalchemy
    engine = sqlalchemy.create_engine(EngineURL)
    meta = sqlalchemy.MetaData(engine)
    meta.reflect()
    meta.drop_all()

However, I've recently added tables that refer to each other (which required 
using use_alter on the ForeignKey definition to avoid problems) and 
drop_all ends up raising a CircularDependencyError.  In trying to resolve 
this I found these useful links, which ultimately led me to the 
DropEverything recipe:

http://www.luckydonkey.com/2007/11/23/postgresql-sqlalchemy-dropping-all-tables-and-sequences/
http://blog.pythonisito.com/2008/01/cascading-drop-table-with-sqlalchemy.html
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DropEverything




Re: [sqlalchemy] Seemingly inconsistent results with transaction isolation

2011-09-09 Thread Russ
Thanks for clarifying with a full answer!

 This should not be construed to mean that you should only use the ORM
 with SERIALIZABLE isolation. It's instead just something to be aware of.
 You can of course expire any object or individual attribute at any time.
 In this case, if you were to add s2.expire(s2c1) on line 88, you'd then
 get the "222" value on the next check as it would emit a SELECT.

But it does seem to emit a select!  The output surrounding the line 88 query 
is below:

Querying s2 again (whose transaction is still open) to see what it gets for 
c1...
SQL  SELECT counter.id AS counter_id, counter.name AS counter_name, 
counter.count AS counter_count 
SQL  FROM counter 
SQL  WHERE counter.name = %(name_1)s
SQL  {'name_1': 'C1'}
s2 gets C1.count = 1

... so what were those SQL emissions about?  They did not seem to arrive at 
the database (based on the result, anyway).

I had thought it may be identity map stuff, regardless of what SQL I saw 
being emitted, which is why I tried the direct query in line 96.  The 
combination of "how the heck did SQLAlchemy know to emit SQL here?" and "ok 
- SQL was emitted, but why didn't it get the response I expect for READ 
COMMITTED?" both drove the line 96 check.

Note that if I add the s2.expire(s2c1) before the line 88 query as you 
suggest, the seemingly emitted SQL output is no different, except that the 
value actually gets updated:

Querying s2 again (whose transaction is still open) to see what it gets for 
c1...
SQL  SELECT counter.id AS counter_id, counter.name AS counter_name, 
counter.count AS counter_count 
SQL  FROM counter 
SQL  WHERE counter.name = %(name_1)s
SQL  {'name_1': 'C1'}
s2 gets C1.count = 222

What is happening?  I always trusted the logger output for letting me know 
when the DB was actually being hit, but now I'm not sure I can.

Thanks,
Russ




Re: [sqlalchemy] Seemingly inconsistent results with transaction isolation

2011-09-09 Thread Russ


 when you say query(Class).filter(criterion).one(), that always emits SQL. 
  It's an open-ended query, and SQLAlchemy doesn't know that s2c1 is the 
 object which corresponds to the SQL you're about to emit.   When the row 
 comes back, it then extracts the primary key from the incoming row, 
 determines that identity is already in the identity map and is unexpired, 
 and the row is skipped


It ignores new data?  I'm clearly going to need to be even more careful 
about this when I expect 'read committed' behaviour from the database.
 

 (that's the disconnect here - incoming rows are not used if the existing 
 identity exists and is unexpired).


Is there a global, or per-session, way to change this 
ignore-new-data behavior (aside from manually expiring objects prior to 
querying for data)?  Or, even more brutal, can the identity map feature be 
turned off altogether on a per-session basis?

For those cases when 'read committed' and the associated 
transaction-to-transaction contamination is actually the desired behavior, 
having to manually tell SQLAlchemy to expire everything all the time so that 
it doesn't ignore new incoming data is quite an extra layer to worry about 
on top of the already treacherous concurrency issues around 'read 
committed'.  I always knew the identity map was something to be wary of with 
concurrency since it is effectively another layer of isolation on top of 
what the DB is set up to do, but didn't consider the fact that new data that 
is read would be ignored.
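
(The coarse-grained expire-everything option is at least a one-liner; a 
sketch:)

s2.expire_all()  #every loaded instance reloads on next attribute access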

From earlier you said:

 If a concurrent transaction were to come in from underneath and change 1 
 to 222 while you were still in your ongoing operation, you might be pretty 
 upset


But I shouldn't be!  That is *exactly* how 'read committed' is supposed to 
behave.  If I don't want that, I should be setting repeatable read or 
serializable.   Although I guess it's true that most developers don't 
quite know about transaction isolation levels (I sure didn't until fairly 
recently) and might be upset with SQLAlchemy at the first layer, if it did 
happen to them...

if OTOH you had said query(Class).get(pk), that would pull from the identity 
 map directly and not even hit the DB in this case.


I usually end up using one() and never use get()... I'll look at using get() 
now, as frequently my one() criteria was only on the PK.  And I'll obviously 
be wary of isolation issues with this.  Usually having the identity map is 
awesome.
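
A quick contrast, using the names from the earlier test script (counter_id 
here is a hypothetical primary key value):

#get() checks the identity map first and may emit no SQL at all...
c1 = s2.query(Counter).get(counter_id)
#one() always emits SQL, but the incoming row is ignored if an unexpired
#instance with that identity is already in the map...
c1 = s2.query(Counter).filter(Counter.name == 'C1').one()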

My main issue was/is that I saw SQL being emitted, was expecting 'read 
committed' behaviour, and didn't get it.  Now I completely know why... 
thanks again.

Russ




[sqlalchemy] Seemingly inconsistent results with transaction isolation

2011-09-08 Thread Russ
I was getting some strange transaction isolation behavior with
SQLAlchemy (0.7.2), psycopg2 (2.4.2), and PostgreSQL 8.4.  In order to
investigate I wrote up a usage sequence that does this:

1. starts a transaction with a session (s1)
2. starts another transaction with a session (s2)
3. updates a value in s1 and commits it
4. reads the value back in s2 using the ORM...
  - and it does not get the updated value, although with READ COMMITTED
    it should
5. reads the value back in s2 using a direct s2.execute statement...
  - and it DOES get the updated value (??)

I don't understand why the ORM-triggered read (which does emit SQL) is
not getting the updated value, but the direct statement is getting the
update.

When the logger emits SQL I thought it always sent the SQL to the
database.  Is this a correct assumption?

Here is my complete (and somewhat verbose) test code that shows the
behaviour...
http://static.inky.ws/syn/325

The postgres engine string obviously needs to be changed
appropriately.  And WATCH OUT for the drop tables code in there for
anyone who tries this, if nutty enough to point at a live database.

What is going on?  I expect/hope it is something dumb on my end, but I
just don't see it.

Thanks!




[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-07-02 Thread Russ
 because an attribute can be mapped to multiple columns, i.e.
 http://www.sqlalchemy.org/docs/orm/mapper_config.html#mapping-a-class

Ahh... thanks.

 as I did this example and thought back upon how often people want to poke 
 around the mapper configuration, i started trying to think of how to turn the 
 mapper hierarchy into a more of a well described DOM-type of system.    In 
 this case one thing I thought of would be to add a first_column accessor to 
 ColumnProperty.


Allowing multiple columns does make it more DOM-like.  But once you
know ColumnProperty.columns is a sequence (eg: I didn't), columns[0]
is just as good as first_column.

I know very little about your internal implementation, but I might as
well toss a thought out there on navigating a mapper configuration...
it would be more intuitive/explicit to me if there were a
DbColumnProperty base class, and two derived classes:
SingleDbColumnProperty and MultiDbColumnProperty (and I guess you
would need RelationProperty, and probably more).  The former with a
simple .column attribute, and the latter with your current .columns
sequence attribute.  The current situation of checking for the
existence of a .columns attribute to find real persistent data is
not that intuitive.  It would be nicer to look through the mapper
properties for DbColumnProperty instances.  Also, I would bet that the
vast majority of implementations would be SingleDbColumnProperty, so
the niggly detail of columns[0] wouldn't exist (although that would
obviously break a lot of existing code, unless SingleDbColumnProperty
always had a single-entry .columns... but then that is what you have
now).  Overall it is great right now and my commentary is probably
pointless and/or naive.




[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-30 Thread Russ
I discovered the sqlalchemy.orm.validates decorator, which executes
when I want it to (on attribute assignment) so I got out my hacksaw
and tried to make it work in one fell swoop... implementing automatic
UTC assignment for all UTCEnforcedDateTime columns.

I'm not comfortable with it yet... but a mostly-inclusive demo code
snippet is here:
http://pastebin.com/wB4BLzax

Where I intend to do that last bit of hackery (utcConversionList and
__sa_validators__ work) by introspecting through a large number of
classes (all having the TableDefMixin) and looking for
UTCEnforcedDateTime Column definitions that should be added to the
utcConversionList for the class.  Method tbd there, but not important.

Although it is doing what I want it to do (hands off UTC assignment
where expected), I'm really not comfortable with it.  It seems
*highly* sketchy, quite indirect,  and it seems like there must be a
better way to simply set a default validator for a custom Column
type.  Is there?

It would be much tidier to put the validator into the
UTCEnforcedDateTime class.  If not... can I rely on the direct setting
of __sa_validators__ working in the future?  Using the orm.validates
decorator in each and every class is obviously the better choice for
future compatibility, but if I want one-shot assignment as I do above,
I can't use the orm.validates because it is too late to set
__sa_validators__ directly as orm.validates does, and I needed to be
brutal and go to the __dict__ directly.

I expect there is a simple answer and all my hacking/exploration
related to my posts above was pointless... except it has been highly
educational to root through the SQLAlchemy implementation here.
Tidbits learned include now understanding descriptors (never had cause
to before), and learning about attrgetter and attrsetter.  Stepping
into the attribute assignment (InstrumentedAttribute.__set__) was
highly confusing until reading up on those bits!!  instance_state()
and instance_dict() instantly returning was somewhat mysterious for a
while!

Thanks,
Russ




[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-30 Thread Russ
 I've added an example for you as well as Jason who asked almost the same 
 question earlier, which illustrates the TypeDecorator in conjunction with an 
 attribute listener that is applied to all occurrences of the target type, an 
 approach also used by the mutable attributes extension, and which we may look 
 into adding as more of a built in feature in the future although the recipe 
 should be straightforward.

 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrenc...

Thanks!!  I would *never* have associated Jason's question with mine,
but no matter... the end result is perfect.

I had actually gotten it completely working with my messy
__sa_validators__ hackery (and a heck of a lot of questionable
introspection), but I now have it working with the event system as you
have suggested.  It is MUCH cleaner and I like it a lot.

Here is my final implementation:
http://pastebin.com/33Zkfz1h

The only thing I'm still unsure of in the code is why prop.columns
is a collection and it required checking columns[0], but I can either
just live with that or look into it later.

Also - prior to your suggestion I was still on SQLAlchemy 0.6.6 and
this prompted me to make the leap to 0.7.1 ... all code now working
fine after that transition with only minor hiccups.

That was an excellent introduction to the new event system as well...
thanks again!

Russ

--
Code is reproduced below as well, in case the pastebin ever fails:

from pytz import UTC
import sqlalchemy as sa
import sqlalchemy.orm as orm
import globalenv

class UTCEnforcedDateTime(sa.types.TypeDecorator):
    """DateTime type that ensures datetime objects are offset-aware UTC."""
    impl = sa.types.DateTime

    def process_bind_param(self, value, engine):
        if (value is not None) and (value.tzinfo != UTC):
            raise Exception("Data MUST be offset-aware UTC!")
        return value

    def process_result_value(self, value, engine):
        if value is not None:
            return value.replace(tzinfo = UTC)
        return value

def _EnsureUTC(target, value, oldvalue, initiator):
    """'Set' event handler for all UTCEnforcedDateTime columns.

    This handler simply ensures that the provided 'value' is an
    offset-aware UTC datetime.

    SQLAlchemy validator (for @validates) for use with UTCEnforcedDateTime.

    Use of this validator will convert times to UTC on assignment (so that
    the UTCEnforcedDateTime implementation doesn't throw an exception on
    commit).
    """
    dt = value
    if dt == None:
        return dt
    if dt.tzinfo == UTC:
        return dt
    tz = globalenv.LocalTZ  #pytz timezone that naive datetimes are in
    #Convert naive time to local time...
    # - normalize is needed to deal with DST funkiness
    dt_tz = tz.normalize(tz.localize(dt))
    return dt_tz.astimezone(UTC)

@sa.event.listens_for(orm.mapper, "mapper_configured")
def _Configure_UTCEnforcedDateTime_Setter(mapper, class_):
    """A "mapper_configured" listener that is triggered every time an
    ORM'd class mapper is registered (once per class).

    This event handler makes sure that any defined UTCEnforcedDateTime
    columns are always receiving data with properly determined UTC
    offset-aware values (with the use of the _EnsureUTC handler).

    Adapted from sample code here:
    http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrencesOfType
    """
    for prop in mapper.iterate_properties:
        if hasattr(prop, 'columns'):  #it is a column (not a relation)
            if isinstance(prop.columns[0].type, UTCEnforcedDateTime):
                #Set up a listener for datetime setting events...
                classAttr = getattr(class_, prop.key) #the attr of the mapped class
                sa.event.listen(
                    classAttr,     #We want to listen for when classAttr...
                    "set",         #has a "set" event (like a property setter)...
                    _EnsureUTC,    #and use _EnsureUTC as the handler...
                    retval = True) #and allow _EnsureUTC to change the attr with its return







[sqlalchemy] How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-29 Thread Russ
I have a typical case where I want to ensure that datetime values sent
to the database are UTC, and values read from the database come back
as offset-aware UTC times.  I see several threads on the issue (eg:
http://goo.gl/FmdIJ is most relevant), but none address my question
that I can see.  UTC datetimes are what I'm working with, but not
really what the question is about (although hopefully the code may
help others looking for UTC stuff!).

I originally tried implementing this with a custom type (UTCDateTime)
where I did appropriate conversions on the data to/from the database.
That used code something like this:
http://pastebin.com/xSrV9QcS

That implementation worked well when creating new objects and
committing them, and worked well when querying for existing objects.
However, it failed when committing objects that I updated (query, make
a change, commit).  This is because the UTC conversion only happens on
commit, and some SQLAlchemy internals were doing a comparison on the
new value (naive datetime) and the old value (offset-aware datetime
from DB) and raising an exception.

When I realized that process_bind_param only happens on commit, I
decided to switch my strategy to simply confirming that all incoming and
outgoing datetime values are offset-aware UTC using this simpler code:
http://pastebin.com/gLfCUkX3

but in order for that to work, I also needed to start looking into
how to implement a setter on all of my datetime properties (there are
lots) that would do the UTC assignment/calculation.  I use declarative
mixins which seemed to make this more complicated, but a setter can be
set up using a solution something like this:
http://stackoverflow.com/questions/5821947/5822301#5822301

However, I have a LOT of properties to do this on and it would be far
better to do it at a single choke point like I initially tried.  So...
what I want is all assignments to a UTCEnforcedDateTime column/
property (self.start_date = x) to be converted to UTC at setting time,
not at commit time.  How do I do this?

I can't see how to do it in the docs (coerce_compared_value was
initially tempting but not a fit).  Digging into it manually I see
that the existing auto-instrumentation of my UTCEnforcedDateTime dips
through InstrumentedAttribute, ScalarAttributeImpl, etc, but I can't
locate a good hook point to put my fixing code for non-UTC
assignments.

Does anyone have any recommendations?




[sqlalchemy] Re: How to get setter-like behaviour universally across a custom type? (with UTC DateTime examples)

2011-06-29 Thread Russ
 When I realized that process_bind_param only happens on commit, I
 decided to switch my strategy to simply confirming that all incoming and
 outgoing datetime values are offset-aware UTC using this simpler code:
 http://pastebin.com/gLfCUkX3

Sorry - I messed up that code segment on edit for pastebin.  Here is a
cleaned up version:
http://pastebin.com/HcnnmXtV




Re: [sqlalchemy] Unnecessary SQL emitted after a commit to get a PK?

2011-01-13 Thread Russ
On Wednesday, January 12, 2011 2:16:00 PM UTC-5, Michael Bayer wrote:

 Suppose a concurrent thread or process deleted your row in a new 
 transaction and committed it, or didn't even commit yet hence locked the 
 row, in between the time you said commit() and later attempted to access the 
 attributes of the row.  That's the rationale in a nutshell.


Thanks, this makes sense.  For my purposes on this one (where business logic 
ensures no post-commit shenanigans) I can just snag the id after a 
pre-commit flush() and that will be fine.
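
(Spelled out as a sketch, with a hypothetical obj:)

session.add(obj)
session.flush()  #emits the INSERT inside the still-open transaction
new_id = obj.id  #populated now, before commit() expires attributes
session.commit()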

For my issues with object detachment I'll post another topic.




[sqlalchemy] Can expire_on_commit be made to apply at an object level (instead of everything in the session)?

2011-01-13 Thread Russ
On Wednesday, January 12, 2011 2:16:00 PM UTC-5, Michael Bayer wrote:

 see expire_on_commit=False as well as Session.commit() for further detail:
 http://www.sqlalchemy.org/docs/orm/session.html#committing

 http://www.sqlalchemy.org/docs/orm/session.html#sqlalchemy.orm.session.Session.commit
 To reassociate detached objects with a session, use Session.add() or 
 Session.merge().  Detachment is described at 
 http://www.sqlalchemy.org/docs/orm/session.html#quickie-intro-to-object-states
  .

 
I think I'm in a catch-22 situation I'm not sure how to get out of (short of 
copying my object to an unmapped one, which seems wasteful)...

I have N objects in a scoped session and I want to commit the changes on 
them all.  After this is done, I want to keep just one of them around in a 
cache as a simple detached object that won't ever result in emitted SQL. 
 The business rules are such that for this object there is no way the object 
will differ from what is in the DB (and even if the DB was changed behind 
the scenes I want the values at time of commit).

The issue seems to be that on commit(), everything is expired, and the 
comment in the docs that says "They are still functional in the detached 
state *if the user has ensured that their state has not been expired before 
detachment*" indicates that I can't have a cleanly detached object after a 
commit because of this expiration.  At least not without setting 
expire_on_commit = False... but that applies to all objects in the session. 
I only want one object to be severed from the database linkage.

Is there any clean way to set expire_on_commit behaviour at a per-object 
level?  It seems to be all or nothing at the moment.

I dug into the code that seems to do the expiry (session._remove_snapshot 
through to InstanceState.expire_attributes) and nothing is leaping out at 
me.  Can I force an un-expire after the commit without legitimately 
reflecting the persistent state?

Russ






Re: [sqlalchemy] Can expire_on_commit be made to apply at an object level (instead of everything in the session)?

2011-01-13 Thread Russ
On Thursday, January 13, 2011 10:29:10 PM UTC-5, Michael Bayer wrote:

 So you're looking to do a write-through cache here, i.e. write the DB, then 
 write the value straight to the cache.  I think if you were to say 
 session.flush(), which emits the SQL, then detach all the objects using 
 session.expunge(), they're no longer affected by subsequent session 
 operations, then session.commit(), commits the transaction, that would 
 produce the effect you're looking for.


Perfect... flush()-expunge()-commit() does exactly what I want.  Now to 
learn about expunge cascades to tune it up!
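
Spelled out as a sketch (cache_obj being the object destined for the cache):

session.flush()             #emit all pending SQL first
session.expunge(cache_obj)  #detach before the commit-time expiration
session.commit()            #cache_obj keeps its loaded values, detached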

 Is there any clean way to set expire_on_commit behaviour at a per-object 
 level?  It seems to be all or nothing at the moment.

 Bizarrely, this question had never been asked before, until about five 
 hours ago.   That happens quite often, for some reason.   For that user I 
 suggested using SessionExtension.after_commit() to re-implement expiration 
 on subsets of objects only, though you'd need some plumbing to indicate what 
 objects get the expiration - perhaps you'd set some attribute on them.I 
 think the flush()-expunge()-commit() idea is easier.


Ack - sorry about missing that topic.  I'm glad I did as the expunge route 
is easier as you say.  It is great to keep the after_commit() in mind, 
though.

Well un-expire is just hitting the object's attributes so it loads back 
 again, assuming you're looking to not have the extra SELECT.  If you were to 
 SELECT all the rows at once in a new query() that also would unexpire all 
 the existing objects it hit within the one result set, if that's useful 
 information.


You are correct, I'm limiting my SELECTs as I've got a few spots where 
tables are heavily trafficked and every SELECT hurts.  I've used joinedload 
a lot to cut down on selects as well.  I used to specify lazy loads in the 
relationship definition, but specifying joined loads at the query level is 
more explicit and cuts down on excess data loading for times when it really 
isn't needed.  A bit off topic, but there you go.

Given how fast and great your response is, I also have to send some kudos 
your way.  Not only is SQLAlchemy a great piece of software and extremely 
useful, but your attentiveness to this group and obvious passion for the 
project is truly amazing.  I have no clue where you find the time for it, 
but I assure you it is greatly appreciated!

Russ




[sqlalchemy] Confusion regarding how sessions affect mapped objects (plus a gc oddity)

2011-01-06 Thread Russ
I'm running into some issues with objects being attached to sessions
that I don't quite understand.

Basically, if I create some objects, add them to a session, commit,
then turf the session (or at least let it fall out of scope), one
expectation I have is that I will still be able to access the basic
properties on the object without a DB connection.  I of course do not
expect the object's attributes to match what is in the database if not
connected, but I still want to access the last known values as if it
were a simple unmapped python object.  In practice, I'm finding that
once an object is added to a session I can't access object attributes
without a connected session.

I also ran into some cases where attribute access was 'mysteriously'
working for some objects and not others after the session was deleted
(or dropped out of scope).  I have since determined that it is because
the associated sessions that I had thought were completely gone had
simply not been garbage collected yet and were still performing their
duties!!

I'm not a gc expert by any means, but thought the lingering session
might have something to do with the weak reference the session has on
contained objects.  I was wrong.  I tried the sessionmaker's
'weak_identity_map = False' option and it had no impact.  What is
confusing is the reverse direction... that the objects still have
usable references to the deleted sessions.

So...
1. Why can't I access simple object properties when the session it was
associated with at one point doesn't exist anymore?
2. Why are 'deleted' sessions still working prior to garbage
collection?

The code example below illustrates both issues.  You can just comment
out the gc.collect() to toggle them.

Any clarification here would be much appreciated.

Russ


import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
import sqlalchemy.orm as orm

engine = sa.create_engine("sqlite://")
DeclarativeBase = declarative_base()
Session = orm.sessionmaker(bind = engine)

class MyObj(DeclarativeBase):
    __tablename__ = "bogus"
    id = sa.Column(sa.Integer, primary_key = True, autoincrement = True)
    name = sa.Column(sa.String, unique = True)

    def __init__(self, Name):
        self.name = Name

DeclarativeBase.metadata.create_all(engine)

def PersistObject(Name):
    obj = MyObj(Name)
    print obj.name   #<-- this works fine prior to session adding
    s = Session()
    s.add(obj)
    s.commit()
    del s #pointless, but trying to accentuate vs scope drop
    return obj

obj1 = PersistObject("Obj1")
obj2 = PersistObject("Obj2")

#Collect the garbage...
import gc
gc.collect() #<-- remove this and the following code runs fine

#If garbage is not collected, the attribute accesses work below...
#If garbage IS collected, access won't work because there is no session
print obj1.name
print obj2.name
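
For later readers: both symptoms trace back to commit-time expiration 
(expire_on_commit defaults to True), which leaves the detached objects 
holding only expired attributes.  A sketch of the workaround:

Session = orm.sessionmaker(bind = engine, expire_on_commit = False)
#objects now keep their last-known attribute values after commit(),
#even once the owning session is garbage collected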
