Re: [sqlalchemy] mixins niggle

2013-04-26 Thread Chris Withers



On 25/04/2013 15:04, Michael Bayer wrote:

the columns come out in order because they have a creation order counter 
running that tracks the order in which each object was created. There's nothing like 
that built into @declared_attr,


I think declared_attr is a bit of a red herring here, it's the creation 
order coming from the mixin class being processed before the class using 
it. I don't think that's avoidable ;-)
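
A minimal sketch of that creation-counter idea (the private attribute name
_creation_order is my assumption about the internals, not something stated in
this thread):

from sqlalchemy import Column, DateTime, String

c1 = Column(DateTime())  # constructed first, gets a lower counter value
c2 = Column(String(10))  # constructed later, gets a higher counter value

# declarative orders a Table's columns by this counter, so construction
# order is what decides the final column order
assert c1._creation_order < c2._creation_order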



I guess if it produces an object reference we could add the creation counter to 
it as well.


I think we'd need some way to re-order columns as part of some 
post-metaclass thing. Sounds like a lot of work for not much gain. If it 
really annoys me, I can always fiddle with the postgres internals way of 
re-ordering columns in a table...


cheers,

Chris



On Apr 25, 2013, at 4:57 AM, Chris Withers ch...@simplistix.co.uk wrote:


Hi All,

I didn't see anything in the code that could help here (except maybe 
__declare_last__, but that looks like something else) but thought I'd ask in 
case I'm missing something...

So, some of my mixins include columns that logically come later than the 
columns defined in the class using the mixin, eg:

class Temporal(object):

    value_from = Column(DateTime(), nullable=False, index=True)
    value_to = Column(DateTime(), nullable=False, index=True)

    @declared_attr
    def value_on(cls, timestamp=None):
        if timestamp is None:
            timestamp = datetime.now()
        return ((cls.value_from <= timestamp) &
                (cls.value_to > timestamp))

class Observation(Temporal, Base):

    instrument_id = Column(String(10), ForeignKey('instrument.id'),
                           primary_key=True)
    source = Column(String(10), primary_key=True)
    type = Column(String(10), index=True)
    value = Column(Numeric())

The value_from and value_to columns belong at the end of the definition, but:

-  \d observation
          Table "public.observation"
    Column     |            Type             | Modifiers
---------------+-----------------------------+-----------
 value_from    | timestamp without time zone | not null
 value_to      | timestamp without time zone | not null
 instrument_id | character varying(10)       | not null
 source        | character varying(10)       | not null
 type          | character varying(10)       |
 value         | numeric                     |

Not a biggie, but curious if there's a way to get them to the end...

cheers,

Chris


--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk







--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk





Re: [sqlalchemy] OperationalError: (OperationalError) no such column:

2013-04-26 Thread gvv
Hi Michael,

Sorry to be a pain, but will these patches be applied to 7.10?
My app needs a lot of mods for it to work in 8.0.

If not don't worry, I guess it is time for me to move on to 8.

I'll try it out this weekend.

Thanks


On Friday, April 26, 2013 8:33:45 AM UTC+10, Michael Bayer wrote:

 I may have come up with a really great solution for all of this, if you'd 
 like to try the branch I have at https://bitbucket.org/zzzeek/sa_2714 - 
 all the original use cases seem to be working.

 I'll be testing this branch over the next day or so and will have it 
 committed soon.  0.8.1 is also due for a release.


  

 On Apr 24, 2013, at 6:42 PM, gvv gvve...@gmail.com wrote:

 Hi Michael,

 Thank you very much for your help and quick response.

 The workaround worked.

 I really like the simplicity of the query api, so much so that i automate the 
 generation of queries with it.
 I will try to incorporate the workaround.

 Thanks again

 On Thursday, April 25, 2013 7:31:49 AM UTC+10, Michael Bayer wrote:

 Here's a technique you should be able to use as a workaround:

 1. for every mapper that you're joining *to* which is also a joined 
 inheritance subclass, transform it into an aliased() construct
 2. any join involving an aliased() construct, construct the ON criterion 
 manually.

 These two techniques should bypass all the automation in query.join() 
 that's failing, such as:

 from sqlalchemy.orm import aliased

 up = aliased(UserPerson)
 pa = aliased(PersonAddress)

 session.query(User).\
 outerjoin(up, up.ItemUserPerson_Id == User.Id).\
 outerjoin(pa, pa.ItemPerson_Id == up.Id).\
 outerjoin(pa.Emails).\
 outerjoin(pa.Phones).\
 first()




 On Apr 24, 2013, at 1:38 PM, Michael Bayer mik...@zzzcomputing.com 
 wrote:

 oh.  this *fails in 0.7 also*.   I thought this was a regression.   
  Yeah, this query is a little crazier than we've planned for, I can see the 
 general thing it's failing to do but will have to see what's involved to 
 get it going.   But not a regression is good news at least.


 On Apr 24, 2013, at 8:35 AM, gvv gvve...@gmail.com wrote:

 Hi All,

 using 7.10 but falls over also in 8.0.

 User has a One2One UserPerson.
 UserPerson inherits from Person.
 Person has a One2Many PersonAddress.
 PersonAddress inherits from Address.
 Address has a One2Many Phone and One2Many Email.

 The following query falls over with an (OperationalError) no such column: 
 Address.Id.
  
 session.query(User, UserPerson, PersonAddress).filter(User.Id == 1).\
     outerjoin(User.Personal, UserPerson.Addresses,
               PersonAddress.Phones, PersonAddress.Emails).first()

 What am I doing wrong ?

 Thanks in advance for your help.


 sqlalchemy.exc.OperationalError: (OperationalError) no such column: 
 Address.Id u'SELECT User.Id AS User_Id, anon_1.UserPerson_Id AS 
 anon_1_UserPerson_Id, anon_1.Person_Id AS anon_1_Person_Id, 
 anon_1.Person_PersonType AS anon_1_Person_PersonType, 
 anon_1.UserPerson_ItemUserPerson_Id AS 
 anon_1_UserPerson_ItemUserPerson_Id, anon_2.PersonAddress_Id AS 
 anon_2_PersonAddress_Id, anon_2.Address_Id AS anon_2_Address_Id, 
 anon_2.Address_AddressType AS anon_2_Address_AddressType, 
 anon_2.PersonAddress_ItemPerson_Id AS 
 anon_2_PersonAddress_ItemPerson_Id \nFROM User LEFT OUTER JOIN (SELECT 
 Person.Id AS Person_Id, Person.PersonType AS Person_PersonType, 
 UserPerson.Id AS UserPerson_Id, UserPerson.ItemUserPerson_Id AS 
 UserPerson_ItemUserPerson_Id \nFROM Person JOIN UserPerson ON 
 UserPerson.Id = Person.Id) AS anon_1 ON 
 anon_1.UserPerson_ItemUserPerson_Id = User.Id LEFT OUTER JOIN (SELECT 
 Address.Id AS Address_Id, Address.AddressType AS 
 Address_AddressType, PersonAddress.Id AS PersonAddress_Id, 
 PersonAddress.ItemPerson_Id AS PersonAddress_ItemPerson_Id \nFROM 
 Address JOIN PersonAddress ON PersonAddress.Id = Address.Id) AS 
 anon_2 ON anon_1.Person_Id = anon_2.PersonAddress_ItemPerson_Id LEFT 
 OUTER JOIN Phone ON anon_2.Address_Id = Phone.ItemPhone_Id LEFT 
 OUTER JOIN Email ON Address.Id = Email.ItemEmail_Id \nWHERE 
 User.Id = ?\n LIMIT ? OFFSET ?' (1, 1, 0)







Re: [sqlalchemy] mixing two models/schema

2013-04-26 Thread Andi Blake
thank you michael, very helpful points.

in the end i'll just use PostgreSQL native schemas, since i'm not able to get
the multi-connection thing working. for that i use a common ``DeclarativeBase``
and ``__table_args__ = {'schema': 'market'}`` or ``__table_args__ = {'schema': 'site'}``.

i'm still interested in the solution of
http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#simple-vertical-partitioning
since it took me some time and i don't like having such an open issue ;)
this is my original scenario, having two ``DeclarativeBase``, one for the
``site``, one for the ``market``. As you said, i changed the relationship to
``user = sa.orm.relationship(User, foreign_keys=[User.id])``, which changes
the exception to::

sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 
'product_items.user_id' could not find table 'users' with which to generate 
a foreign key to target column 'id'

this happens even though the schema is also assigned in the ``User`` class::

@timestampable()
class User(DeclarativeBase):

    # necessary to create references from the market to this users
    __table_args__ = {'schema': SITE_DATABASE_SCHEMA_NAME}

    id = sa.Column(sa.BigInteger,
                   Sequence('users_seq_id', optional=True),
                   primary_key=True)

here is the changed code in the ``market``::

def initialize_sql(config, engine, user_engine=None):
    session_maker = sessionmaker()
    DBSession = scoped_session(session_maker)
    DBSession.configure(bind=engine)
    DBSession.configure(binds={
        ProductItem: engine,
        User: user_engine
    })
    log.debug('bind market engine')
    Base.metadata.bind = engine
    # danger: here we try to autogenerate the schema
    # Base.metadata.create_all(engine)

    # db-session per request
    # @see: http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/database/sqlalchemy.html#using-a-non-global-session
    def db(request):
        log.debug('add DB session to request')
        maker = request.registry.dbmaker
        session = maker()

        def cleanup(request):
            session.close()
        request.add_finished_callback(cleanup)
        return session

    config.registry.dbmaker = session_maker
    config.set_request_property(db, name='dbsession', reify=True)
    # for pyramid 1.4 use: ``config.add_request_method(db, reify=True)``


class ProductItem(Base):
    """
    cross-db references:
    - http://stackoverflow.com/questions/6433592/cross-database-join-in-sqlalchemy
    - http://markmail.org/message/z5tdtlcuoth2osqm

    schemas:
    http://www.postgresql.org/docs/9.1/static/ddl-schemas.html
    """

    __tablename__ = 'product_items'
    __table_args__ = {'schema': MARKET_DATABASE_SCHEMA_NAME}

    id = sa.Column(sa.BigInteger,
                   Sequence('place_seq_id', optional=True),
                   primary_key=True)
    name = Column(Unicode(255), unique=True)
    user_id = sa.Column(
        sa.BigInteger,
        sa.ForeignKey('%s.%s.id' % (SITE_DATABASE_SCHEMA_NAME,
                                    User.__tablename__)),
        primary_key=True
    )
    user = sa.orm.relationship(User, foreign_keys=[User.id])

do you have a hint how to tell sqlalchemy to choose ``site.users`` rather 
than ``users`` as table?

thanks a lot, 
andi

On Friday, 26 April 2013 00:32:32 UTC+2, Michael Bayer wrote:


 On Apr 25, 2013, at 10:21 AM, Andi Blake andi@googlemail.com wrote:

 hi all, 

 i have a webapp with an existing database-model ``site``, including users. 
 in a second service i create a new database-model ``market``, but still 
 want to access the users (which works via separate engine).

 goal: i want to create a relation from the ``market``-model to the 
 ``site``-model. e.g. referencing a ``User`` instance from the 
 ``site``-model to the ``market``-model. like this::


 SITE_DATABASE_SCHEMA_NAME = 'site'
 MARKET_DATABASE_SCHEMA_NAME = 'market'

 def initialize_sql(engine, user_engine = None):
 DBSession = scoped_session(sessionmaker(bind=engine))
 DBSession.configure(bind=engine)
 log.debug('bind market engine')
 Base.metadata.bind = engine
 # danger: here we try to autogenerate the schema
 # Base.metadata.create_all(engine)

 if user_engine:
 log.debug('bind user engine')
 DeclarativeBase.metadata.bind = user_engine

 class ProductItem(Base):
 
 cross-db references:
 - 
 http://stackoverflow.com/questions/6433592/cross-database-join-in-sqlalchemy
 - http://markmail.org/message/z5tdtlcuoth2osqm

 schemas:
 http://www.postgresql.org/docs/9.1/static/ddl-schemas.html
 
 __tablename__ = 'product_items'
 __table_args__ = {'schema': MARKET_DATABASE_SCHEMA_NAME}

 id = sa.Column(sa.BigInteger,
Sequence('place_seq_id', optional=True),
primary_key=True)
 name = Column(Unicode(255), unique=True)
 user_id = sa.Column(
 sa.BigInteger,
 sa.ForeignKey('%s.%s.id' % 

[sqlalchemy] [Q] WindowedRangeQuery recipe

2013-04-26 Thread Ladislav Lenart
Hello.

I have found this recipe:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

I think it fits perfectly for my usecase: process potentially large result set
of ORM instances in chunks of predefined size to limit memory consumption.

I have a few questions / remarks:
* I think there is a typo in the line (the comma should not be there, right?):
intervals = [id for id, in q]
* The column supplied to the windowed_query() function should be the primary key of
the table that represents the ORM instances, right?
* When are the ORM instances from the previous chunk reclaimed? I know the
session keeps them in an identity map.


Thank you,

Ladislav Lenart





Re: [sqlalchemy] [Q] WindowedRangeQuery recipe

2013-04-26 Thread Gunnlaugur Thor Briem
No, the comma is supposed to be there; it's for tuple unpacking. The
iterable q yields tuples (which in this case are of length one, because the
resultset has only one column).

The column should be whatever attribute of the ORM instances you want to
sort by, not necessarily the primary key.

The ORM instances are referenced by the session, so they will not be
reclaimed until the session is closed (or they are expunged from it).
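
A tiny illustration of that unpacking, with a made-up stand-in for the
one-column result set:

rows = [(1,), (5,), (9,)]          # a one-column query yields 1-tuples
intervals = [id for id, in rows]   # the trailing comma unpacks each 1-tuple
assert intervals == [1, 5, 9]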

Regards,

Gulli



On Fri, Apr 26, 2013 at 10:06 AM, Ladislav Lenart lenart...@volny.cz wrote:

 Hello.

 I have found this recipe:

 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery

 I think it fits perfectly for my usecase: process potentially large result
 set
 of ORM instances in chunks of predefined size to limit memory consumption.

 I have few questions / remarks:
 * I think there is a typo in the line (the comma should not be there,
 right?):
 intervals = [id for id, in q]
 * The column supplied to windowed_query() function should be the primary
 key of
 the table that represents the ORM instances, right?
 * When are the ORM instances from the previous chunk reclaimed? I know the
 session keeps them in an identity map.


 Thank you,

 Ladislav Lenart









Re: [sqlalchemy] [Q] WindowedRangeQuery recipe

2013-04-26 Thread Ladislav Lenart
Hello.

I think I understand it all now.

Thank you,

Ladislav Lenart


On 26.4.2013 13:22, Gunnlaugur Thor Briem wrote:
 No, the comma is supposed to be there; it's for tuple unpacking. The iterable 
 q
 yields tuples (which in this case are of length one, because the resultset has
 only one column).
 
 The column should be whatever attribute of the ORM instances you want to sort
 by, not necessarily the primary key.
 
 The ORM instances are referenced by the session, so they will not be reclaimed
 until the session is closed (or they are expunged from it).
 
 Regards,
 
 Gulli
 
 
 
 On Fri, Apr 26, 2013 at 10:06 AM, Ladislav Lenart lenart...@volny.cz wrote:
 
 Hello.
 
 I have found this recipe:
 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery
 
 I think it fits perfectly for my usecase: process potentially large 
 result set
 of ORM instances in chunks of predefined size to limit memory consumption.
 
 I have few questions / remarks:
 * I think there is a typo in the line (the comma should not be there, 
 right?):
 intervals = [id for id, in q]
 * The column supplied to windowed_query() function should be the primary 
 key of
 the table that represents the ORM instances, right?
 * When are the ORM instances from the previous chunk reclaimed? I know the
 session keeps them in an identity map.
 
 
 Thank you,
 
 Ladislav Lenart
 
 
 
 





Re: [sqlalchemy] How to get update done by one transaction in another transaction

2013-04-26 Thread sajuptpm
Hi Michael Bayer,

Is there any way to dynamically change Transaction Isolation Level ??

I want to do it only for a particular operation. So I can't set it at 
Engine or Connection Level, right ??

I am using turbogears + Sqlalchemy with default isolation_level.


What is the default isolation_level ??



=

I also tried DBSession.expire_all() and DBSession.expunge_all(), but I am still not 
getting the updated row in the waiting transaction.

http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expire_all
http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expunge_all



Thanks,





Re: [sqlalchemy] OperationalError: (OperationalError) no such column:

2013-04-26 Thread Michael Bayer
the patch takes advantage of other improvements in 0.8 and would be 
destabilizing for 0.7.   it's a pretty major rethink of how join() works.

On Apr 26, 2013, at 4:58 AM, gvv gvver...@gmail.com wrote:

 Hi Michael,
 
 Sorry to be a pain, but will these patches be applied to 7.10?
 My app needs a lot of mods for it to work in 8.0.
 
 If not don't worry, I guess it is time for me to move on to 8.
 
 I'll try it out this weekend.
 
 Thanks
 
 
Re: [sqlalchemy] How to get update done by one transaction in another transaction

2013-04-26 Thread Michael Bayer
you can make a Session like this:

conn = engine.connect().execution_options(isolation_level='SERIALIZABLE')
session = Session(bind=conn)

# work with session

session.commit()

isolation level is per-transaction.

the default is not set by SQLAlchemy; it depends on how your database is 
configured.
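
One way to check what that default actually is (a hedged sketch, PostgreSQL
syntax shown; 'engine' is assumed to already exist):

conn = engine.connect()
# ask the backend directly; other databases have their own equivalent
print(conn.execute("SHOW TRANSACTION ISOLATION LEVEL").scalar())
conn.close()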



On Apr 26, 2013, at 7:48 AM, sajuptpm sajup...@gmail.com wrote:

 Hi Michael Bayer,
 
 Is there any way to dynamically change Transaction Isolation Level ??
 
 I want to do it only for a particular operation. So I can't set it at 
 Engine or Connection Level, right ??
 
 I am using turbogears + Sqlalchemy with default isolation_level.
 
 What is the default isolation_level ??
 
 
 =
 
 Also tried DBSession.expire_all() and DBSession.expunge_all(), but not 
 getting Updated row in waiting transaction.
 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expire_all
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expunge_all
 
 
 
 Thanks,
 





Re: [sqlalchemy] mixing two models/schema

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 5:46 AM, Andi Blake andi.ba...@googlemail.com wrote:

 i'm still interested in the solution of 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#simple-vertical-partitioning
  since it toke me some time and i don't like having such an open issue ;) 
 this is my original scenario, having two ``DeclarativeBase``, one for the 
 ``site``, one for the ``market``. As you said, i changed the relationship to 
 ``user = sa.orm.relationship(User, foreign_keys=[User.id])`` which changes 
 the exception to::
 
 sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 
 'product_items.user_id' could not find table 'users' with which to generate a 
 foreign key to target column 'id'
 
 this happens even though the schema is also assigned in the ``User`` class::
 
 @timestampable()
 class User(DeclarativeBase):
 
 # necessary to create references from the market to this users
 __table_args__ = {'schema': SITE_DATABASE_SCHEMA_NAME}
 
 id = sa.Column(sa.BigInteger,
 Sequence('users_seq_id', optional=True),
 primary_key=True)
 
 
 class ProductItem(Base):
 user_id = sa.Column(
 sa.BigInteger,
 sa.ForeignKey('%s.%s.id' % (SITE_DATABASE_SCHEMA_NAME, 
 User.__tablename__)),
 primary_key=True
 )
 user = sa.orm.relationship(User, foreign_keys=[User.id])
 
 do you have a hint how to tell sqlalchemy to choose ``site.users`` rather 
 than ``users`` as table?

that should be correct, if User has schema='site' and ProductItem has 
schema='market', then a ForeignKey on ProductItem should refer to 
site.user.id.  But similarly with the User issue, ForeignKey looks inside 
of the local MetaData collection to find the other table, and I see that User() 
has a different Base than ProductItem, which is the home base for a MetaData. 
  So you can alternatively specify the ForeignKey like 
ForeignKey(User.__table__.c.id) if you want to cross over two MetaData 
collections.
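
A minimal sketch of that Column-object form of ForeignKey (hedged; the two
bases and the table names below are illustrative stand-ins for the 'site' and
'market' models, not the original code):

import sqlalchemy as sa
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

SiteBase = declarative_base()     # its own MetaData, for the 'site' model
MarketBase = declarative_base()   # its own MetaData, for the 'market' model

class User(SiteBase):
    __tablename__ = 'users'
    __table_args__ = {'schema': 'site'}
    id = sa.Column(sa.BigInteger, primary_key=True)

class ProductItem(MarketBase):
    __tablename__ = 'product_items'
    __table_args__ = {'schema': 'market'}
    id = sa.Column(sa.BigInteger, primary_key=True)
    # pass the Column object itself, so the ForeignKey never has to look up
    # 'site.users' inside MarketBase's (separate) MetaData collection
    user_id = sa.Column(sa.BigInteger, sa.ForeignKey(User.__table__.c.id))
    user = relationship(User)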





[sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread alonn
so as not to load too much into memory, I should do something like:

for i in session.query(someobject).filter(id < something):
    print i

I'm guessing the answer is no, because of the nature of SQL, but I'm not an 
expert so I'm asking.

Thanks for the help!





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Mauricio de Abreu Antunes
Read the source:

def all(self):
    """Return the results represented by this ``Query`` as a list.

    This results in an execution of the underlying query.

    """
    return list(self)

it means that this method collects everything it needs, and the rows are yielded by
the query's iterator.
If you assign the query to a variable, you can step through it with next() on its
iterator.
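
A hedged sketch of that (SomeObject and session are assumed to already exist):

q = session.query(SomeObject)
it = iter(q)        # Query defines __iter__; iterating executes the statement
first = next(it)    # then pull result rows one at a time from the iterator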


2013/4/26 alonn alonis...@gmail.com

 so not to load too much into memory I should do something like:

 for i in session.query(someobject).filter(idsomething)
 print i

 I'm guessing the answer is no, because of the nature of sql, but I'm not
 an expert so I'm asking.

 Thanks for the help!







-- 
*Mauricio de Abreu Antunes*
Mobile: (51)930-74-525
Skype: mauricio.abreua





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Mauricio de Abreu Antunes
The Query object has an __iter__ method.


2013/4/26 Mauricio de Abreu Antunes mauricio.abr...@gmail.com

 Read the source:

 def all(self):
 Return the results represented by this ``Query`` as a list.

 This results in an execution of the underlying query.

 
 return list(self)

 it means that this method collects everything it needs and it is yielded
 by the generator.
 If you returns the query for a variable, you can perform a next(variable).


 2013/4/26 alonn alonis...@gmail.com

 so not to load too much into memory I should do something like:

 for i in session.query(someobject).filter(idsomething)
 print i

 I'm guessing the answer is no, because of the nature of sql, but I'm not
 an expert so I'm asking.

 Thanks for the help!







 --
 *Mauricio de Abreu Antunes*
 Mobile: (51)930-74-525
 Skype: mauricio.abreua




-- 
*Mauricio de Abreu Antunes*
Mobile: (51)930-74-525
Skype: mauricio.abreua





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Werner

Hi,

On 26/04/2013 16:41, alonn wrote:

so not to load too much into memory I should do something like:

for i in session.query(someobject).filter(idsomething)
print i

I'm guessing the answer is no, because of the nature of sql, but I'm 
not an expert so I'm asking.
yes you can, check out the doc for querying, e.g. the following if you 
use the ORM.


http://sqlalchemy.readthedocs.org/en/rel_0_8/orm/tutorial.html#querying

Werner





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 12:06 PM, Werner werner.bru...@sfr.fr wrote:
 On 26/04/2013 16:41, alonn wrote:

 so not to load too much into memory I should do something like:

 for i in session.query(someobject).filter(idsomething)
 print i

 I'm guessing the answer is no, because of the nature of sql, but I'm not
 an expert so I'm asking.

 yes you can, check out the doc for querying, e.g. the following if you use
 the ORM.

 http://sqlalchemy.readthedocs.org/en/rel_0_8/orm/tutorial.html#querying


Not entirely, if you don't use yield_per (as shown in the docs in
fact, but worth mentioning).

Seeing query:

if self._yield_per:
    fetch = cursor.fetchmany(self._yield_per)
    if not fetch:
        break
else:
    fetch = cursor.fetchall()

Not only that, but also all rows are processed and saved to a local
list, so all instances are built and populated way before you get the
first row. That is, unless you specify yield_per.
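
A hedged sketch of the yield_per() usage being described (SomeObject, session
and process() are stand-ins, not from this thread):

# stream results in batches of 1000 instead of building every instance up
# front; see the yield_per() docs for its caveats with eager loading
for obj in session.query(SomeObject).yield_per(1000):
    process(obj)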





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Werner

On 26/04/2013 17:07, Claudio Freire wrote:

On Fri, Apr 26, 2013 at 12:06 PM, Werner werner.bru...@sfr.fr wrote:

On 26/04/2013 16:41, alonn wrote:

so not to load too much into memory I should do something like:

for i in session.query(someobject).filter(idsomething)
 print i

I'm guessing the answer is no, because of the nature of sql, but I'm not
an expert so I'm asking.

yes you can, check out the doc for querying, e.g. the following if you use
the ORM.

http://sqlalchemy.readthedocs.org/en/rel_0_8/orm/tutorial.html#querying


Not entirely, if you don't use yield_per (as shown in the docs in
fact, but worth mentioning).

Seeing query:

if self._yield_per:
 fetch = cursor.fetchmany(self._yield_per)
 if not fetch:
 break
else:
 fetch = cursor.fetchall()

Not only that, but also all rows are processed and saved to a local
list, so all instances are built and populated way before you get the
first row. That is, unless you specify yield_per.

Oops, thanks for correcting me.
Werner





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 12:24 PM, Werner werner.bru...@sfr.fr wrote:
 On 26/04/2013 17:07, Claudio Freire wrote:

 On Fri, Apr 26, 2013 at 12:06 PM, Werner werner.bru...@sfr.fr wrote:

 http://sqlalchemy.readthedocs.org/en/rel_0_8/orm/tutorial.html#querying


 Not entirely, if you don't use yield_per (as shown in the docs in
 fact, but worth mentioning).

 Seeing query:

 if self._yield_per:
  fetch = cursor.fetchmany(self._yield_per)
  if not fetch:
  break
 else:
  fetch = cursor.fetchall()

 Not only that, but also all rows are processed and saved to a local
 list, so all instances are built and populated way before you get the
 first row. That is, unless you specify yield_per.

 Oops, thanks for correcting me.

Um... a tad OT, but looking at that code, there's lots of
opportunities for optimization.

I'll have to profile a bit and let you know.





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 12:24 PM, Claudio Freire klaussfre...@gmail.com wrote:

 
 Um... a tad OT, but looking at that code, there's lots of
 opportunities for optimization.
 
 I'll have to profile a bit and let you know.

are you referring to sqlalchemy/orm/loading.py ?   I'd be pretty impressed if 
you can find significant optimizations there which don't break usage contracts. 
   I've spent years poring over profiles and squeezing every function call 
possible out of that system, sometimes producing entirely new approaches that I 
just had to throw out since they didn't work.   It has been rewritten many 
times.   Some background on the approach is at 
http://www.aosabook.org/en/sqlalchemy.html, 20.7. Query and Loading Behavior. 
 






Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 1:35 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 Um... a tad OT, but looking at that code, there's lots of
 opportunities for optimization.

 I'll have to profile a bit and let you know.

 are you referring to sqlalchemy/orm/loading.py ?   I'd be pretty impressed if 
 you can find significant optimizations there which don't break usage 
 contracts.I've spent years poring over profiles and squeezing every 
 function call possible out of that system, sometimes producing entirely new 
 approaches that I just had to throw out since they didn't work.   It has been 
 rewritten many times.   Some background on the approach is at 
 http://www.aosabook.org/en/sqlalchemy.html, 20.7. Query and Loading 
 Behavior.


I know... I'm talking micro-optimization. Pre-binding globals in tight
loops, for instance, like:

def filter_fn(x, tuple=tuple, zip=zip):
    return tuple(...)

This is of course only worth it for really really hot loops. That's
why I'm profiling. Maybe it's been done already for all the hottest
loops.

Then there's the possibility of replacing some list comprehensions with
itertools, which besides not building a temp list, would also run
entirely in C. This also makes a difference only on very tight,
builtin-laden loops.
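
A hedged sketch of that idea (Python 2 era to match the thread; the names are
illustrative):

from itertools import imap

def build_rows_comprehension(fn, rows):
    return [fn(row) for row in rows]   # the loop runs in Python bytecode

def build_rows_itertools(fn, rows):
    return list(imap(fn, rows))        # the mapping loop runs in C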

I have an app here that really stresses that part of the ORM, so I can
profile rather easily. In previous profiles, I remember seeing
Query.instances near the top, and all the optimizations I mentioned
above could be applied there; if they make any difference I'll let you know.





Re: [sqlalchemy] How to get update done by one transaction in another transaction

2013-04-26 Thread Richard Gerd Kuesters

Mike, this is interesting stuff.

Let's say I use isolation_level as serializable. Is there a performance 
trade-off? If yes, is there something I can do to counterweight it? 
e.g. a bigger connection pool, stream_results (psycopg), etc.



Cheers,
Richard.


On 04/26/2013 11:02 AM, Michael Bayer wrote:

you can make a Session like this:

conn = engine.connect().execution_options(isolation_level='SERIALIZABLE')
session = Session(bind=conn)

# work with session

session.commit()

isolation level is per-transaction.

the default is not set by SQLAlchemy it depends on how your database 
is configured.




On Apr 26, 2013, at 7:48 AM, sajuptpm sajup...@gmail.com wrote:



Hi Michael Bayer,

Is there any way to dynamically change Transaction Isolation Level ??

I want to do it only for a particular operation. So I can't set it at 
Engine or Connection Level, right ??


I am using turbogears + Sqlalchemy with default isolation_level.

What is the default isolation_level ??


=

Also tried DBSession.expire_all() and DBSession.expunge_all(), but 
not getting Updated row in waiting transaction.


http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expire_all
http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expunge_all



Thanks,













Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 12:41 PM, Claudio Freire klaussfre...@gmail.com wrote:

 On Fri, Apr 26, 2013 at 1:35 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 Um... a tad OT, but looking at that code, there's lots of
 opportunities for optimization.
 
 I'll have to profile a bit and let you know.
 
 are you referring to sqlalchemy/orm/loading.py ?   I'd be pretty impressed 
 if you can find significant optimizations there which don't break usage 
 contracts.I've spent years poring over profiles and squeezing every 
 function call possible out of that system, sometimes producing entirely new 
 approaches that I just had to throw out since they didn't work.   It has 
 been rewritten many times.   Some background on the approach is at 
 http://www.aosabook.org/en/sqlalchemy.html, 20.7. Query and Loading 
 Behavior.
 
 
 I know... I'm talking micro-optimization. Pre-binding globals in tight
 loops, for instance, like:
 
 def filter_fn(x, tuple=tuple, zip=zip):
return tuple(...)
 
 This is of course only worth it for really really hot loops. That's
 why I'm profiling. Maybe it's been done already for all the hottest
 loops.
 
 Then there's the possibility to replace some list comprehensions with
 itertools, which besides not building a temp list, would also run
 entirely in C. This also only makes a difference only on very tight,
 builtin-laden loops.
 
 I have an app here that really stresses that part of the ORM, so I can
 profile rather easily. In previous profiles, I remember seeing
 Query.instances near the top, and all the optimizations I mentioned
 above could be applied there, if they make any difference I'll tell.

the real bottleneck in loading is the loading.instances() function.  I have 
tried for years to reduce overhead in it.  Writing it in C would be best, but 
then again Pypy aims to solve the problem of FN overhead, pre-binding, and 
such.   I don't want to work against Pypy too much.






Re: [sqlalchemy] How to get update done by one transaction in another transaction

2013-04-26 Thread Michael Bayer
serializable will reduce performance, yes.  There's not much way around it, 
though the tradeoffs vary greatly depending on database. Postgresql for example 
introduces fairly minimal overhead versus repeatable read (see 
http://www.postgresql.org/docs/9.1/static/transaction-iso.html) since they're 
able to get away without using more locks. Traditionally, serializable means 
more locks (see 
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29#Serializable).



On Apr 26, 2013, at 1:00 PM, Richard Gerd Kuesters rich...@humantech.com.br 
wrote:

 Mike, this is interesting stuff.
 
 Let's say I use isolation_level as serializabe. Is there a counterpart in 
 performance? If yes, is there something I can do to counter weight? ie. 
 bigger connection pool, stream_results (psycopg), etc.
 
 
 Cheers,
 Richard.
 
 
 On 04/26/2013 11:02 AM, Michael Bayer wrote:
 you can make a Session like this:
 
 conn = engine.connect().execution_options(isolation_level='SERIALIZABLE')
 session = Session(bind=conn)
 
 # work with session
 
 session.commit()
 
 isolation level is per-transaction.
 
 the default is not set by SQLAlchemy it depends on how your database is 
 configured.
 
 
 
 On Apr 26, 2013, at 7:48 AM, sajuptpm sajup...@gmail.com wrote:
 
 Hi Michael Bayer,
 
 Is there any way to dynamically change Transaction Isolation Level ??
 
 I want to do it only for a particular operation. So I can't set it at 
 Engine or Connection Level, right ??
 
 I am using turbogears + Sqlalchemy with default isolation_level.
 
 What is the default isolation_level ??
 
 
 =
 
 Also tried DBSession.expire_all() and DBSession.expunge_all(), but not 
 getting Updated row in waiting transaction.
 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expire_all
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#sqlalchemy.orm.session.Session.expunge_all
 
 
 
 Thanks,
 
 
 
 





Re: [sqlalchemy] How to get update done by one transaction in another transaction

2013-04-26 Thread Richard Gerd Kuesters

Thanks again Mike. One more reason to stick to Postgres :)

Cheers,
Richard.


On 04/26/2013 02:11 PM, Michael Bayer wrote:
serializable will reduce performance, yes.  There's not much way 
around it, though the tradeoffs vary greatly depending on database. 
Postgresql for example introduces fairly minimal overhead versus 
repeatable read (see 
http://www.postgresql.org/docs/9.1/static/transaction-iso.html) since 
they're able to get away without using more locks. Traditionally, 
serializable means more locks (see 
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29#Serializable). 










Re: [sqlalchemy] Tricky situation

2013-04-26 Thread Richard Gerd Kuesters

Hey Mike!

I'm almost there :) The only problem now is with a column that returns a 
postgres *ARRAY* type.


When labeling is not applied, I get the same error as before:

sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for 
column 'connect_path'


But, when I apply a label on it, another error returns:

File "/home/richard/.../python2.7/site-packages/sqlalchemy/util/_collections.py", 
line 684, in unique_list
    if hashfunc(x) not in seen
TypeError: unhashable type: 'list'


Any ideas? Perhaps I'm missing something?


Best regards,
Richard.




On 04/25/2013 07:31 PM, Richard Gerd Kuesters wrote:


Hmm, I was thinking in labeling this evening. I'll try tomorrow when I 
get to work and then try this alternative. Maybe it works and avoids 
my workaround :)


Thanks Mike.

Best regards,

Richard.

Em 2013-04-25 19:20, Michael Bayer escreveu:

using explicit labels is the best approach to bypass SQLA's labeling 
schemes, such as this example:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)
    y = Column(Integer)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

sess = Session(e)
sess.add_all([
    A(x=1, y=2),
    A(x=2, y=5),
    A(x=3, y=3),
])
sess.commit()

stmt = select([A.id, A.x, A.y, (A.x + A.y).label('xplusy')])
print sess.query(A, stmt.c.xplusy).from_statement(stmt).all()
On Apr 25, 2013, at 12:47 PM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:


Yeah, well, it is a select but didn't work. I also made another 
select on top of it (to be sure), but the error persists (could 
not locate column ...).


Nevermind about it, I think it's not a question of good usage of SA 
I think :)


Thanks for your help!

Cheers,
Richard.


On 04/25/2013 01:22 PM, Michael Bayer wrote:

if the original q is a select(), this should work:
query(MyClass, q.c.somecol, q.c.someothercol).from_statement(q)
if not then I guess I'll screw around with it to see what works.

On Apr 25, 2013, at 10:37 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:


Yup, I agree with you, but things are a little out of hand for me 
to use ORM-level queries. I'll see what I can do ...


Thanks! :)
Cheers,
Richard.

On 04/25/2013 11:31 AM, Michael Bayer wrote:
you'd need to organize things differently for the column grabbing 
to work out.
I'd advise producing the query using ORM-level Query in the first 
place so that you don't need to use from_statement(), which is 
really a last resort system.


On Apr 25, 2013, at 10:27 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:



Well, not the desired result ... Now things just blow up :-)

*sqlalchemy.exc.NoSuchColumnError: Could not locate column in 
row for column 'anon_1.level'*



Cheers,
Richard.


On 04/25/2013 11:03 AM, Michael Bayer wrote:
why not just say session.query(MyObj, q.alias()) ?
creating ad-hoc mappers is relatively expensive.


On Apr 25, 2013, at 8:56 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:


Well, probably nevermind because I made a workaround that 
satisfies me (may not be elegant, but that's OK).


Basically, I created the o type a little different: o = 
type('MyObjExt', (Base,), {'__table__':q.alias('q')}) and 
append it to the query like: session.query(MyObj, 
o).from_statement(q).all()


Okay, now it returns a tuple, but at least I don't have to 
make a second query :)



Cheers,
Richard.

On 04/25/2013 09:41 AM, Richard Gerd Kuesters wrote:

Hi all,

I've been playing with sqla_hierarchy from 
https://github.com/marplatense/sqla_hierarchy .


The problem is: the returned query appends 3 columns: level 
(Integer), is_leaf (Boolean) and connect_path (pg ARRAY).


So far, so good. If I execute the query using 
session.execute(q).fetchall(), it works like a charm. But, 
as we know, session.execute returns a RowProxy, not 
objects. Using 
session.query(MyObj).from_statement(q).all(), I'm able to 
get my mapped objects, but without the extra columns that 
would make me very pleased (level, is_leaf, connect_path). Is 
there a way to get around this?


I have done some testing using o = type('MyObjExt', (MyObj,), 
{'__table__': q}) and then using it in the session.query, *but* 
it loses the foreign key references - or, well, I don't know how 
to explain this to the mapper (?), since MyObj is 
polymorphic (probably the pitfall?).



Thanks for your time and help.

Best regards,
Richard.

Re: [sqlalchemy] Tricky situation

2013-04-26 Thread Richard Gerd Kuesters

Ha! A little google search and ...

https://groups.google.com/forum/?fromgroups=#!topic/sqlalchemy/FHuCN-X9L8A 


Voilà!

:-)

Kind regards,
Richard.


On 04/26/2013 03:50 PM, Richard Gerd Kuesters wrote:

Hey Mike!

I'm almost there :) The only problem now is with a column that returns 
a postgres *ARRAY* type.


When labeling is not applied, I get the same error as before:

*sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for 
column 'connect_path'*


But, when I apply a label on it, another error returns:

File 
"/home/richard/.../python2.7/site-packages/sqlalchemy/util/_collections.py", 
line 684, in unique_list
    if hashfunc(x) not in seen
TypeError: unhashable type: 'list'


Any ideas? Perhaps I'm missing something?


Best regards,
Richard.




On 04/25/2013 07:31 PM, Richard Gerd Kuesters wrote:


Hmm, I was thinking in labeling this evening. I'll try tomorrow when 
I get to work and then try this alternative. Maybe it works and 
avoids my workaround :)


Thanks Mike.

Best regards,

Richard.

Em 2013-04-25 19:20, Michael Bayer escreveu:

using explicit labels is the best approach to bypass SQLA's labeling 
schemes, such as this example:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)
    y = Column(Integer)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

sess = Session(e)
sess.add_all([
    A(x=1, y=2),
    A(x=2, y=5),
    A(x=3, y=3),
])
sess.commit()

stmt = select([A.id, A.x, A.y, (A.x + A.y).label('xplusy')])
print sess.query(A, stmt.c.xplusy).from_statement(stmt).all()
On Apr 25, 2013, at 12:47 PM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:


Yeah, well, it is a select but didn't work. I also made another 
select on top of it (to be sure), but the error persists (could 
not locate column ...).


Nevermind about it, I think it's not a question of good usage of SA 
I think :)


Thanks for your help!

Cheers,
Richard.


On 04/25/2013 01:22 PM, Michael Bayer wrote:

if the original q is a select(), this should work:
query(MyClass, q.c.somecol, q.c.someothercol).from_statement(q)
if not then I guess I'll screw around with it to see what works.

On Apr 25, 2013, at 10:37 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:


Yup, I agree with you, but things are a little out of hand for me 
to use ORM-level queries. I'll see what I can do ...


Thanks! :)
Cheers,
Richard.

On 04/25/2013 11:31 AM, Michael Bayer wrote:
you'd need to organize things differently for the column 
grabbing to work out.
I'd advise producing the query using ORM-level Query in the 
first place so that you don't need to use from_statement(), 
which is really a last resort system.


On Apr 25, 2013, at 10:27 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br wrote:



Well, not the desired result ... Now things justs blows :-)

*sqlalchemy.exc.NoSuchColumnError: Could not locate column in 
row for column 'anon_1.level'*



Cheers,
Richard.


On 04/25/2013 11:03 AM, Michael Bayer wrote:
why not just say session.query(MyObj, q.alias()) ?
creating ad-hoc mappers is relatively expensive.


On Apr 25, 2013, at 8:56 AM, Richard Gerd Kuesters 
rich...@humantech.com.br mailto:rich...@humantech.com.br 
wrote:


Well, probably nevermind because I made a workaround that 
satisfies me (may not be elegant, but that's OK).


Basically, I created the o type a little different: o = 
type('MyObjExt', (Base,), {'__table__':q.alias('q')}) and 
append it to the query like: session.query(MyObj, 
o).from_statement(q).all()


Okay, now it returns a typle, but at least I don't have to 
make a second query :)



Cheers,
Richard.

On 04/25/2013 09:41 AM, Richard Gerd Kuesters wrote:

Hi all,

I've been playing with sqla_hierarchy from 
https://github.com/marplatense/sqla_hierarchy .


The problem is: the returned query appends 3 columns: level 
(Integer), is_leaf (Boolean) and connect_path (pg ARRAY).


So far, so good. If I execute the query using 
session.execute(q).fetchall(), it works like a charm. But, 
as we know, session.execute returns a RowProxy, not 
objects. Using 
session.query(MyObj).from_statement(q).all(), I'm able to 
get my mapped objects, but without the extra columns that 
would make me very pleased (level, is_leaf, connect_path). 
Is there a way to get around this?


I have done testings using o = type('MyObjExt', (MyObj,), 
{'__table__': q} and them use it on the session.query, 
*but* it looses foreign key references - or, well, I don't 
know how to explain this to the mapper (?), since MyObj is 
polymorphic (probably the pitfall?).



Thanks for your time and help.

Best regards,
Richard.

Re: [sqlalchemy] Tricky situation

2013-04-26 Thread Michael Bayer
is that 0.8?   that particular issue should have been fixed.


On Apr 26, 2013, at 2:50 PM, Richard Gerd Kuesters rich...@humantech.com.br 
wrote:

 Hey Mike!
 
 I'm almost there :) The only problem now is with a column that returns a 
 postgres ARRAY type.
 
 When labeling is not applied, I get the same error as before:
 
 sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for column 
 'connect_path'
 
 But, when I apply a label on it, another error returns:
 
 File 
 /home/richard/.../python2.7/site-packages/sqlalchemy/util/_collections.py, 
 line 684, in unique_list
 if hashfunc(x) not in seen
 TypeError: unhashable type: 'list'
 
 
 Any ideas? Perhaps I'm missing something?
 
 
 Best regards,
 Richard.
 
 
 
 
 On 04/25/2013 07:31 PM, Richard Gerd Kuesters wrote:
 Hmm, I was thinking in labeling this evening. I'll try tomorrow when I get 
 to work and then try this alternative. Maybe it works and avoids my 
 workaround :)
 
 Thanks Mike.
 
  
 Best regards,
 
 Richard.
 
  
 Em 2013-04-25 19:20, Michael Bayer escreveu:
 
 using explicit labels is the best approach to bypass SQLA's labeling 
 schemes, such as this example:
 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class A(Base):
     __tablename__ = 'a'
     id = Column(Integer, primary_key=True)
     x = Column(Integer)
     y = Column(Integer)

 e = create_engine("sqlite://", echo=True)
 Base.metadata.create_all(e)

 sess = Session(e)
 sess.add_all([
     A(x=1, y=2),
     A(x=2, y=5),
     A(x=3, y=3),
 ])
 sess.commit()

 stmt = select([A.id, A.x, A.y, (A.x + A.y).label('xplusy')])
 print sess.query(A, stmt.c.xplusy).from_statement(stmt).all()
 On Apr 25, 2013, at 12:47 PM, Richard Gerd Kuesters 
 rich...@humantech.com.br wrote:
 
 Yeah, well, it is a select but didn't work. I also made another select on 
 top of it (to be sure), but the error persists (could not locate column 
 ...).
 
 Nevermind about it, I think it's not a question of good usage of SA I 
 think :)
 
 Thanks for your help!
 
 Cheers,
 Richard.
 
 
 On 04/25/2013 01:22 PM, Michael Bayer wrote:
 if the original q is a select(), this should work:
 query(MyClass, q.c.somecol, q.c.someothercol).from_statement(q)
 if not then I guess I'll screw around with it to see what works.
 
 On Apr 25, 2013, at 10:37 AM, Richard Gerd Kuesters 
 rich...@humantech.com.br wrote:
 
 Yup, I agree with you, but things are a little out of hand for me to use 
 ORM-level queries. I'll see what I can do ...
 
 Thanks! :)
 Cheers,
 Richard.
 
 On 04/25/2013 11:31 AM, Michael Bayer wrote:
 you'd need to organize things differently for the column grabbing to 
 work out.
 I'd advise producing the query using ORM-level Query in the first place 
 so that you don't need to use from_statement(), which is really a last 
 resort system.
 
 On Apr 25, 2013, at 10:27 AM, Richard Gerd Kuesters 
 rich...@humantech.com.br wrote:
 
 Well, not the desired result ... Now things justs blows :-) 
 
 sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for 
 column 'anon_1.level'
 
 
 Cheers,
 Richard.
 
 
 On 04/25/2013 11:03 AM, Michael Bayer wrote:
 why not just say session.query(MyObj, q.alias()) ?creating 
 ad-hoc mappers is relatively expensive.
 
 On Apr 25, 2013, at 8:56 AM, Richard Gerd Kuesters 
 rich...@humantech.com.br wrote:
 
 Well, probably nevermind because I made a workaround that satisfies 
 me (may not be elegant, but that's OK).
 
 Basically, I created the o type a little different: o = 
 type('MyObjExt', (Base,), {'__table__':q.alias('q')}) and append it 
 to the query like: session.query(MyObj, o).from_statement(q).all()
 
 Okay, now it returns a typle, but at least I don't have to make a 
 second query :)
 
 
 Cheers,
 Richard.
 
 On 04/25/2013 09:41 AM, Richard Gerd Kuesters wrote:
 Hi all,
 
 I've been playing with sqla_hierarchy from 
 https://github.com/marplatense/sqla_hierarchy .
 
 The problem is: the returned query appends 3 columns: level 
 (Integer), is_leaf (Boolean) and connect_path (pg ARRAY).
 
 So far, so good. If I execute the query using 
 session.execute(q).fetchall(), it works like a charm. But, as we 
 know, session.execute returns a RowProxy, not objects. Using 
 session.query(MyObj).from_statement(q).all(), I'm able to get my 
 mapped objects, but without the extra columns that would make me 
 very pleased (level, is_leaf, connect_path). Is there a way to get 
 around this?
 
 I have done testings using o = type('MyObjExt', (MyObj,), 
 {'__table__': q} and them use it on the session.query, *but* it 
 looses foreign key references - or, well, I don't know how to 
 explain this to the mapper (?), since MyObj is polymorphic 
 (probably the pitfall?).
 
 
 Thanks for your time and help.
 
 Best regards,
 Richard.

[sqlalchemy] Avoid adding transient object to collection?

2013-04-26 Thread Daniel Grace
I'm still fairly new at sqlalchemy, and am still occasionally being 
surprised by how sometimes-too-clever it is.  I ran into one of those 
moments today.

I have something that looks like this.  (All the tables are reflected.)

class Parent(Model, SimpleLookupTable):
__table__ = Table('parent')
# 

class Child(Model):
__table__ = Table('child')
parent = relationship(Parent, lazy='joined', 
backref=backref('children', lazy='lazy'))
# 


As part of my program design, I'm sometimes creating a partially-populated 
Child() that serves as a 'template' for a child that might be created 
later, but isn't at this point.

template = Child(parent = parent)
do_lots_of_stuff()

I was a bit surprised to find that at this point, parent.children already 
contains the new Child().  This makes sense when thinking about it under 
normal circumstances... but in this particular case, I don't want this to 
be part of the collection until it's actually added to the session (which I 
can confirm it's not; I've tried both expunging and make_transient()).

What's the best way to accomplish this?  The documented behavior of 
cascade_backrefs=False is almost, but not quite, what I need.  

-- Daniel Grace





Re: [sqlalchemy] Using value from main query inside subquery

2013-04-26 Thread Joril
On Friday, April 26, 2013 12:25:42 AM UTC+2, Michael Bayer wrote:


 this will work out of the box in 0.8 as auto-correlation has been improved 
 a lot.  in 0.7 you can add correlate() explicitly:


Nice, many thanks :) 
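
For the archives, a minimal sketch of what an explicit correlate() can look 
like on 0.7; Parent and Child here are hypothetical mapped classes (not the 
original poster's), and an existing session is assumed:

from sqlalchemy import func

# count of children per parent, as a correlated scalar subquery
child_count = (session.query(func.count(Child.id))
               .filter(Child.parent_id == Parent.id)
               .correlate(Parent)   # on some older versions, Parent.__table__ here
               .label('child_count'))

rows = session.query(Parent, child_count).all()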





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 2:04 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 are you referring to sqlalchemy/orm/loading.py ?   I'd be pretty impressed 
 if you can find significant optimizations there which don't break usage 
 contracts.I've spent years poring over profiles and squeezing every 
 function call possible out of that system, sometimes producing entirely new 
 approaches that I just had to throw out since they didn't work.   It has 
 been rewritten many times.   Some background on the approach is at 
 http://www.aosabook.org/en/sqlalchemy.html, 20.7. Query and Loading 
 Behavior.


 I know... I'm talking micro-optimization. Pre-binding globals in tight
 loops, for instance, like:

 def filter_fn(x, tuple=tuple, zip=zip):
     return tuple(...)

 This is of course only worth it for really really hot loops. That's
 why I'm profiling. Maybe it's been done already for all the hottest
 loops.

 Then there's the possibility to replace some list comprehensions with
 itertools, which besides not building a temp list, would also run
 entirely in C. This also only makes a difference only on very tight,
 builtin-laden loops.

 I have an app here that really stresses that part of the ORM, so I can
 profile rather easily. In previous profiles, I remember seeing
 Query.instances near the top, and all the optimizations I mentioned
 above could be applied there, if they make any difference I'll tell.

 the real bottleneck in loading is the loading.instances() function.  I have 
 tried for years to reduce overhead in it.  Writing it in C would be best, but 
 then again Pypy aims to solve the problem of FN overhead, pre-binding, and 
 such.   I don't want to work against Pypy too much.

That makes the proposition tricky. I don't know PyPy's performance
characteristics that well. I assume pre-binding wouldn't hurt PyPy
much, since loop traces would be nearly the same, but I've never
tested.

Pre-binding in filter_fn improves its runtime ten-fold. Actually,
pre-binding and replacing tuple(genexpr) by tuple([compexpr]), since
genexprs are rather slow compared to list comprehensions. The
improvement accounts for 1% of my test's runtime, so if it hurts PyPy,
it might not be so great an optimization (if it doesn't, though, it's
a very cheap one, and it could be applicable in other places). This
particular one helps in the case of query(Column, Column, Column),
which I use a lot.
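
For anyone who wants to reproduce the genexpr-vs-listcomp difference on their 
own interpreter, a quick check along these lines works; the workload here is 
made up and the numbers vary by Python version:

import timeit

setup = "data = range(10)"
print timeit.timeit("tuple(x * 2 for x in data)", setup=setup, number=1000000)
print timeit.timeit("tuple([x * 2 for x in data])", setup=setup, number=1000000)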

Note, however, that my test is 40% waiting on the DB, so CPU usage
impact would be proportionally bigger, especially with parallel
workers (I'm using just one thread when profiling though).

Doing those small optimizations to WeakIdentityMap (another one whose
methods are called an obscenely large amount of times), I get about
10% speedup on those. I imagine that could count in some situations.

Ultimately, though, it's InstrumentedAttribute.__get__ that's sucking
up 30% of alchemy-bound CPU time. I guess there's little that can be
done, since it's necessary to track state changes. But there's a neat
property of descriptors, where if they don't implement __get__, then
they don't take precedence over the instance's dict.

This is very interesting, and handy, since when instance_dict is
attrgetter('__dict__'), then, for regular ColumnPropertys, instead of
using InstrumentedAttribute, I can replace that with an
InstrumentedWriteAttribute that has no get. This means, all of a
sudden, no overhead for simple attribute access.
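
For reference, the descriptor rule being relied on here is plain Python, 
nothing SQLAlchemy-specific: a descriptor that defines __set__ but no __get__ 
still intercepts writes, while reads fall through to the instance __dict__. 
A toy illustration, unrelated to SQLAlchemy's actual classes:

class WriteOnly(object):
    """Intercepts writes, but defines no __get__, so reads hit __dict__."""
    def __init__(self, name):
        self.name = name
    def __set__(self, obj, value):
        print 'write of %s intercepted' % self.name
        obj.__dict__[self.name] = value

class Thing(object):
    x = WriteOnly('x')

t = Thing()
t.x = 10        # goes through WriteOnly.__set__
print t.x       # plain dict lookup, no descriptor call -> 10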

I've tested it and it mostly works. There's the instance_dict is
attrgetter('__dict__') thing hanging over my head, and the more
serious issue of lazy attributes being mostly broken, but it's an
interesting POC IMHO.

Anyway, with that (fragile) change, I get a speedup of 10% overall
runtime, and about 50% alchemy-specific runtime. Considering I knew
about attribute access' slowness and avoided it in my test, that has
to account for something worth looking into? (before optimizing for
attribute access slowness, the test was about 3 times slower IIRC -
*times* - and it does a hefty amount of regex processing beyond
handling attributes)





[sqlalchemy] how can i exclude a table on queries ?

2013-04-26 Thread Jonathan Vanasco
Given:

class Person:
    id

class Topic:
    id

class Person2Topic:
    id
    topic_id  - fkeys topic(id)
    person_id - fkeys person(id)

class Message:
    id
    person_id_author - fkeys person(id)
    topic_id - fkeys topic(id)

I wanted to select by joining the Person2Topic table directly, with a 
filter:

query( Message )\
join(  Person2Topic ,(
Message.topic_id == Person2Topic.topic_id ,
Person2Topic.person_id = 1  
)

This generates errors, because sqlalchemy doesn't have an fkey on 
Message.topic_id = Person2Topic.topic_id
I can only figure out how to do the query by doing intermediary joins:

query( Message )\
    join( Topic , ( Message.topic_id == Topic.id ) )\
    join( Person2Topic , ( Topic.id == Person2Topic.topic_id ) )\
    filter( Person2Topic.person_id == 1 )

is it possible to do a select like I originally wanted?
 





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 6:13 PM, Claudio Freire klaussfre...@gmail.com wrote:

 Ultimately, though, it's InstrumentedAttribute.__get__ the one sucking
 up 30% of alchemy-bound CPU time. I guess there's little that can be
 done, since it's necessary to track state changes. But there's a neat
 property of descriptors, where if they don't implement __get__, then
 they don't take precedence over the instance's dict.
 
 This is very interesting, and handy, since when instance_dict is
 attrgetter('__dict__'), then, for regular ColumnPropertys, instead of
 using InstrumentedAttribute, I can replace that with an
 InstrumentedWriteAttribute that has no get. This means, all of a
 sudden, no overhead for simple attribute access.
 
 I've tested it and it mostly works. There's the instance_dict is
 attrgetter('__dict__') thing hanging over my head, and the more
 serious issue of lazy attributes being mostly broken, but it's an
 interesting POC IMHO.


just to be clear, you're breaking the capability of column-based attributes to 
lazy load at all, right?  Yeah, that can't really fly :).  The whole object is 
a live proxy for a database row, we have deferred, all kinds of stuff.

We have had users work on alternative static object loading routines, but of 
course the Query can return cheap NamedTuples to you if you just want fast 
immutable columns.
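
As an illustration of the named-tuple style, reusing sess and the mapped class 
A from the labeling example earlier in this digest:

rows = sess.query(A.id, A.x, A.y).all()   # returns lightweight named tuples
for row in rows:
    print row.id, row.x, row.y            # no instrumented objects involved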

 Anyway, with that (fragile) change, I get a speedup of 10% overall
 runtime, and about 50% alchemy-specific runtime. Considering I knew
 about attribute access' slowness and avoided it in my test, that has
 to account for something worth looking into?

All attributes have to be expire-able and act as proxies for a database 
connection so I'm not really sure where to go with that.I'm not too 
thrilled about proposals to build in various alternate performance behaviors 
as the library starts to try to act in many different ways that the vast 
majority of users aren't even aware of, it increases complexity internally, 
produces vast amounts of new use cases to test and maintain, etc.I'm always 
willing to look at patches that are all winning, of course, so if you have some 
way to speed things up without breaking usage contracts and without major new 
complexity/brittleness I'd love to look at a pull request.

 (before optimizing for
 attribute access slowness, the test was about 3 times slower IIRC -
 *times* - and it does a hefty amount of regex processing beyond
 handling attributes)

I'm not sure what regexes you're referring to here.






Re: [sqlalchemy] how can i exclude a table on queries ?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 7:10 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

 Given
   class Person:
   id
   
   class Topic:
   id
   
   class Person2Topic :
   id 
   topic_id  - fkeys topic(id)
   person_id - fkeys person(id)
   
   class Message:
   id
   person_id_author - fkeys person(id)
   topic_id - fkeys topic(id)
   
   
   
 I wanted to select by joining the Person2Topic table directly, with a filter
 
   query( Message )\
   join(  Person2Topic ,(
   Message.topic_id == 
 Person2Topic.topic_id ,
   Person2Topic.person_id = 1  
   )
 
 This generates errors, because sqlalchemy doesn't have an fkey on 
 Message.topic_id = Person2Topic.topic_id
   
 i can only figure out how to do query by doing intermediary joins
 
   query( Message )\
   join( Topic , ( Message.topic_id == Topic.id ) )\
   join(  Person2Topic ,( Topic.id = Person2Topic.topic_id )\
   filter( Person2Topic.person_id = 1 )
 
 is it possible to do a select like I originally wanted ?

you can write out the join condition as you are (not sure what's the downside 
there), or if you're looking for a relationship() to do it you'd need to set up 
a primaryjoin + foreign_keys in the relationship() to join as you want.  I'm 
not sure what other options would be here.
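
For readers who want the relationship() route, a rough self-contained sketch of 
the primaryjoin + foreign_keys setup, using made-up minimal versions of the 
classes from the original post (table and column names are guesses from the 
pseudo-schema above):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)

class Topic(Base):
    __tablename__ = 'topic'
    id = Column(Integer, primary_key=True)

class Person2Topic(Base):
    __tablename__ = 'person_2_topic'
    id = Column(Integer, primary_key=True)
    topic_id = Column(Integer, ForeignKey('topic.id'))
    person_id = Column(Integer, ForeignKey('person.id'))

class Message(Base):
    __tablename__ = 'message'
    id = Column(Integer, primary_key=True)
    person_id_author = Column(Integer, ForeignKey('person.id'))
    topic_id = Column(Integer, ForeignKey('topic.id'))

    # no FK exists between message and person_2_topic, so spell out the join
    # and tell SQLAlchemy which column to treat as the "foreign" side
    topic_links = relationship(
        Person2Topic,
        primaryjoin=topic_id == Person2Topic.topic_id,
        foreign_keys=[Person2Topic.topic_id],
        viewonly=True,
    )

# then:  session.query(Message).join(Message.topic_links)\
#                .filter(Person2Topic.person_id == 1)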






Re: [sqlalchemy] Avoid adding transient object to collection?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 4:01 PM, Daniel Grace thisgenericn...@gmail.com wrote:

 I'm still fairly new at sqlalchemy, and am still occasionally being surprised 
 by how sometimes-too-clever it is.  I ran into one of those moments today.
 
 I have something that looks like this.  (All the tables are reflected.)
 
 class Parent(Model, SimpleLookupTable):
 __table__ = Table('parent')
 # 
 
 class Child(Model):
 __table__ = Table('child')
 parent = relationship(Parent, lazy='joined', 
 backref=backref('children', lazy='lazy'))
 # 
 
 
 As part of my program design, I'm sometimes creating a partially-populated 
 Child() that serves as a 'template' for a child that might be created later, 
 but isn't at this point.
 
 template = Child(parent = parent)
 do_lots_of_stuff()
 
 I was a bit surprised to find that at this point, parent.children already 
 contains the new Child().  This makes sense when thinking about it under 
 normal circumstances... but in this particular case, I don't want this to be 
 part of the collection until it's actually added to the session (which I can 
 confirm it's not, I've tried both expunging and make_transient().
 
 What's the best way to accomplish this?  The documented behavior of 
 cascade_backrefs=False is almost, but not quite, what I need.  

well backrefs do what they do, regarding synchronizing both sides of the 
collection/scalar, all the time.  There's no way to turn it off for certain 
object states.  So if you really need Child(parent) but not the collection, 
you'd need to associate parent on Child using some alternate attribute, like 
temporary thing Child(pending_parent=parent).   That or, don't use backrefs 
between Child.parent and Parent.children, that is also an option, though you'd 
need to be more cautious about mutating both sides before a flush.
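
A tiny sketch of the "alternate attribute" idea, using the poster's Child / 
parent / session names; pending_parent is just an ordinary, non-mapped 
attribute:

template = Child()                    # no parent= here, so no backref event fires
template.pending_parent = parent      # plain attribute; parent.children is untouched

# ... later, if the template is promoted to a real child:
template.parent = template.pending_parent   # backref fires only now
session.add(template)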





Re: [sqlalchemy] how can i exclude a table on queries ?

2013-04-26 Thread Jonathan Vanasco
I thought it should work too, but I'm getting:

InvalidRequestError: Could not find a FROM clause to join from.  Tried 
joining to class 'app.models.Person2Topic', but got: Can't find any 
foreign key relationships between 'message' and 'person_2_topic'.

If I have time this weekend I'll look into this.  If I don't find a bug in 
my code, I'll post a reproducible test case on GitHub.



On Friday, April 26, 2013 7:18:24 PM UTC-4, Michael Bayer wrote:


 On Apr 26, 2013, at 7:10 PM, Jonathan Vanasco 
 jona...@findmeon.comjavascript: 
 wrote: 

  Given 
  class Person: 
  id 
   
  class Topic: 
  id 
   
  class Person2Topic : 
  id 
  topic_id  - fkeys topic(id) 
  person_id - fkeys person(id) 
   
  class Message: 
  id 
  person_id_author - fkeys person(id) 
  topic_id - fkeys topic(id) 
   
   
   
  I wanted to select by joining the Person2Topic table directly, with a 
 filter 
  
  query( Message )\ 
  join(  Person2Topic ,( 
  Message.topic_id == 
 Person2Topic.topic_id , 
  Person2Topic.person_id = 1   
  ) 
  
  This generates errors, because sqlalchemy doesn't have an fkey on 
 Message.topic_id = Person2Topic.topic_id 
   
  i can only figure out how to do query by doing intermediary joins 
  
  query( Message )\ 
  join( Topic , ( Message.topic_id == Topic.id ) )\ 
  join(  Person2Topic ,( Topic.id = Person2Topic.topic_id 
 )\ 
  filter( Person2Topic.person_id = 1 ) 
  
  is it possible to do a select like I originally wanted ? 

 you can write out the join condition as you are (not sure what's the 
 downside there), or if you're looking for a relationship() to do it you'd 
 need to set up a primaryjoin + foreign_keys in the relationship() to join 
 as you want.  I'm not sure what other options would be here. 








Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 8:15 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 Anyway, with that (fragile) change, I get a speedup of 10% overall
 runtime, and about 50% alchemy-specific runtime. Considering I knew
 about attribute access' slowness and avoided it in my test, that has
 to account for something worth looking into?

 All attributes have to be expire-able and act as proxies for a database 
 connection so I'm not really sure where to go with that.I'm not too 
 thrilled about proposals to build in various alternate performance 
 behaviors as the library starts to try to act in many different ways that the 
 vast majority of users aren't even aware of, it increases complexity 
 internally, produces vast amounts of new use cases to test and maintain, etc. 
I'm always willing to look at patches that are all winning, of course, so 
 if you have some way to speed things up without breaking usage contracts and 
 without major new complexity/brittleness I'd love to look at a pull request.

I know, it's just a probe to see what kind of a speedup could be
obtained by not having that getter's interference. You know... simply
implementing InstrumentedAttribute in C could do the trick...

 (before optimizing for
 attribute access slowness, the test was about 3 times slower IIRC -
 *times* - and it does a hefty amount of regex processing beyond
 handling attributes)

 Im not sure what regexes you're referring to here.

Oh, it's just application-specific regexes. The point was that there's
a lot of application-specific processing, so the speedup must be big
to be observable through the interference.





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 8:47 PM, Claudio Freire klaussfre...@gmail.com wrote:
 On Fri, Apr 26, 2013 at 8:15 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:

 Anyway, with that (fragile) change, I get a speedup of 10% overall
 runtime, and about 50% alchemy-specific runtime. Considering I knew
 about attribute access' slowness and avoided it in my test, that has
 to account for something worth looking into?

 All attributes have to be expire-able and act as proxies for a database 
 connection so I'm not really sure where to go with that.I'm not too 
 thrilled about proposals to build in various alternate performance 
 behaviors as the library starts to try to act in many different ways that 
 the vast majority of users aren't even aware of, it increases complexity 
 internally, produces vast amounts of new use cases to test and maintain, 
 etc.I'm always willing to look at patches that are all winning, of 
 course, so if you have some way to speed things up without breaking usage 
 contracts and without major new complexity/brittleness I'd love to look at a 
 pull request.

 I know, it's just a probe to see what kind of a speedup could be
 obtained by not having that getter's interference. You know... simply
 implementing InstrumentedAttribute in C could do the trick...


In fact... I'm gonna try that...





Re: [sqlalchemy] how can i exclude a table on queries ?

2013-04-26 Thread Michael Bayer
well your example here seems like it's not literal, do you mean to use and_()?  
I see a tuple there and an assignment.  It shouldn't be trying to do an 
auto-join like that so it seems like join() isn't being called correctly.
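
For the archives, the explicit ON-clause form being pointed at here would look 
roughly like this, reusing the Message / Person2Topic classes sketched after 
the earlier reply in this thread; note and_() and == rather than a tuple and =:

from sqlalchemy import and_

q = session.query(Message).join(
    Person2Topic,
    and_(
        Message.topic_id == Person2Topic.topic_id,
        Person2Topic.person_id == 1,
    ),
)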



On Apr 26, 2013, at 7:42 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

 I thought it should work too, but I'm getting:
 
 InvalidRequestError: Could not find a FROM clause to join from.  Tried 
 joining to class 'app.models.Person2Topic', but got: Can't find any foreign 
 key relationships between 'message' and 'person_2_topic'.
 
 If i have time this weekend I'll look into this.  If i don't find a bug in my 
 code, I'll post a reproducable test-case on github.
 
 
 
 On Friday, April 26, 2013 7:18:24 PM UTC-4, Michael Bayer wrote:
 
 On Apr 26, 2013, at 7:10 PM, Jonathan Vanasco jona...@findmeon.com wrote: 
 
  Given 
  class Person: 
  id 
   
  class Topic: 
  id 
   
  class Person2Topic : 
  id 
  topic_id  - fkeys topic(id) 
  person_id - fkeys person(id) 
   
  class Message: 
  id 
  person_id_author - fkeys person(id) 
  topic_id - fkeys topic(id) 
   
   
   
  I wanted to select by joining the Person2Topic table directly, with a 
  filter 
  
  query( Message )\ 
  join(  Person2Topic ,( 
  Message.topic_id == 
  Person2Topic.topic_id , 
  Person2Topic.person_id = 1   
  ) 
  
  This generates errors, because sqlalchemy doesn't have an fkey on 
  Message.topic_id = Person2Topic.topic_id 
   
  i can only figure out how to do query by doing intermediary joins 
  
  query( Message )\ 
  join( Topic , ( Message.topic_id == Topic.id ) )\ 
  join(  Person2Topic ,( Topic.id = Person2Topic.topic_id )\ 
  filter( Person2Topic.person_id = 1 ) 
  
  is it possible to do a select like I originally wanted ? 
 
 you can write out the join condition as you are (not sure what's the downside 
 there), or if you're looking for a relationship() to do it you'd need to set 
 up a primaryjoin + foreign_keys in the relationship() to join as you want.  
 I'm not sure what other options would be here. 
 
 
 




Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 7:56 PM, Claudio Freire klaussfre...@gmail.com wrote:

 On Fri, Apr 26, 2013 at 8:47 PM, Claudio Freire klaussfre...@gmail.com 
 wrote:
 On Fri, Apr 26, 2013 at 8:15 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 Anyway, with that (fragile) change, I get a speedup of 10% overall
 runtime, and about 50% alchemy-specific runtime. Considering I knew
 about attribute access' slowness and avoided it in my test, that has
 to account for something worth looking into?
 
 All attributes have to be expire-able and act as proxies for a database 
 connection so I'm not really sure where to go with that.I'm not too 
 thrilled about proposals to build in various alternate performance 
 behaviors as the library starts to try to act in many different ways that 
 the vast majority of users aren't even aware of, it increases complexity 
 internally, produces vast amounts of new use cases to test and maintain, 
 etc.I'm always willing to look at patches that are all winning, of 
 course, so if you have some way to speed things up without breaking usage 
 contracts and without major new complexity/brittleness I'd love to look at 
 a pull request.
 
 I know, it's just a probe to see what kind of a speedup could be
 obtained by not having that getter's interference. You know... simply
 implementing InstrumentedAttribute in C could do the trick...
 
 
 In fact... I'm gonna try that...

feel free!  though you might be surprised, a C function that just calls out to 
all the same Python operations anyway is often only negligibly faster, not 
enough to make the extra complexity worth it.








Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Michael Bayer

On Apr 26, 2013, at 7:59 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 
 On Apr 26, 2013, at 7:56 PM, Claudio Freire klaussfre...@gmail.com wrote:
 
 On Fri, Apr 26, 2013 at 8:47 PM, Claudio Freire klaussfre...@gmail.com 
 wrote:
 On Fri, Apr 26, 2013 at 8:15 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 Anyway, with that (fragile) change, I get a speedup of 10% overall
 runtime, and about 50% alchemy-specific runtime. Considering I knew
 about attribute access' slowness and avoided it in my test, that has
 to account for something worth looking into?
 
 All attributes have to be expire-able and act as proxies for a database 
 connection so I'm not really sure where to go with that.I'm not too 
 thrilled about proposals to build in various alternate performance 
 behaviors as the library starts to try to act in many different ways that 
 the vast majority of users aren't even aware of, it increases complexity 
 internally, produces vast amounts of new use cases to test and maintain, 
 etc.I'm always willing to look at patches that are all winning, of 
 course, so if you have some way to speed things up without breaking usage 
 contracts and without major new complexity/brittleness I'd love to look at 
 a pull request.
 
 I know, it's just a probe to see what kind of a speedup could be
 obtained by not having that getter's interference. You know... simply
 implementing InstrumentedAttribute in C could do the trick...
 
 
 In fact... I'm gonna try that...
 
 feel free!  though you might be surprised, a C function that just calls out 
 to all the same Python operations anyway is often only negligibly faster, not 
 enough to make the extra complexity worth it.

also if you're looking to help with C, I'd love to get the C extensions out in 
the Py3K version, we have a patch that's fallen out of date at 
http://www.sqlalchemy.org/trac/ticket/2161 that needs freshening up and testing.





Re: [sqlalchemy] Are sqlalchemy queries a generator?

2013-04-26 Thread Claudio Freire
On Fri, Apr 26, 2013 at 9:01 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 All attributes have to be expire-able and act as proxies for a database 
 connection so I'm not really sure where to go with that.I'm not too 
 thrilled about proposals to build in various alternate performance 
 behaviors as the library starts to try to act in many different ways that 
 the vast majority of users aren't even aware of, it increases complexity 
 internally, produces vast amounts of new use cases to test and maintain, 
 etc.I'm always willing to look at patches that are all winning, of 
 course, so if you have some way to speed things up without breaking usage 
 contracts and without major new complexity/brittleness I'd love to look 
 at a pull request.

 I know, it's just a probe to see what kind of a speedup could be
 obtained by not having that getter's interference. You know... simply
 implementing InstrumentedAttribute in C could do the trick...


 In fact... I'm gonna try that...

 feel free!  though you might be surprised, a C function that just calls out 
 to all the same Python operations anyway is often only negligibly faster, 
 not enough to make the extra complexity worth it.

 also if you're looking to help with C, I'd love to get the C extensions out 
 in the Py3K version, we have a patch that's fallen out of date at 
 http://www.sqlalchemy.org/trac/ticket/2161 that needs freshening up and 
 testing.

Will look into that. The point of the C function is to be able to
quickly bypass all that _supports_population and function call
overheads. The getter is dead-simple, so its cost is dominated by
CPython function call overheads, that are readily removable by
re-implementing in C. It can reliably and quickly detect when
instance_dict returns __dict__, too.





[sqlalchemy] Relationship - session.commit() is not needed?

2013-04-26 Thread Mauricio de Abreu Antunes
Every time I code SQLAlchemy relationships I notice that an insert operation like
this one (from the tutorial) does not seem to need commit():
http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#working-with-related-objects

When using this bidirectional relationship(), is the data automatically
committed to the database?

Probably this is an easy question, but I cannot work out how to perform a
rollback on jack.addresses.

-- 
*Mauricio de Abreu Antunes*
Mobile: (51)930-74-525
Skype: mauricio.abreua





[sqlalchemy] Re: Relationship - session.commit() is not needed?

2013-04-26 Thread Mauricio de Abreu Antunes
Being more specific, I am talking about this part:

 >>> jack = User('jack', 'Jack Bean', 'gjffdd')
 >>> jack.addresses
 []

 >>> jack.addresses = [
 ...     Address(email_address='j...@google.com'),
 ...     Address(email_address='j...@yahoo.com')]


Are the addresses automatically added and committed?
I know it is committing jack, and jack has addresses, but how do I add jack
first (session 1) and then, right after committing jack without addresses, add a
set of addresses and commit them to the database?
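
A rough sketch of one way to sequence this, assuming the User / Address models 
from the tutorial linked above and an existing session; the email addresses 
are made up:

# 1) commit jack first, without any addresses
jack = User('jack', 'Jack Bean', 'gjffdd')
session.add(jack)
session.commit()

# 2) later, attach addresses through the relationship and commit again;
#    since jack is already in the session, the appended Address objects
#    are cascaded in automatically and flushed on commit
jack.addresses.append(Address(email_address='jack@example.com'))
session.commit()

# 3) pending (uncommitted) changes to jack.addresses can be undone with a
#    rollback, which discards the pending Address and expires jack
jack.addresses.append(Address(email_address='oops@example.com'))
session.rollback()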

2013/4/26 Mauricio de Abreu Antunes mauricio.abr...@gmail.com

 Everytime I code SQLAlchemy relationships I note that insert operation
 like this (tutorial) does not need commit()

 http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#working-with-related-objects

 When using this bidirectional relationship() is the data automatically
 commited to the dabase?

 Probably this is an easy question but I can not get how to perform a
 rollback in jack.addresses.

 --
 *Mauricio de Abreu Antunes*
 Mobile: (51)930-74-525
 Skype: mauricio.abreua




-- 
*Mauricio de Abreu Antunes*
Mobile: (51)930-74-525
Skype: mauricio.abreua





[sqlalchemy] Re: Relationship - session.commit() is not needed?

2013-04-26 Thread Mauricio de Abreu Antunes
I think I'm answering myself:

address = Address(args) # passing jack's id
session.add(address)
session.commit()


2013/4/27 Mauricio de Abreu Antunes mauricio.abr...@gmail.com

 Being more specific I talk about this part:

  jack = User('jack', 'Jack Bean', 'gjffdd') jack.addresses[]

  jack.addresses = [... 
  Address(email_address='j...@google.com'),... 
  Address(email_address='j...@yahoo.com')]


 Are addresses automaticamente added and commited?
 I know it is commiting jack and jack has addresses, but how do i add jack
 first (session 1) and right after commiting jack without addresses add a
 set of addresses and commit them to the database?

 2013/4/26 Mauricio de Abreu Antunes mauricio.abr...@gmail.com

 Everytime I code SQLAlchemy relationships I note that insert operation
 like this (tutorial) does not need commit()

 http://docs.sqlalchemy.org/en/latest/orm/tutorial.html#working-with-related-objects

 When using this bidirectional relationship() is the data automatically
 commited to the dabase?

 Probably this is an easy question but I can not get how to perform a
 rollback in jack.addresses.

 --
 *Mauricio de Abreu Antunes*
 Mobile: (51)930-74-525
 Skype: mauricio.abreua




 --
 *Mauricio de Abreu Antunes*
 Mobile: (51)930-74-525
 Skype: mauricio.abreua




-- 
*Mauricio de Abreu Antunes*
Mobile: (51)930-74-525
Skype: mauricio.abreua
