[sqlalchemy] Re: this is a test message

2023-01-04 Thread Michael Bayer
this is...another test!  imagine that

On Wednesday, January 4, 2023 at 7:27:10 PM UTC-5 Mike Bayer wrote:

> test
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/2fbb074e-c606-4924-a3fb-d566381ad131n%40googlegroups.com.


[sqlalchemy] Re: after_update event is fired when I save a many-to-many relationship

2023-01-04 Thread Michael Bayer
the net-change to "user1.teams" is likely sending it through the 
"after_update" event but no net SQL is being emitted if no actual columns 
have changed on user1.

you would need to adjust the logic in your application to check more 
specifically for the conditions under which you want to act upon an event.  for 
example, if you only want to act when specific attributes were affected, you 
could try using the history interface to see if those attributes have net 
changes: 
https://docs.sqlalchemy.org/en/14/orm/session_api.html#sqlalchemy.orm.attributes.History
  
or you can use a method like session.is_modified(): 
https://docs.sqlalchemy.org/en/14/orm/session_api.html#sqlalchemy.orm.Session.is_modified
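
For illustration, a rough (untested) sketch of that second approach, using the 
User model and "name" attribute from the question below purely as placeholders:

from sqlalchemy import event
from sqlalchemy.orm import Session
from sqlalchemy.orm.attributes import get_history

@event.listens_for(User, 'after_update')
def update_logs_for_user(mapper, connection, target):
    session = Session.object_session(target)
    # include_collections=False ignores pure collection changes such as
    # user1.teams.append(team1), which is what triggers the extra event here
    if not session.is_modified(target, include_collections=False):
        return
    # or check an individual attribute for net changes
    if get_history(target, 'name').has_changes():
        # write the audit row to "loggings" using `connection` here
        pass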

On Friday, December 23, 2022 at 12:23:04 PM UTC-5 bboym...@gmail.com wrote:

> I'm trying to update an audit log table called 'loggings', here is the 
> definition of the corresponding model:
>
> class Logging(Base):
> __tablename__ = 'loggings' 
>
>  id = Column(GUID(), primary_key=True, nullable=False, default=uuid4)
>  target_id = Column(GUID(), doc="The ID of the altered object")
>  version = Column('version', Integer, default=0) 
>  created_at = Column(DateTime, default=datetime.now)
>  modified_at = Column(DateTime, default=datetime.now, 
> onupdate=datetime.now)
>
> and two other models and an association table:
>
> membership_table = Table('membership', Base.metadata,
> Column('user_id', GUID(), ForeignKey('user.id')),
> Column('team_id', GUID(), ForeignKey('team.id')),
> PrimaryKeyConstraint('user_id', 'team_id'))
>
> class User(ActivityMixin, Base):  # Product
> __tablename__ = "user"
>
> id = Column(GUID(), primary_key=True, nullable=False, default=uuid4)
> name = Column(String)
> password = Column(String)
>
> def __repr__(self): # optional
> return f"User {self.name}"
>
> class Team(Base):
>__tablename__ = 'team'
>id = Column(GUID(), primary_key=True, nullable=False, default=uuid4)
>name = Column(String(64))
>users = relationship(lambda:User, secondary=membership_table, 
> backref='teams')
>
> I have an event listener/handler attached to the User class and another to 
> the Team class:
> @event.listens_for(User, 'after_insert') 
> def create_logs_for_user(mapper, connection, target):
>    # logic to add a creation-related record to the loggings table
>
> @event.listens_for(User, 'after_update') 
> def update_logs_for_user(mapper, connection, target):
># logic to add user-updates  records to the loggings table
>
> @event.listens_for(Team, 'after_update') 
> def update_logs_for_team(mapper, connection, target):
># logic to add team-updates records to the loggings table
>
> These are the operations that I perform:
>
> [1]: from app import Logging, User, Team, session
> [2]: user1 = User(name='qwerh') 
> [3]: session.add(user1)
> [4]: session.commit()  # after_insert gets fired here
> [5]: team1 = Team(name="team1")
> [6]: session.add(team1)
> [7]: session.commit() # After this point, the users, team, and loggings 
> tables have one record each, as expected.
> [8]: user1.teams.append(team1)   
> [10]: session.commit() # Two update listeners are getting fired here! 
> After the append.
>
> Question:
> When I perform the operation in step [8] above, the loggings table gets 
> filled in twice, because the "after_update" event is fired by both the 
> User and the Team models.
> How do I prevent this from happening? I only want to capture the one event 
> from the Team model. Sorry, I'm very new to SQLAlchemy, thank you!
>  
>



[sqlalchemy] Re: How to generate view columns in mapped class with async postgresql

2023-01-04 Thread Michael Bayer
the async aspect here is not that important; for Declarative + reflection 
you likely should look at DeferredReflection: 
https://docs.sqlalchemy.org/en/14/orm/declarative_tables.html#using-deferredreflection

for async, where it has in the example:

Reflected.prepare(engine)

you would instead say:

await async_connection.run_sync(Reflected.prepare)
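
a rough, untested end-to-end sketch of that combination (the asyncpg URL is a 
placeholder, and the view/column names are taken from the question below):

import asyncio

from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Reflected(DeferredReflection):
    __abstract__ = True

class MailingListView(Reflected, Base):
    __tablename__ = "mailing_list_view"

    # only the primary key needs to be spelled out; the remaining view
    # columns are filled in when Reflected.prepare() runs
    id = Column(UUID(as_uuid=True), primary_key=True)

async def setup():
    engine = create_async_engine("postgresql+asyncpg://user:pass@host/dbname")
    async with engine.connect() as conn:
        await conn.run_sync(Reflected.prepare)

asyncio.run(setup())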



On Wednesday, December 14, 2022 at 8:43:45 AM UTC-5 dcs3spp wrote:

> Hi,
>
> I am using alembic to apply a migration for a postgresql view using an async 
> engine.
> I can see this successfully applied in the database.
>
> I have the following declarative mapped class for the view, defined as:
>
> class MailingListView(Base):
>     """View for mailing labels.
>
>     After metadata reflection from db -> model, expecting columns to
>     be available on this class.
>     """
>
>     __tablename__ = "mailing_list_view"
>
>     # Specify the column override from the underlying view that is the
>     # primary key
>     id = Column(UUID(as_uuid=True), primary_key=True)
>
>     # Expecting these columns below to be mapped in this class after
>     # metadata reflection. Currently have to uncomment these
>     # to manually synchronise with view!
>     #
>     # addressee = Column(String)
>     # street = Column(String)
>     # town = Column(String)
>     # county = Column(String)
>     # postcode = Column(String)
>     # cursor = Column(String)
>
> I am reflecting the views using the following:
>
> def use_inspector(conn):
>     inspector = inspect(conn)
>     return inspector.get_view_names()
>
> views = await connection.run_sync(use_inspector)
>
> # I can see the table columns in __table__.c.keys()
> # after the reflection below has run
> await connection.run_sync(
>     target_metadata.reflect,
>     only=views,
>     views=True,
>     extend_existing=True,
> )
>
> After performing the above reflection I can see that my mapped model has 
> the underlying table columns updated with those defined in the underlying 
> view:
>
> obj = MailingListView()
> obj.__table__.c.keys()
>
> However, the properties of my mapped class are not updated after 
> reflection, raising an exception:
>
> obj = MailingListView()
> obj.town = "town"  # this raises an exception with unavailable property
> How is it possible, for a postgresql db (asyncpg) + async sqlalchemy, to 
> synchronise the columns of a declarative model with its underlying table 
> after metadata reflection? Currently, I have to manually specify the 
> columns in the declarative model.
>
>
>



[sqlalchemy] Re: SQLAlchemy with MySQL on AWS Lambda is taking long time to truncate table

2023-01-04 Thread Michael Bayer
is truncate happening in some other process?   does it speed up once you 
kill the initial python process?   does this truncate take time with other 
kinds of INSERT statements?   is the table very large?   overall you need 
to perform more observations here as to what conditions cause this 
"truncate" to take a lot of time.  there's nothing happening in SQLAlchemy 
that affects this much, particularly for MySQL which is not very aggressive 
about locking tables.
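
one guess worth ruling out, based only on the snippet below: if the lambda's 
transaction is never committed, that open transaction can hold locks which make 
a later TRUNCATE wait.  a small adjustment of the posted function (same Session 
/ User as below):

def add_user():
    session = Session()
    try:
        session.add(User(**{'user_id': 1, 'name': 'user name'}))
        session.commit()   # without this the INSERT's transaction stays open
    finally:
        session.close()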

On Wednesday, December 14, 2022 at 12:19:53 AM UTC-5 chandra...@gmail.com 
wrote:

> I tried this, kept commit and close statements and removed dispose 
> statement, but yet the problem is not solved, it's taking time to truncate 
> the table.
>
> On Wednesday, 14 December 2022 at 05:19:27 UTC+5:30 yaakovb...@gmail.com 
> wrote:
>
>> Is it possible your sessions hasn't committed the change / closed the 
>> transaction?
>>
>> Also, I don't think dispose is helping you here. Consider removing it?
>>
>> How about modifying:
>>
>> def add_user():
>> session = Session()
>> session.add(User(**{'user_id': 1, 'name': 'user name'}))
>> session.commit()
>> session.close()
>> # consider removing
>> # session.bind.dispose()
>>
>>
>> On Tuesday, December 13, 2022 at 8:13:14 AM UTC-5 chandra...@gmail.com 
>> wrote:
>>
>>> On creating all tables using alembic for migrations and then truncate 
>>> any empty table gets completed quickly, BUT once lambda function is 
>>> triggered to insert some data in a table through SQLAlchemy ORM Session 
>>> query (as given below) and then truncate the table takes very much time. 
>>> Where is the problem?
>>>
>>> ```
>>> from sqlalchemy import create_engine
>>> from sqlalchemy.orm import sessionmaker
>>>
>>> engine = create_engine("mysql+pymysql://")
>>> Session = sessionmaker(bind=engine)
>>>
>>> def add_user():
>>> session = Session()
>>> session.add(User(**{'user_id': 1, 'name': 'user name'}))
>>> session.close()
>>> session.bind.dispose()
>>>
>>> ```
>>>
>>



[sqlalchemy] Re: getting raw of the query

2023-01-04 Thread Michael Bayer
using logging is the most direct way; however, if you want to 
programmatically intercept all SQL you can use before_cursor_execute: 
https://docs.sqlalchemy.org/en/14/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
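
a minimal sketch of that hook; "engine" here stands in for whatever Engine your 
db.py module creates:

from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute")
def capture_sql(conn, cursor, statement, parameters, context, executemany):
    # `statement` is the raw SQL string exactly as sent to the DBAPI cursor
    print(statement, parameters)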

On Thursday, December 8, 2022 at 9:31:56 AM UTC-5 yahme...@gmail.com wrote:

> how to get the raw query of each sqlalchemy session orm operation in a db 
> file, for example in a db.py file:
>
> the raw SQL of each executing query has to be captured 
>



[sqlalchemy] Re: Does SQLAlchemy Core support MERGE?

2023-01-04 Thread Michael Bayer
we don't support the SQL standard MERGE directly right now but you can 
always emit this statement using text().

We support only the MySQL / PostgreSQL / SQLite driver-specific INSERT..ON 
CONFLICT kinds of statements directly right now.  
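
a rough sketch of the text() route; the table and column names are made up and 
the exact MERGE syntax depends on your backend:

from sqlalchemy import create_engine, text

engine = create_engine("oracle://user:pass@host/db")   # placeholder URL

merge_stmt = text(
    "MERGE INTO target t USING source s ON (t.id = s.id) "
    "WHEN MATCHED THEN UPDATE SET t.name = s.name "
    "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)"
)

with engine.begin() as conn:
    conn.execute(merge_stmt)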

On Sunday, December 11, 2022 at 11:34:43 AM UTC-5 mkmo...@gmail.com wrote:

> Apologies if this is in the documentation, but I was not able to find it.
>
> Does SQLAlchemy Core support the MERGE statement?
>
> Thanks and best regards,
>
> Matthew
>



[sqlalchemy] Re: load_only when loading relationships from an instance

2023-01-04 Thread Michael Bayer
yeah you can use defaultload.load_only


defaultload(ModelA.model_b).load_only(ModelB.only_field)
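
e.g. folded into the original query (a sketch; same ModelA / ModelB as in the 
question below):

from sqlalchemy.orm import defaultload

model_a = (
    session.query(ModelA)
    .options(defaultload(ModelA.model_b).load_only(ModelB.only_field))
    .filter(ModelA.id == 1)
    .first()
)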



On Wednesday, January 4, 2023 at 3:15:02 AM UTC-5 to...@revenuecat.com 
wrote:

> Hi,
>
> This might be a strange question, but I tried to find this in the 
> documentation to no avail.
>
> Is it possible to use something like load_only to override which columns 
> are loaded in when loading a relationship (as in, a relationship that is 
> not loaded at first with the original query)?
>
> Something like:
> class ModelB:
>   ...
>
> class ModelA:
>   model_b = relationship("ModelB")
>
> model_a = 
> session.query(ModelA).options(load_only(Model_b.only_field_i_want_in_the_future)).filter(ModelA.id==1).first()
>
> It's a bit strange, but I want to ensure if someone loads the model_b 
> property in the future, only specific columns are loaded in at first.
>
> I can do this if I just query for model_b via the foreign key instead of 
> using the relationship property, but I'd like to avoid that if possible.
>
> Sorry if this question is a bit weird/confusing, it's kind of a strange 
> use case.
>
> Thanks,
> Tony
>



Re: selective migration options (e.g. create and add only)

2017-06-09 Thread Michael Bayer


On Friday, June 9, 2017 at 9:00:00 AM UTC-4, ktang wrote:
>
> Hi,
>
> I have an application migrating existing mysql dbs which may have tables 
> that exist only on some systems.
> I am trying to use alembic to handle the migration.
>
> When I migrate the db to a newer version, if I don't want to drop any 
> existing tables or columns, how can I do this (just adding stuff)?
>

I'm assuming you're talking about autogenerate.   If you are just writing 
migration scripts by hand, you just put whatever commands you want in them 
and that does not impact anything else that's already in the database.   If 
you are using autogenerate, and you are comparing a model that's only a 
subset of the real database, you can have autogenerate exclude other tables 
by writing an "include_object" function.  Documentation for this is here: 
http://alembic.zzzcomputing.com/en/latest/api/runtime.html?highlight=include_object#alembic.runtime.environment.EnvironmentContext.configure.params.include_object
 
.  You probably want to look at objects of type "table" and include based 
on a list of names you care about.
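
A small sketch of what that could look like in env.py; "wanted_tables" is a 
hypothetical whitelist and the other configure() arguments are the usual ones 
you already have:

wanted_tables = {"accounts", "orders"}   # hypothetical names

def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table":
        return name in wanted_tables
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)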


 

>
> And a minor issue: when I first set up alembic in my application, I 
> generated an initial migration script in which autoincrement is not set for 
> only one of the tables, while all other tables are created correctly. 
> Is this a known issue?
>

not really, it depends on the specifics of how your Table metadata is set 
up.  If it's just one table then there is likely something different about 
it in your code.
 



[sqlalchemy] Re: best way to declare one-to-one relationships with automap and non-explicit relationships

2016-02-14 Thread Michael Bayer


On Sunday, February 14, 2016 at 4:12:36 PM UTC-5, Brian Cherinka wrote:
>
> Hi, 
>
> I'm trying to use automap a schema, and let it generate all classes and 
> relationships between my tables.  It seems to work well for all 
> relationships except for one-to-one
>  I know to set a one-to-one relationship, you must apply the uselist=False 
> keyword in relationship().  What's the best way to do that when I'm letting 
> automap generate them?  Without having to manually do it myself by removing 
> the automap generated ones, and setting them explicitly.  
>
 

you'd need to implement a generate_relationship function as described at 
http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#custom-relationship-arguments
 
which applies the rules you want in order to establish those relationships 
that you'd like to be one-to-one.

 

 Here is what I'm trying so far, but it's not working. 
>


I don't see anything obviously wrong with it but you'd want to step through 
with pdb.set_trace() to ensure every aspect of it is doing what you'd 
expect.   Otherwise "not working" can mean lots of things.

 

>  
>  
>


> onetoones = ['file']
>
> def _gen_relationship(base, direction, return_fn, attrname, local_cls, 
> referred_cls, **kw):
> if local_cls.__table__.name in onetoones:
> kw['uselist'] = False
> # make use of the built-in function to actually return the result.
> return generate_relationship(base, direction, return_fn, attrname, 
> local_cls, referred_cls, **kw)
>
> def camelizeClassName(base, tablename, table):
> return str(tablename[0].upper() + re.sub(r'_([a-z])', lambda m: 
> m.group(1).upper(), tablename[1:]))
>
> _pluralizer = inflect.engine()
> def pluralize_collection(base, local_cls, referred_cls, constraint):
> referred_name = referred_cls.__name__
> uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(), 
> referred_name)[1:]
> pluralized = _pluralizer.plural(uncamelized)
> return pluralized
>
> # Grabs engine
> db = DatabaseConnection()
> engine = db.engine
>
> # Selects schema and automaps it.
> metadata = MetaData(schema='mangadapdb')
> Base = automap_base(bind=engine, metadata=metadata)
>
> # Pre-define Dap class.  Necessary so automap knows to join this table to 
> a declarative base class from another schema
> class Dap(Base):
> __tablename__ = 'dap'
>
> cube_pk = Column(Integer, ForeignKey(datadb.Cube.pk))
> cube = relationship(datadb.Cube, backref='dap', uselist=False)
>
> # Prepare the base
> Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName, 
> name_for_collection_relationship=pluralize_collection, 
> generate_relationship=_gen_relationship)
>
> # Explicitly declare classes
> for cl in Base.classes.keys():
> exec('{0} = Base.classes.{0}'.format(cl))
>
>



Re: [sqlalchemy] Use order_by when using relationship() with secondary-parameter in a many-to-many?

2016-02-14 Thread Michael Bayer
order_by, you spelled it wrong.

> On Feb 14, 2016, at 3:53 AM,   wrote:
> 
> Hi Mike
> 
>> On 2016-02-13 14:30 Mike Bayer  wrote:
>> you can order by secondary, just add it in:
>> 
>> authors = relationship("Person", secondary=ReferenceAuthor, 
>> order_by=ReferenceAuthor.c.Index)
> 
> Are you sure? I get this error about it
> 
> TypeError: relationship() got an unexpected keyword argument 'oder_by'
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] AttributeError thrown by relationship during unit testing

2016-01-12 Thread Michael Bayer


Sent from my iPhone

> On Jan 11, 2016, at 3:47 PM, Sam Raker  wrote:
> 
> Using SQLAlchemy 1.0.9's declarative extension, I've got a many-to-one 
> relationship defined thusly:
> 
> class Article(Base):
>...
>publisher_id = schema.Column(sqltypes.Integer, 
> schema.ForeignKey('publishers.id'))
> ...
> class Publisher(Base):
>__tablename__ = 'publishers'
>...
>articles = relationship('Article', backref='publisher')
> 
> 
> I've got some code I'd like to test that looks like this:
> 
> articles = 
> session.query(Article).filter(...).join(Article.publisher).filter(Publisher.name
>  ==...)...
> 
> The code works fine, but while trying to test it with python 2.7's unittest 
> module, I keep getting 
> AttributeError: type object 'Article' has no attribute 'publisher'
> 
> How do I resolve this? I'm importing both Article and Publisher into my test 
> file. I _really_ don't want to have to connect to a local db just to get 
> instrumented attributes to work properly, and I don't want to have to 
> refactor my code to use Article.publisher_id--as it'd involve either 
> hard-coding (brittle) or two separate ORM/SQL calls (obnoxious)--just to make 
> tests work.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] AttributeError thrown by relationship during unit testing

2016-01-12 Thread Michael Bayer


Two options -  either use back_populates format (search the docs) to set up 
bidirectional relationships, or call configure_mappers() after the model 
classes have been defined.
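
For the second option, a small sketch of what the test setup could do 
("myapp.models" is a stand-in for wherever Article and Publisher live):

from sqlalchemy.orm import configure_mappers

import myapp.models   # hypothetical module defining Article and Publisher

configure_mappers()   # backrefs such as Article.publisher now exist,
                      # with no database connection required
assert hasattr(myapp.models.Article, "publisher")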

> On Jan 11, 2016, at 3:47 PM, Sam Raker  wrote:
> 
> Using SQLAlchemy 1.0.9's declarative extension, I've got a many-to-one 
> relationship defined thusly:
> 
> class Article(Base):
>...
>publisher_id = schema.Column(sqltypes.Integer, 
> schema.ForeignKey('publishers.id'))
> ...
> class Publisher(Base):
>__tablename__ = 'publishers'
>...
>articles = relationship('Article', backref='publisher')
> 
> 
> I've got some code I'd like to test that looks like this:
> 
> articles = 
> session.query(Article).filter(...).join(Article.publisher).filter(Publisher.name
>  ==...)...
> 
> The code works fine, but while trying to test it with python 2.7's unittest 
> module, I keep getting 
> AttributeError: type object 'Article' has no attribute 'publisher'
> 
> How do I resolve this? I'm importing both Article and Publisher into my test 
> file. I _really_ don't want to have to connect to a local db just to get 
> instrumented attributes to work properly, and I don't want to have to 
> refactor my code to use Article.publisher_id--as it'd involve either 
> hard-coding (brittle) or two separate ORM/SQL calls (obnoxious)--just to make 
> tests work.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] combining column_property and TypeDecorator

2015-11-10 Thread Michael Bayer


Sent from my iPhone

> On Nov 10, 2015, at 5:46 PM, Uri Okrent  wrote:
> 
> Seems like a simple thing to do but I can't seem to find an example.  Say I 
> have a simple column_property like the example in the docs:
> 
> fullname = column_property(firstname + " " + lastname)
> 
> 
> and I want to perform additional (trivial in this example) processing on the 
> result at the orm level like for instance:
> class FullNameWithTitle(TypeDecorator):
> impl = Text
> 
> def process_result_value(self, value, dialect):
> return "Mr. " + value
> 
> 
> Is there a simple way to specify the FullNameWithTitle type for the fullname 
> column_property?
> 
> Thanks
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] combining column_property and TypeDecorator

2015-11-10 Thread Michael Bayer
Look into the type_coerce() function for that.
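
Roughly, applied to the column_property from the question below (a sketch; 
FullNameWithTitle is the TypeDecorator you defined):

from sqlalchemy import type_coerce
from sqlalchemy.orm import column_property

fullname = column_property(
    type_coerce(firstname + " " + lastname, FullNameWithTitle)
)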

> On Nov 10, 2015, at 5:46 PM, Uri Okrent  wrote:
> 
> Seems like a simple thing to do but I can't seem to find an example.  Say I 
> have a simple column_property like the example in the docs:
> 
> fullname = column_property(firstname + " " + lastname)
> 
> 
> and I want to perform additional (trivial in this example) processing on the 
> result at the orm level like for instance:
> class FullNameWithTitle(TypeDecorator):
> impl = Text
> 
> def process_result_value(self, value, dialect):
> return "Mr. " + value
> 
> 
> Is there a simple way to specify the FullNameWithTitle type for the fullname 
> column_property?
> 
> Thanks
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] 10 Reasons to love SQLAlchemy

2015-09-07 Thread Michael Bayer
Thanks very much for this post, Paul!

Sent from my iPhone

> On Sep 4, 2015, at 3:43 PM, Paul Johnston  wrote:
> 
> Hi guys,
> 
> I decided to express my love of SQLAlchemy through a blog post:
> http://pajhome.org.uk/blog/10_reasons_to_love_sqlalchemy.html
> 
> Enjoy,
> 
> Paul
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] create/restore a dump-file

2015-06-20 Thread Michael Bayer


 On Jun 20, 2015, at 10:40 AM, (by way of c.bu...@posteo.jp) 
 c.bu...@posteo.jp wrote:
 
 I read about the Serializer Extension and ask myself if it is a good
 solution for my problem.

Haven't even read the rest of your email yet, and the answer is no.   The 
serializer works really badly.  


 
 Currently I am using PostgreSQL under my sqlalchemy. There is no way
 to make PostgreSQL database portable because the database is not
 represented as one file I could just copy to another machine.

Yes there is.   Use pg_dump.
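
A minimal sketch of driving that from Python, assuming pg_dump / pg_restore are 
on the PATH; the connection URL is a placeholder:

import subprocess

url = "postgresql://user:pass@localhost/mydb"   # placeholder

# on shutdown: write a compact, portable dump file
subprocess.check_call(["pg_dump", "-Fc", "-f", "backup.dump", url])

# on startup (possibly on another machine): restore it
subprocess.check_call(["pg_restore", "--clean", "-d", url, "backup.dump"])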

 
 So when I want to use my application portable I think it would be a
 good workaround to create a dump-file of the database when the
 application is closed and restore it when it is started.


 So application
 doesn't have to think about on which machine it is running.
 
 I could access PostgreSQL directly on commandline but it is not
 platformindependent. I want to do it with the sqlalchemy-way.


Ok to dump the schema you'd use a MetaData object and you'd pickle it.   
Serializer extension has nothing to do with that.  


 
 Would the Serializer Extension be a good solution for that?

The serializer docs state that it is not, read the second bulletpoint near the 
top.  



 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Michael Bayer




 On Apr 20, 2015, at 12:56 PM, Guido Winkelmann 
 gu...@ambient-entertainment.de wrote:
 
 On Monday 20 April 2015 11:23:06 Mike Bayer wrote:
 On 4/20/15 8:09 AM, Guido Winkelmann wrote:
 [...]
 On sqlite, drop_all() seems to fail to get the order of table drops
 right, and consequently runs into a referential integrity error.
 
 If you can post a reproducible issue, that's what I can work with.
 
 I'm afraid the best I can offer right now is the current state of the pyfarm-
 master code base.  It's 100% reproducible there, but it's not exactly a 
 reduced test case...
 
 There are changes to how tables are sorted in the absence of foreign key
 dependency, where this ordering was previously undefined, it is now
 deterministic; see
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-aab33
 2eedafc8e090f42b89ac7a67e6c.
 On MySQL/PostgreSQL, this line fails:
 
 Apparently, sqlalchemy will use symbol('NEVER_SET') where the id of
 the model used for filtering should be.
 
 this is a known regression and is fixed in 1.0.1:
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.1
 
 
 if you can confirm with current master that this is fixed I can release
 today or tomorrow as this particular regression is fairly severe.
 
 I just tested, the problem is still present in the current master 
 (bd61e7a3287079cf742f4df698bfe3628c090522 from github).


Oh, I read your text; while you haven't provided a code sample, it sounds like you 
are possibly saying filter(Foo.relationship == some_transient_object) and 
expecting that all the None values come out.   Yes?   That is just the kind of 
"just happened to work" example I'm talking about.   Can you confirm this is 
what you are doing, please?   Hopefully we can find a fix for that.  There is an 
entry detailing the behavioral change here, but these effects were unanticipated 
(hence there were five betas, to little avail).



 
Guido W.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Documentation for SQLAlchemy 0.4.5?

2015-04-16 Thread Michael Bayer
If you go to pypi and actually download the 0.4.5 tar.gz file from the 
sqlalchemy project and expand it, the documentation will be there in html form 
in the doc/ directory.All sqlalchemy releases include an html build of the 
documentation in this directory in the source distribution.

Sent from my iPhone

 On Apr 16, 2015, at 3:07 AM, Saravanakumar Karunanithi 
 akk.saravanaku...@gmail.com wrote:
 
 Hi,
 
   I am working on a legacy code that uses SqlAlchecmy0.4.5,  Could anyone 
 help me to get the documentation of 0.4.5? I googled it, but no luck, if any 
 one you have the documentation link/pdf, it would be a great help for me, 
 
 Thanks in Advance, 
 Saravanakumar Karunanithi
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Custom Type - two classes (object + TypeDecorator) or one (TypeDecorator)?

2015-03-31 Thread Michael Bayer


Jeffrey McLarty jeffrey.mcla...@gmail.com wrote:

 Hello,
 
 First - amazing work on SQLAlchemy, what an amazing body of work.
 
 Second, my question.
 
 I'm attempting to convert a pure-python object, into a custom type for column 
 definition. My question is, do I need to keep my original MyObj(object) and 
 use it inside the process_bind_param/process_result_value of 
 MyObjType(TypeDecorator), or should I be able to get this to work with just 
 one class?

if you want to serialize your object in some special way to fit it into a
column, a TypeDecorator is the most straightforward way to go. There’s no
magic method that SQLAlchemy calls on your object; only if you used
PickleType could you define a serializer.


 More:
 
 I'm confused about the *arg and **kw passed to typeobj, in the to_instance() 
 function, from type_api.py...when those might conflict with my object 
 constructor's parameters.

those are arguments referring to the type object, not your actual object.
They are two separate things.

 
 ...if anybody wants to have a look at my code, it is on github, organized 
 into a PR, https://github.com/Equitable/trump/pull/15 with links to the files 
 and line numbers.

right…that’s not right. You need two different objects. One would be
BitFlag, with all of the special logic you have here, and the other would be
BitFlagType. BitFlagType wouldn’t really need anything in its constructor,
just the process_bind_param / process_result_value methods.

However! This specific case has a special bonus feature. Your BitFlag
implementation could just as well be a subclass of int (e.g. class
BitFlag(int)) and you’d get the process_bind_param part for free in that
case. Though you’d still probably want process_result_value so that when you
load from the database, you get BitFlag objects back and not just plain
ints.

Either way the TD will be along these lines:

class BitFlagType(TypeDecorator):

    impl = Integer

    def process_bind_param(self, value, dialect):
        if value is not None:
            value = value.int_field

        return value

    def process_result_value(self, value, dialect):
        if value is not None:
            value = BitFlag(value)
        return value



Re: [sqlalchemy] SQLAlchemy core what's the right find if row with primary key exists

2015-03-31 Thread Michael Bayer

Duke Dougal dukedou...@gmail.com wrote:

 I'm trying to find if a row with a given primary key exists.
 
 Is this the right way to do it?

it looks fine and if you’re just starting out, by all means do what works
and makes sense to you.
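
if you wanted a plain boolean back, a variant using the exists() you already 
import (same messages table and conn as in your snippet):

stmt = select([exists().where(
    messages.c.message_id_hash == 'GQWMEJ3DJYPFDH2NJEDOIDZ3W2YBUFYP')])
row_exists = conn.execute(stmt).scalar()
print(row_exists)   # True or False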


 I'm using Python 3.4 and Sqlite 3.8.2
 
 
 from sqlalchemy.ext.automap import automap_base
 from sqlalchemy import create_engine, exists
 from sqlalchemy.schema import Table
 from sqlalchemy.schema import MetaData
 from sqlalchemy.sql import select
 
 Base = automap_base()
 engine = create_engine("sqlite:///archives.db")
 conn = engine.connect()
 # reflect the tables
 Base.prepare(engine, reflect=True)
 meta = MetaData()
 messages = Table('messages', meta, autoload=True, autoload_with=engine)
 s = 
 select([messages.c.message_id_hash]).where(messages.c.message_id_hash=='GQWMEJ3DJYPFDH2NJEDOIDZ3W2YBUFYP')
 result = conn.execute(s)
 print(result.first())
 
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Thread safety - is it safe to use the core in a web application?

2015-03-31 Thread Michael Bayer


Duke Dougal dukedou...@gmail.com wrote:

 I am using the Falcon web server and SqlAlchemy core.
 
 I have read warnings about it not being thread safe.
 
 I'm not sure what the implications are of this.  Will there be a problem or 
 can I code around it somehow?
 
 I'm trying to avoid the ORM for now - at least until I have some grasp of 
 what the core is and how it works.


if you’re not doing multithreaded programming, then the issue of
thread-safety is usually not something you need to worry about.

If the integration of Python programs with Falcon has points at which
threads are used, then it needs to handle this issue only to the degree that
some object might be used in multiple threads concurrently. With idiomatic
web applications, the only two objects that live outside of the request are
the session factory and the engine, both of which are thread-safe. The
section
http://docs.sqlalchemy.org/en/rel_0_9/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
talks a lot about this, but for more specifics it depends on how one
integrates with Falcon.


 thanks
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Inherited class column override

2015-03-31 Thread Michael Bayer


Pierre B rocambolesque...@gmail.com wrote:

 Here's my use case:
 right1 = Right()
 right.left = Left()
 
 right2 = Right2()
 right2.left = Left2()
 
 db.session.add(right) // automatically create the junction using MySubClass1 
 and set the type field to 1
 db.session.add(right2) // automatically create the junction using MySubClass1 
 and set the type field to 2
 db.session.commit()
 
 Basically I have a junction table associating a bunch of different tables in 
 my model.
 I want to abstract that mechanism using relationships and polymorphism so 
 that I don't have to deal with that junction table while coding.
 The relationships I created allow me to not have to deal with it while 
 selecting records but I can't get it to set the type field while inserting 
 records.

OK, you are using the association object pattern. You cannot use “secondary”
in the way that you are doing here. You need to map a relationship to
MySubClass1 explicitly. To reduce verbosity, you’d then apply the
association proxy pattern. Without association proxy, your association of
right and left will be something like:

right2 = Right2()
right2.left_association = MySubClass1()
right2.left_association.left = Left2()

the association proxy then allows for MySubClass1() to be called
automatically and you can refer to “right2.left” directly.

Start with:
http://docs.sqlalchemy.org/en/rel_0_9/orm/basic_relationships.html#association-object

make that work completely, with the more verbose use pattern.

then when that is totally working and understood, then move onto association
proxy:

http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/associationproxy.html



 On Tuesday, March 31, 2015 at 4:11:51 PM UTC+2, Michael Bayer wrote:
 
 Pierre B rocambol...@gmail.com wrote: 
 
  I made a type in the Right model, here are the models again: 
 
 if you’re referring to the behavior of Right.left when you use it in a 
 query, such as query(Right).join(Right.left), then the “default” value of a 
 Column object has no interaction there. 
 
 it seems like you probably want to do something very simple here but I’m not 
 getting enough information on what that is. If you could illustrate the 
 usage of the objects that you are looking for, that would help. 
 
  
 
 
 
  class HasSomeAttribute(object): 
  @declared_attr.cascading 
  def type(cls): 
 if has_inherited_table(cls): 
 if cls.__name__ == 'MySubClass1': 
 return db.Column(db.Integer, default=1) 
 else: 
 return db.Column(db.Integer, default=2) 
 else: 
 return db.Column(db.Integer, default=0) 
  
  class MyClass(HasSomeAttribute, db.Model): 
 __tablename__ = 'people4l2' 
 id = db.Column(db.Integer, primary_key=True) 
  id1 = db.Column(db.Integer) 
  id2 = db.Column(db.Integer) 
  
  class MySubClass1(MyClass): 
 pass 
  
  class MySubClass2(MyClass): 
 pass 
  
  class Right(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  left = relationship( 
 'Left', 
 secondary= MySubClass1.__table__, 
 primaryjoin='and_(MySubClass1.type == 802, MySubClass1.id2 == 
  Right.id)', 
 secondaryjoin='and_(MySubClass1.type == 802, MySubClass1.id1 == 
  Left.id)' 
 ) 
  
  class Left(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  
  
  On Tuesday, March 31, 2015 at 12:12:35 PM UTC+2, Pierre B wrote: 
  Hi Michael, 
  
  Thank you for your response. 
  Unfortunately I have already tried to use the __init__ function/catch the 
  init event but I am only referencing the sub classes in a relationship 
  which does not seem to actually instantiate classes because the __init__ is 
  never called/init event is never fired. 
  Here is a simple version of my models. 
  
  class HasSomeAttribute(object): 
  @declared_attr.cascading 
  def type(cls): 
  if has_inherited_table(cls): 
  if cls.__name__ == 'MySubClass1': 
  return db.Column(db.Integer, default=1) 
  else: 
  return db.Column(db.Integer, default=2) 
  else: 
  return db.Column(db.Integer, default=0) 
  
  class MyClass(HasSomeAttribute, db.Model): 
  __tablename__ = 'people4l2' 
  id = db.Column(db.Integer, primary_key=True) 
  id1 = db.Column(db.Integer) 
  id2 = db.Column(db.Integer) 
  
  class MySubClass1(MyClass): 
  pass 
  
  class MySubClass2(MyClass): 
  pass 
  
  class Right(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  subclass_attr = relationship( 
  'Contact', 
  secondary= MySubClass1.__table__, 
  primaryjoin='and_(MySubClass1.type == 802, MySubClass1.id2 == 
  Right.id)', 
  secondaryjoin='and_(MySubClass1.type == 802, MySubClass1.id1 == 
  Left.id)' 
  ) 
  
  class Left(db.Model): 
  id = db.Column(db.Integer, primary_key=True) 
  
  MyClass is used

Re: [sqlalchemy] Inherited class column override

2015-03-31 Thread Michael Bayer

Pierre B rocambolesque...@gmail.com wrote:

 I made a type in the Right model, here are the models again:

if you’re referring to the behavior of Right.left when you use it in a
query, such as query(Right).join(Right.left), then the “default” value of a
Column object has no interaction there.

it seems like you probably want to do something very simple here but I’m not
getting enough information on what that is. If you could illustrate the
usage of the objects that you are looking for, that would help.

 



 class HasSomeAttribute(object):
 @declared_attr.cascading
 def type(cls):
if has_inherited_table(cls):
if cls.__name__ == 'MySubClass1':
return db.Column(db.Integer, default=1)
else:
return db.Column(db.Integer, default=2)
else:
return db.Column(db.Integer, default=0)

 class MyClass(HasSomeAttribute, db.Model):
__tablename__ = 'people4l2'
id = db.Column(db.Integer, primary_key=True)
 id1 = db.Column(db.Integer)
 id2 = db.Column(db.Integer)
 
 class MySubClass1(MyClass):
pass

 class MySubClass2(MyClass):
pass
 
 class Right(db.Model):
 id = db.Column(db.Integer, primary_key=True)
 left = relationship(
'Left',
secondary= MySubClass1.__table__,
primaryjoin='and_(MySubClass1.type == 802, MySubClass1.id2 == 
 Right.id)',
secondaryjoin='and_(MySubClass1.type == 802, MySubClass1.id1 == 
 Left.id)'
)
 
 class Left(db.Model):
 id = db.Column(db.Integer, primary_key=True)
 
 
 On Tuesday, March 31, 2015 at 12:12:35 PM UTC+2, Pierre B wrote:
 Hi Michael,
 
 Thank you for your response.
 Unfortunately I have already tried to use the __init__ function/catch the 
 init event but I am only referencing the sub classes in a relationship which 
 does not seem to actually instantiate classes because the __init__ is never 
 called/init event is never fired.
 Here is a simple version of my models.
 
 class HasSomeAttribute(object):
 @declared_attr.cascading
 def type(cls):
 if has_inherited_table(cls):
 if cls.__name__ == 'MySubClass1':
 return db.Column(db.Integer, default=1)
 else:
 return db.Column(db.Integer, default=2)
 else:
 return db.Column(db.Integer, default=0)
 
 class MyClass(HasSomeAttribute, db.Model):
 __tablename__ = 'people4l2'
 id = db.Column(db.Integer, primary_key=True)
 id1 = db.Column(db.Integer)
 id2 = db.Column(db.Integer)
 
 class MySubClass1(MyClass):
 pass
 
 class MySubClass2(MyClass):
 pass
 
 class Right(db.Model):
 id = db.Column(db.Integer, primary_key=True)
 subclass_attr = relationship(
 'Contact',
 secondary= MySubClass1.__table__,
 primaryjoin='and_(MySubClass1.type == 802, MySubClass1.id2 == 
 Right.id)',
 secondaryjoin='and_(MySubClass1.type == 802, MySubClass1.id1 == 
 Left.id)'
 )
 
 class Left(db.Model):
 id = db.Column(db.Integer, primary_key=True)
 
 MyClass is used as a junction table for a bunch of different relationships, 
 the type field is used to differentiate the relationships.
 
 
 On Monday, March 30, 2015 at 5:26:30 PM UTC+2, Michael Bayer wrote:
 
 
 Pierre B rocambol...@gmail.com wrote: 
 
  Hi all, 
  
  I'm ultimately trying to have different default values for the same column. 
  Following the documentation, the @declared_attr.cacading decorator seems to 
  be the best approach. 
  Here's my code: 
  class HasSomeAttribute(object): 
  @declared_attr.cascading 
  def type(cls): 
  if has_inherited_table(cls): 
  if cls.__name__ == 'MySubClass1': 
  return db.Column(db.Integer, default=1) 
  else: 
  return db.Column(db.Integer, default=2) 
  else: 
  return db.Column(db.Integer, default=0) 
  
  class MyClass(HasSomeAttribute, db.Model): 
  __tablename__ = 'people4l2' 
  id = db.Column(db.Integer, primary_key=True) 
  
  class MySubClass1(MyClass): 
  pass 
  
  class MySubClass2(MyClass): 
  pass 
  
  I iterated quite a few times over this but I'm systematically getting this 
  error: 
  ArgumentError: Column 'type' on class class '__main__.MySubClass1' 
  conflicts with existing column 'people4l2.type’ 
 
 this mapping illustrates MySubClass1 and MySubClass2 as both sharing the 
 same table “people4l2”, as they have no __tablename__ attribute, so there 
 can only be one “type” column. So in this case it is not appropriate to use 
 cascading in exactly this way, as MyClass already has a “type” column, and 
 that gets attached to the “people4l2” table and that’s it; there can be no 
 different “type” column on MySubClass1/MySubClass2. 
 
 If you’d like “type” to do something different based on which class is being 
 instantiated, this is an ORM-level differentiation. Use either the 
 constructor

Re: [sqlalchemy] select x for update is still snapshot read in sqlalchemy.

2015-03-31 Thread Michael Bayer


ying zhang zhy198...@gmail.com wrote:

 
 
 I'm confused with how Query.with_lockmode('update') works in sqlalchemy.
 My code is in attachment.
 For testing, I start test.py in 1st terminal, then start test.py in 2nd 
 terminal.
 so the 2nd termial will read the data modified by 1st test.py.
 But the 2nd terminal's output contains L42 in test.py.
 It means the 2nd test.py select x for update is still a 'snapshot read'.
 In my opinion, the variable 'locking_u' should be the one changed by 1st 
 terminal, locking.name should be 'aa'.
 Because it's a [current read] in 'select x for update' mysql statements.
 And mysql Isolation level is RR.

Without getting into specifics as these MySQL transaction isolation cases
are intricate to work with, SQLAlchemy is just emitting SQL on the
transaction as you can see with echo=True. So you first need to ensure that
the exact sequence of steps you’re doing work in isolation; that is, type
the commands out by hand on terminal 1 and terminal 2; then when you run
this script, ensure the sequence of commands emitted is the same as well as
their syntax.

Also, a much better way to control the timing here, instead of timing things
between two consoles, is to put two Session() objects into a single script, and
just run steps on each Session in the correct sequence. You can step through
with echo=True and pdb.set_trace() and ensure that each command is emitted
and has the expected behavior.
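
a rough sketch of that single-script approach (untested; assumes a mapped User 
class for the user_test table and the same engine as test.py uses):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
s1, s2 = Session(), Session()

u1 = s1.query(User).with_lockmode('update').filter_by(id=1).one()
u1.name = 'aa'
s1.commit()   # commits the change and releases the row lock

# a locking read is a "current read": even under REPEATABLE READ it sees
# the value committed above
u2 = s2.query(User).with_lockmode('update').filter_by(id=1).one()
print(u2.name)   # 'aa'
s2.commit()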


 My test table is as below:
 mysql select * from user_test;
 ++--+
 | id | name |
 ++--+
 |  1 | a|
 |  2 | b|
 |  3 | c|
 |  4 | NULL |
 ++--+
 
 
 Thank you so much if you can understand and help me.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.
 test.py



Re: [sqlalchemy] Inherited class column override

2015-03-31 Thread Michael Bayer


Pierre B rocambolesque...@gmail.com wrote:

 I tried using the association object pattern before but can't get it to work 
 because I use the same id1 and id2 columns for all foreign keys and I'm not 
 able to override them in the sub-classes (conflicts with existing column 
 error).
 class MyClass(HasSomeAttribute, db.Model):
__tablename__ = 'people4l2'
id = db.Column(db.Integer, primary_key=True)
 
 class MySubClass1(MyClass):
right1_id = db.Column('id2', db.Integer, ForeignKey('right1.id'))
left1_id = db.Column('id1', db.Integer, ForeignKey('left1.id'))

 class MySubClass2(MyClass):
right2_id = db.Column('id2', db.Integer, ForeignKey('right2.id'))
 left2_id = db.Column('id1', db.Integer, ForeignKey('left2.id'))

That’s because you do not have a __tablename__ for these subclasses, so when
you put a column on the subclass, that is physically a column on the
‘people4l2’ table; the names cannot be conflicting. Also, it is not possible
to have a column named “people4l2.id2” which is in some cases a foreign key
to “right1.id” and in other cases to “right2.id”.

This probably all seems very complicated if you only think of it in terms of
a Python object model. That’s why it is essential that you design your
database schema in terms of database tables, and how those tables will work
within a purely relational model, without Python being involved, first. 

For simple cases, the design of the relational model and the object model
are so similar that this explicit step isn’t necessary, but once the goals
become a little bit divergent between relational and object model, that’s
when the relational model has to be developed separately, up front. This is
the essence of how SQLAlchemy works, which becomes apparent the moment you
get into models like these which are typically impossible on most other
ORMs, since most ORMs do not consider design of the relational model as
separate from the object model.

The tradeoff here is basically between “more work with SQLAlchemy” vs.
“not possible at all with other ORMs”  :)

The relational model is the more rigid part of the system here, so you have to
work that part out first; then determine how you want to map the Python
object model on top of the relational model.
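
For reference, here's a bare-bones, untested sketch of the association object
pattern with an association proxy layered on top (table and attribute names
are illustrative, not taken from your models):

from sqlalchemy.ext.associationproxy import association_proxy

class RightToLeft(db.Model):
    # the association object; "type" lives here, one value per row
    __tablename__ = 'right_to_left'
    right_id = db.Column(db.Integer, db.ForeignKey('right1.id'), primary_key=True)
    left_id = db.Column(db.Integer, db.ForeignKey('left1.id'), primary_key=True)
    type = db.Column(db.Integer, default=1)

    left = db.relationship('Left')

class Right(db.Model):
    __tablename__ = 'right1'
    id = db.Column(db.Integer, primary_key=True)

    left_associations = db.relationship('RightToLeft')
    # "Right.lefts" reads/writes through the association object for you
    lefts = association_proxy('left_associations', 'left',
                              creator=lambda left: RightToLeft(left=left))

class Left(db.Model):
    __tablename__ = 'left1'
    id = db.Column(db.Integer, primary_key=True)

with that, right.lefts.append(Left()) creates the RightToLeft row behind the
scenes.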

 On Tuesday, March 31, 2015 at 4:29:52 PM UTC+2, Michael Bayer wrote:
 
 
 Pierre B rocambol...@gmail.com wrote: 
 
  Here's my use case: 
  right1 = Right() 
  right.left = Left() 
  
  right2 = Right2() 
  right2.left = Left2() 
  
  db.session.add(right) // automatically create the junction using 
  MySubClass1 and set the type field to 1 
  db.session.add(right2) // automatically create the junction using 
  MySubClass1 and set the type field to 2 
  db.session.commit() 
  
  Basically I have a junction table associating a bunch of different tables 
  in my model. 
  I want to abstract that mechanism using relationships and polymorphism so 
  that I don't have to deal with that junction table while coding. 
  The relationships I created allow me to not have to deal with it while 
  selecting records but I can't get it to set the type field while inserting 
  records. 
 
 OK, you are using the association object pattern. You cannot use “secondary” 
 in the way that you are doing here. You need to map a relationship to 
 MySubClass1 explicitly. To reduce verbosity, you’d then apply the 
 association proxy pattern. Without association proxy, your association of 
 right and left will be something like: 
 
 right2 = Right2() 
 right2.left_association = MySubClass1() 
 right2.left_association.left = Left2() 
 
 the association proxy then allows for MySubClass1() to be called 
 automatically and you can refer to “right2.left” directly. 
 
 Start with: 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/basic_relationships.html#association-object
  
 
 make that work completely, with the more verbose use pattern. 
 
 then when that is totally working and understood, then move onto association 
 proxy: 
 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/associationproxy.html 
 
 
 
  On Tuesday, March 31, 2015 at 4:11:51 PM UTC+2, Michael Bayer wrote: 
  
  Pierre B rocambol...@gmail.com wrote: 
  
   I made a typo in the Right model, here are the models again: 
  
  if you’re referring to the behavior of Right.left when you use it in a 
  query, such as query(Right).join(Right.left), then the “default” value of a 
  Column object has no interaction there. 
  
  it seems like you probably want to do something very simple here but I’m 
  not 
  getting enough information on what that is. If you could illustrate the 
  usage of the objects that you are looking for, that would help. 
  
   
  
  
  
   class HasSomeAttribute(object): 
   @declared_attr.cascading 
   def type(cls): 
  if has_inherited_table(cls): 
  if cls.__name__ == 'MySubClass1': 
  return db.Column(db.Integer, default=1) 
  else: 
  return db.Column

Re: [sqlalchemy] Postgres JSON/JSONB column NULL or null or None

2015-03-31 Thread Michael Bayer


joe meiring josephmeir...@gmail.com wrote:

 So I've got a model like such and using sqlalchemy v0.9.8
 
 class myModel(Base):
     id = Column(Integer, primary_key=True)
     border = Column(JSONB)
 How can I query for rows that don't have a border? I've tried:
 
 filter(myModel.border != None)  # nope
 filter(myModel.border != 'null')  # nope
 
 from sqlalchemy import null
 filter(myModel.border != null())  # nope

 The value is apparently stored in postgres as a JSON encoded null value. 
 Its definitely getting serialized back to a python None when instantiated, 
 but I have no idea how to query against it. It looks like you can set 
 none_as_null on the column, i.e.:
 
 Column(JSONB(none_as_null=True))
 Which replaces the JSON encoded null with a SQL null, but that seems strange 
 to have to do on all columns. What am I missing here?


The none_as_null flag doesn’t apply to comparisons right now (maybe it should), 
only to INSERT statements.   If you actually want to deal with the JSON NULL 
value explicitly and differentiate that from a column that has no value, you 
probably want to leave that flag off.

For comparison, assuming you are looking for a non-null value that is the JSON 
value ’null’, you can force the comparison with text:

print s.query(myModel).filter(myModel.border == text('null')).all()
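
and conversely, .is_(None) selects rows where the column is an actual SQL
NULL; roughly (a sketch, not tested against your schema):

from sqlalchemy import text

# rows whose border column holds the JSON-encoded null value
s.query(myModel).filter(myModel.border == text('null')).all()

# rows whose border column is a plain SQL NULL
s.query(myModel).filter(myModel.border.is_(None)).all()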






 
 
 
 


Re: [sqlalchemy] Inherited class column override

2015-03-30 Thread Michael Bayer


Pierre B rocambolesque...@gmail.com wrote:

 Hi all,
 
 I'm ultimately trying to have different default values for the same column. 
 Following the documentation, the @declared_attr.cascading decorator seems to 
 be the best approach.
 Here's my code:
 class HasSomeAttribute(object):
     @declared_attr.cascading
     def type(cls):
         if has_inherited_table(cls):
             if cls.__name__ == 'MySubClass1':
                 return db.Column(db.Integer, default=1)
             else:
                 return db.Column(db.Integer, default=2)
         else:
             return db.Column(db.Integer, default=0)
 
 class MyClass(HasSomeAttribute, db.Model):
     __tablename__ = 'people4l2'
     id = db.Column(db.Integer, primary_key=True)
 
 class MySubClass1(MyClass):
     pass
 
 class MySubClass2(MyClass):
     pass
 
 I iterated quite a few times over this but I'm systematically getting this 
 error:
 ArgumentError: Column 'type' on class <class '__main__.MySubClass1'> 
 conflicts with existing column 'people4l2.type'

this mapping illustrates MySubClass1 and MySubClass2 as both sharing the
same table “people4l2”, as they have no __tablename__ attribute, so there
can only be one “type” column. So in this case it is not appropriate to use
cascading in exactly this way, as MyClass already has a “type” column, and
that gets attached to the “people4l2” table and that’s it; there can be no
different “type” column on MySubClass1/MySubClass2.

If you’d like “type” to do something different based on which class is being
instantiated, this is an ORM-level differentiation. Use either the
constructor __init__() to set it or use the init() event
(http://docs.sqlalchemy.org/en/rel_0_9/orm/events.html?highlight=event%20init#sqlalchemy.orm.events.InstanceEvents.init).

OTOH if “type” is actually the “polymorphic discriminator”, which is what
this looks like, then you’d be looking to just set up “type” as the
“polymorphic_on” column and set up the “1”, “2”, “0” as the polymorphic
identity (see
http://docs.sqlalchemy.org/en/rel_0_9/orm/inheritance.html#single-table-inheritance
for a simple example).
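
A rough sketch of that, using your names (the rest of the mapping is
assumed):

class MyClass(db.Model):
    __tablename__ = 'people4l2'
    id = db.Column(db.Integer, primary_key=True)
    type = db.Column(db.Integer, nullable=False)

    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': 0
    }

class MySubClass1(MyClass):
    __mapper_args__ = {'polymorphic_identity': 1}

class MySubClass2(MyClass):
    __mapper_args__ = {'polymorphic_identity': 2}

the discriminator is then populated automatically per class at flush time, so
no per-class default is needed.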



Re: [sqlalchemy] is it possible to turn off DELETE cascades globally for a session?

2015-03-30 Thread Michael Bayer
it’s not an option available at the moment, though there might be ways to avoid 
the collection loads (artificially zeroing them out w/o history).

if you don’t need the relationship features then you might as well just use 
query.delete().
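
e.g. a sketch, with placeholder names:

# bulk DELETE emitted as plain SQL; no collections are loaded and no
# ORM-level cascades fire
session.query(MyModel).\
    filter(MyModel.parent_id == some_parent_id).\
    delete(synchronize_session=False)
session.commit()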



Jonathan Vanasco jonat...@findmeon.com wrote:

 I'm running a migration script, and the deletes via the ORM were creating a 
 performance issue.  (trying to load 4 relationships 9 million times adds up)
 
 I couldn't find any docs for this (just some references on update cascades) 
 so I sidestepped the issue by just running the Engine's delete on the table 
 directly.  I'm just wondering if I missed any docs.
 


Re: [sqlalchemy] Can't upgrade SQLAlchemy any more

2015-03-29 Thread Michael Bayer
this is a pip issue; first ensure you’re on the latest version of pip.  If the 
error persists, please report it to https://github.com/pypa/pip/issues.



Sibylle Koczian nulla.epist...@web.de wrote:

 Hello,
 
 I can't upgrade SQLAlchemy any more on just one of several Windows PCs. 
 Windows version is Windows 7 Starter Edition, 32 bit, Python version is 
 3.4.3, installed SQLAlchemy version is 0.9.7.
 
 With
 
 pip install --upgrade sqlalchemy
 
 I get this:
 
 Exception:
 Traceback (most recent call last):
   File "C:\Python34\lib\site-packages\pip\basecommand.py", line 232, in main
     status = self.run(options, args)
   File "C:\Python34\lib\site-packages\pip\commands\install.py", line 339, in run
     requirement_set.prepare_files(finder)
   File "C:\Python34\lib\site-packages\pip\req\req_set.py", line 235, in prepare_files
     req_to_install, self.upgrade)
   File "C:\Python34\lib\site-packages\pip\index.py", line 305, in find_requirement
     page = self._get_page(main_index_url, req)
   File "C:\Python34\lib\site-packages\pip\index.py", line 783, in _get_page
     return HTMLPage.get_page(link, req, session=self.session)
   File "C:\Python34\lib\site-packages\pip\index.py", line 872, in get_page
     "Cache-Control": "max-age=600",
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\sessions.py", line 473, in get
     return self.request('GET', url, **kwargs)
   File "C:\Python34\lib\site-packages\pip\download.py", line 365, in request
     return super(PipSession, self).request(method, url, *args, **kwargs)
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\sessions.py", line 461, in request
     resp = self.send(prep, **send_kwargs)
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\sessions.py", line 610, in send
     r.content
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\models.py", line 730, in content
     self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\models.py", line 655, in generate
     for chunk in self.raw.stream(chunk_size, decode_content=True):
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\packages\urllib3\response.py", line 256, in stream
     data = self.read(amt=amt, decode_content=decode_content)
   File "C:\Python34\lib\site-packages\pip\_vendor\requests\packages\urllib3\response.py", line 186, in read
     data = self._fp.read(amt)
   File "C:\Python34\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 54, in read
     self.__callback(self.__buf.getvalue())
   File "C:\Python34\lib\site-packages\pip\_vendor\cachecontrol\controller.py", line 217, in cache_response
     self.serializer.dumps(request, response, body=body),
   File "C:\Python34\lib\site-packages\pip\download.py", line 268, in set
     return super(SafeFileCache, self).set(*args, **kwargs)
   File "C:\Python34\lib\site-packages\pip\_vendor\cachecontrol\caches\file_cache.py", line 86, in set
     fh.write(value)
   File "C:\Python34\lib\site-packages\pip\_vendor\lockfile\__init__.py", line 245, in __exit__
     self.release()
   File "C:\Python34\lib\site-packages\pip\_vendor\lockfile\linklockfile.py", line 56, in release
     raise NotLocked("%s is not locked" % self.path)
 pip._vendor.lockfile.NotLocked: c:\users\sibylle\appdata\local\pip\cache\http\4\6\5\a\7\465a7ff360f3da62605e6797df3cf194b67c6a52ca200f6fcfb704a4 is not locked
 
 I tried to google the exception, but didn't get anything.
 
 Thanks for hints!
 


SQLAlchemy 1.0.0b4 released

2015-03-29 Thread Michael Bayer
SQLAlchemy release 1.0.0b4 is now available.

This release contains a handful of fixes and enhancements mostly
to address issues that were reported by beta testers.  Things
are looking very good and it's hoped that *maybe* this will be the
last beta, unless a host of new regressions are reported.

In preparation for 1.0.0, production installations that haven't yet been
tested in the 1.0 series should be making sure that their requirements
files are capped at 0.9.99, to avoid surprise upgrades.
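
e.g. something like this in a requirements file (illustrative; use whatever
pinning style your project already has):

SQLAlchemy>=0.9,<1.0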

Changelog for 1.0.0b4 is at:  

http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.0b4


SQLAlchemy 1.0.0b4 is available on the Download Page at 
http://www.sqlalchemy.org/download.html.







Re: [sqlalchemy] Process UNION results

2015-03-26 Thread Michael Bayer
I’d probably try to order_by(text(“union_date”)) so that it isn’t
interpreted as a column object.

If that doesn’t work, then the query here should use a simple Core union
object, and then a new query
session.query(my_union.c.union_date).distinct().order_by(my_union.c.union_date).
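
Roughly, as an untested sketch (the two selectables and the "union_date"
label stand in for whatever the original query unions together):

from sqlalchemy import union, select

my_union = union(
    select([table_a.c.created.label('union_date')]),
    select([table_b.c.modified.label('union_date')]),
).alias('my_union')

dates = (
    session.query(my_union.c.union_date)
    .distinct()
    .order_by(my_union.c.union_date)
    .all()
)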




pidev...@gmail.com wrote:

 Hi,
 
 I am trying to process UNION results - there is code, but I think it is 
 better to read it properly formatted, so I am attaching the url to Stack Overflow: 
 http://stackoverflow.com/questions/29208591/sqlalchemy-process-union-result
 
 I would be grateful if you could give me any hints how to deal with it.
 
 Thanks and cheers,
 TSz
 


Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-25 Thread Michael Bayer
I'm trying to avoid having to write a full example for you from scratch, so if
you could provide everything in one example, both models and where you want
the relationships, with all the columns, we can work from there, thanks.


Adrian adr...@planetcoding.net wrote:

 In case it's unclear what exactly I'm trying to do, here's the version with 
 the relationship defined right in the User model that works fine.
 I'd like to do this exact same thing, but somehow define the relationship 
 outside the User model. Preferably by using the normal declarative syntax to 
 define the association table and defining the relationship there.
 
 # in the User model:
 
 favorite_users = db.relationship(
 'User',
 secondary=favorite_user_table,
 primaryjoin=id == favorite_user_table.c.user_id,
 secondaryjoin=(id == favorite_user_table.c.target_id) & ~is_deleted,
 lazy=True,
 backref=db.backref('favorite_of', lazy=True),
 )
 
 # the association table:
 favorite_user_table = db.Table(
 'favorite_users',
 db.metadata,
 db.Column(
 'user_id',
 db.Integer,
 db.ForeignKey('users.users.id'),
 primary_key=True,
 nullable=False,
 index=True
 ),
 db.Column(
 'target_id',
 db.Integer,
 db.ForeignKey('users.users.id'),
 primary_key=True,
 nullable=False
 ),
 schema='users'
 )
 


Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-03-24 Thread Michael Bayer


Richard Gerd Kuesters | Pollux rich...@pollux.com.br wrote:

 hi all!
 
 i'm dealing with a little problem here. i have a parent table and its two 
 inheritances. there is a value that both children have and it must be unique 
 across both types. is there a way to move this column to the parent and use 
 a constraint in the child? my implementation is postgres 9.4+ with psycopg2 
 only.

if this is single table inheritance then the constraint would most ideally
be placed on the parent class.

if you’re trying to make this “magic” such that you can semantically keep
the unique constraints on the child classes, you’d need to build out a
conditional approach within @declared_attr. IMO I think this is an idealized
edge case that in the real world doesn’t matter much - just do what works
(put the col / constraint on the base).

the approach is described at
http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/inheritance.html#resolving-column-conflicts.
You’d need to make this work for both the column and the constraint.


 as a simple example (i'm just creating this example to simplify things), this 
 works:
 
 class MyParent(Base):
 
     foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
     foo_name = Column(Unicode(64), nullable=False)
     foo_type = Column(Integer, nullable=False)
 
     __mapper_args__ = {
         "polymorphic_on": foo_type,
         "polymorphic_identity": 0
     }
 
 
 class MyChild1(MyParent):
 
     foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
     bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
     child1_specific_name = Column(Unicode(5), nullable=False)
     child1_baz_stuff = Column(Boolean, default=False)
 
     __mapper_args__ = {
         "polymorphic_identity": 1
     }
 
     __table_args__ = (
         UniqueConstraint(bar_id, child1_specific_name),  # works, bar_id is in MyChild1
     )
 
 
 class MyChild2(MyParent):
 
     foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
     bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
     child2_specific_code = Column(UUID, nullable=False)
     child2_baz_stuff = Column(Float, nullable=False)
 
     __mapper_args__ = {
         "polymorphic_identity": 2
     }
 
     __table_args__ = (
         UniqueConstraint(bar_id, child2_specific_code),  # works, bar_id is in MyChild2
     )
 
 
 but i would like to do this, if possible:
 
 class MyParent(Base):
 
     foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
     foo_name = Column(Unicode(64), nullable=False)
     foo_type = Column(Integer, nullable=False)
     bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)  # since both children use bar_id, why not have it on the parent?
 
     __mapper_args__ = {
         "polymorphic_on": foo_type,
         "polymorphic_identity": 0
     }
 
 
 class MyChild1(MyParent):
 
     foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
     child1_specific_name = Column(Unicode(5), nullable=False)
     child1_baz_stuff = Column(Boolean, default=False)
 
     __mapper_args__ = {
         "polymorphic_identity": 1
     }
 
     __table_args__ = (
         UniqueConstraint(MyParent.bar_id, child1_specific_name),  # will it work?
     )
 
 
 class MyChild2(MyParent):
 
     foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
     child2_specific_code = Column(UUID, nullable=False)
     child2_baz_stuff = Column(Float, nullable=False)
 
     __mapper_args__ = {
         "polymorphic_identity": 2
     }
 
     __table_args__ = (
         UniqueConstraint(MyParent.bar_id, child2_specific_code),  # will it work?
     )
 
 
 well, will it work without being a concrete inheritance? :)
 
 
 best regards,
 richard.
 
 


Re: [sqlalchemy] Can't figure out multiple level secondary join

2015-03-24 Thread Michael Bayer


Andrew Millspaugh millspaugh.and...@gmail.com wrote:

 I've got a class hierarchy that looks something like this:
 
 [ A ] 1---* [ B ] 1---* [ C ] 1---* [ D ] 1---0..1 [ E ] 1..*---0..1 [ F ]
   org         proj        ticket      snap           bidlimit         ticketset
 
 And I'm trying to add a relationship from A to F with a backref. The 
 relationship definition (on the A model) looks like:
 
 f = DB.relationship('F',
 secondary=(
 'join(F, E, F.id == E.f_id)'
 '.join(D, E.d_id == D.id)'
 '.join(C, D.c_id == C.id)'
 '.join(B, C.b_id == B.id)'
 ),
 primaryjoin='A.id == B.a_id',
 secondaryjoin='E.f_id == F.id',
 backref=DB.backref('a', uselist=False), viewonly=True
 )
 
 Now, if I query A.f, I get all of the F's, instead of just the ones which 
 have a relationship with A. I'm sure I'm missing something simple, but I 
 can't seem to find it... Any help out there?

what does your SQL output say? Is this query(A).join(A.f) or the “f”
attribute on an existing “A”? I’m assuming you got this style from
http://docs.sqlalchemy.org/en/rel_0_9/orm/join_conditions.html#composite-secondary-joins,
note that its experimental and not well supported. In most cases you should
deal with individual relationships between each class.



Re: [sqlalchemy] Adding a listener on a backref

2015-03-24 Thread Michael Bayer


Cyril Scetbon cscet...@gmail.com wrote:

 I've already tried. But when I try to define a listener it says the "user" 
 attribute does not exist on Address, which is true as it's not defined. I 
 suppose there is a time when it's defined (when the relationship is actually 
 auto-generated) and maybe it's just the location of the event which is not 
 the right one.

two choices:

1. call configure_mappers() so that the backref resolves itself to the other
side.

2. use two distinct relationship() objects with back_populates instead. See
docs.sqlalchemy.org/en/rel_0_9/orm/backref.html.
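
A quick, untested sketch of option 2 plus the event, using the names from
your example:

from sqlalchemy import event

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address", back_populates="user")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship("User", back_populates="addresses")

@event.listens_for(Address.user, 'set')
def address_user_set(target, value, oldvalue, initiator):
    # runs whenever someone assigns address.user = some_user
    print("Address %r assigned to user %r" % (target, value))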


 Le mardi 24 mars 2015 14:23:00 UTC+1, Michael Bayer a écrit :
 the “user” backref here is a relationship() like any other, just specify 
 Address.user as the target of the event. 
 
 Cyril Scetbon csce...@gmail.com wrote: 
 
  Hi, 
  
  Is there a way to add an event listener on a backref ? 
  
  I have something like : 
  
  class User(Base): 
      __tablename__ = 'user' 
      id = Column(Integer, primary_key=True) 
      name = Column(String) 
      addresses = relationship("Address", backref="user") 
  
  class Address(Base): 
      __tablename__ = 'address' 
      id = Column(Integer, primary_key=True) 
      email = Column(String) 
      user_id = Column(Integer, ForeignKey('user.id')) 
  
  
  I'd like to add an action when a dev sets the attribute Address.user. 
  


Re: [sqlalchemy] Adding a listener on a backref

2015-03-24 Thread Michael Bayer
the “user” backref here is a relationship() like any other, just specify
Address.user as the target of the event.

Cyril Scetbon cscet...@gmail.com wrote:

 Hi,
 
 Is there a way to add an event listener on a backref ?
 
 I have something like :
 
 class User(Base):
     __tablename__ = 'user'
     id = Column(Integer, primary_key=True)
     name = Column(String)
     addresses = relationship("Address", backref="user")
 
 class Address(Base):
     __tablename__ = 'address'
     id = Column(Integer, primary_key=True)
     email = Column(String)
     user_id = Column(Integer, ForeignKey('user.id'))
 
 
 I'd like to add an action when a dev sets the attribute Address.user.
 


Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-03-24 Thread Michael Bayer
are these two separate constraints?  I just looked and it seems like they are 
distinct.

I just added a fix to 1.0 because someone was hacking around something similar 
to this.

The easiest way to get these for the moment is just to create the 
UniqueConstraint outside of the class definition.

class Foo(Base):
    # …

class Bar(Foo):
    # …

UniqueConstraint(Bar.x, Foo.y)

that way all the columns are set up, should just work.
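
In terms of your example that would look roughly like this (untested sketch;
single table inheritance assumed, so every column ends up on the parent's
table and the constraint can locate it there):

from sqlalchemy import Column, Integer, Unicode, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyParent(Base):
    __tablename__ = 'my_parent'
    foo_id = Column(Integer, primary_key=True)
    foo_type = Column(Integer, nullable=False)
    bar_id = Column(Integer, nullable=False)
    __mapper_args__ = {'polymorphic_on': foo_type, 'polymorphic_identity': 0}

class MyChild1(MyParent):
    child1_specific_name = Column(Unicode(5))
    __mapper_args__ = {'polymorphic_identity': 1}

# declared after the classes, once all the columns exist on the table
UniqueConstraint(MyParent.bar_id, MyChild1.child1_specific_name)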



Richard Gerd Kuesters | Pollux rich...@pollux.com.br wrote:

 well, understanding better the docs for column conflicts, can i use a 
 declared_attr in a unique constraint? if yes, my problem is solved :)
 
 
 On 03/24/2015 10:33 AM, Michael Bayer wrote:
 Richard Gerd Kuesters | Pollux 
 rich...@pollux.com.br
  wrote:
 
 
 hi all!
 
 i'm dealing with a little problem here. i have a parent table and its two 
 inheritances. there is a value that both children have and must be unique 
 along either types. is there a way to move this column to the parent and 
 use a constraint in the child? my implementation is postgres 9.4+ with 
 psycopg2 only.
 if this is single table inheritance then the constraint would most ideally
 be placed on the parent class.
 
 if you’re trying to make this “magic” such that you can semantically keep
 the unique constraints on the child classes, you’d need to build out a
 conditional approach within @declared_attr. IMO I think this is an idealized
 edge case that in the real world doesn’t matter much - just do what works
 (put the col / constraint on the base).
 
 the approach is described at
 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/inheritance.html#resolving-column-conflicts
 .
 You’d need to make this work for both the column and the constraint.
 
 
 
 as a simple example (i'm just creating this example to simplify things), 
 this works:
 
 class MyParent(Base):
 
 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)
 
 __mapper_args__ = {
 polymorphic_on: foo_type,
 polymorphic_identity: 0
 }
 
 
 class MyChild1(MyParent):
 
 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), 
 nullable=False)
 child1_specific_name = Column(Unicode(5), nullable=False)
 child1_baz_stuff = Column(Boolean, default=False)
 
 __mapper_args__ = {
 polymorphic_identity: 1
 }
 
 __table_args__ = (
 UniqueConstraint(bar_id, child1_specific_name,),  # works, bar_id 
 is in MyChild1
 )
 
 
 class MyChild2(MyParent):
 
 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), 
 nullable=False)
 child2_specific_code = Column(UUID, nullable=False)
 child2_baz_stuff = Column(Float, nullable=False)
 
 __mapper_args__ = {
 polymorphic_identity: 2
 }
 
 __table_args__ = (
 UniqueConstraint(bar_id, child2_specific_code,),  # works, bar_id 
 is in MyChild2
 )
 
 
 but i would like to do this, if possible:
 
 class MyParent(Base):
 
 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), 
 nullable=False)  # since both child uses bar_id, why not having it on the 
 parent?
 
 __mapper_args__ = {
 polymorphic_on: foo_type,
 polymorphic_identity: 0
 }
 
 
 class MyChild1(MyParent):
 
 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 child1_specific_name = Column(Unicode(5), nullable=False)
 child1_baz_stuff = Column(Boolean, default=False)
 
 __mapper_args__ = {
 polymorphic_identity: 1
 }
 
 __table_args__ = (
 UniqueConstraint(MyParent.bar_id, child1_specific_name,),  # will 
 it work?
 )
 
 
 class MyChild2(MyParent):
 
 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 child2_specific_code = Column(UUID, nullable=False)
 child2_baz_stuff = Column(Float, nullable=False)
 
 __mapper_args__ = {
 polymorphic_identity: 2
 }
 
 __table_args__ = (
 UniqueConstraint(MyParent.bar_id, child2_specific_code,),  # will 
 it work?
 )
 
 
 well, will it work without being a concrete inheritance? :)
 
 
 best regards,
 richard.
 
 

Re: [sqlalchemy] Can't figure out multiple level secondary join

2015-03-24 Thread Michael Bayer


Andrew Millspaugh millspaugh.and...@gmail.com wrote:

 Yes, I got the style from there. 
 
 I have a great great great great grandchild that I need to be able to access 
 by the great great great great grandparent id. How would you recommend doing 
 that, then? I don't want to have to write f_instances = 
 F.query.join(F.e).join(E.d).join(D.c).join(C.b).join(B.a).filter(A.id == 
 1).all() just to find the F's that have A.id 1.

If that’s all you need, why not put an accessor on F?

class F(Base):
    # …

    @classmethod
    def join_to_a(cls):
        return (F.e, E.d, D.c, C.b, B.a)


then:

F.query.join(*F.join_to_a())





 
 On Tuesday, March 24, 2015 at 6:28:31 AM UTC-7, Michael Bayer wrote:
 
 
 Andrew Millspaugh millspau...@gmail.com wrote: 
 
  I've got a class hierarchy that looks something like this: 
  
  [ A ] 1* [ B ] 1-* [ C ] 1--* [ D ] 10..1 [ 
  E ] 1..*--0..1 [ F ] 
  
  orgprojticketsnap   
  bidlimit  ticketset 
  
  And I'm trying to add a relationship from A to F with a backref. The 
  relationship definition (on the A model) looks like: 
  
  f = DB.relationship('F', 
  secondary=( 
  'join(F, E, F.id == E.f_id)' 
  '.join(D, E.d_id == D.id)' 
  '.join(C, D.c_id == C.id)' 
  '.join(B, C.b_id == B.id)' 
  ), 
  primaryjoin='A.id == B.a_id', 
  secondaryjoin='E.f_id == F.id', 
  backref=DB.backref('a', uselist=False), viewonly=True 
  ) 
  
  Now, if I query A.f, I get all of the F's, instead of just the ones which 
  have a relationship with A. I'm sure I'm missing something simple, but I 
  can't seem to find it... Any help out there? 
 
 what does your SQL output say? Is this query(A).join(A.f) or the “f” 
 attribute on an existing “A”? I’m assuming you got this style from 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/join_conditions.html#composite-secondary-joins,
  
 note that its experimental and not well supported. In most cases you should 
 deal with individual relationships between each class. 
 


Re: [sqlalchemy] Can't figure out multiple level secondary join

2015-03-24 Thread Michael Bayer


Andrew Millspaugh millspaugh.and...@gmail.com wrote:

 
 I'm pretty inexperienced with SQLAlchemy. I mostly want to know the best way 
 to deal with a relationship like this. I am trying to avoid adding a fake 
 relationship directly between F and A, as it could get out of sync with the 
 actual nested relationship. In general, if I have a sqlalchemy class for 
 which I know the instances will always be uniquely identified by a (great)^n 
 grandparent, how should I be accessing the grandparents from the grandchild 
 and vice versa?

The usual way is that SQLA just handles the simple relationships and you use
Python to bridge the gaps between the two; that is, methods and properties.
To get at d.c.b.a for each e, for example, you'd put a @property on E that
returns "d.c.b.a". You probably want to consider what SQL you want to see or
not, since a relationship that joins straight from F to A would get there
without loading any rows for E, D, C or B in between, so it may be more what
you want from a performance perspective.

There’s an extension called association proxy that is also used to build up
systems like these on the Python side in a potentially nicer way than using
straight methods and properties. Jumping across 5 gaps on a regular
basis is still a complex case no matter what.
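
As a sketch of the property approach (relationship names e/d/c/b/a assumed
from the earlier join example):

class F(Base):
    # ... columns and the F.e relationship as already mapped ...

    @property
    def a(self):
        # walks the existing one-hop relationships; each hop may emit a
        # lazy load if that object isn't loaded yet
        e = self.e
        return e.d.c.b.a if e is not None else None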



 On Tuesday, March 24, 2015 at 1:12:24 AM UTC-7, Andrew Millspaugh wrote:
 I've got a class hierarchy that looks something like this:
 
 [ A ] 1* [ B ] 1-* [ C ] 1--* [ D ] 10..1 [ E 
 ] 1..*--0..1 [ F ]
 
 orgprojticketsnap 
   bidlimit  ticketset
 
 And I'm trying to add a relationship from A to F with a backref. The 
 relationship definition (on the A model) looks like:
 
 f = DB.relationship('F',
 secondary=(
 'join(F, E, F.id == E.f_id)'
 '.join(D, E.d_id == D.id)'
 '.join(C, D.c_id == C.id)'
 '.join(B, C.b_id == B.id)'
 ),
 primaryjoin='A.id == B.a_id',
 secondaryjoin='E.f_id == F.id',
 backref=DB.backref('a', uselist=False), viewonly=True
 )
 
 Now, if I query A.f, I get all of the F's, instead of just the ones which 
 have a relationship with A. I'm sure I'm missing something simple, but I 
 can't seem to find it... Any help out there?
 
 


Re: [sqlalchemy] Can't figure out multiple level secondary join

2015-03-24 Thread Michael Bayer

Andrew Millspaugh millspaugh.and...@gmail.com wrote:

 Because I also want to be able to go the other way. I want to be able to get 
 the a attribute from any given F. 

well, then you’d put one on F also.

if you want to send working code and SQL we can see what's wrong with the
relationship, because the one in the docs works (I think).



 



 On Tuesday, March 24, 2015 at 4:47:42 PM UTC-7, Michael Bayer wrote:
 
 
 Andrew Millspaugh millspau...@gmail.com wrote: 
 
  Yes, I got the style from there. 
  
  I have a great great great great grandchild that I need to be able to 
  access by the great great great great grandparent id. How would you 
  recommend doing that, then? I don't want to have to write f_instances = 
  F.query.join(F.e).join(E.d).join(D.c).join(C.b).join(B.a).filter(A.id == 
  1).all() just to find the F's that have A.id 1. 
 
 If that’s all you need, why not put an accessor on F? 
 
 class F(Base): 
 # … 
 
@classmethod 
def join_to_a(cls): 
 return (F.e, E.d, D.c, C.b, B.a) 
 
 
 then: 
 
 F.query.join(*F.join_to_a()) 
 
 
 
 
 
  
  On Tuesday, March 24, 2015 at 6:28:31 AM UTC-7, Michael Bayer wrote: 
  
  
  Andrew Millspaugh millspau...@gmail.com wrote: 
  
   I've got a class hierarchy that looks something like this: 
   
   [ A ] 1* [ B ] 1-* [ C ] 1--* [ D ] 10..1 
   [ E ] 1..*--0..1 [ F ] 
   
   orgprojticketsnap 
 bidlimit  ticketset 
   
   And I'm trying to add a relationship from A to F with a backref. The 
   relationship definition (on the A model) looks like: 
   
   f = DB.relationship('F', 
   secondary=( 
   'join(F, E, F.id == E.f_id)' 
   '.join(D, E.d_id == D.id)' 
   '.join(C, D.c_id == C.id)' 
   '.join(B, C.b_id == B.id)' 
   ), 
   primaryjoin='A.id == B.a_id', 
   secondaryjoin='E.f_id == F.id', 
   backref=DB.backref('a', uselist=False), viewonly=True 
   ) 
   
   Now, if I query A.f, I get all of the F's, instead of just the ones which 
   have a relationship with A. I'm sure I'm missing something simple, but I 
   can't seem to find it... Any help out there? 
  
  what does your SQL output say? Is this query(A).join(A.f) or the “f” 
  attribute on an existing “A”? I’m assuming you got this style from 
  http://docs.sqlalchemy.org/en/rel_0_9/orm/join_conditions.html#composite-secondary-joins,
   
  note that its experimental and not well supported. In most cases you should 
  deal with individual relationships between each class. 
  


Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-24 Thread Michael Bayer


ThiefMaster adr...@planetcoding.net wrote:

 The is_deleted column is in the User table. If possible I'd rather avoid 
 having to replicate it in the favorite tables (hard-deleting favorites is 
 fine, I only need soft deletion for users).

the column can be on either side.

I think maybe you want to illustrate the actual attempts that aren’t working
if there’s some kind of help being sought here.





Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-24 Thread Michael Bayer


Adrian adr...@planetcoding.net wrote:

 @declared_attr
 def user(cls):
     """The user owning this favorite"""
     return db.relationship(
         'User',
         lazy=False,
         foreign_keys=lambda: [cls.user_id],
         backref=db.backref(
             '_favorite_users',
             lazy=True,
             cascade='all, delete-orphan',
             primaryjoin=lambda: '(User.id == user_id) & ~target.is_deleted'
         )
     )


the primaryjoin should be either the expression as a Python object (not a 
string), 
or a lambda that returns the expression object (not a string), or if you’re 
using 
declarative it can be a string that’s eval’ed. But not a lambda *and* a string 
at 
the same time.
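
That is, roughly one of these (sketch only; the join condition itself is
illustrative, not a verified fix for your models):

@declared_attr
def user(cls):
    return db.relationship(
        'User',
        foreign_keys=lambda: [cls.user_id],
        backref=db.backref(
            '_favorite_users',
            # either a callable returning the expression object...
            primaryjoin=lambda: (User.id == cls.user_id) & ~User.is_deleted,
            # ...or, under declarative, a plain string to be eval'ed later:
            # primaryjoin="and_(User.id == FavoriteUser.user_id, ~User.is_deleted)",
        )
    )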


 I've added it on the backref since that's the relationship where I want the 
 filter to apply.
 In the end I'd like to be able to do this: 
 User.query.get(123)._favorite_users which would get me a list of all the 
 favorite users (I'll be using association_proxy, but for now I need to get 
 the relationships themselves working) besides those users who have 
 is_deleted=True (on the User, not the FavoriteUser).
 
 But no matter what I put there (tried both lambdas and strings), I always get 
 this error (so I can't even try to figure out the correct criteria to use, 
 since it fails early, during mapper configuration time):
 sqlalchemy.exc.ArgumentError: Column-based expression object expected for 
 argument 'primaryjoin'; got: '(User.id == user_id) & ~target.is_deleted', 
 <type 'unicode'>

yeah that’s the lambda + string together which is not the correct use.


 
 
 Actually looking at this code again... it's almost a standard 
 many-to-many relationship, so I should probably be using secondary and 
 secondaryjoin somewhere. Can I define this backref-like, i.e. from within the 
 FavoriteUser model? That way I don't have to spread things around so much 
 (which would be the case if I defined the relationship in the User model).

the model here doesn’t illustrate how this would be a many-to-many.



Re: [sqlalchemy] Is it possible to add another criterion to this backref?

2015-03-24 Thread Michael Bayer


ThiefMaster adr...@planetcoding.net wrote:

 Hi,
 
 I have the following models for a favorite system:
 
 https://gist.github.com/ThiefMaster/e4f622d54c74ee322282
 
 Now I'd like to restrict the relationship that's created by the backref in 
 L24, so it doesn't include any favorited users which have the is_deleted 
 column set to true.
 I tried playing around with primaryjoin in the backref but couldn't get it 
 working.
 
 Is what I'm trying to do actually possible?

yes, if this had an is_deleted column (which it does not) you’d add that to
the primaryjoin (which I don’t see here), it would only select for that column.




 Cheers
 Adrian
 


Re: [sqlalchemy] sqlalchemy (0.9.7) double quoting python list items when used in where in statement

2015-03-23 Thread Michael Bayer


Edgaras Lukoševičius edgaras.lukosevic...@gmail.com wrote:

 A quick script with SQL table schema.

won’t work with MySQL. The ability to send a pure string SQL statement
directly to execute() in conjunction with a straight tuple, and have it
magically expand out to an IN, is a psycopg2 / postgresql specific feature.

 If tuple is a hack, then how should I make it work with the same logic, but 
 without the tuple? Construct a string from the tuple? I'm afraid that will be 
 double quoted (sanitized) too.

The SQLAlchemy Core is designed to produce SQL strings on the fly. You need
only call “column.in_(collection)” and the correct IN clause with bound
parameters will be generated dynamically.

see below:

from sqlalchemy import create_engine, select, Table, MetaData
from sqlalchemy.orm import sessionmaker

#engine = create_engine('mysql://root@localhost/preferences', echo=False)
engine = create_engine('mysql://scott:tiger@localhost/test', echo=True)

engine.execute("""
CREATE TABLE if not exists `preferences` (
 `recipient` varchar(255) COLLATE latin1_general_ci NOT NULL,
 `col1` tinyint(1) NOT NULL DEFAULT '1',
 `col2` tinyint(1) NOT NULL DEFAULT '1',
 `col3` tinyint(1) NOT NULL DEFAULT '1',
 PRIMARY KEY (`recipient`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
""")

m = MetaData()
t = Table("preferences", m, autoload=True, autoload_with=engine)

Session = sessionmaker(bind=engine)
session = Session()


recipients1 = ["recipient1"]
recipients2 = ["recipient1", "recipient2", "recipient3"]

sql_query = select([t.c.col1, t.c.col2, t.c.col3])

user_configs = \
session.execute(sql_query.where(t.c.recipient.in_(recipients1))).fetchall()

print(user_configs)

user_configs = \
session.execute(sql_query.where(t.c.recipient.in_(recipients2))).fetchall()


print(user_configs)


output:

BEGIN (implicit)
2015-03-23 16:08:57,188 INFO sqlalchemy.engine.base.Engine SELECT 
preferences.col1, preferences.col2, preferences.col3 
FROM preferences 
WHERE preferences.recipient IN (%s)
2015-03-23 16:08:57,188 INFO sqlalchemy.engine.base.Engine ('recipient1',)
[]
2015-03-23 16:08:57,189 INFO sqlalchemy.engine.base.Engine SELECT 
preferences.col1, preferences.col2, preferences.col3 
FROM preferences 
WHERE preferences.recipient IN (%s, %s, %s)
2015-03-23 16:08:57,189 INFO sqlalchemy.engine.base.Engine ('recipient1', 
'recipient2', 'recipient3')
[]






 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 
 engine = create_engine('mysql://root@localhost/preferences', echo=False)
 
 Session = sessionmaker(bind=engine)
 session = Session()
 
 #CREATE TABLE `preferences` (
 # `recipient` varchar(255) COLLATE latin1_general_ci NOT NULL,
 # `col1` tinyint(1) NOT NULL DEFAULT '1',
 # `col2` tinyint(1) NOT NULL DEFAULT '1',
 # `col3` tinyint(1) NOT NULL DEFAULT '1',
 # PRIMARY KEY (`recipient`)
 #) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
 
 
 recipients1 = ["recipient1"]
 recipients2 = ["recipient1", "recipient2", "recipient3"]
 
 sql_query = "SELECT col1, col2, col3 FROM preferences WHERE recipient IN :recipients"
 
 # multi value array
 # this fails because of double quotes
 user_configs = session.execute(sql_query, dict(recipients=tuple(recipients2))).fetchall()
 print user_configs
 
 # single value array
 # this fails because of mysql syntax error (tuple inserted incorrectly)
 user_configs = session.execute(sql_query, dict(recipients=tuple(recipients1))).fetchall()
 print user_configs
 
 
 
  
 
 On Monday, March 23, 2015 at 16:45:48 UTC+2, Edgaras Lukoševičius wrote:
 Hello,
 
 as I'm not receiving any responses on stackoverflow I will try here. Can 
 someone help me with this issue?
 
 http://stackoverflow.com/questions/29195825/sqlalchemy-double-quoting-list-items
 


Re: [sqlalchemy] auto column naming (declarative)

2015-03-23 Thread Michael Bayer
the before_parent_attach() event could do this, sure.  Might be a little tricky:

http://docs.sqlalchemy.org/en/rel_0_9/core/events.html?highlight=before_parent_attach#sqlalchemy.events.DDLEvents.before_parent_attach
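
A rough, untested sketch of that idea (the type-to-prefix mapping and listener
name below are illustrative, not from the original post; with declarative the
timing of name assignment can make this trickier than it looks):

from sqlalchemy import event, Column, DateTime, Unicode, UnicodeText

TYPE_PREFIXES = {
    DateTime: "dt_",
    Unicode: "u_",
    UnicodeText: "ut_",
}

@event.listens_for(Column, "before_parent_attach")
def _prefix_column_name(column, table):
    # rename at the DDL level only, based on the column's type
    if column.name is None:
        return
    for type_cls, prefix in TYPE_PREFIXES.items():
        if isinstance(column.type, type_cls) and not column.name.startswith(prefix):
            column.name = prefix + column.name
            break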

Richard Gerd Kuesters | Pollux rich...@pollux.com.br wrote:

 hi all!
 
 i remember bumping into this somewhere, but now that i need it, i can't find it. 
 Murphy ... well, here's the question:
 
 * the company i work for has a certain convention for naming columns at the 
 database level, like dt_ for datetime, u_ for unicode, ut_ for 
 unicodetext, and so on.
 
 the question is, can't I use an event listener to do this on my own? a type is 
 bound to a format, it's quite a simple dict. example:
 
 ...
 last_update = Column("dt_last_update", DateTime, onupdate=func, default=func)
 ...
 
 To:
 
 ...
 last_update = Column(DateTime, onupdate=func, default=func)  # "dt_last_update"
 # is automatically created since it's a DateTime type (at the database level only)
 ...
 
 
 best regards,
 richard.
 


Re: [sqlalchemy] Connections and the session

2015-03-21 Thread Michael Bayer


dcgh...@gmail.com wrote:

 Hello there,
 
 I have the following code structure (more or less) in several processes: 
 
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 
 engine = create_engine(some_mysql_dburi)
 session = sessionmaker(bind=engine)()
 
 while True:
query = session.query(...).filter(...)
# do something here from that query, update, insert new rows, etc.
session.commit()
 
 I am getting a "too many connections" error and it's not all the time, so I'm 
 not sure what it could be. Although there are several processes running, I 
 don't think I'm ever connected to the mysql server with more than the allowed 
 number of connections, which by default is about 150 I think, 
 and I'm not modifying the default value. So I must have some errors in this 
 layout of my code. Here are some questions about the session and the 
 connection(s) within it that I'd like to ask:
 1- How many open connections are maintained by a single session object? I 
 read somewhere it's only one, but that leads to my next question:

if only one engine is associated with it, then just one connection.

 2- Does the session close the connection being used or requests for another?

assuming the session isn’t in “autocommit” mode, the connection stays open
until you call commit(), rollback(), or close().

 If it requests for a new one, does it close the previous (i.e., return it to 
 the engine pool)?

just one connection at a time yup

 3- Should I call session.close() right after the session.commit() statement?

it’s not necessary, however all objects that are associated with that
session are still able to make it start up a new transaction if you keep
using those objects and hit upon unloaded attributes. close() would prevent
that.

 If I have to, do I have to put the session creation inside the while loop?

not the way it is above; after that commit(), the Session there isn’t
connected to anything. Unless the objects associated with it are being used
in other threads, or something like that.

 I read that when the session gets garbage collected the connection(s) is(are) 
 closed,

that happens also but the commit() is enough (and close() makes it sure).

 so I could do this, but I don't know if it is a good use of the session.
 
 I read the docs many times and I didn't find anything solid that answers those 
 questions for me; any help on the subject will be very appreciated, thanks in 
 advance.  

the best section is this one:
http://docs.sqlalchemy.org/en/rel_0_9/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it.
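
For completeness, a minimal sketch of the pattern that section describes (one
short-lived Session per unit of work, always closed); the URL and loop body
here are placeholders, not a prescription:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql://scott:tiger@localhost/test")
Session = sessionmaker(bind=engine)

while True:
    session = Session()
    try:
        # query / update / insert here
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()  # returns the connection to the pool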



[sqlalchemy] SQLAlchemy 1.0.0b3 released

2015-03-20 Thread Michael Bayer
SQLAlchemy release 1.0.0b3 is now available.

This is an emergency re-release of 1.0.0b2 to repair an erroneous commit, 
regarding the 
MySQL “utf8mb4” fix which was inadvertently commented out.

Download 1.0.0b3 at:

http://www.sqlalchemy.org/download.html




Re: [sqlalchemy] SQLAlchemy 1.0.0b2 released

2015-03-20 Thread Michael Bayer


Jonathan Vanasco jonat...@findmeon.com wrote:

 Wow.  This is noticeably faster.  

that’s good to hear, I was able to chip away at overhead in many areas
though most are relatively small.





Re: [sqlalchemy] ORM general concept question (how to combine all stuff together?)

2015-03-20 Thread Michael Bayer


Ivan Evstegneev bravo@gmail.com wrote:

 Hi Michael 
 
 Thanks for the reply.
 Since the moment I wrote this mail, I  got some more insight on how things 
 work in DBs. 
 Just kept to harvest some info from the NET. 
 
 Though thanks you for answers. 
 
 heh…why aren’t you asking on *their* list then ? :) 
 
 Already did ^_^ even prior to writing here. But, as it seems, they are just 
 a one-man army; I got a response from a developer, he asked me what my 
 project is about and then disappeared )

actually they are a two-man army. Here at SQLAlchemy Inc. we try to always
have a satisfied customer! (if at all possible).


 Again, thank you.
 
 
 Ivan.
 
 
 
 
 On Friday, March 20, 2015 at 1:30:19 AM UTC+2, Michael Bayer wrote:
 
 
 Ivan Evstegneev brav...@gmail.com wrote: 
 
  Hello, 
  
  
  First of all I'm new to DBs so please don't hang me on a tree ))) 
  
  
  I have basic and simple questions(at least it simple for me ^_^ ). 
  
  While googling and reading some tutorials about ORM DBs as a whole, I still 
  cannot build up a logical picture of how it works. 
 
 
 I think you might like some of the videos I have up at 
 http://www.sqlalchemy.org/library.html#talks. The “Session in Depth” gets 
 pretty step-by-step about the internals. 
 
 
  Here  is the brief example (I will use pony orm syntax cause it is readable 
  and simple  ) 
 
 heh…why aren’t you asking on *their* list then ? :) 
 
 
 
  
  Lets begin. 
  
  First of all we need to initialize a DB, like this: 
  
   db = Database('sqlite', '/path/to/the/test_db.sqlite', create_db=True) 
  
  This command says that we've created a DB file named test_db.sqlite and 
  it is based on sqlite. 
  
  Then, let's say  I'll create two classes(connected via cars attribute): 
  
  class Person (db.Entity): 
name = Required(str) 
age = Required(int) 
cars = Set(Car) 
  ... 
  
  class Car(db.Entity): 
   make = Required(str) 
   model = Required(str) 
   owner = Required(Person) 
  ... 
  
  Finally I make a mapping between these classses and test_db.sqlite 
  
  db.generate_mapping(create_tables=True) 
  
  
  So, at this moment I have some classes stored in the memory (cause I worked 
  in python interactive shell) and physical test_db.sqlite placed on my 
  hard drive. 
  This file actually contains empty tables, am I right? 
 
 um…well if create_tables is like create_all(), then yes. go to a shell and 
 type “sqlite3 name_of_yourfile”, you can query it directly. 
 
  Because I haven't initialized any entities yet. 
  I can keep working via the interactive console in order to accomplish this task 
  and then just write the commit() command in order to update the test_db.sqlite 
  file. 
  
  Till this point everything looks fine.(I hope) 
  
  The question arises when I ask myself how all this stuff should work with 
  my code. 
  
  I mean that I need to write some functions that will update my_db data. 
  
  Suppose  I have my_main_routine.py which, among the other things, imports 
  some data from xls files. But how do I actually put these xls values in my 
  data base? 
  
  Should it look like that: 
  
  my_main_routine.py 
  my_db_classes.py  which will consist of all the classes I've created 
  before.(i.e. Person and Car) 
  test_db.sqlite  suppose I've already mapped my classes to this file. 
 
 if you’re writing a script to read an .xls file, I wouldn’t overthink it to 
 start. If you’re going to use multiple files, I’d advise using a traditional 
 “package” setup, e.g. with an __init__.py and all that. But an .xls reader 
 just to prototype probably can be in one file to start. 
 
  How do I handle all my these db-classes inside of main_routine? 
  
  Does it look like that(generally): 
  
  # my_main_routine.py 
  
  import my_db_classes 
  
  # db binding commands: 
  db.bind('sqlite', '/path/to/the/test_db.sqlite', create_db=True) 
  
  . my_code_ for_importing_xls_values... 
  ..my_code_ for_importing_xls_values... 
  ..my_code_ for_importing_xls_values... 
  ..my_code_ for_importing_xls_values... 
  
  # now I need to pass these xls values to my db 
  # so should it be written like that? -- 
  
  p1 = Person(name = 'John', age = 20) 
  p2 = Person(name = 'Mary' , age = 23) 
  c1 = Car(make='Toyota', model = 'Prius', owner=p2) 
  c2 = Car(make='Ford', model='Exploler', owner=p1) 
 
 that’s an easy way to do it, sure. There are ways that people might want to 
 automate how the names get matched up, but if you’re starting out, I 
 think a name-by-name basis works; you’d have a loop that iterates through 
 rows in the XLS, and for each row it makes a bunch of objects. 
 
  #and then just 
  
  commit() 
  
  #is that all? I just work directly with classes (in SQLAlchemy for 
  particularly) or I need some decorators/ other stuff? 
 
 that is all, if you put things in the session with session.add() first

[sqlalchemy] SQLAlchemy 1.0.0b2 released

2015-03-20 Thread Michael Bayer
SQLAlchemy release 1.0.0b2 is now available.

This is the second pre-release in the 1.0.0 series which includes a small
handful of regression fixes and some additional features. We'd like to thank
those who have been beta testing the 1.0.0 series and encourage all beta
testers to upgrade to beta2!

Users should carefully review the Migration Document [1] for 1.0 for the 1.0
series overall, as well as the Changelog [2] to note which behaviors and issues
are affected in this release. We'd like to thank the many contributors who
helped with this release.

SQLAlchemy 1.0.0b2 is available on the Download Page [3].

[1] http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html
[2] 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-1.0.0b2
[3] http://www.sqlalchemy.org/download.html



Re: [sqlalchemy] Tests hang at test_ora8_flags

2015-03-19 Thread Michael Bayer


Tony Locke tlo...@tlocke.org.uk wrote:

 Hi, I've encountered a glitch when running py.test over the entire test suite 
 with the postgresql+pg8000 dialect:
 
 py.test --dburi postgresql+pg8000://postgres:xxx@localhost:5436/test
 
 the test hangs at:
 
 test/dialect/test_oracle.py::CompatFlagsTest::test_ora8_flags PASSED
 
 then if I do a Ctrl-C it carries on and reports all tests as having passed. 
 The versions are:
 
 platform linux -- Python 3.4.0 -- py-1.4.26 -- pytest-2.6.4
 pg8000 1.10.2 (works fine with the previous version 1.10.1)
 the sqlalchemy version is the latest commit on the master branch (commit 
 cd076470baf2fce0eebf5853e3145d96a9d48378)
 
 One really odd thing is that if I run py.test with the -s option (don't 
 capture output) then it runs fine. Also, if I run just 
 test/dialect/test_oracle.py then it runs okay, it's only  a problem when 
 running all tests. Any ideas gratefully received!

usually what happens here is a test doesn’t clean up a result set after
itself, the test ends and that connection stays floating, e.g. not checked
in. then the teardown of the test suite can’t drop the tables.

I usually view this by just doing a “ps -ef | grep post”, and the fix is to
figure out which test isn’t cleaning up after itself, and then adjusting.
this can be tricky, like, it might not be that particular test you see, it
could be some other test.

But figuring out where it's locking, which 99% of the time is on a DROP, is a
good first step.
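
As a hedged illustration only (not from the thread), one way to see what the
test database is doing while the suite hangs is to query PostgreSQL's
pg_stat_activity view from a separate connection; column names vary a bit
between PostgreSQL versions:

from sqlalchemy import create_engine

engine = create_engine("postgresql+pg8000://postgres:xxx@localhost:5436/test")
with engine.connect() as conn:
    rows = conn.execute(
        "SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle'")
    for row in rows:
        print(row)  # a DROP TABLE stuck in 'active' is the usual suspect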

As to why this might have changed, if pg8000 made any changes that make it
less likely that a dereferenced cursor or connection would be garbage
collected, like a reference cycle, that could trigger this kind of thing.



 Thanks,
 
 Tony.
 
 


Re: [sqlalchemy] How to output SQLAlchemy logger only to a file?

2015-03-19 Thread Michael Bayer


r...@rosenfeld.to wrote:

 From what I read in the SQLAlchemy logging configuration documentation, I 
 understood that the echo argument on sqlalchemy.create_engine controls 
 whether sqlalchemy logging is forced to stdout, but shouldn't affect whether 
 log messages are available to log handlers.
 
 In the code below, I get no output to stdout OR db.log if echo=False and I 
 get output to both stdout AND db.log if echo=True. I want nothing to stdout 
 while db.log is still populated. How can I accomplish that?

Hmm well logging is looking at the effective level of the logger
itself, not the handler, so you’d need db_logger.setLevel(db_log_level). I
think the “level” that’s on the handler is an additional level of filtering.

The effective level of a logger you can see like this:

>>> import logging
>>> log = logging.getLogger("asdf")
>>> log.getEffectiveLevel()
30

which we can see is WARN:

>>> logging.WARN, logging.INFO, logging.DEBUG
(30, 20, 10)


the “echo=True” flag sets up this level if not set already.
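
A minimal sketch of that suggestion applied to the script quoted below
(assuming the same db.log file and 'sqlalchemy' logger names; this is
illustrative, not the only arrangement):

import logging
import sqlalchemy

db_logger = logging.getLogger('sqlalchemy.engine')
db_logger.setLevel(logging.INFO)          # set the effective level explicitly
db_logger.addHandler(logging.FileHandler('db.log'))

engine = sqlalchemy.create_engine(
    'postgres://user:pass@localhost/log_test', echo=False)
engine.connect()                          # SQL now goes to db.log, not stdout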




 
 This is python 2.7.6 and sqlalchemy 0.9.9
 
 import sqlalchemy
 import logging
 
 active_db_url = 'postgres://user:pass@localhost/log_test'
 
 db_log_file_name = 'db.log'
 db_log_level = logging.INFO
 
 db_handler = logging.FileHandler(db_log_file_name)
 db_handler.setLevel(db_log_level)
 
 db_logger = logging.getLogger('sqlalchemy')
 db_logger.addHandler(db_handler)
 
 engine = sqlalchemy.create_engine(active_db_url, echo=True)
 engine.connect()
 
 Cross posted from 
 http://stackoverflow.com/questions/29114627/how-to-output-sqlalchemy-logger-only-to-a-file
  
 
 Thanks in advance,
 
 Rob
 
 


Re: [sqlalchemy] ORM general concept question (how to combine all stuff together?)

2015-03-19 Thread Michael Bayer


Ivan Evstegneev bravo@gmail.com wrote:

 Hello,
 
 
 First of all I'm new to DBs so please don't hang me on a tree )))
 
 
 I have basic and simple questions(at least it simple for me ^_^ ).
 
 While googling and reading some tutorials about ORM DBs as a whole, I still 
 cannot build up a logical picture of how it works. 


I think you might like some of the videos I have up at
http://www.sqlalchemy.org/library.html#talks. The “Session in Depth” gets
pretty step-by-step about the internals.


 Here  is the brief example (I will use pony orm syntax cause it is readable 
 and simple  )

heh…why aren’t you asking on *their* list then ? :)



 
 Lets begin.
 
 First of all we need to initialize a DB, like this:
 
  db = Database('sqlite', '/path/to/the/test_db.sqlite', create_db=True)
 
 This command says that we've created a DB file named test_db.sqlite and it 
 is based on sqlite.
 
 Then, let's say  I'll create two classes(connected via cars attribute):
 
 class Person (db.Entity):
   name = Required(str)
   age = Required(int)
   cars = Set(Car)
 ...
 
 class Car(db.Entity):
  make = Required(str)
  model = Required(str)
  owner = Required(Person)
 ...
 
 Finally I make a mapping between these classses and test_db.sqlite
 
 db.generate_mapping(create_tables=True)
 
 
 So, at this moment I have some classes stored in the memory (cause I worked 
 in python interactive shell) and physical test_db.sqlite placed on my hard 
 drive.
 This file actually contains empty tables, am I right?

um…well if create_tables is like create_all(), then yes. go to a shell and
type “sqlite3 name_of_yourfile”, you can query it directly.

 Because I haven't initialized any entities yet.
 I can keep working via the interactive console in order to accomplish this task 
 and then just write the commit() command in order to update the test_db.sqlite 
 file.
 
 Till this point everything looks fine.(I hope)
 
 The question arises when I ask myself how all this stuff should work with my 
 code.
 
 I mean that I need to write some functions that will update my_db data. 
 
 Suppose  I have my_main_routine.py which, among the other things, imports 
 some data from xls files. But how do I actually put these xls values in my 
 data base?
 
 Should it look like that:
 
 my_main_routine.py
 my_db_classes.py  which will consist of all the classes I've created 
 before.(i.e. Person and Car)
 test_db.sqlite  suppose I've already mapped my classes to this file. 

if you’re writing a script to read an .xls file, I wouldn’t overthink it to
start. If you’re going to use multiple files, I’d advise using a traditional
“package” setup, e.g. with an __init__.py and all that. But an .xls reader
just to prototype probably can be in one file to start.

 How do I handle all my these db-classes inside of main_routine?
 
 Does it look like that(generally):
 
 # my_main_routine.py
 
 import my_db_classes
 
 # db binding commands:
 db.bind('sqlite', '/path/to/the/test_db.sqlite', create_db=True)
 
 . my_code_ for_importing_xls_values...
 ..my_code_ for_importing_xls_values...
 ..my_code_ for_importing_xls_values...
 ..my_code_ for_importing_xls_values...
 
 # now I need to pass these xls values to my db
 # so should it be written like that? --
 
 p1 = Person(name = 'John', age = 20)
 p2 = Person(name = 'Mary' , age = 23)
 c1 = Car(make='Toyota', model = 'Prius', owner=p2)
 c2 = Car(make='Ford', model='Exploler', owner=p1)

that’s an easy way to do it, sure. There are ways that people might want to
automate how the names get matched up, but if you’re starting out, I
think a name-by-name basis works; you’d have a loop that iterates through
rows in the XLS, and for each row it makes a bunch of objects.

 #and then just
 
 commit()
 
 #is that all? I just work directly with classes (in SQLAlchemy for 
 particularly) or I need some decorators/ other stuff?

that is all, if you put things in the session with session.add() first, then
you just commit and it flushes them.
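
As a hedged sketch of that flow in SQLAlchemy terms (Person/Car stand in for
your mapped classes, the rows list stands in for the parsed .xls data, and the
engine is assumed to come from create_engine() as above; none of this is from
the original post):

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

rows = [
    {'name': 'John', 'age': 20, 'make': 'Ford', 'model': 'Explorer'},
    {'name': 'Mary', 'age': 23, 'make': 'Toyota', 'model': 'Prius'},
]
for row in rows:
    owner = Person(name=row['name'], age=row['age'])
    session.add(owner)
    session.add(Car(make=row['make'], model=row['model'], owner=owner))

session.commit()  # flushes the pending objects and commits the transaction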

 EOF
 
 
 
 Furthermore, as far as I can understand, in order to work with the db inside the 
 my_routine file I need (preferably) to create separate files, i.e.:
 
 my_db_classes.py
 my_db_initial_mapper.py
 # in this file my db_mapping functions should be placed
 # as a result the my_db.sqlite will be created.
 
 and then my main code should look like this:
 
 # my_main_routine.py
 import my_db_classes
 import my_db_initial_mapper
 
 #binding functions may be placed in main_routine file(right?)
 
 .. binding code.. 
 
 ..some xls related code.
 
 ..entities assignment code.
 
 commit()
 
 EOF
 
 Is this right? 

that could work sure.


 
 
 I think my problem appeared because a lot of examples about databases are 
 using the interactive prompt, so it's kinda tricky to combine all of it when 
 talking about python modules.

You might want to poke around for 

Re: [sqlalchemy] select_from() and single table inheritance

2015-03-19 Thread Michael Bayer


Julio César Gázquez julio_li...@mebamutual.com.ar wrote:

 Hi.
 
 I have an inheritance hierarchy on the form, C inherits B (with single table 
 inheritance), B inherits A (with joined table inheritance).
 
 The next query works perfectly:
 
 session.query(C)\
.join(C.related_objects)
.filter( ... )  # Unrelated filtering conditions
 
 as the resulting SQL does the join between the tables for A.__tablename__ and 
 B.__tablename__, and filters B.__tablename__ using the discriminator column 
 in the WHERE.
 
 However, I need to do this:
 
 session.query(func.sum(RelatedTable.value))\
.select_from(C)\
.join(C.related_objects)\
.filter( ... )\  # Unrelated filtering conditions
.one()
 
 While the select_from() still does the A/B join correctly, the filtering 
 condition is missing, so select_from(C) becomes the same as select_from(B).

did you try 1.0.0b1 for this?   The mechanics for single-table have been 
improved in this regard.




 
 I'm getting the right result using an explicit filter condition 
 filter(C.discriminator == C_DISCRIMINATOR), but I guess I'm missing 
 something.
 
 Is this a kind of special case where some extra syntax is required? I didn't 
 find anything relevant in the docs, neither in inheritance nor select_from() 
 parts.
 
 Thanks in advance.
 
 Julio.
 


[sqlalchemy] Alembic 0.7.5 released

2015-03-19 Thread Michael Bayer
Hey all - 

Alembic 0.7.5 is now available.  

This release has a handful of bug fixes and some new features.

Changelog is available at:  
http://alembic.readthedocs.org/en/latest/changelog.html#change-0.7.5

Download Alembic 0.7.5 at:  https://pypi.python.org/pypi/alembic/0.7.5.post1

Note there is no “0.7.5” release, because I failed to upload the PGP file
and Pypi has a new policy that filenames can never be used again once
uploaded.  So 0.7.5.post1 is 0.7.5.



Re: [sqlalchemy] JSONB double casting

2015-03-18 Thread Michael Bayer


Brian Findlay brian.m.find...@gmail.com wrote:

 I'm having some difficulty using SQLAlchemy's jsonb operators to produce my 
 desired SQL.
 
 Intended SQL:
 
 SELECT *
 FROM foo
 WHERE foo.data->'key1' ? 'a'
 
 ...where `foo.data` is formatted like this:
 
 {
 'key1': ['a', 'b', 'c'],
 'key2': ['d', 'e', 'f']
 }
 
 So, I'm trying to find records where the array associated with `key1` 
 contains some value, 'a' in this case.
 
 I thought it'd be a straightforward query, like:
 
 sess.query(Foo).filter(Foo.data['key1'].has_key('a')).all()
 
 But this is yielding:
 
 AttributeError: Neither 'JSONElement' object nor 'Comparator' object has 
 an attribute 'has_key'
 
 So I changed the query to:
 
 sess.query(Foo).filter(Foo.data['key1'].cast(JSONB).has_key('a')).all()
 
 But this query produces the following SQL statement:
 
 SELECT *
 FROM foo
 WHERE (foo.data->'key1')::JSONB ? 'a'

try using the type_coerce() function instead of cast, it should give you the
has_key() but won’t change the SQL.   (type_coerce(Foo.data['key1'], 
JSONB).has_key('a'))

just a guess.
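
Spelled out as a hedged, untested sketch (imports per the 0.9-era names):

from sqlalchemy import type_coerce
from sqlalchemy.dialects.postgresql import JSONB

sess.query(Foo).filter(
    # type_coerce treats the expression as JSONB without rendering a CAST
    type_coerce(Foo.data['key1'], JSONB).has_key('a')
).all()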





Re: [sqlalchemy] Loading of dependent objects performance issue

2015-03-17 Thread Michael Bayer


Cyril Scetbon cscet...@gmail.com wrote:

 Hi,
 
 After having read 
 http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html I 
 understand there is one case where SQL is not emitted and I was expecting 
 that my case was this one.
 
 I use polymorphism to store different objects in the same table (only one 
 type displayed here) as follows :
 
 class BatchRecord(db.Model):
 __tablename__ = 'batch_record'
 
 id = db.Column(db.Integer, primary_key=True, nullable=False)
 batch_id = db.Column(db.Integer, db.ForeignKey('batch.id'), 
 nullable=False)
 type = db.Column(db.String(15))
 created = db.Column(db.DateTime, default=datetime.utcnow)
 modified = db.Column(db.DateTime, default=datetime.utcnow, 
 onupdate=datetime.utcnow)
 
 class Batch(db.Model):
 __tablename__ = 'batch'
 
 id = db.Column(db.Integer, primary_key=True, nullable=False)
 source = db.Column(db.String(10))
 created = db.Column(db.DateTime, default=datetime.utcnow)
 modified = db.Column(db.DateTime, default=datetime.utcnow, 
 onupdate=datetime.utcnow)
 
 batch_records = db.relationship('BatchRecord', 
 cascade='all,delete-orphan')
 
 accounts = db.relationship('Account',
 primaryjoin=and_(Batch.id == BatchRecord.batch_id, BatchRecord.type 
 == 'account'),
  backref='batch')

this primaryjoin is not necessary. Because you are creating a relationship
to “Account”, it will query out to a JOIN of the batch_record and account
tables automatically which will limit the rows to those with a type of
“account”. Only if you’re trying to exclude rows from some other class that
is a subclass of “Account” would this be at all necessary but that would be
unusual.

 class Account(BatchRecord):
 __tablename__ = 'account'
 
 id = db.Column(db.Integer, db.ForeignKey('batch_record.id'), 
 primary_key=True, nullable=False)
 uuid = db.Column(UUID, nullable=False)
 role = db.Column(db.String(15), nullable=False)
 first_name = db.Column(db.String(40))
 last_name = db.Column(db.String(40))
 email = db.Column(db.String(80))
 phone = db.Column(db.String(40))
 cohort = db.Column(db.String(255))
 
 The code I use is similar to the following :
  
 (1) batch = Batch.query.filter(Batch.id == 50048).first()
 (2) a0 = batch.accounts[0]
 (3) a0.batch.source
 (4) a1 = batch.accounts[1]
 (5) a1.batch.source
 
 at (1) SqlAlchemy requests the database (PostgreSQL) to get the batch object 
 using the query
 
 SELECT batch.id AS batch_id, batch.source AS batch_source, batch.created AS 
 batch_created, batch.modified AS batch_modified
   FROM batch
   WHERE batch.id = 50048
LIMIT 1
 
 at (2)  it does the following query to get account objects :
 
 SELECT account.id AS account_id, batch_record.id AS batch_record_id, 
 batch_record.batch_id AS batch_record_batch_id, batch_record.type AS 
 batch_record_type, batch_record.created AS batch_record_created, 
 batch_record.modified AS batch_record_modified, account.uuid AS account_uuid, 
 account.role AS account_role, account.first_name AS account_first_name, 
 account.last_name AS account_last_name, account.email AS account_email, 
 account.phone AS account_phone, account.cohort AS account_cohort
   FROM batch_record JOIN account ON batch_record.id = account.id
   WHERE 50048 = batch_record.batch_id AND batch_record.type = 'account'
 
 and the issue is at (3). It does a the new following query :
 
 SELECT batch.id AS batch_id, batch.source AS batch_source, batch.created AS 
 batch_created, batch.modified AS batch_modified
   FROM batch
   WHERE batch.id = 50048 AND 'account' = ‘account'

this is occurring because the relationship is not aware that this is a
so-called “simple many-to-one”, for which it can do a straight primary key
lookup in the identity map. Because your custom “primaryjoin” condition is
also shared on the many-to-one side, it assumes there is special SQL that must 
be
emitted to ensure the correct results, and a simple identity lookup is not
possible.

the two configurations that will solve this issue are:


accounts = db.relationship('Account', backref='batch')

or alternatively, if you really wanted to keep that primaryjoin:

accounts = db.relationship('Account',
primaryjoin=and_(Batch.id == BatchRecord.batch_id, BatchRecord.type == 
'account'),
backref=backref('batch', primaryjoin=None))



 
  SqlAlchemy should know using account.batch_id (which was stored in the 
 object at (2)) that it references the batch object requested at (1) and 
 should not request the database again to get information it already has (at 
 (1)).
 
 at (4) it does not request the database but at (5) it requests again the 
 database for the same object :
 
 SELECT batch.id AS batch_id, batch.source AS batch_source, batch.created AS 
 batch_created, batch.modified AS batch_modified
   FROM batch
   WHERE batch.id = 50048 AND 'account' = 'account'
 
 The matter is that we 

Re: [sqlalchemy] unhandled data type cx_Oracle.LOB

2015-03-16 Thread Michael Bayer


GP pandit.gau...@gmail.com wrote:

 OK, is “cancellation_obj” a column object with CLOB as the datatype ? 
 
 Yes, that's how it's defined in the database.
 
 Because of the dynamic nature of the code, I was using append_column without 
 specifying the column type. I made changes to define the column in 
 table.c.column_name format rather than just using Column('column name'). 
 This way, I can make sure column data types are included with column 
 definitions, without me having to specify the data type explicitly with each 
 column.
 
 It's interesting that I used that one way (out of three possible ways) that 
 wasn't 'right', but it's all good now :)
 
 Now onto changing from fetchmany() to fetchone() - since LOBs are pretty much 
 forcing me to use fetchone().

OK, if you were to get the CLOB types working correctly, SQLAlchemy’s result 
proxy works around that issue also, by fetching rows in chunks and converting 
the LOB objects to strings while they are still readable, so you could keep 
with the fetchmany() calls.
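
For illustration, a hedged sketch of a dynamically built query that still
carries the CLOB type, so the result proxy can do that conversion (table and
column names are taken from the thread, the rest is an assumption):

from sqlalchemy import select, CLOB
from sqlalchemy.sql import table, column

source = table('contract_cancellation',
               column('contract_id'),
               column('cancel_dt'),
               column('cancellation_obj', CLOB))

select_query = select([source.c.contract_id,
                       source.c.cancel_dt,
                       source.c.cancellation_obj])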




 
 Thank you for your help!
 GP
 
 On Monday, March 16, 2015 at 5:54:54 PM UTC-4, Michael Bayer wrote:
 
 
 GP pandit...@gmail.com wrote: 
 
  So that's what was happening: 
  
  This select construct fails: 
  select_query = select() 
  select_query.append_column(contract_id) 
  select_query.append_column(cancel_dt) 
  select_query.append_column(cancellation_obj) 
  select_query.append_from(source_table_name) 
  
  
  But this select construct works: 
  select_query = select([source_table.c.contract_id, 
  source_table.c.cancel_dt, source_table.c.cancellation_quote_obj]) 
  
  So it's just matter of rewriting select query in the 'right' way. 
  
  Thanks for pointing in the right direction! 
 
 OK, is “cancellation_obj” a column object with CLOB as the datatype ? 
 
 even if you just made it this: 
 
 from sqlalchemy.sql import column 
  append_column(column('cancellation_obj', CLOB)) 
 
 that should work. 
 
 
 otherwise, what’s interesting here is that adding a “column” without a datatype 
 both bypasses the usual Table metadata feature, and also bypasses the logic that 
 is in place to intercept the CLOB when the query is a totally plain text SQL 
 string.  the recipe above managed to avoid both. 
 
 
 
  GP 
  
  On Monday, March 16, 2015 at 4:57:28 PM UTC-4, GP wrote: 
  I think now I (probably) know where this may be coming from. 
  
  You asked 
   is the original query a plain string and not a Core SQL expression 
  
  The way I am forming the query is by using select(), append_column, 
  append_whereclause and finally append_from('my_table'). I think this pretty 
  much generates a plain string query and not the one that's tied to a 
  sqlalchemy table type object. And this may be why sqlalchemy is not 
  applying necessary conversion because it doesn't really know the data types 
  of the columns I am selecting? 
  
  Apologies if I am simplifying this too much and/or talking nonsense. 
  
  Thanks 
  GP 
  
  On Monday, March 16, 2015 at 3:49:32 PM UTC-4, GP wrote: 
  Thank you Michael. 
  
  auto_convert_lobs : I ran with all three possible values: True, False, and 
  without supplying it. The results are the same. 
  
  The original query is a bit more complicated than the example I gave, and 
  is built dynamically. But I am using sqlalchemy select, and not a plain 
  string. Query is of object type sqlalchemy.sql.selectable.Select 
  (Or sqlalchemy.sql.expression.Select?), if it helps. 
  
  Here is what the query object value looks like: 
  SELECT CAST(contract_id AS FLOAT) AS contract_id, cancel_dt AS cancel_dt, 
  cancellation_obj AS cancellation_obj FROM contract_cancellation WHERE 
  updated_ts BETWEEN :updated_ts_1 AND :updated_ts_2 
  
  Let me try calling value(). 
  
  
  Thanks 
  GP 
  

Re: [sqlalchemy] unhandled data type cx_Oracle.LOB

2015-03-16 Thread Michael Bayer


GP pandit.gau...@gmail.com wrote:

 That's what I thought, and it works, but there seems to be a difference in 
 how the resultset is handled when you select a LOB column.
 
 Here is a basic script that selects records from a source table which has 36 
 rows. It fetches 10 records at a time.
 
 from sqlalchemy import Table, select, create_engine, MetaData
 
 engine = create_engine('oracle+cx_oracle://xxx:yyy@zzz')
 conn = engine.connect()
 metadata = MetaData()
 metadata.bind = conn
 
 source_table = Table('contract_cancellation_test', metadata, autoload=True)
 target_table = Table('contract_cancellation_test_s', metadata, autoload=True)
 
 # Query 1 : without selecting LOB  : Works fine
 #select_query = select([source_table.c.contract_id, source_table.c.cancel_dt])
 
 # Query 2 : selecting the cancellation_quote LOB column : Fails in the last fetchmany 
 # because query_rs is closed
 select_query = select([source_table.c.contract_id, source_table.c.cancel_dt, 
 source_table.c.cancellation_obj])
 
 query_rs = conn.execute(select_query)
 print("executing select")
 
 loop_count = 1
 while True:
 rows = query_rs.fetchmany(size=10)
 if not rows:  # we are done if result set list is empty
 query_rs.close()
 break
 row_dict = [dict(l_row) for l_row in rows]
 insert_target_stmt = target_table.insert()
 print("inserting for loop = {}".format(str(loop_count)))
 insert_target_stmt.execute(row_dict)
 loop_count += 1
 
 print("done")
 conn.close()
 
 Query 1 does not have a LOB type column, and it works fine. Query 2 has a LOB 
 type column in it and it fails in the fetchmany() call after the last set is retrieved.
 
 Here is the output:
 
 - results query 1 -
 
 executing select
 inserting for loop = 1
 inserting for loop = 2
 inserting for loop = 3
 inserting for loop = 4
 done
 
 
 - results query 2 -
 
 executing select
 inserting for loop = 1
 inserting for loop = 2
 inserting for loop = 3
 inserting for loop = 4
 Traceback (most recent call last):
   File 
 /home/xxx/.local/lib/python3.4/site-packages/sqlalchemy/engine/result.py, 
 line 733, in _fetchone_impl
 return self.cursor.fetchone()
 AttributeError: 'NoneType' object has no attribute 'fetchone'
 
 During handling of the above exception, another exception occurred:
 
 Traceback (most recent call last):
   File /home/xxx/myprojects/python/sync/test_lob_1.py, line 23, in module
 rows = query_rs.fetchmany(size=10)
   ...
   ...
   File 
 /home/xxx/.local/lib/python3.4/site-packages/sqlalchemy/engine/result.py, 
 line 759, in _non_result
 raise exc.ResourceClosedError("This result object is closed.")
 sqlalchemy.exc.ResourceClosedError: This result object is closed.
 
 
 As long as I can check that the resultset is empty and break from the loop, I am 
 fine. Any better way of handling this?

That’s a bug in the oracle-specific result proxy.   I’ve created 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3329/fetchmany-fails-on-bufferedcolproxy-on
 for that.






 
 Thanks
 GP
 
 
 
 On Monday, March 16, 2015 at 10:08:47 PM UTC-4, Michael Bayer wrote:
 
 
 GP pandit...@gmail.com wrote: 
 
  OK, is “cancellation_obj” a column object with CLOB as the datatype ? 
  
  Yes, that's how it's defined in the database. 
  
  Because of dynamic nature of the code, I was using append_column without 
  specifying column type. I made changes to define column in 
  table.c.column_name format rather than just using Column('column name'). 
  This way, I can make sure column data types are included with column 
  definitions, without me having to specify the data type explicitly with 
  each column. 
  
  It's interesting that I used that one way (out of three possible ways) that 
  wasn't 'right', but it's all good now :) 
  
  Now onto changing from fetchmany() to fetchone() - since LOBs are pretty 
  much forcing me to use fetchone(). 
 
 OK, if you were to get the CLOB types working correctly, SQLAlchemy’s result 
 proxy works around that issue also, by fetching rows in chunks and converting 
 the LOB objects to strings while they are still readable, so you could keep 
 with the fetchmany() calls. 
 
 
 
 
  
  Thank you for your help! 
  GP 
  
  On Monday, March 16, 2015 at 5:54:54 PM UTC-4, Michael Bayer wrote: 
  
  
  GP pandit...@gmail.com wrote: 
  
   So that's what was happening: 
   
   This select construct fails: 
   select_query = select() 
   select_query.append_column(contract_id) 
   select_query.append_column(cancel_dt) 
   select_query.append_column(cancellation_obj) 
   select_query.append_from(source_table_name) 
   
   
   But this select construct works: 
   select_query = select([source_table.c.contract_id, 
   source_table.c.cancel_dt, source_table.c.cancellation_quote_obj]) 
   
   So it's just matter of rewriting select query in the 'right' way. 
   
   Thanks for pointing in the right direction! 
  
  OK, is “cancellation_obj” a column object with CLOB as the datatype ? 
  
  even if you just made

Re: [sqlalchemy] deferred groups

2015-03-16 Thread Michael Bayer


tonthon tontho...@gmail.com wrote:

 Hi,
 
 I'm using polymorphism and I set up some deferred columns at each level
 of inheritance belonging to the same deferred group :
 
 
 class Base(DBBASE):
     id = Column(Integer, primary_key=True)
     name = Column(String(255))
     description = deferred(
         Column(Text()),
         group="full"
     )
 
 class Element(DBBASE):
     id = Column(ForeignKey("base.id"))
     comments = deferred(
         Column(Text()),
         group="full",
     )
 
 
 The following query :
 
 
 Element.query().options(undefer_group('full')).all()
 
 
 doesn't defer the description column, is that the expected behaviour ?

a query for Element here will not load objects of type “Base”, so I don’t see 
where “description” comes into play.

If you can provide a more complete example that would help.




Re: [sqlalchemy] deferred groups

2015-03-16 Thread Michael Bayer


tonthon tontho...@gmail.com wrote:

 Sorry I was a bit too speed when writing that one :)
 
 So I've got a Base model :
 
 class Base(DBBASE):
     __tablename__ = 'base'
     __mapper_args__ = {'polymorphic_identity': 'base', 'polymorphic_on': 'type_'}
     id = Column(Integer, primary_key=True)
     name = Column(String(255))
     description = deferred(
         Column(Text()),
         group="full"
     )
     type_ = Column(String(30), nullable=False)
 
 
 and a child model
 
 class Element(DBBASE):
     __tablename__ = 'element'
     __mapper_args__ = {'polymorphic_identity': 'element'}
     id = Column(ForeignKey("base.id"))
     comments = deferred(Column(Text()), group="full")
 
 
 The following query :
 
 Element.query().options(undefer_group('full')).all()
 
 
 doesn't load the description column, is that the expected behaviour ?

When you say “a child model Element”, that’s not what I see. Element and
Base are both descending from DBBASE. If Element were a child of Base, it
would need to be declared as “class Element(Base)”.

As it is, it is expected because the “Element” class does not have a column
called “description”. If you were to load objects of type “Base”, then you’d
see “description”.




Re: [sqlalchemy] deferred groups

2015-03-16 Thread Michael Bayer


tonthon tontho...@gmail.com wrote:

 Sorry, still a few mistakes in what I wrote (first testing what you
 paste is a far better way to work) :
 
 
 
 from sqlalchemy.orm import deferred
 from sqlalchemy import Text
 
 class Base(DBBASE):
     __tablename__ = 'base'
     __mapper_args__ = {'polymorphic_identity': 'base', 'polymorphic_on': 'type_'}
     id = Column(Integer, primary_key=True)
     name = Column(String(255))
     description = deferred(
         Column(Text()),
         group="full"
     )
     type_ = Column(String(30), nullable=False)
 
 class Element(Base):
     __tablename__ = 'element'
     __mapper_args__ = {'polymorphic_identity': 'element'}
     id = Column(ForeignKey("base.id"), primary_key=True)
     comments = deferred(Column(Text()), group="full")
 
 
 print(DbSession().query(Element).options(undefer_group('full')))
 
 SELECT element.comments AS element_comments, element.id AS element_id,
 base.id AS base_id, base.name AS base_name, base.type_ AS base_type_
 FROM base INNER JOIN element ON base.id = element.id

looks like it was fixed in 0.9.9 in #3287.  Please upgrade.







 
 
 Le 16/03/2015 17:33, tonthon a écrit :
 Sorry I was a bit too speed when writing that one :)
 
 So I've got a Base model :
 
 class Base(DBBASE):
     __tablename__ = 'base'
     __mapper_args__ = {'polymorphic_identity': 'base', 'polymorphic_on': 'type_'}
     id = Column(Integer, primary_key=True)
     name = Column(String(255))
     description = deferred(
         Column(Text()),
         group="full"
     )
     type_ = Column(String(30), nullable=False)
 
 
 and a child model
 
 class Element(DBBASE):
     __tablename__ = 'element'
     __mapper_args__ = {'polymorphic_identity': 'element'}
     id = Column(ForeignKey("base.id"))
     comments = deferred(Column(Text()), group="full")
 
 
 The following query :
 
 Element.query().options(undefer_group('full')).all()
 
 
 doesn't load the description column, is that the expected behaviour ?
 
 Regards
 
 
 Le 16/03/2015 17:21, Michael Bayer a écrit :
 tonthon tontho...@gmail.com wrote:
 
 Hi,
 
 I'm using polymorphism and I set up some deferred columns at each level
 of inheritance belonging to the same deferred group :
 
 
 class Base(DBBASE):
   id = Column(Integer, primary_key=True)
   name = Column(String(255))
   description = deferred(
       Column(Text()),
       group="full"
   )
 
 class Element(DBBASE):
   id = Column(ForeignKey("base.id"))
   comments = deferred(
       Column(Text()),
       group="full",
   )
 
 
 The following query :
 
 
 Element.query().options(undefer_group('full')).all()
 
 
 doesn't defer the description column, is that the expected behaviour ?
 a query for Element here will not load objects of type “Base”, so I don’t 
 see where “description” comes into play.
 
 If you can provide a more complete example that would help.
 


Re: [sqlalchemy] unhandled data type cx_Oracle.LOB

2015-03-16 Thread Michael Bayer


GP pandit.gau...@gmail.com wrote:

 Hello,
 
 While trying to insert into an Oracle table with one column defined as CLOB, 
 I get the following error:
 File 
 /home/x/.local/lib/python3.4/site-packages/sqlalchemy/engine/default.py,
  line 442, in do_execute
 cursor.execute(statement, parameters)
 sqlalchemy.exc.NotSupportedError: (NotSupportedError) Variable_TypeByValue(): 
 unhandled data type cx_Oracle.LOB 
 
 Statement: 'INSERT INTO contract_cancellation_test_s ( contract_id, 
 cancel_dt, cancellation_obj) VALUES ( :contract_id, :cancel_dt, 
 :cancellation_obj) ' 
 Parameters: {'contract_id': 23.0, 'cancel_dt': datetime.datetime(2015, 1, 14, 
 0, 0), 'cancellation_obj': <cx_Oracle.LOB object at 0x7f0a427be4f0>}

It’s a little odd you’re pulling the LOB object from the row directly;
SQLAlchemy should be converting this to a string. Are you setting
auto_convert_lobs to False? Or more likely, is the original query a plain
string and not a Core SQL expression ? If it’s a string, it is possible,
though shouldn’t be happening, that SQLAlchemy won’t be told that this is a
CLOB column and it doesn’t know to do any conversion and you’ll get
cx_oracle’s LOB back; you need to convert that to string.

I’ve checked the code and if OCI is reporting as CLOB, it should be
converted. But call value() on the LOB to resolve.
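
As a hedged, untested sketch of that kind of workaround, using the names from
the snippet quoted below and cx_Oracle's LOB.read() to pull the contents out
as a string before the insert (adapt to whatever the object actually exposes):

row_dict = dict(row)
lob = row_dict['cancellation_obj']
if lob is not None and not isinstance(lob, str):
    row_dict['cancellation_obj'] = lob.read()
insert_target_stmt.execute(row_dict)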

 Versions:
 Python: 3.4
 SQLAlchemy: 0.9.9
 cx_Oracle: 5.1.3
 
 
 Here is a code snippet: I am selecting records from one table and inserting 
 into another (the source and target are in different schemas - as handled by 
 source_conn, target_conn)
 
 # Sample query: SELECT CONTRACT_ID, CANCEL_DT, CANCELLATION_OBJ from 
 SOURCE_TABLE
 query_rs = source_conn.execute(select_query)
 while True:
 row = query_rs.fetchone()
 if not row: 
 query_rs.close()
 break
 row_dict = dict(row)
 insert_target_stmt = l_target_table.insert()
 insert_target_stmt.execute(row_dict)
 
 
 (My original code was using fetchmany() instead of fetchone(), but I 
 simplified it to first make it work on row by row.)
 
 
 Both the tables (source and target) are defined as :
 CONTRACT_ID   NUMBER(19,0)
 CANCEL_DT TIMESTAMP(6)
 CANCELLATION_OBJ  CLOB
 
 
 I have read the relevant parts of the sqlalchemy documentation - and have played 
 with the following parameters, but the error remains : auto_convert_lobs, 
 auto_setinputsizes, arraysize
 
 
 I can't figure out what I am doing wrong here. 
 
 Any help?
 
 Thanks!
 GP
 


Re: Creating a gist index on a new colmn

2015-03-15 Thread Michael Bayer
the “GIST” identifier for USING is available using the “postgresql_using” 
keyword argument:

http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#index-types
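
A hedged sketch of that in an Alembic migration (the "foo" table and "geom"
column names are made up for illustration):

from alembic import op

def upgrade():
    op.create_index('ix_foo_geom', 'foo', ['geom'], postgresql_using='gist')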



Jay Payne lett...@gmail.com wrote:

 I've been looking through the docs and I cannot find out how to create a gist 
 index on a geometry column that I've added. Any ideas?
 
 Thanks
 --J
 


Re: [sqlalchemy] duplicate an object

2015-03-13 Thread Michael Bayer


moon...@posteo.org wrote:

 On 2015-03-12 09:53 Michael Bayer mike...@zzzcomputing.com wrote:
 which is why, “copy an object” is not an out of the box thing. Because
 nobody really wants a full “copy”, it’s something custom.
 
 As I described I don't want a full copy. I only want to copy the real
 data handled by the user. All organizing/protocol data from SQLA or
 from me (e.g. the oid) as the developer shouldn't be copied.

SQLA knows nothing about what data in the row is that which you care about
and that which you don’t, so the names of attributes you care about have to
be arrived at in some way, either hardcoded in a list, or using an 
algorithmic
approach as you have in your stack overflow answer.   Even a primary key
should not be assumed to be “protocol” data, lots of databases use 
natural primary keys.
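
For illustration only, a hedged sketch of the "hardcoded list" variant (the
attribute names are invented; nothing here is prescribed by SQLAlchemy):

COPYABLE_ATTRS = ['name', 'description', 'price']

def duplicate(obj):
    # build a new, transient instance carrying only the user-facing data
    clone = type(obj)()
    for attr in COPYABLE_ATTRS:
        setattr(clone, attr, getattr(obj, attr))
    return clone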


 The use-case is simple:
 The user wants to add new data which differs just a little from the last
 time data was added. So it is more ergonomic to offer a new-data-dialog
 with preset values from the last data-entry instead of a clean empty
 new-data-dialog. In the latter the user needs to type in all the data again
 but e.g. 80% of it is the same as yesterday. Understand?

I understand completely - you have a custom copy use case where you only
care about particular attributes.   But what I’m hoping you also understand is 
that
SQLAlchemy will never make a decision like this for you.  You have to 
come up with a system on your end to identify which ones you care about and 
which 
ones you don’t.  


 What do you think about this solution:
 https://stackoverflow.com/questions/29039635/how-to-duplicate-an-sqlalchemy-mapped-object-the-correct-way

if it works for you, then it’s great. It’s not a generalized solution for
the whole world, though. It doesn’t take into account columns that are part
of unique Index objects or columns that are mentioned in standalone
UniqueConstraint objects. I’d also imagine a real-world GUI attempting to
present an object with default values filled in would have a lot more
columns in the DB that aren’t exposed. The fact that you want to block only
“PK” and single-column “unique” constraints is quite arbitrary.



Re: [sqlalchemy] duplicate an object

2015-03-12 Thread Michael Bayer


moon...@posteo.org wrote:

 On 2015-03-08 11:17 Michael Bayer mike...@zzzcomputing.com wrote:
 there’s no particular “SQLAlchemy way” to do this,
 
 What is about
   make_transient()
 ?
 
 I don't understand this function 100%tly.

it changes the state of an object from persistent to transient. the states
are documented at
http://docs.sqlalchemy.org/en/rel_0_9/orm/session_state_management.html#quickie-intro-to-object-states.

I think the concept that might not be clear here is that the “foo.id” attribute 
on your object (assuming “id” is your primary key) and the “identity key” 
referred to in the documentation are not the same thing.   The “identity key” 
is the primary key of an object, *in the database*. I can make an object:   
Foo(id=5), “5” is the value of the “id” attribute which corresponds to what 
would be the primary key column in the database, but there’s no “5” in the 
database.  No identity key.   I persist it, INSERT INTO table VALUES (5), now 
there’s a “5” in the database, now there’s an identity key.
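
As a rough sketch of what make_transient() buys you for this kind of duplication (not taken verbatim from the docs; "Foo" and its "id" column are placeholders, and note that any attributes that were not already loaded are simply lost):

from sqlalchemy.orm import make_transient

obj = session.query(Foo).get(5)   # persistent; identity key present
session.expunge(obj)              # detach it from the session
make_transient(obj)               # drop the identity key and the "committed" snapshot
obj.id = None                     # let the next INSERT generate a fresh primary key
session.add(obj)
session.commit()                  # INSERTs a new row carrying the copied attribute values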

this is a copy routine I recently implemented for openstack, perhaps it will
be helpful:

class Base(object):

    def __copy__(self):
        """Implement a safe copy.copy().

        SQLAlchemy-mapped objects travel with an object
        called an InstanceState, which is pegged to that object
        specifically and tracks everything about that object.  It's
        critical within all attribute operations, including gets
        and deferred loading.   This object definitely cannot be
        shared among two instances, and must be handled.

        The copy routine here makes use of session.merge() which
        already essentially implements a "copy" style of operation,
        which produces a new instance with a new InstanceState and copies
        all the data along mapped attributes without using any SQL.

        The mode we are using here has the caveat that the given object
        must be "clean", e.g. that it has no database-loaded state
        that has been updated and not flushed.   This is a good thing,
        as creating a copy of an object including non-flushed, pending
        database state is probably not a good idea; neither represents
        what the actual row looks like, and only one should be flushed.

        """
        session = orm.Session()

        copy = session.merge(self, load=False)
        session.expunge(copy)
        return copy
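
A usage sketch, assuming your mapped classes inherit from this Base: the standard library copy module picks up the __copy__ hook automatically.

import copy

duplicate = copy.copy(some_instance)   # new object, new InstanceState, same attribute values
# the result is detached from any Session; attaching it, clearing its identity,
# or persisting it as a new row is up to your specific use case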




 


Re: [sqlalchemy] relationship problem

2015-03-12 Thread Michael Bayer


Julien Cigar jci...@ulb.ac.be wrote:

 
 Hi Mike,
 
 Sorry to bother you once with this, but I've re-read all the docs on
 the relationships and I want to be sure that I've understand correctly.
 
 Imagine I have the following link table in SQL:
 https://gist.github.com/silenius/77d406f8e0c0e26eb38f with the following
 mappers: https://gist.github.com/silenius/18190ef4912667e49b1a
 
 So basically you have Accounts, Pools, Roles and a link table
 AccountRole
 
 What I would like is to have on the Pool object the list of Accounts 
 who have a Role 'facilitator' for a certain Pool.
 
 The result is at line 18-26 of the paste
 (https://gist.github.com/silenius/18190ef4912667e49b1a#file-gistfile1-py-L18-L26),
  I wondered if it was okay because I sometimes mix the primaryjoin and 
 secondaryjoin clauses... If I understand well I don't need a primaryjoin 
 clause
 in this case ..? 

I don’t know offhand, if you turn on INFO logging for the “sqlalchemy.orm”
logger you’ll see every decision SQLAlchemy is making:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))

import logging
logging.basicConfig()
logging.getLogger("sqlalchemy.orm").setLevel(logging.INFO)

configure_mappers()


output:

INFO:sqlalchemy.orm.mapper.Mapper:(B|b) _post_configure_properties() started
INFO:sqlalchemy.orm.mapper.Mapper:(B|b) initialize prop id
INFO:sqlalchemy.orm.mapper.Mapper:(B|b) initialize prop a_id
INFO:sqlalchemy.orm.mapper.Mapper:(B|b) _post_configure_properties() complete
INFO:sqlalchemy.orm.mapper.Mapper:(A|a) _post_configure_properties() started
INFO:sqlalchemy.orm.mapper.Mapper:(A|a) initialize prop bs
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs setup primary join 
a.id = b.a_id
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs setup secondary 
join None
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs synchronize pairs 
[(a.id = b.a_id)]
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs secondary 
synchronize pairs []
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs local/remote pairs 
[(a.id / b.a_id)]
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs remote columns 
[b.a_id]
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs local columns [a.id]
INFO:sqlalchemy.orm.relationships.RelationshipProperty:A.bs relationship 
direction symbol('ONETOMANY')
INFO:sqlalchemy.orm.strategies.LazyLoader:A.bs lazy loading clause :param_1 = 
b.a_id
INFO:sqlalchemy.orm.mapper.Mapper:(A|a) initialize prop id
INFO:sqlalchemy.orm.mapper.Mapper:(A|a) _post_configure_properties() complete







 
 Thanks,
 Julien
 
 I'm using PostgreSQL, and I checked that all constraints are properly
 created on the server side, but I haven't checked with sqlite:// 
 
 Thanks,
 Julien
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 -- 
 You received this message because you are subscribed to the Google 
 Groups sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, 
 send an email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.
 
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 -- 
 You received this message because you are subscribed to the Google 
 Groups sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, 
 send an email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.
 
 -- 
 You received this message because you are subscribed to the Google 
 Groups sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, 
 send an email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 

Re: [sqlalchemy] duplicate an object

2015-03-12 Thread Michael Bayer


moon...@posteo.org wrote:

 On 2015-03-08 11:17 Michael Bayer mike...@zzzcomputing.com wrote:
 new_obj = MyClass()
 for attr in mapper.attrs:
setattr(new_obj, attr.key, getattr(old_obj, attr.key))
 
 This would copy everything including primary keys and unique members.

which is why, “copy an object” is not an out of the box thing. Because
nobody really wants a full “copy”, it’s something custom.

 I have hard problems with the SQLA-docu. I know that 'attr' is from
 type 'class ColumnProperty'. But I am not able to found out the members
 of it. I can not see a 'primary_key' or a 'unique' member inside of it.
 
 How can I ask if it is a primary key or unique?

mapper = inspect(MyClass)

is_pk = mapper.attrs.somekey.columns[0].primary_key

for “unique”, there are multiple definitions for “unique”. Do you consider a
column to be “unique” if it is part of a composite unique constraint or
index (e.g. has other columns)? I don’t know. Let’s assume you do:

for constraint in mapper.local_table.constraints:
    if isinstance(constraint, UniqueConstraint):
        if constraint.columns.contains_column(mapper.attrs.somekey.columns[0]):
            is_unique = True

for index in mapper.local_table.indexes:
    if index.unique and index.columns.contains_column(mapper.attrs.somekey.columns[0]):
        is_unique = True


 
 And much better for the future: How do I have to use the SQLA-docu to
 solve such problems myself? What do I understand wrong in that docu?

use the search feature and familiarize with all the public accessors on the
handful of objects that are always involved in these kinds of things, Table,
Mapper, ColumnProperty, ColumnCollection, Constraint (that last one is missing, 
will add).







Re: [sqlalchemy] postgres schemas at runtime

2015-03-12 Thread Michael Bayer


Francesco Della Vedova francesco.dellaved...@rocket-internet.de wrote:

 Hello, 
 
 I have a set of models that I would like to replicate on different Postgres 
 schemas. The name of the schema is only known at runtime. 
 Just as an example, imagine I have a bunch of tables of customer data, and I 
 want to create a schema 'customer_id' with the same tables every time I get a 
 new customer.
 
 I'm aware of this strategy but it wouldn't work in my case. 
 
 class TableA(Base):
...
 
 class TableB(Base):
...
 
 for customer_id in customers:
# set the schema here
Base.metadata.create_all()

you can try using table.tometadata(new_metadata, schema="my schema") to make
new tables, and possibly even use it in conjunction with automap
(http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html).

Another approach is to use a before_cursor_execute() event, searching for a
schema token in the SQL and rewriting it to one that is set up on the
current connection in the .info dictionary.

The latter is essentially a hacky way to get at an upcoming feature which is
https://bitbucket.org/zzzeek/sqlalchemy/issue/2685/default-schema-schema-translation-map-as.

This depends of course on if you need to use each schema on a
per-Session/connection basis, or you need to work with all schema-based
models simultaneously.
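
A rough sketch of the tometadata() approach (assuming each customer_id is a usable schema name and all of the models live on Base.metadata):

from sqlalchemy import MetaData
from sqlalchemy.schema import CreateSchema

for customer_id in customers:
    per_customer = MetaData()
    for table in Base.metadata.sorted_tables:
        table.tometadata(per_customer, schema=customer_id)   # copy of the Table under the new schema
    engine.execute(CreateSchema(customer_id))                # if the schema doesn't already exist
    per_customer.create_all(engine)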




[sqlalchemy] SQLAlchemy 0.9.9 Released

2015-03-10 Thread Michael Bayer
SQLAlchemy release 0.9.9 is now available.

Release 0.9.9 is a large maintenance release featuring 30 changes, mostly
bug fixes. A handful of modest feature adds are also present, including new
Core features for SQLite, Postgresql, and new API features to provide better
control of transaction isolation level when using the ORM.

The majority of SQLAlchemy development is taking place targeted at version
1.0, which is extremely close to having its initial beta releases. Version
1.0 has accumulated a wide variety of performance improvements, both as
intrinsic to normal operation as well as via new features.

Users should carefully review the Changelog at
http://docs.sqlalchemy.org/en/latest/changelog/changelog_09.html#change-0.9.9
to note which behaviors and issues are affected. We'd like to thank the many
contributors who helped with this release.

SQLAlchemy 0.9.9 is available on the Download Page:

http://www.sqlalchemy.org/download.html




Re: [sqlalchemy] Database session variables with connection pooling

2015-03-09 Thread Michael Bayer


Kent jkentbo...@gmail.com wrote:

 Is it safe, from within either the 'before_execute' or 
 'before_cursor_execute' events, to use the same connection to execute a SQL 
 statement before the current one?  I assume there is a good chance the answer 
 is no, at least for before_cursor_execute.

if you’re in before_cursor_execute you have the actual DBAPI connection, and
you can use that directly, and that’s totally safe. in before_execute(), you
probably can use the Connection there as well but you’d need to be careful
because you’re in a re-entrant situation, so your event handler would be
called within. You can also use Connection.connection to get at the wrapped
DBAPI connection where again it’s fine to use in before_execute as well.

 Why?  I only want to issue the SQL to update the database's session variables 
 if needed.  Most connection checkout-checkin life cycles will only ever issue 
 SELECT statements and so don't need the database session updated for 
 auditing, so I was intending on waiting until I actually know it is needed 
 (from within before_cursor_execute) before issuing the 
 DBMS_SESSION.SET_CONTEXT(...).  But, once I know that within 
  before_cursor_execute, can I (recursively) issue a conn.execute() for that 
 statement safely or will it affect the original execute?

if you stick with the DBAPI connection directly then you’re definitely safe.
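
A rough sketch of that approach; the PL/SQL wrapper name and the functions that produce the current user id and URL are entirely hypothetical, so treat this as a shape rather than working code:

from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute")
def set_audit_context(conn, cursor, statement, parameters, context, executemany):
    # only bother for statements the audit triggers will actually see
    if not statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return
    if conn.info.get("audit_ctx_set"):
        return
    dbapi_conn = conn.connection        # the raw (cx_Oracle) DBAPI connection
    c = dbapi_conn.cursor()
    try:
        # hypothetical wrapper around DBMS_SESSION.SET_CONTEXT
        c.execute("BEGIN audit_pkg.set_context(:userid, :url); END;",
                  {"userid": get_current_userid(), "url": get_current_url()})
    finally:
        c.close()
    conn.info["audit_ctx_set"] = True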


 
 
 
 On Saturday, March 7, 2015 at 6:38:08 PM UTC-5, Michael Bayer wrote:
 
 
 Kent jkent...@gmail.com wrote: 
 
  I'm implementing database session variables (in Oracle, 
  DBMS_SESSION.SET_CONTEXT(...)), in order to be able to set (from 
  sqlalchemy) and retrieve (from a database trigger) the application userid 
  and URL path during table audit triggers. 
  
  The tricky bit is that if I set the user to 'user1', that remains in the 
  session in the database even when a different sqlalchemy thread grabs that 
  same session from the connection pool.  I want to prevent the wrong 
  information accidentally still being in the session, so I want to be sure 
  to reset it when appropriate and I'm wondering whether checkout from the 
  Pool is the event you would recommend? 
  
  @event.listens_for(engine, 'checkout') 
  def receive_checkout(dbapi_connection, connection_record, 
  connection_proxy): 
  
  If the same database session is recycled from the connection pool, will it 
  have the same connection_record?  I'd prefer to record the fact that I've 
  set the database session's variables on an object (such as 
  connection_record) so that subsequent requests can detect whether it needs 
  to be reset.  Will connection_record correspond to a database session? 
 
 
 For this kind of thing you normally reset the state on the “checkin” event. 
 The connection_record does in fact follow around the DBAPI connection, 
 however the .info dictionary is given here as the primary way to track 
 things with a DBAPI connection. .info is available on Connection, the 
 connection record, and the pool wrapper, and it will track the DBAPI 
 connection for its full lifespan, until the connection is closed. So put 
 whatever memoizations you need into the .info dictionary, and then you can 
 pretty much set / reset the state with any of the pool events. 
 
 
  Thanks in advance for any advice here. 
  Kent 
  
  
  
  


Re: [sqlalchemy] duplicate an object

2015-03-08 Thread Michael Bayer


moon...@posteo.org wrote:

 On 2015-03-07 03:17 moon...@posteo.org wrote:
 Is there a in-build-function to duplicate the instance of a data
 object (except the primary key)?
 
 Isn't there a way for this?
 
 Do the sqla-devs read this list?

there’s no particular “SQLAlchemy way” to do this, there’s many ways to copy
an object in Python in general as well as in SQLAlchemy, it depends on your
needs. If the object has lots of attributes that aren’t mapped, then clearly
that's outside the realm of SQLAlchemy. If you want to build something based
on inspection of attributes that are mapped, you can build something using
the inspection system where you go through properties like mapper.attrs
(docs are down at the moment):

from sqlalchemy import inspect

mapper = inspect(MyClass)

new_obj = MyClass()
for attr in mapper.attrs:
    setattr(new_obj, attr.key, getattr(old_obj, attr.key))

that’s obviously not a very slick approach but mapper.attrs,
mapper.column_attrs, etc. give you access to every attribute that the mapper
knows about.

For me I usually keep things extremely simple and just build a copy method:

class MyClass(Base):
    def copy(self):
        return MyClass(
            a=self.a,
            b=self.b,
            # ...
        )

That’s how I’d normally do this, but of course nobody likes that system. The
copy method has the advantage in that it can easily accommodate your special
use cases without trying to make it guess everything.

There’s also ways to do it with copy.copy() though some special steps are
needed at the moment to give the new object an independent state, probably
hold off on that method until we add some APIs to support that use case.

Docs are down for a linode maintenance window but will be up within an hour
or two.



Re: [sqlalchemy] Database session variables with connection pooling

2015-03-07 Thread Michael Bayer


Kent jkentbo...@gmail.com wrote:

 I'm implementing database session variables (in Oracle, 
 DBMS_SESSION.SET_CONTEXT(...)), in order to be able to set (from sqlalchemy) 
 and retrieve (from a database trigger) the application userid and URL path 
 during table audit triggers.
 
 The tricky bit is that if I set the user to 'user1', that remains in the 
 session in the database even when a different sqlalchemy thread grabs that 
 same session from the connection pool.  I want to prevent the wrong 
 information accidentally still being in the session, so I want to be sure to 
 reset it when appropriate and I'm wondering whether checkout from the Pool is 
 the event you would recommend?
 
 @event.listens_for(engine, 'checkout')
 def receive_checkout(dbapi_connection, connection_record, 
 connection_proxy):
 
 If the same database session is recycled from the connection pool, will it 
 have the same connection_record?  I'd prefer to record the fact that I've set 
 the database session's variables on an object (such as connection_record) so 
 that subsequent requests can detect whether it needs to be reset.  Will 
 connection_record correspond to a database session?


For this kind of thing you normally reset the state on the “checkin” event.
The connection_record does in fact follow around the DBAPI connection,
however the .info dictionary is given here as the primary way to track
things with a DBAPI connection. .info is available on Connection, the
connection record, and the pool wrapper, and it will track the DBAPI
connection for its full lifespan, until the connection is closed. So put
whatever memoizations you need into the .info dictionary, and then you can
pretty much set / reset the state with any of the pool events.
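
As a sketch of the reset half (not from the original reply; the actual PL/SQL needed to clear your Oracle application context, and the namespace name, depend entirely on how the context was created):

from sqlalchemy import event

@event.listens_for(engine, "checkin")
def reset_audit_context(dbapi_connection, connection_record):
    # only reset connections whose context was actually set during use
    if connection_record.info.pop("audit_ctx_set", None):
        cursor = dbapi_connection.cursor()
        try:
            cursor.execute("BEGIN DBMS_SESSION.CLEAR_ALL_CONTEXT(:ns); END;",
                           {"ns": "MY_APP_CTX"})   # hypothetical context namespace
        finally:
            cursor.close()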


 Thanks in advance for any advice here.
 Kent
 
 
 
 


Re: [sqlalchemy] Complicated self join

2015-03-05 Thread Michael Bayer


Dani Hodovic danih...@gmail.com wrote:

 I've been struggling with a query that gets the most recent date as described 
 here: http://stackoverflow.com/a/123481/2966951
 
 I've been able to produce a SQLAlchemy variant, but it seems to be MUCH 
 slower when executed with MySQL. It also looks slightly different, with 
 parameters around the inner query. http://pastebin.com/NWEsFtAY


I’m not sure why you’re using subquery() for the SQLAlchemy version when the 
original SQL you’re looking for has no subquery (and the subquery will perform 
*terribly* on MySQL).  Just join to “mytable” as an alias() itself.   

t1 = mytable.alias()
t2 = mytable.alias()


s.query(t1.userid, t1.date).outerjoin(t2, and_(t1.userid == t2.userid, t1.date < t2.date)).filter(t2.userid == None)




 
 Please don't point me to scalar subqueries as I've looked at the 
 documentation and this was the best I could come up with.
 
 As this query is a part of a larger query I attempted to solve it with 
 text(), but combining ORM code and raw SQL is a pain in the ass. If there is 
 a text solution however, where this query could be joined with another query 
 that would work too.
 


Re: [sqlalchemy] Relationship behavior change when switching from reflected to declarative styles; test case included

2015-03-03 Thread Michael Bayer


Evan James thas...@gmail.com wrote:

 Hi folks,
 
 I'm working on a SQLAlchemy-based app where we've decided to make some 
 infrastructure changes, in particular moving from reflection to declaration 
 for mapping the models.
 
 However, we're now running into issues where, after switching to declarative, 
 relationships aren't populated the way we expect when manipulated in Python.  
 For example, we have code that looks like this:
 
 class Widget(...):
 def merge(self, other_widget):
 Merge the widgets, transferring the dependent items on the other 
 widget to this one.
 for frobnicator in other_widget.frobnicators[:]:
 frobnicator.widget = self
 
 meta.Session.delete(other_widget)
 
 
 This code worked as hoped-for when we were reflecting on the database to 
 create our mappers, but after switching to declarative, the dependent items 
 are cascade-deleted on commit when other_widget is deleted, rather than being 
 preserved as children of the merged widget.
 
 It's not difficult to fix this particular issue - explicitly removing the 
 frobnicators from the other_widget.frobnicators collection will prevent them 
 from being deleted, and then the merged widget correctly has them - but we're 
 finding we have a class of bugs where relationships aren't being handled the 
 same way as before.  Unfortunately, building a comprehensive test suite is one 
 of the infrastructure changes we're in the process of making - which means 
 it's not done yet and we can't easily track down all the places we could get 
 tripped up.  We would really prefer to resolve this by changing the 
 definitions in the models, not by changing the application code that 
 manipulates the membership of relationships.


Essentially the issue is likely because the mappings in these two examples
are not equivalent; the reflection based version has Widget.frobnicator and
Frobnicator.widget communicating with each other through a backref, and the
declarative version does not.

Assuming you’re on SQLA 0.9, the reflective version is taking advantage of
the behavior introduced in
http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_09.html#backref-handlers-can-now-propagate-more-than-one-level-deep;
that is, adding frobnicator to first_widget.frobincators affects the backref
of frobnicator.widget which then automatically performs the remove of
second_widget.frobnicators. The declarative version does not make any use of
the “backref” or “back_populates” keyword so cannot take advantage of this
behavior; it doesn’t track any linkage between these two sides.

The section
http://docs.sqlalchemy.org/en/rel_0_9/orm/backref.html?highlight=backref
talks about how to configure either backref or back_populates between
mutually-dependent relationships.
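
For example, a declarative mapping equivalent to what the reflected backref gave you might look like this (a sketch using the names from the original post; column definitions are abbreviated):

class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    frobnicators = relationship("Frobnicator", back_populates="widget")

class Frobnicator(Base):
    __tablename__ = 'frobnicator'
    id = Column(Integer, primary_key=True)
    widget_id = Column(Integer, ForeignKey('widget.id'))
    widget = relationship("Widget", back_populates="frobnicators")

# or, configured on one side only:
#     frobnicators = relationship("Frobnicator", backref="widget")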



 I've created a reduced test case here which specifically displays the 
 behavior we're having trouble with in minimal form.  If one line in the test 
 case is commented out, the test will pass for reflective models and fail for 
 declarative models; if the line is put back in, success and failure reverse.
 
 My question:  How can we make relationships function identically in 
 declarative syntax as they did in reflective syntax?  We thought we had 
 migrated mapping styles in a way that wouldn't change anything, but here we 
 are.  What are we missing?
 
 Thanks,
 Evan James
 


Re: [sqlalchemy] Unicode String Error on Insert

2015-03-03 Thread Michael Bayer

J.D. jd.cor...@pearson.com wrote:

 My solution didn't work.  I was able to get my Portuguese data to load by 
 decoding it in ISO-8859-1, but by decoding I lose all the special characters 
 like tildes.
 
 So I still don't understand how to get the engine to accept my data properly.
 
 J.D.
 
 On Tuesday, March 3, 2015 at 3:00:24 PM UTC-7, J.D. wrote:
 I actually figured this out.
 
 It had nothing to do with my SQLAlchemy create_engine configuration.
 
 The data I was trying to create an object with was in ISO-8859-1 format, so I 
 just had to construct my Object the text decoded properly.
 
 Once I did this, the data was inserted into my sqlite3 table just fine.
 
 
 On Tuesday, March 3, 2015 at 1:58:32 PM UTC-7, J.D. wrote:
 Hi,
 
 I am getting the following error, when I try to execute code to insert a new 
 row into one of my tables, and I've googled for answers and tried everything 
 I could find online and nothing seems to resolve the issue.
 
 sqlalchemy.exc.ProgrammingError: (ProgrammingError) You must not use 8-bit 
 bytestrings unless you use a text_factory that can interpret 8-bit 
 bytestrings (like text_factory = str)


the error means that your unicode string has to be passed to pysqlite as a
Python unicode object, that is, in Python 2 it has to be with a “u”, u’my
string’.

if you’re losing encoding information, that means that the encoding you’re
using to decode into unicode is probably not correct.

Feel free to share the string literal and I can round trip it into SQLite
for you.
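
As a sketch of what "decode into unicode with the right encoding" looks like in Python 2 (the encoding name below is only a guess based on the \x8e / \x87 bytes quoted further down in this thread; substitute whatever your source data actually uses):

raw = 'Demana/Blitzer: Pr\x8e-c\x87lculo, 2e'   # the 8-bit bytestring from the failing INSERT
title = raw.decode('mac_roman')                 # guessed encoding; produces a real unicode object
# assign 'title' to the mapped attribute; pysqlite accepts unicode without complaint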



 I am using the following software:
 
 SQLAlchemy v0.9.8
 
 SQLite 3.8.8.2.
 
 I am creating my engine as follows,
 
 engine = create_engine('sqlite+pysqlite:///prototype.db', module=sqlite)
 
 #engine.raw_connection().connection.text_factory = str
 
 #engine.connect().connection.connection.text_factory = str
 
 session = sessionmaker(bind=engine)()
 
 meta.Base.metadata.bind = engine
 
 
 meta.Base.metadata.create_all(engine)
 
 The object I am trying to insert via session.add(..) has a structure similar 
 to the following: (shortened for brevity)
 
 ..
 
 id = Column(Integer, primary_key=True, unique=True, nullable=False)
 
 title = Column(String, nullable=False) // This is the column that 
 gets the Portuguese data with unicode characters -- I've tried using the 
 column type Unicode
 
 # title = Column(Unicode, nullable=False)
 
 book_id = Column(Integer, nullable=False)
 
 
 code = Column(Integer, nullable=False)
 
 ...
 
 I've tried setting the text_factory on the connection to no avail.  I'm at a 
 loss how to fix this so I can insert my data with unicode chars.
 
 Here is the SQL that is generated for the insert
 
 
 It is highly recommended that you instead just switch your application to 
 Unicode strings. u'INSERT INTO books (title, book_id, code) VALUES (?, ?, ?)' 
 ('Demana/Blitzer: Pr\x8e-c\x87lculo, 2e', '93810', 'removed')
 
 I would appreciate some insight to how to fix this issue so I can insert my 
 data.
 
 Thanks,
 
 J.D.
 
 
 
 
 
 
 
 


Re: [sqlalchemy] Re: Flushed PickleType data disappearing

2015-03-03 Thread Michael Bayer


SQLRook dev...@gmail.com wrote:

 To move one step from your sample toward my codebase, I made a class method 
 to modify the pickled list.  Already, I see behavior that I cannot explain.  
 If the session is passed to modifyTarget(), you can observe the 
 targetInstance become dirty, marked as modified, and then by the end of the 
 function, the column is no longer modified, even though the contents have 
 changed twice.  How does a column become un-modified? (for scalar values, I 
 understand that an int column would in net not be modified if its value 
 changed from 6 to 10 to 6).  You can optionally not pass the session to 
 modifyTarget(), and the object won't be modified after the call.
 
 class Target(Base):
 __tablename__ = 'targets'
 id = Column(Integer, primary_key=True)
 name = Column(Unicode)
 targetList = Column(PickleType, default=[])
 
 def modifyTarget(self,dbSession=None):
 print 'target list on entry: ' + str(self.targetList)
 if dbSession:
 assert not any(dbSession.dirty)
 print 'modified on entry? ' + str(dbSession.is_modified(self))
 temp = self.targetList
 self.targetList = None
 if dbSession:
 assert any(dbSession.dirty)
 assert dbSession.is_modified(self)
 temp.append(6)
 if dbSession:
 assert any(dbSession.dirty)
 print 'session dirty objs: ' + str(dbSession.dirty)
 self.targetList, temp = temp, self.targetList#swap
 print 'target list on exit: ' + str(self.targetList) 
 if dbSession:
 assert dbSession.is_modified(self)

There’s ultimately no assignment to a new value taking place here, you’re
appending in the same collection that was already loaded. The attribute system
is smart enough to skip the expensive operation of persisting an object that
it can see is already the one that was loaded (without using the in-place 
mutation
extension, of course). Also, because you are mutating
the original loaded collection, even if you assign to this attribute a brand
new list, if it has the same values as the “temp” that is the same list it
already has present, it will compare as equal and will not be persisted.

break down like this:

def modifyTarget(self):
import pdb
pdb.set_trace()

temp = self.targetList
self.targetList = None
temp.append(6)
self.targetList = list(temp)

then run, and into pdb:

> /Users/classic/dev/sqlalchemy/test.py(18)modifyTarget()
-> temp = self.targetList

# the in python identifier of the list:
(Pdb) id(self.targetList)
4357645232

# set to None
(Pdb) next
> /Users/classic/dev/sqlalchemy/test.py(19)modifyTarget()
-> self.targetList = None

# arrive at the .append(), but we didn’t run it yet
(Pdb) next
> /Users/classic/dev/sqlalchemy/test.py(20)modifyTarget()
-> temp.append(6)

# look at the tracked history of the attribute
(Pdb) from sqlalchemy import inspect
(Pdb) inspect(self).attrs.targetList.history
History(added=[None], unchanged=(), deleted=[[1, 2, 3]])

# we can see that the [1, 2, 3], we see in there is the same 
# list as above; SQLAlchemy certainly isn’t going to make a 
# copy of this, it wouldn’t know how in a generic sense 
# because this object could be anything, not just a list

(Pdb) id(inspect(self).attrs.targetList.history.deleted[0])
4357645232

# so if we append to the list...
(Pdb) temp.append(6)

# we are appending to the historical value that we’d be comparing to!
(Pdb) inspect(self).attrs.targetList.history
History(added=[None], unchanged=(), deleted=[[1, 2, 3, 6]])

# no matter what effort we make to reassign a brand new list to 
self.targetList….
(Pdb) self.targetList = list(temp)

# it will still *match* what’s being “replaced”, so no net change:
(Pdb) inspect(self).attrs.targetList.history
History(added=(), unchanged=[[1, 2, 3, 6]], deleted=())


basically, if you aren’t using the mutation system (which is fine, it’s
complicated), you should never be changing the value that was loaded, as
long as it isn’t expired.   Leave it alone, and when you assign, ensure that
the new value is only set up on a new object:

def modifyTarget(self):
self.targetList = self.targetList + [6]

Note it can also be made to work if you “expire” that value, so it isn’t
present in committed:

dbSession.expire(self, ["targetList"])

that would remove the “deleted” portion of history, and the attribute system
in this particular case will just assume it needs to update the value (which
is judged as cheaper in most cases than loading the old value first to test,
though this can be configured).


 def run(dbSession):
 targetInstance = Target(name='t1', targetList=[1, 2, 3])
 dbSession.add(targetInstance)
 dbSession.flush()
 
 targetInstance.modifyTarget()#pass dbSession if you like
 assert dbSession.is_modified(targetInstance)
 assert any (dbSession.dirty)
 print 'begin flush 2...'
 

Re: [sqlalchemy] session.transaction / session.begin and always rolling back

2015-03-02 Thread Michael Bayer


Jonathon Nelson jdnel...@dyn.com wrote:

 For a variety of reasons, sometimes it's nice to be able to do something like 
 this:
 
 with dbsess.begin(rollback=True):
do_stuff
 
 Where the expected behavior is that everything in the context runs in a 
 transaction, but upon exiting the context we issue a rollback. Sometimes this 
 is useful to *guarantee* that actions that take place within the context are 
 transactionally read-only.
 
 I had authored and am using this:
 
 def make_dbsess_contextual(sess_factory):
 @contextmanager
 def dbsess(on_success='commit'):
 sess = sess_factory()
 # we could use 'with sess.begin() here but it's
 # easier to have parity
 sess.begin()
 try:
 yield sess
 except:
 sess.rollback()
 sess.close()
 raise
 else:
 if on_success == 'commit':
 sess.commit()
 else:
 sess.rollback()
 sess.close()
 return dbsess

I might suggest that you just do the sess.close() without the commit(), but
not the sess.rollback() part; only because the rollback() is a relatively
expensive operation in that the Session is internally reverting things to
try to match a new transaction. If you do sess.close(), the connections are
discarded and the underlying database transactions are rolled back by the
connection pool.

For the SQLAlchemy API I don’t really favor creating sessions with “autocommit”,
which means that begin() doesn’t really have much place; a SQLAlchemy designed
today wouldn’t have this method.

the idiom we have without a begin() is:

with session.transaction:
# etc

which is actually what happens when you say “with session.begin()” in any
case.

I don’t have a strong sense of what additional context managers would be
broadly useful and obvious, and I don’t want to discourage individual
projects from working out exactly the idioms that are appropriate to their
own codebases which is what happens when more frameworky things like extra
context managers are added.

 and it's used like this:
 
 factory = 
 contextual_factory = make_dbsess_contextual(factory)
 
 with contextual_factory('rollback') as dbsess:
   do_stuff(dbsess)
 
 however it would be nice if the functionality could be integrated directly.
 I also strongly suspect that the way I've approached this is non-optimal.
 
 I looked at the code for how this might work and I have some ideas but I 
 thought I'd check here first to see if I'm completely off-base.
 
 


Re: [sqlalchemy] relationship problem

2015-03-02 Thread Michael Bayer


Julien Cigar jci...@ulb.ac.be wrote:

 On Sun, Mar 01, 2015 at 01:53:30PM +0100, Julien Cigar wrote:
 On Fri, Feb 27, 2015 at 11:38:05PM -0500, Michael Bayer wrote:
 On Feb 26, 2015, at 5:56 AM, Julien Cigar jci...@ulb.ac.be wrote:
 
 On Wed, Feb 25, 2015 at 06:10:55PM -0500, Michael Bayer wrote:
 
 
 Julien Cigar jci...@ulb.ac.be wrote:
 
 On Thu, Feb 19, 2015 at 11:31:10AM -0500, Michael Bayer wrote:
 Julien Cigar jci...@ulb.ac.be wrote:
 
 On Thu, Feb 19, 2015 at 02:45:43PM +0100, Julien Cigar wrote:
 Hello,
 
 I'm using SQLAlchemy 0.9.8 with PostgreSQL and the reflection feature 
 of
 SQLAlchemy.
 
 I have the following tables (only relevant parts are show):
 https://gist.github.com/silenius/390bb9937490730741f2
 
 and the problematic mapper is the one of my association object:
 https://gist.github.com/silenius/1559a7db65ed30a1b079
 
 SQLAlchemy complains with the following error:
 sqlalchemy.exc.InvalidRequestError: One or more mappers failed to
 initialize - can't proceed with initialization of other mappers.
 Original exception was: Could not locate any simple equality 
 expressions
 involving locally mapped foreign key columns for primary join 
 condition
 'pool_invite_result.pool_invite_pool_id = pool_invite.pool_id AND
 pool_invite.pool_id = pool.id' on relationship PoolAccountResult.pool.
 Ensure that referencing columns are associated with a ForeignKey or
 ForeignKeyConstraint, or are annotated in the join condition with the
 foreign() annotation. To allow comparison operators other than '==', 
 the
 relationship can be marked as viewonly=True.
 
 The problem is that in the PoolAccountResult mapper I want a
 relationship to the Pool but the link is made through an intermediate
 table (pool_invite) ..
 
 Any idea how to handle this with SQLAlchemy ?
 
 Thanks :)
 
 Julien
 
 ... and I'm answering to myself: it seems to work with
 https://gist.github.com/silenius/e7e59c96a7277fb5879f 
 
 does it sound right ?
 
 Sure.  Also, you could use automap which does figure these out in 
 simple cases: 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html
 
 always with this, any idea why SQLAlchemy inserts NULL and
 NULL for my two relationship (line 51-79) instead of the pool_obj and
 dup.owner ids (line 89-90), https://dpaste.de/1Trz ..?
 
 getting a 404 on that link.
 
 Hi Mike,
 
 Thanks for your help!
 
 I took the time to make a complete test case, available from here
 https://gist.github.com/silenius/96d6ed2544d14753853f
 
 That's a very strange use of secondary, and I'm not sure I understand the 
 rationale for an odd schema like this.   It should be pretty clear that 
 when a table is set up as secondary, it is only used as a linkage between 
 those immediate classes and there are no features to track the state of 
 this table otherwise as though it were a mapped class.   Using the same 
 secondary table in two unrelated relationships is not the intended use.  
 
 It's true that the schema is a little odd, in SQL it translates as
 https://gist.github.com/silenius/6a67edc9e78101faef96 (simplified).
 
 The scenario is that an account can be invited to a pool (table
 pool_invite at line 45-57) and can submit one (or more) results for 
 that invitation (I have to record the submitted date too). This scenario
 is a good candidate for an association object, the only thing is that
 later someone can decide to remove all those submissions, but I have to
 remember that there was an invitation. That's why I have references to
 the table pool_invite in pool_invite_result (line 71-73) instead of pool
 and result :)
 
 Nevertheless, if you manipulate between Pool and User, that has no impact 
 whatsoever on PoolInviteResult... Especially since we're dealing with a 
 secondary table and not a first class mapped entity.You should add 
 event listeners as needed on attributes such that when an in-Python change 
 occurs between Pool and User, the desired change occurs for 
 PoolInviteResult as well.
 
 It has impacts, if I remove a Pool or an User all invitations
 (pool_invite) must be removed too, as well as all submissions
 (pool_invite_result).
 
 Anyway, I'll take a look at ORM Events :) thanks!
 
 Another thing that is a little cloudy to me is why it doesn't work with
 an alternate primaryjoin condition, such as
 https://gist.github.com/silenius/300729e312dad6b9b847

the relationship() has a simple job. It is given table A and table B, and it
needs to locate where columns from A are matched up to columns from B. The
relationship in that gist does not have this pattern within the primary join
condition; it is injecting the “pool_invite” table as an association table,
which is what the “secondary” argument is used for. This argument tells
relationship() that instead of searching for direct column comparisons
between A and B, it should look for comparisons between A and ATOB, and ATOB
and B.



 I'm using PostgreSQL, and I checked that all constraints are properly
 created on server-side

Re: [sqlalchemy] Flushed PickleType data disappearing

2015-02-28 Thread Michael Bayer


SQLRook dev...@gmail.com wrote:

 I'm having an issue where a list of user-defined objects is disappearing as a 
 function goes out of scope.  The list is stored in a PickleType column, and 
 as that type does not track changes in the objects of the list, I am 
 (knowingly, inefficiently) forcing the column to become dirty by setting the 
 corresponding Python object to an empty list, and then to the new list value. 
  This is Python 2.7.3 with SQLAlchemy 0.9.8.  The session I'm using in the 
 following code is created from a sessionmaker with default values aside from 
 expire_on_commit, which is False.
 
 The declaration of the class containing the list is simplified to the 
 following:
 
 class Target(Base):
 __tablename__ = 'targets'
 id = Column(Integer, primary_key=True)
 name = Column(Unicode)
 targetList = Column(PickleType, default=[])
 
 
 def run(dbSession):
 targetInstance = makeTI(dbSession)#adds instance to session, flushes
 modifyList(targetInstance, dbSession, data)#blinks list values, dirties 
 object in session.  Flushes changes, and all list contents are still 
 subsequently present
 assert not any(dbSession.dirty)#assert passes
 assert not dbSession.is_modified(targetInstance)#assert passes
 print 'targetInstance list contents after init: ' + 
 str(targetInstance.targetList)#prints list contents as expected
 #last place list contents are present. Upon return of run(), list will be 
 empty again
 return
 
 After flow of control returns to the caller of run(), a commit is done, then 
 targetInstance is queried from the session, and the list is empty.  Even if 
 the list is accessed, or explicitly refreshed from the session, the list is 
 empty.  How is that possible if the once-dirty session was flushed?

what does your SQL echoing output say? Are you certain that the value you
expect is being flushed (e.g is the INSERT present with the correct data?) 
When you then access the list later on, is a SELECT
being emitted for that column, and if so, what’s in the result set? You can
use echo=‘debug’ or logging.DEBUG with “sqlalchemy.engine” to view all SQL
emitted as well as rows fetched.

if you have information as to where it’s going wrong (on the persistence
side, or on the query side), then that helps a lot as to figuring out 
what the issue is.


 I know that there is optional in-place mutation tracking, but that's not 
 being used here.
 The relevant links are the official mutable docs, an extended conversation 
 with Mike on BitBucket.
 


Re: [sqlalchemy] Unexpected is_modified result after referencing a relationship

2015-02-28 Thread Michael Bayer


Eric Smith e...@esmithy.net wrote:

 Thanks, Mike. My problem was that I was thinking is_modified meant 
 uncommitted changes rather than unflushed changes. Thanks for straightening 
 me out.


right… these attributes were all built to support the flush process first
and foremost, so that’s one downside of making them public is that of course
it would be nicer if they represented commit-level changes (or somehow,
both). The docs should be clear, if they aren’t already, for the various
history / modified functions that we’re only talking about flushes. To
produce commit-level versions of the APIs right now would involve
aggregating changes across flushes, probably using flush events.
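
A very rough sketch of that aggregation idea (the event names are real; the bookkeeping itself is ad hoc and not an official API):

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def remember_flushed(session, flush_context, instances):
    # accumulate everything that is about to be flushed, across all flushes
    pending = session.info.setdefault("changed_since_commit", set())
    for obj in session.new.union(session.dirty).union(session.deleted):
        pending.add(obj)

@event.listens_for(Session, "after_commit")
def forget_flushed(session):
    # the transaction is committed; start tracking fresh
    session.info.pop("changed_since_commit", None)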


 --Eric
 
 On Fri, Feb 27, 2015 at 9:17 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 
 
 On Feb 27, 2015, at 4:48 PM, Eric Smith e...@esmithy.net wrote:
 
 To those more experienced than me, does this behavior make sense? If so, 
 could you further my education with an explanation?
 
 If I change an attribute, is_modified returns True (as expected).
 If I then reference a relationship, is_modified returns False (unexpected).
 
 My output for the following program (using SQLAlchemy 0.9.7) is:
 
 False
 True
 False
 
 Thanks,
 Eric
 
 
 from sqlalchemy import Column, Integer, String, create_engine, ForeignKey
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker, relationship
 
 
 Base = declarative_base()
 
 
 class Parent(Base):
 __tablename__ = 'parent'
 id = Column(Integer, primary_key=True, nullable=False)
 message = Column(String)
 children = relationship(Child, backref='parent')
 
 
 class Child(Base):
 __tablename__ = 'child'
 id = Column(Integer, primary_key=True, nullable=False)
 parent_id = Column(Integer, ForeignKey('parent.id'))
 
 
 def main():
 engine = create_engine('sqlite:///:memory:')
 Session = sessionmaker(bind=engine)
 Base.metadata.create_all(engine)
 
 db = Session()
 parent = Parent()
 db.add(parent)
 db.commit()
 
 print db.is_modified(parent)
  parent.message = "hi"
 print db.is_modified(parent)
 len(parent.children)  # Should this change is_modified?
 print db.is_modified(parent)
 
 Turn on sql echoing, which is always the first step towards understanding ORM 
 behavior, and you'll see that the call to parent.children invokes a lazy load 
 operation, which is a Query hence it autoflushes first, thereby synchronizing 
 the state of message and resetting the modified flag. 
 
 
 
 if __name__ == '__main__':
 main()
 


Re: [sqlalchemy] Unexpected is_modified result after referencing a relationship

2015-02-27 Thread Michael Bayer



 On Feb 27, 2015, at 4:48 PM, Eric Smith e...@esmithy.net wrote:
 
 To those more experienced than me, does this behavior make sense? If so, 
 could you further my education with an explanation?
 
 If I change an attribute, is_modified returns True (as expected).
 If I then reference a relationship, is_modified returns False (unexpected).
 
 My output for the following program (using SQLAlchemy 0.9.7) is:
 
 False
 True
 False
 
 Thanks,
 Eric
 
 
from sqlalchemy import Column, Integer, String, create_engine, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship


Base = declarative_base()


class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True, nullable=False)
    message = Column(String)
    children = relationship('Child', backref='parent')


class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True, nullable=False)
    parent_id = Column(Integer, ForeignKey('parent.id'))


def main():
    engine = create_engine('sqlite:///:memory:')
    Session = sessionmaker(bind=engine)
    Base.metadata.create_all(engine)

    db = Session()
    parent = Parent()
    db.add(parent)
    db.commit()

    print db.is_modified(parent)
    parent.message = 'hi'
    print db.is_modified(parent)
    len(parent.children)  # Should this change is_modified?
    print db.is_modified(parent)

Turn on sql echoing, which is always the first step towards understanding ORM 
behavior, and you'll see that the call to parent.children invokes a lazy load 
operation, which is a Query hence it autoflushes first, thereby synchronizing 
the state of message and resetting the modified flag. 
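
As a small sketch of working around that (reusing the db / parent objects
from the example above), either suspend autoflush around the lazy load or
read the attribute history directly:

from sqlalchemy import inspect

parent.message = 'hi'
with db.no_autoflush:
    len(parent.children)        # the lazy load runs, but no flush is emitted
print db.is_modified(parent)    # still True

hist = inspect(parent).attrs.message.history
print hist.has_changes()        # True until the next flush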



 
if __name__ == '__main__':
    main()


Re: [sqlalchemy] relationship problem

2015-02-27 Thread Michael Bayer



 On Feb 26, 2015, at 5:56 AM, Julien Cigar jci...@ulb.ac.be wrote:
 
 On Wed, Feb 25, 2015 at 06:10:55PM -0500, Michael Bayer wrote:
 
 
 Julien Cigar jci...@ulb.ac.be wrote:
 
 On Thu, Feb 19, 2015 at 11:31:10AM -0500, Michael Bayer wrote:
 Julien Cigar jci...@ulb.ac.be wrote:
 
 On Thu, Feb 19, 2015 at 02:45:43PM +0100, Julien Cigar wrote:
 Hello,
 
 I'm using SQLAlchemy 0.9.8 with PostgreSQL and the reflection feature of
 SQLAlchemy.
 
 I have the following tables (only relevant parts are shown):
 https://gist.github.com/silenius/390bb9937490730741f2
 
 and the problematic mapper is the one of my association object:
 https://gist.github.com/silenius/1559a7db65ed30a1b079
 
 SQLAlchemy complains with the following error:
 sqlalchemy.exc.InvalidRequestError: One or more mappers failed to
 initialize - can't proceed with initialization of other mappers.
 Original exception was: Could not locate any simple equality expressions
 involving locally mapped foreign key columns for primary join condition
 'pool_invite_result.pool_invite_pool_id = pool_invite.pool_id AND
 pool_invite.pool_id = pool.id' on relationship PoolAccountResult.pool.
 Ensure that referencing columns are associated with a ForeignKey or
 ForeignKeyConstraint, or are annotated in the join condition with the
 foreign() annotation. To allow comparison operators other than '==', the
 relationship can be marked as viewonly=True.
 
 The problem is that in the PoolAccountResult mapper I want a
 relationship to the Pool but the link is made through an intermediate
 table (pool_invite) ..
 
 Any idea how to handle this with SQLAlchemy ?
 
 Thanks :)
 
 Julien
 
 ... and I'm answering to myself: it seems to work with
 https://gist.github.com/silenius/e7e59c96a7277fb5879f 
 
 does it sound right ?
 
 Sure.  Also, you could use automap which does figure these out in simple 
 cases: http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html
 
 always with this, any idea why SQLAlchemy inserts NULL and
 NULL for my two relationships (line 51-79) instead of the pool_obj and
 dup.owner ids (line 89-90), https://dpaste.de/1Trz ..?
 
 getting a 404 on that link.
 
 Hi Mike,
 
 Thanks for your help!
 
 I took the time to make a complete test case, available from here
 https://gist.github.com/silenius/96d6ed2544d14753853f

That's a very strange use of secondary, and I'm not sure I understand the 
rationale for an odd schema like this.   It should be pretty clear that when a 
table is set up as secondary, it is only used as a linkage between those 
immediate classes and there are no features to track the state of this table 
otherwise as though it were a mapped class.   Using the same secondary table in 
two unrelated relationships is not the intended use.  

Nevertheless, if you manipulate between Pool and User, that has no impact 
whatsoever on PoolInviteResult... Especially since we're dealing with a 
secondary table and not a first-class mapped entity.  You should add event 
listeners as needed on attributes such that when an in-Python change occurs 
between Pool and User, the desired change occurs for PoolInviteResult as well.
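
(Sketch only, assuming your mapping exposes a Pool.users relationship; the
listener bodies are whatever bookkeeping you need.)

from sqlalchemy import event

@event.listens_for(Pool.users, "append")
def on_pool_user_added(pool, user, initiator):
    # create / adjust the corresponding PoolInviteResult rows here
    pass

@event.listens_for(Pool.users, "remove")
def on_pool_user_removed(pool, user, initiator):
    # remove / adjust the corresponding PoolInviteResult rows here
    pass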

 
 I'm using PostgreSQL, and I checked that all constraints are properly
 created on server-side but I haven't checked with sqlite:// 
 
 
 
 
 
 
 
 Thanks,
 Julien
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 

Re: [sqlalchemy] relationship problem

2015-02-25 Thread Michael Bayer


Julien Cigar jci...@ulb.ac.be wrote:

 On Thu, Feb 19, 2015 at 11:31:10AM -0500, Michael Bayer wrote:
 Julien Cigar jci...@ulb.ac.be wrote:
 
 On Thu, Feb 19, 2015 at 02:45:43PM +0100, Julien Cigar wrote:
 Hello,
 
 I'm using SQLAlchemy 0.9.8 with PostgreSQL and the reflection feature of
 SQLAlchemy.
 
 I have the following tables (only relevant parts are shown):
 https://gist.github.com/silenius/390bb9937490730741f2
 
 and the problematic mapper is the one of my association object:
 https://gist.github.com/silenius/1559a7db65ed30a1b079
 
 SQLAlchemy complains with the following error:
 sqlalchemy.exc.InvalidRequestError: One or more mappers failed to
 initialize - can't proceed with initialization of other mappers.
 Original exception was: Could not locate any simple equality expressions
 involving locally mapped foreign key columns for primary join condition
 'pool_invite_result.pool_invite_pool_id = pool_invite.pool_id AND
 pool_invite.pool_id = pool.id' on relationship PoolAccountResult.pool.
 Ensure that referencing columns are associated with a ForeignKey or
 ForeignKeyConstraint, or are annotated in the join condition with the
 foreign() annotation. To allow comparison operators other than '==', the
 relationship can be marked as viewonly=True.
 
 The problem is that in the PoolAccountResult mapper I want a
 relationship to the Pool but the link is made through an intermediate
 table (pool_invite) ..
 
 Any idea how to handle this with SQLAlchemy ?
 
 Thanks :)
 
 Julien
 
 ... and I'm answering to myself: it seems to work with
 https://gist.github.com/silenius/e7e59c96a7277fb5879f 
 
 does it sound right ?
 
 Sure.  Also, you could use automap which does figure these out in simple 
 cases: http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html
 
 always with this, any idea why SQLAlchemy inserts NULL and
 NULL for my two relationships (line 51-79) instead of the pool_obj and
 dup.owner ids (line 89-90), https://dpaste.de/1Trz ..?

getting a 404 on that link.





 
 Thanks,
 Julien
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 


Re: [sqlalchemy] inheritance with multiple foreign keys to the base table

2015-02-24 Thread Michael Bayer


Christopher Singley ch...@singleys.com wrote:

 I'm using declared_attr.cascading from sqlalchemy-1.0.0dev to declare a 
 foreign key on a subclass that also needs another separate foreign key 
 reference to the parent table.
 To let the Mapper know which column to join for the polymorphic inheritance 
 relationship, I'm trying to pass an inherit_condition argument.
 
 Relevant code snippet can be seen here:
 https://gist.github.com/anonymous/1b24768cb714fb9c7de7

this is fixed in 3a56c4f019052c5d and your example case now succeeds.
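
For reference, the general shape of disambiguating that join with
inherit_condition is roughly as below; the names here (including the
"underlying_id" column) are illustrative, not your schema:

class SecInfo(Base):
    __tablename__ = 'secinfo'
    id = Column(Integer, primary_key=True)

class OptInfo(SecInfo):
    __tablename__ = 'optinfo'
    id = Column(Integer, ForeignKey('secinfo.id'), primary_key=True)
    underlying_id = Column(Integer, ForeignKey('secinfo.id'))
    __mapper_args__ = {
        # tell the mapper which foreign key drives the inheritance join
        'inherit_condition': id == SecInfo.id,
    }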


 
 Importing this code generates this warning:
 
 SAWarning: Implicitly combining column secinfo.id with column optinfo.id 
 under attribute 'id'.  Please configure one or more attributes for these 
 same-named columns explicitly.
 
 
 ...and then fails with this error:
 
 sqlalchemy.exc.ArgumentError: Mapper Mapper|OPTINFO|optinfo could not 
 assemble any primary key columns for mapped table 'Join object on 
 secinfo(23263120) and optinfo(19756496)'
 
 
 How can I configure this correctly?
 


Re: [sqlalchemy] DetachedInstanceError

2015-02-23 Thread Michael Bayer




 On Feb 22, 2015, at 10:09 PM, Ed Rahn edsr...@gmail.com wrote:
 
 I am occasionally and randomly getting a DetachedInstanceError when I try to 
 access an object attribute that is a reference. I will run this several 
 thousand times and only get it twice. What I don't understand is I use the 
 same exact base query, which is where the reference takes place, and it works 
 fine. Then I call a function which does some calculations and then it creeps 
 up. I don't close or commit the session or anything weird.
 
 I did some research about this error, and some possible causes are duplicate 
 sessions and using the default expire_on_commit. I've updated my call to 
 sessionmaker to set expire_on_commit to False. And although I'm using 
 MultiProcessing and creating a new scoped_session for each instance, 

Not sure if this is a factor but if you are using multiprocessing it's 
essential that you use new connections in the child fork.   I added a new 
section 
http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html#using-connection-pools-with-multiprocessing
 detailing some approaches to handling this.
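
A minimal sketch of one such approach (placeholder URL; the point is only
that the child throws away connections inherited from the parent before
using the engine):

import multiprocessing
from sqlalchemy import create_engine

engine = create_engine("postgresql://scott:tiger@localhost/test")

def child_work():
    # discard any pooled connections carried over across the fork
    engine.dispose()
    with engine.connect() as conn:
        conn.execute("SELECT 1")

p = multiprocessing.Process(target=child_work)
p.start()
p.join()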


 the only conclusion I can come to is this is causing the problem, but why 
 only occasionally? The retrieval of the problem object and all the preceding 
 function calls happen in the same process, so I'm stuck looking for other 
 solutions.
 
 I've tried calling session.merge() on the problem object and I have the same 
 problem.
 
 So I'm wondering what could also cause a problem like this or what shouldn't 
 I be doing that would cause this?
 
 There is a lot of code and it isn't anywhere public and I haven't been able to 
 make a minimal example, but this is a snippet of the problem code:
 https://gist.github.com/edrahn/ec4b1b757313a0f5c3c3
 
 And here is one of the tracebacks:
 https://gist.github.com/edrahn/bcc1aba82b3c70168a43
 
 Let me know if you need anything else!
 
 
 Thanks.
 -- 
 Ed Rahn edsr...@gmail.com
 


Re: [sqlalchemy] How can I know which fields cause IntegrityError when inserting into a table with multiple unique fields?

2015-02-23 Thread Michael Bayer




 On Feb 23, 2015, at 8:49 AM, Maurice Schleußinger m.schleusin...@gmail.com 
 wrote:
 
 Thanks for the reply!
 
 Since I create objects concurrently I can not predict and therefore not 
 pre-select all rows which could possibly provoke IntegrityErrors. 
 
 On the other hand Session.merge() seems to provide the functionality which 
 could resolve my problem. In my setup many processes create a number of 
 objects which could occur multiple times (in other words throw an 
 IntegrityError on commit). That can happen in one process (which I can handle 
 otherwise), in between multiple processes and between a process and the 
 consistent state in the database (which is my main problem ATM). 
 
 I just read the corresponding part in the SQLAlchemy docs. So if I use 
 Session.merge() with the load=True argument (instead of Session.add()) the 
 session should create the corresponding object if it does not exist, avoid 
 duplicates within one session and also update an existing entry in the 
 database?
 
 Also it seems that merging only works for primary keys. So if I had the 
 problem with an unique key, I still would have to parse the exception, right?

Ah ok, yes session.merge() works this way, but it does emit a SELECT.   If you 
were looking to emit less SQL I was referring to an operation like MySQL's 
REPLACE statement.   

But if merge works for you then stick with that.   You can write your own 
function that does something similar for other fields that are not the PK.
The recipe below is one way to do this.

https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/UniqueObject
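
Roughly, such a helper for non-PK unique fields can look like the sketch
below; it still races under concurrency, so you may want to combine it with
catching IntegrityError as discussed:

def get_or_create(session, model, **unique_fields):
    # look the row up by its unique fields first; only create if absent
    obj = session.query(model).filter_by(**unique_fields).first()
    if obj is None:
        obj = model(**unique_fields)
        session.add(obj)
    return obj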



 
 On Thursday, February 19, 2015 at 5:29:56 PM UTC+1, Michael Bayer wrote:
 
 Maurice Schleußinger m.schle...@gmail.com wrote: 
 
  Is there no other way? 
  
  http://stackoverflow.com/questions/27635933/how-can-i-know-which-fiels-cause-integrityerror-when-inserting-into-a-table-with/27884632#27884632
   
  
  Parsing an exception with regex just doesn't feel right… 
 
 
 The two other ways are that you can pre-select the rows, or use a MERGE / 
 upsert approach (you’d have to roll that yourself).
 


Re: [sqlalchemy] How can I know which fields cause IntegrityError when inserting into a table with multiple unique fields?

2015-02-19 Thread Michael Bayer

Maurice Schleußinger m.schleusin...@gmail.com wrote:

 Is there no other way?
 
 http://stackoverflow.com/questions/27635933/how-can-i-know-which-fiels-cause-integrityerror-when-inserting-into-a-table-with/27884632#27884632
 
 Parsing an exception with regex just doesn't feel right…


The two other ways are that you can pre-select the rows, or use a MERGE / 
upsert approach (you’d have to roll that yourself).
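
A sketch of the pre-select approach (model and field names are made up for
illustration): query for rows that already carry any of the candidate
unique values, so the conflicting field can be reported without parsing the
exception text.

from sqlalchemy import or_

def find_conflicts(session, Account, email, username):
    existing = session.query(Account).filter(
        or_(Account.email == email, Account.username == username)
    ).all()
    conflicts = set()
    for row in existing:
        if row.email == email:
            conflicts.add("email")
        if row.username == username:
            conflicts.add("username")
    return conflicts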



Re: [sqlalchemy] relationship problem

2015-02-19 Thread Michael Bayer


Julien Cigar jci...@ulb.ac.be wrote:

 On Thu, Feb 19, 2015 at 02:45:43PM +0100, Julien Cigar wrote:
 Hello,
 
 I'm using SQLAlchemy 0.9.8 with PostgreSQL and the reflection feature of
 SQLAlchemy.
 
 I have the following tables (only relevant parts are shown):
 https://gist.github.com/silenius/390bb9937490730741f2
 
 and the problematic mapper is the one of my association object:
 https://gist.github.com/silenius/1559a7db65ed30a1b079
 
 SQLAlchemy complains with the following error:
 sqlalchemy.exc.InvalidRequestError: One or more mappers failed to
 initialize - can't proceed with initialization of other mappers.
 Original exception was: Could not locate any simple equality expressions
 involving locally mapped foreign key columns for primary join condition
 'pool_invite_result.pool_invite_pool_id = pool_invite.pool_id AND
 pool_invite.pool_id = pool.id' on relationship PoolAccountResult.pool.
 Ensure that referencing columns are associated with a ForeignKey or
 ForeignKeyConstraint, or are annotated in the join condition with the
 foreign() annotation. To allow comparison operators other than '==', the
 relationship can be marked as viewonly=True.
 
 The problem is that in the PoolAccountResult mapper I want a
 relationship to the Pool but the link is made through an intermediate
 table (pool_invite) ..
 
 Any idea how to handle this with SQLAlchemy ?
 
 Thanks :)
 
 Julien
 
 ... and I'm answering to myself: it seems to work with
 https://gist.github.com/silenius/e7e59c96a7277fb5879f 
 
 does it sound right ?

Sure.  Also, you could use automap which does figure these out in simple cases: 
http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html
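
For completeness, the automap usage is roughly (placeholder URL and table
name):

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

Base = automap_base()
engine = create_engine("postgresql://scott:tiger@localhost/mydb")
Base.prepare(engine, reflect=True)   # reflect tables, generate mapped classes

Pool = Base.classes.pool             # class named after the reflected table
session = Session(engine)
print(session.query(Pool).count())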



 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 
 
 
 
 -- 
 Julien Cigar
 Belgian Biodiversity Platform (http://www.biodiversity.be)
 PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
 No trees were killed in the creation of this message.
 However, many electrons were terribly inconvenienced.
 


Re: [sqlalchemy] session.add order

2015-02-15 Thread Michael Bayer
the order in which you put things into session.add() is significant only
within the scope of a certain mapped class, that is, the order that you put
a bunch of Foo() objects into session.add() will be maintained, however a
series of Bar() objects are handled separately.

Between different object hierarchies, the unit of work only takes into
account those dependencies established between hierarchies, which is most
commonly via relationship(). That is, if you have a series of Foo objects
which have one-to-many associations to a series of Bar objects, even if you
session.add() the Bar objects first, the unit of work can’t INSERT the Bar
objects first, as they are dependent on the Foo objects being present.

If there is no particular dependency between two classes, that is no
relationship(), then there’s no determinism to when the series of Foo or Bar
objects are inserted, they will be inserted in add() order only within the
scope of Foo and Bar. An exception to this occurs if there are
self-referential relationships involved, say if a Foo has a collection of
Foo objects related to it, or even a subclass, such as Foo -> SubFoo(Foo);
in that case, the order within Foo objects themselves needs to be based
on foreign key dependency first.

The aspect of dependencies between mappers is normally established by the
fact of a relationship() establishing a partial ordering between those two
mappers. However, there is semi-public API that may be used to add
additional dependencies between mappers at flush time. If you really can’t
add any kind of relationship() between the mappers in question, an approach
like the one below may be used, though this is only semi-public API. There
should ideally be some public way to add dependencies like this between
mappers.

from sqlalchemy import Column, create_engine, Integer
from sqlalchemy.orm import Session, unitofwork
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
from sqlalchemy import inspect

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)


@event.listens_for(Session, "before_flush")
def set_b_dependent_on_a(session, flush_context, objects):
    b_mapper = inspect(B)
    a_mapper = inspect(A)
    save_bs = unitofwork.SaveUpdateAll(flush_context, b_mapper)
    save_as = unitofwork.SaveUpdateAll(flush_context, a_mapper)
    flush_context.dependencies.add((save_as, save_bs))

s = Session(e)
s.add_all([A(), A(), B(), A(), B()])
s.commit()


Victor Poluksht vpoluk...@gmail.com wrote:

 I've found that sometimes sqlalchemy inserts objects during session.commit() 
 not in the order they have been added using session.add()
 
 I've put my code example and the output to the github gist.
 
 https://gist.github.com/vpol/8da4a512308ae351eaf6
 
 My question is similar to this one: 
 http://stackoverflow.com/questions/10154343/is-sqlalchemy-saves-order-in-adding-objects-to-session
 
 Is it possible to make SQLAlchemy guarantee the order of inserts?
 


Re: [sqlalchemy] idle in transaction

2015-02-15 Thread Michael Bayer


Ed Rahn edsr...@gmail.com wrote:

 
 On 02/13/2015 11:30 PM, Michael Bayer wrote:
 Ed Rahn edsr...@gmail.com wrote:
 
 I have several programs that are Multi Process and long running, they open
 up 30 or so connections and do selects periodically. For a select query,
 the default behaviour is to begin a transaction if not currently in one,
 but not commit afterwards. This leaves a large number of postgresql
 processes with a status of idle in transaction. Which means they “lock”
 tables, so you can not drop them or alter them including add and drop
 indexes. I have also seen some problems where connections do not get closed
 if the connecting process exits, although I haven’t verified this is the
 cause.
 
 Is this a problem others have had in the past or am I just being overly
 worried?
 yeah that’s kind of bad. you want the connections to be “idle”, but
 definitely not “idle in transaction”. that will cause problems.
 
 
 
 If it is a problem is there any other way to fix it beside commit()’ing
 after each select query? I tried add an event handler after queries get
 run, but I didn't see a way to get to the current transaction.
 Well at least on the connection itself, if its used in non-autocommit mode
 (by which I refer to psycopg2’s autocommit flag), the “idle in transaction”
 will remain until either commit() or rollback() is called on that
 connection.
 
 So if you have a Session, and want to stay at that level, your options are
 to commit() it, to rollback() it, to close() it which returns the connection
 to the pool which does a connection-level rollback, or you could use the
 Session in autocommit=True mode, which means after each query it returns the
 connection to the pool for the same effect.
 I'd really like to not use autocommit mode. There are parts of the code that 
 I need to maintain DB consistency with transactions.
 And I need to keep the objects attached to a session so automatically closing 
 it isn't an option.
 So I guess my only option is to commit after each select, which seems like a 
 lot of work as the code base is fairly large.
 
 This seems like a fairly common use case, do people just not care about it or 
 how do they handle it?

Usually the application is written such that the start/end of how a Session
is used is just in one place in the app. Web applications can do this, and
also approaches like using decorators or custom context managers may be
used. Often, people just use the built in context manager of the Session,
such as:

with session.transaction:
# do stuff


I talk about this a lot in this section:
http://docs.sqlalchemy.org/en/rel_0_9/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
at the end is an example of how to build a context manager to do what you
need.
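
The pattern described there is roughly along these lines; the transaction
always ends, so no connection is left "idle in transaction":

from contextlib import contextmanager

@contextmanager
def session_scope():
    session = Session()        # Session = sessionmaker(bind=engine)
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

with session_scope() as session:
    session.query(MyThing).all()   # MyThing stands in for any mapped class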


 If you want to turn off transactions completely with the DBAPI connection
 itself, even though this overhead is very minimal for Postgresql you could
 set it to isolation level of AUTOCOMMIT which for psycopg2 sets the
 “autocommit” flag on the connection. The commit()/ rollback() calls from
 SQLAlchemy would have no effect.
 


Re: [sqlalchemy] Use multiple database for read and write operations

2015-02-14 Thread Michael Bayer


ahmadjaved...@gmail.com wrote:

 Hi,
 
 I am working on a web site using Python Django and SQLAlchemy ORM. I want to 
 route all read operations to one database and all write operations to another. 
 So I tried Mike Bayer's solution 
 (http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/)
  but query.update() and query.delete() didn't work properly. I debugged some 
 code and modified the SessionRouter class as follows:
 
 import sqlalchemy
 from sqlalchemy.orm import Session

 # ENGINES is the dict of engines from the router recipe linked above
 class SQLAlchemySessionRouter(Session):

     def get_bind(self, mapper=None, clause=None):
         if self._flushing:
             return ENGINES['write-only']
         elif isinstance(clause, sqlalchemy.sql.expression.Update):
             return ENGINES['write-only']
         elif isinstance(clause, sqlalchemy.sql.expression.Delete):
             return ENGINES['write-only']
         else:
             return ENGINES['read-only']
 
 Now it is working fine in my case (I tested this manually).
 
 I need Mike Bayer's or other experts' guidance. Do these modifications look good? 
 Should I go with this solution? Or is there something better that can help 
 me?


Where this can go wrong is if you are using the Session in a transaction, it 
will run all your SELECT queries on the read engine, and the flushes on the 
write engine, but depending on how your replication and transaction isolation 
is set up, you might not see the rows that you just committed.

I think the blog post mentions this, I should probably update it to accommodate 
using transactions appropriately.  If the recipe works for you as is, then 
keep it in place.



 
 Thanks in advance,
 
 


Re: [sqlalchemy] Problem unpickling metadata for Postgres custom datatype

2015-02-14 Thread Michael Bayer


S P steve.palzew...@gmail.com wrote:

 I am using SQLAlchemy version 0.9.7.  I have ~ 400 tables that I 
 automatically reflect from the database.  I would like to cPickle the 
 metadata after reflection and store this, then subsequently unpickle the 
 metadata and use it (for speedup) rather than use reflection again.
  
 The pickle goes fine.  When trying to unpickle and assign to metadata for 
 subsequent use, I get the following error:
  
 metadata_cached = cPickle.load(cache)
 TypeError: ('__new__() takes exactly 2 arguments (3 given)', class 
 'sqlalchemy.sql.elements._defer_name', (u'material_type', None))
  
 material_type is one of several custom Postgres types we have defined on our 
 database.  I have searched but cannot find a way to address this.  Has anyone 
 else tried to do a similar thing?

this construct refers to the Enum or Boolean types regarding a CheckConstraint 
that is generated behind the scenes corresponding to these types.It creates 
a construct called “_defer_name()” which refers to the fact that the name of 
the constraint is not known for sure yet.  This object is not pickleable in 
0.9.7 due to the regression introduced by 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3067 and this was repaired for 
issue https://bitbucket.org/zzzeek/sqlalchemy/issue/3144/, released in 0.9.8.




Re: [sqlalchemy] idle in transaction

2015-02-13 Thread Michael Bayer


Ed Rahn edsr...@gmail.com wrote:

 I have several programs that are Multi Process and long running, they open
 up 30 or so connections and do selects periodically. For a select query,
 the default behaviour is to begin a transaction if not currently in one,
 but not commit afterwards. This leaves a large number of postgresql
 processes with a status of idle in transaction. Which means they “lock”
 tables, so you can not drop them or alter them including add and drop
 indexes. I have also seen some problems where connections do not get closed
 if the connecting process exits, although I haven’t verified this is the
 cause.
 
 Is this a problem others have had in the past or am I just being overly
 worried?

yeah that’s kind of bad. you want the connections to be “idle”, but
definitely not “idle in transaction”. that will cause problems.



 If it is a problem is there any other way to fix it beside commit()’ing
 after each select query? I tried add an event handler after queries get
 run, but I didn't see a way to get to the current transaction.

Well at least on the connection itself, if its used in non-autocommit mode
(by which I refer to psycopg2’s autocommit flag), the “idle in transaction”
will remain until either commit() or rollback() is called on that
connection.

So if you have a Session, and want to stay at that level, your options are
to commit() it, to rollback() it, to close() it which returns the connection
to the pool which does a connection-level rollback, or you could use the
Session in autocommit=True mode, which means after each query it returns the
connection to the pool for the same effect.

If you want to turn off transactions completely with the DBAPI connection
itself, even though this overhead is very minimal for Postgresql you could
set it to isolation level of AUTOCOMMIT which for psycopg2 sets the
“autocommit” flag on the connection. The commit()/ rollback() calls from
SQLAlchemy would have no effect.
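
With psycopg2 that looks something like the following (placeholder URL);
every connection from this engine runs in autocommit, so plain SELECTs never
leave a transaction open:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="AUTOCOMMIT",
)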



Re: [sqlalchemy] Restricting a delete based on a many-to-many mapping (Using secondary)

2015-02-12 Thread Michael Bayer


Torsten Irländer torsten.irlaen...@googlemail.com wrote:

 
 
 Am Donnerstag, 12. Februar 2015 18:57:43 UTC+1 schrieb Michael Bayer:
 
 
 Torsten Irländer torsten@googlemail.com wrote: 
 
  Hi, 
  
  I have a similar problem as described in 
  https://groups.google.com/forum/#!topic/sqlalchemy/OLeCERDZxyk 
  
  I am using the secondary attribute to define the many-to-many relationship 
  between User and Groups. 
  
  Now I want to prevent deleting a group if a user is still in the group. 
  Setting the Foreign-Key to nullable=False as advised in the post above 
  does not work for me on the SQLAlchemy level. It does work when trying to 
  delete the Group directly in Postgres (an error is raised). I am using no 
  cascading deletes in the relation definition. 
 
 when you say “prevent”, do you mean, session.delete() will silently do 
 nothing, or that an error is raised (or how exactly are these deletes 
 originating)?  If the database is set up with constraints as you describe, 
 then it would raise constraint violations.   Or are you looking for some 
 error message to be raised sooner?
 
 Prevent means raise an error which could be excepted (and optionally 
 analysed) to show the user some feedback that the deletion of the groups can 
 not be done because there are still referencing objects. 
 The deletion is done by a session.delete(). 

What’s wrong with the IntegrityException here?  Not specific enough?   I’d 
recommend parsing the string message within it to determine its cause - this is 
a feasible approach that we use widely with openstack.  The exceptions that 
relate to this particular violation will be predictable. The database does 
a great job of catching this error, so efforts for SQLAlchemy to do the same 
thing would be redundant.




Re: [sqlalchemy] Restricting a delete based on a many-to-many mapping (Using secondary)

2015-02-12 Thread Michael Bayer


Torsten Irländer torsten.irlaen...@googlemail.com wrote:

 Hi,
 
 I have a similar problem as described in 
 https://groups.google.com/forum/#!topic/sqlalchemy/OLeCERDZxyk
 
 I am using the secondary attribute to define the many-to-many relationship 
 between User and Groups.
 
 Now I want to prevent deleting a group if a user is still in the group. Setting 
 the Foreign-Key to nullable=False as advised in the post above does not 
 work for me on the SQLAlchemy level. It does work when trying to delete the Group 
 directly in Postgres (an error is raised). I am using no cascading deletes in 
 the relation definition.

when you say “prevent”, do you mean, session.delete() will silently do nothing, 
or that an error is raised (or how exactly are these deletes originating)?  If 
the database is set up with constraints as you describe, then it would raise 
constraint violations.   Or are you looking for some error message to be raised 
sooner?




Re: [sqlalchemy] Restricting a delete based on a many-to-many mapping (Using secondary)

2015-02-12 Thread Michael Bayer


Torsten Irländer torsten.irlaen...@googlemail.com wrote:

 First, thanks for your fast replies Michael!
 
 Do I understand it correctly that in contrast to plain SQL (Invoking the 
 delete command in e.g. psql), there is no way to make the database side 
 constraints in the association table applicable in SQLAlchemy if I configure 
 this table in the relation using the secondary attribute? So even if there is 
 a constraint in the association table (Foreign key must not become null) this 
 is ignored by SQLAlchemy.

SQLAlchemy is not capable of “ignoring” a constraint.   For the use case of 
many-to-many relationships, if the relationship is established in the direction 
from the object being deleted towards the dependent rows, it will emit a DELETE 
for those rows in the association table before deleting the object itself.  So 
there is no constraint violation.  

The relationship will only take steps to alter the database if it’s not a “view 
only” relationship.  If you put viewonly=True on the side that you don’t want 
this deletion to occur (referring to relationships/backrefs here; it makes a 
difference which side you put it on), you’ll get the integrity violation when 
the object on the local side of that relationship is deleted, if remote rows 
exist.
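
A sketch of that arrangement, assuming a membership_table association table
and a User class along the lines of the original question; which
relationship gets viewonly=True depends on your mapping, as noted:

class Group(Base):
    __tablename__ = 'group'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    # viewonly: deleting a Group will not silently delete its membership
    # rows, so the database constraint gets the chance to reject the DELETE
    users = relationship("User", secondary=membership_table, viewonly=True)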

There is also an option “passive_deletes” which supports the setting “all”, 
indicating that the remote objects should never be nulled out, however this 
feature only applies to one-to-many/many-to-one right now.  It could be made to 
support the “secondary” use case as well, in that it would prevent the UOW from 
ever deleting any rows in the association table, but this feature is not 
implemented right now.

If viewonly is not an option, then I’d recommend using events to check for this 
condition.   




Re: [sqlalchemy] Restricting a delete based on a many-to-many mapping (Using secondary)

2015-02-12 Thread Michael Bayer


Torsten Irländer torsten.irlaen...@googlemail.com wrote:

 
 Nothing is wrong with the IntegrityException if this Exception is actually 
 raised :) That is currently my problem: I expect such an exception but it 
 isn't raised. Instead the group is deleted (including all entries for this 
 group in the secondary table) although still having users referencing the 
 group. 

I made an answer for something like this over here: 
http://stackoverflow.com/questions/9234082/setting-delete-orphan-on-sqlalchemy-relationship-causes-assertionerror-this-att/9264556#9264556,
  over there they wanted the equivalent of User to be deleted if all the Groups 
are. There’s no integrity constraint applicable to an association table so 
you’d need to catch this in Python.If these are Session.delete() 
operations, then you’d need to check the object and assert that its user 
collection is empty within a before_flush() event.
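
That check could look something like the sketch below, assuming a Group
class with a "users" collection:

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def block_group_delete(session, flush_context, instances):
    for obj in session.deleted:
        if isinstance(obj, Group) and obj.users:
            raise ValueError(
                "group %r still has users; refusing to delete" % obj.id)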




 
 

