Re: [sqlalchemy] Cascade child updates onto the parent

2020-05-28 Thread Colton Allen
Perfect.  That's exactly what I ended up doing.  I added events 
(after_insert/update/delete) for each backref.

For each has-many relationship (through a secondary table) I had to account 
for the fact that the parent model would appear in session.dirty without 
triggering the column's "onupdate" action.  So I added a generic 
before_update/delete event on my models' base class which is basically just 
target.updated_at = dt.now().
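The approach described above can be sketched end to end. A minimal, self-contained version (model and column names are illustrative, and in-memory SQLite stands in for the real database):

```python
from datetime import datetime

import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    updated_at = sa.Column(sa.DateTime, default=datetime.now)


class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    user_id = sa.Column(sa.Integer, sa.ForeignKey("user.id"), nullable=False)
    user = relationship("User", backref="children")


@event.listens_for(Child, "after_insert")
@event.listens_for(Child, "after_update")
def touch_parent(mapper, connection, target):
    # Emit an UPDATE against the parent row using the child's foreign key,
    # on the same connection the flush is using.
    connection.execute(
        sa.update(User.__table__)
        .where(User.__table__.c.id == target.user_id)
        .values(updated_at=datetime.now())
    )


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User()
    session.add(user)
    session.commit()
    first = user.updated_at

    session.add(Child(user_id=user.id))
    session.commit()
    session.refresh(user)
    second = user.updated_at
```

The same pattern extends to deletes by stacking an additional `after_delete` listener on the same function.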

On Thursday, May 28, 2020 at 11:06:28 AM UTC-5, Mike Bayer wrote:
>
>
>
> On Wed, May 27, 2020, at 3:57 PM, Colton Allen wrote:
>
> Hello,
>
> I'm trying to automate a backref update.  Basically, when a child model is 
> inserted or updated I want the parent model's "updated_at" column to 
> mutate.  The value should be the approximate time the user-child-model was 
> updated.  The updated_at value would not have to match the 
> created_at/updated_at value on the child.  It would just need to mutate to 
> a new time.
>
> class UserModel(db.Model):
>     updated_at = db.Column(db.DateTime, default=db.now, onupdate=datetime.now)
>
>
> class UserChildModel(db.Model):
>     user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
>     user = db.relationship('UserModel', backref='children')
>
> user = UserModel()
> save(user)
> print(user.updated_at) # x
>
> child = UserChildModel(user_id=user.id)
> save(child)
> print(user.updated_at) # y (value changed)
>
> Hopefully this pseudocode is sufficient.
>
> I'm wondering if there is an option I can specify on the orm.relationship 
> factory.  Or will I need to define an event?
>
>
> that could certainly be based on an event from the SQLAlchemy side. a 
> very straightforward one would be the before_insert / before_update / 
> after_insert / after_update suite of events, I would emit an UPDATE 
> statement against the parent table using the foreign key on the child row 
> that's being inserted/updated.  Another approach would be a DB trigger.
>
> the mapper level events are detailed at 
> https://docs.sqlalchemy.org/en/13/orm/events.html?highlight=before_insert#sqlalchemy.orm.events.MapperEvents.before_insert
>
> the "connection" right there is where you'd run your update, like:
>
> connection.execute(
>     update(parent)
>     .values(updated_at=datetime.now())
>     .where(parent.id == inserted.parent_id)
> )
>
>
>
>
>
> Thanks!
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/555b8c57-85cb-48fa-9e29-491796877fd6%40googlegroups.com.


[sqlalchemy] Cascade child updates onto the parent

2020-05-27 Thread Colton Allen
Hello,

I'm trying to automate a backref update.  Basically, when a child model is 
inserted or updated I want the parent model's "updated_at" column to 
mutate.  The value should be the approximate time the user-child-model was 
updated.  The updated_at value would not have to match the 
created_at/updated_at value on the child.  It would just need to mutate to 
a new time.

class UserModel(db.Model):
    updated_at = db.Column(db.DateTime, default=db.now, onupdate=datetime.now)


class UserChildModel(db.Model):
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
    user = db.relationship('UserModel', backref='children')

user = UserModel()
save(user)
print(user.updated_at) # x

child = UserChildModel(user_id=user.id)
save(child)
print(user.updated_at) # y (value changed)

Hopefully this pseudocode is sufficient.

I'm wondering if there is an option I can specify on the orm.relationship 
factory.  Or will I need to define an event?

Thanks!



[sqlalchemy] Advice for handling QueuePool limit exceptions (3o7r)

2020-05-06 Thread Colton Allen
Specifically: https://docs.sqlalchemy.org/en/13/errors.html#error-3o7r

I think I've got a good handle on what the problem is.  I just don't have 
the experience to know how to solve it effectively and with confidence.

Just some of my application's stats:

- Postgres database hosted with Amazon RDS (currently a t3.small).
- Default QueuePool limits: 5 + 10 overflow.
- Using flask + celery + gevent + concurrency of 1,000.

I only get this error when I'm saturating my celery queue.  So I'm thinking 
I can either reduce the concurrency or just add more connections.  I'm 
hesitant to reduce concurrency because the goal is to have even more 
throughput.  I could add more database connections (I think I have ~170 
available in total) but I'm unsure of the implications that action has (my 
app is the only app that uses the database).

If in the future I decide to add more concurrency, is there some sort of 
equation I should follow?  Something like "database connections * n = 
concurrency limit".

How does SQLAlchemy decide to create a new connection (I'm using 
*scoped_session* scoped to Flask's *_app_ctx_stack.__ident_func__*)?  Does 
it happen every time *scoped_session()* is called?
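For orientation, the limits in question live on the Engine: each worker process can hold at most pool_size + max_overflow connections, so a rough budget is workers × (pool_size + max_overflow) kept under the database's max_connections. A sketch of where those knobs are set (SQLite stands in for the RDS Postgres URL; the poolclass is forced so QueuePool semantics apply here too, and the numbers shown are SQLAlchemy's defaults):

```python
import sqlalchemy as sa

# Explicit pool sizing on the Engine.
engine = sa.create_engine(
    "sqlite://",
    poolclass=sa.pool.QueuePool,
    pool_size=5,      # default number of persistent connections
    max_overflow=10,  # default burst allowance above pool_size
    pool_timeout=30,  # seconds to wait for a connection before error 3o7r
)

with engine.connect() as conn:
    one = conn.execute(sa.text("select 1")).scalar()

# Closing a connection returns it to the pool rather than discarding it.
checked_out = engine.pool.checkedout()
```

Note that calling `scoped_session()` does not by itself open a connection; a connection is checked out from the pool when the session first emits SQL, and returned when the transaction ends.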

Let's say I want to scale up to a million concurrent users (as unlikely as 
that is).  Surely I can't have 100k active database connections on a single 
database; I'd have to divide them among a couple of slave databases.  So then 
how does the master database handle the write load?  Would I start putting 
my write operations into a queue and batching them?

Sorry for the brain dump but I'm very lost!



Re: [sqlalchemy] Re: ObjectDeletedError in master/slave configuration

2020-03-26 Thread Colton Allen
Aside from disabling expire_on_commit, any thoughts on how I can prevent 
this error?  I guess I just need a method to force the attribute refresh to 
use the master database.  I'm just not sure where I should put that.  
Thoughts?

I think one of my previous comments got lost because of formatting.  
Quoting it here for safety.

"I agree with your assessment.  I think its because every time I call 
"session".  I'm actually saying "scoped_session(session_maker)()".  So the 
_flushing attribute will be reset because its a new session instance."

 

On Thursday, March 26, 2020 at 1:35:14 PM UTC-5, Mike Bayer wrote:
>
>
>
> On Thu, Mar 26, 2020, at 2:18 PM, Colton Allen wrote:
>
> > You can adjust `expire_on_commit` if you're only doing short-term 
> read-only actions.
>
> Can you expand on this?  Or link to docs/blog so I can do some research.  
> Google hasn't helped me so far.  Why would I want to expire after every 
> commit?
>
>
> because once the transaction is completed, there are other transactions 
> going on in the database concurrently which can change the state of the 
> objects as they are represented in the database.  So that these local 
> objects have the correct state the next time you access them, they are 
> automatically expired.   However, this behavior is not desirable in many 
> cases, so this flag is very commonly used to disable it when you are not 
> concerned about your objects having stale data relative to other 
> transactions, when the new transaction begins.
>
> this behavior is described at: 
> https://docs.sqlalchemy.org/en/13/orm/session_basics.html#committing
>
>
>
>
>
> ---
>
> I agree with your assessment.  I think its because every time I call 
> "session".  I'm actually saying "session_maker()".  So the _flushing 
> attribute will be reset because its a new session instance.
>
> On Thursday, March 26, 2020 at 1:02:18 PM UTC-5, Jonathan Vanasco wrote:
>
> My first guess is two things are going on:
>
> 1. This is a behavior of `expire_on_commit` on the session.  Once you 
> commit on the Primary database, the object is stale.
>https://docs.sqlalchemy.org/en/13/orm/session_api.html
>
> 2. The session is then trying to read off a Secondary database, but the 
> row has not yet synced.
>
> You can adjust `expire_on_commit` if you're only doing short-term 
> read-only actions. However,  I would explore to ensure this is trying to 
> read off the other database and why.
>
>



[sqlalchemy] Re: ObjectDeletedError in master/slave configuration

2020-03-26 Thread Colton Allen
> You can adjust `expire_on_commit` if you're only doing short-term 
read-only actions.

Can you expand on this?  Or link to docs/blog so I can do some research.  
Google hasn't helped me so far.  Why would I want to expire after every 
commit?
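The trade-off can be seen in a runnable illustration: with expiration disabled, committed objects stay readable with no refresh SELECT, which in a routed setup means no post-commit read is sent to a possibly lagging replica (the `Widget` model is hypothetical; SQLite stands in for the real databases):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Widget(Base):  # hypothetical model
    __tablename__ = "widget"
    id = sa.Column(sa.Integer, primary_key=True)


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# With expire_on_commit=False, attributes are not expired at commit(), so
# touching them afterwards emits no refresh SELECT.
SessionFactory = sessionmaker(bind=engine, expire_on_commit=False)

session = SessionFactory()
w = Widget()
session.add(w)
session.commit()

widget_id = w.id  # no reload required
session.close()
```

The cost, as discussed above, is that the objects may carry stale data relative to transactions committed elsewhere.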

---

I agree with your assessment.  I think it's because every time I call 
"session" I'm actually saying "session_maker()", so the _flushing 
attribute will be reset because it's a new session instance.

On Thursday, March 26, 2020 at 1:02:18 PM UTC-5, Jonathan Vanasco wrote:
>
> My first guess is two things are going on:
>
> 1. This is a behavior of `expire_on_commit` on the session.  Once you 
> commit on the Primary database, the object is stale.
>https://docs.sqlalchemy.org/en/13/orm/session_api.html
>
> 2. The session is then trying to read off a Secondary database, but the 
> row has not yet synced.
>
> You can adjust `expire_on_commit` if you're only doing short-term 
> read-only actions. However,  I would explore to ensure this is trying to 
> read off the other database and why.
>



[sqlalchemy] ObjectDeletedError in master/slave configuration

2020-03-26 Thread Colton Allen
Hi,

I'm using a custom session class to route requests to different database 
engines based on the type of action being performed: master for writes, 
slave for reads.  It looks like the attributes on my models expire 
immediately after creation.  Is there any way to prevent this?  Or should I 
not worry about preventing the expiration?

Session class:

class RoutedSession(Session):

    def execute(self, *args, **kwargs):
        return super().execute(*args, **kwargs, use_master=True)

    def get_bind(self, mapper=None, clause=None, use_master=False):
        if use_master or self._flushing:
            return ENGINES['master']
        return random.choice(ENGINES['slaves'])

Application code:

model = MyModel()
session.add(model)
session.commit()

print(model.id)  # ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not present.

Traceback:

File "tests/mock.py", line 2414, in populate_db_with_mock_data
  contact_id=contact2.id, account_id=account.id,
File "sqlalchemy/orm/attributes.py", line 275, in __get__
  return self.impl.get(instance_state(instance), dict_)
File "sqlalchemy/orm/attributes.py", line 669, in get
  value = state._load_expired(state, passive)
File "sqlalchemy/orm/state.py", line 632, in _load_expired
  self.manager.deferred_scalar_loader(self, toload)
File "sqlalchemy/orm/loading.py", line 985, in load_scalar_attributes
  raise orm_exc.ObjectDeletedError(state)
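One commonly suggested mitigation, given the traceback above, is to keep the routing but disable expiration, so the post-commit attribute access needs no refresh query and never hits a lagging replica. A runnable sketch of that variant (in-memory SQLite engines stand in for the real primary and replica; this is one option, not the only fix):

```python
import random

import sqlalchemy as sa
from sqlalchemy.orm import Session

# Two in-memory engines stand in for the real primary and replica.
ENGINES = {
    "master": sa.create_engine("sqlite://"),
    "slaves": [sa.create_engine("sqlite://")],
}


class RoutedSession(Session):
    def get_bind(self, mapper=None, clause=None, **kw):
        # Writes (flushes) go to the primary; everything else to a replica.
        if kw.get("use_master") or self._flushing:
            return ENGINES["master"]
        return random.choice(ENGINES["slaves"])


# expire_on_commit=False keeps committed objects readable without a
# refresh SELECT, so model.id above would not trigger a replica read.
session = RoutedSession(expire_on_commit=False)
read_bind = session.get_bind()
write_bind = session.get_bind(use_master=True)
```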




Re: [sqlalchemy] Testing with a fake read replica

2020-03-11 Thread Colton Allen
Thank you.  My use case didn't permit me to use your examples but, based on 
your advice, I ended up using the "after_flush_postexec" event.  Are there 
any consequences to doing this?  It seems to work fine for now.

On Wednesday, March 11, 2020 at 12:34:12 PM UTC-5, Mike Bayer wrote:
>
>
>
> On Wed, Mar 11, 2020, at 12:44 PM, Colton Allen wrote:
>
> Hi,
>
> Before we talk about the read-replica, let's talk about the test suite as 
> it is.  I have a sessionmaker in my test suite configured to use an 
> external transaction.  Basically identical to this: 
> https://docs.sqlalchemy.org/en/13/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
>  
> .  This is great for performance and I have no complaints about it.
>
> To integrate a "read-replica" in my test suite I'm very clever.  I just 
> say "master = master_connection" and "slave = master_connection".  The test 
> suite has no idea there is a read-replica at all.
>
> Unfortunately, the coverage I get is not comprehensive.  Because a "slave" 
> connection in this context could write to the database and cause errors in 
> production.  So I need some way to differentiate the connections without 
> breaking the external transaction.  Any thoughts on how I could do this?  
> Any argument I can pass to the sessionmaker to make it read-only while 
> still making the data in the transaction available to each?
>
>
> IIUC you want an engine to fail a test if a write operation attempts to 
> proceed upon it.   lots of ways to do this:
>
> 1. set logging_name on create_engine(), then use a session before_flush() 
> handler to check the engine's logging_name  (e.g. 
> session.connection().engine.logging_name == 'readonly'), or use more 
> connection local events like before_insert, before_update, before_delete  
> https://docs.sqlalchemy.org/en/13/orm/events.html
>
> 2. use a checkout event on the read-only engine (
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.PoolEvents.checkout)
>  
> <https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.PoolEvents.checkout>
>  
> put a token in the connection's info dictionary, then use the above 
> techniques
>
> 3. intercept INSERT/UPDATE/DELETE directly using cursor_execute() events: 
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
>
>
> basically use events to intercept operations and check the connection / 
> engine in use.
>
>
>
>
> Thanks!
>
>



[sqlalchemy] Testing with a fake read replica

2020-03-11 Thread Colton Allen
Hi,

Before we talk about the read-replica, let's talk about the test suite as 
it is.  I have a sessionmaker in my test suite configured to use an 
external transaction.  Basically identical to this: 
https://docs.sqlalchemy.org/en/13/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
 
.  This is great for performance and I have no complaints about it.

To integrate a "read-replica" in my test suite I'm very clever.  I just say 
"master = master_connection" and "slave = master_connection".  The test 
suite has no idea there is a read-replica at all.

Unfortunately, the coverage I get is not comprehensive, because a "slave" 
connection in this context could write to the database and cause errors in 
production.  So I need some way to differentiate the connections without 
breaking the external transaction.  Any thoughts on how I could do this?  
Is there any argument I can pass to the sessionmaker to make it read-only 
while still making the data in the transaction available to each connection?

Thanks!
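There is no sessionmaker flag for this, but an event hook can make a designated engine refuse data-modification statements while still seeing the shared data; a sketch using `before_cursor_execute` (the engine name and the crude leading-verb check are illustrative):

```python
import sqlalchemy as sa
from sqlalchemy import event

# A "replica" engine that refuses writes, so a test fails loudly if
# production code would have written through it.
replica = sa.create_engine("sqlite://")


@event.listens_for(replica, "before_cursor_execute")
def forbid_writes(conn, cursor, statement, parameters, context, executemany):
    verb = statement.lstrip().split(None, 1)[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        raise RuntimeError("write attempted on read-only engine: %s" % verb)


with replica.connect() as conn:
    conn.execute(sa.text("create table t (x integer)"))  # DDL still allowed here
    select_ok = conn.execute(sa.text("select 1")).scalar()
    try:
        conn.execute(sa.text("insert into t (x) values (1)"))
        blocked = False
    except Exception:
        blocked = True
```

In the real suite the guarded engine would share the external transaction's connection; the event can equally be attached to that `Connection` or checked against a token in the connection's `info` dict, as suggested in the reply above.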



[sqlalchemy] Re: Bulk insert creating duplicate primary-keys

2019-06-12 Thread Colton Allen
Also, I should mention the ID is not explicitly mentioned in the select 
query.  I am relying on the column's "default" argument to supply the UUID.

Also also, I made a typo.  It should read "when the select only finds one 
row".

On Wednesday, June 12, 2019 at 12:36:50 PM UTC-7, Colton Allen wrote:
>
> I'm using Postgres and I am getting a duplicate primary key error when 
> attempting to insert from a query.  My primary key is a UUID type.
>
> statement = insert(my_table).from_select(
>     ['a', 'b'],
>     select([sometable.c.a, sometable.c.b]),
> )
> session.execute(statement)
> session.commit()
>
>
> Error: "DETAIL:  Key (id)=(f6bdf0e7-f2af-4f29-8122-5320e1ab428e) already 
> exists."
>
> This query runs successfully when the select on finds one row.  If there 
> are more it fails.  Is there a way to instruct the query to generate a UUID 
> for each row found?
>



[sqlalchemy] Bulk insert creating duplicate primary-keys

2019-06-12 Thread Colton Allen
I'm using Postgres and I am getting a duplicate primary key error when 
attempting to insert from a query.  My primary key is a UUID type.

statement = insert(my_table).from_select(
    ['a', 'b'],
    select([sometable.c.a, sometable.c.b]),
)
session.execute(statement)
session.commit()


Error: "DETAIL:  Key (id)=(f6bdf0e7-f2af-4f29-8122-5320e1ab428e) already 
exists."

This query runs successfully when the select only finds one row.  If there 
are more it fails.  Is there a way to instruct the query to generate a UUID 
for each row found?
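A likely cause: a Python-side column `default` is invoked once per INSERT statement, not once per selected row, so every row receives the same UUID. Generating the key inside the SELECT itself avoids this. A sketch (SQLite's `hex(randomblob(16))` stands in for Postgres's `gen_random_uuid()`; table and column names are illustrative):

```python
import sqlalchemy as sa

metadata = sa.MetaData()

src = sa.Table(
    "src", metadata,
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.Integer),
)
dst = sa.Table(
    "dst", metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.Integer),
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(src.insert(), [{"a": 1, "b": 2}, {"a": 3, "b": 4}])

    # Include the primary key in the SELECT so each row gets its own value.
    stmt = dst.insert().from_select(
        ["id", "a", "b"],
        sa.select(sa.func.hex(sa.func.randomblob(16)), src.c.a, src.c.b),
    )
    conn.execute(stmt)

    ids = [row.id for row in conn.execute(sa.select(dst.c.id))]
```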



[sqlalchemy] Filtering by another table without joining it

2018-09-22 Thread Colton Allen
As the title says, I'd like to filter by a related table without joining the 
table (I don't want to mess up my pagination).  Is there a way to enforce 
the FROM clause?  My SQL is a bit rusty, but I'm pretty sure it's possible.

On SQLAlchemy==1.2.12:

from sqlalchemy.ext.declarative import declarative_base

import sqlalchemy as sa


Model = declarative_base()


class Person(Model):
    __tablename__ = 'person'

    id = sa.Column(sa.Integer, primary_key=True)


class Note(Model):
    __tablename__ = 'note'

    id = sa.Column(sa.Integer, primary_key=True)
    person_id = sa.Column(sa.Integer, sa.ForeignKey('person.id'))

    person = sa.orm.relationship('Person', backref='notes')


engine = sa.create_engine('sqlite://')
Model.metadata.create_all(engine)


session_cls = sa.orm.sessionmaker(bind=engine)
session = session_cls()


person_query = session.query(Person).filter(Person.id == Note.person_id,
                                            Note.id == 1)
print(person_query)

"person_query" is:

SELECT person.id AS person_id 
FROM person, note 
WHERE person.id = note.person_id AND note.id = ?
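One way to filter on the related table while keeping "note" out of the outer FROM list, so LIMIT/OFFSET pagination over "person" is unaffected, is a correlated EXISTS. A self-contained sketch using the models from the post:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Model = declarative_base()


class Person(Model):
    __tablename__ = "person"
    id = sa.Column(sa.Integer, primary_key=True)


class Note(Model):
    __tablename__ = "note"
    id = sa.Column(sa.Integer, primary_key=True)
    person_id = sa.Column(sa.Integer, sa.ForeignKey("person.id"))
    person = relationship("Person", backref="notes")


engine = sa.create_engine("sqlite://")
Model.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Person(id=1), Person(id=2), Note(id=1, person_id=1)])
session.commit()

# The correlated EXISTS pushes "note" into a subquery, so the outer FROM
# clause contains only "person".
query = session.query(Person).filter(
    sa.exists().where(sa.and_(Note.person_id == Person.id, Note.id == 1))
)
people = query.all()
```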



[sqlalchemy] Force relationship load in "after_insert" event

2018-06-06 Thread Colton Allen
In an after_insert event I'm trying to load the value of an 
"orm.relationship".  The foreign-key column has a value but it returns 
null.  Is there a way to force it to load?

model = Model(user_id=1)  # some model is created with a foreign-key

@event.listens_for(Model, 'before_insert')
def print_user(mapper, connection, target):
    print(target.user)     # None - desired: the related user object
    print(target.user_id)  # 1
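The relationship comes back None because only the foreign-key column was assigned, and the relationship is not reliably lazy-loadable mid-flush. One workaround is to read the parent row directly with the connection the event hands you; a sketch (model names are illustrative):

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)


class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    user_id = sa.Column(sa.Integer, sa.ForeignKey("user.id"))
    user = relationship("User")


seen = []


@event.listens_for(Child, "before_insert")
def load_user(mapper, connection, target):
    # target.user is None when only user_id was assigned, so fetch the
    # parent row with the in-flush connection instead.
    row = connection.execute(
        sa.select(User.__table__).where(User.__table__.c.id == target.user_id)
    ).one()
    seen.append(row.name)


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="alice"))
    session.commit()
    session.add(Child(user_id=1))
    session.commit()
```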




Re: [sqlalchemy] Inconsistent flushing preventing "get_history" from working

2018-05-08 Thread Colton Allen
Okay thank you both for the help.  I'm now checking for changes before 
accessing relationships that might flush.  Basically:

if has_changes(self): write_revision(self)

Should do the trick.  Seems to be working already.
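The interaction can be seen in isolation: attribute history survives only until the session flushes, which is why an unexpected autoflush before the check made get_history come up empty. A sketch (the `Doc` model is hypothetical):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import get_history

Base = declarative_base()


class Doc(Base):  # hypothetical model
    __tablename__ = "doc"
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String)


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
doc = Doc(id=1, title="draft")
session.add(doc)
session.commit()

doc.title = "final"
# Pending changes are visible only while unflushed; an accidental
# autoflush (e.g. triggered by a lazy load) clears them.
changed_before_flush = get_history(doc, "title").has_changes()

session.flush()
changed_after_flush = get_history(doc, "title").has_changes()
```

This is why checking `has_changes` before touching any relationship that might autoflush, as described above, resolves the inconsistency.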

On Monday, May 7, 2018 at 9:33:27 PM UTC-7, Jonathan Vanasco wrote:
>
>
>
> On Monday, May 7, 2018 at 10:27:03 PM UTC-4, Mike Bayer wrote:
>>
>> can you perhaps place a "pdb.set_trace()" inside of session._flush()? 
>> using the debugger you can see the source of every flush() call. 
>> Generally, it occurs each time a query is about to emit SQL. 
>>
>>
> Building off what Mike said... it's going to emit sql + flush if you are 
> accessing any attributes or relationships that haven't been loaded 
> already.  So if the object only had a few columns loaded (via load_only or 
> deferred) or didn't load all the relationships, your code is iterating over 
> the columns and relationships so will trigger a load.
>
> It may make sense to turn autoflush off and manually call flush as needed.
>



Re: [sqlalchemy] Inconsistent flushing preventing "get_history" from working

2018-05-07 Thread Colton Allen
api_1 |   File 
"/usr/local/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 
1365, in _autoflush
api_1 | self.flush()
api_1 |   File 
"/usr/local/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 
2139, in flush
api_1 | self._flush(objects)
api_1 |   File "/app/common/sqlalchemy.py", line 17, in _flush
api_1 | super()._flush(objects)
api_1 |   File "/app/common/sqlalchemy.py", line 17, in _flush
api_1 | super()._flush(objects)
api_1 |   File "/usr/local/lib/python3.5/bdb.py", line 48, in 
trace_dispatch
api_1 | return self.dispatch_line(frame)
api_1 |   File "/usr/local/lib/python3.5/bdb.py", line 67, in 
dispatch_line
api_1     |     if self.quitting: raise BdbQuit
api_1 | bdb.BdbQuit

On Monday, May 7, 2018 at 7:27:03 PM UTC-7, Mike Bayer wrote:
>
> can you perhaps place a "pdb.set_trace()" inside of session._flush()? 
> using the debugger you can see the source of every flush() call. 
> Generally, it occurs each time a query is about to emit SQL. 
>

Re: [sqlalchemy] Inconsistent flushing preventing "get_history" from working

2018-05-07 Thread Colton Allen
ist(self)
api_1 |   File "/usr/local/lib/python3.5/site-packages/sqlalchemy/orm/query.py", line 2854, in __iter__
api_1 |     self.session._autoflush()
api_1 |   File "/usr/local/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1365, in _autoflush
api_1 |     self.flush()
api_1 |   File "/usr/local/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2139, in flush
api_1 |     self._flush(objects)
api_1 |   File "/app/common/sqlalchemy.py", line 17, in _flush
api_1 |     super()._flush(objects)
api_1 |   File "/app/common/sqlalchemy.py", line 17, in _flush
api_1 |     super()._flush(objects)
api_1 |   File "/usr/local/lib/python3.5/bdb.py", line 48, in trace_dispatch
api_1 |     return self.dispatch_line(frame)
api_1 |   File "/usr/local/lib/python3.5/bdb.py", line 67, in dispatch_line
api_1 |     if self.quitting: raise BdbQuit
api_1 | bdb.BdbQuit



On Monday, May 7, 2018 at 7:27:03 PM UTC-7, Mike Bayer wrote:
>
> can you perhaps place a "pdb.set_trace()" inside of session._flush()? 
> using the debugger you can see the source of every flush() call. 
> Generally, it occurs each time a query is about to emit SQL. 
>

Re: [sqlalchemy] Inconsistent flushing preventing "get_history" from working

2018-05-07 Thread Colton Allen
New to pdb.  Does this look correct?

import pdb

import sqlalchemy.orm
from sqlalchemy.orm import Session


class DebugSession(Session):

    def _flush(self, objects=None):
        pdb.set_trace()
        super()._flush(objects)


factory = sqlalchemy.orm.sessionmaker(bind=engine, class_=DebugSession)

Program terminates when trying to call _flush().

api_1 | > /app/common/sqlalchemy.py(17)_flush()
api_1 | -> super()._flush(objects)
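The BdbQuit crash happens because pdb needs an interactive TTY, which a docker-compose service usually does not have attached. A sketch of an alternative that answers the same question ("where is each flush coming from?") without a debugger: log the call stack at each flush. The `Thing` model and in-memory engine here are throwaway demo fixtures, not part of the original code.

```python
import logging
import traceback

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker


class TracingSession(Session):
    """Log the stack that triggered each flush; unlike pdb.set_trace(),
    this works with no TTY attached (e.g. under docker-compose)."""

    def _flush(self, objects=None):
        logging.getLogger("flush_trace").critical(
            "flush called from:\n%s",
            "".join(traceback.format_stack(limit=10)))
        super()._flush(objects)


# demo setup: a throwaway in-memory model so a flush has work to do
Base = declarative_base()


class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
factory = sessionmaker(bind=engine, class_=TracingSession)
```

Each log record then carries the ten frames that led into the flush, which identifies the query (or commit) responsible.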


On Monday, May 7, 2018 at 7:27:03 PM UTC-7, Mike Bayer wrote:
>
> can you perhaps place a "pdb.set_trace()" inside of session._flush()? 
> using the debugger you can see the source of every flush() call. 
> Generally, it occurs each time a query is about to emit SQL. 
>

[sqlalchemy] Inconsistent flushing preventing "get_history" from working

2018-05-07 Thread Colton Allen
What exactly causes the session to flush?  I'm trying to track down a nasty 
bug in my versioning system.

Sorry for the long code dump.  I retooled 
examples/versioned_history/history_meta.py so it should look familiar.  The 
function that's breaking is "column_has_changed". I've added some logs as 
well.

# WHEN IT WORKS!

CRITICAL:root:BEFORE MAPPING NEW VALUES
CRITICAL:root:BEFORE SAVE
CRITICAL:root:BEFORE REVISE
CRITICAL:root:CHECK COLUMN CHANGES
CRITICAL:root:AFTER REVISE
CRITICAL:root:flush!
CRITICAL:root:AFTER SAVE
CRITICAL:root:flush!

# WHEN IT DOESN'T WORK!

CRITICAL:root:BEFORE MAPPING NEW VALUES
CRITICAL:root:BEFORE SAVE
CRITICAL:root:BEFORE REVISE
CRITICAL:root:flush!
CRITICAL:root:CHECK COLUMN CHANGES
CRITICAL:root:AFTER REVISE
CRITICAL:root:AFTER SAVE
CRITICAL:root:flush!

controller.py

for k, v in dict.items():
    setattr(model, k, v)
model.revise()
db.session.add(model)
db.session.commit()

model.py

class RevisionMixin:
    """Version control manager."""

    def revise(self):
        db.session.add(self)
        write_revision(self)

version.py

def write_revision(target):
    target_mapper = orm.object_mapper(target)
    revision_class = target.__versioned__['model']
    revision_mapper = revision_class.__mapper__

    object_changed = False
    state = {}

    for column in iter_mapper_columns(target_mapper, revision_mapper):
        state[column.key] = getattr(target, column.key)
        column_changed = column_has_changed(target, column.key)
        object_changed = object_changed or column_changed

    for relationship, changed in iter_relationships(target, target_mapper):
        if hasattr(revision_class, relationship.key):
            state[relationship.key] = getattr(target, relationship.key)
        object_changed = object_changed or changed

    if not isinstance(target.id, str) or object_changed:
        _write_revision(target, state)


def _write_revision(target, state):
    version = target.version or 0
    version = version + 1
    state['version'] = version
    state['updated_at'] = db.now()
    state['primary'] = target

    revision = target.__versioned__['model'](**state)
    db.session.add(revision)

    target.version = version
    target.updated_at = state['updated_at']


def iter_mapper_columns(primary, revision):
    mappers = zip(primary.iterate_to_root(), revision.iterate_to_root())
    for om, hm in mappers:
        if hm.single:
            continue
        for column in iter_shared_columns(om, hm):
            yield column


def iter_shared_columns(mapper, comparison_mapper):
    for comparison_mapper_column in comparison_mapper.local_table.c:
        if 'version_meta' in comparison_mapper_column.info:
            continue

        try:
            mapper_column = mapper.local_table.c[comparison_mapper_column.key]
            yield mapper.get_property_by_column(mapper_column)
        except UnmappedColumnError:
            continue


def iter_relationships(target, mapper):
    for prop in mapper.iterate_properties:
        if isinstance(prop, RelationshipProperty):
            passive = attributes.PASSIVE_NO_INITIALIZE
            changed = attributes.get_history(
                target, prop.key, passive=passive).has_changes()
            yield prop, changed


def column_has_changed(target, column_name):
    # Sometimes the instance state history can't be properly
    # calculated?  No flushing during versioning.  Unsure why it's not
    # working.
    added, _, deleted = attributes.get_history(target, column_name)
    return bool(added or deleted)


def relationship_has_changed(prop):
    for p in prop.local_columns:
        if p.foreign_keys:
            return True
    return False
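The early "flush!" in the failing log is autoflush at work: any query emitted while `write_revision` is inspecting the object (a lazy load triggered by `getattr` on a relationship, for instance) flushes the session first, which erases the pending attribute history that `column_has_changed` depends on. A minimal, standalone sketch of the effect and of the `no_autoflush` guard that prevents it (hypothetical `Widget` model, modern SQLAlchemy):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, attributes, declarative_base

Base = declarative_base()


class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Widget(name="a"))
session.commit()

widget = session.query(Widget).one()  # loaded, so the old value is present
widget.name = "b"
with session.no_autoflush:
    # this query would normally autoflush first, wiping out the pending
    # history; inside the guard the change survives the SELECT
    session.query(Widget).all()
    added, _, deleted = attributes.get_history(widget, "name")
# history intact: added holds 'b', deleted holds 'a'
```

Wrapping the body of `revise()` in `with db.session.no_autoflush:` would apply the same guard to the versioning code above.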


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] column_property query for a table that has not been defined yet

2018-04-02 Thread Colton Allen
I'm trying to write a column_property that queries a table that will be 
defined after the current table is defined.  I tried using raw SQL but I'm 
getting an error.

On the "page" table I define a "count_posts" column_property.

count_posts = orm.column_property(text(
    'SELECT count(post.id) FROM post WHERE post.page_id = page.id'))
# AttributeError: 'TextClause' object has no attribute 'label'
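One way around both problems (the ordering and the missing `.label()`) is to define both classes first and attach the property afterwards as a real scalar subquery, which does support labeling. A sketch with hypothetical `Page`/`Post` models, using the modern (1.4+) API:

```python
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Page(Base):
    __tablename__ = "page"
    id = sa.Column(sa.Integer, primary_key=True)


class Post(Base):
    __tablename__ = "post"
    id = sa.Column(sa.Integer, primary_key=True)
    page_id = sa.Column(sa.Integer, sa.ForeignKey("page.id"))


# Attach the property after both classes exist, so a real selectable
# (which supports .label()) can reference the later table.
Page.count_posts = orm.column_property(
    sa.select(sa.func.count(Post.id))
    .where(Post.page_id == Page.id)
    .correlate_except(Post)
    .scalar_subquery()
)
```

Declarative intercepts the late class-attribute assignment and adds the property to the existing mapper, so `Page.count_posts` behaves like any other mapped column.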




[sqlalchemy] Duplicate primary key on table migration

2018-03-23 Thread Colton Allen
I'm moving data from one table to another.  During this move I'm preserving 
the ID of the old table before dropping it.  However, by doing so the 
sequence gets out of whack and the database will no longer allow inserts to 
the trigger table.  What can I do to fix the broken sequence?  The id 
column is a UUID.

# Migration
select = sa.select([outcome.c.id, outcome.c.a, outcome.c.b])
statement = sa.insert(trigger).from_select(['id', 'a', 'b'], select)
connection.execute(statement)

# Insert some time later
statement = sa.insert(trigger).values(a=1, b=2)
connection.execute(statement)  # duplicate key value violates unique constraint "trigger_pkey"
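If the copied table's primary key were an integer backed by a sequence, the usual repair is to advance the sequence past the copied ids after the migration. A hypothetical sketch against the `trigger` table (note: a UUID primary key has no sequence to resync, so a duplicate-key error there instead indicates the same UUID values being inserted twice):

```python
import sqlalchemy as sa

# Hypothetical repair for a sequence-backed integer PK: after copying
# rows with explicit ids, advance the sequence past max(id).  (A UUID
# primary key has no sequence, so a duplicate-key error there means the
# same UUID values were inserted twice.)
resync = sa.text(
    "SELECT setval(pg_get_serial_sequence('trigger', 'id'), "
    "(SELECT coalesce(max(id), 1) FROM trigger))")
# connection.execute(resync)  # run once, after the data migration
```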



[sqlalchemy] Copying enum from one column to another

2018-03-14 Thread Colton Allen
I'm trying to copy an enum of one type to an enum of another type.  The 
below is the query I'm using:

import sqlalchemy as sa

statement = sa.insert(table_a).from_select(
    ['enum_a'], sa.select(table_b.c.enum_b))
connection.execute(statement)

Which raises this error in Postgres:

Error: (psycopg2.ProgrammingError) column "enum_a" is of type enum_a but 
expression is of type enum_b

Is there a way to parse the enum to its string-value and just insert that?
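One approach for this shape of problem (a sketch with hypothetical table definitions mirroring the question): cast the source enum to text, then let Postgres cast the text into the target enum type, i.e. `enum_b::text::enum_a` on the SQL side.

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
enum_a = postgresql.ENUM("x", "y", name="enum_a")
enum_b = postgresql.ENUM("x", "y", name="enum_b")
table_a = sa.Table("table_a", metadata, sa.Column("enum_a", enum_a))
table_b = sa.Table("table_b", metadata, sa.Column("enum_b", enum_b))

# enum_b -> text -> enum_a: a nested CAST the PG dialect renders inline
select = sa.select(sa.cast(sa.cast(table_b.c.enum_b, sa.Text), enum_a))
statement = sa.insert(table_a).from_select(["enum_a"], select)

sql = str(statement.compile(dialect=postgresql.dialect()))
```

This only succeeds at runtime when every label of `enum_b` is also a valid label of `enum_a`; otherwise the text-to-enum cast raises in the database.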



[sqlalchemy] QueuePool overflowing with websocket server

2017-11-28 Thread Colton Allen
I'm receiving this error: 

sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached, 
connection timed out, timeout 30


I'm curious how SQLAlchemy manages database connections. I've pasted below 
some pseudo-code to simplify the application.

# Create an engine
engine = create_engine('whatever')

# Create a scoped session factory
factory = sqlalchemy.orm.sessionmaker(bind=engine)
factory = sqlalchemy.orm.scoped_session(
    factory, scopefunc=_app_ctx_stack.__ident_func__)

# Handle websocket connection
def my_websocket(ws):
    while True:
        # yield after arbitrary amount of time
        model_id = yield_from_redis()

        # Use the session to query the model.
        session = factory()
        model = session.query(SomeModel).get(model_id)
        ws.send(model.serialized)

    # breakout or whatever
    factory.remove()



The engine and factory are being created once on a singleton.  The session 
is being created each time it's needed.  Anytime I need a session, I call 
factory().  This hasn't been a problem on the HTTP server (I call remove() 
after each request) but since websockets are so long-lived, I assume that 
creating all these session instances is somehow causing the overflow.

Should I be closing that session sooner?  I assume that's the case but I'm 
asking because I want to make sure the code is as stable as possible.
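Scoping the session to one unit of work per message, rather than per connection, keeps long-lived sockets from pinning pooled connections. A runnable sketch of that pattern (sqlite stands in for the real database URL, `handle_message` for the websocket loop body):

```python
import sqlalchemy as sa
from sqlalchemy import orm, pool

# QueuePool chosen explicitly so the sketch exercises the same pool the
# error message names; "sqlite://" stands in for the real database URL
engine = sa.create_engine(
    "sqlite://", poolclass=pool.QueuePool, pool_size=5, max_overflow=10)
factory = orm.scoped_session(orm.sessionmaker(bind=engine))


def handle_message(model_id):
    """One unit of work per websocket message: the finally block returns
    the session's connection to the pool, so a long-lived socket never
    holds a connection between messages."""
    try:
        session = factory()
        # stand-in for session.query(SomeModel).get(model_id)
        return session.execute(sa.text("SELECT 1")).scalar()
    finally:
        factory.remove()
```

With `remove()` in the `finally` block, processing arbitrarily many messages never grows the number of checked-out connections, so the 5+10 pool limit is never approached.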



Re: [sqlalchemy] "execute(query)" behaving strangely

2017-10-09 Thread Colton Allen
You were exactly right.  I needed to commit.
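The resolution, sketched end-to-end with a standalone placeholder model (names mirror the quoted thread but the model itself is hypothetical):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()


class Model(Base):
    __tablename__ = "model"
    id = sa.Column(sa.Integer, primary_key=True)
    x = sa.Column(sa.Integer, default=0)


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add_all([Model(id=1), Model(id=2)])
session.commit()

list_of_ids = [1, 2]
query = sa.update(Model).where(Model.id.in_(list_of_ids)).values(x=1)
session.execute(query)
session.commit()  # without this, the UPDATE stays in an open, uncommitted transaction
```

The final `commit()` is what the live environment was missing; the test suite only appeared to work because it read the data from inside the same uncommitted transaction.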

On Monday, October 9, 2017 at 4:08:05 PM UTC-7, Mike Bayer wrote:
>
> On Mon, Oct 9, 2017 at 3:57 PM, Colton Allen <cmana...@gmail.com 
> > wrote: 
> > I'm trying to execute a fairly simple UPDATE query. 
> > 
> > query = update(Model).where(Model.id.in_(list_of_ids)).values(x=1) 
> > 
> > I know of two methods to execute it.  One using the session and the 
> other 
> > using the engine.  However, depending on which I use, the results I get 
> are 
> > very different. 
> > 
> > db.session.execute(query)  # works in test suite but not live. 
> > db.engine.connect().execute(query) # works live but not in test suite. 
> > 
> > I'm trying to understand why this would be the case.  I believe either 
> my 
> > test suite of my implementation of sqlalchemy is broken.  I was 
> wondering if 
> > you had any tips. 
>
> well the session version won't be committed unless you call 
> session.commit().  so...it seems like your test suite is probably 
> looking at the data uncommitted, which is fine, but for live you'd 
> want to make sure data is commited. 
>
> I guess in the opposite case, your test suite which relies upon the 
> data being rolled back for teardown doesn't occur when you use 
> engine.connect() because that makes its own transaction that is 
> autocommitting. 
>
>
>
>



[sqlalchemy] Re: "execute(query)" behaving strangely

2017-10-09 Thread Colton Allen
Both are using postgres.  But I did try updating the bind to both the 
engine (db.engine) and the session (db.session) and it didn't have any 
effect.

On Monday, October 9, 2017 at 2:09:02 PM UTC-7, Jonathan Vanasco wrote:
>
> OTOMH (I didn't go through your code), are the two databases the same?
>
> If not, this is possibly related to database specific compiling (or the 
> lack of) and a common error.  Note the `bind` references in the docs: 
> http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing 
>  Depending on how you invoke a select, the statement will either be 
> compiled as a generic string or sqlalchemy knows that it should be 
> customized for the specific database.  Depending on the database, the 
> generic version will or will not work.  This might not be happening -- I 
> didn't go through your code -- but many situations of "It works when I ___ 
> but not when I " are related to this.
>



[sqlalchemy] "execute(query)" behaving strangely

2017-10-09 Thread Colton Allen
I'm trying to execute a fairly simple UPDATE query.

query = update(Model).where(Model.id.in_(list_of_ids)).values(x=1)

I know of two methods to execute it.  One using the session and the other 
using the engine.  However, depending on which I use, the results I get are 
very different.

db.session.execute(query)  # works in test suite but not live.
db.engine.connect().execute(query) # works live but not in test suite.

I'm trying to understand why this would be the case.  I believe either my 
test suite or my implementation of sqlalchemy is broken.  I was wondering 
if you had any tips.

Test Suite:

def setUp(self):
    """Create app test client."""
    self.app = app
    self.app_context = self.app.app_context()
    self.app_context.push()
    self.client = self.app.test_client()

    self.transaction = connection.begin()
    db.session = scoped_session(
        sessionmaker(bind=connection, query_cls=db.query_class))
    db.session.begin_nested()

    @event.listens_for(db.session, "after_transaction_end")
    def restart_savepoint(session, transaction):
        if transaction.nested and not transaction._parent.nested:
            session.expire_all()
            session.begin_nested()

    # this is a cleanup function rather than a teardown function in case
    # the db gets into a bad state and setup fails, in which case we still
    # want the drop_all to be called
    self.addCleanup(self.cleanup)

def cleanup(self):
    """Tear down database."""
    db.session.close()
    self.transaction.rollback()
    self.app_context.pop()

@classmethod
def setUpClass(cls):
    """Create the database."""
    global app, engine, connection

    app = cls._create_app()
    engine = db.engine
    connection = engine.connect()

@classmethod
def tearDownClass(cls):
    """Destroy the database."""
    connection.close()
    engine.dispose()


Session construction when live:

@property
def engine(self):
    """Return an engine instance."""
    if not self._engine:
        database_uri = self.app.config.get('SQLALCHEMY_DATABASE_URI')
        self._engine = sqlalchemy.create_engine(database_uri)
    return self._engine

@property
def session(self):
    """Return database session."""
    if not self._session:
        factory = sqlalchemy.orm.sessionmaker(
            bind=self.engine, query_cls=self.query_class)
        self._session = sqlalchemy.orm.scoped_session(
            factory, scopefunc=_app_ctx_stack.__ident_func__)
    return self._session()




[sqlalchemy] "after_insert" event alternative?

2017-09-12 Thread Colton Allen
In an attempt to simplify my app's logic, I want to move some mandatory 
method calls into a model event.  Traditionally, I've used "after_insert" 
but it has some quirks in that you need to use the "connection" argument to 
make updates.  What I really want is more flexibility.  I'll need use of my 
"db.session" property, for example.  I want it to behave like any other 
part of my app.

A contrived example of the flow I have:

model = EventedModel()
session.add(model)
session.commit()

some_function_call(model)

Versus the flow I want:

@event.listens_for(EventedModel, 'after_commit')
def test(my_committed_model, *args, **kwargs):
    some_function_call(my_committed_model)
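SQLAlchemy has no per-mapper "after_commit" hook, but the session-level events can be combined to produce this flow: collect freshly inserted instances in `after_flush` (where `session.new` is still populated) and hand them off in `after_commit`, at which point the transaction is finished. A sketch with a hypothetical model; `processed` stands in for `some_function_call`:

```python
import sqlalchemy as sa
from sqlalchemy import event, orm

Base = orm.declarative_base()


class EventedModel(Base):
    __tablename__ = "evented"
    id = sa.Column(sa.Integer, primary_key=True)


processed = []  # stand-in for some_function_call


@event.listens_for(orm.Session, "after_flush")
def collect_inserts(session, flush_context):
    # remember freshly inserted instances on the session itself
    session.info.setdefault("pending_inserts", []).extend(
        obj for obj in session.new if isinstance(obj, EventedModel))


@event.listens_for(orm.Session, "after_commit")
def run_post_commit(session):
    # the transaction is done here, so ordinary session use is safe
    for obj in session.info.pop("pending_inserts", []):
        processed.append(obj)  # i.e. some_function_call(obj)


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
model = EventedModel()
session.add(model)
session.commit()
```

Stashing the pending list in `session.info` keeps the bookkeeping per-session, so concurrent sessions do not see each other's inserts.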



Re: [sqlalchemy] Updating polymorphic parent from polymorphic child's event

2017-08-03 Thread Colton Allen
Worked perfectly.  Thanks for the help.
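For reference, the fix can be sketched end-to-end with trimmed-down versions of the models from the quoted thread (the relationships and the `latest_result_id` foreign key are dropped for brevity; only the polymorphic structure that caused the error is kept):

```python
import sqlalchemy as sa
from sqlalchemy import event, orm

Base = orm.declarative_base()


class OutboundModel(Base):
    __tablename__ = "outbound"
    id = sa.Column(sa.Integer, primary_key=True)
    driver = sa.Column(sa.String, default="default", nullable=False)
    latest_result_id = sa.Column(sa.Integer)  # FK trimmed for brevity
    __mapper_args__ = {
        "polymorphic_identity": "default",
        "polymorphic_on": driver,
        "with_polymorphic": "*",
    }


class OutboundSESModel(OutboundModel):
    __tablename__ = "outbound_ses"
    __mapper_args__ = {"polymorphic_identity": "ses"}
    id = sa.Column(sa.Integer, sa.ForeignKey("outbound.id"), primary_key=True)


class OutboundResultModel(Base):
    __tablename__ = "outbound_result"
    id = sa.Column(sa.Integer, primary_key=True)
    outbound_id = sa.Column(sa.Integer, sa.ForeignKey("outbound.id"), nullable=False)


@event.listens_for(OutboundResultModel, "after_insert", propagate=True)
def set_latest_outbound_result(mapper, connection, target):
    # update through the plain Table; the mapped class now stands for a
    # polymorphic JOIN, which core update() cannot target
    outbound = sa.inspect(OutboundModel).local_table
    connection.execute(
        sa.update(outbound)
        .where(outbound.c.id == target.outbound_id)
        .values(latest_result_id=target.id))


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add(OutboundModel(id=1))
session.commit()
session.add(OutboundResultModel(id=7, outbound_id=1))
session.commit()
```

`inspect(OutboundModel).local_table` returns the bare `outbound` Table regardless of the `with_polymorphic` setting, which is why the core UPDATE compiles again.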

On Thursday, August 3, 2017 at 11:16:26 AM UTC-7, Mike Bayer wrote:
>
> On Thu, Aug 3, 2017 at 1:37 PM, Colton Allen <cmana...@gmail.com 
> > wrote: 
> > I've got an event that's trying to update another model with a foreign 
> key 
> > but it's raising a strange error that I'm having trouble debugging. 
> > "AttributeError: 'Join' object has no attribute 'implicit_returning'". 
> > 
> > The query worked in the past (before the OutboundModel became 
> polymorphic). 
> > I'm certain the query in the update event is incorrect but I'm unsure of 
> how 
> > to fix it. 
> > 
> > # Parent polymorphic models 
> > class OutboundModel(Model): 
> > __tablename__ = 'outbound' 
> > 
> > id = Column(Integer, primary_key=True) 
> > driver = Column( 
> > Enum('default', 'ses', name='outbound_driver'), 
> default='default', 
> > nullable=False) 
> > latest_result_id = Column(Integer, ForeignKey('outbound_result.id')) 
>
> > latest_result = orm.relationship( 
> > 'OutboundResultModel', backref='outbound', 
> > foreign_keys=[latest_result_id]) 
> > 
> > __mapper_args__ = { 
> > 'polymorphic_identity': 'default', 
> > 'polymorphic_on': driver, 
> > 'with_polymorphic': '*' 
> > } 
> > 
> > 
> > class OutboundSESModel(OutboundModel): 
> > __tablename__ = 'service_outbound_ses' 
> > __mapper_args__ = {'polymorphic_identity': 'ses'} 
> > 
> > id = Column(Integer, ForeignKey('outbound.id'), primary_key=True) 
> > 
> > 
> > # Child polymorphic models 
> > class OutboundResultModel(Model): 
> > __tablename__ = 'outbound_result' 
> > 
> > id = Column(Integer, primary_key=True) 
> > driver = Column( 
> > Enum('a', 'b', name='outbound_result_driver'), default='a', 
> > nullable=False) 
> > outbound_id = Column( 
> > Integer, ForeignKey('outbound.id'), nullable=False) 
> > outbound = orm.relationship( 
> > 'OutboundModel', backref='results', foreign_keys=[outbound_id]) 
> > __mapper_args__ = { 
> > 'polymorphic_identity': 'result', 
> > 'polymorphic_on': kind, 
> > 'with_polymorphic': '*' 
> > } 
> > 
> > 
> > # The event causing the error. 
> > # AttributeError: 'Join' object has no attribute 'implicit_returning' 
> > @event.listens_for(OutboundResultModel, 'after_insert', propagate=True) 
> > def set_latest_outbound_result(mapper, connection, target): 
> > """Set the latest_outbound relationship on the Outbound parent.""" 
> > statement = update(OutboundModel).where( 
> > OutboundModel.id == target.outbound_id).values( 
> > latest_result_id=target.id) 
> > connection.execute(statement) 
>
>
> That's Core update(), which is probably not appreciating that 
> OutboundModel now maps to a join() because of the with_polymorphic in 
> the mapper config.   If you're doing Core you should call upon 
> OutboundModel.__table__ to eliminate any interference from the mapping 
> (or if that doesn't work, because i can't remember if __table__ is 
> going to pull in the polymorphic selectable as well, do 
> inspect(OutboundModel).local_table). 
>
>
>
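For anyone landing here from a search, the suggestion above (issue the Core UPDATE against the mapped Table, or `inspect(Model).local_table`, rather than the polymorphic join) can be sketched as a small self-contained program. Model names are simplified stand-ins for the poster's classes, and `latest_result_id` is left as a plain Integer column to keep the example short:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, create_engine,
                        event, inspect, update)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Outbound(Base):
    __tablename__ = 'outbound'
    id = Column(Integer, primary_key=True)
    driver = Column(String, default='default', nullable=False)
    latest_result_id = Column(Integer)
    __mapper_args__ = {
        'polymorphic_identity': 'default',
        'polymorphic_on': driver,
        'with_polymorphic': '*',
    }


class OutboundSES(Outbound):
    __tablename__ = 'outbound_ses'
    __mapper_args__ = {'polymorphic_identity': 'ses'}
    id = Column(Integer, ForeignKey('outbound.id'), primary_key=True)


class OutboundResult(Base):
    __tablename__ = 'outbound_result'
    id = Column(Integer, primary_key=True)
    outbound_id = Column(Integer, ForeignKey('outbound.id'), nullable=False)


@event.listens_for(OutboundResult, 'after_insert', propagate=True)
def set_latest_result(mapper, connection, target):
    # Core update() against the plain Table sidesteps the polymorphic join
    # that update(Outbound) would otherwise try to render.
    table = inspect(Outbound).local_table
    connection.execute(
        update(table)
        .where(table.c.id == target.outbound_id)
        .values(latest_result_id=target.id))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    out = OutboundSES()
    session.add(out)
    session.flush()                      # assigns out.id
    result = OutboundResult(outbound_id=out.id)
    session.add(result)
    session.commit()
    session.refresh(out)
    assert out.latest_result_id == result.id
```

This is a sketch under the assumptions stated above, not the poster's production code; with joined-table inheritance the real models would also carry the `ForeignKey('outbound_result.id')` on `latest_result_id`.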

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/



[sqlalchemy] Updating polymorphic parent from polymorphic child's event

2017-08-03 Thread Colton Allen
I've got an event that's trying to update another model with a foreign key 
but it's raising a strange error that I'm having trouble debugging. 
 "AttributeError: 'Join' object has no attribute 'implicit_returning'".

The query worked in the past (before the OutboundModel became polymorphic). 
 I'm certain the query in the update event is incorrect but I'm unsure of 
how to fix it.

# Parent polymorphic models
class OutboundModel(Model):
    __tablename__ = 'outbound'

    id = Column(Integer, primary_key=True)
    driver = Column(
        Enum('default', 'ses', name='outbound_driver'), default='default',
        nullable=False)
    latest_result_id = Column(Integer, ForeignKey('outbound_result.id'))
    latest_result = orm.relationship(
        'OutboundResultModel', backref='outbound',
        foreign_keys=[latest_result_id])

    __mapper_args__ = {
        'polymorphic_identity': 'default',
        'polymorphic_on': driver,
        'with_polymorphic': '*'
    }


class OutboundSESModel(OutboundModel):
    __tablename__ = 'service_outbound_ses'
    __mapper_args__ = {'polymorphic_identity': 'ses'}

    id = Column(Integer, ForeignKey('outbound.id'), primary_key=True)


# Child polymorphic models
class OutboundResultModel(Model):
    __tablename__ = 'outbound_result'

    id = Column(Integer, primary_key=True)
    driver = Column(
        Enum('a', 'b', name='outbound_result_driver'), default='a',
        nullable=False)
    outbound_id = Column(
        Integer, ForeignKey('outbound.id'), nullable=False)
    outbound = orm.relationship(
        'OutboundModel', backref='results', foreign_keys=[outbound_id])
    __mapper_args__ = {
        'polymorphic_identity': 'result',
        'polymorphic_on': driver,
        'with_polymorphic': '*'
    }


# The event causing the error.
# AttributeError: 'Join' object has no attribute 'implicit_returning'
@event.listens_for(OutboundResultModel, 'after_insert', propagate=True)
def set_latest_outbound_result(mapper, connection, target):
    """Set the latest_outbound relationship on the Outbound parent."""
    statement = update(OutboundModel).where(
        OutboundModel.id == target.outbound_id).values(
        latest_result_id=target.id)
    connection.execute(statement)




[sqlalchemy] Re: Advice for Implementing a ShortUUID Column Type?

2017-05-19 Thread Colton Allen
I do not know the optimization trick.  I'd be interested to know!  It would 
be nice to not have to translate to and from the UUID type.

On Friday, May 19, 2017 at 4:04:55 PM UTC-7, Jonathan Vanasco wrote:
>
> side question - have you done any tests on how the UUID type queries as 
> your database scales?  It may make sense to do the shortuuid as a text 
> field, which you can put a substring index on.  if you don't know that 
> postgresql optimization trick, I can explain it further.
>
>



Re: [sqlalchemy] Advice for Implementing a ShortUUID Column Type?

2017-05-19 Thread Colton Allen
I get some issues when appending a model to an "orm.relationship" (a 
TypeError about unorderable types).  Which is annoying but can be worked 
around.

I also use SQLAlchemy-Continuum, which seems to have issues retrieving 
"orm.relationship"s.  It will attempt to query using the shortuuid type 
without transforming it to the uuid type.  Obviously this is third-party 
stuff, but I am curious how I could go about creating my own relationship 
mapper: something resilient enough to understand that shortuuids need to 
be converted to UUIDs.

I thought there might be something obviously wrong with my implementation 
(since it's my first one), which is why I led off with that.

On Friday, May 19, 2017 at 6:19:45 AM UTC-7, Mike Bayer wrote:
>
> looks fine to me?   what did you have in mind? 
>
>
>
> On 05/18/2017 11:29 PM, Colton Allen wrote: 
> > I want to make my UUID's prettier so I've gone about implementing a 
> > ShortUUID column based on the shortuuid library[1].  The idea is to 
> > store the primary key as a UUID type in postgres (since its optimized 
> > for that) and transform the UUID to a shortuuid for presentation and 
> > querying.  This is my first attempt at implementing it.  It has some 
> > short comings. 
> > 
> > I was wondering if you had any advice for fully baking the 
> > implementation.  I've pasted the code I have so far. 
> > 
> > 
> > from sqlalchemy_utils.types.uuid import UUIDType 
> > 
> > import uuid 
> > import shortuuid 
> > 
> > 
> > def _decode_shortuuid(value): 
> >  try: 
> >  return shortuuid.decode(value) 
> >  except ValueError: 
> >  return None 
> > 
> > 
> > def _encode_shortuuid(value): 
> >  try: 
> >  if value is None: 
> >  return None 
> >  return shortuuid.encode(value) 
> >  except KeyError: 
> >  return None 
> > 
> > 
> > class ShortUUID(UUIDType): 
> >  """Converts UUIDs to ShortUUIDs for readability's sake.""" 
> > 
> >  def process_bind_param(self, value, dialect): 
> >  """Process a ShortUUID to a UUID.""" 
> >  if value is None: 
> >  return value 
> > 
> >  if type(value) != uuid.UUID: 
> >  value = _decode_shortuuid(value) 
> >  return super().process_bind_param(value, dialect) 
> > 
> >  def process_result_value(self, value, dialect): 
> >  """Return a ShortUUID encoded UUID.""" 
> >  value = super().process_result_value(value, dialect) 
> >  return _encode_shortuuid(value) 
> > 
> > 
> > 1. https://github.com/skorokithakis/shortuuid 
> > 



[sqlalchemy] Advice for Implementing a ShortUUID Column Type?

2017-05-18 Thread Colton Allen
I want to make my UUIDs prettier, so I've gone about implementing a 
ShortUUID column based on the shortuuid library[1].  The idea is to store 
the primary key as a UUID type in Postgres (since it's optimized for that) 
and transform the UUID to a shortuuid for presentation and querying.  This 
is my first attempt at implementing it.  It has some shortcomings.

I was wondering if you had any advice for fully baking the implementation. 
 I've pasted the code I have so far.


from sqlalchemy_utils.types.uuid import UUIDType

import uuid
import shortuuid


def _decode_shortuuid(value):
    try:
        return shortuuid.decode(value)
    except ValueError:
        return None


def _encode_shortuuid(value):
    try:
        if value is None:
            return None
        return shortuuid.encode(value)
    except KeyError:
        return None


class ShortUUID(UUIDType):
    """Converts UUIDs to ShortUUIDs for readability's sake."""

    def process_bind_param(self, value, dialect):
        """Process a ShortUUID to a UUID."""
        if value is None:
            return value

        if type(value) != uuid.UUID:
            value = _decode_shortuuid(value)
        return super().process_bind_param(value, dialect)

    def process_result_value(self, value, dialect):
        """Return a ShortUUID encoded UUID."""
        value = super().process_result_value(value, dialect)
        return _encode_shortuuid(value)


1. https://github.com/skorokithakis/shortuuid
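As context for readers unfamiliar with the dependency: shortuuid is essentially a base-57 re-encoding of the UUID's 128-bit integer, using an alphabet that omits visually ambiguous characters. The following is a simplified stand-in to show the idea, not the library's actual implementation; the alphabet and edge-case handling are approximations:

```python
import uuid

# 57-character alphabet: digits 2-9, uppercase minus I/O, lowercase minus l.
ALPHABET = "23456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def encode(u: uuid.UUID) -> str:
    """Re-express the UUID's 128-bit integer in the base-57 alphabet."""
    n = u.int
    digits = []
    while n > 0:
        n, rem = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[rem])
    return ''.join(reversed(digits)) or ALPHABET[0]


def decode(s: str) -> uuid.UUID:
    """Invert encode(): accumulate base-57 digits back into an integer."""
    n = 0
    for ch in s:
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return uuid.UUID(int=n)


u = uuid.uuid4()
assert decode(encode(u)) == u          # lossless round trip
```

A round trip through encode/decode is lossless, which is what makes the "store as UUID, present as shortuuid" split in the TypeDecorator above workable.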



[sqlalchemy] Re: Joining an aliased model to a relationship with a primaryjoin

2017-04-03 Thread Colton Allen
No worries, we ended up dropping the relationship.  Still a curious error, 
though.



[sqlalchemy] Joining an aliased model to a relationship with a primaryjoin

2017-04-03 Thread Colton Allen
class EntryModel(Model):

    word_count = db.relationship(
        'WordCountModel', backref='entry', secondary='transcription',
        primaryjoin='and_('
            'WordCountModel.transcription_id == TranscriptionModel.id,'
            'TranscriptionModel.entry_id == EntryModel.id,'
            'TranscriptionModel.is_latest.is_(True))', viewonly=True)


aliased_word_count_model = orm.aliased(WordCountModel)
query = session.query(EntryModel).outerjoin(
    aliased_word_count_model, EntryModel.word_count)
print(query.all())  # ERROR

"""Error message.

HINT:  Perhaps you meant to reference the table alias "wordcount_1".
 [SQL: 'SELECT wordcount_1.id AS wordcount_1_id, wordcount_1.account_id AS 
wordcount_1_account_id, wordcount_1.created_by_id AS 
wordcount_1_created_by_id, wordcount_1.updated_by_id AS 
wordcount_1_updated_by_id, wordcount_1.transcription_id AS 
wordcount_1_transcription_id, wordcount_1.created_at AS 
wordcount_1_created_at, wordcount_1.updated_at AS wordcount_1_updated_at 
\nFROM entry LEFT OUTER JOIN (transcription AS transcription_1 JOIN 
wordcount AS wordcount_1 ON transcription_1.id = 
wordcount_1.transcription_id) ON wordcount.transcription_id = 
transcription_1.id AND transcription_1.entry_id = entry.id AND 
transcription_1.is_latest IS true \nWHERE entry.id IN (%(id_1)s)'] 
[parameters: {'id_1': UUID('c2a877a1-6140-4c7d-853d-11704ba502f3')}]
"""

First and foremost, sorry I didn't include functioning code.  I hope the 
problem can be understood without it.

I think the primaryjoin where I reference "WordCountModel" is throwing off 
my aliased join.  Is there a way to make a relative reference to 
"WordCountModel"?

You can see the join condition is super messed up:

LEFT OUTER JOIN (transcription AS transcription_1 JOIN wordcount AS 
wordcount_1 ON transcription_1.id = wordcount_1.transcription_id) ON 
wordcount.transcription_id = transcription_1.id AND 
transcription_1.entry_id = entry.id AND transcription_1.is_latest IS true



Re: [sqlalchemy] Creating a transient model with a many-to-many relationship

2016-10-28 Thread Colton Allen
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

atob = Table(
    'atob', Base.metadata,
    Column('aid', ForeignKey('a.id'), primary_key=True),
    Column('bid', ForeignKey('b.id'), primary_key=True)
)


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B", secondary=atob)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)


session = Session(e)

a1 = A(bs=[B(), B(), B()])
session.add(a1)
session.commit()


# please illustrate failure here

#
# Example 1. M2M is empty after being made transient.
#

assert len(a1.bs) == 3

make_transient(a1)
a1.id = None
session.add(a1)
session.commit()

assert len(a1.bs) == 0

#
# Example 2. M2M remains empty ever after being assigned relationships.
#

model = session.query(A).first()
bs = model.bs.copy()
assert len(bs) == 3

make_transient(model)
model.id = None
model.bs = bs
session.add(model)
session.commit()

assert len(model.bs) == 0

#
# Example 3. M2M is populated from fresh query.
#

model = session.query(A).first()
bs = session.query(B).all()
assert len(bs) == 3

make_transient(model)
model.id = None
model.bs = bs
session.add(model)
session.commit()
assert len(model.bs) == 3


On Friday, October 28, 2016 at 9:47:44 AM UTC-4, Mike Bayer wrote:
>
> here we are, please complete and return, thanks 
>
> from sqlalchemy import * 
> from sqlalchemy.orm import * 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
> atob = Table( 
>  'atob', Base.metadata, 
>  Column('aid', ForeignKey('a.id'), primary_key=True), 
>  Column('bid', ForeignKey('b.id'), primary_key=True) 
> ) 
>
>
> class A(Base): 
>  __tablename__ = 'a' 
>  id = Column(Integer, primary_key=True) 
>  bs = relationship("B", secondary=atob) 
>
>
> class B(Base): 
>  __tablename__ = 'b' 
>  id = Column(Integer, primary_key=True) 
>
> e = create_engine("sqlite://", echo=True) 
> Base.metadata.create_all(e) 
>
>
> session = Session(e) 
>
> a1 = A(bs=[B(), B(), B()]) 
> session.add(a1) 
> session.commit() 
>
>
> # please illustrate failure here 
>
>
>
>
>
> On 10/28/2016 09:13 AM, Colton Allen wrote: 
> > Okay I see.  That does remove the error.  But I am having trouble saving 
> > the relationship.  Check the example code.  It is much clearer than I 
> am. 
> > 
> > If I query a new set of items or specify an item model instance it will 
> > save perfectly.  But all of the instances in the instrumented list will 
> > not save to the relationship.  Is there a further expiration step? 
> > 
> > # This works 
> > session.expire(model, ['items']) 
> > make_transient(model) 
> > model.items = ItemModel.query.all() # [] 
> > session.add(model) 
> > session.commit() 
> > 
> > Test.query.first().items # [] 
> > 
> > # This does not save 
> > items = model.items.copy()  # [] 
> > session.expire(model, ['items']) 
> > make_transient(model) 
> > model.items = items 
> > session.add(model) 
> > session.commit() 
> > 
> > Test.query.first().items # [] 
> > 
> > On Thursday, October 27, 2016 at 6:15:39 PM UTC-4, Mike Bayer wrote: 
> > 
> > you'd say: 
> > 
> > session.expire(model, ["items"]) 
> > 
> > 
> > 
> > On 10/27/2016 04:47 PM, Colton Allen wrote: 
> > > Sorry I must have combined multiple attempts into one example. 
> >  When you 
> > > say expire the relationship, what do you mean? 
> > > 
> > > 
> > > On Thursday, October 27, 2016 at 4:30:36 PM UTC-4, Mike Bayer 
> wrote: 
> > > 
> > > Hello - 
> > > 
> > > 
> > > I've rebuilt your fragment into a full example in order to 
> > determine 
> > > what the behavior is here, as make_transient() has no defined 
> > behavior 
> > > for loaded relationships. 
> > > 
> > > Your error does not reproduce with your code fragment as 
> > given, because 
> > > you are setting the list of items to itself, and this means 
> > there is no 
> > > history at all; the unit of work will ignore it. 
> > > 
> > > Setting the list to a series of three brand new secondary 
> items 
> > > produces 
> > > the error you see be

Re: [sqlalchemy] Creating a transient model with a many-to-many relationship

2016-10-28 Thread Colton Allen
Okay I see.  That does remove the error.  But I am having trouble saving 
the relationship.  Check the example code.  It is much clearer than I am.

If I query a new set of items or specify an item model instance it will 
save perfectly.  But all of the instances in the instrumented list will not 
save to the relationship.  Is there a further expiration step?

# This works
session.expire(model, ['items'])
make_transient(model)
model.items = ItemModel.query.all() # []
session.add(model)
session.commit()

Test.query.first().items # []

# This does not save
items = model.items.copy()  # []
session.expire(model, ['items'])
make_transient(model)
model.items = items
session.add(model)
session.commit()

Test.query.first().items # []

On Thursday, October 27, 2016 at 6:15:39 PM UTC-4, Mike Bayer wrote:
>
> you'd say: 
>
> session.expire(model, ["items"]) 
>
>
>
> On 10/27/2016 04:47 PM, Colton Allen wrote: 
> > Sorry I must have combined multiple attempts into one example.  When you 
> > say expire the relationship, what do you mean? 
> > 
> > 
> > On Thursday, October 27, 2016 at 4:30:36 PM UTC-4, Mike Bayer wrote: 
> > 
> > Hello - 
> > 
> > 
> > I've rebuilt your fragment into a full example in order to determine 
> > what the behavior is here, as make_transient() has no defined 
> behavior 
> > for loaded relationships. 
> > 
> > Your error does not reproduce with your code fragment as given, 
> because 
> > you are setting the list of items to itself, and this means there is 
> no 
> > history at all; the unit of work will ignore it. 
> > 
> > Setting the list to a series of three brand new secondary items 
> > produces 
> > the error you see because this generates deletion events for the 
> > existing objects which fail because the state of the object is 
> > transient. 
> > 
> > The solution is to expire the relationship attribute first so that 
> > it is 
> > reset for the transient state. 
> > 
> > 
> > 
> > 
> > On 10/27/2016 03:43 PM, Colton Allen wrote: 
> > > I want to create a new row with all of the same data.  However, 
> > (using 
> > > make_transient) the many-to-many "items" relationship doesn't 
> > carry over 
> > > to the new model.  When I manually set the items I recieve a 
> > > "StaleDataError".  How can I insert the many-to-many relationships 
> so 
> > > this does not happen? 
> > > * 
> > > * 
> > > * 
> > > My code:* 
> > > 
> > > class Test(Model): 
> > > id = Column(Integer, primary_key=True) 
> > > items = orm.relationship('ItemModel', 
> secondary=test_item_table) 
> > > 
> > > 
> > > model = Test.query.first() 
> > > list_of_items = model.items.copy() 
> > > 
> > > make_transient(model) 
> > > model.id = None 
> > > model.items = list_of_items 
> > > 
> > > session.add(model) 
> > > session.commit() 
> > > 
> > > # sqlalchemy.orm.exc.StaleDataError: DELETE statement on table 
> > > 'test_item' expected to delete 1 row(s); Only 0 were matched. 
> > > 

[sqlalchemy] Re: Rolling back transactions across flushes?

2016-10-27 Thread Colton Allen
Try closing the session.  I've used similar code in my projects.  Also if 
you're using SQLite you need to do some additional tweaking so that it 
understands the transaction.

def setUp(self):
    self.session = sessionmaker(bind=engine)()
    self.session.begin_nested()

def tearDown(self):
    self.session.rollback()
    self.session.close()

#
# SQLite specific events
#

@classmethod
def setUpClass(cls):
    """Create the database."""
    global engine

    engine = create_engine('sqlite:///sqlalchemy.db')

    @event.listens_for(engine, "connect")
    def do_connect(dbapi_connection, connection_record):
        dbapi_connection.isolation_level = None

    @event.listens_for(engine, "begin")
    def do_begin(conn):
        conn.execute("BEGIN")

    Base.metadata.create_all(engine)


@classmethod
def tearDownClass(cls):
    """Destroy the database."""
    engine.dispose()

On Thursday, October 27, 2016 at 5:13:48 PM UTC-4, Zach wrote:
>
> Hi, I'd really appreciate some assistance with the below issue.
>
>
> Rolling back sqlalchemy transactions cross-flush currently appears 
> impossible to me. It’s necessary if you want to issue queries that rely on 
> the presence of an assigned primary key identifier (adding something to 
> the session isn’t sufficient to get a primary key assigned).
>
>
> But if you issue such a query, you’ll either get a query-invoked autoflush 
> (if autoflush is on), or you’ll have to flush first. And this apparently 
> seems to change the scope of what gets rolled back on session.rollback(). 
> It seems to be only the stuff after the flush.
>
>
> Use case: rolling back on the teardown method of unit tests that require 
> flushes.
>
>
> Solution I'm looking for: A way to roll back *all* uncommitted changes 
> after a savepoint/virtual transaction/some other kind of transactional 
> wrapper is created in the setUp method of a unittest.TestCase subclass.
>
>
> Example:
>
>
> session = sessionmaker()
> class MyTest(unittest.TestCase):
>
> def setUp(self):
> session.begin_nested()
>
> def tearDown(self):
> session.rollback()
>
> def myTest(self):
> session.add(object) # now a flush is required because `id` is used 
> below
> query = session.query('select id from my_table where id = 
> {}'.format(object.id))
> # Problem: Now `object` will exist even after `tearDown`
>
>
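Pieced together into one runnable script (the in-memory engine URL, the `Thing` model, and the assertions are illustrative, not from the original posts), the savepoint-per-test pattern plus the pysqlite workaround from the reply looks roughly like:

```python
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)


engine = create_engine('sqlite://')


# pysqlite workaround so SAVEPOINT behaves correctly, as in the reply above:
# disable the driver's implicit transaction handling and emit BEGIN ourselves.
@event.listens_for(engine, 'connect')
def do_connect(dbapi_connection, connection_record):
    dbapi_connection.isolation_level = None


@event.listens_for(engine, 'begin')
def do_begin(conn):
    conn.exec_driver_sql('BEGIN')


Base.metadata.create_all(engine)

session = Session(engine)
session.begin_nested()          # the "setUp": open a SAVEPOINT

session.add(Thing())
session.flush()                 # primary key assigned; queries can rely on it
assert session.query(Thing).count() == 1

session.rollback()              # the "tearDown": everything since setUp is gone
session.close()

with Session(engine) as check:
    assert check.query(Thing).count() == 0   # nothing was persisted
```

The key point is that the flush happens inside the savepoint, so the rollback in "tearDown" discards it along with everything else uncommitted.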



[sqlalchemy] Creating a transient model with a many-to-many relationship

2016-10-27 Thread Colton Allen
I want to create a new row with all of the same data.  However, (using 
make_transient) the many-to-many "items" relationship doesn't carry over to 
the new model.  When I manually set the items I receive a "StaleDataError". 
 How can I insert the many-to-many relationships so this does not happen?

*My code:*

class Test(Model):
    id = Column(Integer, primary_key=True)
    items = orm.relationship('ItemModel', secondary=test_item_table)


model = Test.query.first()
list_of_items = model.items.copy()

make_transient(model)
model.id = None
model.items = list_of_items

session.add(model)
session.commit()

# sqlalchemy.orm.exc.StaleDataError: DELETE statement on table 'test_item' 
expected to delete 1 row(s); Only 0 were matched.



[sqlalchemy] Preventing mapper updates from triggering a "before_update" event?

2016-10-13 Thread Colton Allen
Hi all,

I have a primary model with a "before_update" event and a secondary model 
with a foreign key to the primary model.  When I create a secondary model, 
the primary model's "before_update" event is triggered.  I believe this is 
because the backref is being updated.  Can I prevent this behavior?


class Primary(Model):
    id = Column(Integer, primary_key=True)


@event.listens_for(Primary, 'before_update')
def example(m, c, t):
    print("I was triggered by a mapper update!")


class Secondary(Model):
    primary_id = Column(Integer, ForeignKey('primary.id'))
    primary = orm.relationship('Primary', backref='secondaries')


s = Secondary(primary_id=1)
session.add(s)
session.commit()

>>> I was triggered by a mapper update!
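The reply isn't captured in this archive, but the behavior itself is documented: before_update fires for any instance marked dirty, even when no column attributes changed and no UPDATE will be emitted. One commonly suggested guard, sketched here with hypothetical self-contained models rather than the poster's, is Session.is_modified() with include_collections=False:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import (Session, declarative_base, object_session,
                            relationship)

Base = declarative_base()


class Primary(Base):
    __tablename__ = 'primary_model'
    id = Column(Integer, primary_key=True)
    counter = Column(Integer, default=0)


class Secondary(Base):
    __tablename__ = 'secondary_model'
    id = Column(Integer, primary_key=True)
    primary_id = Column(Integer, ForeignKey('primary_model.id'))
    primary = relationship(Primary, backref='secondaries')


fired = []


@event.listens_for(Primary, 'before_update')
def example(mapper, connection, target):
    # before_update also fires for instances that are dirty only because a
    # collection/backref changed; skip those and react to real column changes.
    if not object_session(target).is_modified(target, include_collections=False):
        return
    fired.append(target.id)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

p = Primary()
session.add(p)
session.commit()

session.add(Secondary(primary=p))   # only the backref collection changes
session.commit()
assert fired == []                  # guard filtered out the backref-only event

p.counter = 1                       # a genuine column change
session.commit()
assert fired == [p.id]
```

Whether this exact guard is appropriate depends on what the listener does; it is one possible approach, not necessarily the answer given on the list.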
