… action on the column. So I added a generic
before_update/delete event on my models' base class which basically just
does target.updated_at = dt.now().
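A runnable sketch of that base-class event, not taken from the thread (model names are invented; `propagate=True` makes the listener apply to every subclass of the base):

```python
import datetime as dt

import sqlalchemy as sa
from sqlalchemy import event, orm

Base = orm.declarative_base()

class BaseModel(Base):
    __abstract__ = True
    id = sa.Column(sa.Integer, primary_key=True)
    updated_at = sa.Column(sa.DateTime)

class Widget(BaseModel):
    __tablename__ = "widget"
    name = sa.Column(sa.String)

# propagate=True extends the listener to all subclasses of BaseModel
@event.listens_for(BaseModel, "before_update", propagate=True)
def _touch(mapper, connection, target):
    target.updated_at = dt.datetime.utcnow()

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
w = Widget(name="a")
session.add(w)
session.commit()   # INSERT: before_update does not fire
w.name = "b"
session.commit()   # UPDATE: listener stamps updated_at
```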
On Thursday, May 28, 2020 at 11:06:28 AM UTC-5, Mike Bayer wrote:
Hello,
I'm trying to automate a backref update. Basically, when a child model is
inserted or updated I want the parent model's "updated_at" column to
change. The value should be the approximate time the child model was
updated. The updated_at value would not have to match the created_at…
Specifically: https://docs.sqlalchemy.org/en/13/errors.html#error-3o7r
I think I've got a good handle on what the problem is. I just don't have
the experience to know how to solve it effectively and with confidence.
Just some of my application's stats:
- Postgres database hosted with Amazon RDS
On Thursday, March 26, 2020 at 1:35:14 PM UTC-5, Mike Bayer wrote:
>
> On Thu, Mar 26, 2020, at 2:18 PM, Colton Allen wrote:
> You can adjust `expire_on_commit` if you're only doing short-term read-only actions.
Can you expand on this? Or link to docs/blog so I can do some research.
Google hasn't helped me so far. Why would I want to expire after every
commit?
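Not from the thread, but a short illustration of what `expire_on_commit` controls (model names invented): with the default, committing marks every loaded attribute stale so the next access re-SELECTs; turning it off keeps attributes loaded, which suits short-lived read-only sessions at the risk of stale data.

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# Default: attributes expire on commit, so the next access re-SELECTs
# the row, guaranteeing you see what the transaction actually wrote.
default_session = orm.Session(engine)
u1 = User(name="a")
default_session.add(u1)
default_session.commit()
expired_default = sa.inspect(u1).expired   # True

# expire_on_commit=False: attributes stay loaded after commit.
plain_session = orm.Session(engine, expire_on_commit=False)
u2 = User(name="b")
plain_session.add(u2)
plain_session.commit()
expired_plain = sa.inspect(u2).expired     # False
```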
---
I agree with your assessment. I think it's becau…
Hi,
I'm using a custom session class to route requests to different database
engines based on the type of action being performed: master for writes,
slave for reads. It looks like my attributes on my models expire immediately
after creation. Any way to prevent this? Or should I not worry about
p…
> On Wed, Mar 11, 2020, at 12:44 PM, Colton Allen wrote:
Hi,
Before we talk about the read-replica, let's talk about the test suite as
it is. I have a sessionmaker in my test suite configured to use an
external transaction. Basically identical to this:
https://docs.sqlalchemy.org/en/13/orm/session_transaction.html#joining-a-session-into-an-external
Also, I should mention the ID is not explicitly mentioned in the select
query. I am relying on the column's "default" argument to supply the UUID.
Also also, I made a typo. It should read "when the select only finds one
row".
On Wednesday, June 12, 2019 at 12:36…
I'm using Postgres and I am getting a duplicate primary key error when
attempting to insert from a query. My primary key is a UUID type.
statement = insert(my_table).from_select(
    ['a', 'b'],
    select([sometable.c.a, sometable.c.b]),
)
session.execute(statement)
session.commit()
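One likely cause, sketched below rather than quoted from the thread: Python-side column defaults only run for ORM/`values()` inserts, not for `INSERT ... FROM SELECT`, so the primary key has to be produced in SQL. Table names are invented; SQLite's `hex(randomblob(16))` stands in for a Postgres UUID generator such as `gen_random_uuid()`.

```python
import sqlalchemy as sa

metadata = sa.MetaData()
sometable = sa.Table(
    "sometable", metadata,
    sa.Column("a", sa.String), sa.Column("b", sa.String),
)
my_table = sa.Table(
    "my_table", metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("a", sa.String), sa.Column("b", sa.String),
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(sa.insert(sometable), [{"a": "x", "b": "y"}, {"a": "p", "b": "q"}])
    # Include the primary key in the column list and compute it in SQL;
    # a Python-side "default" on the id column is NOT consulted here.
    new_id = sa.func.lower(sa.func.hex(sa.func.randomblob(16)))
    stmt = sa.insert(my_table).from_select(
        ["id", "a", "b"],
        sa.select(new_id, sometable.c.a, sometable.c.b),
    )
    conn.execute(stmt)
    rows = conn.execute(sa.select(my_table)).fetchall()
```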
Error: "DETAIL: Key
As the title says, I'd like to filter a related table without joining the
table (I don't want to mess up my pagination). Is there a way to enforce
the from clause? My SQL is a bit rusty but I'm pretty sure it's possible.
On SQLAlchemy==1.2.12:
from sqlalchemy.ext.declarative import declarative_base
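One way to do this without adding the related table to the outer FROM list is a correlated EXISTS, e.g. via `relationship.any()`. A sketch with invented model names, not the poster's schema:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    children = orm.relationship("Child", backref="parent")

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.ForeignKey("parent.id"))
    flag = sa.Column(sa.Boolean, default=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
p1, p2 = Parent(), Parent()
p1.children = [Child(flag=True)]
p2.children = [Child(flag=False)]
session.add_all([p1, p2])
session.commit()

# EXISTS subquery: the child table never enters the outer FROM,
# so LIMIT/OFFSET pagination over Parent rows stays correct.
matches = (
    session.query(Parent)
    .filter(Parent.children.any(Child.flag == sa.true()))
    .all()
)
```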
In an after_insert event I'm trying to load the value of an
"orm.relationship". The foreign-key column has a value but it returns
null. Is there a way to force it to load?
model = Model(user_id=1) # some model is created with a foreign-key
@event.listens_for(Model, 'before_insert')
def print_
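Relationship loaders generally can't run safely in the middle of a flush, which is one reason the attribute comes back null. A common workaround, sketched here with invented models, is to read the related row yourself using the `connection` argument the event provides:

```python
import sqlalchemy as sa
from sqlalchemy import event, orm

Base = orm.declarative_base()

class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

class Model(Base):
    __tablename__ = "model"
    id = sa.Column(sa.Integer, primary_key=True)
    user_id = sa.Column(sa.ForeignKey("user.id"))
    user = orm.relationship("User")

seen = {}

@event.listens_for(Model, "after_insert")
def _load_user(mapper, connection, target):
    # target.user may be None here: lazy loads are unsafe mid-flush.
    # Read the row directly on the flush's own connection instead.
    row = connection.execute(
        sa.select(User.__table__).where(User.__table__.c.id == target.user_id)
    ).first()
    seen["user_name"] = row.name if row is not None else None

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add(User(id=1, name="alice"))
session.commit()
session.add(Model(user_id=1))
session.commit()
```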
Okay thank you both for the help. I'm now checking for changes before
accessing relationships that might flush. Basically:
if has_changes(self): write_revision(self)
Should do the trick. Seems to be working already.
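`has_changes` above is the poster's own helper; a sketch of one way to approximate it with SQLAlchemy's inspection API (`Session.is_modified`), using an invented model:

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Doc(Base):
    __tablename__ = "doc"
    id = sa.Column(sa.Integer, primary_key=True)
    body = sa.Column(sa.String)

def has_changes(obj):
    # True when the object has pending attribute modifications;
    # include_collections=False ignores relationship-only changes.
    session = orm.object_session(obj)
    return session is not None and session.is_modified(obj, include_collections=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
doc = Doc(body="v1")
session.add(doc)
session.commit()

clean = has_changes(doc)   # False: just committed
doc.body = "v2"
dirty = has_changes(doc)   # True: pending UPDATE
```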
On Monday, May 7, 2018 at 9:33:27 PM UTC-7, Jonathan Vanasco wrote:
…, line 17, in _flush
api_1 | super()._flush(objects)
api_1 | File "/usr/local/lib/python3.5/bdb.py", line 48, in trace_dispatch
api_1 | return self.dispatch_line(frame)
api_1 | File "/usr/local/lib/python3.5/bdb.py", line 67, in dispatch_line
api_1 | if self.quitting: raise BdbQuit
api_1 | …
…() call.
> Generally, it occurs each time a query is about to emit SQL.
>
> On Mon, May 7, 2018 at 9:37 PM, Colton Allen wrote:
What exactly causes the session to flush? I'm trying to track down a nasty
bug in my versioning system.
Sorry for the long code dump. I retooled
examples/versioned_history/history_meta.py so it should look familiar. The
function that's breaking is "column_has_changed". I've added some logs a
I'm trying to write a column_property that queries a table that will be
defined after the current table is defined. I tried using raw SQL but I'm
getting an error.
On the "page" table I define a "count_posts" column_property.
count_posts = orm.column_property(text(
'SELECT count(post.id) F
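One conventional fix, sketched here rather than taken from the thread, is to attach the `column_property` after both classes exist, so the later table can be referenced with Core expressions instead of raw SQL (names follow the post; the `Post` schema is invented):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Page(Base):
    __tablename__ = "page"
    id = sa.Column(sa.Integer, primary_key=True)

class Post(Base):
    __tablename__ = "post"
    id = sa.Column(sa.Integer, primary_key=True)
    page_id = sa.Column(sa.ForeignKey("page.id"))

# Attached after Post is defined; the subquery correlates to the
# enclosing "page" row when count_posts is loaded.
Page.count_posts = orm.column_property(
    sa.select(sa.func.count(Post.id))
    .where(Post.page_id == Page.id)
    .correlate_except(Post)
    .scalar_subquery()
)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
page = Page()
session.add(page)
session.commit()
session.add_all([Post(page_id=page.id), Post(page_id=page.id)])
session.commit()
count = page.count_posts   # loaded via the correlated subquery
```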
I'm moving data from one table to another. During this move I'm preserving
the ID of the old table before dropping it. However, by doing so the
sequence gets out of whack and the database will no longer allow inserts to
the trigger table. What can I do to fix the broken sequence? The id
col
I'm trying to reset my primary key sequence and it's raising a weird error.
My primary key is a UUID.
op.execute(
    sa.select([
        sa.func.setval(
            'trigger_id_seq',
            sa.select([sa.func.max(trigger.c.id)]).alias('max_trigger_id')
        )
    ])
)
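For reference, a sketch of a shape that at least compiles the inner SELECT as a scalar subquery, which is what `setval()` expects, rather than an aliased FROM clause. Note that a UUID primary key normally has no sequence behind it at all, which may be the real source of the error; `trigger_id_seq` is the post's own name.

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
trigger = sa.Table(
    "trigger", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
)

# setval() wants a scalar second argument, so the inner SELECT must be
# a scalar subquery, not an alias() in the FROM list.
stmt = sa.select(
    sa.func.setval(
        "trigger_id_seq",
        sa.select(sa.func.max(trigger.c.id)).scalar_subquery(),
    )
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```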
On Wednesday, March 14, 2018 at 8:58:21 AM UTC-7, Mike Bayer wrote:
>
> On Wed, Mar 14, 2018 at 11:50 AM, Colton Allen wrote:
I'm trying to copy an enum of one type to an enum of another type. The
below is the query I'm using:
import sqlalchemy as sa
statement = sa.insert(table_a).from_select(
    ['enum_a'], sa.select([table_b.c.enum_b])
)
connection.execute(statement)
Which raises this error in Postgres:
Erro
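Postgres won't coerce one enum type into another implicitly; the usual workaround is to cast through text, i.e. `enum_b::text::enum_a_type`. A sketch with invented table and type names, shown at the compile level only:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
enum_a_type = postgresql.ENUM("x", "y", name="enum_a_type", metadata=metadata)
table_a = sa.Table("table_a", metadata, sa.Column("enum_a", enum_a_type))
table_b = sa.Table(
    "table_b", metadata,
    sa.Column("enum_b", postgresql.ENUM("x", "y", name="enum_b_type")),
)

# Renders as CAST(CAST(table_b.enum_b AS TEXT) AS enum_a_type)
converted = sa.cast(sa.cast(table_b.c.enum_b, sa.Text), enum_a_type)
stmt = sa.insert(table_a).from_select(["enum_a"], sa.select(converted))
sql = str(stmt.compile(dialect=postgresql.dialect()))
```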
I'm receiving this error:
sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,
connection timed out, timeout 30
I'm curious how SQLAlchemy manages database connections. I've pasted below
some pseudo-code to simplify the application.
# Create an engine
engine =
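For context, those numbers come from the engine's QueuePool: up to `pool_size` persistent connections plus `max_overflow` temporary ones, and a checkout waits at most `pool_timeout` seconds before raising `TimeoutError`. A sketch of tuning them (values illustrative, SQLite URL only for demonstration):

```python
import sqlalchemy as sa
from sqlalchemy.pool import QueuePool

# 5 + 10 = at most 15 concurrent checkouts; a 16th waits up to 30s,
# then raises sqlalchemy.exc.TimeoutError.
engine = sa.create_engine(
    "sqlite://",
    poolclass=QueuePool,   # the default pool for most real databases
    pool_size=5,
    max_overflow=10,
    pool_timeout=30,
)
configured_size = engine.pool.size()
```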
You were exactly right. I needed to commit.
On Monday, October 9, 2017 at 4:08:05 PM UTC-7, Mike Bayer wrote:
>
> On Mon, Oct 9, 2017 at 3:57 PM, Colton Allen wrote:
Both are using Postgres. But I did try updating the bind to both the
engine (db.engine) and the session (db.session) and it didn't have any
effect.
On Monday, October 9, 2017 at 2:09:02 PM UTC-7, Jonathan Vanasco wrote:
>
> OTOMH (I didn't go through your code), are the two databases the same?
I'm trying to execute a fairly simple UPDATE query.
query = update(Model).where(Model.id.in_(list_of_ids)).values(x=1)
I know of two methods to execute it. One using the session and the other
using the engine. However, depending on which I use, the results I get are
very different.
db.sessio
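The difference that usually bites here, and matches the "I needed to commit" resolution elsewhere in the thread: `session.execute` runs inside the session's transaction, so nothing is visible until `commit()`, whereas engine-level execution in 1.x autocommitted each statement. A sketch of the session path (model invented):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

class Model(Base):
    __tablename__ = "model"
    id = sa.Column(sa.Integer, primary_key=True)
    x = sa.Column(sa.Integer, default=0)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
session.add_all([Model(id=1), Model(id=2), Model(id=3)])
session.commit()

list_of_ids = [1, 2]
query = sa.update(Model).where(Model.id.in_(list_of_ids)).values(x=1)
session.execute(query)
session.commit()   # without this, other connections never see the UPDATE

values = [m.x for m in session.query(Model).order_by(Model.id)]
```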
In an attempt to simplify my app's logic, I want to move some mandatory
method calls into a model event. Traditionally, I've used "after_insert"
but it has some quirks in that you need to use the "connection" argument to
make updates. What I really want is more flexibility. I'll need use of m
Worked perfectly. Thanks for the help.
On Thursday, August 3, 2017 at 11:16:26 AM UTC-7, Mike Bayer wrote:
>
> On Thu, Aug 3, 2017 at 1:37 PM, Colton Allen wrote:
I've got an event that's trying to update another model with a foreign key
but it's raising a strange error that I'm having trouble debugging.
"AttributeError: 'Join' object has no attribute 'implicit_returning'".
The query worked in the past (before the OutboundModel became polymorphic).
I'm
I do not know the optimization trick. I'd be interested to know! It would
be nice to not have to translate to and from the UUID type.
On Friday, May 19, 2017 at 4:04:55 PM UTC-7, Jonathan Vanasco wrote:
>
> side question - have you done any tests on how the UUID type queries as your database…
…obviously wrong with my implementation
(since it's my first one), which is why I led off with that.
On Friday, May 19, 2017 at 6:19:45 AM UTC-7, Mike Bayer wrote:
> looks fine to me? what did you have in mind?
>
> On 05/18/2017 11:29 PM, Colton Allen wrote:
I want to make my UUIDs prettier so I've gone about implementing a
ShortUUID column based on the shortuuid library[1]. The idea is to store
the primary key as a UUID type in postgres (since it's optimized for that)
and transform the UUID to a shortuuid for presentation and querying. This
is m…
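The usual shape for this is a `TypeDecorator` that converts on the way in and out. A simplified sketch, not the poster's code: plain hex stands in for the `shortuuid.encode`/`decode` round-trip, and a `String` column stands in for the Postgres `UUID` type, so it runs on SQLite.

```python
import uuid

import sqlalchemy as sa
from sqlalchemy import orm

class ShortUUID(sa.types.TypeDecorator):
    """Present uuid.UUID in Python, store a compact string in the DB."""
    impl = sa.String(32)
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # shortuuid.encode(value) in the real implementation
        return value.hex if value is not None else None

    def process_result_value(self, value, dialect):
        # shortuuid.decode(value) in the real implementation
        return uuid.UUID(hex=value) if value is not None else None

Base = orm.declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = sa.Column(ShortUUID, primary_key=True, default=uuid.uuid4)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
item = Item()
session.add(item)
session.commit()
original = item.id

session.expire_all()
reloaded = session.query(Item).one().id   # decoded back to uuid.UUID
```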
No worries, we ended up dropping the relationship. Still a curious error,
though.
class EntryModel(Model):
    word_count = db.relationship(
        'WordCountModel', backref='entry', secondary='transcription',
        primaryjoin='and_('
            'WordCountModel.transcription_id == TranscriptionModel.id,'
            'TranscriptionModel.entry_id == EntryModel.id,'
            'Transcriptio…
>     id = Column(Integer, primary_key=True)
> bs = relationship("B", secondary=atob)
>
>
> class B(Base):
> __tablename__ = 'b'
> id = Column(Integer, primary_key=True)
>
> e = create_engine("sqlite://", echo=True)
> Base
> > …list to a series of three brand new secondary items produces
> > the error you see because this generates deletion events for the
> > existing objects which fail because the state of the object is
> > transient.
> >
> > The solution is to e…
Try closing the session. I've used similar code in my projects. Also if
you're using SQLite you need to do some additional tweaking so that it
understands the transaction.
def setUp(self):
    self.session = sessionmaker(bind=engine)()
    self.session.begin_nested()

def tearDown(self):
    s…
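For reference, a fuller sketch of the external-transaction fixture this advice is pointing at, with a teardown included; the table and test names are invented, and the SQLite-specific SAVEPOINT workaround from the docs is left out for brevity:

```python
import unittest

import sqlalchemy as sa
from sqlalchemy import orm

engine = sa.create_engine("sqlite://")
metadata = sa.MetaData()
thing = sa.Table("thing", metadata, sa.Column("id", sa.Integer, primary_key=True))
metadata.create_all(engine)

class TransactionalTestCase(unittest.TestCase):
    def setUp(self):
        # The session rides on an external connection/transaction ...
        self.connection = engine.connect()
        self.trans = self.connection.begin()
        self.session = orm.sessionmaker(bind=self.connection)()

    def tearDown(self):
        # ... which is rolled back wholesale after each test.
        self.session.close()
        self.trans.rollback()
        self.connection.close()

class DemoTest(TransactionalTestCase):
    def test_insert_is_rolled_back(self):
        self.session.execute(sa.insert(thing).values(id=1))

# run one test cycle by hand to show the rollback
case = DemoTest("test_insert_is_rolled_back")
case.setUp()
case.test_insert_is_rolled_back()
case.tearDown()

with engine.connect() as conn:
    remaining = conn.execute(
        sa.select(sa.func.count()).select_from(thing)
    ).scalar()
```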
> …the relationship attribute first so that it is reset for the transient
> state.
>
> On 10/27/2016 03:43 PM, Colton Allen wrote:
I want to create a new row with all of the same data. However, (using
make_transient) the many-to-many "items" relationship doesn't carry over to
the new model. When I manually set the items I receive a "StaleDataError".
How can I insert the many-to-many relationships so this does not happen?
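One way to sidestep the StaleDataError entirely, sketched with invented models: build a fresh instance, copying the column values and re-pointing the collection, instead of reusing the flushed object via make_transient.

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()

assoc = sa.Table(
    "assoc", Base.metadata,
    sa.Column("order_id", sa.ForeignKey("order.id")),
    sa.Column("item_id", sa.ForeignKey("item.id")),
)

class Item(Base):
    __tablename__ = "item"
    id = sa.Column(sa.Integer, primary_key=True)

class Order(Base):
    __tablename__ = "order"
    id = sa.Column(sa.Integer, primary_key=True)
    note = sa.Column(sa.String)
    items = orm.relationship("Item", secondary=assoc)

def clone(obj):
    cls = type(obj)
    mapper = sa.inspect(cls)
    # copy plain column values, minus the primary key
    data = {
        c.key: getattr(obj, c.key)
        for c in mapper.columns
        if not c.primary_key
    }
    copy = cls(**data)
    copy.items = list(obj.items)   # fresh association rows, same targets
    return copy

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = orm.Session(engine)
original = Order(note="hello", items=[Item(), Item()])
session.add(original)
session.commit()

duplicate = clone(original)
session.add(duplicate)
session.commit()
n_orders = session.query(Order).count()
n_items_on_copy = len(duplicate.items)
```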
Hi all,
I have a primary model with a "before_update" event and a secondary model
with a foreign key to the primary model. When I create a secondary model,
the primary model's "before_update" event is triggered. I believe this is
because the backref is being updated. Can I prevent this behavior?