action on the column. So I added a generic
before_update/before_delete event on my models' base class which basically
just does target.updated_at = dt.now().
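A runnable sketch of that base-class event (class and column names are invented for illustration; `propagate=True` is what makes a listener on an abstract base fire for every subclass):

```python
import datetime as dt

import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class TimestampedModel(Base):
    # Abstract base: column name assumed, mirroring the post's description.
    __abstract__ = True
    updated_at = sa.Column(sa.DateTime, default=dt.datetime.utcnow)

class Child(TimestampedModel):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

# propagate=True registers the listener for all mapped subclasses.
@event.listens_for(TimestampedModel, "before_update", propagate=True)
def _touch(mapper, connection, target):
    # Stamp the row every time any subclass instance is UPDATEd.
    target.updated_at = dt.datetime.utcnow()
```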
On Thursday, May 28, 2020 at 11:06:28 AM UTC-5, Mike Bayer wrote:
> On Wed, May 27, 2020, at 3:57 PM, Colton Allen wrote:
>
Hello,
I'm trying to automate a backref update. Basically, when a child model is
inserted or updated I want the parent model's "updated_at" column to
mutate. The value should be the approximate time the user-child-model was
updated. The updated_at value would not have to match the
Specifically: https://docs.sqlalchemy.org/en/13/errors.html#error-3o7r
I think I've got a good handle on what the problem is. I just don't have
the experience to know how to solve it effectively and with confidence.
Just some of my application's stats:
- Postgres database hosted with Amazon
On Thursday, March 26, 2020 at 1:35:14 PM UTC-5, Mike Bayer wrote:
> On Thu, Mar 26, 2020, at 2:18 PM, Colton Allen wrote:
>
> You can adjust `expire_on_commit` if you're only doing short-term
read-only actions.
Can you expand on this? Or link to docs/blog so I can do some research.
Google hasn't helped me so far. Why would I want to expire after every
commit?
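For context, a minimal sketch of what `expire_on_commit` controls (model names are invented): with the default `True`, every attribute is marked stale at commit and the next access emits a fresh SELECT; turning it off keeps the already-loaded values, which suits short-lived, read-mostly sessions.

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# With expire_on_commit=False, attributes survive the commit without a
# re-load, so the object stays usable even after the session closes.
with Session(engine, expire_on_commit=False) as session:
    user = User(name="demo")
    session.add(user)
    session.commit()

# The instance is detached now, but its attributes are still populated.
print(user.name)
```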
---
I agree with your assessment. I think it's
Hi,
I'm using a custom session class to route requests to different database
engines based on the type of action being performed. Master for writes;
slave for reads. It looks like my attributes on my models expire immediately
after creation. Any way to prevent this? Or should I not worry about it?
> On Wed, Mar 11, 2020, at 12:44 PM, Colton Allen wrote:
>
> Hi,
>
> Before we talk about the read-replica, let's talk about the test suite as
> it is. I have a sessionmaker in my test suite configured to use an
> external transaction. Basically identical to this:
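The recipe being referred to is the classic "join a session into an external transaction" test fixture; a self-contained sketch (table and variable names are illustrative, and this is the pre-2.0 spelling of the pattern):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# setUp: the session is bound to a connection that carries an external
# transaction owned by the test harness, not by the code under test.
connection = engine.connect()
trans = connection.begin()
session = sessionmaker(bind=connection)()

# ...test body...
session.add(Thing())
session.flush()

# tearDown: rolling back the external transaction discards every write
# the test made, regardless of what the code under test did.
session.close()
trans.rollback()
connection.close()
```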
Also, I should mention the ID is not explicitly mentioned in the select
query. I am relying on the column's "default" argument to supply the UUID.
Also also, I made a typo. It should read "when the select only finds one
row".
On Wednesday, June 12, 2019 at 12:36:50 PM
I'm using Postgres and I am getting a duplicate primary key error when
attempting to insert from a query. My primary key is a UUID type.
statement = insert(my_table).from_select(
    ['a', 'b'],
    select([sometable.c.a, sometable.c.b]),
)
session.execute(statement)
session.commit()
Error: "DETAIL:
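A likely cause worth noting: a Python-side column default such as `uuid.uuid4` is evaluated once per statement for `INSERT ... FROM SELECT`, so every row can get the same key. One fix is to generate the key inside the SELECT itself; a sketch with hypothetical tables standing in for the ones in the post (`gen_random_uuid()` is built in to PostgreSQL 13+):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()

# Hypothetical tables standing in for the ones in the post.
my_table = sa.Table(
    "my_table", metadata,
    sa.Column("id", postgresql.UUID(as_uuid=True), primary_key=True),
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.Integer),
)
sometable = sa.Table(
    "sometable", metadata,
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.Integer),
)

# Produce one fresh UUID per selected row, server-side, instead of
# relying on the column's Python-side default.
statement = sa.insert(my_table).from_select(
    ["id", "a", "b"],
    sa.select(sa.func.gen_random_uuid(), sometable.c.a, sometable.c.b),
)
```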
As the title says, I'd like to filter a related table without joining the
table (I don't want to mess up my pagination). Is there a way to enforce
the from clause? My SQL is a bit rusty but I'm pretty sure it's possible.
On SQLAlchemy==1.2.12:
from sqlalchemy.ext.declarative import
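One way to filter on a related table without adding a JOIN is `relationship().any()`, which renders a correlated EXISTS subquery, so the result stays one row per parent and LIMIT/OFFSET pagination is unaffected. A sketch with invented models:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    children = relationship("Child", backref="parent")

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.ForeignKey("parent.id"))
    flag = sa.Column(sa.Boolean, default=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
p1, p2 = Parent(), Parent()
p1.children.append(Child(flag=True))
p2.children.append(Child(flag=False))
session.add_all([p1, p2])
session.commit()

# .any() emits EXISTS (SELECT ... FROM child WHERE ...) -- no JOIN added.
query = session.query(Parent).filter(Parent.children.any(Child.flag == True))
```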
In an after_insert event I'm trying to load the value of an
"orm.relationship". The foreign-key column has a value but it returns
null. Is there a way to force it to load?
model = Model(user_id=1) # some model is created with a foreign-key
@event.listens_for(Model, 'before_insert')
def
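During flush, the ORM does not populate a relationship from a raw foreign-key value, so lazy-loading it inside the event returns nothing useful. One workaround is to read the related row directly with the event's `connection`; a sketch with invented models (the post's actual code is not shown in full):

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

class Model(Base):
    __tablename__ = "model"
    id = sa.Column(sa.Integer, primary_key=True)
    user_id = sa.Column(sa.ForeignKey("user.id"))

seen = {}

@event.listens_for(Model, "after_insert")
def load_user(mapper, connection, target):
    # The session is mid-flush here; query with the event's connection
    # instead of triggering a lazy load on target.user.
    row = connection.execute(
        sa.select(User.__table__).where(User.__table__.c.id == target.user_id)
    ).first()
    seen["user_name"] = row.name if row else None

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(User(id=1, name="alice"))
    s.commit()
    s.add(Model(user_id=1))
    s.commit()
```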
Okay thank you both for the help. I'm now checking for changes before
accessing relationships that might flush. Basically:
if has_changes(self): write_revision(self)
Should do the trick. Seems to be working already.
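The `has_changes` helper isn't shown in the thread; one plausible implementation uses `Session.is_modified()`, which inspects pending attribute changes without flushing (names here are invented):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Doc(Base):
    __tablename__ = "doc"
    id = sa.Column(sa.Integer, primary_key=True)
    body = sa.Column(sa.String)

def has_changes(obj):
    """True when the instance has pending attribute changes; no flush needed."""
    session = sa.inspect(obj).session
    return session is not None and session.is_modified(obj, include_collections=False)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
doc = Doc(body="v1")
session.add(doc)
session.commit()

unchanged = has_changes(doc)   # just committed: nothing pending
doc.body = "v2"
changed = has_changes(doc)     # pending modification detected
```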
On Monday, May 7, 2018 at 9:33:27 PM UTC-7, Jonathan Vanasco wrote:
On Monday, May 7, 2018 at 7:27:03 PM UTC-7, Mike Bayer wrote:
>
> can you perhaps place a "pdb.set_trace()" inside of session._flush()?
> using the debugger you can see the source of every flush() call.
api_1 | File "/usr/local/lib/python3.5/bdb.py", line 48, in
trace_dispatch
api_1 | return self.dispatch_line(frame)
api_1 | File "/usr/local/lib/python3.5/bdb.py", line 67, in
dispatch_line
api_1 | if self.quitting: raise BdbQuit
api_1 | bdb.BdbQuit
> Generally, it occurs each time a query is about to emit SQL.
>
> On Mon, May 7, 2018 at 9:37 PM, Colton Allen <cmana...@gmail.com
> > wrote:
What exactly causes the session to flush? I'm trying to track down a nasty
bug in my versioning system.
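To summarize the answer: with autoflush on (the default), the session flushes pending changes before each query, and again at commit; `Session.no_autoflush` suspends this. A small demonstration (model name invented), counting flushes with a `before_flush` listener:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Note(Base):
    __tablename__ = "note"
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

flushes = []

@sa.event.listens_for(Session, "before_flush")
def count_flush(session, ctx, instances):
    flushes.append(1)

session = Session(engine)
session.add(Note())

with session.no_autoflush:
    session.query(Note).count()   # no flush happens inside this block
n_before = len(flushes)

session.query(Note).count()       # autoflush: the pending INSERT goes out first
n_after = len(flushes)
```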
Sorry for the long code dump. I retooled
examples/versioned_history/history_meta.py so it should look familiar. The
function that's breaking is "column_has_changed". I've added some logs
I'm trying to write a column_property that queries a table that will be
defined after the current table is defined. I tried using raw SQL but I'm
getting an error.
On the "page" table I define a "count_posts" column_property.
count_posts = orm.column_property(text(
'SELECT count(post.id)
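One way around the ordering problem is to attach the `column_property` after both classes exist, so a real correlated subquery can be used instead of raw SQL text; a sketch in the modern spelling (table layout assumed from the post):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Page(Base):
    __tablename__ = "page"
    id = sa.Column(sa.Integer, primary_key=True)

class Post(Base):
    __tablename__ = "post"
    id = sa.Column(sa.Integer, primary_key=True)
    page_id = sa.Column(sa.ForeignKey("page.id"))

# Attach the property after both mappings exist; the subquery
# auto-correlates on Page.
Page.count_posts = column_property(
    sa.select(sa.func.count(Post.id))
    .where(Post.page_id == Page.id)
    .scalar_subquery()
)
```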
I'm moving data from one table to another. During this move I'm preserving
the ID of the old table before dropping it. However, by doing so the
sequence gets out of whack and the database will no longer allow inserts to
the trigger table. What can I do to fix the broken sequence? The id
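On PostgreSQL, the usual repair is to move the sequence past the highest preserved id with `setval`; a sketch ('trigger_table' and 'id' are placeholders for the real table and column):

```python
import sqlalchemy as sa

# pg_get_serial_sequence finds the sequence behind a serial column;
# setval advances it past the highest existing id (PostgreSQL only).
fix_sequence = sa.text(
    "SELECT setval("
    "pg_get_serial_sequence('trigger_table', 'id'), "
    "COALESCE((SELECT max(id) FROM trigger_table), 1))"
)

# To run it against a real engine:
# with engine.begin() as conn:
#     conn.execute(fix_sequence)
```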
I'm trying to copy an enum of one type to an enum of another type. The
below is the query I'm using:
import sqlalchemy as sa
statement = sa.insert(table_a).from_select(['enum_a'], sa.select(table_b.c.enum_b))
connection.execute(statement)
Which raises this error in Postgres:
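PostgreSQL will not cast one enum type directly to another; a common workaround is to cast the source column through text and then into the target type. A sketch with hypothetical table and enum definitions matching the post's names:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

enum_a = sa.Enum("x", "y", name="enum_a_type")
enum_b = sa.Enum("x", "y", name="enum_b_type")

table_a = sa.Table("table_a", metadata, sa.Column("enum_a", enum_a))
table_b = sa.Table("table_b", metadata, sa.Column("enum_b", enum_b))

# Double cast: enum_b -> text -> enum_a, which PostgreSQL accepts when the
# labels are compatible.
statement = sa.insert(table_a).from_select(
    ["enum_a"],
    sa.select(sa.cast(sa.cast(table_b.c.enum_b, sa.Text), enum_a)),
)
```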
I'm receiving this error:
sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,
connection timed out, timeout 30
I'm curious how SQLAlchemy manages database connections. I've pasted below
some pseudo-code to simplify the application.
# Create an engine
engine =
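For context on that error: it means all `pool_size + max_overflow` connections stayed checked out for `pool_timeout` seconds, which usually indicates connections are never returned rather than that the pool is too small. A sketch of the relevant knobs (values mirror the defaults the error reports; the URL is a stand-in since the post used PostgreSQL):

```python
import sqlalchemy as sa
from sqlalchemy.pool import QueuePool

engine = sa.create_engine(
    "sqlite://",               # stand-in URL; the post used PostgreSQL
    poolclass=QueuePool,
    pool_size=5,               # connections held open in the pool
    max_overflow=10,           # extra connections allowed under load
    pool_timeout=30,           # seconds to wait before TimeoutError
)

# Context managers guarantee the connection goes back to the pool at
# block exit, which is the usual fix for pool exhaustion.
with engine.connect() as conn:
    conn.execute(sa.text("SELECT 1"))
```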
You were exactly right. I needed to commit.
On Monday, October 9, 2017 at 4:08:05 PM UTC-7, Mike Bayer wrote:
>
> On Mon, Oct 9, 2017 at 3:57 PM, Colton Allen <cmana...@gmail.com
> > wrote:
> > I'm trying to execute a fairly simple UPDATE query.
Both are using postgres. But I did try updating the bind to both the
engine (db.engine) and the session (db.session) and it didn't have any
effect.
On Monday, October 9, 2017 at 2:09:02 PM UTC-7, Jonathan Vanasco wrote:
>
> OTOMH (I didn't go through your code), are the two databases the same?
I'm trying to execute a fairly simple UPDATE query.
query = update(Model).where(Model.id.in_(list_of_ids)).values(x=1)
I know of two methods to execute it. One using the session and the other
using the engine. However, depending on which I use, the results I get are
very different.
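The usual explanation for the difference: through the session, the UPDATE runs inside the session's transaction and is invisible elsewhere until commit, while engine-level execution commits on its own but leaves already-loaded objects stale until they are expired. A sketch in the 1.4+/2.0 spelling (model and values invented):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Model(Base):
    __tablename__ = "model"
    id = sa.Column(sa.Integer, primary_key=True)
    x = sa.Column(sa.Integer, default=0)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Model(id=1), Model(id=2)])
session.commit()

list_of_ids = [1, 2]
query = sa.update(Model).where(Model.id.in_(list_of_ids)).values(x=1)

# Path 1: through the session -- needs a commit to become visible
# to other connections.
session.execute(query)
session.commit()

# Path 2: straight through the engine -- commits at block exit, but the
# session's loaded objects keep their old values until expired.
with engine.begin() as conn:
    conn.execute(
        sa.update(Model.__table__).where(Model.__table__.c.id == 2).values(x=5)
    )
session.expire_all()
```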
In an attempt to simplify my app's logic, I want to move some mandatory
method calls into a model event. Traditionally, I've used "after_insert"
but it has some quirks in that you need to use the "connection" argument to
make updates. What I really want is more flexibility. I'll need use of
Worked perfectly. Thanks for the help.
On Thursday, August 3, 2017 at 11:16:26 AM UTC-7, Mike Bayer wrote:
>
> On Thu, Aug 3, 2017 at 1:37 PM, Colton Allen <cmana...@gmail.com
> > wrote:
I've got an event that's trying to update another model with a foreign key
but it's raising a strange error that I'm having trouble debugging.
"AttributeError: 'Join' object has no attribute 'implicit_returning'".
The query worked in the past (before the OutboundModel became polymorphic).
I do not know the optimization trick. I'd be interested to know! It would
be nice to not have to translate to and from the UUID type.
On Friday, May 19, 2017 at 4:04:55 PM UTC-7, Jonathan Vanasco wrote:
>
> side question - have you done any tests on how the UUID type queries as
> your
anything obviously wrong with my implementation
(since it's my first one), which is why I led off with that.
On Friday, May 19, 2017 at 6:19:45 AM UTC-7, Mike Bayer wrote:
>
> looks fine to me? what did you have in mind?
> On 05/18/2017 11:29 PM, Colton Allen wrote:
I want to make my UUID's prettier so I've gone about implementing a
ShortUUID column based on the shortuuid library[1]. The idea is to store
the primary key as a UUID type in postgres (since it's optimized for that)
and transform the UUID to a shortuuid for presentation and querying. This
is
No worries, we ended up dropping the relationship. Still a curious error,
though.
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable
Example. See
class EntryModel(Model):
    word_count = db.relationship(
        'WordCountModel', backref='entry', secondary='transcription',
        primaryjoin='and_('
            'WordCountModel.transcription_id == TranscriptionModel.id,'
            'TranscriptionModel.entry_id == EntryModel.id,'
        ')',
    )
> __tablename__ = 'b'
> id = Column(Integer, primary_key=True)
>
> e = create_engine("sqlite://", echo=True)
> Base.metadata.create_all(e)
>
>
> session = Session(e)
>
> a1 = A(bs=[B(), B(), B()])
> session.add(a1)
> session.commit()
> On 10/27/2016 04:47 PM, Colton Allen wrote:
> > Sorry I must have combined multiple attempts into one example. When you
> > say expire the relationship, what do you mean?
> >
> >
> > On Thursday, October 27, 2016 at 4:30:36 PM UTC-4, Mike Bayer wrote:
Try closing the session. I've used similar code in my projects. Also if
you're using SQLite you need to do some additional tweaking so that it
understands the transaction.
def setUp(self):
    self.session = sessionmaker(bind=engine)()
    self.session.begin_nested()

def tearDown(self):
    self.session.rollback()
    self.session.close()
I want to create a new row with all of the same data. However, (using
make_transient) the many-to-many "items" relationship doesn't carry over to
the new model. When I manually set the items I receive a "StaleDataError".
How can I insert the many-to-many relationships so this does not
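One approach that sidesteps `make_transient` entirely: build a fresh object and share the same related rows, so brand-new association rows are inserted rather than the old ones being reused. A sketch with invented models:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

assoc = sa.Table(
    "assoc", Base.metadata,
    sa.Column("order_id", sa.ForeignKey("orders.id"), primary_key=True),
    sa.Column("item_id", sa.ForeignKey("item.id"), primary_key=True),
)

class Item(Base):
    __tablename__ = "item"
    id = sa.Column(sa.Integer, primary_key=True)

class Order(Base):
    __tablename__ = "orders"
    id = sa.Column(sa.Integer, primary_key=True)
    items = relationship("Item", secondary=assoc)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
original = Order(items=[Item(), Item()])
session.add(original)
session.commit()

# A fresh Order sharing the same Item rows: the flush inserts new
# association rows, so no StaleDataError from reusing the old ones.
copy = Order(items=list(original.items))
session.add(copy)
session.commit()
```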
Hi all,
I have a primary model with a "before_update" event and a secondary model
with a foreign key to the primary model. When I create a secondary model,
the primary model's "before_update" event is triggered. I believe this is
because the backref is being updated. Can I prevent this
in my cases
and speeds up the system dramatically.
Once I understood that there wasn't really a memory leak, I just
optimized what was already there to use less memory.
-Allen
On Wed, Feb 25, 2009 at 6:59 PM, Peter Hansen pe...@engcorp.com wrote:
Allen Bierbaum wrote:
On Tue, Feb 24, 2009 at 4
On Tue, Feb 24, 2009 at 4:44 AM, Chris Miles miles.ch...@gmail.com wrote:
On Feb 22, 6:08 am, Allen Bierbaum abierb...@gmail.com wrote:
The python process. The number of objects seems to remain fairly
controlled. But the amount of resident memory used by the python
process does not decrease.
Any ideas?
Thanks,
Allen
--
import os, sys, gc
import psycopg2
import sqlalchemy as sa
import sqlalchemy.exc
class DataProvider(object):
    def __init__(self, dbUri, tableName):
        self._dbUri = dbUri
topped out even though all handles to the results
have been cleared in python.
I am sure that I must be doing something very wrong, but I can't
figure it out. Can anyone point out my error?
-Allen
On Sat, Feb 21, 2009 at 7:52 AM, Allen Bierbaum abierb...@gmail.com wrote:
Hello all:
We
with them, thus decreasing the consumed
memory. Maybe this is an invalid assumption. Do you know any way to
ask python to shrink its process size (i.e. clear unused memory that
has been freed but evidently not given back to the OS)?
-Allen
On Sat, Feb 21, 2009 at 12:15 PM, Michael Bayer
mike
Does anyone have any ideas on this?
Does declarative simply not support the deferred property?
-Allen
On Sat, Nov 8, 2008 at 11:32 AM, Allen Bierbaum [EMAIL PROTECTED] wrote:
We have been using the declarative successfully in our codebase for a
couple months now with 0.4.x, but we have just
Nope. Very strange. It didn't come through to my gmail account.
Oh well, thanks for the pointer.
-Allen
On Fri, Nov 14, 2008 at 10:55 AM, King Simon-NFHD78
[EMAIL PROTECTED] wrote:
-Original Message-
From: sqlalchemy@googlegroups.com
[mailto:[EMAIL PROTECTED] On Behalf Of Allen
for
__mapper_args__['properties'] and merge it with the internally created
properties)
- Or is there some other way to use deferred columns with declarative?
Thanks,
Allen
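For readers finding this thread later: declarative does support deferred columns — in current SQLAlchemy you wrap the `Column` in `deferred()` right in the class body, with no need to route anything through `__mapper_args__['properties']`. A sketch (names invented):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, deferred

Base = declarative_base()

class Report(Base):
    __tablename__ = "report"
    id = sa.Column(sa.Integer, primary_key=True)
    # The column is omitted from the default SELECT and loaded on first access.
    blob = deferred(sa.Column(sa.Text))
```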
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
item.input_output_type = 0
'input_vars' : relation(Var,
    primaryjoin = and_(script_table.c.id == var_table.c.script_id,
                       var_table.c.input_output_type == 0),
    collection_class = column_mapped_collection(var_table.c.name,
                                                set_cb=input_cb)),
Any thoughts and/or pointers on how to implement this?
-Allen
item.notes['not-color'] = Note(value='blue')
and behind the scenes SA would call: new Note.keyword = 'not-color'
Any thoughts on this? Has anyone tried this in the past?
-Allen
script.input_vars[in2] = var2
script.output_vars[out1] = var3
script.output_vars[out2] = var4
Is there some way to setup a many-to-many relationship to do this?
Thanks,
Allen
Here is the more complete code example to play with and see what I am
thinking so far for table definitions
On Thu, May 15, 2008 at 11:37 AM, Michael Bayer
[EMAIL PROTECTED] wrote:
On May 15, 2008, at 11:23 AM, Allen Bierbaum wrote:
# WOULD LIKE - #
# Can this be done using
# - Custom join condition on input_output_type
# - column_mapped_collection
#
it can be done. Try working
the declarative layer and have it
regenerate all automatically created metadata and mappers?
-Allen
On Mon, Apr 28, 2008 at 5:07 PM, Michael Bayer [EMAIL PROTECTED] wrote:
On Apr 28, 2008, at 5:42 PM, Allen Bierbaum wrote:
So, if I understand this right, I could import a base module
On Mon, May 5, 2008 at 4:29 PM, Michael Bayer [EMAIL PROTECTED] wrote:
On May 5, 2008, at 5:04 PM, Allen Bierbaum wrote:
Is there some way to clear the declarative layer and have it
regenerate all automatically created metadata and mappers?
not as of yet but this could be done
and bind the metadata for the Base class to an
engine for use.
Correct? (I apologize if I used the terms incorrectly).
If this is true, then I think I see how I can solve my problem.
-Allen
On Sun, Apr 27, 2008 at 6:28 PM, Michael Bayer [EMAIL PROTECTED] wrote:
On Apr 27, 2008, at 8:25 AM, Allen
imported
before the system has connected to a database.
Does anyone have a suggestion about how to handle this? For example
is there a way to create a lazy base class that doesn't actually do
anything until it has been connected to a database? Am I missing
something fundamental here?
-Allen
to get the code to work with SA
0.3.11. It looks like it expects some old naming conventions to get
mappers.
This seems to me like a very nice tool that could prove useful as an
addon to SA. Am I alone in thinking this or is anyone else
successfully using it?
-Allen
Thanks, that worked great.
Have their been any new capabilities added to this code?
-Allen
On Jan 17, 2008 12:21 PM, [EMAIL PROTECTED] wrote:
use sqlalchemy.orm.class_mapper(cls) instead of cls.mapper, and it should
work?
Allen Bierbaum wrote:
I was just taking a look at the recipes
On Dec 13, 2007 12:29 PM, Allen Bierbaum [EMAIL PROTECTED] wrote:
On Dec 13, 2007 10:47 AM, Michael Bayer [EMAIL PROTECTED] wrote:
On Dec 13, 2007, at 9:55 AM, Allen Bierbaum wrote:
In my current application I am running a rather expensive query in a
background thread, but then I need
with a situation like this? In
other words, what is the recommended practice for moving, reusing
objects from a session across multiple threads. Is there some way to
remap the object and attach it to the foreground session?
-Allen
On Dec 13, 2007 10:47 AM, Michael Bayer [EMAIL PROTECTED] wrote:
On Dec 13, 2007, at 9:55 AM, Allen Bierbaum wrote:
In my current application I am running a rather expensive query in a
background thread, but then I need to use the results in the
foreground thread. The object I find
Thanks. This looks like it should work. I will give it a try.
-Allen
On Dec 9, 2007 10:39 PM, Michael Bayer [EMAIL PROTECTED] wrote:
On Dec 9, 2007, at 10:55 PM, Allen Bierbaum wrote:
I am using SA 0.3.11 and I would like to know if there is a way to get
a query object from
, -95.8 28.82), 4326), pt.c.pos))
- Can anyone point out a better way I could construct this query? Is
there anything I am missing?
Thanks,
Allen
for creating a query object with the same settings as those used for
the query used to create the list for the relation property.
Is this possible in any way?
Thanks,
Allen
I forgot to mention, I am using SA 0.3.10.
Thanks,
Allen
On Dec 7, 2007 7:49 AM, Allen Bierbaum [EMAIL PROTECTED] wrote:
I am trying to create two queries with some of my SA ORM objects that
will use the sum of a field found through a relationship. To be a bit
more concrete, here
like a pretty nice feature to me :)
Thanks,
Allen
function that takes the results from a query
with add_column and adds them back to the primary object as custom
attributes.
Now that I have a couple options, I think I can get at least one of
them to work. :)
-Allen
On Dec 7, 2007 3:11 PM, Paul Johnston [EMAIL PROTECTED] wrote:
Hi,
1. Return
of the code?
It seems that: http://erosson.com/migrate/trac/
is down right now.
-Allen
PS. I cc'ed the SA group in case there are migrate users that are not
on the new migrate mailing list.
On Aug 3, 11:31 am, Michael Bayer [EMAIL PROTECTED] wrote:
If someone is willing to step up and take
be using migrate? Are other people using migrate or some
other tool or just rolling your own code for database migration?
This seems like a *very* valuable capability to have in SA I am hoping
that there is a way to keep it going.
-Allen
) to be computing id from a
sequence behind the scenes. When the system tries to do this I end up
with conflicting keys and other problems.
Any ideas on how to disable autocreation of a serial column?
-Allen
this directly with SA, so if anyone
can tell me a way to let SA know exactly how I want the object's value
to appear in the generated SQL statement please let me know so I can
refine my code.
Thanks,
Allen
On 2/25/07, Allen Bierbaum [EMAIL PROTECTED] wrote:
[snip]
When I use this with my table
whey option 3 is working and if there is a way to
do this directly with SA only?
Thanks,
Allen
On 2/23/07, Allen [EMAIL PROTECTED] wrote:
I would like to use SqlAlchemy with PostGIS to create, read, update,
and query spatial data. I have search around a bit and found a few
ideas of doing
creating the SQL
command to execute. This causes problems because then postgres doesn't
know it could be calling a method instead. I have tried returning an
sqlalchemy.func object but this doesn't work either.
Any ideas?
-Allen
tracking.
Has anyone done anything like this?
Thanks,
Allen
in the SA tutorial. This may help separate the SA plugin magic from
the fixture magic.
Anyway, see below for more detailed comments
On 2/6/07, Kumar McMillan [EMAIL PROTECTED] wrote:
On 2/3/07, Allen Bierbaum [EMAIL PROTECTED] wrote:
This works for creating the table structure
I am going to try to integrate this into my testing framework this
afternoon so I am sure I will have more questions after that. In the
meantime see below...
On 2/7/07, Kumar McMillan [EMAIL PROTECTED] wrote:
Thanks for taking a close look Allen. Here are some answers...
On 2/7/07, Allen
but I would like to keep from reinventing the wheel if I can help it.
Thanks,
Allen
I was considering the use of migrate (http://erosson.com/migrate/) for
a new project using SA and I just wondered if anyone else is using
it? Are there any plans to integrate this support into a future
version of SA?
Thanks,
Allen
.
-Allen
On 1/26/07, Michael Bayer [EMAIL PROTECTED] wrote:
On Jan 25, 7:28 pm, Allen [EMAIL PROTECTED] wrote:
The basic idea of a persistence layer (as I understand it) is that you
attempt to isolate applications from the database to the point that the
application and the data model can vary
everywhere
* use the sql construction language for all queries
As you can see, my list is quite short. :)
Does anyone have additional suggestions for someone just getting
started with SQLAlchemy?
Thanks,
Allen