On 11/18/2012 03:48 PM, Michael Bayer wrote:
There's a lot of documentation on this, not sure how you're not finding it,
unless you're not using 0.8 (which is required):
1. documentation for SQL bind/result column processing:
Hello list,
I can't find an example for a user defined type that has to produce
custom SQL. What I want is to implement a PostGIS geometry column and it
basically has to do two things:
1. Implement its own SQL for insert/update, producing for example:
INSERT INTO (..., xyzzy, )
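A minimal sketch of the kind of type this asks for, using SQLAlchemy's UserDefinedType with bind_expression()/column_expression() (the 0.8 hooks mentioned above). The Geometry name, the GEOMETRY column spec, and the SRID 4326 are illustrative assumptions, not the poster's actual code:

```python
# Hypothetical PostGIS-style geometry type: wraps bound parameters in
# ST_GeomFromText() on INSERT/UPDATE and columns in ST_AsText() on SELECT.
from sqlalchemy import func
from sqlalchemy.types import UserDefinedType

class Geometry(UserDefinedType):
    cache_ok = True  # no per-instance state affects generated SQL

    def get_col_spec(self, **kw):
        return "GEOMETRY"

    def bind_expression(self, bindvalue):
        # Emit ST_GeomFromText(:param, 4326) instead of a bare :param.
        return func.ST_GeomFromText(bindvalue, 4326, type_=self)

    def column_expression(self, col):
        # Wrap the column in SELECT so results come back as WKT text.
        return func.ST_AsText(col, type_=self)
```

With this in place, a plain `table.insert().values(geom="POINT(1 2)")` compiles with the function call wrapped around the bound parameter.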
To answer my own question: it seems SQLA won't automatically process
inner joins if you supply individual columns to session.query(); I
had to pass the declarative model class itself for this to work as expected.
Unless I'm doing something wrong, I guess I should use deferred columns
Hello list,
I don't think my brain is currently capable of figuring out the SQL
itself, let alone how to do it with SQLA.
I've got the following models:
City
Location
Item
Category
Location belongs to one of many Cities via simple Location.city_id
foreign key
Location belongs to one or
On 08/13/2012 10:13 PM, Michael Bayer wrote:
we can keep turning the crank here, here's a full series of examples to make
this happen:
Thanks for the example, that's definitely something I'll need sooner or
later, already implemented my own PostgreSQL POINT data type and was
wondering how
Hi all,
what is the proper way to filter for an element in a PostgreSQL ARRAY
column? in_() does not produce valid syntax for PostgreSQL (it emits
(value) instead of ARRAY[value]).
For now I'm doing the following, but I'm not sure how to bind values and
not have such a gaping sql injection
On 08/13/2012 08:53 PM, Michael Bayer wrote:
we don't currently have native support for ARRAY operations and there is some
infrastructure work that should make it easier in 0.8 to add.
for now, when you need custom operators use op():
Model.array_column.op('@>')(somevalue)
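A hedged sketch of that op() approach, assuming a PostgreSQL ARRAY column; `@>` is Postgres's "contains" operator, so the filter asks whether the array column contains the one-element array built from the value. The Model/tags names are illustrative:

```python
# Filter a PostgreSQL ARRAY column for containment using op('@>').
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, array
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Model(Base):
    __tablename__ = "models"
    id = Column(Integer, primary_key=True)
    tags = Column(ARRAY(String))

# tags @> ARRAY['somevalue'] -- the bound value is a parameter, no
# string interpolation, so no SQL injection concern.
stmt = select(Model).where(Model.tags.op("@>")(array(["somevalue"])))
sql = str(stmt.compile(dialect=postgresql.dialect()))
```

On later SQLAlchemy versions the postgresql ARRAY type also grows native `.contains()` and `.any()` comparators, which make op() unnecessary for this case.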
I tried that, but
Hello,
I'm trying to use PostGIS with GeoAlchemy, being totally new to both.
According to available documentation and what I need (A point
representing longitude and latitude of row), I have
class PointOfInterest(Base):
__tablename__ = ...
...
geom =
I see two approaches here. The first is partitioning as it has been
suggested, except partitions need to be defined upfront, if I'm not
mistaken, at least in MySQL (expression + size), whereas in PostgreSQL
you need custom triggers on write operations on the main table, so I'm
not sure if
On 05/17/2012 05:09 PM, Claudio Freire wrote:
Precompiling queries in SQLA, to populate the various SQLA compiler
caches, doing some queries that cause libpq and psycopg2 to exercise
(and thus to allocate whatever permanent data structures they need),
all at load time, will help keep
Hello.
I have a problem when processing relatively large number of rows.
For example, when selecting 5000 main rows, each having a number of
many-to-one relationships, memory usage shown by top skyrockets into
the 200+ MB range (RES), while heapy shows ca. 20 MB of
,
initialization of TypeEngine value processors, and such. But the
initial large resultset is the main thing causing the higher initial
memory footprint. You'll notice this isn't a leak at all, as it
doesn't grow.
On May 17, 2012, at 7:33 AM, Vlad K. wrote:
Hello.
I have a problem
Hi all.
I'm interested in learning proper application design patterns and best
practices when using a SQLA powered web application (Pyramid based) in a
DB cluster (PostgreSQL 9.1, psycopg2 driver).
The primary requirement for the cluster is realtime backup, so the idea
is to have two or
Hi all.
I'm posting this again because it seems my original post never reached
the list?
I'm interested in learning proper application design patterns and best
practices when using a SQLA powered web application (Pyramid based) in a
DB cluster (PostgreSQL 9.1, psycopg2 driver).
The
Hi all,
I was wondering if there is some kind of security announcements mailing
list or anything where we could be informed of security issues directly
from the horse's mouth? :) I did join the mailing list after 0.7.0 was
released so I can't tell if this particular issue was mentioned, a
Thanks for the info, I've commented on that separate issue as well.
.oO V Oo.
On 02/25/2012 04:14 AM, Yap Sok Ann wrote:
I too encountered the same problem that wasted me days, until I caught
pdtb_sqla_queries with my very limited memory profiling skill.
I have actually filed a bug here:
on.
Sorry for wasting your time, folks.
I'm sending this email to the SQLAlchemy list too, where I asked about
this problem, for further reference in case anyone else has the same
problem
.oO V Oo.
On 02/23/2012 02:24 PM, Vlad K. wrote:
Hi all.
I have a Pyramid bootstrapped script
.oO V Oo.
On 02/23/2012 02:50 PM, Michael Bayer wrote:
I'd note that those subqueryloads() render the effectiveness of yield_per() to
be almost nil.
I know. I've replicated the real use case in the application which has
yet to see proper optimization which includes better query planning
Hi,
thanks for your reply. I haven't yet tested this with a profiler to see
what exactly is happening, but the bottom line is that the
overall memory use grows with each iteration (or transaction processed),
to the point of grinding the server to a halt, and top shows only the
Yes, definitely growing at a rate of 700-800 objects per iteration.
.oO V Oo.
On 02/22/2012 07:23 PM, Michael Bayer wrote:
When we want to test if a Python program has a leak, we do that via seeing
how many uncollected objects are present. This is done via gc:
import gc
print(len(gc.get_objects()))  # total number of objects tracked by gc
Okay, thanks to this article:
http://neverfear.org/blog/view/155/Investigating_memory_leaks_in_Python
I made similar plot of object counts in time, showing top 50 types. The
resulting PDF is here (you might wish to download it first, Google
messes it up for me):
Hi all,
I have to read thousands of rows from a table and compile some data, but
in certain conditions update those rows, all with the same value. The ratio
of reads to writes here varies as widely as possible: sometimes no rows,
sometimes a few, and sometimes all rows that are read have to be updated.
Thanks for your replies.
Using the IN list definitely speeds up the process, but I hate the
resulting query, which uses a bound variable for each and every element of
the list.
But I have another problem with this, there's a massive memory leak
somewhere. Take a look at this model:
class
I need to select for update but locking the row only in one table in a
query with outer joins. I'm using PostgreSQL backend. It has to be outer
join, so I need FOR UPDATE OF [tablename]. How do I do that with SQLA?
So far I was using Query.with_lockmode('update'), but I can't specify
the table
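Later SQLAlchemy versions (0.9+) replace with_lockmode() with with_for_update(), whose `of=` parameter renders PostgreSQL's `FOR UPDATE OF <table>` so only one table's rows are locked in an outer-join query. The Parent/Child models here are illustrative stand-ins:

```python
# Lock only the "parents" rows in an outer-join SELECT on PostgreSQL.
from sqlalchemy import Column, ForeignKey, Integer, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parents"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "children"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parents.id"))

stmt = (
    select(Parent, Child)
    .outerjoin(Child, Child.parent_id == Parent.id)
    .with_for_update(of=Parent.__table__)   # FOR UPDATE OF parents
)
sql = str(stmt.compile(dialect=postgresql.dialect()))
```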
Thanks, I think that's exactly what I was looking for!
.oO V Oo.
On 12/28/2011 06:10 PM, Michael Bayer wrote:
On Dec 27, 2011, at 8:37 PM, Vlad K. wrote:
Hi all.
I need to select some rows where pkey is in a sequence. How do I order by that
very sequence?
images_all =
session.query
Hi all.
I need to select some rows where pkey is in a sequence. How do I order
by that very sequence?
images_all =
session.query(AdImage).filter(AdImage.image_id.in_(images)).order_by(
? ).all()
Postgresql backend.
Thanks!
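One way to answer the question above: build a CASE expression that maps each primary key to its position in the Python list, and ORDER BY that. (On modern PostgreSQL, ordering by `array_position()` is an alternative.) A sketch, with a minimal stand-in for the AdImage model:

```python
# ORDER BY the position of image_id within the Python list `images`.
from sqlalchemy import Column, Integer, case, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class AdImage(Base):
    __tablename__ = "ad_images"
    image_id = Column(Integer, primary_key=True)

images = [7, 3, 9]
ordering = case(
    {pk: index for index, pk in enumerate(images)},  # 7 -> 0, 3 -> 1, 9 -> 2
    value=AdImage.image_id,
)
stmt = select(AdImage).where(AdImage.image_id.in_(images)).order_by(ordering)
```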
--
.oO V Oo.
Hi all!
I have a few scenarios here that I believe are best solved without the
ORM overhead. For example, various log tables that do not require a
primary key, the rows are practically immutable, but are queried back
for statistical analysis. It is my understanding that I cannot use the
ORM
So basically, if I'm understanding the docs correctly, and what you just
wrote:
Using the session object does not mean using the ORM. The ORM comes into
play with Mapper and mapped instances, which in turn require a primary
key defined. So, can I use session.execute() to do non-ORM querying?
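That understanding can be sketched concretely: a Session will happily run plain Core statements against a table that has no mapped class (and no primary key), so log-style tables need not go through the ORM at all. The table and engine here are illustrative:

```python
# Core-only inserts/selects through a Session, on a table with no pkey.
from sqlalchemy import (Column, DateTime, MetaData, String, Table,
                        create_engine, func, select)
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
metadata = MetaData()

# No primary key: fine for Core, but not mappable by the ORM.
log_table = Table(
    "request_log",
    metadata,
    Column("path", String),
    Column("logged_at", DateTime, server_default=func.now()),
)
metadata.create_all(engine)

with Session(engine) as session:
    session.execute(log_table.insert(), [{"path": "/a"}, {"path": "/b"}])
    count = session.execute(select(func.count()).select_from(log_table)).scalar()
    session.commit()
```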
Hi.
Imagine the following scenario:
session = DBSession()
readonly_model = session.query(ReadOnlyModel).get(id)
# Readonly means the model will NOT have its data changed in the life of
the transaction(s).
method_one(readonly_model.readonly_data, param_1, param_2, ...)
Hi, thanks for your reply.
On 11/13/2011 05:15 PM, Michael Bayer wrote:
From what I've read in the docs, I am supposed to do session.refresh(),
No that's not required at all. All the objects that are still referenced
outside the session, stay present in the session and will reload
On 11/13/2011 06:16 PM, Michael Bayer wrote:
But they're not; I'm getting "Instance XY is not present in this
Session" for readonly_model when method_two is called, if there was a
rollback in method_one.
That would indicate you add()-ed it during the transaction. Any data
that was created
Hi,
I have two models, A and B. Model B contains two foreign keys into table
A, because it is a comparator model that describes certain logical
interaction between two A models. However, I want model B to contain a
relationship to both so I can access them through the model B instance:
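The usual fix for two foreign keys into the same table is to tell each relationship() which FK column it follows via `foreign_keys=`. A sketch mirroring the A/B description above (the left/right names are my own):

```python
# Model B holds two FKs into table A; foreign_keys= disambiguates them.
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    left_id = Column(Integer, ForeignKey("a.id"))
    right_id = Column(Integer, ForeignKey("a.id"))

    # Each relationship names the FK column it should join along.
    left = relationship("A", foreign_keys=[left_id])
    right = relationship("A", foreign_keys=[right_id])
```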
wrote:
On Nov 11, 2011, at 3:46 AM, Vlad K. wrote:
Hi,
I have two models, A and B. Model B contains two foreign keys into table A, because it is
a comparator model that describes certain logical interaction between two A
models. However, I want model B to contain a relationship to both so I can
I have an impedance mismatch problem, namely a model in X categories,
each having its own set of fields and all having a set of common fields.
I solved that with:
1. master table, contains the sequenced pkey and common fields, as
well as category discriminator
2. individual category tables
. The
denormalized user_id would not pose any problems and I can cascade
updates and deletes via master table, which is in cascade from the users
table.
Thoughts?
.oO V Oo.
On 10/02/2011 04:12 PM, Vlad K. wrote:
I have an impedance mismatch problem, namely a model in X categories,
each having its own
Hi, thanks for your help! That works.
.oO V Oo.
On 09/19/2011 11:03 PM, Michael Bayer wrote:
On Sep 19, 2011, at 12:45 PM, Vlad K. wrote:
Hi!
I have a model, let's call it Resource. And another, let's call it Container. Each
container can have any number of Resources, so they're in many
Hi all!
I'm looking for a programming pattern to deal with radio-button-like
behavior, that is, in a group of rows all with the same group_id, only one
can have a flag column set as true. Without using table triggers that
make all rows in the group lose the flag if the updating row carries it
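One trigger-free pattern is two UPDATEs in a single transaction: clear the flag for the whole group, then set it on the chosen row. A sketch; the Option model and column names are illustrative assumptions:

```python
# Radio-button semantics: at most one flagged row per group_id.
from sqlalchemy import update

def choose(session, Option, group_id, chosen_id):
    # Clear every flag in the group...
    session.execute(
        update(Option).where(Option.group_id == group_id).values(flag=False)
    )
    # ...then set exactly one. Both statements run inside the session's
    # transaction, so no committed state ever shows two flagged rows.
    session.execute(
        update(Option).where(Option.id == chosen_id).values(flag=True)
    )
```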
No, I can't truncate the table for other reasons, as I mentioned in my
original question. :)
The issue here was not how to sync the data, but whether processed rows
stay in the session even though the objects (model instances) are discarded
at the end of each iteration (each csv row), or in
(query sent to DB)
before the update called above it, in this particular example.
.oO V Oo.
On 09/08/2011 04:37 PM, Michael Bayer wrote:
On Sep 8, 2011, at 9:32 AM, Vlad K. wrote:
As a by the way to this question, I've noticed that the order of queries
given before flush
:
1. SELECTed row data
2. UPDATE on changed values (implicit flush of dirtied session data)
3. UPDATE as expected in step 3 above
4. COMMIT
.oO V Oo.
On 09/08/2011 10:04 PM, Michael Bayer wrote:
On Sep 8, 2011, at 3:32 PM, Vlad K. wrote:
For example the following:
row = session.query(Model
/2011 12:17 AM, Michael Bayer wrote:
On Sep 8, 2011, at 6:00 PM, Vlad K. wrote:
Yes that's how I know the order of events. I just checked the logs again and put some
sleep() between update() and merge(). It appears that the update() does some kind of
implicit flush because that commits the dirtied
Great, thanks!
.oO V Oo.
On 09/06/2011 04:48 PM, Michael Bayer wrote:
On Sep 6, 2011, at 10:40 AM, Vlad K. wrote:
I have a products database which is synchronized daily with an external source
via a csv file. There are several thousand rows in question. The
synchronization does two
I have a products database which is synchronized daily with an external
source via a csv file. There are several thousand rows in question. The
synchronization does two things:
1. Update only price if changed for existing products
2. Insert new products if they don't exist with all fields
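The two steps above can be sketched as a plain-Python diff pass: compare the incoming CSV rows against the current products and decide, per row, whether it needs a price update or a fresh insert. The function and field names are my own illustration, not the poster's code:

```python
# Plan a CSV->DB sync: price updates for known SKUs, inserts for new ones.
def plan_sync(existing, incoming):
    """existing: {sku: price}; incoming: {sku: full row dict with "price"}.

    Returns (updates, inserts).
    """
    updates, inserts = {}, []
    for sku, row in incoming.items():
        if sku in existing:
            if existing[sku] != row["price"]:
                updates[sku] = row["price"]   # 1. price changed
        else:
            inserts.append(row)               # 2. new product, all fields
    return updates, inserts

updates, inserts = plan_sync(
    {"A1": 10.0, "B2": 5.0},
    {"A1": {"sku": "A1", "price": 12.0}, "C3": {"sku": "C3", "price": 7.0}},
)
```

The updates dict can then feed a bulk UPDATE (e.g. the IN-list approach discussed elsewhere in the thread) and the inserts an executemany INSERT.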
Yes I'm using PostgreSQL and now that you've linked to the docs, I
remember there was a possibility for recursion. Thanks for suggestion,
I'll look into it.
.oO V Oo.
On 08/02/2011 03:41 AM, Gunnlaugur Briem wrote:
You could look for recursive CTE (Common Table Expressions), if your
Hi.
I have a problem and am not sure where to begin. I need to construct a
hierarchy tree, something like adjacency_list but in reverse. More
precisely, I need entire branch but only the branch containing given
node ID. In practice, I need this for a product category tree menu which
shows
Hi.
I have a situation where I have X data models, and while each has its
own properties, they all share a set of fields with common meaning, like
id, title, description, and some others.
What way would you recommend to approach this problem? If I wasn't using
SQLAlchemy, I'd have one main
Many thanks, that's exactly what I need!
.oO V Oo.
On 07/05/2011 10:13 PM, Conor wrote:
SQLAlchemy can model this via joined table inheritance:
http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance.
You are still responsible for setting up the tables in the way you
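A minimal sketch of the joined-table inheritance Conor points to: the base class holds the shared columns (id, title, description) plus a discriminator, and each category subclass table carries only its own fields, joined by primary key. The Item/Book names are illustrative assumptions:

```python
# Joined-table inheritance: shared columns on the base, per-category extras.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    description = Column(String)
    category = Column(String)  # discriminator column
    __mapper_args__ = {"polymorphic_on": category,
                       "polymorphic_identity": "item"}

class Book(Item):  # one of the X category models
    __tablename__ = "books"
    id = Column(Integer, ForeignKey("items.id"), primary_key=True)
    isbn = Column(String)
    __mapper_args__ = {"polymorphic_identity": "book"}
```

Querying Item then transparently returns Book instances with both tables joined.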