You're probably right about my being overly optimistic about the load
imposed by EXPLAIN ANALYZE. It was just that in my previous experience
with it, I'd never seen such a large runtime discrepancy before. I even
allowed for a "caching effect" by making sure the server was all but
quiescent,
On 1/28/2005 2:49 PM, Christopher Browne wrote:
But there's nothing wrong with the idea of using "pg_dump --data-only"
against a subscriber node to get you the data without putting a load
on the origin. And then pulling the schema from the origin, which
oughtn't be terribly expensive there.
And th
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Steven Rosenstein
> >> I don't think EXPLAIN ANALYZE puts that much overhead on a query.
I think you're being overly optimistic. The explain shows that the
Materialize subnode is being entered upwards of 32 million times:
-> Mater
"Leeuw van der, Tim" <[EMAIL PROTECTED]> writes:
> I don't think EXPLAIN ANALYZE puts that much overhead on a query.
EXPLAIN ANALYZE does indeed impose a significant overhead. What percentage of
the time is overhead depends heavily on how much I/O the query is doing.
For queries that are primar
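A quick way to see the instrumentation cost for yourself is to time the same
statement with and without EXPLAIN ANALYZE. A minimal sketch, assuming the
detail table discussed further down this thread:

  \timing on
  SELECT count(*) FROM detail;                  -- psql reports the plain wall-clock time
  EXPLAIN ANALYZE SELECT count(*) FROM detail;  -- per-node timing adds its own overhead

The gap between the two elapsed times is roughly the cost of the per-node
timing calls that EXPLAIN ANALYZE inserts.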
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Steven Rosenstein
Sent: Sunday, February 06, 2005 8:51 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Can the V7.3 EXPLAIN ANALYZE be trusted?
While working on a previous question I posed to t
Many thanks to Gaetano Mendola and Tom Lane for the hints about using
fields from other tables in a DELETE's WHERE clause. That was the magic
bullet I needed, and my application is working as expected.
--- Steve
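For readers following along, the "fields from other tables in the WHERE
clause" approach being referred to looks roughly like the sketch below. It
assumes collect_date lives on summary and sum_id on detail, as described
later in the thread, and it relies on older releases implicitly adding
summary to the query (with a notice about the missing FROM-clause entry):

  DELETE FROM detail
   WHERE detail.sum_id = summary.id             -- summary is named only in the WHERE clause
     AND summary.collect_date = '2005-02-05';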
While working on a previous question I posed to this group, I ran a number
of EXPLAIN ANALYZE's to provide as examples. After sending up my last
email, I ran the same query *without* EXPLAIN ANALYZE. The runtimes were
vastly different. In the following example, I ran two identical queries
on
Steven Rosenstein <[EMAIL PROTECTED]> writes:
> Thank you for the link to the documentation page. I forgot to mention that
> we're still using version 7.3. When I checked the 7.3 documentation for
> DELETE, there was no mention of being able to use fields from different
> tables in a WHERE clause
Hi Michael,
Thank you for the link to the documentation page. I forgot to mention that
we're still using version 7.3. When I checked the 7.3 documentation for
DELETE, there was no mention of being able to use fields from different
tables in a WHERE clause. This feature must have been added
Gaetano Mendola wrote:
Steven Rosenstein wrote:
DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE
collect_date='2005-02-05';
You have to tell it which table you are deleting from. SELECT * FROM a JOIN b
reads from both tables; what you want to do instead is fix the WHERE clause.
DELETE FROM detai
On Sun, Feb 06, 2005 at 12:16:13PM -0500, Steven Rosenstein wrote:
>
> DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE
> collect_date='2005-02-05';
>
> But I keep getting a parser error. Am I not allowed to use JOINs in a
> DELETE statement, or am I just fat-fingering the SQL
Steven Rosenstein wrote:
DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE
collect_date='2005-02-05';
DELETE FROM detail WHERE detail.sum_id in ( select id from summary )
AND collect_date='2005-02-05';
Regards
Gaetano Mendola
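One caveat with the rewrite above: if collect_date is a column of summary
rather than detail (as the schema sketch in the question below suggests), the
date filter belongs inside the subquery. A hedged variant:

  DELETE FROM detail
   WHERE detail.sum_id IN (SELECT id
                             FROM summary
                            WHERE collect_date = '2005-02-05');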
This is probably a very trivial question and I feel foolish in even posting
it, but I cannot seem to get it to work.
SCENARIO (abstracted):
Two tables, "summary" and "detail". The schema of summary looks like:
  id            int serial   sequential record id
  collect_date  date         date the de
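A minimal reconstruction of the two tables, purely for illustration; only
summary's columns are described in the thread, so everything on detail beyond
sum_id is hypothetical:

  CREATE TABLE summary (
      id           serial PRIMARY KEY,   -- sequential record id
      collect_date date                  -- date the detail rows were collected
  );
  CREATE TABLE detail (
      id     serial PRIMARY KEY,         -- hypothetical
      sum_id int REFERENCES summary(id)  -- link back to the owning summary row
  );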
[EMAIL PROTECTED] (Dirk Lutzebaeck) writes:
> SELECT DISTINCT ON (df.val_9, df.created, df.flatid) df.docindex,
> df.flatobj, bi.oid, bi.en
> FROM bi,df
> WHERE bi.rc=130170467
> ...
> ORDER BY df.val_9 ASC, df.created DESC
> LIMIT 1000 OFFSET 0
Just out of curiosity, what is this query supposed
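For context, DISTINCT ON keeps only the first row for each distinct value of
the listed expressions, with "first" determined by the ORDER BY. A simplified
illustration using only two of the columns from the query above:

  -- one row per val_9, namely the most recently created one
  SELECT DISTINCT ON (val_9) val_9, created
    FROM df
   ORDER BY val_9 ASC, created DESC;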
Dirk Lutzebaeck wrote:
Greg Stark wrote:
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually but the combina
On 1/20/2005 9:23 AM, Jean-Max Reymond wrote:
On Thu, 20 Jan 2005 15:03:31 +0100, Hervé Piedvache <[EMAIL PROTECTED]> wrote:
We were at that point thinking about a cluster solution ... We saw many
solutions on the Internet discussing clustering with MySQL ... but
nothing about PostgreSQL
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually but the combination is inaccurate then you have a more dif
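Checking a single column's estimate amounts to comparing the planner's
predicted row count against the measured one. A sketch against the df table
from the query quoted earlier (the literal value is a placeholder):

  -- compare the estimated "rows=" with the "actual ... rows=" in the node's output;
  -- a large mismatch suggests the column's statistics target is too low
  EXPLAIN ANALYZE SELECT 1 FROM df WHERE val_9 = 'placeholder';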
Tom Lane wrote:
Josh Berkus writes:
The issue here is that the planner is capable of "pushing down" the WHERE
criteria into the first view, but not into the second, "nested" view, and so
postgres materializes the UNIONed data set before performing the join.
Thing is, I seem to recall that this p
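A self-contained sketch of the shape being discussed (all table and view
names are made up): the WHERE condition can reach the plain view's base
table, but may not reach the tables under the UNION view, which then has to
be materialized in full before the join:

  CREATE TABLE t1 (id int);
  CREATE TABLE t2 (id int);
  CREATE TABLE t3 (id int);
  CREATE VIEW plain_v AS SELECT id FROM t1;
  CREATE VIEW union_v AS SELECT id FROM t2 UNION SELECT id FROM t3;
  -- whether "id = 42" is pushed down into t2/t3 shows up in the plan
  EXPLAIN SELECT *
    FROM plain_v p JOIN union_v u ON u.id = p.id
   WHERE p.id = 42;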
Dirk Lutzebaeck wrote:
Greg,
Thanks for your analysis. But I don't get any better results after bumping the
STATISTICS target from 10 to 200.
EXPLAIN ANALYZE shows that the optimizer is still way off when estimating
the rows. Is this normal? It still produces a 1 GB temp file.
I simplified the query a bit, now onl
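For reference, raising a single column's target and refreshing the statistics
looks like this (df and val_9 are from the query quoted earlier; 200 is the
figure Dirk mentions trying):

  ALTER TABLE df ALTER COLUMN val_9 SET STATISTICS 200;
  ANALYZE df;   -- the new target only takes effect after the table is re-analyzed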