cluster <[EMAIL PROTECTED]> writes:
> Queries and output from EXPLAIN ANALYZE can be seen here with some
> syntax highlighting:
> http://rafb.net/p/BJIW4p69.html
You are lying to us about how those queries were posed to Postgres
(and no I don't feel a need to explain how I know). In future p
cluster wrote:
SELECT keyId, sortNum, count(1)
FROM stats s, items i
WHERE s.keyId = i.keyId AND i.sortNum > 123
GROUP BY i.keyId, i.sortNum
ORDER BY i.sortNum
LIMIT 50
Limit (cost=3281.72..3281.84 rows=50 width=16) (actual
time=435.838..436.043 rows=50 loops=1)
InitPlan
-
I have two small queries which are both very fast to evaluate
separately. The first query, "Query 1", calculates some statistics and
the second query, "Query 2", finds a subset of relevant keys.
When combined into a single query which calculates statistics from only
the subset of relevant ke
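A hedged sketch (not taken from the thread) of one common rewrite for this
pattern: assuming keyId is unique in items, and ignoring ties on sortNum,
pushing the LIMIT into a subquery means the statistics are aggregated only
for the 50 keys that are actually returned:

SELECT i.keyId, i.sortNum, count(*)     -- count of stats rows per key, as before
FROM stats s
JOIN (SELECT keyId, sortNum
      FROM items
      WHERE sortNum > 123
      ORDER BY sortNum
      LIMIT 50) i                       -- pick the 50 relevant keys first
  ON s.keyId = i.keyId
GROUP BY i.keyId, i.sortNum
ORDER BY i.sortNum;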
On 10/30/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
>
> I did some testing. I created an empty table with 300 partitions. Then, I
> inserted some rows into it and the performance was SLOW too.
Is the problem with inserting to the partitioned table or selecting from
it? It sounds like inserting is the
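As context, a hedged sketch of how inserts into an inheritance-partitioned
table are typically routed in this era of PostgreSQL (table and column names
here are illustrative, not from the thread). With around 300 children, the
trigger's IF/ELSIF chain is evaluated for every single row, which on its own
can make inserts through the parent table noticeably slow:

-- parent plus one example child; the real setup would have ~300 children
CREATE TABLE big_table (id bigint, payload text);
CREATE TABLE big_table_p001 (CHECK (id >= 0 AND id < 30000000))
    INHERITS (big_table);

-- routing trigger: runs once per inserted row
CREATE OR REPLACE FUNCTION big_table_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id >= 0 AND NEW.id < 30000000 THEN
        INSERT INTO big_table_p001 VALUES (NEW.id, NEW.payload);
    -- ELSIF ... one branch per partition ...
    ELSE
        RAISE EXCEPTION 'id % is out of range', NEW.id;
    END IF;
    RETURN NULL;  -- the row was already stored in a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_insert_trg
    BEFORE INSERT ON big_table
    FOR EACH ROW EXECUTE PROCEDURE big_table_insert();

Inserting directly into the correct child table (or COPYing into each child)
bypasses the routing trigger and usually avoids most of that per-row overhead.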
Christian Rengstl wrote:
> My OS is Windows 2003 with 4GB RAM and a Xeon Duo at 3.2 GHz;
> shared_buffers is set to 32MB (as I read it should be fairly low on
> Windows) and work_mem is set to 2500MB, but nevertheless the query takes
> about 38 seconds to finish. The table "table1" contains appr
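As a hedged aside (the numbers below are illustrative starting points, not
recommendations from the thread): work_mem is allocated per sort or hash
operation, not once per server, so a global 2500MB on a 4GB machine is risky,
while shared_buffers can usually go somewhat higher than 32MB even on Windows.
A postgresql.conf sketch might look like this (unit suffixes such as MB
require 8.2; on 8.1 these settings are given in kB or 8kB pages):

shared_buffers = 128MB        # still modest, as commonly advised for Windows builds of this era
work_mem = 16MB               # applies per sort/hash step, so keep the global value small
effective_cache_size = 2GB    # rough hint of how much the OS is likely to cache on a 4GB box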
Pablo Alcaraz wrote:
I had a big big big table. I tried to divide it into 300 partitions with
30M rows each. The problem was when I used the table to insert
information: the performance was LOW.
That's very vague. What exactly did you do? Just inserted a few rows, or
perhaps a large bulk loa
Hi All!
I had a big big big table. I tried to divide it into 300 partitions with
30M rows each. The problem was when I used the table to insert
information: the performance was LOW.
I did some testing. I created an empty table with 300 partitions. Then, I
inserted some rows into it and the performance was SLOW too.
I am definitely interested in upgrading.
Is there a guide out there that perhaps was created to explain the
changes in the config files from 8.1 to 8.2?
A migration guide, I guess?
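Not a substitute for a proper migration guide, but as a hedged pointer: one
way to see exactly which settings your current 8.1 configuration file
overrides, before porting them to an 8.2 postgresql.conf, is to ask the
server itself:

-- lists only the settings that come from postgresql.conf
SELECT name, setting
FROM pg_settings
WHERE source = 'configuration file'
ORDER BY name;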
On Oct 30, 2007, at 11:39 AM, Tom Lane wrote:
Richard Huxton <[EMAIL PROTECTED]> writes:
Ketema wrote:
5)Upg
>From: Christian Rengstl
>Subject: [PERFORM] Optimizing PostgreSQL for Windows
>
>Hi list,
>
>I have the following query:
>select t.a1, t.a2 from table1 t inner join table2 s
>using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;
>
>"-> Bitmap Heap Scan on table1 t (cost=388.25..
Richard Huxton <[EMAIL PROTECTED]> writes:
> Ketema wrote:
>> 5)Upgrade version of pg..currently is running 8.1.4
> Well every version gets better at planning, so it can't hurt.
+1 ... there are at least two things about this query that 8.2 could be
expected to be a great deal smarter about:
* mi
I'm not an expert on this stuff, but 32 MB of shared_buffers
seems quite low to me, even for a Windows machine. I'm running Postgres
8.2 on my workstation with 2GB of RAM and an AMD x64 3500+ with
shared_buffers set to 256MB without any trouble, and it's running fine,
even on large datasets a
On Oct 30, 2007, at 9:31 AM, Michael Glaesemann wrote:
On Oct 30, 2007, at 7:18 , Ketema wrote:
here is the execution plan:
I've put this online here:
http://explain-analyze.info/query_plans/1259-ketema-2007-10-30
I have attached an erd of the tables used in this query. If it is
stripp
On Oct 30, 2007, at 9:23 AM, Richard Huxton wrote:
Ketema wrote:
I have the following query that is a part of a function:
Yikes! Difficult to get a clear view of what this query is doing.
It seems complicated because you only have a small subset of the
schema these tables tie into.
Be hap
On Oct 30, 2007, at 7:18 , Ketema wrote:
here is the execution plan:
I've put this online here:
http://explain-analyze.info/query_plans/1259-ketema-2007-10-30
I have attached an erd of the tables used in this query. If it is
stripped out it can be viewed here: http://www.ketema.net/
prov
Hi list,
I have the following query:
select t.a1, t.a2 from table1 t inner join table2 s
using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;
With the following output from analyze:
"Sort (cost=35075.03..35077.51 rows=991 width=14) (actual
time=33313.718..33321.935 rows=22599 loops=1)
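A hedged suggestion rather than a diagnosis (the table and column names are
taken from the query above, but the real schema and data distribution may
make this moot): indexes matching the filter and join columns, followed by a
fresh ANALYZE, are the usual first things to try for a plan like this:

CREATE INDEX table1_pid_id_idx ON table1 (pid, id);                   -- filter on pid, join on id
CREATE INDEX table2_chromosome_pos_idx ON table2 (chromosome, pos);   -- filter on chromosome, order by pos
ANALYZE table1;
ANALYZE table2;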
Ketema wrote:
I have the following query that is a part of a function:
Yikes! Difficult to get a clear view of what this query is doing.
OK, I'm assuming you've vacuumed and analysed all these tables...
My concern is with the sort step that takes 15 seconds by itself:
-> Sort (cost=12
I have the following query that is a part of a function:
select order_details.tpv_success_id, order_details.tpv_id,
order_details.ver_code, order_details.app_id,
order_details.acct_id, order_details.first_name || ' ' ||
order_details.last_name as customer_name,
order_details.order_
"Mark Kirkwood" <[EMAIL PROTECTED]> writes:
> Luke Lonergan wrote:
>> Sure - it's here:
>> http://momjian.us/mhonarc/patches_hold/msg00381.html
>
> To clarify - we've fixed this in Greenplum db - the patch as submitted is
> (hopefully) a hint about how to fix it in Postgres, rather than a workin
Hi,
I need to implement a load balancing solution for our PostgreSQL
database. We are currently working with Postgres 8.1, but an upgrade to
Postgres 8.2 is planned for the near future.
I have read about the topic and would like to hear about any experience with it.
Did you ever need load balancing?
What did you implem