Greg Smith wrote:
Richard Neill wrote:
Am I missing something though, or is this project dormant, without
having released any files?
My bad--gave you the wrong url.
http://git.postgresql.org/gitweb?p=pg_top.git;a=summary has the project
I meant to point you toward.
Will try that out...
Justin Pitts wrote:
I don't know if I would call it "terribly" ugly. It's not especially
pretty, but it affords the needed degree of twiddling to get the job
done. Relying on the clients is fine - if you can. I suspect the vast
majority of DBAs would find that notion unthinkable. The usual res
Thanks very much for your help so far.
(It is pretty confusing that the HashAggregate reports ~6M rows, but
the sort does 41M rows - but maybe I'm misreading this.)
Anyway, I think that if you up the work_mem for this query to 512MB,
the sort will be done in memory, and thus be plenty faster.
Tried this
Richard Neill wrote:
Likewise, is there any way to check whether, for example, postgres is
running out of work memory?
It doesn't work like that; it's not an allocation. What happens is that
the optimizer estimates how much memory a sort is going to need, and
then uses work_mem to decide whether it can do the sort in memory or
must spill to disk.
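One way to see how that decision actually played out is the "Sort Method" line that EXPLAIN ANALYZE prints under each sort node. A minimal sketch - the query shape is illustrative; only the core.demand table and its qty/id columns appear elsewhere in this thread:

```sql
-- EXPLAIN ANALYZE reports how each sort was actually performed.
-- "Sort Method: quicksort  Memory: ..." means the sort fit in work_mem;
-- "Sort Method: external merge  Disk: ..." means it spilled to temp files.
EXPLAIN ANALYZE
SELECT id, qty
FROM core.demand
ORDER BY qty;
```

If the plan shows an external merge just over the memory limit, raising work_mem for that one query is the usual remedy.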
Justin Pitts wrote:
Set work_mem in postgresql.conf down to what the 200 clients need, which
sounds to me like the default setting.
In the session which needs more work_mem, execute:
SET SESSION work_mem TO '256MB'
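The override above can be scoped two ways - per connection, or per transaction via SET LOCAL, which reverts on its own. A sketch (the value shown is the one from this thread; adjust to taste):

```sql
-- Connection-scoped: stays in effect until the session ends or is RESET.
SET SESSION work_mem TO '256MB';

-- Transaction-scoped: reverts automatically at COMMIT or ROLLBACK,
-- so a pooled connection cannot leak the large setting to other work.
BEGIN;
SET LOCAL work_mem TO '256MB';
-- ... run the one expensive query here ...
COMMIT;
```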
Isn't that terribly ugly? It seems to me less hackish to rely on the
many clients
>>> max_connections = 500 # (change requires restart)
>>> work_mem = 256MB # min 64kB
>>
>> Not that it has to do with your current problem but this combination could
>> bog your server if enough clients run sorted queries simultaneously.
>> You prob
Fernando Hevia wrote:
-----Original Message-----
From: Richard Neill
max_connections = 500 # (change requires restart)
work_mem = 256MB # min 64kB
Not that it has to do with your current problem but this combination could
bog your server if enough clients run sorted queries simultaneously.
Thom Brown wrote:
2009/11/20 Richard Neill <rn...@cam.ac.uk>
It might also help if you posted your postgresql.conf too.
Below (have removed the really non-interesting bits).
Thanks,
Richard
I can't actually see anything in your config that would cause this
problem.
:/
As for seeing the progress of an upd
Thom Brown wrote:
Okay, have you tried monitoring the connections to your database?
Try: select * from pg_stat_activity;
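A slightly more targeted version of that query can help - this sketch assumes a recent PostgreSQL, where the view exposes pid, state and query (much older releases used procpid and current_query instead):

```sql
-- Show only non-idle backends, longest-running first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;
```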
Tried that - it's very useful as far as it goes. I can see that in most
cases, the DB is running just the one query.
What I really want to know is, how far through that
Thom Brown wrote:
It looks like your statistics are way out of sync with the real data.
> Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual
> time=248577.879..253168.466 rows=347308 loops=1)
This shows that it thinks there will be 8,686 rows, but it actually
traverses 347,308.
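When the estimates are that far off, refreshing the planner's statistics is the usual first step. A sketch - the table is the one from this thread, but the column name is illustrative; pick whichever join column the planner is misestimating:

```sql
-- Recollect statistics for the misestimated table:
ANALYZE core.demand;

-- If the estimates are still poor, sample the problem column more
-- heavily, then re-analyze:
ALTER TABLE core.demand ALTER COLUMN id SET STATISTICS 500;
ANALYZE core.demand;
```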
Kevin Grittner wrote:
Richard Neill wrote:
SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
FROM
core.demand,
viwcs.previous_wave
LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
For comparison, how does this do?:
Subject: Re: [PERFORM] Postgres query completion status?
2009/11/20 Richard Neill
>
>> Greg Williamson wrote:
>>
>>> Richard --
>>>
>>> You might post the results of "EXPLAIN ANALYZE <query>;" ... be
>>> sure to run it in a transaction if you want to be able to roll it back. Perhaps
>>> try "EXPLAIN <query>;" first as it is faster, but EXPLAIN ANALYZE shows
>>> what the planner is doing.
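The transaction wrapper Greg describes looks like this in practice - the UPDATE here is a stand-in for whatever statement is being profiled, not a query from the thread:

```sql
-- EXPLAIN ANALYZE really executes the statement, so wrap anything that
-- modifies data in a transaction and roll it back afterwards:
BEGIN;
EXPLAIN ANALYZE
UPDATE core.demand SET qty = qty;  -- illustrative statement
ROLLBACK;
```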
Richard Neill wrote:
As far as I know, the only tools that exist are
pg_stat_activity, top, and iotop
Have I missed one?
The ui for pgTop might be easier for what you're trying to do:
http://pgfoundry.org/projects/pgtop/
--
Greg Smith   2ndQuadrant   Baltimore, MD
PostgreSQL Training, Serv
Greg Williamson wrote:
Richard --
You might post the results of "EXPLAIN ANALYZE <query>;" ... [snip]
Here's something very very o
Greg Williamson wrote:
Richard --
You might post the results of "EXPLAIN ANALYZE <query>;" ... [snip]
Is there any way I can g
Thanks for your help. This issue splits into 2 bits:
1. Fixing specific queries.
2. Finding out when a specific running query is going to complete.
(At the moment, this is the bit I really need to know).
Greg Williamson wrote:
Richard --
You might post the results of "EXPLAIN ANALYZE <query>;" ... be sure
to run it in a transaction if you want to be able to roll it back. Perhaps try
"EXPLAIN <query>;" first as it is faster, but EXPLAIN ANALYZE shows what the
planner is doing.
You wrote:
>
> P.S. Sometimes, some queries seem to
Dear All,
I've just joined this list, so let me first thank you in advance for
your hospitality.
I'm having lots of trouble with various slow-running queries on a
production system. I've tried all the "obvious" fixes: changing the
query planner, checking for indexing, autovacuum, making su