On Fri, Mar 4, 2011 at 4:18 PM, Landreville
landrevi...@deadtreepages.com wrote:
create temporary table deltas on commit drop as
select * from get_delta_table(p_switchport_id, p_start_date,
p_end_date);
select round(count(volume_id) * 0.95) into v_95th_row from deltas;
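Presumably the function's next step is to order the deltas and pick the row at
that position. A minimal sketch of that step, assuming the temp table has a
numeric column named delta and a variable v_95th_value of the same type (both
names are guesses, not from the original function):

-- fetch the value sitting at the 95th-percentile position
select d.delta into v_95th_value
from deltas d
order by d.delta
offset (v_95th_row::int - 1)
limit 1;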
On Tue, Feb 15, 2011 at 6:19 PM, Thomas Pöhler
t...@turtle-entertainment.de wrote:
Hi list,
See ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg
What is the bottom graph? Queries per minute? Looks like your database is
just getting hammered.
Maybe there is a really badly coded page somewhere.
On Thu, Jan 14, 2010 at 8:17 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
- 48 GB RAM
2) Which Windows OS would you recommend? (currently 2008 x64 Server)
There is no 64-bit Windows build right now, so you would be limited to
shared_buffers of about a gigabyte. Choose Linux.
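Rough numbers, mine rather than from the original mail: the 32-bit Windows
binaries top out at a usable shared_buffers of roughly 512MB-1GB, while on
64-bit Linux with 48 GB of RAM a common rule of thumb is to start near 25% of
RAM. A hedged postgresql.conf sketch:

# 32-bit Windows build - about the practical ceiling:
#   shared_buffers = 1GB
# 64-bit Linux, 48 GB RAM - common starting points, tune from there:
shared_buffers = 12GB
effective_cache_size = 32GB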
Greetings
The idea is to not have to
use the index on ID, and to process the rows in physical order. First
make sure that newly inserted production data has the correct value in
the new column, and add 'where new_column is null' to the conditions.
But I have never tried this; use at your own risk.
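A minimal sketch of that approach, with made-up names (big_table, new_column,
old_column) and a placeholder expression for the new value:

-- newly inserted rows should already arrive with new_column filled in;
-- this backfills only the old rows, in one sequential (physical-order) pass:
alter table big_table add column new_column integer;
update big_table
set new_column = old_column  -- placeholder for the real computation
where new_column is null;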
Greetings
Marcin Mank
On Tue, Nov 24, 2009 at 2:37 PM, Luca Tettamanti kronos...@gmail.com wrote:
->  HashAggregate  (cost=1031681.15..1033497.20 rows=181605 width=8)
    (actual time=571807.575..610178.552 rows=26185953 loops=1)
This is your problem: the system's estimate for the number of distinct
values is far too low (181,605 estimated vs. 26,185,953 actual).
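A hedged way to check and nudge that estimate (table and column names here are
made up, not from the thread):

-- what the planner believes:
select n_distinct from pg_stats
where tablename = 't' and attname = 'grouping_col';
-- what is actually in the table:
select count(distinct grouping_col) from t;
-- a larger sample sometimes improves the estimate:
alter table t alter column grouping_col set statistics 1000;
analyze t;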
On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha fah...@email.unc.edu wrote:
Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf
and make sure all the information is current. Thanks for pointing out my
error.
Excellent report!
About the copy problem: you seem to have
max_connections = 500 # (change requires restart)
work_mem = 256MB # min 64kB
Not that it has anything to do with your current problem, but this combination
could bog down your server if enough clients run sorted queries simultaneously.
You probably should lower one of these settings.
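Back-of-the-envelope arithmetic (mine, not from the thread): work_mem is per
sort/hash node and per backend, so the worst case here is on the order of
500 x 256MB = 125GB, and a single query can hit that limit several times if it
contains several sort or hash nodes. Something far more conservative is the
usual advice, e.g.:

work_mem = 16MB              # per sort/hash operation, per backend
max_connections = 200        # or keep 500 and put a pooler in front
# raise it only for the sessions that really need a big sort:
# SET work_mem = '256MB';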
There is one thing I don't understand:
->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37)
    (actual time=0.192..60.214 rows=3174 loops=1)
  ->  Index Scan using visitors_userid_index2 on visitors v
      (cost=0.00..2580.97 rows=1300 width=33) (actual
So the bottom line here is just that the estimated n_distinct is too
low. We've seen before that the equation we use tends to do that more
often than not. I doubt that consistently erring on the high side would
be better though :-(. Estimating n_distinct from a limited sample of
the
I came across an interesting paper on n_distinct calculation:
http://www.pittsburgh.intel-research.net/people/gibbons/papers/distinct-values-chapter.pdf
The PCSA algorithm described there requires O(1) calculation per
value. Page 22 describes what to do with update streams.
This I think is worth a look.
Just as a question to Tom and team:
maybe it's time for asktom.postgresql.org? Oracle has it :)
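As a practical footnote to the n_distinct discussion: if ANALYZE with a larger
sample doesn't help, PostgreSQL 9.0 and later let you override the estimate per
column by hand (the thread above was on 8.4, and the names below are made up):

-- roughly the ~26.2M groups observed in the plan above:
alter table t alter column grouping_col set (n_distinct = 26000000);
analyze t;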
Is there anything I'm missing that is preventing it from using the index?
It just seems weird to me that other joins like this work fine and fast
with indexes, but this one won't.
Did you consider clustering both tables on the dsiacctno index?
I just checked that for a 4M-row table even
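For reference, a sketch of what the clustering would look like, with
hypothetical table and index names built around the dsiacctno column (note
that CLUSTER takes an exclusive lock and has to be re-run periodically to
keep the physical order as data changes):

-- 8.3+ syntax; older releases use "CLUSTER index_name ON table_name"
cluster table_a using table_a_dsiacctno_idx;
cluster table_b using table_b_dsiacctno_idx;
analyze table_a;
analyze table_b;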
warehouse-# WHERE e.event_date > now() - interval '2 days'
Try explicitly querying:
WHERE e.event_date > '2006-06-11 20:15:00'
In my understanding 7.4 does not precalculate this timestamp value for the
purpose of choosing a plan.
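A quick way to see whether that is what's happening (sketch; the table name
behind the alias e is not in the quoted fragment, so events is a guess):

explain select count(*) from events e
where e.event_date > now() - interval '2 days';
-- versus the precomputed constant:
explain select count(*) from events e
where e.event_date > '2006-06-11 20:15:00';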
Greetings
Marcin