You didn't mention version, but 8.1.x has bitmap index scans that might
greatly speed this up...
On Sat, Apr 22, 2006 at 02:34:13PM -0700, [EMAIL PROTECTED] wrote:
Hi List
I have maybe an easy question, but I cannot find an answer. I have this
SQL query:
SELECT geom,group,production_facs
Hi, Bill,
Bill Moran wrote:
My understanding is basically that if you vacuum with the correct
frequency, you'll never need to vacuum full. This is why the
autovacuum system is so nice: it adjusts the frequency of vacuum according
to how much use the DB is getting.
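For anyone on 8.1 who wants to turn it on, a minimal postgresql.conf sketch (the threshold values below are illustrative, not recommendations):

```
# postgresql.conf (8.1) -- illustrative values only
stats_start_collector = on          # autovacuum needs the stats collector
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60             # seconds between checks
autovacuum_vacuum_threshold = 1000  # min row updates before a vacuum
autovacuum_vacuum_scale_factor = 0.4
```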
Additionally, the
On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:
all dumpfiles total about 17Gb. It has been running for 50ish hrs and up
to about the fourth file (5-6 ish Gb) and this is on a raid 5 server.
RAID5 generally doesn't bode too well for performance; that could be
part of the issue.
--
Ever since I started working with PostgreSQL I've heard the need to
watch transaction IDs. The phrase transaction ID wraparound still
gives me a shiver. Attached is a short script that works with the
monitoring system Nagios to keep an eye on transaction IDs. It should
be easy to adapt to any
Everyone here always says that RAID 5 isn't good for Postgres. We
have an Apple Xserve RAID configured with RAID 5. We chose RAID 5
because Apple said their Xserve RAID was optimized for RAID 5. Not
sure if we made the right decision though. They give an option for
formatting as RAID 0+1.
They are not equivalent. As I understand it, RAID 0+1 performs about
the same as RAID 10 when everything is working, but degrades much less
nicely in the presence of a single failed drive, and is more likely to
suffer catastrophic data loss if multiple drives fail.
-- Mark
On Tue, 2006-05-02 at
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
The script detects a wrap at 2 billion. It starts warning once one or
more databases show an age over 1 billion transactions. It reports
critical at 1.5B transactions. I hope everyone out there is vacuuming
*all* databases often.
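For reference, the age such a check is presumably watching can be queried directly (this is a standard catalog query, not the attached script itself):

```sql
-- Age, in transactions, of each database's oldest unfrozen XID.
-- Under the standard freezing policy a freshly vacuumed database
-- shows roughly 1 billion; wraparound trouble starts near 2 billion.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```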
Something seems
Vivek Khera wrote:
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
The script detects a wrap at 2 billion. It starts warning once one or
more databases show an age over 1 billion transactions. It reports
critical at 1.5B transactions. I hope everyone out there is vacuuming
*all* databases
On 5/2/06, Vivek Khera [EMAIL PROTECTED] wrote:
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
The script detects a wrap at 2 billion. It starts warning once one or
more databases show an age over 1 billion transactions. It reports
critical at 1.5B transactions. I hope everyone out there is
RE: [PERFORM] Why is plan (and performance) different on partitioned table?
If you don't have anything in the parent table br_1min, then deleting
the (presumably obsolete) pg_statistic rows for it should fix your
immediate problem. Otherwise, consider applying the attached.
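For anyone in the same spot, the deletion described above looks roughly like this (run as a superuser; br_1min is the parent table in question):

```sql
-- Drop the stale planner statistics for the empty parent table.
DELETE FROM pg_statistic WHERE starelid = 'br_1min'::regclass;
-- Optionally rebuild statistics once the table holds real data:
ANALYZE br_1min;
```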
Tom,
RAID 10 is better than RAID 0+1. There is a lot of information on
the net about this, but here is the first one that popped up on
google for me.
http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html
The quick summary is that performance is about the same between the
two, but
On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:
At 10:39 06/04/29, Tom Lane wrote:
K C Lau [EMAIL PROTECTED] writes:
Without knowing the internals, I have this simplistic idea: if Postgres
maintains the current lowest transaction ID for all active
transactions, it
probably
On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
At this point, it seems like you need to do 2 things:
1) Schedule lazy vacuum to run, or configure autovacuum.
2) Schedule some downtime to run vacuum full to recover some disk space.
#2 only needs to be done once to get you back on
On Tue, May 02, 2006 at 12:06:30 -0700,
Tony Wasson [EMAIL PROTECTED] wrote:
Ah thanks, it's a bug in my understanding of the thresholds.
With the standard freezing policy, the age column will start at one
billion for a freshly-vacuumed database.
So essentially, 1B is normal, 2B is the
Mark Liberman [EMAIL PROTECTED] writes:
Now, the potential bug:
It appears that after you truncate a table, the statistics for that
table still remain in pg_statistic.
That's intentional, on the theory that when the table is re-populated
the new contents will probably resemble the old.
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It takes time to format the (large) result set for display.
2/ It has to count the time spent waiting for the (large) result
On Mon, May 01, 2006 at 01:54:49PM +0200, Steinar H. Gunderson wrote:
On Mon, May 01, 2006 at 03:05:54AM -0500, Scott Sipe wrote:
So, my question is, before I do any further digging, is super-smack
flawed?
It's sort of hard to say without looking at the source -- it certainly isn't
a
On 5/2/06, Bruno Wolff III [EMAIL PROTECTED] wrote:
On Tue, May 02, 2006 at 12:06:30 -0700,
Tony Wasson [EMAIL PROTECTED] wrote:
Ah thanks, it's a bug in my understanding of the thresholds.
With the standard freezing policy, the age column will start at one
billion for a freshly-vacuumed
On Mon, May 01, 2006 at 07:35:02PM -0400, Tom Lane wrote:
Nolan Cafferky [EMAIL PROTECTED] writes:
But, I'm guessing that random_page_cost = 1 is not a realistic value.
Well, that depends. If all your data can be expected to fit in memory
then it is a realistic value. (If not, you should
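A low-risk way to test that is per session rather than in postgresql.conf (1.0 is only sensible when the working set is fully cached; the default is 4):

```sql
SET random_page_cost = 1.0;  -- session-local experiment; value illustrative
-- Re-run EXPLAIN ANALYZE on the slow query and compare plans before
-- considering the change for postgresql.conf.
```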
show all and grep are your friend. From my laptop with 8.1:
[EMAIL PROTECTED]:36]~:4%psql -tc 'show all' | grep vacuum_cost_delay|tr -s ' '
autovacuum_vacuum_cost_delay | -1 | Vacuum cost delay in milliseconds, for autovacuum.
vacuum_cost_delay | 0 | Vacuum cost delay in milliseconds.
[EMAIL
On Tue, May 02, 2006 at 03:03:40PM -0400, Alvaro Herrera wrote:
That's right, because a database's age is only decremented in
database-wide vacuums. (Wow, who wouldn't want a person-wide vacuum if
it did the same thing ...)
The heck with age, I'd take a person-wide vacuum if it just got rid
On Tue, May 02, 2006 at 12:06:30PM -0700, Tony Wasson wrote:
Ah thanks, it's a bug in my understanding of the thresholds.
With the standard freezing policy, the age column will start at one
billion for a freshly-vacuumed database.
So essentially, 1B is normal, 2B is the max. The logic is
BTW, you should be able to check to see what the controller is actually
doing by pulling one of the drives from a running array. If it only
hammers 2 drives during the rebuild, it's RAID10. If it hammers all the
drives, it's 0+1.
As for Xserve raid, it is possible to eliminate most (or maybe even
On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It takes time to format the (large) result set for display.
2/ It has
On Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:
Thanks.
My first check was of course a grep/search of postgresql.conf; next it was
a complete source grep for vacuum_cost_delay.
It's there in head...
[EMAIL PROTECTED]:52]~/pgsql/HEAD/src:4%grep -ri vacuum_cost_delay *|wc -l
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop join because it estimates there will only be a single row.
There are really thousands
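A quick session-local diagnostic (not a fix) is to forbid the planner from using a nested loop and see whether runtime improves; if it does, the single-row estimate is the culprit:

```sql
SET enable_nestloop = off;   -- diagnosis only; do not leave this off globally
EXPLAIN ANALYZE
SELECT ...;                  -- the affected query goes here
RESET enable_nestloop;
```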
On Tue, May 02, 2006 at 06:49:48PM -0400, Jan de Visser wrote:
On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
Pgadmin can give misleading times for queries that return large result
sets over a network, due to:
1/ It
On Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:
I've got a quick and stupid question: Does Postgres 7.4 (7.x) support
vacuum_cost_delay?
No, it does not; it was introduced in 8.0.
/* Steinar */
--
Homepage: http://www.sesse.net/
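For 8.0 and later, the relevant postgresql.conf knobs look like this (values illustrative):

```
# postgresql.conf (8.0+)
vacuum_cost_delay = 10    # ms to sleep when the cost limit is reached (0 = off)
vacuum_cost_limit = 200   # accumulated page-cost that triggers the sleep
```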
My database is used primarily in an OLAP-type environment. Sometimes my
users get a little carried away and find some way to slip past the
sanity filters in the applications and end up bogging down the server
with queries that run for hours and hours. And, of course, what users
tend to do is
Jim C. Nasby [EMAIL PROTECTED] wrote:
On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
At this point, it seems like you need to do 2 things:
1) Schedule lazy vacuum to run, or configure autovacuum.
2) Schedule some downtime to run vacuum full to recover some disk space.
#2
On 5/2/06, Dan Harris [EMAIL PROTECTED] wrote:
My database is used primarily in an OLAP-type environment. Sometimes my
users get a little carried away and find some way to slip past the
sanity filters in the applications and end up bogging down the server
with queries that run for hours and
On Apr 25, 2006, at 19:36, Tom Lane wrote:
Try one of the actual queries from the plpgsql function.
Here we go:
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser + $3
try-# FROM generate_series(1,
On May 2, 2006, at 16:49, David Wheeler wrote:
On Apr 25, 2006, at 19:36, Tom Lane wrote:
Try one of the actual queries from the plpgsql function.
Here we go:
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser
Tom Lane wrote
You should be using SIGINT, not SIGTERM.
regards, tom lane
Thank you very much for clarifying this point! It works :)
---(end of broadcast)---
TIP 6: explain analyze is your friend
Hi,
On Tue, 2006-05-02 at 17:19 -0600, Dan Harris wrote:
Is there some way I can just kill a query and not risk breaking
everything else when I do it?
Use pg_stat_activity view to find the pid of the process (procpid
column) and send the signal to that process. I think you are now killing
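In 8.x that amounts to something like the following (pid 12345 is illustrative); pg_cancel_backend() sends SIGINT, which cancels the query without taking the backend down:

```sql
-- Find the offending backend (the column is procpid in this era):
SELECT procpid, usename, current_query FROM pg_stat_activity;

-- Cancel just that query; equivalent to kill -INT on the process.
SELECT pg_cancel_backend(12345);
```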
On May 2, 2006, at 16:52, David Wheeler wrote:
Actually looks pretty good to me. Although is generate_series()
being rather slow?
Scratch that:
Bah, dammit, there were no rows in that relevant table. Please
disregard my previous EXPLAIN ANALYZE posts.
I've re-run my script and
There is also the statement_timeout setting in postgresql.conf, but
you have to be careful with this setting. I'm not sure about
postgres 8.0 or 8.1, but in 7.4.5 this setting will terminate the
COPY statements used by pg_dumpall for backups. So I actually use
the pg_stat_activity table
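One way to get the benefit without breaking pg_dump is to set the timeout per role instead of server-wide; a sketch with a hypothetical role name:

```sql
-- Only the application role gets the timeout; the superuser that
-- pg_dump connects as is unaffected. 60000 ms is illustrative.
ALTER USER webapp SET statement_timeout = 60000;
```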
Hi Jim,
The output from bonnie on my boot drive is:
File './Bonnie.27964', size: 0
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 2...Seeker 1...Seeker 3...start 'em...done...done...done...
Ian Burrell [EMAIL PROTECTED] writes:
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop join because it estimates there will only be