Bert wrote:
No, I didn't define any indexes for the table. I know performance
will improve with an index, but that was not my question. My question
rather concerns the access mode of the SQL statement.
Furthermore, I do not understand why the upper() function should increase
the performa
Jim,
Have you seen this happening only on W2k3? I am wondering if I should try
out 2000 Pro or XP Pro.
Not my first choice, but if it works...
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 02, 2006 3:29 PM
To: Mark Kirkwood
Cc: Gregory Stewart; pgsql
I am using the onboard NVRAID controller. It has to be configured in the
BIOS, and Windows needs a RAID driver at install time to even see the RAID drive.
But the onboard controller still utilizes system resources, so it is not a
"pure" software RAID but a mix of hardware (controller) and software, I guess.
On Wed, 26 Apr 2006 23:55:24 -0500, Jim C. Nasby wrote:
> On Wed, Apr 26, 2006 at 07:35:42PM -0700, Steve Wampler wrote:
>> On Wed, Apr 26, 2006 at 10:06:58PM -0400, Ketema Harris wrote:
>> > I was wondering if there were any performance issues with having a data
>> > directory that was an nfs moun
I have a quite large query that takes over a minute to run on my laptop.
On the db server it takes almost 20 seconds, but I have 200+ concurrent
users who will be running similar queries, and during the query the
I/O goes berserk: I see 30 MB/s of reads (iostat says so). So, before
going into deno
Tom Lane wrote:
Eric Lam <[EMAIL PROTECTED]> writes:
what is the quickest way of dumping a DB and restoring it? I have done a
"pg_dump -D database | split --line-bytes 1546m part"
Don't use "-D" if you want fast restore ...
regards, tom lane
No, I didn't define any indexes for the table. I know performance
will improve with an index, but that was not my question. My question
rather concerns the access mode of the SQL statement.
Furthermore, I do not understand why the upper() function should increase
the performance.
The table
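The upper() question above usually comes down to expression indexes: a plain index on a column cannot serve a predicate written as upper(column), but an index on the expression itself can. A minimal sketch, with invented table and column names:

```sql
-- Hypothetical table; a plain index on "name" does not help a
-- case-insensitive search written with upper():
CREATE INDEX customers_name_idx ON customers (name);

-- An expression index matches the predicate and enables an index scan:
CREATE INDEX customers_upper_name_idx ON customers (upper(name));

-- This query can now use customers_upper_name_idx:
SELECT * FROM customers WHERE upper(name) = 'SMITH';
```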
"Ian Burrell" <[EMAIL PROTECTED]> writes:
> We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
> having performance problems and running for very long times. The
> commonality seems to be PostgreSQL 8.1 is choosing to use a nested
> loop join because it estimates there will be o
Hi Jim,
The output from bonnie on my boot drive is:
File './Bonnie.27964', size: 0
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 2...Seeker 1...Seeker 3...start 'em...done...done...done...
---
There is also the statement_timeout setting in postgresql.conf, but
you have to be careful with this setting. I'm not sure about
postgres 8.0 or 8.1, but in 7.4.5 this setting will terminate the
COPY statements used by pg_dumpall for backups. So I actually use
the pg_stat_activity table t
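The caveat above about statement_timeout terminating pg_dumpall's COPY is why it is safer to set it per session rather than server-wide in postgresql.conf. A sketch, with the value given in milliseconds as in the 7.4/8.0 era:

```sql
-- Limit only the current session; backup connections are unaffected:
SET statement_timeout = 60000;  -- 60 seconds (value is in milliseconds)

-- Disable it again for long-running work in the same session:
SET statement_timeout = 0;
```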
On May 2, 2006, at 16:52, David Wheeler wrote:
Actually looks pretty good to me. Although is generate_series()
being rather slow?
Scratch that:
Bah, dammit, there were no rows in that relevant table. Please
disregard my previous EXPLAIN ANALYZE posts.
I've re-run my script and populated
Hi,
On Tue, 2006-05-02 at 17:19 -0600, Dan Harris wrote:
> Is there some way I can just kill a query and not risk breaking
> everything else when I do it?
Use the pg_stat_activity view to find the pid of the process (procpid
column) and send the signal to that process. I think you are now killing
po
Tom Lane wrote
You should be using SIGINT, not SIGTERM.
regards, tom lane
Thank you very much for clarifying this point! It works :)
---(end of broadcast)---
TIP 6: explain analyze is your friend
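The safe sequence described in this thread, with a hypothetical PID (the column is procpid in releases of this era):

```sql
-- 1. Find the backend running the runaway query:
SELECT procpid, current_query FROM pg_stat_activity;

-- 2. From the shell, cancel just that query with SIGINT
--    (SIGTERM here risks taking down more than the one backend):
--      kill -INT 12345
```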
On May 2, 2006, at 16:49, David Wheeler wrote:
On Apr 25, 2006, at 19:36, Tom Lane wrote:
Try one of the actual queries from the plpgsql function.
Here we go:
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser
On Apr 25, 2006, at 19:36, Tom Lane wrote:
Try one of the actual queries from the plpgsql function.
Here we go:
try=# PREPARE foo(int, int[], int) AS
try-# INSERT INTO entry_coll_tag (entry_id, tag_id, ord )
try-# SELECT $1, $2[gs.ser], gs.ser + $3
try-# FROM generate_series(1, array_upper(
On 5/2/06, Dan Harris <[EMAIL PROTECTED]> wrote:
My database is used primarily in an OLAP-type environment. Sometimes my
users get a little carried away and find some way to slip past the
sanity filters in the applications and end up bogging down the server
with queries that run for hours and ho
Dan Harris <[EMAIL PROTECTED]> writes:
> So, I have been searching for a way to kill an individual query. I read
> in the mailing list archives that you could 'kill' the pid. I've tried
> this a few times and more than once, it has caused the postmaster to
> die(!), terminating every query tha
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> > At this point, it seems like you need to do 2 things:
> > 1) Schedule lazy vacuum to run, or configure autovacuum.
> > 2) Schedule some downtime to run "vacuum full" to recover some disk spa
My database is used primarily in an OLAP-type environment. Sometimes my
users get a little carried away and find some way to slip past the
sanity filters in the applications and end up bogging down the server
with queries that run for hours and hours. And, of course, what users
tend to do is
On Mon, May 01, 2006 at 02:40:41PM -0400, Chris Mckenzie wrote:
> I've got a quick and stupid question: Does Postgres 7.4 (7.x) support
> vacuum_cost_delay?
No, it does not; it was introduced in 8.0.
/* Steinar */
--
Homepage: http://www.sesse.net/
On Tue, May 02, 2006 at 06:49:48PM -0400, Jan de Visser wrote:
> On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
> > On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
> > > Pgadmin can give misleading times for queries that return large result
> > > sets over a network, due to:
> > >
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop join because it estimates there will only be a single row.
There are really thousands o
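Two standard ways to chase a misestimate like this, with invented table and column names, are raising the statistics target on the join column and testing the plan with nested loops disabled for one session:

```sql
-- Give the planner more data to estimate row counts from:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;

-- Session-local experiment: does the plan improve without nested loops?
SET enable_nestloop = off;
EXPLAIN ANALYZE
SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;
```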
On Tue, May 02, 2006 at 05:47:15PM -0400, Chris Mckenzie wrote:
> Thanks.
>
> My first check was of course a grep/search of the postgres.conf, next it was
> a complete source grep for vacuum_cost_delay.
It's there in head...
[EMAIL PROTECTED]:52]~/pgsql/HEAD/src:4%grep -ri vacuum_cost_delay *|wc
On Tuesday 02 May 2006 16:28, Jim C. Nasby wrote:
> On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
> > Pgadmin can give misleading times for queries that return large result
> > sets over a network, due to:
> >
> > 1/ It takes time to format the (large) result set for display.
> > 2
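To separate server execution time from the client-side formatting and transfer overhead described above, measure on the server instead, e.g.:

```sql
-- EXPLAIN ANALYZE runs the query and reports planning and execution
-- time without rendering the result set in a client GUI
-- (table name is invented):
EXPLAIN ANALYZE SELECT * FROM some_large_table;
```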
BTW, you should be able to check to see what the controller is actually
doing by pulling one of the drives from a running array. If it only
hammers 2 drives during the rebuild, it's RAID10. If it hammers all the
drives, it's 0+1.
As for Xserve raid, it is possible to eliminate most (or maybe even
RE: [PERFORM] Postgres 7.4 and vacuum_cost_delay.
Thanks.
My first check was of course a grep/search of the postgres.conf, next it was a complete source grep for vacuum_cost_delay.
I've come to the conclusion I need to simply start tracking all transactions and determining a cost/per
On Tue, May 02, 2006 at 12:06:30PM -0700, Tony Wasson wrote:
> Ah thanks, it's a bug in my understanding of the thresholds.
>
> "With the standard freezing policy, the age column will start at one
> billion for a freshly-vacuumed database."
>
> So essentially, 1B is normal, 2B is the max. The log
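The thresholds under discussion can be checked directly; with the standard freezing policy a freshly-vacuumed database reports an age of about one billion, and wraparound looms at two billion:

```sql
-- Per-database transaction age, oldest first:
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;
```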
On Tue, May 02, 2006 at 03:03:40PM -0400, Alvaro Herrera wrote:
> That's right, because a database's age is only decremented in
> database-wide vacuums. (Wow, who wouldn't want a person-wide vacuum if
> it did the same thing ...)
The heck with age, I'd take a person-wide vacuum if it just got rid
show all and grep are your friend. From my laptop with 8.1:
[EMAIL PROTECTED]:36]~:4%psql -tc 'show all' | grep vacuum_cost_delay|tr -s ' '
autovacuum_vacuum_cost_delay | -1 | Vacuum cost delay in milliseconds, for autovacuum.
vacuum_cost_delay | 0 | Vacuum cost delay in milliseconds.
[EMAIL PROT
On Mon, May 01, 2006 at 07:35:02PM -0400, Tom Lane wrote:
> Nolan Cafferky <[EMAIL PROTECTED]> writes:
> > But, I'm guessing that random_page_cost = 1 is not a realistic value.
>
> Well, that depends. If all your data can be expected to fit in memory
> then it is a realistic value. (If not, you
On 5/2/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Tue, May 02, 2006 at 12:06:30 -0700,
Tony Wasson <[EMAIL PROTECTED]> wrote:
>
> Ah thanks, it's a bug in my understanding of the thresholds.
>
> "With the standard freezing policy, the age column will start at one
> billion for a freshly-
On Mon, May 01, 2006 at 01:54:49PM +0200, Steinar H. Gunderson wrote:
> On Mon, May 01, 2006 at 03:05:54AM -0500, Scott Sipe wrote:
> > So, my question is, before I do any further digging, is super-smack
> > flawed?
>
> It's sort of hard to say without looking at the source -- it certainly isn't
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
> Pgadmin can give misleading times for queries that return large result
> sets over a network, due to:
>
> 1/ It takes time to format the (large) result set for display.
> 2/ It has to count the time spent waiting for the (large) res
"Mark Liberman" <[EMAIL PROTECTED]> writes:
> Now, the potential bug:
> It appears that after you truncate a table, the statistics for that
> table still remain in pg_statistic.
That's intentional, on the theory that when the table is re-populated
the new contents will probably resemble the ol
On Tue, May 02, 2006 at 12:06:30 -0700,
Tony Wasson <[EMAIL PROTECTED]> wrote:
>
> Ah thanks, it's a bug in my understanding of the thresholds.
>
> "With the standard freezing policy, the age column will start at one
> billion for a freshly-vacuumed database."
>
> So essentially, 1B is normal,
On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> At this point, it seems like you need to do 2 things:
> 1) Schedule lazy vacuum to run, or configure autovacuum.
> 2) Schedule some downtime to run "vacuum full" to recover some disk space.
>
> #2 only needs done once to get you back on
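A sketch of the two steps above, assuming generic usage; plain VACUUM can run alongside normal traffic, while VACUUM FULL takes an exclusive lock and belongs in a maintenance window:

```sql
-- 1. Routine maintenance (schedule frequently, or enable autovacuum):
VACUUM ANALYZE;

-- 2. One-off space recovery during scheduled downtime:
VACUUM FULL;
```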
On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:
>
> At 10:39 06/04/29, Tom Lane wrote:
> >K C Lau <[EMAIL PROTECTED]> writes:
> >> Without knowing the internals, I have this simplistic idea: if Postgres
> >> maintains the current lowest transaction ID for all active
> >transactions, it
>
RAID 10 is better than RAID 0+1. There is a lot of information on
the net about this, but here is the first one that popped up on
google for me.
http://www.pcguide.com/ref/hdd/perf/raid/levels/multLevel01-c.html
The quick summary is that performance is about the same between the
two, but
RE: [PERFORM] Why is plan (and performance) different on partitioned table?
>If you don't have anything in the parent table br_1min, then deleting
>the (presumably obsolete) pg_statistic rows for it should fix your
>immediate problem. Otherwise, consider applying the attached.
Tom,
On May 2, 2006, at 3:03 PM, Alvaro Herrera wrote:
Something seems wrong... I just ran your script against my
development database server which is vacuumed daily and it said I was
53% of the way to 2B. Seemed strange to me, so I re-ran "vacuumdb -a -z"
to vacuum all databases (as superuser), rer
Alvaro Herrera wrote:
> Vivek Khera wrote:
> >
> > On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
> >
> > >The script detects a wrap at 2 billion. It starts warning once one or
> > >more databases show an age over 1 billion transactions. It reports
> > >critical at 1.5B transactions. I hope every
On 5/2/06, Vivek Khera <[EMAIL PROTECTED]> wrote:
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
> The script detects a wrap at 2 billion. It starts warning once one or
> more databases show an age over 1 billion transactions. It reports
> critical at 1.5B transactions. I hope everyone out ther
Vivek Khera wrote:
>
> On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
>
> >The script detects a wrap at 2 billion. It starts warning once one or
> >more databases show an age over 1 billion transactions. It reports
> >critical at 1.5B transactions. I hope everyone out there is vacuuming
> >*all*
On May 2, 2006, at 2:26 PM, Tony Wasson wrote:
The script detects a wrap at 2 billion. It starts warning once one or
more databases show an age over 1 billion transactions. It reports
critical at 1.5B transactions. I hope everyone out there is vacuuming
*all* databases often.
Something seems
They are not equivalent. As I understand it, RAID 0+1 performs about
the same as RAID 10 when everything is working, but degrades much less
nicely in the presence of a single failed drive, and is more likely to
suffer catastrophic data loss if multiple drives fail.
-- Mark
On Tue, 2006-05-02 at
Everyone here always says that RAID 5 isn't good for Postgres. We
have an Apple Xserve RAID configured with RAID 5. We chose RAID 5
because Apple said their Xserve RAID was "optimized" for RAID 5. Not
sure if we made the right decision though. They give an option for
formatting as RAID 0+1.
Ever since I started working with PostgreSQL I've heard the need to
watch transaction IDs. The phrase "transaction ID wraparound" still
gives me a shiver. Attached is a short script that works with the
monitoring system Nagios to keep an eye on transaction IDs. It should
be easy to adapt to any ot
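As a rough illustration of the kind of check described (not the attached script itself; the thresholds follow the thread, warning at 1B and critical at 1.5B, and the psql invocation shown in the comment is an assumption):

```shell
#!/bin/sh
# Classify a transaction-ID age the way a Nagios plugin would:
# exit 0 = OK, 1 = WARNING, 2 = CRITICAL.
check_age() {
    AGE=$1
    WARN=1000000000    # 1B: start warning
    CRIT=1500000000    # 1.5B: critical
    if [ "$AGE" -ge "$CRIT" ]; then
        echo "CRITICAL: xid age $AGE"; return 2
    elif [ "$AGE" -ge "$WARN" ]; then
        echo "WARNING: xid age $AGE"; return 1
    else
        echo "OK: xid age $AGE"; return 0
    fi
}

# In a real plugin the age would come from the server, e.g.:
#   AGE=$(psql -At -c "SELECT max(age(datfrozenxid)) FROM pg_database;")
check_age 1100000000   # prints: WARNING: xid age 1100000000
```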
On Wed, Apr 26, 2006 at 05:14:41PM +0930, Eric Lam wrote:
all dumpfiles total about 17 Gb. It has been running for 50-ish hrs and is up
to about the fourth file (5-6-ish Gb), and this is on a RAID 5 server.
RAID5 generally doesn't bode too well for performance; that could be
part of the issue.
--
Hi, Bill,
Bill Moran wrote:
> My understanding is basically that if you vacuum with the correct
> frequency, you'll never need to vacuum full. This is why the
> autovacuum system is so nice, it adjusts the frequency of vacuum according
> to how much use the DB is getting.
Additionally, the "free
You didn't mention version, but 8.1.x has bitmap index scans that might
greatly speed this up...
On Sat, Apr 22, 2006 at 02:34:13PM -0700, [EMAIL PROTECTED] wrote:
Hi List,
I have maybe an easy question, but I do not find an answer. I have this
SQL query:
>
> SELECT geom,group,production_facs