Re: [PERFORM] [PERFORMANCE] slow small delete on large table

2004-02-23 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes: > If I could say it the way I think for a simple example, it'd be > like this: > delete from mytable > where posteddatetime < now() - '90 days' > limit 100; > Of course, that's not legal 7.3.4 syntax. Assuming you have a primary key on the ta

Re: [PERFORM] Column correlation drifts, index ignored again

2004-02-23 Thread Kevin Brown
Josh Berkus wrote: > John, > > > I think the key is to get the planner to correctly ballpark the number of > > rows in the date range. If it does, I can't imagine it ever deciding to > > read 1,000,000 rows instead of 1,000 with any sane "cost" setting. I'm > > assuming the defaults are sane :)
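
A generic way to help the planner ballpark the rows in a date range (a sketch with hypothetical names, not taken from the thread) is to raise the statistics target on the date column and re-analyze, then compare the estimate with reality:

    -- Hypothetical table/column names; shows the technique only.
    ALTER TABLE mytable ALTER COLUMN posteddatetime SET STATISTICS 100;
    ANALYZE mytable;

    -- "rows=" in the plan is the estimate; "actual ... rows=" is the truth.
    EXPLAIN ANALYZE
    SELECT count(*)
      FROM mytable
     WHERE posteddatetime >= now() - '7 days'::interval;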

Re: [PERFORM] [PERFORMANCE] slow small delete on large table

2004-02-23 Thread Bruno Wolff III
On Mon, Feb 23, 2004 at 19:10:57 -0700, "Ed L." <[EMAIL PROTECTED]> wrote: > > A 7.3.4 question... > > I want to "expire" some data after 90 days, but not delete too > much at once so as not to overwhelm a system with precariously > balanced disk I/O and on a table with millions of rows. If I

Re: [PERFORM] [PERFORMANCE] slow small delete on large table

2004-02-23 Thread Christopher Kings-Lynne
Of course, that's not legal 7.3.4 syntax. These are both too slow due to sequential scan of table: delete from mytable where key in ( select key from mytable where posteddatetime < now() - '90 days' limit 100); Upgrade to 7.4
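
For context on the "Upgrade to 7.4" advice: 7.4 plans an uncorrelated IN (subquery) as a hashed subplan, so the quoted statement becomes one pass over mytable with cheap hash probes rather than a subplan rescanned for every row. The sketch below is the same statement with the interval spelled out explicitly; the outer scan of mytable is still a full pass.

    -- Reasonable on 7.4 for an occasional maintenance delete.
    DELETE FROM mytable
     WHERE key IN (SELECT key
                     FROM mytable
                    WHERE posteddatetime < now() - '90 days'::interval
                    LIMIT 100);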

[PERFORM] [PERFORMANCE] slow small delete on large table

2004-02-23 Thread Ed L.
A 7.3.4 question... I want to "expire" some data after 90 days, but not delete too much at once so as not to overwhelm a system with precariously balanced disk I/O and on a table with millions of rows. If I could say it the way I think for a simple example, it'd be like this: delete fr
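
An alternative batching idea (illustrative only, not from the thread): bound each DELETE by a time slice instead of a row count, so no single statement touches more than one window's worth of rows. This is legal 7.3 syntax.

    -- Assumes an index on posteddatetime. Each run trims a one-day slice just
    -- past the 90-day cutoff; repeat with progressively older windows to work
    -- through the backlog.
    DELETE FROM mytable
     WHERE posteddatetime <  now() - '90 days'::interval
       AND posteddatetime >= now() - '91 days'::interval;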

Re: [PERFORM] Slow join using network address function

2004-02-23 Thread Markus Bertheau
On Mon, 2004-02-23 at 12:04 -0800, Josh Berkus wrote: > Eric, > > > Nested Loop > > (cost=189.00..27359887.76 rows=607947200 width=22) > >Join Filter: ("outer".address <<= "inner".address) > >-> Seq Scan on clients c > >(cost=0.00..2074.76 rows=102176 width=11) > >-> Materi
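
The plan above applies <<= only as a join filter, so every client row is compared against every row of the other table; plain inet columns had no index support for containment in that era. A common workaround at the time (sketched here with "networks" standing in for the second table, whose name is cut off in the preview) was to bracket the containment test with ordinary range comparisons that a btree index on clients.address can use:

    -- Sketch only. Assumes clients.address holds single host addresses and
    -- has a btree index. The <<= test stays as the authoritative condition;
    -- the >= / <= bounds merely give the planner an indexable bracket.
    -- set_masklen(..., 32) turns the broadcast address into a /32 host value
    -- so the upper bound compares the way one would expect.
    SELECT c.address, n.address
      FROM networks n
      JOIN clients c
        ON c.address <<= n.address
       AND c.address >= network(n.address)
       AND c.address <= set_masklen(broadcast(n.address), 32);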

Re: [PERFORM] General performance questions about postgres on Apple

2004-02-23 Thread Sean Shanny
Simon Riggs wrote: Sean Shanny Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two 7 disk hardware based RAID5 sets software striped to form a RAID50 set. The DB, WALS, etc are all on that file set. Ru

Re: [PERFORM] General performance questions about postgres on Apple hardware...

2004-02-23 Thread Simon Riggs
>Sean Shanny > Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre > channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two 7 > disk hardware based RAID5 sets software striped to form a RAID50 set. > The DB, WALS, etc are all on that file set. Running OSX journaled

Re: [PERFORM] General performance questions about postgres on Apple

2004-02-23 Thread Josh Berkus
Scott, > I am certainly open to any suggestions on how to deal with speed issues > on these sorts of large tables, it isn't going to go away for us. :-( I'm not sure what to suggest. I can't think of anything off the top of my head that would improve cripplingly slow random seek times. This

Re: [PERFORM] General performance questions about postgres on Apple

2004-02-23 Thread Sean Shanny
Scott, We did try clustering on the date_key for the fact table below for a months worth of data as most of our requests for data are date range based, i.e. get me info for the time period between 2004-02-01 and 2004-02-07. This normally results in a plan that is doing an index scan on the da
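
A sketch of the clustering step being described (table, column, and index names are hypothetical; the DDL referred to as "below" is truncated in this preview). CLUSTER rewrites the table in index order once; rows loaded afterwards are not kept in that order, which is why it has to be repeated as new data arrives.

    -- 7.4-era syntax: CLUSTER indexname ON tablename. Assumes date_key is a
    -- date column with a btree index.
    CLUSTER fact_table_date_key_idx ON fact_table;
    ANALYZE fact_table;

    -- A date-range request of the kind described then reads mostly
    -- contiguous heap pages:
    SELECT count(*)
      FROM fact_table
     WHERE date_key BETWEEN '2004-02-01' AND '2004-02-07';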

Re: [PERFORM] General performance questions about postgres on Apple

2004-02-23 Thread scott.marlowe
On Sun, 22 Feb 2004, Sean Shanny wrote: > Tom, > > We have the following setting for random page cost: > > random_page_cost = 1 # units are one sequential page fetch cost > > Any suggestions on what to bump it up to? > > We are waiting to hear back from Apple on the speed issues, so
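
For experimenting, the setting can be changed per session before touching postgresql.conf (the table and column names below are the hypothetical ones from the sketch above). The shipped default is 4; a value of 1 tells the planner a random page fetch costs the same as a sequential one, which is what the thread is questioning.

    SET random_page_cost = 4;  -- try the default and a couple of values in
                               -- between, e.g. 2 or 1.5
    EXPLAIN ANALYZE
    SELECT count(*)
      FROM fact_table
     WHERE date_key BETWEEN '2004-02-01' AND '2004-02-07';
    -- Compare plan choice and timings across settings before editing the
    -- config file.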

Re: [PERFORM] Slow join using network address function

2004-02-23 Thread Steve Atkins
On Mon, Feb 23, 2004 at 12:48:02PM +0100, Eric Jain wrote: > I'm trying to join two tables on an inet column, where one of the > columns may contain a subnet rather than a single host. Somehow the > operation isn't completing quite fast enough, even though neither table > is very large: > >
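
Steve's reply is truncated here; independent of what it goes on to suggest, the bracketing rewrite sketched earlier in this digest only helps if clients.address carries an ordinary btree index (assumption: the column stores single host addresses):

    -- The <<= operator by itself cannot use this index, but >= / <= range
    -- bounds on the same column can.
    CREATE INDEX clients_address_idx ON clients (address);
    ANALYZE clients;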

[PERFORM] Slow join using network address function

2004-02-23 Thread Eric Jain
I'm trying to join two tables on an inet column, where one of the columns may contain a subnet rather than a single host. Somehow the operation isn't completing quite fast enough, even though neither table is very large: table| rows + clients