"Ed L." <[EMAIL PROTECTED]> writes:
> If I could say it the way I think for a simple example, it'd be
> like this:
> delete from mytable
> where posteddatetime < now() - '90 days'
> limit 100;
> Of course, that's not legal 7.3.4 syntax.
Assuming you have a primary key on the table
Josh Berkus wrote:
> John,
>
> > I think the key is to get the planner to correctly ballpark the number of
> > rows in the date range. If it does, I can't imagine it ever deciding to
> > read 1,000,000 rows instead of 1,000 with any sane "cost" setting. I'm
> > assuming the defaults are sane :)
On Mon, Feb 23, 2004 at 19:10:57 -0700,
"Ed L." <[EMAIL PROTECTED]> wrote:
>
> A 7.3.4 question...
>
> I want to "expire" some data after 90 days, but not delete too
> much at once so as not to overwhelm a system with precariously
> balanced disk I/O and on a table with millions of rows. If I
Of course, that's not legal 7.3.4 syntax. These are both too
slow due to a sequential scan of the table:
delete from mytable where key in (
select key
from mytable
where posteddatetime < now() - '90 days'
limit 100);
Upgrade to 7.4
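The batching idea in the thread (delete at most N expired rows per pass, keyed by the primary key, and loop until nothing is left) can be sketched as follows. This is an illustration only: it uses SQLite through Python's sqlite3 module rather than PostgreSQL 7.x, and the table name, column names, and the integer "age in days" stand-in for posteddatetime are all invented.

```python
# Sketch of batched expiry: delete at most batch_size old rows per pass.
# Uses SQLite for a self-contained demo; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (key INTEGER PRIMARY KEY, age_days INTEGER)")
conn.executemany(
    "INSERT INTO mytable (key, age_days) VALUES (?, ?)",
    [(i, i) for i in range(1000)],  # keys 0..999; age_days > 90 means "expired"
)

def delete_batch(conn, batch_size=100):
    """Delete at most batch_size expired rows; return how many were removed."""
    cur = conn.execute(
        """DELETE FROM mytable
           WHERE key IN (SELECT key FROM mytable
                         WHERE age_days > 90
                         LIMIT ?)""",
        (batch_size,),
    )
    return cur.rowcount

total = 0
while True:
    n = delete_batch(conn)
    if n == 0:
        break
    total += n

print(total)  # 909: keys 91..999 had age_days > 90
```

Each pass touches at most batch_size rows, so the I/O load stays bounded; looping until the delete removes zero rows works through the whole backlog.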
A 7.3.4 question...
I want to "expire" some data after 90 days, but not delete too
much at once so as not to overwhelm a system with precariously
balanced disk I/O and on a table with millions of rows. If I
could say it the way I think for a simple example, it'd be
like this:
delete from mytable
where posteddatetime < now() - '90 days'
limit 100;
On Mon, 2004-02-23 at 12:04 -0800, Josh Berkus wrote:
> Eric,
>
> > Nested Loop  (cost=189.00..27359887.76 rows=607947200 width=22)
> >   Join Filter: ("outer".address <<= "inner".address)
> >   ->  Seq Scan on clients c  (cost=0.00..2074.76 rows=102176 width=11)
> >   ->  Materi
Simon Riggs wrote:
Sean Shanny
Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two
7-disk hardware-based RAID5 sets software striped to form a RAID50 set.
The DB, WALS, etc are all on that file set. Running OSX journaled
>Sean Shanny
> Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
> channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two
> 7-disk hardware-based RAID5 sets software striped to form a RAID50 set.
> The DB, WALS, etc are all on that file set. Running OSX journaled
Scott,
> I am certainly open to any suggestions on how to deal with speed issues
> on these sorts of large tables, it isn't going to go away for us. :-(
I'm not sure what to suggest. I can't think of anything off the top of my
head that would improve cripplingly slow random seek times.
This
Scott,
We did try clustering on the date_key for the fact table below for a
month's worth of data, as most of our requests for data are date-range
based, i.e. get me info for the time period between 2004-02-01 and
2004-02-07. This normally results in a plan that is doing an index scan
on the da
On Sun, 22 Feb 2004, Sean Shanny wrote:
> Tom,
>
> We have the following setting for random page cost:
>
> random_page_cost = 1        # units are one sequential page fetch cost
>
> Any suggestions on what to bump it up to?
>
> We are waiting to hear back from Apple on the speed issues, so
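A hedged note on the setting quoted above: random_page_cost = 1 tells the planner that a random page fetch costs the same as a sequential one, which only holds when the working set is fully cached. The default in the 7.x series was 4. The value below is an illustrative assumption for a disk-bound workload, not advice taken from the thread:

```
# postgresql.conf sketch -- the value 3 is a hypothetical middle ground,
# not a recommendation from this thread
random_page_cost = 3    # 7.x default was 4; 1 assumes everything is cached
```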
On Mon, Feb 23, 2004 at 12:48:02PM +0100, Eric Jain wrote:
> I'm trying to join two tables on an inet column, where one of the
> columns may contain a subnet rather than a single host. Somehow the
> operation isn't completing quite fast enough, even though neither table
> is very large:
>
>
I'm trying to join two tables on an inet column, where one of the
columns may contain a subnet rather than a single host. Somehow the
operation isn't completing quite fast enough, even though neither table
is very large:
 table   | rows
---------+------
 clients
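PostgreSQL's inet <<= operator asks whether the left-hand value is contained within or equal to the right-hand network, which is what makes this join expensive (it is not a plain equality the planner can hash or index in 7.x). The same predicate can be checked with Python's ipaddress module; the sample addresses below are invented for illustration.

```python
# Python analogue of PostgreSQL's `addr <<= net` (contained within or equals).
# The addresses are made-up examples, not data from the thread.
import ipaddress

def contained_or_equal(addr, net):
    """True if addr's network is a subnet of (or equal to) net."""
    return ipaddress.ip_network(addr, strict=False).subnet_of(
        ipaddress.ip_network(net, strict=False)
    )

print(contained_or_equal("192.168.1.42/32", "192.168.1.0/24"))  # True
print(contained_or_equal("10.0.0.0/8", "10.0.0.0/8"))           # True (equals)
print(contained_or_equal("172.16.0.1/32", "192.168.0.0/16"))    # False
```

Because the predicate is a containment test, each client row can potentially match many subnet rows, which is why the nested-loop plan quoted earlier estimates hundreds of millions of result rows.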