Rod,
  I do this; PG gets forked many times, and it is tough to find the
maximum number of concurrent COPYs I can run, but I have a Proc::Queue-based
Perl driver that manages all of the COPY calls.  I have a quad-CPU
machine.  Each COPY uses only about 2.1% of one CPU, but anything over
about five concurrent COPYs kicks the load average up.

  I'll get some EXPLAIN ANALYZE output and the table structures out there pronto.

-----Original Message-----
From: Rod Taylor [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 04, 2005 1:02 PM
To: Wager, Ryan D [NTK]
Cc: Postgresql Performance
Subject: Re: [PERFORM] query rewrite using materialized views

>   1) The 250 million records are currently wiped and reinserted as a
> "daily snapshot", and the fastest way I have found to do this from a
> file, COPY, is nowhere near fast enough.  SQL*Loader from Oracle does
> some things that I need, i.e. Direct Path access to the db files
> (skipping the RDBMS), inherently ignoring indexing rules and saving a
> ton of time (dropping the index, COPYing 250 million records, then
> recreating the index just takes way too long).

If you have the hardware for it, instead of doing one COPY, run one COPY
command per CPU (until your I/O is maxed out, anyway) and divide the work
amongst them. I can push through 100MB/sec using methods like this, which
makes loading 100GB of data much faster.

Ditto for indexes. Don't create a single index on one CPU and wait --
send off one index creation command per CPU.
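
To make that concrete, here is a minimal sketch of the pattern in
Python. Everything specific in it is assumed: the DSN, the
daily_snapshot table, the chunk file names, and the index definitions
are placeholders, and psycopg2 is assumed to be available. It plays the
same role as the Proc::Queue Perl driver mentioned above.

import multiprocessing
import psycopg2

DSN = "dbname=snapshot"                   # hypothetical connection string
CHUNKS = ["chunk.00", "chunk.01", "chunk.02", "chunk.03"]   # one per CPU
INDEXES = [
    "CREATE INDEX ds_col_a_idx ON daily_snapshot (col_a)",
    "CREATE INDEX ds_col_b_idx ON daily_snapshot (col_b)",
]

def load_chunk(path):
    # Each worker opens its own connection, so the COPYs run in
    # parallel on separate backends.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    with open(path) as f:
        cur.copy_from(f, "daily_snapshot", sep="\t")
    conn.commit()
    conn.close()

def run_sql(stmt):
    # Same idea for index builds: one connection per CREATE INDEX.
    conn = psycopg2.connect(DSN)
    conn.cursor().execute(stmt)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    multiprocessing.Pool(len(CHUNKS)).map(load_chunk, CHUNKS)
    multiprocessing.Pool(len(INDEXES)).map(run_sql, INDEXES)

Splitting the snapshot file ahead of time (e.g. with split(1)) keeps
each backend streaming from its own chunk, and sizing the pools to the
CPU count keeps the load bounded.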

>   2) Finding a way to keep this many records in a fashion that can be
> easily queried.  I even tried breaking it up into almost 2800 separate
> tables, basically pre-partitioned views of the data; if that is a
> workable method it could be done this way, but when I tried it, VACUUM
> and the COPYs all seemed to slow down dramatically.

Can you send us EXPLAIN ANALYSE output for the slow selects and a little
insight into what you're doing? The basic table structure and the indexes
involved would be handy. You may change column and table names if you
like.

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Josh Berkus
> Sent: Tuesday, January 04, 2005 12:06 PM
> To: pgsql-performance@postgresql.org
> Cc: Yann Michel
> Subject: Re: [PERFORM] query rewrite using materialized views
> 
> Yann,
> 
> > are there any plans for rewriting queries to use preexisting
> > materialized views?  I mean, rewrite a query (within the optimizer)
> > to use a materialized view instead of the originating table?
> 
> Automatically, and by default, no.  Using the RULES system?  Yes, you
> can already do this, and the folks on the MattView project on pgFoundry
> are working to make it easier.
> 