Suppose I have a table - let's say it was developed by someone with little
or no understanding of database design - and it has 230 columns.

Now, it turns out that 99% of the time only about 8 columns are required,
but all 230 columns are populated.  However, legacy applications (which
run nightly for batch processing, but not during the day, when heavy
volume occurs) require the old table design.  New applications select
only a limited number of columns, but require the original column names.

I want to do the following:

1.  Split the table into two tables (one with 6 columns, the other with
224 columns), using the primary key to establish a 1-1 relationship
between them.
2.  Create a view that presents the join between the two tables, so it
looks like the original table (a sketch follows below).
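
For concreteness, here's roughly what I have in mind (all table and
column names below are made up for illustration):

    -- Narrow table: the handful of frequently-used columns
    CREATE TABLE widget_hot (
        widget_id  integer PRIMARY KEY,
        col_a      text,
        col_b      text
    );

    -- Wide table: everything else, sharing the same primary key
    CREATE TABLE widget_cold (
        widget_id  integer PRIMARY KEY
                   REFERENCES widget_hot (widget_id),
        col_c      text,
        col_d      text
        -- ...plus the remaining ~220 rarely-used columns
    );

    -- View with the original table's name and shape, so the legacy
    -- applications keep working unchanged
    CREATE VIEW widget AS
        SELECT h.widget_id, h.col_a, h.col_b, c.col_c, c.col_d
          FROM widget_hot h
          JOIN widget_cold c ON c.widget_id = h.widget_id;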

By doing this, I'm hoping the query optimizer is smart enough to see that
when a query requests only the six columns in the narrower table,
PostgreSQL won't have to load the wider table into the buffer pool at
all, and will therefore touch only about 10% of the disk it presently
does.

Is this a sound theory?  Is the query optimizer smart enough to ignore
part of a join when that portion of the join will have no effect on the
final result set?
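
I realize I can test a specific case with EXPLAIN against the sketch
above, e.g.:

    -- If the planner can skip the unused half of the join, widget_cold
    -- should not appear anywhere in this plan
    EXPLAIN SELECT widget_id, col_a, col_b FROM widget;

but I'm asking about the general behavior.  (One thing I wonder: since an
inner join can drop rows that have no match, perhaps the view would need
a LEFT JOIN, together with the key constraints shown above, before the
planner could safely discard the wide table.)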

thanks

