On 8 Jul 2000 03:38:38 GMT, [EMAIL PROTECTED] (Victor Aina) wrote:

> I have the following scenario. All suggestions
> and pointers appreciated. 
> A model was developed on one set of dataset.
> The model was applied to a different dataset
> (different in the sense that the records were
> obtained 6 months later). 
> Now using the old model, predictions are generated
> using the new set of variables. 
> The problem is: Compared to actual values, the
> predicted values are smaller -- i.e. there is
> underprediction.

 - no, I beg to differ.  

(You really are going to have to tell us more about the problem....)
If the predicted values are smaller, that is a bias in prediction, as
I would use the terminology.  "Underprediction" would be the
circumstance of prediction that is less extreme, toward either
extreme -- smaller in absolute value, if considering predictions
around zero.

> 
> The question: which is the "best" way to adjust
> the predictions i.e. to minimize the gap between
> predicted and actual values?
> If at all possible, I prefer not to run a new
> regression.

Best adjustment for what purpose?  
For some purposes, you might just want to add on the observed bias.
Would your reason for not running a new regression shed light on this?
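[To make the "add on the observed bias" suggestion concrete, here is a
minimal sketch -- not from the original post, and the function and
variable names are illustrative.  The observed bias is estimated as the
mean gap between actual and predicted values, and every prediction is
shifted by that amount.]

```python
def bias_corrected(predicted, actual):
    """Shift predictions by the mean (actual - predicted) gap.

    This is an additive bias correction: it removes the average
    level error but leaves the shape of the predictions unchanged.
    """
    n = len(predicted)
    # Observed bias: how much the actuals exceed the predictions on average.
    bias = sum(a - p for a, p in zip(actual, predicted)) / n
    return [p + bias for p in predicted]

# Toy illustration: predictions run about 1.0 below the actuals.
predicted = [10.0, 12.0, 14.0]
actual = [11.0, 13.5, 14.5]
adjusted = bias_corrected(predicted, actual)  # each prediction shifted up by 1.0
```

[Whether this simple shift is adequate depends on whether the bias is
constant across the range of predictions; if it grows with the
predicted value, a multiplicative adjustment or a recalibration
regression would be more appropriate.]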

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================