This would be a possible explanation, and overfitting certainly is a
problem in low-resolution refinements, but the free R indicates that
it is not the problem here.  (I'm assuming that a proper choice of test
set has been made in this case.)  In my experience with very isomorphous
pairs of structures, when a high-resolution model is used as the starting
point for a low-resolution refinement, even the R values before refinement
are very good, which means that fitting the noise can't be the cause.
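To make the free-R argument concrete, here is a toy sketch (my own
illustration, not anyone's refinement code) of the bookkeeping: the
crystallographic R factor is sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|),
computed separately over a working set and a held-out "free" test set.
All the amplitudes below are synthetic; a model that genuinely fits the
data, like a good high-resolution model dropped into an isomorphous
low-resolution data set, gives low and mutually consistent R-work and
R-free even before any refinement.

```python
import random

def r_factor(f_obs, f_calc):
    """R = sum| |Fobs| - |Fcalc| | / sum |Fobs| over the given amplitudes."""
    return sum(abs(o - c) for o, c in zip(f_obs, f_calc)) / sum(f_obs)

random.seed(0)
n = 1000
f_obs = [random.uniform(10.0, 100.0) for _ in range(n)]
# Synthetic "calculated" amplitudes from a model that fits everywhere
# to within a few percent -- no refinement against this data yet:
f_calc = [o * random.uniform(0.95, 1.05) for o in f_obs]

# Partition reflections BEFORE any refinement: ~5% free (test) set.
free = set(range(0, n, 20))
work = [i for i in range(n) if i not in free]

r_work = r_factor([f_obs[i] for i in work], [f_calc[i] for i in work])
r_free = r_factor([f_obs[i] for i in free], [f_calc[i] for i in free])
```

Since this model was never refined against either subset, r_work and
r_free come out nearly identical; overfitting would instead show up as
r_free drifting well above r_work during refinement.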

   Our methods today are simply not as good at fitting low-resolution data
in the absence of high-resolution data as they are in its presence.

Dale Tronrud

On 06/01/10 04:51, Ian Tickle wrote:
> On Mon, May 31, 2010 at 9:15 PM, Dale Tronrud <det...@uoxray.uoregon.edu> 
> wrote:
>>   One of the great mysteries of refinement is that a model created using
>> high resolution data will fit a low resolution data set much better than
>> a model created only using the low resolution data.  It appears that there
>> are many types of errors that degrade the fit to low resolution data that
>> can only be identified and fixed by using the information from high
>> resolution data.
> 
> Is it such a mystery?  Isn't it just a case of overfitting to the
> experimental errors in the low res data if you tried to use the same
> parameterization & restraint weighting as for the high res refinement?
>  Consequently you are forced to use fewer parameters and/or higher
> restraint weighting at low res which obviously is not going to give as
> good a fit.
> 
> Cheers
> 
> -- Ian
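Ian's overfitting point can be illustrated with a deliberately crude
sketch (mine, not his): treat a "model" with k parameters as k bin
averages fitted to noisy one-dimensional data. With a fixed, limited
number of observations, raising k drives the working-set residual down
by fitting noise, while the held-out ("free") residual stays well above
it, which is exactly why a lower data-to-parameter ratio forces fewer
parameters or tighter restraints.

```python
import math
import random

random.seed(2)
n = 400
x = [i / n for i in range(n)]
# Noisy observations of a smooth underlying signal:
y = [math.sin(2 * math.pi * xi) + random.gauss(0.0, 0.5) for xi in x]

free_idx = set(range(0, n, 10))               # 10% cross-validation set
work_idx = [i for i in range(n) if i not in free_idx]

def residuals(k):
    """Fit k bin means to the working set; mean |error| on each set."""
    sums, counts = [0.0] * k, [0] * k
    for i in work_idx:
        b = min(int(x[i] * k), k - 1)
        sums[b] += y[i]
        counts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]

    def err(idx):
        return sum(abs(y[i] - means[min(int(x[i] * k), k - 1)])
                   for i in idx) / len(idx)

    return err(work_idx), err(sorted(free_idx))

r_work_lo, r_free_lo = residuals(8)    # modest parameterization
r_work_hi, r_free_hi = residuals(180)  # ~2 observations per parameter
```

With ~two working observations per parameter the working residual drops
sharply, but the free residual does not follow it down: the extra
parameters are absorbing noise, not signal.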
