Sorry for not being clear enough.
If the B-factors at the end of refinement are the "true B-factors", then they
represent a true property of the data. They should be good enough to assess
model quality directly. This is what I meant by B-factor validation.
However, how similar the final B-factors are to the true B-factors is
another question.

Rangana


On Sun, Mar 8, 2020 at 7:06 PM Ethan A Merritt <merr...@uw.edu> wrote:

> On Sunday, 8 March 2020 01:08:32 PDT Rangana Warshamanage wrote:
> > "The best estimate we have of the "true" B factor is the model B factors
> > we get at the end of refinement, once everything is converged, after we
> > have done all the building we can.  It is this "true B factor" that is a
> > property of the data, not the model, "
> >
> > If this is the case, why can't we use model B factors to validate our
> > structure? I know some people are skeptical about this approach because B
> > factors are refinable parameters.
> >
> > Rangana
>
> It is not clear to me exactly what you are asking.
>
> B factors _should_ be validated, precisely because they are refined
> parameters
> that are part of your model.   Where have you seen skepticism?
>
> Maybe you are thinking of the frequent question "should the averaged refined B
> equal the Wilson B reported by data processing?".  That discussion usually
> wanders off into explanations of why the Wilson B estimate is or is not
> reliable, what "average B" actually means, and so on.  For me the bottom
> line is that comparison of Bavg to the estimated Wilson B is an extremely
> weak validation test.  There are many better tests for model quality.
>
>         Ethan
>
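For concreteness, a minimal sketch of the Bavg-versus-Wilson-B comparison
discussed above is given below. It assumes the refined model can be read with
the gemmi Python library; the file name "refined.pdb" and the Wilson B value
are placeholders (the Wilson B would normally be taken from the
data-processing log).

    # Sketch: compare the mean refined model B to the Wilson B estimate
    # from data processing. Assumes the gemmi Python library; the file
    # name and the Wilson B value below are placeholders.
    import gemmi

    structure = gemmi.read_structure("refined.pdb")

    b_values = [atom.b_iso
                for model in structure
                for chain in model
                for residue in chain
                for atom in residue]

    b_avg = sum(b_values) / len(b_values)
    wilson_b = 25.0  # as reported by the data-processing program (placeholder)

    print(f"mean refined B : {b_avg:6.1f} A^2")
    print(f"Wilson B       : {wilson_b:6.1f} A^2")
    print(f"difference     : {b_avg - wilson_b:+6.1f} A^2")

Agreement (or disagreement) between the two numbers is, as noted above, only a
weak indicator of model quality.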
