Ahh, waters. Where would structure-related debate be without them?
I see. So if your default refinement procedure is to add an unspecified
number of waters, then yes, Rwork might not be all that useful, as it
will depend on how the building goes.
Again, it all depends on what you want your data
Hi Clemens
OK so you're saying use only the reflections that are in common between all
datasets and keep the parameterisation the same. There are clearly two
distinctly different ways in which datasets can differ: either a different
set of indices due to different resolution cut-offs or completeness
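The "reflections in common" comparison can be sketched as a toy function: compute R = Σ|Fo − Fc| / ΣFo for each dataset, restricted to the Miller indices present in both, so the two R values are over the same index set. The index sets and structure-factor values below are invented for illustration.

```python
# Toy sketch: R-factor comparison over the common set of reflections only.
def r_factor(fobs, fcalc):
    """Classical R = sum|Fo - Fc| / sum(Fo)."""
    num = sum(abs(fo - fc) for fo, fc in zip(fobs, fcalc))
    den = sum(fobs)
    return num / den

def r_on_common(set_a, set_b):
    """set_a, set_b: dicts mapping Miller index (h,k,l) -> (Fobs, Fcalc)."""
    common = sorted(set(set_a) & set(set_b))
    r_a = r_factor([set_a[h][0] for h in common],
                   [set_a[h][1] for h in common])
    r_b = r_factor([set_b[h][0] for h in common],
                   [set_b[h][1] for h in common])
    return r_a, r_b
```

This only makes the index sets comparable; as discussed above, keeping the parameterisation identical between refinements is a separate requirement.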
On Wed, Nov 03, 2021 at 12:54:00PM +, Ian Tickle wrote:
> Suppose you had to compare two datasets differing only in their
> high-resolution cut-offs. Now Rwork, Rfree and Rall will inevitably have
> higher values at the high d* end, which means that if you apply a cut-off
> at the high d* end
Hi, whilst I completely concur with James that Rfree is not a suitable
metric for this purpose for all the reasons he mentioned, it's not clear to
me that Rwork is much better. If you really want to go down that route, even
better would be Rall, i.e. ignoring the free R flags, though I realise that
That's exactly what I am doing...
citing David...
"I expect the A and B data sets to be quite similar, but I would like to
evaluate which protocol was "better", and I want to do this quickly,
ideally looking at a single number."
and
"I do want to find a way to assess the various tweaks I can try
Hi David,
Why not do all those things with Rwork? It is much less noisy than
Rfree. Have you ever seen a case in such analysis where Rwork didn't
tell you the same thing Rfree did? If so, did you believe the difference?
Once when I was playing with lossy image compression if I picked just
t
Hi James,
What you wrote makes lots of sense. I had not heard about Rsleep, so that
looks like interesting reading, thanks.
I have often used Rfree as a simple tool to compare two protocols. If I am
not actually optimising against Rfree but just using it for a one-off
comparison then that is okay
Well, of all the possible metrics you could use to assess data quality,
Rfree is probably the worst one. This is because it is a
cross-validation metric, and cross-validations don't work if you use
them as an optimization target. You can try, and might even make a
little headway, but then your
On Thu, Oct 28, 2021 at 06:28:05PM +0530, Shipra Bijpuria wrote:
> I would first look at the dataset stats and define a resolution range
> mainly based on I/sigI >1 and cc1/2 >0.5. Based on this, would take the
> good resolution datasets only.
A probably obvious word of caution here: these (qui
I would first look at the dataset stats and define a resolution range
mainly based on I/sigI >1 and cc1/2 >0.5. Based on this, would take the
good resolution datasets only.
Further for comparing these mtz after refinement, I personally prefer
looking at the electron density maps rather than just g
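The shell-based cut-off described above (keep resolution shells while I/sigI > 1 and CC1/2 > 0.5 hold) can be sketched as a small helper; the shell statistics below are invented for illustration.

```python
# Toy sketch: pick a high-resolution cutoff from per-shell statistics.
def resolution_cutoff(shells, min_i_over_sig=1.0, min_cc_half=0.5):
    """shells: list of (d_min, I/sigI, CC1/2), ordered low -> high resolution.
    Returns the d_min of the outermost shell still passing both criteria,
    or None if even the first shell fails."""
    cutoff = None
    for d_min, i_over_sig, cc_half in shells:
        if i_over_sig > min_i_over_sig and cc_half > min_cc_half:
            cutoff = d_min
        else:
            break
    return cutoff

shells = [(3.0, 15.2, 0.99), (2.5, 6.1, 0.95), (2.0, 1.8, 0.70), (1.8, 0.8, 0.40)]
print(resolution_cutoff(shells))  # 2.0
```

As the caution in the next message suggests, such thresholds are heuristics, not hard rules.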
This is always a difficult decision. More commonly I have worried about the
best resolution cut off.
Judge on high Rmerges? Keep the overall R value acceptably low?? etc etc..
I always come back to the map - is it sharper with the extra data? Is more
unmodelled solvent showing up? etc..
But these
Hi,
An alternative is to assess the quality of the phases (which are the
result of the model refined against the data) via radiation-damage
maps (what we call "F(early)-F(late) maps"). Assuming you collected
high enough multiplicity data, autoPROC (which is what you used for
processing, if I understand
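The F(early)-F(late) idea reduces, at its core, to difference-Fourier coefficients with model phases. A minimal sketch, assuming the early and late datasets are already scaled to each other (the function name and all values here are illustrative, not autoPROC's actual implementation):

```python
import cmath

# Hedged sketch: coefficients (F_early - F_late) * exp(i * phi_model),
# the basis of a radiation-damage difference map. Real pipelines also
# handle scaling, weighting, and symmetry; this only shows the core idea.
def damage_map_coefficients(f_early, f_late, phases):
    """Dicts mapping (h,k,l) -> amplitude (or phase in radians).
    Returns complex difference-map coefficients on the common indices."""
    common = f_early.keys() & f_late.keys() & phases.keys()
    return {hkl: (f_early[hkl] - f_late[hkl]) * cmath.exp(1j * phases[hkl])
            for hkl in common}
```

Strong peaks in the resulting map flag sites that changed between early and late exposures, i.e. likely radiation damage.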
From: CCP4 bulletin board on behalf of Murpholino
Peligro
Sent: Wednesday, 27 October 2021 19:59
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] what would be the best metric to asses the quality of a
mtz file?
So... how can I get a metric for noise in electron density maps?
First thing
CCP4's PEAKMAX program would be quite scriptable.
Phil
On 10/27/21 1:58 PM, Murpholino Peligro wrote:
So... how can I get a metric for noise in electron density maps?
First thing that occurred to me
open in coot and do validate->difference map peaks-> get number of peaks
(is this scriptable?)
or
So... how can I get a metric for noise in electron density maps?
First thing that occurred to me
open in coot and do validate->difference map peaks-> get number of peaks
(is this scriptable?)
or
Second
phenix.real_space_correlation detail=residue file.pdb file.mtz
Thanks again
On Wed, 27 Oct.
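The peak-counting idea is indeed scriptable. A rough numpy stand-in (not an actual Coot or PEAKMAX call) is to count grid points beyond a sigma threshold on the difference map, assuming you can get the map onto an array, e.g. via gemmi; the synthetic map below is for illustration.

```python
import numpy as np

def count_difference_peaks(grid, sigma_cutoff=3.0):
    """Count grid points beyond +/- sigma_cutoff * rms in a difference map.
    Note: this counts points, not clustered peaks; Coot's difference-map
    peak list and CCP4 PEAKMAX additionally cluster and filter by symmetry."""
    rms = grid.std()
    return int(np.count_nonzero(np.abs(grid) > sigma_cutoff * rms))

# Synthetic example: Gaussian noise plus one strong feature.
rng = np.random.default_rng(0)
grid = rng.normal(0.0, 1.0, size=(32, 32, 32))
grid[0, 0, 0] = 10.0  # a fake strong difference peak
print(count_difference_peaks(grid))
```

As a single comparison number this shares the caveats discussed earlier in the thread: it depends on the sigma cutoff and on how much solvent has been modelled.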
Hi,
It's hard to find a single metric...
Ultimately, it comes down to the quality of the electron density maps: lower noise in Fo-Fc?
Best
Vincent
On 27/10/2021 at 17:44, Murpholino Peligro wrote:
Let's say I ran autoproc with different combinations of options for a
specific dataset, producing dozens of different
Let's say I ran autoproc with different combinations of options for a
specific dataset, producing dozens of different (but not so different) mtz
files...
Then I ran phenix.refine with the same options for the same structure but
with all my mtz zoo
What would be the best metric to say "hey this combination