I never use it except through the GUI, but I thought the default was to
output Fs and Is.
What happens if you try:
ctruncate -mtzin X6_3_aimless.mtz -mtzout X6_3_ctrunc.mtz
-
I am using CCP4 infrequently, and I am having problems getting ctruncate to
output structure factors. IMEAN and ISIGMA are written fine, but the values
for F and SIGF are all question marks. Is there something obvious wrong in
my script or what might be the problem here?
#!/bin/csh -f
ctruncate
Hi
I seem to be getting a lot of outliers rejected by Phaser with data processed
with the latest ctruncate which are not present when data is processed with
the older version (or old truncate) - has something been changed in the code
that would cause this?
With CCP4 6.4: ctruncate
Hmm - Phaser doesn't usually use such high resolution data? Surprised you
are getting any stuff from resolutions higher than 2Å.
Whether the intensity at that resolution is meaningful would need careful
inspection of the truncate logs - is the Wilson plot reasonable? Are the
4th moments linear,
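For anyone wondering what the moments check is probing, here is a quick illustrative sketch (my own, synthetic data, not from any ctruncate code): the second moment of normalised intensities, &lt;I^2&gt;/&lt;I&gt;^2 in a shell (the "4th moment of E" the logs tabulate), sits near 2 for untwinned acentric data and drops toward 1.5 for a perfect twin.

```python
import random

def second_moment(intensities):
    """<I^2>/<I>^2 for one shell: ~2 for untwinned acentric data,
    ~1.5 for a perfect twin."""
    n = len(intensities)
    mean_i = sum(intensities) / n
    mean_i2 = sum(i * i for i in intensities) / n
    return mean_i2 / (mean_i * mean_i)

random.seed(0)
# Untwinned acentric intensities follow an exponential (Wilson) distribution.
untwinned = [random.expovariate(1.0) for _ in range(200000)]
# A perfect twin averages two independent intensities.
twinned = [0.5 * (a + b) for a, b in zip(untwinned, reversed(untwinned))]

print(second_moment(untwinned))  # close to 2
print(second_moment(twinned))    # close to 1.5
```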
From: Parthasarathy Sampathkumar [spart...@gmail.com]
Sent: 19 June 2014 19:55
To: ccp4bb
Subject: Re: [ccp4bb] ctruncate error
Yes, I too had a similar problem with ctruncate, and used the older truncate to
overcome the issue.
Best Wishes,
Partha
On Thu, Jun 19, 2014 at 2:38 PM, jie liu jl1...@njms.rutgers.edu wrote:
Hi
I also
Dear CCP4bb,
I am experiencing an unusual error when running truncate. The program appears
to be converting I's to F's, but then failing to output them in the resulting
mtz file, see mtzdump output below:
Col Sort Min Max Num % Mean Mean Resolution Type Column num
of updates back). Choosing to run old-truncate
will get around it.
Best wishes
Jie
- Original Message - From: Stephen Carr
stephen.c...@rc-harwell.ac.uk
To: CCP4BB@JISCMAIL.AC.UK
Sent: Thursday, June 19, 2014 1:11 PM
Subject: [ccp4bb] ctruncate error
Dear CCP4bb,
I am
I can index my data through imosflm with no problems, but when I try to scale
using aimless, the program will not run if I also run truncate (or old
truncate). Here is the error message I get below. Could anyone elaborate on
what this means? Thanks.
P.S. I can take the scaled .mtz file from
On 27 March 2014 14:23, Jarrod Mousa jmo...@ufl.edu wrote:
Using I/sigI > 3 with completeness above 0.85, the estimated useful
Resolution Range of this data is 17.964A to 8.982A
Hi, I would imagine that the message above has something to do with it.
Cheers
-- Ian
Hi Randy,
So I've been playing around with equations myself, and I have some alternative
results.
As I understand your Mathematica stuff, you are using the data model:
ip = ij + ib
where ip is the measured peak (before any background correction), and ij is a
random sample from the
On 8 July 2013 18:29, Douglas Theobald dtheob...@brandeis.edu wrote:
That's all very interesting --- do you have a good ref for TDS where I
can read up on the theory/practice? My protein xtallography books say
even less than SJ about TDS. Anyway, this appears to be a problem
beyond the
On 6/28/2013 5:13 PM, Douglas Theobald wrote:
I admittedly don't understand TDS well. But I thought it was generally
assumed that TDS contributes rather little to the conventional
background measurement outside of the spot (so Stout and Jensen tells
me :). So I was not even really considering
On Jul 7, 2013, at 1:44 PM, Ian Tickle ianj...@gmail.com wrote:
On 29 June 2013 01:13, Douglas Theobald dtheob...@brandeis.edu
wrote:
I admittedly don't understand TDS well. But I thought it was
generally assumed that TDS contributes rather little to the
conventional background
On 29 June 2013 01:13, Douglas Theobald dtheob...@brandeis.edu wrote:
Just because the detectors spit out positive numbers (unsigned ints) does
not mean that those values are Poisson distributed. As I understand it,
the readout can introduce non-Poisson noise, which is usually modeled as
The dominant source of error in an intensity measurement actually
depends on the magnitude of the intensity. For intensities near zero
and with zero background, the read-out noise of image plate or
CCD-based detectors becomes important. On most modern CCD detectors,
however, the read-out
Hi James,
On Sat, Jul 6, 2013 at 6:31 PM, James Holton jmhol...@lbl.gov wrote:
I think it is also important to point out here that the resolution
cutoff of the data you provide to refmac or phenix.refine is not
necessarily the resolution of the structure. This latter quantity,
although
On 21 June 2013 13:36, Ed Pozharski epozh...@umaryland.edu wrote:
Replacing Iobs with E(J) is not only unnecessary, it's ill-advised as it
will distort intensity statistics.
On 21 June 2013 18:40, Ed Pozharski epozh...@umaryland.edu wrote:
I think this is exactly what I was trying to
Ed, sorry, not sure what happened to the 1st attachment, it seems to have
vanished!
Cheers
-- Ian
attachment: Ltest-1.png
On Jun 27, 2013, at 12:30 PM, Ian Tickle ianj...@gmail.com wrote:
On 22 June 2013 19:39, Douglas Theobald dtheob...@brandeis.edu wrote:
So I'm no detector expert by any means, but I have been assured by those who
are that there are non-Poissonian sources of noise --- I believe mostly in
On 22 June 2013 19:39, Douglas Theobald dtheob...@brandeis.edu wrote:
So I'm no detector expert by any means, but I have been assured by those
who are that there are non-Poissonian sources of noise --- I believe mostly
in the readout, when photon counts get amplified. Of course this will
From: Jrh [jrhelliw...@gmail.com]
Sent: Monday, June 24, 2013 12:13 AM
To: Terwilliger, Thomas C
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] ctruncate bug?
Dear Tom,
I find this suggestion of using the full images an excellent and visionary one.
So, how
From: CCP4 bulletin board [CCP4BB@JISCMAIL.AC.UK] on behalf of Phil
[p...@mrc-lmb.cam.ac.uk]
Sent: Friday, June 21, 2013 2:50 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] ctruncate bug?
However you decide to argue the point, you must consider _all_ the
observations of a reflection (replicates and symmetry related) together
when you infer Itrue or F etc, otherwise you will bias the result even
more. Thus you cannot (easily
of Douglas Theobald
[dtheob...@brandeis.edu]
Sent: Sunday, June 23, 2013 1:52 AM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] ctruncate bug?
On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk
wrote:
A fascinating discussion (I've learnt a lot!); a quick sanity check
On 21 June 2013 19:45, Douglas Theobald dtheob...@brandeis.edu wrote:
The current way of doing things is summarized by Ed's equation:
Ispot-Iback=Iobs. Here Ispot is the # of counts in the spot (the area
encompassing the predicted reflection), and Iback is # of counts in the
background
Ian, I really do think we are almost saying the same thing. Let me try to
clarify.
You say that the Gaussian model is not the correct data model, and that
the Poisson is correct. I more-or-less agree. If I were being pedantic
(me?) I would say that the Poisson is *more* physically realistic
On Sat, Jun 22, 2013 at 1:04 PM, Douglas Theobald dtheob...@brandeis.edu wrote:
Feel free to prove me wrong --- can you derive Ispot-Iback, as an estimate
of Itrue, from anything besides a Gaussian?
OK, I'll prove myself wrong. Ispot-Iback can be derived as an estimate of
Itrue, even when
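A small numerical check of that claim (my own sketch, not from the thread): fix the background rate at the observed count b, and maximise the Poisson likelihood Pois(Nspot; Itrue + b) over Itrue >= 0 by grid search. For a strong spot the maximum lands at the familiar Nspot - b; for a spot weaker than its background, the physically constrained maximum piles up at zero.

```python
from math import lgamma, log

def poisson_loglik(n, mu):
    # log P(N = n) for N ~ Poisson(mu)
    return n * log(mu) - mu - lgamma(n + 1)

def mle_itrue(n_spot, n_back, step=0.01):
    """Grid-search MLE of Itrue >= 0, with the background rate simply
    plugged in at its observed value n_back (an illustration, not a
    joint fit over both rates)."""
    grid = (k * step for k in range(int((2 * n_spot + 10) / step) + 1))
    return max(grid, key=lambda i: poisson_loglik(n_spot, i + n_back + 1e-9))

print(mle_itrue(50, 9))  # the familiar Ispot - Iback: ~41
print(mle_itrue(5, 9))   # spot below background: constrained MLE is 0.0
```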
On 22 June 2013 18:04, Douglas Theobald dtheob...@brandeis.edu wrote:
Ian, I really do think we are almost saying the same thing. Let me try to
clarify.
I agree, but still only almost!
--- but in truth the Poisson model does not account for other physical
sources of error that arise
On Sat, Jun 22, 2013 at 1:56 PM, Ian Tickle ianj...@gmail.com wrote:
On 22 June 2013 18:04, Douglas Theobald dtheob...@brandeis.edu wrote:
--- but in truth the Poisson model does not account for other physical
sources of error that arise from real crystals and real detectors, such as
dark
A fascinating discussion (I've learnt a lot!); a quick sanity check,
though:
In what scenarios would these improved estimates make a significant
difference?
Or rather: are there any existing programs (as opposed to vapourware)
that would benefit significantly?
Cheers
phx
On
On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk
wrote:
A fascinating discussion (I've learnt a lot!); a quick sanity check, though:
In what scenarios would these improved estimates make a significant
difference?
Who knows? I always think that improved
On Sat, Jun 22, 2013 at 3:18 PM, Frank von Delft
frank.vonde...@sgc.ox.ac.uk wrote:
In what scenarios would these improved estimates make a significant
difference?
Perhaps datasets where an unusually large number of reflections are very
weak, for instance where TNCS is present, or where the
I agree with Frank. This thread has been fascinating and educational. Thanks
to all. Ron
On Sat, 22 Jun 2013, Douglas Theobald wrote:
On Jun 22, 2013, at 6:18 PM, Frank von Delft frank.vonde...@sgc.ox.ac.uk
wrote:
A fascinating discussion (I've learnt a lot!); a quick sanity check,
On 21 June 2013 13:36, Ed Pozharski epozh...@umaryland.edu wrote:
Replacing Iobs with E(J) is not only unnecessary, it's ill-advised as it
will distort intensity statistics. For example, let's say you have
translational NCS aligned with crystallographic axes, and hence some set of
On Jun 21, 2013, at 8:36 AM, Ed Pozharski epozh...@umaryland.edu wrote:
On 06/20/2013 01:07 PM, Douglas Theobald wrote:
How can there be nothing wrong with something that is unphysical?
Intensities cannot be negative.
I think you are confusing two things - the true intensities and
On 21 June 2013 17:10, Douglas Theobald dtheob...@brandeis.edu wrote:
Yes there is. The only way you can get a negative estimate is to make
unphysical assumptions. Namely, the estimate Ispot-Iback=Iobs assumes that
both the true value of I and the background noise come from a Gaussian
On 06/21/2013 10:19 AM, Ian Tickle wrote:
If you observe the symptoms of translational NCS in the diffraction
pattern (i.e. systematically weak zones of reflections) you must take
it into account when calculating the averages, i.e. if you do it
properly parity groups should be normalised
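To make that point concrete, a toy sketch (my own, synthetic data): give odd-h reflections a tenth of the intensity of even-h ones, as a pseudo-translation would, then normalise either globally or per parity group. Globally, the weak zone's E^2 values come out systematically far below 1; normalised within parity groups, they average 1 as they should.

```python
import random

random.seed(1)
# Toy data: translational NCS makes odd-h reflections ~10x weaker.
refl = [(h, random.expovariate(1.0) * (1.0 if h % 2 == 0 else 0.1))
        for h in range(10000)]

def normalise(reflections, group_fn):
    # E^2 = I / <I>, with the mean taken over each group separately.
    groups = {}
    for h, i in reflections:
        groups.setdefault(group_fn(h), []).append(i)
    means = {g: sum(v) / len(v) for g, v in groups.items()}
    return [(h, i / means[group_fn(h)]) for h, i in reflections]

# Global normalisation: odd-h E^2 come out systematically far below 1.
odd_global = [e for h, e in normalise(refl, lambda h: 0) if h % 2]
# Parity-group normalisation: each group averages to E^2 = 1.
odd_parity = [e for h, e in normalise(refl, lambda h: h % 2) if h % 2]

print(sum(odd_global) / len(odd_global))  # well below 1
print(sum(odd_parity) / len(odd_parity))  # 1 by construction
```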
I kinda think we're saying the same thing, sort of.
You don't like the Gaussian assumption, and neither do I. If you make the
reasonable Poisson assumptions, then you don't get the Ispot-Iback=Iobs for the
best estimate of Itrue. Except as an approximation for large values, but we
are
To: ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?
Yes higher R factors is the usual reason people don't like I-based
refinement!
Anyway, refining against Is doesn't solve the problem, it only postpones
it: you still need the Fs for maps! (though errors in Fs may be less
critical
On Jun 21, 2013, at 2:48 PM, Ed Pozharski epozh...@umaryland.edu wrote:
Douglas,
Observed intensities are the best estimates that we can come up with in an
experiment.
I also agree with this, and this is the clincher. You are arguing that
Ispot-Iback=Iobs is the best estimate we can come
On Jun 21, 2013, at 2:52 PM, James Holton jmhol...@lbl.gov wrote:
Yes, but the DIFFERENCE between two Poisson-distributed values can be
negative. This is, unfortunately, what you get when you subtract the
background out from under a spot. Perhaps this is the source of confusion
here?
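A seeded simulation of exactly this point (my own sketch, not from the thread): an absent reflection (Itrue = 0) sitting on a background of 100 expected counts gives a negative background-subtracted "intensity" roughly half the time.

```python
import math
import random

random.seed(2)

def poisson(mu):
    # Knuth's multiplication method - fine for modest mu in an illustration.
    limit, k, p = math.exp(-mu), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

background = 100.0  # expected background counts under the spot
i_true = 0.0        # a genuinely absent reflection

diffs = [poisson(i_true + background) - poisson(background)
         for _ in range(1000)]
negative = sum(1 for d in diffs if d < 0)
print(negative, "of 1000 background-subtracted intensities are negative")
```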
However you decide to argue the point, you must consider _all_ the observations
of a reflection (replicates and symmetry related) together when you infer Itrue
or F etc, otherwise you will bias the result even more. Thus you cannot
(easily) do it during integration
Phil
Sent from my iPad
On
As a maybe better alternative, we should (once again) consider refining
against intensities (and I guess George Sheldrick would agree here).
I have a simple question - what exactly, short of some sort of historic inertia
(or memory lapse), is the reason NOT to refine against intensities?
Just trying to understand the basic issues here. How could refining directly
against intensities solve the fundamental problem of negative intensity values?
On Jun 20, 2013, at 11:34 AM, Bernhard Rupp hofkristall...@gmail.com wrote:
As a maybe better alternative, we should (once again)
If you are refining against F's you have to find some way to avoid
calculating the square root of a negative number. That is why people
have historically rejected negative I's and why Truncate and cTruncate
were invented.
When refining against I, the calculation of (Iobs - Icalc)^2
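Dale's contrast can be shown in a few lines (an illustrative sketch, not code from any refinement program): a least-squares term against F needs sqrt(Iobs) and blows up on a negative measurement, while the same term against I is defined everywhere.

```python
import math

def f_residual(i_obs, i_calc):
    # F-based target: needs |Fobs| = sqrt(Iobs), undefined for Iobs < 0.
    return (math.sqrt(i_obs) - math.sqrt(i_calc)) ** 2

def i_residual(i_obs, i_calc):
    # I-based target: (Iobs - Icalc)^2 is defined for any Iobs.
    return (i_obs - i_calc) ** 2

print(i_residual(-3.0, 1.0))  # 16.0 - no problem with a negative Iobs
try:
    f_residual(-3.0, 1.0)
except ValueError:
    print("F-based residual: math domain error on Iobs = -3.0")
```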
From: CCP4 bulletin board [mailto:CCP4BB@JISCMAIL.AC.UK] On Behalf Of Ian Tickle
Sent: 20 June 2013 17:34
To: ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?
Yes higher R factors is the usual reason people don't like I-based
refinement!
Anyway, refining against Is doesn't solve the problem, it only postpones
it: you still need the Fs for maps! (though errors in Fs may be less
critical then).
-- Ian
On 20 June 2013 17:20, Dale Tronrud
...@brandeis.edu]
Sent: 20 June 2013 17:49
To: Bellini, Domenico (DLSLtd,RAL,DIA); ccp4bb
Subject: Re: [ccp4bb] ctruncate bug?
Seems to me that the negative Is should be dealt with early on, in the
integration step. Why exactly do integration programs report negative Is to
begin with?
On Jun
the background and push all
the Is to positive values?
More of a question rather than a suggestion ...
D
On 20 June 2013 20:46, Douglas Theobald dtheob...@brandeis.edu wrote:
Well, I tend to think Ian is probably right, that doing things the
proper way (vs French-Wilson) will not make much of a difference in the
end.
Nevertheless, I don't think refining against the (possibly negative)
Hi James,
Concerning XDSCONV, I cannot reproduce your plot. A Linux (64bit) program
test_xdsconv, which allows you to input I, sigI, <I>, and mode, where
I: measured intensity
sigI: sigma(I)
<I>: average I in resolution shell
mode: -1/0/1 for truncated normal/acentric/centric prior
is at
On Wed, 19 Jun 2013 14:19:19 +0100, Kay Diederichs
kay.diederi...@uni-konstanz.de wrote:
I wonder if problem b) is why Evans and Murshudov observe little contribution
of reflections in shells with CC1/2 below 0.27 in one of their test cases,
which had very anisotropic data.
sorry, forgot the
To add to the discussion, a plot of the acentric KW from -10 to 10 (normalised
wrt sqrt(sigma)): ftp://ftp.ccp4.ac.uk/ccb/aZF2.pdf;
black dots are F/sqrt(sigma) while blue is the corresponding plot for sigma.
The value drops from 0.42 to 0.28 going from h = -4 to h = -10.
Note: for this we are
Dear Kay and Jeff,
frankly, I do not see much justification for any rejection based on
h-cutoff.
French & Wilson only talk about an I/sigI cutoff, which also warrants further
scrutiny. It probably could be argued that reflections with I/sigI < -4
are still more likely to be weak than strong so F~0
Hi Ed,
While I don't think French and Wilson argue explicitly for the h < -4.0
requirement in their main manuscript, if you look at the source code
included in the supplementary material for this paper, they include this in
their implementation, which is what I worked from.
Charles, do you happen
On Wed, 19 Jun 2013 11:01:22 -0400, Ed Pozharski epozh...@umaryland.edu wrote:
Dear Kay and Jeff,
frankly, I do not see much justification for any rejection based on
h-cutoff.
I agree
French & Wilson only talk about an I/sigI cutoff, which also warrants further
scrutiny. It probably could be
Dear Ed,
AFAIK James Holton found the same issue, and a similar problem also existed in
XDSCONV. In my view, it is an example of the problem that most programs so far
have dealt with weak data in a suboptimal way, and have undergone little
testing with such data.
The latest version of XDSCONV
Hi Kay - could you elaborate on how the latest version of XDSCONV has a fix
for it? (A look around The Google did not help me.)
Cheers
Frank
On 18/06/2013 11:38, Kay Diederichs wrote:
Dear Ed,
AFAIK James Holton found the same issue, and a similar problem also existed in
XDSCONV. In my view,
Hi Frank,
older versions of XDSCONV, for datasets with weak high-resolution data, printed
a long list starting with:
SUSPICIOUS REFLECTIONS NOT INCLUDED IN OUTPUT DATA SET
(at most 100 are listed below)
SUSPICIOUS REFLECTIONS NOT INCLUDED IN OUTPUT DATA SET
(at
Hi Ed,
Thanks for including the code block.
I've looked back over the FW paper, and the reason for the h < -4.0 cutoff
is that the entire premise assumes that the true intensities are normally
distributed, and the formulation breaks down for that extreme an
outlier. For most datasets I haven't
Actually, Jeff, the problem goes even deeper than that. Have a look at these
Wilson plots:
http://bl831.als.lbl.gov/~jamesh/wilson/wilsons.png
For these plots I took Fs from a unit cell full of a random collection of
atoms, squared them, added Gaussian noise with RMS = 1, and then ran them back
Hi Jeff,
what I did in XDSCONV is to mitigate the numerical difficulties associated with
low h (called Score in XDSCONV output) values, and I removed the h < -4
cutoff. The more negative h becomes, the closer to zero is the resulting
amplitude, so not applying an h cutoff makes sense (to me,
I noticed something strange when processing a dataset with imosflm. The
final output ctruncate_etc.mtz contains IMEAN and F columns, which
should be the conversion according to French & Wilson. The problem is that
IMEAN has no missing values (100% complete) while F has about 1500
missing (~97%
Hi Ed,
I'm not directly familiar with the ctruncate implementation of French and
Wilson, but from the implementation that I put into Phenix (based on the
original FW paper) I can tell you that any reflection where (I/sigI) -
(sigI/mean_intensity) is less than a defined cutoff (in our case -4.0),
Jeff,
thanks - I can see the same equation and cutoff applied in the ctruncate
source. Here is the relevant part of the code:
// Bayesian statistics tells us to modify I/sigma by subtracting off sigma/S
// where S is the mean intensity in the resolution shell
h =
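Pulling the thread's description together (an illustrative Python transcription of the criterion as stated above, not the actual ctruncate C++): the score is h = I/sigI - sigI/S, with S the mean intensity in the reflection's resolution shell, and reflections with h below -4.0 are flagged.

```python
def fw_h(i_obs, sig_i, mean_i_shell):
    # I/sigma corrected downward by sigma/S, as in the comment above.
    return i_obs / sig_i - sig_i / mean_i_shell

def accepted(i_obs, sig_i, mean_i_shell, cutoff=-4.0):
    return fw_h(i_obs, sig_i, mean_i_shell) >= cutoff

# A mildly negative weak intensity survives:
print(accepted(-2.0, 1.0, 10.0))  # True  (h = -2.1)
# A strongly negative outlier is rejected:
print(accepted(-8.0, 2.0, 1.0))   # False (h = -6.0)
```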
Dear Hari,
this is hard to tell without debugging, but I have two guesses:
1) did you untick 'use anomalous data' but the script only partially
picked this up? In that case, the 'colano'-columns would not be present
2) some column name is too long
Hi All,
I am running the latest CCP4 (auto-updated using the new autoupdate tool
built into ccp4i).
I was running what should be a routine scalepack to mtz conversion and I
got an error which I have never seen before with ctruncate.
When I run the same job with old truncate it succeeds.
Any
Hi Frank
this means that the anisotropy corrected data is used to calculate the twinning
statistics, moments etc. The corrected data is not used in the truncate
procedure.
Charles
On 1 May 2012, at 19:28, Frank von Delft wrote:
Hello, a colleague just pointed me to an innocuous sentence in
Hello, a colleague just pointed me to an innocuous sentence in the
ctruncate:
CTRUNCATE looks for anisotropy in the data and performs anisotropy
correction.
What exactly does that involve...?
phx.
This has been reported and I thought fixed??
Eleanor
On Mon, 8 Aug 2011 16:01:27 +0100, Charles Ballard
charles.ball...@stfc.ac.uk wrote:
Hi Chris
In Stephen's case the problem was that the column names were too long.
They are limited to 30 characters by the mtz standard
running
Dear all,
We have a problem where cTruncate generates an error message when trying to
process data from Scala as part of a Scale and Merge Intensities task in the
GUI. Scala appears to run ok. cTruncate produces
***
*
Hi Chris
In Stephen's case the problem was that the column names were too long. They
are limited to 30 characters by the mtz standard
running through it looks like the problem is name truncation on the output
columns. Whether it is the
total name, /crystal/dataset/column, or just the
Dear CCP4,
I have encountered the following error message when scaling a recently
collected data set.
The program run with command:
/home/applications/CCP4-6.1.13/ccp4-6.1.13/bin/ctruncate -hklin
/tmp/tfr35668/Diamond270211_21_2_mtz.tmp -hklout
Are you sure these are real FP=0 reflections, or reflections which weren't measured
but have been added for completeness of the h k l list?
The check is whether the SigF is also 0.00 - in that case they are
genuinely missing.
Eleanor
On 02/09/2011 11:34 PM, Ed Pozharski wrote:
I observe under some
Excellent point and no, these are not missing reflections. SigF is not
zero. Also, if I am not mistaken, missing reflections in the MTZ format
are recorded as NaN.
Ed.
On Thu, 2011-02-10 at 12:17 +, Eleanor Dodson wrote:
Are you sure these are real FP=0 reflections, or reflections which weren't
This does sound like a bug. If one of h,k,l are zero then the reflection is
centric in P222 so the truncation will be different
I just ran a test on an orthorhombic dataset (P212121) of mine and I do indeed
see some strange F=0, sigF>0 reflections, but in Charles Ballard's
development version
I observe under some conditions that ctruncate sets some reflection
amplitudes to zero. AFAIU, this should not be happening, as even
negative intensities (there are none in this particular dataset) should
produce FP>0 upon truncation.
66 out of ~23000 reflections are zeros after ctruncate is
I just installed CCP4 on a new XP box last week and I get the same
error, as well as a windows error ctruncate.exe has encountered a
problem and needs to close. We are sorry for the inconvenience
Stupid windows.
On Mon, Jul 26, 2010 at 9:40 AM, Scott Pegan scott.d.pe...@gmail.com wrote:
Has
Dear all,
I am sorry but I have a problem with the installation of the latest version
of CCP4i on Windows 97 and Windows Vista systems.
I am trying to run Scalepack2mtz and I get this message after it fails:
The program run with command: ctruncate -hklin
C:/Ccp4Temp/PROJECT_1_1_mtz.tmp -hklout
Dear all,
I'm converting intensities from scala to amplitudes with ctruncate like so:
ctruncate -mtzin scala_protein_3_001_180.mtz -colin /*/*/[IMEAN,SIGIMEAN]
The data are native. An mtz file is generated, and it looks ok, but
ctruncate doesn't terminate properly. Instead, after Anisotropy
Looks like a problem with your file.
Can you try:
od -a scala_protein_3_001_180.mtz | head
If your file is an mtz file, you should get output starting a bit like this:
000 M T Z sp ! o etx nul D A nul nul nul nul nul nul
If that is OK, then try:
strings
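The magic bytes being checked there can also be tested in a few lines of Python (my own sketch): an MTZ file begins with the four bytes "MTZ " - the "M T Z sp" visible in the od -a output above.

```python
def looks_like_mtz(first_bytes):
    # An MTZ file starts with the 4-byte magic 'MTZ ' ('M T Z sp' in
    # od -a notation), followed by binary header information.
    return first_bytes[:4] == b"MTZ "

# On a real file you would read the first four bytes, e.g.:
#   with open("scala_protein_3_001_180.mtz", "rb") as fh:
#       print(looks_like_mtz(fh.read(4)))
print(looks_like_mtz(b"MTZ \x15\x00\x00\x00"))  # True
print(looks_like_mtz(b"\x89PNG\r\n"))           # False
```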