Hi Edward,
I submitted this as a bug report. I modified the full_analysis.py
file after an SVN refresh. Unless you have a quick way of doing so, I
will test the cleaned up version (submitted as a patch to the bug
report) tomorrow.
Doug
On Sep 17, 2007, at 4:12 PM, Edward d'Auvergne wrote:
Hi,
In your previous post
(https://mail.gna.org/public/relax-users/2007-09/msg00011.html,
Message-id: <[EMAIL PROTECTED]>) I think
you were spot on with the diagnosis. Reading the results files with
None in all model positions will create variables called 'model'
with the value set to None.
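As a rough illustration of the failure mode described above (the names here are hypothetical, not taken from the relax source), a results file that stores the literal text "None" in the model column can leave the variable bound either to the object None or to the string "None", and the two do not compare equal:

```python
# Hypothetical sketch of reading a model column from a results file;
# illustrative names only, not relax's actual parser.

def read_model_column(token):
    """Parse one model token from a results file line."""
    # The file stores the literal text "None" when no model was set.
    if token == "None":
        return None
    return token

# A row where all model positions read "None" yields Python None values.
models = [read_model_column(t) for t in ["None", "None", "None"]]
print(models)  # [None, None, None]

# The object None and the string "None" are different values, so any
# later check against the string form silently fails.
print(models[0] == "None")  # False
```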
Hi Edward,
No problem. I am working on this now, and will try to submit the bug
report later tonight or tomorrow.
Doug
On Sep 17, 2007, at 4:12 PM, Edward d'Auvergne wrote:
> Hi,
>
> In your previous post
> (https://mail.gna.org/public/relax-users/2007-09/msg00011.html,
> Message-id: <[EMAI
Hi,
On 9/17/07, Sebastien Morin <[EMAIL PROTECTED]> wrote:
>
> Hi Ed,
>
> First, there were some bad assignments in my data set. I used the automatic
> assignment (which takes an assigned peak list and propagates it to other
> peak lists) procedure within NMRPipe for the first time and some peak
As a followup, my changes to full_analysis.py solved my problem. I
will clean up my code and post it within the next day or so. Would
you prefer that I attach the script as an attachment, or inline in an
email, or provide a patch, or change the CVS code myself?
Doug
On Sep 17, 2007, at 1
My tests are still running, but here is what I think might be
happening. I think there is a problem with comparing the string
representation of the previous and current runs.
When the previous results are loaded in the 'load_tensor' definition
(starting on line 496), information for all re
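A minimal sketch of the pitfall this kind of string comparison can hit (hypothetical variable names; the real check lives in full_analysis.py's load_tensor): stringifying both sides before comparing makes the object None and the string "None" indistinguishable, so the previous and current runs can be judged identical when they are not.

```python
# Hypothetical sketch: comparing two rounds via string representation.
prev = None     # model value loaded back from a previous results file
curr = "None"   # model value held as the literal string in this round

print(prev == curr)            # False: the underlying objects differ
print(str(prev) == str(curr))  # True: both stringify to "None"
```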
Hi Ed,
First, there were some bad assignments in my data set. I used the
automatic assignment (which takes an assigned peak list and propagates
it to other peak lists) procedure within NMRPipe for the first time and
some peaks were badly assigned.
Second, the PDB file is quite good as it is a rep
Hi Edward,
I am running a test, but will post more information as soon as it
finishes.
RE:
> Between these 2 rounds, are you sure that all models for all residues
> are identical? From your data that you posted at
> https://mail.gna.org/public/relax-users/2007-06/msg00017.html
> (Message-id:
Hi,
The problem is likely to be due to a circular looping around very
similar solutions close to the universal solution, as I have defined
in:
d'Auvergne EJ, Gooley PR. Set theory formulation of the model-free
problem and the diffusion seeded model-free paradigm. Mol Biosyst.
2007 Jul;3(7):483-94
Hi,
I'm unsure if this is a bug in full_analysis.py, in the internal
relax code, or user error. The optimization of the 'sphere' model
will not converge, even after 160+ rounds. The chi-squared test
converged long, long ago:
"" from output
Chi-squared test:
chi
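A hedged sketch of a per-round chi-squared convergence check of the kind described (relax's actual test in full_analysis.py may differ); the point of the report above is that chi-squared alone stops changing while the rounds keep cycling, so it cannot be the only stopping criterion:

```python
# Hypothetical convergence test on chi-squared between rounds;
# illustrative only, not relax's implementation.

def chi2_converged(prev_chi2, curr_chi2, tol=1e-15):
    """Declare convergence when chi-squared stops changing."""
    if prev_chi2 is None:  # first round: nothing to compare against
        return False
    return abs(curr_chi2 - prev_chi2) <= tol

rounds = [10.0, 4.2, 3.90, 3.90]
prev = None
for i, chi2 in enumerate(rounds, start=1):
    print(f"round {i}: chi2={chi2} converged={chi2_converged(prev, chi2)}")
    prev = chi2
```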