This may be of interest too for an information theoretic measure of
the first kind:
http://linkinghub.elsevier.com/retrieve/pii/S1383762101000194
Regards,
Rudi
2009/12/11 Israel Herraiz:
> Excerpts from Alan Blackwell's message of Thu Dec 10 19:00:29 +0100 2009:
>> A) the entropy in the program itself (e.g. graph of control structure,
>> data structure, type structure etc)
Excerpts from Alan Blackwell's message of Thu Dec 10 19:00:29 +0100 2009:
> A) the entropy in the program itself (e.g. graph of control structure,
> data structure, type structure etc)
>
> B) the entropy in the programming tools being used (language
> syntax, execution model, libraries etc)
>
>
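(As a toy illustration of source (A): one rough way to quantify it is the Shannon entropy of a program's empirical token distribution. This is only a sketch under assumed choices — whitespace tokenization, tokens as the alphabet — and is not Harrison's actual definition.)

```python
import math
from collections import Counter

def token_entropy(tokens):
    """Shannon entropy, in bits per token, of the empirical token distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a whitespace-tokenized code fragment.
tokens = "if ( x > 0 ) { y = x ; } else { y = 0 ; }".split()
print(round(token_entropy(tokens), 3))
```

(A uniform two-symbol stream gives exactly 1 bit/token; more skewed or smaller vocabularies give lower values.)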
On Dec 10, 2009, at 12:50, Israel Herraiz wrote:
Excerpts from Alan Blackwell's message of Thu Dec 10 18:28:22 +0100
2009:
Absolutely. So the Harrison paper is "An entropy-based measure of
software complexity" from IEEE Trans Software Eng, 18(11)
1025-1029.
That paper might be a good starting point for a discussion of
what would be a meaningful information content measure in
comparing software source code.
Alan,
The cost, to the reader, of obtaining the information is also
an important issue.
That paper might be a good starting point for a discussion of
what would be a meaningful information content measure in
comparing software source code.
If the software was written by French speakers the id
Thanks Israel - will read with interest.
So that measure might provide a good basis for cognitive
comparison in the way that Chris and I were proposing.
By the way, we need to keep in mind that there are two sources of
entropy that seem to get conflated in discussion of program
metrics:
A) the entropy in the program itself (e.g. graph of control structure,
data structure, type structure etc)
c.do...@open.ac.uk said:
> the notion of cognitive complexity (whatever that may be).
In terms of experimental measures that are derived from an
operationalised cognitive theory, one could consider:
1. Memory (reconstruction/recall/recognition) measures
2. Information-finding measures
3. Problem-solving measures
Excerpts from Alan Blackwell's message of Thu Dec 10 18:28:22 +0100 2009:
> Absolutely. So the Harrison paper is "An entropy-based measure of
> software complexity" from IEEE Trans Software Eng, 18(11)
> 1025-1029.
>
> That paper might be a good starting point for a discussion of
> what would be a meaningful information content measure in
> comparing software source code.
-----Original Message-----
From: Derek M Jones [mailto:de...@knosof.co.uk]
Sent: 10 December 2009 17:13
To: Ppig-Discuss-List
Subject: Re: validation of complexity metrics as measure for ease of
comprehension?
Alan,
> My own experience, based on recent investigation with a grad
> student to compile more evidence for his claims regarding
> complexity metrics (in this case Harrison's entropy-based measure
> rather than McCabe) was that the closer we looked at the measure
> itself, the more flawed it seemed to be
On Dec 10, 2009, at 11:57, Alan Blackwell wrote:
My own experience, based on recent investigation with a grad
student to compile more evidence for his claims regarding
complexity metrics (in this case Harrison's entropy-based measure
rather than McCabe) was that the closer we looked at the measure
itself, the more flawed it seemed to be
> I think that information content is the way to go.
Absolutely. So the Harrison paper is "An entropy-based measure of
software complexity" from IEEE Trans Software Eng, 18(11)
1025-1029.
That paper might be a good starting point for a discussion of
what would be a meaningful information content measure in
comparing software source code.
Alan,
My own experience, based on recent investigation with a grad
student to compile more evidence for his claims regarding
complexity metrics (in this case Harrison's entropy-based measure
rather than McCabe) was that the closer we looked at the measure
I think that information content is the way to go.
My own experience, based on recent investigation with a grad
student to compile more evidence for his claims regarding
complexity metrics (in this case Harrison's entropy-based measure
rather than McCabe) was that the closer we looked at the measure
itself, the more flawed it seemed to be - both o
James,
metric is useful to predict bugs, but I often hear the further
interpretation that complexity actually causes more bugs (or inhibits
their fixes) because the code is harder to understand.
That interpretation seems to need stronger validation than the
correlational studies.
The probl
Over on the libresoft mailing list we're having a conversation about
interpretation of complexity metrics (e.g. McCabe cyclomatic
complexity etc). The studies that we know on this metric demonstrate
that the metric is useful to predict bugs, but I often hear the
further interpretation that complexity actually causes more bugs (or
inhibits their fixes) because the code is harder to understand.
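(For reference, a rough sketch of what tools typically compute for McCabe's cyclomatic complexity: decision points plus one. The exact set of constructs counted varies between tools, so the node list below is an assumption for illustration, using Python's ast module.)

```python
import ast

# Assumed set of decision-point constructs; real tools differ in what
# they count (e.g. how boolean operators and comprehensions are handled).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    """Count decision points in the parsed source and add one."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

src = """
def classify(x):
    if x > 0:
        return "pos"
    elif x < 0:
        return "neg"
    return "zero"
"""
print(cyclomatic_complexity(src))
```

(The `elif` parses as a nested `If`, so the function above has two decision points and a complexity of 3.)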