The real question is: are the issues "real" or not? If they are, then no matter how many there are, the information is of use!

Well, one idea I had is that "real" doesn't have to be determined at the present time. One of the cool things about Hackystat is that we can store these results over time and use that data to determine whether the prediction comes true. For example, if a code review identifies a piece of code as containing a valid defect, or if we actually find a runtime bug in the code, then we can view the issue as "real".
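A minimal sketch of that idea, assuming a simplified issue record (the class and method names here are illustrative, not the actual Hackystat API): an issue reported by PMD or FindBugs is retroactively labeled "real" once a later defect is reported against the same class.

```java
import java.util.*;

public class IssueValidation {

    /** A stored static-analysis finding: the class it was reported against
     *  and the rule that fired. (Hypothetical shape for illustration.) */
    record CodeIssue(String className, String rule) {}

    /** Returns the subset of stored issues later confirmed "real" because a
     *  defect (review finding or runtime bug) was reported for that class. */
    static List<CodeIssue> confirmedIssues(List<CodeIssue> issues,
                                           Set<String> classesWithDefects) {
        List<CodeIssue> real = new ArrayList<>();
        for (CodeIssue issue : issues) {
            if (classesWithDefects.contains(issue.className())) {
                real.add(issue);
            }
        }
        return real;
    }

    public static void main(String[] args) {
        List<CodeIssue> issues = List.of(
            new CodeIssue("SensorShell", "NullAssignment"),
            new CodeIssue("TelemetryReport", "UnusedImport"));
        // Suppose a runtime bug was later filed against SensorShell:
        Set<String> defects = Set.of("SensorShell");
        System.out.println(confirmedIssues(issues, defects));
    }
}
```

Over time, the ratio of confirmed issues to total issues per rule could tell us which rules are worth keeping.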

This all kind of goes back to PRI: if a class has a lot of code issues, has never been reviewed, and has low coverage, then it could potentially be at risk and should be reviewed.
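As a sketch, the PRI-style heuristic above could look something like this (the thresholds and the predicate itself are assumptions for illustration, not PRI's actual definition):

```java
public class PriHeuristic {

    /** Flags a class as "at risk" when it has many open code issues,
     *  has never been reviewed, and has low test coverage.
     *  Thresholds (10 issues, 50% coverage) are placeholder values. */
    static boolean atRisk(int codeIssues, boolean reviewed, double coverage) {
        return codeIssues > 10 && !reviewed && coverage < 0.5;
    }

    public static void main(String[] args) {
        // Many issues, never reviewed, 30% coverage: review candidate.
        System.out.println(atRisk(25, false, 0.30));
        // Same issues, but already reviewed with 90% coverage: not at risk.
        System.out.println(atRisk(25, true, 0.90));
    }
}
```

In practice the thresholds would be tuned against the historical data mentioned above.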

thanks, aaron

One idea I've had is to make every third or fourth stable release a "quality improvement" release: minimal functional improvement, with the focus on quality issues (eliminating Eclipse warnings, PMD and FindBugs findings, etc.).

I like this idea. Our Javadocs could use some work. Also, components that accomplish similar tasks but are implemented differently need to be reviewed: for example, our DailyProjectData implementations, our sensors (pMap), and SDTs (eSDTs).

thanks, aaron



At 07:07 PM 2/9/2006, Philip Johnson wrote:
Great job getting PMD and FindBugs working, Cedric! The reports are quite interesting. I look forward to the telemetry!

It seems that there are too many issues found by these tools... We probably need to narrow it down before the information can be of any real use.

The real question is: are the issues "real" or not? If they are, then no matter how many there are, the information is of use!

One idea I've had is to make every third or fourth stable release a "quality improvement" release: minimal functional improvement, with the focus on quality issues (eliminating Eclipse warnings, PMD and FindBugs findings, etc.).

Cheers,
Philip
