I began thinking more about this and would like to return to the topic. 
I think a tool should help MOST people do their work right while staying 
flexible at the same time. The ideas below are flexible enough that they 
don't stop anyone from using Review Board as they already do, but they 
would provide significantly more value for users who choose to use them.

*Time - Let the reviewer estimate it*

Here is an overview of the previous discussions on this topic:

    * One idea was to keep track of time spent viewing the page. A timer
      would start on scrolling the diff viewer or screenshot page and
      keep running while the comment dialog is displayed.
    * My suggestion is to add an "Accept" button so that reviewers
      explicitly acknowledge that they are doing a review. Until they
      accept, RB would not let them add any comments.

I really think that having reviewers "accept" a review would let RB be 
used in many more organizations where keeping track of metrics is very 
important. The feature could be optional and turned off by default, but 
when turned on, the simple step of accepting the review first would gain 
us a lot. Once we have this "Accept" feature, we could integrate RB with 
a company's internal time-management/task-management system: each user 
who clicks "Accept" would generate a new "task" event for themselves, 
channeled into the company's time-tracking solution. The user could then 
estimate, for him/herself, the time spent doing that particular review.
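
A rough sketch of what such a hook might emit is below. Everything here 
(the on_review_accepted() helper, the endpoint, and the field names) is 
hypothetical; a real integration would depend on the company's 
time-tracking API:

    import json
    import urllib.request

    def on_review_accepted(review_request_id, reviewer, tracker_url):
        """Create a task event in an external time-tracking system
        when a reviewer clicks "Accept" on a review request."""
        event = {
            "type": "code-review",
            "review_request": review_request_id,
            "assignee": reviewer,
            # Left empty on purpose: the reviewer enters their own
            # estimate once the review is finished.
            "time_spent_minutes": None,
        }
        request = urllib.request.Request(
            tracker_url,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status == 201  # task was created

The point is that RB would only announce that a review has started; 
estimating and recording the time stays in the company's own system.
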
If the time-tracking mechanism is left embedded in RB itself (where RB 
counts the number of minutes a user spent watching the diff), then we 
are discounting the possibility that the user may need to download the 
changeset and try to build the product, or continue the review off-line. 
I think that most GOOD source code review checklists should have the 
"Smoke Test" as the first item to check: does the code build properly, 
are there any compiler warnings, etc.? For this, the reviewer is forced 
to download the changeset and try it on his/her own machine, and it 
would be impossible to automatically keep track of that significant 
time.

Letting the user estimate the time spent doing the review is probably 
going to be the most accurate method of time-tracking. All other methods 
would be prone to huge errors.

I was told that I could do this as an extension, but given that I don't 
know what that extension mechanism is going to look like, I can't 
comment on it. In any case, having the "Accept" functionality (turned 
off by default) be part of the product would not be very intrusive, and 
it would save us the extra effort of writing an extension.

*Comment Types - Make it flexible yet provide it in the default product*

Another aspect of metrics is determining the type of each "defect". 
For instance, the reviewer could indicate that a comment is a 
"Performance" issue or a "Security" issue. This would entail adding a 
"Comment Type" drop-down list to the comment dialog. By default we 
could ship a few common comment types, but allow users to add more; a 
rough sketch of how this might be modeled follows the list below.

Even if we only add the following four default comment types, we would 
let companies derive significantly more meaning from their Review Board 
data:

    * *General*:  This is a general comment. It may not be a bug or
      error, just a comment/suggestion.
    * *Coding Style*: This is an issue with the coding style.
    * *Functional Error*:  This issue means that the code is
      functionally wrong.
    * *Non-Functional*:  This is related to non-functional requirements
      such as Security, Performance, etc...
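
Here is a minimal sketch of how those types could be modeled, assuming 
a Django-style Comment model (the model and field names below are 
illustrative, not Review Board's actual schema):

    from django.db import models

    class Comment(models.Model):
        # The four default comment types; an administrator could
        # extend this list for their own organization.
        COMMENT_TYPE_CHOICES = [
            ("general", "General"),              # comment/suggestion
            ("style", "Coding Style"),           # coding-style issue
            ("functional", "Functional Error"),  # code is wrong
            ("nonfunctional", "Non-Functional"), # security, performance
        ]

        text = models.TextField()
        comment_type = models.CharField(
            max_length=16,
            choices=COMMENT_TYPE_CHOICES,
            default="general",
        )

With the type stored on each comment, the metrics become a simple 
aggregation over comment_type.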

I was told that this could also be done as an extension, but why not 
help users adopt better review practices by gathering the right metrics 
out of the box?

Hovanes

hschnit wrote:
> I was also looking at ways to measure metrics. This is necessary to
> make sure that we do a good job at code reviews and it's good for
> process improvements. Without measures we have no way to know if the
> time spent reviewing code is worth the effort.
> So I second Hovanes in saying that there is a need for this kind of
> feature.
>
> I like your approach for measurements. They are non-intrusive for the
> users which is great.
>
> Additionally I was wondering if we should add a way to classify the
> type of comments (defects, coding style, bad practice, ...). I'm not
> sure if it is possible without driving the developers and reviewers
> crazy, though :-)
>
> Cheers,
> -Herve
>   

