Hi Aaron,

Great thoughts, as always. If you keep doing this, I'm going to have no choice but to make you do a Ph.D. :-)

There are several ways to introduce context: (1) using streams, (2) using tooltips
generated from commit comments.

I agree. It shouldn't be that hard to generate a stream that has a data point for each commit and then
generates a tooltip with the commit comment. Of course, we'd have to refine our sensor to send the comment if it doesn't already.
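A minimal sketch of that idea, assuming nothing about Hackystat's actual stream API (the Commit type and the data-point shape here are hypothetical stand-ins):

```python
from dataclasses import dataclass

@dataclass
class Commit:
    timestamp: str    # e.g. "2004-06-14T09:30:00"
    loc_changed: int  # lines of code changed in this commit
    comment: str      # the commit comment the sensor would need to send

def commit_stream(commits):
    """Turn a list of commits into chart data points whose tooltip
    text is the commit comment (hypothetical data-point shape)."""
    return [
        {"x": c.timestamp, "y": c.loc_changed, "tooltip": c.comment}
        for c in commits
    ]

points = commit_stream(
    [Commit("2004-06-14T09:30:00", 42, "refactor sensor shell")]
)
```

The only real requirement this surfaces is the one noted above: the sensor has to ship the comment along with the commit for the tooltip to have anything to show.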


Telemetry - we need to know when there are problems - red flags
Currently, we are guessing whether things are going well or badly based on telemetry
streams.  But we have good-or-bad indicators in Build Results and Issue Defects.
I would claim that we need to use Build Results and Defects to indicate "Red Flags" in
the telemetry streams.  Each Red Flag in a stream or scene would indicate a place to
look for spikes, declines, or steady spots.  This would be an improvement over our
current process of just looking for spikes, declines, or steadiness, because who knows
whether those actually relate to anything significant.  Red Flags flip the process from
'looking for changes leads to something interesting' to 'something interesting (aka a
red flag) will help us understand interesting changes'.

For example, if we see a Build Failure, then we can focus our attention on a time frame
and the telemetry streams that could indicate why that Build Failure happened. This is
another example of putting the telemetry streams into a context that we can understand.
Reported Defects can have a similar effect.
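To make the Red Flag idea concrete, here is one way to flag the regions of a telemetry stream that fall near a "known bad" event such as a build failure or a reported defect. The window size and the data shapes are my own assumptions for illustration, not anything in Hackystat:

```python
def red_flags(stream_times, event_times, window=1):
    """Mark each point of a telemetry stream that lies within `window`
    time units of a 'bad' event (build failure, reported defect).
    Returns a parallel list of booleans: True means 'look here first'."""
    return [
        any(abs(t - e) <= window for e in event_times)
        for t in stream_times
    ]

# Days 0..9 of a daily stream; build failures on days 2 and 7.
flags = red_flags(list(range(10)), [2, 7])
```

This is exactly the inversion described above: instead of scanning the whole stream for spikes and declines, you start from the flagged windows and ask what the stream was doing there.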

Yes, I agree. One can work from two directions. The first is bottom-up, or "opportunistically": looking at the data for potentially interesting co-variances. This has the advantage of surprise, but the disadvantage of having to validate that what we saw was causal. That was the question we were having with Cedric's active time vs. coverage charts yesterday.


The second way is top-down, or goal-driven. Cedric is now embarked upon a telemetry exploration that is intended to help us understand the kinds of build failures that occur and whether they can be correlated with other attributes of project state at that point in time. In this case, he is starting from a "known bad" condition: daily build failure, and working from there.

Telemetry Design - we need help from Information Architects or HCI people
Wow.. I thought I'd never say this, but we could use the help of HCI and IA experts to
help us display information effectively.  Consider this: when I first look at a new
telemetry stream, it takes me a little while to figure out what all those lines are (I
have to figure that out before I can start to think about what they mean).  I would
claim that there must be better ways of presenting this information. Anyway, that was
just a thought.

I think this is more of a resource issue than anything else. It is quite easy to generate ways to improve the usability of the telemetry streams, but finding developer resources to make it happen is harder. I think the telemetry usability is slowly improving, and I guess we'll have to be satisfied with that for a while.


JIRA Sensor improvement - what to do with old data

Good points. I had a meeting with Burt and he is going to work on this issue.

Similarly, the Jupiter Review Eclipse Plugin Sensor uses that same model.  In fact, we
have conducted about 4 reviews without the Jupiter sensor working, and that data will be
in "Hackystat limbo" unless someone reprocesses it. In a much earlier email I stated
that this could be a potential problem and suggested that an Ant sensor send these
review issues off to the server, thus ensuring that all issues are accounted for in
Hackystat. Although this would be harder, I feel that the Jupiter Sensor should collect
Review Activity and the Ant Sensor should collect metrics about the actual Review
Issues. Again, I would claim that IDE-based sensors should just collect activity-type
sensor data and Ant sensors should collect product-type data. This model seems to work
well for Activity and FileMetrics.

Ok.. I know you're thinking that this proposal will make it harder for people who don't
use Ant. Could we do both? Or just offer that option?

I finally get your point. Both would be good. We just have to make sure that we use a consistent timestamp so that we can send the data more than once and not get duplicates on the server.
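The consistent-timestamp point can be sketched as server-side dedup keyed on (sensor data type, timestamp). This is just an illustration of the property being asked for, not Hackystat's actual storage code:

```python
class Server:
    """Toy server that ignores resends of already-seen entries."""
    def __init__(self):
        self.entries = {}  # (sensor_type, timestamp) -> data

    def receive(self, sensor_type, timestamp, data):
        """Accept an entry; resending with the same timestamp is a
        no-op, so sensors can safely send the same data twice."""
        key = (sensor_type, timestamp)
        if key in self.entries:
            return False  # duplicate, ignored
        self.entries[key] = data
        return True

s = Server()
first = s.receive("ReviewIssue", "2004-06-14T09:30:00", {"severity": "minor"})
second = s.receive("ReviewIssue", "2004-06-14T09:30:00", {"severity": "minor"})
```

The design point is that dedup only works if both the IDE sensor and the Ant sensor derive the timestamp from the artifact itself (e.g. the review issue's creation time) rather than from the moment of sending.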


JIRA Sensor
Burt, did you find a "shutdown" event handler? It should just execute send on all
sensor shells one last time.

He's still waiting on a reply from Atlassian.
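In spirit, the shutdown handler asked about above would look something like this, with Python's atexit standing in for whatever hook Atlassian exposes, and SensorShell as a stand-in class, not the real Hackystat one:

```python
import atexit

class SensorShell:
    """Stand-in for a sensor shell that buffers data until send()."""
    def __init__(self, name):
        self.name = name
        self.buffer = ["entry"]  # pretend one entry is still unsent
        self.sent = []

    def send(self):
        self.sent.extend(self.buffer)
        self.buffer = []

shells = [SensorShell("jira"), SensorShell("jupiter")]

def flush_all():
    # One last send on every shell so nothing is lost at shutdown.
    for shell in shells:
        shell.send()

atexit.register(flush_all)
```

Whatever the real hook turns out to be, the handler itself stays this simple: iterate the live shells and call send once more.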

Cheers,
Philip
