Yeah, this is all good stuff.

We're basically talking about 'personal CI' here, I think [1]. Nothing wrong with that, but we should remember that some of this can be deferred until check-in.

[1] http://silkandspinach.net/2009/01/18/a-different-use-for-cruisecontrol/

On 14 Apr 2009, at 21:13, Andrew Premdas wrote:

One simple thing I asked about the other day was running multiple instances of autotest to do different things. Currently I'd like to run one for my specs and one for my features, but you could easily extend this idea. Creating several profiles that run at the same time, with the long-running ones at low priority, would give a range of feedback that would eventually be fairly complete (on a big project it might fully catch up overnight, or at the weekend), while still providing enough feedback to iterate quickly with reasonable confidence.
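
Roughly something like this, in plain Ruby -- the commands and priority values are only placeholders, not real autotest options:

    # Rough sketch only: spawn one watcher per profile, and give the slow,
    # long-running profile a lower scheduling priority. The commands stand
    # in for whatever each profile actually runs.
    profiles = [
      { :cmd => "autospec",          :nice => 0  },  # fast feedback: specs
      { :cmd => "cucumber features", :nice => 15 },  # slow feedback: features
    ]

    pids = profiles.map do |profile|
      fork do
        Process.setpriority(Process::PRIO_PROCESS, 0, profile[:nice])
        exec(profile[:cmd])
      end
    end

    pids.each { |pid| Process.wait(pid) }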

2009/4/14 Stephen Eley <sfe...@gmail.com>
On Mon, Apr 13, 2009 at 7:46 AM, aslak hellesoy
<aslak.helle...@gmail.com> wrote:
>
> So if someone develops a better AutoTest with a plugin architecture, and
> that doesn't have to run as a long-lived process, then I'd be very
> interested in writing the neural network part - possibly backed by FANN
> (http://leenissen.dk/fann/)

In the immortal words of Socrates, "That rocks."

The nice thing about separating concerns like this -- the reason why
design patterns appeal so much to me -- is that pieces can rather
easily be built up incrementally.  As Matt said, thinking 'neural
networks' with this is way beyond the level of anything I'd had in my
head.  But it's a damn cool idea.

I can think of a few ways to handle this particular chunk, with levels
of complexity that would scale up (rough sketches of each follow the
list):
1.) DUMB REACTION: Just re-run the tests that failed in this run
cycle.  Then, periodically, re-run the passing tests in the background
as a regression check.  (This isn't much beyond what Autotest does
now.)

2.) OBSERVERS: Allow handlers to register themselves against certain
files, so that when a file is changed, those handlers get run.
Multiple handlers can observe any given file, and a handler can
declare multiple rules, including directories or pattern matches.
(Again, Autotest has something _sort of_ like this, but not nearly as
flexible.)

3.) PERSISTENCE: Track the history of tests and the times they were
created, edited, last run, and last failed.  Also track file
modification times.  When a file changes, first run the tests that are
either new or have failed since the last time the file was changed.
Then run the tests that went from failing to passing in that time.
(This could certainly be improved -- I haven't sat down to figure out
the actual rule set -- but you get the gist.  Know when things changed
and set priorities accordingly.)

4.) INTELLIGENCE: Aslak's neural network.  Let the system figure out
which tests matter to which files, and run what it thinks it ought to
run.  Maybe use code coverage analysis.  It can 'learn' and improve
when the full suite is run and it discovers new failures.
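
To make those four concrete, here are some very rough Ruby sketches.
None of this is real Autotest code; every class and method name below
is made up.  For 1, the dumb reaction:

    # Sketch of 1: keep re-running the failures, and occasionally sweep the
    # passing tests in the background as a regression check. 'runner' is an
    # imaginary object that runs tests and returns the ones that failed.
    class DumbReaction
      def initialize(runner)
        @runner = runner
        @failed = []
        @passed = []
      end

      def file_changed(all_tests)
        to_run  = @failed.empty? ? all_tests : @failed
        @failed = @runner.run(to_run)
        @passed = all_tests - @failed
      end

      def background_sweep
        @failed |= @runner.run(@passed)   # catch regressions in 'green' tests
      end
    end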
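
For 2, the observers -- a handler registers a pattern (a glob, a
directory, or a regexp) and gets called for every matching change:

    # Sketch of 2: handlers register themselves against files, directories,
    # or patterns; every handler matching a changed file gets run.
    class Observers
      def initialize
        @rules = []   # [pattern, handler] pairs
      end

      def register(pattern, &handler)
        @rules << [pattern, handler]
      end

      def file_changed(path)
        @rules.each do |pattern, handler|
          handler.call(path) if matches?(pattern, path)
        end
      end

      private

      def matches?(pattern, path)
        if pattern.is_a?(Regexp)
          pattern =~ path
        else
          File.fnmatch?(pattern, path, File::FNM_PATHNAME)
        end
      end
    end

    # Multiple handlers can watch the same file, and one handler can
    # declare several rules.
    watcher = Observers.new
    watcher.register('lib/**/*.rb')    { |f| puts "run the specs for #{f}" }
    watcher.register('features/**/*') { |f| puts "run cucumber on #{f}" }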
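
For 3, persistence -- as I said, I haven't worked out the real rule
set, so the priorities below are just one guess:

    # Sketch of 3: remember when each test was created, last run, and last
    # failed, and order the next run from that history plus the file's mtime.
    class Persistence
      Record = Struct.new(:created_at, :last_run_at, :last_failed_at)

      def initialize
        @history = Hash.new { |h, test| h[test] = Record.new }
      end

      def record_run(test, failed, now = Time.now)
        rec = @history[test]
        rec.created_at   ||= now
        rec.last_run_at    = now
        rec.last_failed_at = now if failed
      end

      # One guess at the priorities -- not the final rule set.
      def ordered_tests(tests, file_mtime)
        tests.sort_by do |test|
          rec = @history[test]
          if rec.last_run_at.nil?
            0   # brand new test: run it first
          elsif rec.last_failed_at && rec.last_failed_at >= file_mtime
            0   # failing since the file last changed
          elsif rec.last_failed_at
            1   # went from failing to passing at some point
          else
            2   # has always passed: run it last
          end
        end
      end
    end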
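
For 4 I won't pretend to sketch a neural network, and this has nothing
to do with FANN -- the crudest possible stand-in is a score per
file/test pair, bumped whenever a full-suite run shows that a change to
that file coincided with that test failing:

    # Crude stand-in for 4: 'learn' which tests matter to which files from
    # what the full suite discovers, then use the scores to pick tests.
    class Relevance
      def initialize
        @score = Hash.new(0.0)
      end

      # called after a full-suite run
      def learn(changed_files, failed_tests)
        changed_files.each do |file|
          failed_tests.each { |test| @score[[file, test]] += 1.0 }
        end
      end

      # which tests does the system *think* matter to this file?
      def tests_for(file, all_tests, threshold = 1.0)
        all_tests.select { |test| @score[[file, test]] >= threshold }
      end
    end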

In all four of these cases, I still think it's imperative to run the
full suite.  None of these methods are foolproof, and code is tricky
and makes weird things happen in weird crevices.  That's _why_ testing
must be done.  But running the suite doesn't have to be a 'blocking'
activity, like it is with Autotest now.  It can happen in bits and
pieces, when nothing else is going on, and it can be configured to
only grab your attention when something failed unexpectedly.

(That's one of the prime reasons for the 'multiple output views'
feature, by the way.  When I said output views I wasn't just thinking
of the console or a window.  I'm also thinking Dashboard widgets, or
gauges in the toolbar, or RSS feeds, or dynamic wallpaper, or whatever
else anyone can think of.  Stuff that stays out of your way until it
either needs you or you choose to look at it.)

Still making sense?  This is starting to sound pretty big and pretty
complex -- but I don't think it strictly needs to be.  #1 and #2 above
are pretty easy.  The others don't have to be built before releasing
something.  And, of course, you wouldn't have to pick just one
selection module.  You could run or disable all of these depending on
your project's needs, or come up with your own system involving Tarot
cards and Linux running on a dead badger.(*)

I just want to build a core that detects changes in stuff, tells other
stuff about it, and passes on what that stuff says about it to a third
set of stuff.  The rest is implementation-specific details.  >8->
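
In code, that core could be as small as this -- the names are invented,
and the handlers and reporters could be any of the selection modules
and output views above:

    # Sketch of the core: detect changes, tell the handlers, pass whatever
    # the handlers say on to the reporters. Everything else plugs in.
    class Core
      def initialize(handlers, reporters)
        @handlers  = handlers    # e.g. the selection modules sketched above
        @reporters = reporters   # console, dashboard widget, RSS feed, ...
      end

      def file_changed(path)
        @handlers.each do |handler|
          results = handler.call(path)
          @reporters.each { |reporter| reporter.call(results) }
        end
      end
    end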

(* http://www.strangehorizons.com/2004/20040405/badger.shtml )


--
Have Fun,
  Steve Eley (sfe...@gmail.com)
  ESCAPE POD - The Science Fiction Podcast Magazine
  http://www.escapepod.org


Matt Wynne
http://beta.songkick.com
http://blog.mattwynne.net



