I'm not sure it makes sense to group all the testing packages under a separate
umbrella that covers the code they test.
While there might be commonalities in building a test harness, I would think that
each testing tool needs deep knowledge of the internals of the tool it is
testing. As such, it would need someone with the experience to code it.

I don't see what advantage there would be in combining PigUnit and, say, MRUnit,
for example.
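To make that concrete: an MRUnit test is written directly against MapReduce's
own key/value types and drivers, which is the kind of tool-specific knowledge
I mean. A minimal sketch (WordCountMapper is a hypothetical mapper under test,
not something from any of these projects):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Test;

    public class WordCountMapperTest {
      @Test
      public void emitsOnePairPerWord() {
        // MRUnit runs the mapper in-process against real Writable types;
        // runTest() fails the test if the emitted pairs don't match.
        MapDriver.<LongWritable, Text, Text, IntWritable>newMapDriver(new WordCountMapper())
            .withInput(new LongWritable(0), new Text("hadoop hadoop"))
            .withOutput(new Text("hadoop"), new IntWritable(1))
            .withOutput(new Text("hadoop"), new IntWritable(1))
            .runTest();
      }
    }

A PigUnit test, by contrast, is written in terms of Pig scripts and aliases, so
the overlap between the two tools is mostly skin-deep.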
--I
On Feb 16, 2011, at 2:50 PM, Konstantin Boudnik wrote:

> Steve.
> 
> If the project under discussion provides a common harness into which such a
> test artifact (think of a Maven artifact, for example) plugs in and is
> executed automatically, with all the needed tools and dependencies resolved
> for you - would that be appealing to end users?
> 
> As Joep said, this "...will reduce the effort to take any (set of) changes
> from development into production." Take it one step further: when your
> cluster is 'assembled' you need to validate it (on top of a concrete OS,
> etc.); is it desirable to follow an N-step process to bring up whatever
> testing workload you need, or would you prefer to simply do something like:
> 
>    wget http://workloads.internal.mydomain.com/stackValidations/v12.4.pom \
>        && mvn verify
> 
> and check the results later on?
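> For illustration, such a workload artifact could carry integration tests bound
> to the verify phase and pointed at the live cluster. A minimal sketch of one
> such test (the class name and path are hypothetical; cluster settings are
> assumed to be resolved onto the classpath by the harness):
> 
>     import static org.junit.Assert.assertTrue;
> 
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FSDataOutputStream;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>     import org.junit.Test;
> 
>     public class HdfsSmokeIT {
>       @Test
>       public void canWriteToTheCluster() throws Exception {
>         // Picks up core-site.xml / hdfs-site.xml from the classpath.
>         Configuration conf = new Configuration();
>         FileSystem fs = FileSystem.get(conf);
>         Path p = new Path("/tmp/stack-validation/smoke.txt");
>         FSDataOutputStream out = fs.create(p, true);
>         out.writeUTF("smoke");
>         out.close();
>         assertTrue(fs.exists(p));
>         fs.delete(p, false);
>       }
>     }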
> 
> These are going to be the same tools that developers use for their own tasks,
> although the worksets will be different. So what?
> 
> Cos
> 
> On Wed, Feb 16, 2011 at 11:37AM, Steve Loughran wrote:
>> On 15/02/11 21:58, Konstantin Boudnik wrote:
>>> While the MRUnit discussion draws to its natural conclusion I would like
>>> to bring up another point which might be well aligned with that
>>> discussion. Patrick Hunt brought up this idea earlier today and I
>>> believe it should be elaborated further.
>>> 
>>> A number of testing projects, both for Hadoop and for Hadoop-related
>>> components, have been brought to life over the last year or two. Among
>>> them are MRUnit, PigUnit, YCSB, Herriot, and perhaps a few more. They
>>> all focus on more or less the same problem: validation of Hadoop or of
>>> on-top-of-Hadoop components, or application-level testing for Hadoop.
>>> However, the fact that they are spread across a wide variety of
>>> projects seems to confuse/mislead Hadoop users.
>>> 
>>> How about incubating a bigger Hadoop (Pig, Oozie, HBase) testing
>>> project which will take care of the development and support of common
>>> (where possible) tools, frameworks, and the like? Please feel free to
>>> share your thoughts :)
>>> --
>> 
>> I think it would be good, though specific projects will need/have their
>> own testing needs. I'd expect the focus for testing redistributables to
>> be on helping Hadoop users test their stuff against subsets of data,
>> rather than the hadoop-*-dev problem of "stressing the Hadoop stack once
>> your latest patch is applied".
>> 
>> That said, the whole problem of qualifying an OS, Java release, and
>> cluster is something we'd expect most end-user teams to have to do -
>> right now terasort is the main stress test.
