BIG +1 from my side.
Also, given our history of multiple leaks and similar issues, it would make sense 
to have such a framework to model "long running" tests which then monitor 
resource usage. 
This would also be a huge step forward for us internally, as we would no longer 
have to do the canary testing in staging or prod : )

Julian

PS: If we at pragmatic minds can support this somehow with our infrastructure, 
we are open to it!

On 18.02.20, 09:54, "Christofer Dutz" <[email protected]> wrote:

    Hi all,
    
    so we have more and more ported drivers, which is a good thing. However, most 
of these are not covered by unit or integration tests.
    I wouldn’t want to release them like that.
    
    So I was thinking about how we can write tests for these in a universal way, 
so you don’t have to learn a completely new approach to testing for every driver.
    
    The idea I had, and for which I would like your feedback, would be more of an 
integration test suite.
    
    We already have an XML-based unit-test framework for the parsers, which helps 
get the messages themselves correct and can prove the parsers and serializers 
are doing what we want them to … here a lot more tests could be created.
    
    Based on this framework, I would like to build something that takes things 
one step further.
    
    There is one transport called “test” … it allows passing bytes into a 
pipeline and making assertions at both ends of the Netty pipeline. It also 
allows reading output from the pipeline.
    
    I would now like to use the XML notation from the unit-test framework to 
specify the expected interaction with the driver … here we could treat one 
testcase as a sequence of “send” and “expect” elements. The framework would 
step through the elements from top to bottom. If it gets a “send” element, it 
will parse the XML message, serialize it, and send those bytes to the pipeline. 
If it processes an “expect” element, it will wait until it gets a byte[] from 
the pipeline, parse it, serialize it as XML, and compare that to the expected 
XML in the “expect” tag.
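    To make the step-through idea concrete, here is a minimal sketch of such a 
runner in Java. All names here (TestStep, BytePipeline, the echo pipeline) are 
hypothetical illustrations of the concept, not the actual PLC4X test-transport 
API, and the byte comparison stands in for the parse-and-compare-as-XML step:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class DriverTestRunner {

    // A testcase is a sequence of SEND (inject bytes) and
    // EXPECT (compare driver output) steps, as in the XML notation.
    enum Kind { SEND, EXPECT }

    record TestStep(Kind kind, byte[] payload) {}

    // Stand-in for the "test" transport: bytes go in one end,
    // driver output is read from the other.
    interface BytePipeline {
        void write(byte[] bytes);   // feed bytes into the pipeline
        byte[] read();              // read bytes the driver produced
    }

    static void run(List<TestStep> steps, BytePipeline pipeline) {
        // Step through the elements from top to bottom.
        for (TestStep step : steps) {
            switch (step.kind()) {
                case SEND -> pipeline.write(step.payload());
                case EXPECT -> {
                    byte[] actual = pipeline.read();
                    if (!Arrays.equals(actual, step.payload())) {
                        throw new AssertionError("expected "
                            + Arrays.toString(step.payload())
                            + " but got " + Arrays.toString(actual));
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        // Toy FIFO pipeline that simply echoes whatever was sent to it.
        Deque<byte[]> buffer = new ArrayDeque<>();
        BytePipeline echo = new BytePipeline() {
            public void write(byte[] bytes) { buffer.addLast(bytes); }
            public byte[] read() { return buffer.removeFirst(); }
        };
        List<TestStep> steps = List.of(
            new TestStep(Kind.SEND, new byte[]{0x01, 0x02}),
            new TestStep(Kind.EXPECT, new byte[]{0x01, 0x02}));
        run(steps, echo);
        System.out.println("testcase passed");
    }
}
```

    In the real framework, the payloads would come from parsing the XML 
messages in the testcase rather than being hard-coded byte arrays.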
    
    I think with a setup like this we could produce a lot of integration tests 
that should get the coverage up pretty fast, and it should also make it easy to 
define scenarios for bug reports.
    
    What do you think?
    
    Chris
    
