Overall, it sounds like a good idea to add more long-running, stability, and 
load tests, so we are more proactive about what we otherwise only encounter in 
high-load real-life usage scenarios (and can then only report / fix after the 
fact).

Julian

On 19.02.20, 13:26, "Strljic, Matthias Milan" 
<matthias.strl...@isw.uni-stuttgart.de> wrote:

    +1 from my side. I think it is very important to include a test suite by 
design if we autogenerate parts of our stack. The proposed idea seems to build 
a nice first base, which could be extended for combinations of autogenerated 
and manually coded protocol semantics.
    
    And ty @Chris for your effort 😊
    
    Greetings Mathi
    Matthias Strljic, M.Sc.
    
    
    -----Original Message-----
    From: Christofer Dutz <christofer.d...@c-ware.de> 
    Sent: Tuesday, February 18, 2020 9:54 AM
    To: dev@plc4x.apache.org
    Subject: [TESTNG] Proposal for easily testing generated drivers
    
    Hi all,
    
    so we have more and more ported drivers, which is a good thing. However, 
most of these are not covered by unit or integration tests.
    I wouldn’t want to release them like that.
    
    So I was thinking about how we can write tests for these in a universal 
way, where you don’t have to learn a completely new approach to testing for 
every driver.
    
    The idea I had, and for which I would like your feedback, would be more of 
an integration test suite.
    
    We already have an XML-based unit-test framework for the parsers, which 
helps get the messages themselves correct and can prove the parsers and 
serializers are doing what we want them to … a lot more tests could be created 
here.
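    
    To illustrate, the heart of such a test boils down to a parse/serialize 
round trip. A minimal sketch in JUnit 5, with a trivial stand-in codec rather 
than our generated code:
    
    // Minimal sketch of the round-trip property these tests check; the
    // stand-in "codec" below only illustrates the shape of the assertion.
    import static org.junit.jupiter.api.Assertions.assertArrayEquals;
    import java.util.Arrays;
    import org.junit.jupiter.api.Test;
    
    class RoundTripSketch {
        // Stand-in for a generated message type: it just wraps the raw bytes.
        record Message(byte[] payload) {}
    
        static Message parse(byte[] raw) {
            return new Message(Arrays.copyOf(raw, raw.length));
        }
    
        static byte[] serialize(Message message) {
            return message.payload();
        }
    
        @Test
        void serializerReproducesTheParsedBytes() {
            byte[] raw = {0x00, 0x01, 0x00, 0x00, 0x00, 0x06};
            assertArrayEquals(raw, serialize(parse(raw)));
        }
    }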
    
    Based on this framework, I would like to build something that takes things 
one step further.
    
    There is one transport called “test” … this allows passing bytes into a 
pipeline and making assertions at both ends of the Netty pipeline. It also 
allows reading the output from the pipeline.
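    
    Just to illustrate the principle (this is not the actual “test” transport 
implementation): Netty’s stock EmbeddedChannel behaves quite similarly, so a 
sketch of the idea could look like this:
    
    // Sketch of the "push bytes in, assert on what comes out" idea using
    // Netty's EmbeddedChannel; the real "test" transport may differ.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.embedded.EmbeddedChannel;
    import org.junit.jupiter.api.Test;
    
    class TestTransportSketch {
        @Test
        void bytesTravelThroughThePipeline() {
            // In a driver test the channel would hold the driver's protocol
            // handlers; an empty channel simply passes the bytes through.
            EmbeddedChannel channel = new EmbeddedChannel();
    
            channel.writeInbound(Unpooled.wrappedBuffer(new byte[] {0x01, 0x02}));
    
            ByteBuf received = channel.readInbound();
            assertEquals(0x01, received.readByte());
            assertEquals(0x02, received.readByte());
        }
    }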
    
    I would now like to combine this with the XML notation used in the 
unit-test framework to specify the expected interaction with the driver … in 
this we could treat one testcase as a sequence of “send” and “expect” elements. 
The framework would step through the elements from top to bottom. If it gets a 
“send” element, it parses the XML message, serializes it, and sends those bytes 
to the pipeline. If it processes an “expect” element, it waits until it gets a 
byte[] from the pipeline, parses it, serializes it back to XML, and compares 
that to the expected XML inside the “expect” tag.
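    
    As a rough sketch of the stepping logic (every name here is invented for 
illustration, none of this is existing API):
    
    // Hypothetical sketch of the send/expect stepping; all names are made up.
    import java.util.List;
    
    class TestsuiteRunnerSketch {
        enum Kind { SEND, EXPECT }
        record Step(Kind kind, String xml) {}
    
        interface Pipeline {
            void write(byte[] bytes); // push bytes into the driver pipeline
            byte[] awaitOutput();     // block until the pipeline emits bytes
        }
    
        interface XmlCodec {
            byte[] toBytes(String xml); // parse the XML message, serialize to bytes
            String toXml(byte[] bytes); // parse the bytes, render them back as XML
        }
    
        static void run(List<Step> steps, Pipeline pipeline, XmlCodec codec) {
            for (Step step : steps) {
                switch (step.kind()) {
                    case SEND -> pipeline.write(codec.toBytes(step.xml()));
                    case EXPECT -> {
                        String actual = codec.toXml(pipeline.awaitOutput());
                        if (!actual.equals(step.xml())) {
                            throw new AssertionError(
                                "expected:\n" + step.xml() + "\nbut got:\n" + actual);
                        }
                    }
                }
            }
        }
    }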
    
    I think with a setup like this we could produce a lot of integration tests, 
which should get the coverage up pretty fast, and it should also make it easy 
to define reproduction scenarios for bug reports.
    
    What do you think?
    
    Chris
    
