Hurray, sounds nice šŸ˜‰
I hope I find time to play a bit around with it in the next few days.

Greetings
Matthias Strljic, M.Sc.

UniversitƤt Stuttgart
Institut fĆ¼r Steuerungstechnik der Werkzeugmaschinen und 
Fertigungseinrichtungen (ISW)

SeidenstraƟe 36
70174 Stuttgart
GERMANY

Tel: +49 711 685-84530
Fax: +49 711 685-74530

E-Mail: matthias.strl...@isw.uni-stuttgart.de
Web: http://www.isw.uni-stuttgart.de

-----Original Message-----
From: Christofer Dutz <christofer.d...@c-ware.de> 
Sent: Tuesday, June 4, 2019 4:16 PM
To: dev@plc4x.apache.org
Subject: [CodeGen] Performance values

Hi all,

As I mentioned in Slack yesterday, I was able to successfully parse an S7 
packet with the code generated by the code generator.
There I am using Apache Jexl to evaluate the expressions we use all over the 
place, which got things working quite easily.
However, my gut feeling told me that creating all these Jexl evaluators can't 
be ideal.

But not wanting to prematurely optimize something that might already be good 
enough, I took a measurement:

I ran a little test in which I let my parser parse one message 20000 times.

It came up with an average time of 0.8 ms per message. That didn't sound too 
bad compared to the roughly 20 ms of the interpreted Daffodil approach.
But is that actually good? Comparing against results we know are bad isn't 
very meaningful; instead I wanted to compare against the ones we are proud of.

To find out, I whipped up a little manual test with the existing S7 driver.
For this I simply plugged the three layers together with a Netty 
EmbeddedChannel and used a custom handler at the end to return the message.
This worked quite nicely, and I was able to run the same test with the 
Netty-based S7 driver layers we have been using for the last two years.

The results were:
Parsed 20000 packets in 796ms
That's 0.0398ms per packet

So this is a HUGE difference.
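For context, the timing harness behind numbers like these boils down to a loop of this shape. This is only an illustration: parse() below is a trivial stand-in, not the real S7 parser, and the packet bytes are made up.

```java
public class ParseBenchmark {

    // Stand-in for the real parser; the actual test parses a captured S7 packet.
    static int parse(byte[] packet) {
        int checksum = 0;
        for (byte b : packet) {
            checksum += b & 0xFF;
        }
        return checksum;
    }

    public static void main(String[] args) {
        byte[] packet = new byte[]{0x03, 0x00, 0x00, 0x16}; // dummy payload
        int iterations = 20000;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            parse(packet);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // e.g. 796 ms / 20000 packets = 0.0398 ms per packet
        System.out.printf("Parsed %d packets in %dms%n", iterations, elapsedMs);
        System.out.printf("That's %.4fms per packet%n", (double) elapsedMs / iterations);
    }
}
```

One caveat with loops this tight: without warm-up iterations the JIT can skew the first runs, so the averages are rough rather than rigorous.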

As one last check I ran JProfiler over the benchmark, and it confirmed that 
87% of the time was spent in Jexl.
However, it was the creation of the Jexl expressions, not their evaluation.
So one optimization I'll try is to create the expressions statically and then 
simply reference them.
This will increase the complexity of the template, but it should speed things 
up.
I'll also change the code I'm generating for the type switches to work 
without Jexl.
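The static-caching idea is the usual compile-once/evaluate-many pattern. A minimal stdlib-only sketch of that pattern follows; CompiledExpression and compile() are stand-ins for Jexl's compiled expression and its creation step, so this illustrates the caching, not actual PLC4X or Jexl code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ExpressionCache {

    // Stand-in for a compiled Jexl expression.
    static final class CompiledExpression {
        final String source;
        CompiledExpression(String source) { this.source = source; }
    }

    // Counts how often the expensive compile step actually runs.
    static final AtomicInteger compileCount = new AtomicInteger();

    // Stand-in for the expensive expression-creation step.
    static CompiledExpression compile(String source) {
        compileCount.incrementAndGet();
        return new CompiledExpression(source);
    }

    static final Map<String, CompiledExpression> CACHE = new ConcurrentHashMap<>();

    // Generated code would look expressions up here (or hold them in
    // static final fields) instead of recompiling them on every parse.
    static CompiledExpression expression(String source) {
        return CACHE.computeIfAbsent(source, ExpressionCache::compile);
    }

    public static void main(String[] args) {
        expression("payload.length * 8");
        for (int i = 0; i < 20000; i++) {
            expression("payload.length * 8"); // cache hit: no recompilation
        }
        System.out.println("compilations: " + compileCount.get()); // prints 1
    }
}
```

Hoisting each expression into a static final field would go one step further and remove even the map lookup from the hot path.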

Simply assuming this eliminates the time spent in Jexl (a great 
simplification), we would reduce the parse time to about 0.1 ms, which is 
still about three times that of the existing driver.
I suspect the remaining gap is related to the BitInputStream I am using, but 
we'll deal with that as soon as we're done getting rid of the time wasted by 
Jexl.
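That suspicion is plausible: a bit-granular input stream has to do per-bit shift-and-mask bookkeeping that a plain byte-oriented stream skips entirely. A minimal sketch of what such a stream typically looks like (not the actual PLC4X implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal bit-granular reader, MSB first; illustrative only.
public class BitInputStream {

    private final InputStream in;
    private int currentByte;   // byte we are currently consuming bits from
    private int bitsRemaining; // unread bits left in currentByte

    public BitInputStream(InputStream in) {
        this.in = in;
    }

    // Reads n (1..32) bits; note the per-bit loop a byte-aligned read avoids.
    public int readBits(int n) throws IOException {
        int result = 0;
        for (int i = 0; i < n; i++) {
            if (bitsRemaining == 0) {
                currentByte = in.read();
                if (currentByte < 0) throw new IOException("unexpected end of stream");
                bitsRemaining = 8;
            }
            bitsRemaining--;
            result = (result << 1) | ((currentByte >> bitsRemaining) & 1);
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // 0xA5 = 1010 0101: read it as two 4-bit nibbles.
        BitInputStream bits = new BitInputStream(
                new ByteArrayInputStream(new byte[]{(byte) 0xA5}));
        System.out.println(bits.readBits(4)); // 0b1010 = 10
        System.out.println(bits.readBits(4)); // 0b0101 = 5
    }
}
```

An obvious later optimization would be a fast path that reads whole bytes directly whenever the stream happens to be byte-aligned.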

So much for an update from the generated-drivers front.

Chris

