Hi Chris,

I would really, really, really (...) love to jump in ... but I'm a bit stressed out at the moment. My approach would be to have a well-specified language (similar to Spring EL or Camel's Simple language) which is "well documented" and parsed via ANTLR (4!) into one of my ASTs, because I can then generate "bodies" in arbitrary languages. This should take about one day, I think, so perhaps I can contribute something soon.
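As a rough illustration of the AST-to-code-generation idea, here is a minimal sketch (the Expression and PropertyAccess classes are invented for illustration only, not from any existing code):

    // Sketch only: a tiny AST node that renders itself as a Java getter chain.
    interface Expression {
        String toJava();
    }

    class PropertyAccess implements Expression {
        private final Expression target;   // null means the implicit "this"
        private final String property;

        PropertyAccess(Expression target, String property) {
            this.target = target;
            this.property = property;
        }

        @Override
        public String toJava() {
            String prefix = (target == null) ? "this" : target.toJava();
            return prefix + ".get"
                    + Character.toUpperCase(property.charAt(0))
                    + property.substring(1) + "()";
        }
    }

Parsing "parameter.numItems" into new PropertyAccess(new PropertyAccess(null, "parameter"), "numItems") and calling toJava() would then yield this.getParameter().getNumItems(), and a second renderer could emit the equivalent access for another target language.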
Perhaps it would be good to specify what we want to support and what the "default" methods are... like "getSize()" or so.

Julian

On 10.06.19, 11:38, "Christofer Dutz" <christofer.d...@c-ware.de> wrote:

Hi all,

while thinking about it, I remembered that about 10 years ago I once created exactly such a grammar, but as an ANTLR3 version. So what I'll do is create a new grammar which is aimed only at being an expression language, because I think this could be something useful in general. Perhaps you all could give some feedback as soon as I've got something.

Chris

On 10.06.19, 10:40, "Christofer Dutz" <christofer.d...@c-ware.de> wrote:

Hi all,

on my trip home a few days ago I managed to also get the serialization running. I am now able to parse a byte message into a model, serialize the model back to bytes, and the byte arrays are equal. However, I am not that happy with the serialization performance, as it takes quite a lot longer to serialize than to parse, which shouldn't be the case. The main reason is that while the implicit fields are simply read during parsing, serializing them requires a lot of expression evaluations. These are usually quite simple expressions, such as:

    exItems = jexl.createExpression("parameter.numItems");

The best option would be to improve the ANTLR grammar to parse the expressions in a more formally correct way, to implement a model for these expressions, and to have them automatically translated to code like:

    this.getParameter().getNumItems();

It should be possible, and a lot faster ... anyone up for the challenge? @Julian? ... could you please help with this? You did such a great job with the initial spec ANTLR grammar.

Chris

On 05.06.19, 10:09, "Christofer Dutz" <christofer.d...@c-ware.de> wrote:

Hi all,

on the train today I'll be working on the serialization (which will be a challenge). I am sure this will be a lot of hard work, but also a great step forward.

Is there any progress on the driver-logic generation front? Otherwise I would probably try to whip up a hand-written Netty layer using the generated model. Without all the parser/serializer code, this should only be a fraction of the existing driver code.

Chris

On 05.06.19, 09:59, "Strljic, Matthias Milan" <matthias.strl...@isw.uni-stuttgart.de> wrote:

Hooray, sounds nice 😉 I hope I find time to play around with it a bit in the next few days.

Greetings

Matthias Strljic, M.Sc.
Universität Stuttgart
Institut für Steuerungstechnik der Werkzeugmaschinen und Fertigungseinrichtungen (ISW)
Seidenstraße 36
70174 Stuttgart
GERMANY
Tel: +49 711 685-84530
Fax: +49 711 685-74530
E-Mail: matthias.strl...@isw.uni-stuttgart.de
Web: http://www.isw.uni-stuttgart.de

-----Original Message-----
From: Christofer Dutz <christofer.d...@c-ware.de>
Sent: Tuesday, June 4, 2019 4:16 PM
To: dev@plc4x.apache.org
Subject: [CodeGen] Performance values

Hi all,

as I mentioned in Slack yesterday, I was able to successfully parse an S7 packet with the code generated by the code generator. I am using Apache Jexl for evaluating the expressions we use all over the place, which got things working quite easily. However, my gut feeling told me that all these Jexl evaluators I'm creating can't be ideal. Not wanting to prematurely optimize something that's already good, I did a measurement: a little test in which I let my parser parse one message 20000 times. It came up with an average time of 0.8ms per message ... this didn't sound too bad compared to the roughly 20ms of the interpreted Daffodil approach.
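For reference, the per-packet pattern being measured here looks roughly like the following (a sketch assuming Apache Commons JEXL 3; the Parameter class is a made-up stand-in for the generated model):

    import org.apache.commons.jexl3.*;

    public class JexlPerPacketSketch {
        // Made-up stand-in for a generated model type.
        public static class Parameter {
            public int getNumItems() { return 42; }
        }

        public static void main(String[] args) {
            Parameter parameter = new Parameter();

            // Done for each packet: build engine and expression, then
            // evaluate against a context holding the model object.
            JexlEngine jexl = new JexlBuilder().create();
            JexlExpression exItems = jexl.createExpression("parameter.numItems");
            JexlContext ctx = new MapContext();
            ctx.set("parameter", parameter);
            Object viaJexl = exItems.evaluate(ctx);      // -> 42

            // The generated alternative would be a plain getter chain:
            int direct = parameter.getNumItems();
            System.out.println(viaJexl + " / " + direct);
        }
    }

Creating the JexlEngine and JexlExpression over and over is the part that turns out to matter, as the profiling below shows.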
But is this already good? It's probably not ideal to compare results against the ones we know are bad; instead, I wanted to compare them to the ones we are proud of. To find out, I whipped up a little manual test with the existing S7 driver. For this I simply plugged the 3 layers together with an EmbeddedChannel and used a custom handler at the end to return the message. This seems to work quite nicely, and I was able to run the same test against the Netty-based S7 driver layers we have been using for the last 2 years. The results were:

    Parsed 20000 packets in 796ms
    That's 0.0398ms per packet

So this is a HUGE difference: the generated parser is roughly 20 times slower. As one last check I ran JProfiler over the benchmark, and it confirmed that 87% of the time was spent in Jexl; however, it was the creation of the Jexl expressions, not their evaluation. So one optimization I'll try is to have the expressions created statically and then simply referenced. This will increase the complexity of the template, but should speed things up. I'll also change the code I'm generating for the type switches to work without Jexl. Simply assuming this would eliminate the time spent in Jexl (a great simplification), we would reduce the parse time to about 0.1ms, which is still about 3 times that of the existing driver. I am assuming that this might be related to the BitInputStream I am using ... but we'll deal with that as soon as we're done getting rid of the time wasted by Jexl.

So far an update on the generated-drivers front.

Chris
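A minimal sketch of the "create the expressions statically, then only reference them" optimization described above (again assuming Apache Commons JEXL 3; the class, field, and method names are illustrative, not the actual generated code):

    import org.apache.commons.jexl3.*;

    public class StaticExpressionsSketch {
        // Built once at class-load time instead of once per packet.
        private static final JexlEngine JEXL = new JexlBuilder().create();
        private static final JexlExpression EX_NUM_ITEMS =
                JEXL.createExpression("parameter.numItems");

        // Per packet, only a cheap context is created and the cached
        // expression is evaluated against it.
        public static Object numItems(Object parameter) {
            JexlContext ctx = new MapContext();
            ctx.set("parameter", parameter);
            return EX_NUM_ITEMS.evaluate(ctx);
        }
    }

This keeps the expensive expression creation out of the per-packet path, which is exactly where JProfiler located the 87%.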