Hi Lukasz, this was exactly the part I was referring to. I also thought a hard limit could be a solution, but then wondered whether detecting the general problem earlier might be better. The later the requests fail, the more data will be missing.
I would opt for failing the new requests. This is far simpler to implement and wastes less CPU time on cleanup. It is also closer to what I would expect to happen. But perhaps there are commonly used design patterns in other domains that have to cope with such problems too. As the only thing I understand is binary protocols, I was hoping for some alternate input ;-)

Chris

-----Original Message-----
From: Łukasz Dywicki <[email protected]>
Sent: Wednesday, 7 April 2021 10:42
To: [email protected]
Subject: Re: Drivers causing OutOfMemory exceptions when querying too much too fast

This is related to the RequestTransactionManager. Currently its internal queue (workLog) is not bounded in size, hence it will keep storing all requests in the hope of riding out the spikes.

Looking at a possible solution: the request queue should be bounded in size (maybe configurable). If a new request comes in and there is no space, it should be failed without being placed in the queue, with an appropriate error code ("client busy"?). This is the simplest way; the alternative would be failing the oldest requests, the ones retained closest to the head of the queue. The places to look at are RequestTransaction#submit and RequestTransactionManager#submit.

Best,
Łukasz

On 07.04.2021 10:15, Christofer Dutz wrote:
> Hi all,
>
> Today I learnt of a case where an S7 PLC was asked for a lot of data in very
> short periods. This seems to have caused the send-queue to build up faster
> than it could be drained. In the end this caused OutOfMemory errors. However,
> we should probably detect this situation, in which the queue grows over a
> longer period of time.
>
> How do you think would be the best way to address this? I know we're using
> Julian's transaction thingy to take care of the sequence in which things are
> sent to the PLC ... so I would assume this would be the place to fix it, as
> this would fix the problem for all drivers.
> > @Julian Feinauer<mailto:[email protected]> could you possibly find
> some time to come up with a "solution" to this problem? I think we can't
> gracefully handle it, as this is something where the user is trying to do the
> impossible, but at least we should probably fail requests if the queue is
> growing too fast ... OutOfMemory errors are unfortunately nothing you can
> gracefully recover from.
>
> Chris
>
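To make the "fail new requests when the queue is full" idea concrete, here is a minimal sketch of a bounded work log with fail-fast submission. This is a hypothetical standalone class, not the actual PLC4X RequestTransactionManager API; the class name, the `maxQueueSize` parameter, and the "client busy" error are all illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch (not the real PLC4X class): a transaction manager
// whose internal work log is bounded. When the queue is full, new
// submissions fail immediately with a "client busy" error instead of
// letting the queue grow until an OutOfMemory error occurs.
class BoundedTransactionManager {
    private final int maxQueueSize;
    private final Queue<Runnable> workLog = new ArrayDeque<>();

    BoundedTransactionManager(int maxQueueSize) {
        this.maxQueueSize = maxQueueSize;
    }

    // Fail-fast submit: if the work log is already at capacity, the
    // returned future completes exceptionally right away and the work
    // item is never enqueued.
    synchronized CompletableFuture<Void> submit(Runnable work) {
        if (workLog.size() >= maxQueueSize) {
            CompletableFuture<Void> failed = new CompletableFuture<>();
            failed.completeExceptionally(
                new IllegalStateException("client busy: work log full"));
            return failed;
        }
        workLog.add(work);
        return CompletableFuture.completedFuture(null);
    }

    // Number of work items currently waiting in the queue.
    synchronized int pending() {
        return workLog.size();
    }
}
```

The alternative Łukasz mentions (dropping the oldest request near the head of the queue) would replace the fail-fast branch with a `workLog.poll()` that completes the evicted request's future exceptionally before enqueuing the new one; the fail-new variant shown here is simpler because no already-accepted request ever has to be unwound.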
