Hi Lukasz,

I think the whole idea was to have some options for the transport, some for the 
driver and some for shared functionality.
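
Just to illustrate how that grouping could look on the connection string (the
option names and prefixes below are purely hypothetical, not options we ship
today):

import org.apache.plc4x.java.PlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;

public class ConnectionOptionsExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical split: "transport." options would go to the transport,
        // "s7." options to the driver, and "tx." options to shared
        // functionality such as the transaction manager.
        try (PlcConnection connection = new PlcDriverManager().getConnection(
                "s7://192.168.23.30?transport.connect-timeout=5000"
                + "&s7.pdu-size=480&tx.requestQueueMax=100")) {
            // ... use the connection ...
        }
    }
}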

Chris


-----Original Message-----
From: Łukasz Dywicki <l...@code-house.org>
Sent: Wednesday, April 7, 2021 18:29
To: dev@plc4x.apache.org
Subject: Re: AW: Drivers causing OutOfMemory exceptions when querying too much
too fast

I don't mind the camel way:
?tx.requestCountMax=100&tx.requestCount=200&tx.requestTimeout=300&transactionManager=#tm

;-)

If we can get the same for all drivers, that would be great. Currently we have
a dedicated setting just in S7 but not in the other drivers.

I also wouldn't mind exposing stats from the driver or the netty pipeline up
to its callers (i.e. byte count, request count)! ;-)
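
A rough sketch of what such a pipeline stat could look like, purely as an
illustration (the handler below is made up, nothing like it exists in the
drivers today):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import java.util.concurrent.atomic.LongAdder;

// Counts the bytes flowing through the pipeline in both directions, so a
// driver could expose the totals to its callers.
class ByteCountingHandler extends ChannelDuplexHandler {
    final LongAdder bytesRead = new LongAdder();
    final LongAdder bytesWritten = new LongAdder();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof ByteBuf) {
            bytesRead.add(((ByteBuf) msg).readableBytes());
        }
        super.channelRead(ctx, msg);
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (msg instanceof ByteBuf) {
            bytesWritten.add(((ByteBuf) msg).readableBytes());
        }
        super.write(ctx, msg, promise);
    }
}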

Best,
Łukasz

On 07.04.2021 14:13, Christofer Dutz wrote:
> It would definitely be the quickest fix ... 
> 
> It could be a default value, and we could add a config option to increase
> that (yes ... I know, Lukasz ... Camel and stuff ;-))
> 
> Chris
> 
> -----Original Message-----
> Von: Sebastian Rühl <sebastian.ruehl...@googlemail.com.INVALID>
> Sent: Wednesday, April 7, 2021 10:53
> To: dev@plc4x.apache.org
> Subject: Re: Drivers causing OutOfMemory exceptions when querying too
> much too fast
> 
> We could do it like Golang does with its queues:
> you size them, and if the queue is full the scheduling would simply
> block (blocking the async call so it won’t return)
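> 
> A minimal sketch of that back-pressure idea in Java (the class name and the
> queue size of 100 below are just assumptions for illustration):
> 
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
> 
> class BoundedSubmission {
>     // Bounded queue: put() blocks the submitting thread once 100 requests
>     // are pending, instead of letting the backlog grow without limit.
>     private final BlockingQueue<Runnable> workLog = new ArrayBlockingQueue<>(100);
> 
>     void submit(Runnable request) throws InterruptedException {
>         workLog.put(request); // back-pressure: blocks while the queue is full
>     }
> }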
> 
> Sebastian
> 
>> On 07.04.2021 at 10:50, Christofer Dutz <christofer.d...@c-ware.de> wrote:
>>
>> Hi Lukasz,
>>
>> This was exactly the part I was referring to.
>> I also thought a hard limit could be a solution, but then thought that
>> detecting a general problem earlier might be a better approach.
>> The later the requests fail, the more data will be missing.
>>
>> I would opt for failing the new requests. This is far simpler to implement
>> and wastes less CPU time on cleanup. It is also closer to what I would
>> expect to happen.
>>
>> But perhaps there are some commonly used design patterns in other
>> domains that have to cope with such problems too. As the only thing
>> I understand is binary protocols, I was hoping for some alternative
>> input ;-)
>>
>> Chris
>>
>>
>>
>>
>> -----Original Message-----
>> From: Łukasz Dywicki <l...@code-house.org>
>> Sent: Wednesday, April 7, 2021 10:42
>> To: dev@plc4x.apache.org
>> Subject: Re: Drivers causing OutOfMemory exceptions when querying too
>> much too fast
>>
>> This is related to the RequestTransactionManager. Currently its internal
>> queue (workLog) is not bounded in size, hence it will keep storing all
>> requests in the hope of riding out the spikes.
>>
>> Looking at a possible solution: the request queue should be bounded in size
>> (maybe configurable); if a new request comes in and there is no space, it
>> should be failed with an appropriate error code (client busy?) instead of
>> being placed in the queue.
>> This is the simplest way; the other option would be failing the oldest
>> requests, the ones retained closer to the head of the queue.
>>
>> Places to look at are RequestTransaction#submit and 
>> RequestTransactionManager#submit.
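>>
>> A rough standalone sketch of that first variant (class, method and error
>> names here are made up for illustration, not the actual PLC4X internals):
>>
>> import java.util.ArrayDeque;
>> import java.util.Deque;
>> import java.util.concurrent.CompletableFuture;
>>
>> class BoundedWorkLog {
>>     private final Deque<Runnable> workLog = new ArrayDeque<>();
>>     private final int maxQueueSize; // could come from a connection option
>>
>>     BoundedWorkLog(int maxQueueSize) {
>>         this.maxQueueSize = maxQueueSize;
>>     }
>>
>>     // Fails new submissions with a "client busy" error once the queue is
>>     // full, instead of letting it grow until the JVM runs out of memory.
>>     synchronized CompletableFuture<Void> submit(Runnable transaction) {
>>         if (workLog.size() >= maxQueueSize) {
>>             CompletableFuture<Void> failed = new CompletableFuture<>();
>>             failed.completeExceptionally(
>>                 new IllegalStateException("Client busy: transaction queue is full"));
>>             return failed;
>>         }
>>         workLog.add(transaction);
>>         return CompletableFuture.completedFuture(null);
>>     }
>> }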
>>
>> Best,
>> Łukasz
>>
>> On 07.04.2021 10:15, Christofer Dutz wrote:
>>> Hi all,
>>>
>>> Today I learnt of a case where an S7 PLC was asked for a lot of data in very
>>> short periods. This seems to have caused the send-queue to build up faster
>>> than it could be drained. In the end this caused OutOfMemory errors.
>>> We should probably detect this situation, in which the queue keeps growing
>>> over a longer period of time.
>>>
>>> What do you think would be the best way to address this? I know we're using
>>> Julian's transaction thingy to take care of the sequence in which things
>>> are sent to the PLC ... so I would assume this would be the place to fix it,
>>> as this would fix the problem for all drivers.
>>>
>>> @Julian Feinauer<mailto:j.feina...@pragmaticminds.de> could you possibly
>>> find some time to come up with a "solution" to this problem? I think we
>>> can't handle it gracefully, as this is a case where the user is trying to
>>> do the impossible, but at least we should probably fail requests if the
>>> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
>>> you can gracefully recover from.
>>>
>>> Chris
>>>
>>>
> 
