Re: Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Christofer Dutz
Hi Lukasz,

I think the whole idea was to have some options for the transport, some for the 
driver and some for shared functionality.
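
Just to make that concrete, here is a minimal sketch of how such scoped options could look on a connection string; the "tcp.", "s7." and "tx." prefixes and the tx.requestCountMax option are assumptions for illustration only, not existing API:

    import org.apache.plc4x.java.PlcDriverManager;
    import org.apache.plc4x.java.api.PlcConnection;

    public class ScopedOptionsSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string: "tcp."-prefixed options would
            // target the transport, "s7." the driver, and "tx." the shared
            // transaction handling. None of these option names are confirmed.
            String url = "s7://192.168.23.30"
                    + "?tcp.connect-timeout=1000"
                    + "&s7.remote-rack=0"
                    + "&tx.requestCountMax=100";
            try (PlcConnection connection = new PlcDriverManager().getConnection(url)) {
                System.out.println("Connected: " + connection.isConnected());
            }
        }
    }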

Chris


-----Original Message-----
From: Łukasz Dywicki
Sent: Wednesday, April 7, 2021 18:29
To: dev@plc4x.apache.org
Subject: Re: Re: Drivers causing OutOfMemory exceptions when querying too much
too fast

I don't mind the Camel way:
?tx.requestCountMax=100=200=300=#tm

;-)

If we can get the same for all drivers, that would be great. Currently we have
a dedicated setting just in the S7 driver but not in the others.

I also wouldn't mind exposing stats from the driver or the netty pipeline up to
its callers (i.e. byte count, request count)! ;-)

Best,
Łukasz

On 07.04.2021 14:13, Christofer Dutz wrote:
> It would definitely be the quickest fix ... 
> 
> It could be a default value, and we could add a config option to increase
> it (Yes ... I know Lukasz ... Camel and stuff ;-))
> 
> Chris
> 
> -----Original Message-----
> From: Sebastian Rühl
> Sent: Wednesday, April 7, 2021 10:53
> To: dev@plc4x.apache.org
> Subject: Re: Drivers causing OutOfMemory exceptions when querying too
> much too fast
> 
> We could do it the way Go does with its queues:
> you size them, and if the queue is full, the scheduling would simply
> block (blocking the async call so it won’t return)
> 
> Sebastian
> 
>> On 07.04.2021 at 10:50, Christofer Dutz wrote:
>>
>> Hi Lukasz,
>>
>> this was exactly the part I was referring to.
>> I also thought a hard limit could be a solution, but then thought perhaps
>> something that detects the general problem earlier might be a better solution.
>> The later the requests fail, the more data will be missing.
>>
>> I would opt for failing the new requests. This is far simpler to implement,
>> wastes less CPU time on cleanup, and is also closer to what I would
>> expect to happen.
>>
>> But perhaps there are some commonly used design patterns in other
>> domains that have to cope with such problems too. As the only thing I
>> understand is binary protocols, I was hoping for some alternate input ;-)
>>
>> Chris
>>
>>
>>
>>
>> -----Original Message-----
>> From: Łukasz Dywicki
>> Sent: Wednesday, April 7, 2021 10:42
>> To: dev@plc4x.apache.org
>> Subject: Re: Drivers causing OutOfMemory exceptions when querying too
>> much too fast
>>
>> This is related to the RequestTransactionManager. Currently its internal
>> queue (workLog) is not bounded in size, hence it will keep storing all
>> requests in the hope of riding out the spikes.
>>
>> Looking at a possible solution: the request queue should be bounded in size
>> (maybe configurable). If a new request comes in and there is no space, it
>> should be failed without being placed in the queue, with an appropriate
>> error code (client busy?).
>> This is the simplest way; the other way would be failing the oldest
>> requests, the ones retained closer to the head of the queue.
>>
>> Places to look at are RequestTransaction#submit and 
>> RequestTransactionManager#submit.
>>
>> Best,
>> Łukasz
>>
>> On 07.04.2021 10:15, Christofer Dutz wrote:
>>> Hi all,
>>>
>>> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
>>> short intervals. This seems to have caused the send-queue to build up faster
>>> than it could be drained. In the end this caused OutOfMemory errors.
>>> However, we should probably detect this situation in which the queue keeps
>>> growing over a longer period of time.
>>>
>>> What do you think would be the best way to address this? I know we're using
>>> Julian's transaction thingy to take care of the sequence in which things 
>>> are sent to the PLC ... so I would assume this would be the place to fix it 
>>> as this would fix the problem for all drivers.
>>>
>>> @Julian Feinauer, could you possibly
>>> find some time to come up with a "solution" to this problem? I think we
>>> can't gracefully handle it, as this is something where the user is trying to
>>> do the impossible, but at least we should probably fail requests if the
>>> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
>>> you can gracefully recover from.
>>>
>>> Chris
>>>
>>>
> 


Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Cesar Garcia
Hello,

Under high communication load on the PLC, it is best for the
driver to maintain an image of the device, so that this image is consulted
first. It is a more complicated design, but for high loads it is the best
solution.

In some tests I use a disruptor as the entry point for the driver clients,
and the task scheduled by the disruptor handles the PLC4X driver and the
associated image.

In time, it could be implemented in the optimization layer of the PLC4X
drivers.
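
As an illustration, a minimal sketch of the device-image idea, assuming a
simple read-through cache in front of the driver; all names here are made up
for the example, and the disruptor wiring is left out:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Hypothetical device image: clients read from the cached image first, and
    // only a stale entry triggers a real round trip through the PLC4X driver.
    public class DeviceImage {
        private static final long MAX_AGE_MS = 500;

        private static final class Entry {
            final Object value;
            final long readAtMs;
            Entry(Object value, long readAtMs) {
                this.value = value;
                this.readAtMs = readAtMs;
            }
        }

        private final Map<String, Entry> image = new ConcurrentHashMap<>();
        private final Function<String, Object> plcRead; // wraps the real driver read

        public DeviceImage(Function<String, Object> plcRead) {
            this.plcRead = plcRead;
        }

        public Object read(String fieldAddress) {
            Entry cached = image.get(fieldAddress);
            if (cached != null && System.currentTimeMillis() - cached.readAtMs < MAX_AGE_MS) {
                return cached.value; // fresh enough: no PLC traffic at all
            }
            Object value = plcRead.apply(fieldAddress); // one real PLC request
            image.put(fieldAddress, new Entry(value, System.currentTimeMillis()));
            return value;
        }
    }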

My two cents

Greetings to all,


On Wed, Apr 7, 2021 at 4:16, Christofer Dutz ()
wrote:

> Hi all,
>
> Today I learnt of a case where an S7 PLC was asked for a lot of data at
> very short intervals. This seems to have caused the send-queue to build up
> faster than it could be drained. In the end this caused OutOfMemory errors.
> However, we should probably detect this situation in which the queue keeps
> growing over a longer period of time.
>
> What do you think would be the best way to address this? I know we're using
> Julian's transaction thingy to take care of the sequence in which things
> are sent to the PLC ... so I would assume this would be the place to fix it
> as this would fix the problem for all drivers.
>
> @Julian Feinauer, could you possibly
> find some time to come up with a "solution" to this problem? I think we
> can't gracefully handle it, as this is something where the user is trying to
> do the impossible, but at least we should probably fail requests if the
> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
> you can gracefully recover from.
>
> Chris
>
>

-- 
CEOS Automatización, C.A.
GALPON SERVICIO INDUSTRIALES Y NAVALES FA, C.A.,
PISO 1, OFICINA 2, AV. RAUL LEONI, SECTOR GUAMACHITO,
FRENTE A LA ASOCIACION DE GANADEROS, BARCELONA, EDO. ANZOATEGUI
Ing. César García

Cel: +58 414-760.98.95

Hotline Técnica SIEMENS: 0800 1005080

Email: support.aan.automat...@siemens.com


Re: Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Łukasz Dywicki
I don't mind the Camel way:
?tx.requestCountMax=100=200=300=#tm

;-)

If we can get the same for all drivers, that would be great. Currently we
have a dedicated setting just in the S7 driver but not in the others.

I also wouldn't mind exposing stats from the driver or the netty pipeline up
to its callers (i.e. byte count, request count)! ;-)
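
To make that concrete, a sketch of what such a stats hook could look like,
assuming a hypothetical counter object that the pipeline updates and callers
poll; nothing like this exists in the drivers today, and the names are
illustrative only:

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical stats holder: a netty handler or driver would increment
    // the counters, and callers could read the getters at any time.
    public class TransportStats {
        private final AtomicLong byteCount = new AtomicLong();
        private final AtomicLong requestCount = new AtomicLong();

        public void onBytesTransferred(int n) {
            byteCount.addAndGet(n);
        }

        public void onRequestSubmitted() {
            requestCount.incrementAndGet();
        }

        public long getByteCount() {
            return byteCount.get();
        }

        public long getRequestCount() {
            return requestCount.get();
        }
    }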

Best,
Łukasz

On 07.04.2021 14:13, Christofer Dutz wrote:
> It would definitely be the quickest fix ... 
> 
> It could be a default value, and we could add a config option to increase it
> (Yes ... I know Lukasz ... Camel and stuff ;-))
> 
> Chris
> 
> -----Original Message-----
> From: Sebastian Rühl
> Sent: Wednesday, April 7, 2021 10:53
> To: dev@plc4x.apache.org
> Subject: Re: Drivers causing OutOfMemory exceptions when querying too much
> too fast
> 
> We could do it the way Go does with its queues:
> you size them, and if the queue is full, the scheduling would simply block
> (blocking the async call so it won’t return)
> 
> Sebastian
> 
>> On 07.04.2021 at 10:50, Christofer Dutz wrote:
>>
>> Hi Lukasz,
>>
>> this was exactly the part I was referring to.
>> I also thought a hard limit could be a solution, but then thought perhaps
>> something that detects the general problem earlier might be a better solution.
>> The later the requests fail, the more data will be missing.
>>
>> I would opt for failing the new requests. This is far simpler to implement,
>> wastes less CPU time on cleanup, and is also closer to what I would
>> expect to happen.
>>
>> But perhaps there are some commonly used design patterns in other
>> domains that have to cope with such problems too. As the only thing I
>> understand is binary protocols, I was hoping for some alternate input ;-)
>>
>> Chris
>>
>>
>>
>>
>> -----Original Message-----
>> From: Łukasz Dywicki
>> Sent: Wednesday, April 7, 2021 10:42
>> To: dev@plc4x.apache.org
>> Subject: Re: Drivers causing OutOfMemory exceptions when querying too much
>> too fast
>>
>> This is related to the RequestTransactionManager. Currently its internal
>> queue (workLog) is not bounded in size, hence it will keep storing all
>> requests in the hope of riding out the spikes.
>>
>> Looking at a possible solution: the request queue should be bounded in size
>> (maybe configurable). If a new request comes in and there is no space, it
>> should be failed without being placed in the queue, with an appropriate
>> error code (client busy?).
>> This is the simplest way; the other way would be failing the oldest
>> requests, the ones retained closer to the head of the queue.
>>
>> Places to look at are RequestTransaction#submit and 
>> RequestTransactionManager#submit.
>>
>> Best,
>> Łukasz
>>
>> On 07.04.2021 10:15, Christofer Dutz wrote:
>>> Hi all,
>>>
>>> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
>>> short intervals. This seems to have caused the send-queue to build up faster
>>> than it could be drained. In the end this caused OutOfMemory errors.
>>> However, we should probably detect this situation in which the queue keeps
>>> growing over a longer period of time.
>>>
>>> What do you think would be the best way to address this? I know we're using
>>> Julian's transaction thingy to take care of the sequence in which things 
>>> are sent to the PLC ... so I would assume this would be the place to fix it 
>>> as this would fix the problem for all drivers.
>>>
>>> @Julian Feinauer, could you possibly
>>> find some time to come up with a "solution" to this problem? I think we
>>> can't gracefully handle it, as this is something where the user is trying to
>>> do the impossible, but at least we should probably fail requests if the
>>> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
>>> you can gracefully recover from.
>>>
>>> Chris
>>>
>>>
> 


Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Christofer Dutz
It would definitely be the quickest fix ... 

It could be a default value, and we could add a config option to increase it
(Yes ... I know Lukasz ... Camel and stuff ;-))

Chris

-----Original Message-----
From: Sebastian Rühl
Sent: Wednesday, April 7, 2021 10:53
To: dev@plc4x.apache.org
Subject: Re: Drivers causing OutOfMemory exceptions when querying too much too
fast

We could do it the way Go does with its queues:
you size them, and if the queue is full, the scheduling would simply block
(blocking the async call so it won’t return)

Sebastian

> On 07.04.2021 at 10:50, Christofer Dutz wrote:
> 
> Hi Lukasz,
> 
> this was exactly the part I was referring to.
> I also thought a hard limit could be a solution, but then thought perhaps
> something that detects the general problem earlier might be a better solution.
> The later the requests fail, the more data will be missing.
> 
> I would opt for failing the new requests. This is far simpler to implement,
> wastes less CPU time on cleanup, and is also closer to what I would expect
> to happen.
> 
> But perhaps there are some commonly used design patterns in other domains
> that have to cope with such problems too. As the only thing I understand is
> binary protocols, I was hoping for some alternate input ;-)
> 
> Chris
> 
> 
> 
> 
> -----Original Message-----
> From: Łukasz Dywicki
> Sent: Wednesday, April 7, 2021 10:42
> To: dev@plc4x.apache.org
> Subject: Re: Drivers causing OutOfMemory exceptions when querying too much
> too fast
> 
> This is related to the RequestTransactionManager. Currently its internal
> queue (workLog) is not bounded in size, hence it will keep storing all
> requests in the hope of riding out the spikes.
> 
> Looking at a possible solution: the request queue should be bounded in size
> (maybe configurable). If a new request comes in and there is no space, it
> should be failed without being placed in the queue, with an appropriate
> error code (client busy?).
> This is the simplest way; the other way would be failing the oldest
> requests, the ones retained closer to the head of the queue.
> 
> Places to look at are RequestTransaction#submit and 
> RequestTransactionManager#submit.
> 
> Best,
> Łukasz
> 
> On 07.04.2021 10:15, Christofer Dutz wrote:
>> Hi all,
>> 
>> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
>> short intervals. This seems to have caused the send-queue to build up faster
>> than it could be drained. In the end this caused OutOfMemory errors.
>> However, we should probably detect this situation in which the queue keeps
>> growing over a longer period of time.
>> 
>> What do you think would be the best way to address this? I know we're using
>> Julian's transaction thingy to take care of the sequence in which things are 
>> sent to the PLC ... so I would assume this would be the place to fix it as 
>> this would fix the problem for all drivers.
>> 
>> @Julian Feinauer, could you possibly
>> find some time to come up with a "solution" to this problem? I think we
>> can't gracefully handle it, as this is something where the user is trying to
>> do the impossible, but at least we should probably fail requests if the
>> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
>> you can gracefully recover from.
>> 
>> Chris
>> 
>> 



Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Sebastian Rühl
We could do it the way Go does with its queues:
you size them, and if the queue is full, the scheduling would simply block
(blocking the async call so it won’t return)
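
A minimal Java sketch of that behaviour, using a plain ArrayBlockingQueue as
the sized, Go-style queue; the capacity and the worker loop are illustrative,
not taken from the actual driver code:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BlockingSubmitSketch {
        public static void main(String[] args) throws InterruptedException {
            // Bounded queue, like a sized Go channel.
            BlockingQueue<Runnable> sendQueue = new ArrayBlockingQueue<>(100);

            // Single worker draining the queue, standing in for the send loop.
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        sendQueue.take().run(); // blocks while the queue is empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();

            // put() blocks the submitting caller once the queue is full, so
            // producers are throttled instead of exhausting the heap.
            for (int i = 0; i < 1000; i++) {
                final int n = i;
                sendQueue.put(() -> System.out.println("request " + n));
            }
        }
    }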

Sebastian

> On 07.04.2021 at 10:50, Christofer Dutz wrote:
> 
> Hi Lukasz,
> 
> this was exactly the part I was referring to.
> I also thought a hard limit could be a solution, but then thought perhaps
> something that detects the general problem earlier might be a better solution.
> The later the requests fail, the more data will be missing.
> 
> I would opt for failing the new requests. This is far simpler to implement,
> wastes less CPU time on cleanup, and is also closer to what I would expect
> to happen.
> 
> But perhaps there are some commonly used design patterns in other domains
> that have to cope with such problems too. As the only thing I understand is
> binary protocols, I was hoping for some alternate input ;-)
> 
> Chris
> 
> 
> 
> 
> -----Original Message-----
> From: Łukasz Dywicki
> Sent: Wednesday, April 7, 2021 10:42
> To: dev@plc4x.apache.org
> Subject: Re: Drivers causing OutOfMemory exceptions when querying too much
> too fast
> 
> This is related to the RequestTransactionManager. Currently its internal
> queue (workLog) is not bounded in size, hence it will keep storing all
> requests in the hope of riding out the spikes.
> 
> Looking at a possible solution: the request queue should be bounded in size
> (maybe configurable). If a new request comes in and there is no space, it
> should be failed without being placed in the queue, with an appropriate
> error code (client busy?).
> This is the simplest way; the other way would be failing the oldest
> requests, the ones retained closer to the head of the queue.
> 
> Places to look at are RequestTransaction#submit and 
> RequestTransactionManager#submit.
> 
> Best,
> Łukasz
> 
> On 07.04.2021 10:15, Christofer Dutz wrote:
>> Hi all,
>> 
>> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
>> short intervals. This seems to have caused the send-queue to build up faster
>> than it could be drained. In the end this caused OutOfMemory errors.
>> However, we should probably detect this situation in which the queue keeps
>> growing over a longer period of time.
>> 
>> What do you think would be the best way to address this? I know we're using
>> Julian's transaction thingy to take care of the sequence in which things are 
>> sent to the PLC ... so I would assume this would be the place to fix it as 
>> this would fix the problem for all drivers.
>> 
>> @Julian Feinauer, could you possibly
>> find some time to come up with a "solution" to this problem? I think we
>> can't gracefully handle it, as this is something where the user is trying to
>> do the impossible, but at least we should probably fail requests if the
>> queue is growing too fast ... OutOfMemory errors are unfortunately nothing
>> you can gracefully recover from.
>> 
>> Chris
>> 
>> 



Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Christofer Dutz
Hi Lukasz,

this was exactly the part I was referring to.
I also thought a hard limit could be a solution, but then thought perhaps
something that detects the general problem earlier might be a better solution.
The later the requests fail, the more data will be missing.

I would opt for failing the new requests. This is far simpler to implement,
wastes less CPU time on cleanup, and is also closer to what I would expect
to happen.

But perhaps there are some commonly used design patterns in other domains
that have to cope with such problems too. As the only thing I understand is
binary protocols, I was hoping for some alternate input ;-)

Chris




-----Original Message-----
From: Łukasz Dywicki
Sent: Wednesday, April 7, 2021 10:42
To: dev@plc4x.apache.org
Subject: Re: Drivers causing OutOfMemory exceptions when querying too much too
fast

This is related to the RequestTransactionManager. Currently its internal
queue (workLog) is not bounded in size, hence it will keep storing all
requests in the hope of riding out the spikes.

Looking at a possible solution: the request queue should be bounded in size
(maybe configurable). If a new request comes in and there is no space, it
should be failed without being placed in the queue, with an appropriate error
code (client busy?).
This is the simplest way; the other way would be failing the oldest requests,
the ones retained closer to the head of the queue.

Places to look at are RequestTransaction#submit and 
RequestTransactionManager#submit.

Best,
Łukasz

On 07.04.2021 10:15, Christofer Dutz wrote:
> Hi all,
> 
> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
> short intervals. This seems to have caused the send-queue to build up faster
> than it could be drained. In the end this caused OutOfMemory errors. However,
> we should probably detect this situation in which the queue keeps growing
> over a longer period of time.
> 
> What do you think would be the best way to address this? I know we're using
> Julian's transaction thingy to take care of the sequence in which things are 
> sent to the PLC ... so I would assume this would be the place to fix it as 
> this would fix the problem for all drivers.
> 
> @Julian Feinauer, could you possibly find
> some time to come up with a "solution" to this problem? I think we can't
> gracefully handle it, as this is something where the user is trying to do the
> impossible, but at least we should probably fail requests if the queue is
> growing too fast ... OutOfMemory errors are unfortunately nothing you can
> gracefully recover from.
> 
> Chris
> 
> 


Re: Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Łukasz Dywicki
This is related to the RequestTransactionManager. Currently its
internal queue (workLog) is not bounded in size, hence it will keep storing
all requests in the hope of riding out the spikes.

Looking at a possible solution: the request queue should be bounded in size
(maybe configurable). If a new request comes in and there is no space,
it should be failed without being placed in the queue, with an appropriate
error code (client busy?).
This is the simplest way; the other way would be failing the oldest requests,
the ones retained closer to the head of the queue.

Places to look at are RequestTransaction#submit and
RequestTransactionManager#submit.
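
A sketch of the fail-fast variant, with the class and method names modeled
loosely on the description above rather than on the actual
RequestTransactionManager code:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;

    // Hypothetical bounded work log: when full, a new request is failed right
    // away with a "client busy" error instead of being queued.
    public class BoundedWorkLog {
        private final BlockingQueue<Runnable> workLog;

        public BoundedWorkLog(int maxQueuedRequests) {
            this.workLog = new ArrayBlockingQueue<>(maxQueuedRequests);
        }

        public CompletableFuture<Void> submit(Runnable transaction) {
            CompletableFuture<Void> result = new CompletableFuture<>();
            // offer() returns false immediately when the queue is full,
            // instead of blocking like put() would.
            boolean accepted = workLog.offer(() -> {
                try {
                    transaction.run();
                    result.complete(null);
                } catch (Exception e) {
                    result.completeExceptionally(e);
                }
            });
            if (!accepted) {
                result.completeExceptionally(
                        new IllegalStateException("Client busy: request queue is full"));
            }
            return result;
        }
    }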

Best,
Łukasz

On 07.04.2021 10:15, Christofer Dutz wrote:
> Hi all,
> 
> Today I learnt of a case where an S7 PLC was asked for a lot of data at very
> short intervals. This seems to have caused the send-queue to build up faster
> than it could be drained. In the end this caused OutOfMemory errors. However,
> we should probably detect this situation in which the queue keeps growing
> over a longer period of time.
> 
> What do you think would be the best way to address this? I know we're using
> Julian's transaction thingy to take care of the sequence in which things are 
> sent to the PLC ... so I would assume this would be the place to fix it as 
> this would fix the problem for all drivers.
> 
> @Julian Feinauer, could you possibly find
> some time to come up with a "solution" to this problem? I think we can't
> gracefully handle it, as this is something where the user is trying to do the
> impossible, but at least we should probably fail requests if the queue is
> growing too fast ... OutOfMemory errors are unfortunately nothing you can
> gracefully recover from.
> 
> Chris
> 
> 


Drivers causing OutOfMemory exceptions when querying too much too fast

2021-04-07 Thread Christofer Dutz
Hi all,

Today I learnt of a case where an S7 PLC was asked for a lot of data at very
short intervals. This seems to have caused the send-queue to build up faster
than it could be drained. In the end this caused OutOfMemory errors. However,
we should probably detect this situation in which the queue keeps growing over
a longer period of time.

What do you think would be the best way to address this? I know we're using
Julian's transaction thingy to take care of the sequence in which things are 
sent to the PLC ... so I would assume this would be the place to fix it as this 
would fix the problem for all drivers.

@Julian Feinauer, could you possibly find
some time to come up with a "solution" to this problem? I think we can't
gracefully handle it, as this is something where the user is trying to do the
impossible, but at least we should probably fail requests if the queue is
growing too fast ... OutOfMemory errors are unfortunately nothing you can
gracefully recover from.

Chris