On 01/08/2016 01:07 AM, Stefano Miccoli wrote:
On 07 Jan 2016, at 14:45, Martin Patzak (GMX) <martin.pat...@gmx.de
<mailto:martin.pat...@gmx.de>> wrote:
I am using the pyownet library to communicate to a local owserver.
I have 25 temperature sensors and two 2408 I/O modules on one powered bus, with
a USB LINK as bus master.
I also use a simultaneous conversion to read all 25 sensors every 30 seconds
(see below).
From a separate task I am reading the state of the 1-wire I/Os every
second.
Another task flashes an LED that is connected to one of the
outputs, also every second (as an indicator of the state of the system).
The system runs 24/7 and is mainly busy running the heating system in
our house.
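For readers unfamiliar with the simultaneous-conversion pattern, a minimal sketch of such a polling setup with pyownet might look like the following. This is my own illustration, not Martin's actual code: the host/port, the one-second settle delay, and the use of the `latesttemp` attribute are assumptions.

```python
import time

def temperature_sensors(entries):
    """Keep only DS18B20 directories (family code 28) from an
    owserver directory listing."""
    return [e for e in entries if e.startswith('/28.')]

def read_all_temperatures(host='localhost', port=4304, settle=1.0):
    """Trigger one simultaneous conversion, then read every
    temperature sensor on the bus.  Returns {path: degrees_C}."""
    from pyownet import protocol  # imported lazily: sketch only

    owproxy = protocol.proxy(host, port)
    # Writing '1' to /simultaneous/temperature starts a conversion
    # on all temperature sensors on the bus at once.
    owproxy.write('/simultaneous/temperature', b'1')
    time.sleep(settle)  # a 12-bit DS18B20 conversion takes ~750 ms
    return {s: float(owproxy.read(s + 'latesttemp'))
            for s in temperature_sensors(owproxy.dir('/'))}
```

The point of the simultaneous write is that all sensors convert in parallel, so the total cycle time stays well under the 30-second polling interval even with 25 sensors.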
I agree: since owserver is multi-threaded by design there is nothing
wrong with multiple client tasks/threads accessing it concurrently.
One should just avoid clogging the server with an excessive number of
requests.
OK, good that you can confirm that...
In this case it is the same sensor that takes longer to read, on seven
consecutive reads.
When a read takes longer, it is quite often the same sensor that takes
longer to read again soon after.
And it can be any sensor: at the beginning, at the end, anywhere on the bus.
Then it can be weeks before that sensor causes a problem again.
Could this be caused by asynchronous and parallel requests to owserver?
Sorry, no idea: this question is for the OWFS gurus
Did anybody experience similar issues?
I upgraded to 3.1 and it is still the same.
With the new version I have the debug functionality back, which was
missing in 2.9p8, so I will send the debug output of owserver to a file
and post the result here if that would help in finding the problem...
Stefano, would it be possible to have a maximum timeout when issuing a
read command? Otherwise, how will I know that it won't sometimes take even longer?
In pyownet.protocol there is a 2 second timeout on the socket
operations, see
https://github.com/miccoli/pyownet/blob/master/src/pyownet/protocol.py#L117
https://github.com/miccoli/pyownet/blob/master/src/pyownet/protocol.py#L310
this means that if the owserver crashes or the network connection is
somehow interrupted the pyownet operation will fail with a
pyownet.protocol.ConnError exception.
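In practice a caller can trap that failure mode like this — a hedged sketch of my own, where the fallback policy (returning None) is just one possible choice:

```python
def safe_read(path, host='localhost', port=4304):
    """Read one owserver attribute, returning None instead of raising
    when the server is down or the connection times out."""
    from pyownet import protocol  # imported lazily: sketch only

    try:
        owproxy = protocol.proxy(host, port)
        return owproxy.read(path)
    except protocol.ConnError:
        # owserver crashed, is unreachable, or the 2 s socket
        # timeout expired: report the reading as missing.
        return None
```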
Yes, that works fine. I did some tests before changing over to your
library and found it very well behaved in all sorts of special situations...
In your case, since the read takes longer than 2 s (4 seconds and
more), owserver is sending its keep-alive packets back to pyownet, so
no socket timeout occurs. This situation is handled below line
https://github.com/miccoli/pyownet/blob/master/src/pyownet/protocol.py#L347
As you can see, as long as owserver keeps sending its keep-alive
frames (payload length -1), pyownet will block indefinitely. However this
should not happen in practice, because owserver aborts the connection
after 30 seconds or so.
Here it would be possible to start a timer and abort the connection
on the pyownet side after a user-configurable amount of time; or simply
to count the frames received and abort after a user-configurable
number of frames.
I don’t know which is the correct strategy. Fortunately these long
reads are very rare events, so maybe the present code is OK: pyownet
simply trusts owserver and patiently keeps waiting as long as owserver
tells it to do so.
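A third option, entirely on the client side and independent of pyownet internals, is for the caller to impose its own deadline by running the blocking read in a worker thread. A minimal standard-library sketch (the function name and signature are mine, not pyownet's):

```python
import concurrent.futures
import time

def read_with_deadline(read_fn, deadline):
    """Run a blocking read in a worker thread and give up after
    `deadline` seconds.  Returns (value, True) on success or
    (None, False) on timeout.  Caveat: the worker thread keeps
    running until the underlying read returns, so the socket is
    not actually torn down; this only unblocks the caller."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(read_fn)
    try:
        return future.result(timeout=deadline), True
    except concurrent.futures.TimeoutError:
        return None, False
    finally:
        # Do not wait for a stuck worker; just stop taking new jobs.
        pool.shutdown(wait=False)
```

The caller would pass any zero-argument callable that performs the owserver read. The caveat in the docstring is the reason this is only a workaround: the pending request still occupies a server slot until owserver finishes or aborts it.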
I would say it would be nice to have but, as you said, not urgently
necessary.
So far I have seen only one sensor with a delayed read-out at a time, so
it is acceptable.
For some sensors, however, it is important that they are read every 30
seconds. With a timeout I could ensure that interval is met, giving up
on a read when it takes too long and using a value extrapolated from
the last couple of readings instead.
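That fallback could be as simple as a linear extrapolation over timestamped past readings. A sketch, where the two-point straight-line fit is an arbitrary choice of mine:

```python
def extrapolate(history, now):
    """Estimate the current value from past (timestamp, value)
    samples by extending the straight line through the last two
    readings."""
    if not history:
        raise ValueError('no readings to extrapolate from')
    if len(history) == 1:
        return history[-1][1]  # only one sample: hold its value
    (t0, v0), (t1, v1) = history[-2], history[-1]
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (now - t1)
```

For example, readings of 20.0 °C at t=0 and 20.6 °C at t=30 would extrapolate to roughly 21.2 °C at t=60. A smarter scheme (more points, outlier rejection) is easy to substitute later.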
Cheers
Martin
------------------------------------------------------------------------------
_______________________________________________
Owfs-developers mailing list
Owfs-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/owfs-developers