First, a quick clarification: I am only looking at the way owserver operates 
over a TCP connection. Please note that I am *not* suggesting a change to the 
owserver protocol itself, only to the way it uses the TCP connections from 
clients. And now that I think about it, persistent connections make sense 
between owservers too. I suppose the protocol could be extended to let a 
client indicate that it wants a persistent connection, so nothing is broken 
for existing clients, but I digress.
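To make the digression concrete: as I read the protocol, every owserver message starts with six big-endian 32-bit integers, one of which is a control-flags word, so a client could claim an unused bit there to request persistence. This is only a sketch; the PERSIST_FLAG bit and the pack_request helper are my own invention, not part of the protocol:

```python
import struct

# owserver messages begin with six big-endian 32-bit ints:
# version, payload length, message type, control flags, data size, offset.
HEADER_FMT = ">6i"

# Hypothetical flag bit (NOT in the current protocol) that a client could
# set to ask the server to keep the connection open after the reply.
PERSIST_FLAG = 0x04  # assumption: an otherwise-unused bit in the flags word

def pack_request(msg_type, payload=b"", flags=0, size=0, offset=0,
                 persist=False):
    """Build a request header; servers that don't know the bit ignore it."""
    if persist:
        flags |= PERSIST_FLAG
    header = struct.pack(HEADER_FMT, 0, len(payload), msg_type,
                         flags, size, offset)
    return header + payload
```

An existing server that masks off unknown flag bits would keep working unchanged, which is the whole point of putting the request in a flag rather than in a new message type.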

The number of connections really is a lot. The reason you don't see any 
significant effect is probably the way you have used and tested it. Run your 
remote connections over high-latency links and you will see it. With 
70-100 ms of end-to-end latency, those handshakes add up. BTW, I did notice 
the similarity to HTTP operation. Perhaps you have noticed that the HTTP spec 
made it all the way to 1.1 before pipelining was included ;) The issues are 
the same.
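To put rough numbers on it (my own back-of-the-envelope, assuming one extra round trip per connection for the TCP handshake and using the 22-connection DS2401 scan from the quoted message below):

```python
# Rough cost of one-connection-per-operation on a high-latency link.
# Assumption: each fresh connection spends at least one full round trip
# on the three-way handshake before the request can even be sent.
rtt_s = 0.100          # 100 ms end-to-end round-trip time
connections = 22       # connections needed to discover two DS2401 sensors
handshake_cost = connections * rtt_s
print(f"handshake overhead: {handshake_cost:.1f} s")  # pure setup latency
```

Over two seconds of dead time for a two-sensor scan, before a single 1-wire transaction happens. On a LAN with sub-millisecond RTT the same term is a few tens of milliseconds, which is why the overhead is invisible in local testing.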

I haven't made any direct comparisons between reading from owfs and reading 
from owserver directly over TCP. It's interesting that you mention it; it was 
the first comparison I considered, so I expect I'll have to do it now :) It 
would be an interesting comparison, but I'm not sure it's a fair one. 
Creating and tearing down a TCP connection for every request is rather like 
mounting and unmounting the filesystem each time: OWFS effectively has a 
persistent connection, while the remote owserver connections do not.

Responding to your list of persistent connection issues:

1. By your statement I assume you mean that since each connection consumes 
resources, using persistent connections may tend to increase resource 
consumption. I think this depends on how the connection is used. Leaving a 
connection open but unused for a long period would be an increase; leaving it 
open and using it every few seconds would be a decrease.

2. I don't believe this is a significant issue. While there may be no hard 
limit on the number of concurrent connections now, the practical limit is 1. 
The 1-wire bus is single-access and cannot be shared. Trying to use multiple 
connections simultaneously just makes them all slow, with frequent timeouts.

Some TCP stacks have an effective keepalive (SO_KEEPALIVE) mechanism that can 
be used to tear down lost connections. Others do not, so the application 
(owserver) would have to do it itself. There are some tricks that make that 
easier.
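For stacks that do support it, turning keepalive on from Python looks roughly like this (the per-probe tuning options are Linux-specific, hence the hasattr guards; this is a sketch, not ownet code):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Enable TCP keepalive so dead peers get detected and torn down.

    SO_KEEPALIVE is portable; the TCP_KEEP* tuning knobs below exist on
    Linux but not on every stack.  Where they are missing, the OS defaults
    apply (often hours), and the application may still want its own idle
    timeout on top.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):    # idle seconds before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):   # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):     # failed probes before teardown
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
```

With defaults like these a vanished client is cleaned up in a minute or two instead of lingering until the OS-default keepalive (often two hours) fires.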

3. I don't see any difference in this regard. You could bombard the current 
owserver with connections, send requests for some larger memory reads, and 
close the TCP window to 0 (not very nice, I know). There should be some limit 
on the number of concurrent connections in any case. There are many ways to 
effectively perpetrate a DoS attack; usually you can only limit the effect.
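For what it's worth, capping concurrent connections doesn't take much machinery. A sketch (my own, not owserver's actual code) using a counting semaphore in the accept loop:

```python
import socket
import threading

MAX_CONNECTIONS = 32          # arbitrary cap; tune for the platform
slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def serve(listener, handle):
    """Accept loop that sheds connections beyond MAX_CONNECTIONS."""
    while True:
        conn, addr = listener.accept()
        if not slots.acquire(blocking=False):
            conn.close()      # over the cap: refuse rather than queue
            continue
        def worker(c=conn):
            try:
                handle(c)
            finally:
                c.close()     # releasing the slot only after close keeps
                slots.release()  # the count honest even if handle() raises
        threading.Thread(target=worker, daemon=True).start()
```

Refusing the excess connections outright bounds memory no matter how many clients pile on; a stalled client then costs one slot instead of the whole server.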

Perhaps it depends on your point of view, but IMHO application state exists 
outside of the data transport layer. Interactive online games are a very good 
example: I know of no game that uses TCP for transport, yet a large amount of 
state information has to be maintained between the game server and the 
clients. The main difference between TCP and UDP is that TCP guarantees 
delivery while UDP does not. However, that assurance of delivery comes with a 
price.

I am in the middle of fixing some more issues with the ownet python module, 
and I noticed some unpleasantness in the network traces as well. (More on 
those in future posts.) Once those bugs have been ironed out, I'll do some 
benchmark runs. The server is the NSLU2, and I'll be able to run the client 
and server on the same machine (the NSLU2), the client on the same LAN at 
100 Mbit, and the client at two very different remote locations. I expect 
that running the client and server together on the NSLU2 will skew things, 
but that's the platform in use (for better or worse). 

I understand your explanation. The implementation is simple, and in most 
cases it is quick enough. However, I think it can be demonstrated that 
performance could be improved when WANs are involved. My initial results 
indicate that a local LAN client runs my test script in about 8 min 42 s, 
while a remote client with only 20 ms latency takes about 12 min 40 s. This 
is preliminary (only three runs each), but it gives you some idea. I'll post 
the results when I've done a more formal job.
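The kind of comparison I have in mind can be isolated with a trivial harness like this (a sketch of mine against a toy echo server, not against owserver; all the names are my own):

```python
import socket
import threading
import time

def echo_server():
    """Tiny loopback echo server, used only to expose connection overhead."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(64)
    def loop():
        while True:
            conn, _ = srv.accept()
            while True:
                data = conn.recv(64)
                if not data:          # client closed the connection
                    break
                conn.sendall(data)
            conn.close()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()

def one_shot(addr, n=100):
    """A fresh connection per request, as the current clients do."""
    t0 = time.monotonic()
    for _ in range(n):
        s = socket.create_connection(addr)
        s.sendall(b"ping")
        s.recv(64)
        s.close()
    return time.monotonic() - t0

def persistent(addr, n=100):
    """The same n requests over a single persistent connection."""
    t0 = time.monotonic()
    s = socket.create_connection(addr)
    for _ in range(n):
        s.sendall(b"ping")
        s.recv(64)
    s.close()
    return time.monotonic() - t0
```

On loopback the two timings are close; the gap between them should scale with RTT, which is exactly the term the WAN runs would measure.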

Thanks for reading!

Paul 
   
On Sun, 7 Jan 2007 20:38:49 -0500, "Paul Alfille" <[EMAIL PROTECTED]> wrote:
> Interesting questions.
> 
> On 1/7/07, Paul Davis <[EMAIL PROTECTED]> wrote:
>>
>> I've been working with the ownet python access method to owserver for
>> the past few weeks. In doing so, I became aware of the fact that a
>> separate connection is opened to owserver for each operation (read,
>> dir, write). This means that for even the simplest operation, 3
>> separate connections are typically required. For more complex
>> operations the count goes up dramatically. Performing a scan for
>> DS2401 devices on a hub branch requires at least 22 separate
>> connections to discover two sensors.
> 
> 
> I know it seems like a lot, but there is no measurable impact on
> performance
> compared to, say, owfs, directly. The 1-wire bus is FAR slower than tcp.
> 
> How did this design of owserver come to be? There is a lot of
> 
> 
> The design mimics http design.
> Persistent connections have their own set of issues:
> 1. Unbounded memory for number of connections
> 2. Arbitrary limits on connections or durations, with timers, cleanup
> threads.
> 3. Susceptibility to DoS attack (even inadvertently) when many connections
> are made but not finished.
> 
> Persistent connections are most useful when there is state information
> that
> persists. OWFS is essentially stateless. Each request for the 1-wire bus
> is
> independent of history. So the only thing we would be preserving is the
> actual network connection.
> 
>> overhead associated with TCP connection startup and shutdown. I don't
>> understand why multiple operations could not be carried out over a
>> persistent connection, eliminating the over head of all these
>> connections. This also makes me wonder why there is not an option to
>> use UDP as opposed to TCP. I didn't see any discussion of this in the
> 
> 
> UDP would probably work, but if the data (for the larger memory reads)
> gets
> split, there is some overhead in checking completeness and order. My read
> was that UDP is best for data updates where missing old data can be safely
> discarded.
> 
>> archives (could have missed it I suppose). If this was a conscious
>> decision, I'd like to understand the reasoning. I've been studying
>> the owserver source, but I don't have enough of a handle on it yet to
>> provide any insight into this. Comments?
> 
> 
> There is no reason why we can't have a version of owserver that uses UDP,
> or
> persistent TCP connections, or UNIX sockets. The current design seemed
> relatively simple, robust, and pretty fast.  Have you any test results
> that
> suggest a change?
> 
> Paul Alfille
> 
> P.S. I'm working on a directory speedup that would essentially send the
> entire directory in a single message (since many of the clients do that
> anyways).
> 
> 


_______________________________________________
Owfs-developers mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/owfs-developers
