On Dec 17 19:24, Corinna Vinschen wrote:
> On Dec 17 12:28, Lev Bishop wrote:
> >  If we keep having to work
> > around more issues like this, perhaps we'd be better off bypassing the
> > afd layer entirely, by setting SO_SNDBUF to 0, using overlapped IO,
> > and managing buffers ourselves. I'm sure this would bring its own set
> > of complications, [...]
> 
> Sorry, I'm unfamiliar with the native NT socket interface :}  Is there
> a (good) tutorial somewhere for the native NT socket stuff?  Even
> without using the native API, we could also just set the Winsock
> SO_RCVBUF/SO_SNDBUF settings to 0 and intercept the setsockopt/getsockopt
> calls to maintain our own buffers, right?

On re-reading, my reply seems a bit off-track.  You're suggesting using
SO_SNDBUF == 0 with overlapped I/O, whereas I'm asking about keeping the
standard nonblocking semantics while maintaining our own per-socket
buffers.
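
To make the intercepted setsockopt/getsockopt idea from above a bit
more concrete, here's a rough sketch.  This is not actual fhandler
code; the cyg_* wrapper and the two static ints are made up for
illustration, and in reality the remembered sizes would live per
socket:

  #include <winsock2.h>

  static int fake_sndbuf;       /* per-socket members in reality */
  static int fake_rcvbuf;

  int
  cyg_setsockopt (SOCKET s, int level, int optname,
                  const char *optval, int optlen)
  {
    if (level == SOL_SOCKET
        && (optname == SO_SNDBUF || optname == SO_RCVBUF)
        && optlen >= (int) sizeof (int))
      {
        int zero = 0;

        /* Remember the size the application asked for, so our own
           per-socket buffer can be sized accordingly ...  */
        if (optname == SO_SNDBUF)
          fake_sndbuf = *(const int *) optval;
        else
          fake_rcvbuf = *(const int *) optval;
        /* ... but keep the real Winsock buffer at 0.  */
        return setsockopt (s, SOL_SOCKET, optname,
                           (const char *) &zero, sizeof zero);
      }
    return setsockopt (s, level, optname, optval, optlen);
  }

The getsockopt side would do the reverse and report fake_sndbuf or
fake_rcvbuf instead of the 0 Winsock would return.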

At one point the socket stuff was implemented using overlapped I/O, but
I had serious trouble with that.  The overlapped code waited for the
socket operation to complete in a WaitForMultipleObjects call.  When a
signal arrived, I canceled the I/O operation using CancelIo.  The
problem was that a send operation is not atomic, and there was no way
to find out how many bytes from the current send buffer had actually
been sent.  So it was not possible to return the correct number of sent
bytes to the application.  Instead the code always returned EINTR,
which in turn could result in data corruption or lost connections.
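
For reference, the old send path looked roughly like this.  It's a
simplified sketch of the general pattern, not the actual fhandler code
(the WSASend-based shape and the signal_event parameter are just for
illustration), with most error handling omitted:

  #include <winsock2.h>
  #include <errno.h>

  int
  overlapped_send (SOCKET s, HANDLE signal_event,
                   const char *buf, int len)
  {
    WSABUF wsabuf = { (ULONG) len, (char *) buf };
    WSAOVERLAPPED ov = { 0 };
    DWORD sent = 0, flags = 0;
    HANDLE w[2];

    ov.hEvent = WSACreateEvent ();
    if (WSASend (s, &wsabuf, 1, NULL, 0, &ov, NULL) == SOCKET_ERROR
        && WSAGetLastError () != WSA_IO_PENDING)
      {
        WSACloseEvent (ov.hEvent);
        return -1;
      }

    w[0] = ov.hEvent;
    w[1] = signal_event;
    if (WaitForMultipleObjects (2, w, FALSE, INFINITE)
        == WAIT_OBJECT_0 + 1)
      {
        /* A signal arrived, so cancel the pending send.  */
        CancelIo ((HANDLE) s);
        /* The send may already have transferred part of the buffer,
           but the count we can query now isn't reliable, so all the
           code could do was return EINTR and lose track of the bytes
           already on the wire.  */
        WSAGetOverlappedResult (s, &ov, &sent, TRUE, &flags);
        WSACloseEvent (ov.hEvent);
        errno = EINTR;
        return -1;
      }

    WSAGetOverlappedResult (s, &ov, &sent, TRUE, &flags);
    WSACloseEvent (ov.hEvent);
    return (int) sent;
  }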

If there is some way to find out how many bytes a canceled send has
actually transferred, for instance via the native API, then we could
revert to overlapped I/O.  If not, well...


Corinna

-- 
Corinna Vinschen                  Please, send mails regarding Cygwin to
Cygwin Project Co-Leader          cygwin AT cygwin DOT com
Red Hat
