I'm OK with this as is, but here are a few comments:

The logic around the initial setting of the timeout (and the additional pointer) seems a little unnecessary, but it's not wrong.

The comments should also be updated.

Given this problem, should the build be setting USE_SELECT?

-Chris.

On 01/23/12 10:32 PM, Michael McMahon wrote:
On 23/01/12 21:30, Alan Bateman wrote:
On 23/01/2012 17:09, Michael McMahon wrote:
Can I get the following change reviewed please?

http://cr.openjdk.java.net/~michaelm/7131399/webrev.1/

The problem is that poll(2) doesn't seem to work in a specific edge case
tested by the JCK, namely when a zero-length UDP message is sent on a
DatagramSocket. The problem only shows up on timed reads, i.e. normal
blocking reads work fine.

The fix is to make the NET_Timeout() function use select() instead of
poll().
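
For illustration only, here's a minimal sketch (names made up, not the
actual webrev code) of what a select()-based timed wait on a single fd
looks like; a real NET_Timeout would also have to handle EINTR and
recompute the remaining timeout:

    #include <sys/select.h>
    #include <sys/time.h>

    /* Returns >0 if 's' is readable, 0 on timeout, -1 on error. */
    static int timed_wait(int s, long millis)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(s, &rfds);               /* mark fd 's' in the read set */

        tv.tv_sec  = millis / 1000;
        tv.tv_usec = (millis % 1000) * 1000;

        /* nfds is the highest fd in any set, plus one */
        return select(s + 1, &rfds, NULL, NULL, &tv);
    }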

Thanks,
Michael.

The first argument to select is s+1; shouldn't it be 1?

-alan
No, I don't think so. fd_sets are bit masks, and you have to specify the
highest-numbered bit (fd) in the mask, plus one.
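
For example (hypothetical snippet, not from the webrev): with a single
descriptor s, FD_SET(s, &rfds) sets bit number s in the mask, and
select() only examines descriptors 0 .. nfds-1, so nfds has to be s+1;
passing 1 would make select() look at fd 0 only and ignore s:

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(s, &rfds);   /* sets bit 's' in the read mask */

    /* select() scans fds 0 .. nfds-1, hence s + 1 here, not 1 */
    rv = select(s + 1, &rfds, NULL, NULL, &tv);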

- Michael.
