Thanks for the explanations. As I said, I'm fine with the change as is.

-Chris.

On 01/24/12 10:41 AM, Michael McMahon wrote:
On 24/01/12 10:23, Chris Hegarty wrote:
I'm OK with this as is, but here are a few comments:

The logic around the initial setting of the timeout (and the additional
pointer) seems a little unnecessary, but not wrong.

Chris,

The setting of the timeval could be combined, yes, but I wanted to avoid
calling gettimeofday() in the case where timeout == 0.
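
For illustration, a rough sketch of the pattern being described
(hypothetical code, not the actual webrev): the timeval and the
gettimeofday() call are only set up when a timeout is requested.

    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/time.h>

    /* Hypothetical sketch, not the webrev code: returns NULL when there is
     * no timeout, so select() would block; otherwise it records the start
     * time (the only reason gettimeofday() is needed) and loads *t with the
     * relative timeout. */
    static struct timeval *setup_timeout(long timeout_ms, struct timeval *t,
                                         long long *start_ms)
    {
        if (timeout_ms <= 0)
            return NULL;                     /* timeout == 0: no gettimeofday() */
        gettimeofday(t, NULL);
        *start_ms = (long long)t->tv_sec * 1000 + t->tv_usec / 1000;
        t->tv_sec  = timeout_ms / 1000;      /* relative timeout for select() */
        t->tv_usec = (timeout_ms % 1000) * 1000;
        return t;                            /* the "additional pointer" */
    }
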
The comments should also be updated.

Yes, I'll do that. They still refer to poll() rather than select() now.
I'll add a comment about the bug too.
Given this problem, should the build be setting USE_SELECT?

I thought about that, and decided against it for two reasons.

1) It's only used by PlainSocketImpl in the networking code, and that
usage doesn't seem to be affected by this problem. Also, poll() is
potentially a little more efficient than select().

2) USE_SELECT is referenced in other places in other native code (in AWT),
and I don't want to affect those usages. Granted, that raises the question
of whether they are affected by this bug, but I don't believe they are
since, as far as I can see, UDP sockets aren't used there. (A simplified
sketch of what the flag switches between follows below.)
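
For context, a simplified skeleton of the kind of compile-time switch
USE_SELECT provides; hypothetical code, not the actual JDK sources.

    #include <poll.h>
    #include <sys/select.h>

    /* Hypothetical skeleton, not the actual JDK sources: code that tests
     * USE_SELECT picks its wait primitive at compile time. */
    static int wait_once(int s, int timeout_ms)
    {
    #ifdef USE_SELECT
        fd_set rfds;
        struct timeval tv;
        FD_ZERO(&rfds);
        FD_SET(s, &rfds);
        tv.tv_sec  = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        return select(s + 1, &rfds, NULL, NULL, &tv);
    #else
        struct pollfd pfd;
        pfd.fd = s;
        pfd.events = POLLIN;
        return poll(&pfd, 1, timeout_ms);
    #endif
    }
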

- Michael


-Chris.

On 01/23/12 10:32 PM, Michael McMahon wrote:
On 23/01/12 21:30, Alan Bateman wrote:
On 23/01/2012 17:09, Michael McMahon wrote:
Can I get the following change reviewed please?

http://cr.openjdk.java.net/~michaelm/7131399/webrev.1/

The problem is that poll(2) doesn't seem to work in a specific edge case
tested by the JCK, namely when a zero-length UDP message is sent on a
DatagramSocket. The problem only shows up on timed reads, i.e. normal
blocking reads work fine.

The fix is to make the NET_Timeout() function use select() instead of
poll().

Thanks,
Michael.
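
For illustration, a rough sketch of a select()-based timed wait of the
kind described, with the usual handling for interrupted calls;
hypothetical code, not the actual webrev.

    #include <errno.h>
    #include <sys/select.h>
    #include <sys/time.h>

    /* Hypothetical sketch of a select()-based NET_Timeout-style wait, not
     * the actual webrev code. Assumes timeout_ms > 0. Returns >0 if s is
     * readable, 0 on timeout, -1 on error. */
    static int timed_wait(int s, long timeout_ms)
    {
        fd_set rfds;
        struct timeval t;
        long long prevtime, newtime;
        int rv;

        gettimeofday(&t, NULL);
        prevtime = (long long)t.tv_sec * 1000 + t.tv_usec / 1000; /* start, ms */

        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(s, &rfds);               /* the set is a bit mask indexed by fd */
            t.tv_sec  = timeout_ms / 1000;  /* remaining time, relative */
            t.tv_usec = (timeout_ms % 1000) * 1000;

            rv = select(s + 1, &rfds, NULL, NULL, &t);  /* nfds = highest fd + 1 */
            if (rv >= 0 || errno != EINTR)
                return rv;

            /* interrupted: subtract the elapsed time and retry */
            gettimeofday(&t, NULL);
            newtime = (long long)t.tv_sec * 1000 + t.tv_usec / 1000;
            timeout_ms -= (long)(newtime - prevtime);
            if (timeout_ms <= 0)
                return 0;                   /* nothing arrived in time */
            prevtime = newtime;
        }
    }
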

The first argument to select is s+1; shouldn't it be 1?

-alan
No, I don't think so. fd_sets are bit masks, and you have to specify the
highest-numbered bit in the mask (+1).
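
For comparison, poll() really does take a count of entries as its first
argument, unlike select(); an illustrative snippet (hypothetical, not JDK
code).

    #include <poll.h>

    /* Illustrative contrast with the select() call above: poll()'s first
     * argument is the number of pollfd entries (1 here), whereas select()'s
     * nfds is the highest descriptor value plus one (s + 1). */
    static int poll_wait(int s, int timeout_ms)
    {
        struct pollfd pfd;
        pfd.fd = s;
        pfd.events = POLLIN;
        return poll(&pfd, 1, timeout_ms);   /* timeout is in milliseconds */
    }
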

- Michael.
