On Mon, Jan 23, 2012 at 7:09 PM, Michael McMahon <michael.x.mcma...@oracle.com> wrote:

> Can I get the following change reviewed please?
>
> http://cr.openjdk.java.net/~michaelm/7131399/webrev.1/
>
> The problem is that poll(2) doesn't seem to work in a specific edge case
> tested by the JCK, namely when a zero-length UDP message is sent on a
> DatagramSocket. The problem is only detected on timed reads, i.e. normal
> blocking reads work fine.
>
> The fix is to make the NET_Timeout() function use select() instead of
> poll().
>
> Thanks,
> Michael.

Hi

I don't work at Oracle or anything, but IMHO this is a bad idea.
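To make the context concrete, my reading of the reported failure is: a
receive with SO_TIMEOUT set waits via NET_Timeout(), and on the affected
platform poll(2) never reports a queued zero-length datagram as readable.
Below is a minimal standalone sketch of that scenario (my own illustration,
not the JCK test or the JDK code; whether poll() flags the empty datagram
evidently varies by platform):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <poll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                      /* let the kernel pick a port */
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    socklen_t len = sizeof(addr);
    getsockname(s, (struct sockaddr *)&addr, &len);

    /* Send a zero-length datagram to ourselves. */
    sendto(s, "", 0, 0, (struct sockaddr *)&addr, sizeof(addr));

    /* Timed wait, roughly what NET_Timeout() does for a read
     * with SO_TIMEOUT set. */
    struct pollfd pfd = { .fd = s, .events = POLLIN };
    int rc = poll(&pfd, 1, 2000 /* ms */);
    printf("poll returned %d, revents=0x%x\n", rc, rc > 0 ? pfd.revents : 0);

    close(s);
    return 0;
}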

select() stores the descriptors it watches in a fixed-size bitset, so there
is a hard ceiling on the largest file descriptor value an fd_set can hold.
With FD_SETSIZE at 1024 (a common value), a process only has to open just
over 1021 file descriptors (on top of stdin/stdout/stderr) before it
receives a descriptor numbered 1024 or higher, for which FD_SET breaks.
That holds even if the offending descriptor is the only one you are trying
to add to the set.
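Here is a standalone sketch of that failure mode (again my own
illustration; it assumes the process's hard RLIMIT_NOFILE allows raising
the soft limit past FD_SETSIZE):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/resource.h>

int main(void)
{
    /* Raise the soft fd limit above FD_SETSIZE, if the hard limit allows. */
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    if (rl.rlim_max == RLIM_INFINITY || rl.rlim_max > (rlim_t)(FD_SETSIZE + 16))
        rl.rlim_cur = FD_SETSIZE + 16;
    else
        rl.rlim_cur = rl.rlim_max;
    setrlimit(RLIMIT_NOFILE, &rl);

    /* Obtain a descriptor numbered >= FD_SETSIZE. */
    int low  = open("/dev/null", O_RDONLY);
    int high = fcntl(low, F_DUPFD, FD_SETSIZE); /* lowest free fd >= FD_SETSIZE */
    if (high < 0) {
        perror("F_DUPFD past FD_SETSIZE");
        return 1;
    }
    printf("got fd %d, but FD_SETSIZE is %d\n", high, FD_SETSIZE);

    fd_set readfds;
    FD_ZERO(&readfds);
    if (high >= FD_SETSIZE) {
        /* FD_SET(high, &readfds) would index past the end of the bitset:
         * undefined behaviour, typically silent memory corruption, even
         * though 'high' is the only fd we want to watch. */
        fprintf(stderr, "fd %d cannot be stored in an fd_set\n", high);
        return 1;
    }
    FD_SET(high, &readfds); /* never reached in this demo */
    return 0;
}

poll(2) takes an array of pollfd structures by value and so has no such
ceiling on descriptor numbers; switching NET_Timeout() to select() means
timed reads would misbehave (or corrupt memory) in any process that has
more than about a thousand descriptors open.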

Please reconsider.

Regards
Damjan Jovanovic
