Hi, I've got a really strange problem with my client/server programs. Both machines run Red Hat 6.1 (kernel 2.2.12-20); the server is a dual Pentium III 400 (Dell Precision) and the client is a Pentium 166 (Micron).

The server accepts a TCP connection from the client and, depending on the message size, sends each message either over the TCP socket (messages > 576 bytes) or via multicast (everything else). The multicast path works fine, but for the TCP messages the length field in my packets behaves strangely. For all messages (multicast and TCP) I convert the length field (a u_int32_t) to network byte order before sending, but on the client side, when I receive a TCP message and convert the length back to host byte order, the value is entirely too large.

** Strange part ** One time I even noticed that the network-byte-order value was equal to the host-byte-order value, even though Intel CPUs are little-endian.

** Even stranger part ** If I keep the client process running, attach DDD to it, and step through the function where I read the TCP message, everything is fine: the full message reaches the client and parses correctly. I tried this with 3 client boxes and they all showed the same behavior.

Is there any known issue with the stock RH 6.1 kernel (2.2.12-20) or glibc 2.1.2? Should I try the server on a single-CPU box running 6.1 and 6.2? Should I try the clients on an RH 6.2 box?

Thanks,
Tuan

--
Tuan Hoang
The MITRE Corporation
[EMAIL PROTECTED]
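P.S. For concreteness, here is a minimal sketch of the framing scheme described above (placeholder names and simplified error handling, not my actual code; the receive side loops because read() on a TCP stream may return fewer bytes than requested):

    #include <sys/types.h>     /* u_int32_t, ssize_t */
    #include <netinet/in.h>    /* htonl(), ntohl() */
    #include <unistd.h>        /* read(), write() */

    /* Server side: send the payload length in network byte order,
     * then the payload itself.  (A production version would also
     * loop on short writes.) */
    int send_msg(int sock, const char *buf, u_int32_t msg_len)
    {
        u_int32_t netlen = htonl(msg_len);    /* host -> network */
        if (write(sock, &netlen, sizeof(netlen)) != (ssize_t)sizeof(netlen))
            return -1;
        if (write(sock, buf, msg_len) != (ssize_t)msg_len)
            return -1;
        return 0;
    }

    /* Read exactly n bytes, looping until the count is met. */
    static int read_exact(int sock, void *p, size_t n)
    {
        size_t got = 0;
        while (got < n) {
            ssize_t r = read(sock, (char *)p + got, n - got);
            if (r <= 0)
                return -1;                    /* error or EOF */
            got += r;
        }
        return 0;
    }

    /* Client side: read the 4-byte length prefix, convert it back
     * to host byte order, then read the payload. */
    int recv_msg(int sock, char *buf, size_t buf_size, u_int32_t *msg_len)
    {
        u_int32_t netlen;
        if (read_exact(sock, &netlen, sizeof(netlen)) < 0)
            return -1;
        *msg_len = ntohl(netlen);             /* network -> host */
        if (*msg_len > buf_size)
            return -1;                        /* implausible length */
        return read_exact(sock, buf, *msg_len);
    }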
