On 13 Jan 2006, at 11:51, [EMAIL PROTECTED] wrote:
With regard to clustering, I also want to mention a remote option, which
is to use infiniband RDMA for inter-node communication.

With an InfiniBand link between two machines you can copy a buffer
directly from the memory of one to the memory of the other without a
context switch. This means the kernel scheduler is not involved at all,
and there are no intermediate copies.
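
To make the zero-copy point concrete, posting a one-sided RDMA write with the libibverbs API looks roughly like the sketch below. It's only a sketch: the queue pair, completion queue, memory registration and the out-of-band exchange of the peer's address and rkey are assumed to have been set up already, and the function name is made up.

#include <stdint.h>
#include <infiniband/verbs.h>

/* Write local_buf into the peer's memory at remote_addr.
 * qp/cq/mr and the remote_addr/rkey pair are assumed to be
 * established beforehand (connection setup not shown). */
int rdma_write_example(struct ibv_qp *qp, struct ibv_cq *cq, struct ibv_mr *mr,
                       void *local_buf, uint32_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) local_buf,
        .length = len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: the remote CPU is not involved */
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* The adapter moves the bytes; we just poll for the completion,
     * so the kernel is never entered on the data path. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;  /* busy-poll */

    return wc.status == IBV_WC_SUCCESS ? 0 : -1;
}
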

I love infiniband RDMA! :)


I think the bandwidth can be up to 30 Gbps right now. PathScale makes an IB adapter that plugs into the new HTX HyperTransport slot, which is to say it bypasses the PCI bus (!). They report an 8-byte message latency of 1.32 microseconds.

I think IB costs about $500 per node, but the cost is going down steadily because the people who use IB typically buy thousands of network cards at a time (for supercomputers).

The InfiniBand transport would be native code, so you'd have to go through JNI. Even so, it would definitely be worth it.

Agreed! I'd *love* a Java API to Infiniband! Have wanted one for ages & google every once in a while to see if one shows up :)

It looks like MPI has support for Infiniband; would it be worth trying to wrap that in JNI?
http://www-unix.mcs.anl.gov/mpi/
http://www-unix.mcs.anl.gov/mpi/mpich2/
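
If we did wrap MPI in JNI, the native half might look something like the sketch below. This is just a guess at the shape of it: the Java class Messaging and its mpiSend method are made-up names, and MPI_Init/MPI_Finalize and error handling are left out.

#include <jni.h>
#include <mpi.h>

/* Native half of a hypothetical Java method:
 *   class Messaging { static native void mpiSend(byte[] buf, int dest, int tag); }
 * MPI_Init / MPI_Finalize and error handling are omitted. */
JNIEXPORT void JNICALL
Java_Messaging_mpiSend(JNIEnv *env, jclass cls, jbyteArray buf, jint dest, jint tag)
{
    jsize len   = (*env)->GetArrayLength(env, buf);
    jbyte *data = (*env)->GetByteArrayElements(env, buf, NULL);

    /* Hand the bytes straight to MPI; with an IB-enabled MPI (e.g. MVAPICH)
     * this goes over InfiniBand underneath. */
    MPI_Send(data, (int) len, MPI_BYTE, (int) dest, (int) tag, MPI_COMM_WORLD);

    /* JNI_ABORT: we only read the array, so nothing needs copying back. */
    (*env)->ReleaseByteArrayElements(env, buf, data, JNI_ABORT);
}

The catch is that GetByteArrayElements may copy the array, which rather defeats the zero-copy appeal, so a real wrapper would probably want direct ByteBuffers and GetDirectBufferAddress instead.
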

James
-------
http://radio.weblogs.com/0112098/
