With regard to clustering, I also want to mention a more remote option,
which is to use InfiniBand RDMA for inter-node communication.

With an InfiniBand link between two machines you can copy a buffer
directly from the memory of one into the memory of the other, without a
context switch. The kernel and its scheduler are not involved at all,
and there are no intermediate copies.

I think the bandwidth can be up to 30 Gbps right now. PathScale makes an
IB adapter which plugs into the new HTX HyperTransport slot, which is to
say it bypasses the PCI bus entirely (!). They report an 8-byte message
latency of 1.32 microseconds.
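To put those two numbers together, here is a quick back-of-the-envelope
calculation (a sketch in Java, using only the figures quoted above: 30 Gbps
bandwidth and 1.32 us message latency):

```java
public class IbBackOfEnvelope {
    public static void main(String[] args) {
        double gbps = 30e9;    // claimed InfiniBand bandwidth, bits per second
        long bytes = 1L << 20; // a 1 MiB buffer, as an example payload

        // Wire time for the buffer at full bandwidth, in microseconds.
        long wireUs = Math.round(bytes * 8 / gbps * 1e6);

        System.out.println("1 MiB wire time ~" + wireUs + " us");
        System.out.println("8-byte message latency ~1.32 us (reported)");
    }
}
```

So for small messages the fixed ~1.32 us latency dominates, while a 1 MiB
buffer is bandwidth-bound at a few hundred microseconds.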

I think IB costs about $500 per node, and the cost is going down steadily
because the people who use IB typically buy thousands of network cards at
a time (for supercomputers).

The InfiniBand transport would be native code, so you would access it
through JNI. Despite that overhead, it would definitely be worth it.
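To make the JNI idea concrete, the Java side of such a transport might look
roughly like the sketch below. Everything here is an assumption for
illustration: the class name, the native method names, and the handle types
are invented, not a real library. The one real constraint shown is that the
buffers must be direct (off-heap) so the native side can pin them and hand
their addresses to the adapter without copying.

```java
// Hypothetical JNI binding for an InfiniBand RDMA transport (sketch only;
// IbTransport, registerBuffer, and rdmaWrite are invented names).
public class IbTransport {
    // Would register a pinned buffer with the adapter so it can be the
    // source or target of RDMA operations; returns an opaque handle.
    public native long registerBuffer(java.nio.ByteBuffer buf);

    // One-sided RDMA write: copies len bytes from the local registered
    // buffer directly into remote memory, with no remote CPU involvement.
    public native void rdmaWrite(long localHandle, long remoteAddr,
                                 long remoteKey, int len);

    static {
        // The native side would wrap the vendor's verbs library; loading
        // is sketched only, since no such library ships with this post.
        // System.loadLibrary("ibtransport");
    }

    public static void main(String[] args) {
        // Direct buffers live outside the Java heap, so the GC will not
        // move them and the native side can safely pin their memory.
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocateDirect(4096);
        System.out.println("direct=" + buf.isDirect() + " cap=" + buf.capacity());
    }
}
```

The key design point is that the JNI crossing happens once per operation,
not once per byte: after registration, the data path is handled entirely by
the adapter.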

Guglielmo
