On 18 April 2014 10:25, Michel Pelletier <pelletier.mic...@gmail.com> wrote:
> I'm afraid I can't help you with your specific PGM problem, but if you
> don't mind me playing devil's advocate for a second, it seems like you're
> doing a lot of engineering work to distribute a file to 20 servers. Have
> you considered using an existing multicast tool like uftp or udpcast to
> distribute the file?

Yes, I see this a lot. Multicast is ideal for fast file distribution, but congestion control and reliability are not a given. One of the first PGM implementations was created and predominantly used for wide file distribution; it conveniently provided a file transfer protocol on top, and everything was designed for satellite-style links: high bandwidth, high latency, low packet rates. These days, peer-to-peer distribution over TCP overlay networks is significantly cheaper to deploy and only costs additional latency through multi-hop traversal.

OpenPGM was created to be flexible but has only been applied to high-packet-rate, low-latency applications, and ZeroMQ has incorporated this model. There is a congestion control protocol taken from an earlier SmartPGM implementation, but it has not aged well at all, so it is disabled by default and not accessible at all through the ZeroMQ interface. NORM would be a better choice of protocol here, if only because it is stable and proven with additional features, though at the cost of some performance. This is the challenge: no stable, scalable, high-performance congestion control protocol suitable for 10GigE+ multicast has been invented yet.

The new link for UFTP is here: http://uftp-multicast.sourceforge.net/

-- Steve-o
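[For readers unfamiliar with the model described above: ZeroMQ exposes PGM as a transport for PUB/SUB sockets. The following is a minimal sketch, not from the original post, assuming a libzmq built with OpenPGM support (`--with-pgm`) and pyzmq on top; the interface name, multicast group, and port are illustrative. Since congestion control is disabled, the sender must self-limit via ZMQ_RATE, and ZMQ_RECOVERY_IVL bounds how long lost data remains repairable.]

```python
def epgm_endpoint(interface: str, group: str, port: int) -> str:
    """Build a ZeroMQ epgm:// endpoint (PGM encapsulated in UDP)."""
    return f"epgm://{interface};{group}:{port}"

def publish_file(path: str, endpoint: str) -> None:
    # pyzmq is only needed when actually sending; the underlying
    # libzmq must have been compiled with OpenPGM support.
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    # With no congestion control, the sender rate-limits itself:
    # ZMQ_RATE caps the multicast data rate in kilobits/sec, and
    # ZMQ_RECOVERY_IVL sets the repair window in milliseconds.
    pub.setsockopt(zmq.RATE, 10_000)          # ~10 Mbit/s
    pub.setsockopt(zmq.RECOVERY_IVL, 10_000)  # 10 s of repair state
    pub.bind(endpoint)
    with open(path, "rb") as f:
        pub.send(f.read())
    pub.close()
    ctx.term()

print(epgm_endpoint("eth0", "239.192.0.1", 5555))
# → epgm://eth0;239.192.0.1:5555
```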
_______________________________________________
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev