* Tim Caswell
> *Sender:* nodejs@googlegroups.com
> *Date:* Fri, 24 Feb 2012 10:03:40 -0600
> *Reply-To:* nodejs@googlegroups.com
> *Subject:* Re: [nodejs] Best practice for pushing lots of data over TCP
> to a distant server
>
> Dan, push or pull only matters for who initiates the connection. The
> reading too fast and buffering issue is 100% local. It's between […]
* Tim Caswell
> *Sender:* nodejs@googlegroups.com
> *Date:* Thu, 23 Feb 2012 09:00:30 -0600
> *Reply-To:* nodejs@googlegroups.com
> *Subject:* Re: [nodejs] Best practice for pushing lots of data over TCP
> to a distant server
>
> TCP itself handles a lot of the connection issues, so it's fairly
> reliable. You can still get disconnects at this level, and you'll need
> to write reconnect logic if you want to be that robust. As far as
> bandwidth throttling goes, use Stream.prototype.pipe, which implements
> "backpressure". That is, it […]
Daemon A needs to push lots of data to Daemon B, which is located on
another continent. The bandwidth available is usually quite high, but
it's highly variable and might drop out entirely at times.
Is there an accepted "best strategy" for implementing this in Node?
Does A just write as fast as it can?