Saturation is the goal.  Wasted bandwidth is bad.

Super-saturation, where your application is blasting everything else
into oblivion, is what you need to avoid (and you should specifically
test how your app competes with TCP).

In practice you typically oscillate around the saturation point,
sometimes leaving a little bandwidth unused, sometimes
super-saturating and causing packets to queue (increasing RTT and
eventually packet loss).  You can't really avoid this because the
saturation point moves depending on what other applications are doing.
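That oscillation can be sketched as a simple AIMD-style rate probe, the same shape as the recipe greg describes below: additively increase the send rate until you hit a target loss rate or the RTT spikes, then halve.  This is a toy illustration, not a real congestion controller; the class name, thresholds, and the crude link model in the simulation are all made up for the example.

```python
# Hypothetical sketch of an AIMD rate probe that oscillates around the
# (unknown, shifting) saturation point.  All names and thresholds here
# are illustrative assumptions, not a real protocol.

class RateProbe:
    def __init__(self, rate=64.0, target_loss=0.02, rtt_spike=1.5):
        self.rate = rate                 # current send rate, KB/s
        self.base_rtt = None             # lowest RTT observed so far
        self.target_loss = target_loss   # e.g. a ~2% error-rate target
        self.rtt_spike = rtt_spike       # back off if RTT grows 1.5x over base

    def update(self, loss_rate, rtt):
        """Adjust the rate from one measurement interval's loss and RTT."""
        if self.base_rtt is None or rtt < self.base_rtt:
            self.base_rtt = rtt
        congested = (loss_rate > self.target_loss or
                     rtt > self.base_rtt * self.rtt_spike)
        if congested:
            self.rate /= 2               # multiplicative decrease (TCP-friendly)
        else:
            self.rate += 8.0             # additive increase: keep probing upward
        return self.rate


# Toy simulation of a 1000 KB/s link: the rate climbs until it
# super-saturates (queueing inflates RTT, then loss appears), halves,
# and then oscillates around capacity -- never settling, because the
# "right" rate can only be found by overshooting it.
probe = RateProbe()
capacity = 1000.0
for _ in range(200):
    over = max(0.0, probe.rate - capacity)
    rtt = 100.0 + over                   # queueing delay grows past capacity
    loss = min(1.0, over / capacity)     # crude loss model once queues fill
    probe.update(loss, rtt)

assert 0 < probe.rate < 2 * capacity     # ends somewhere near capacity
```

Note the probe only ever discovers the saturation point by crossing it, which is exactly the catch-22 raised later in the thread.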

On Sat, Apr 01, 2006 at 06:28:25PM -0800, David Barrett wrote:
> That makes sense, but it's a bit of a catch-22:
> 
> In order to not saturate the connection you need to know what's available.
> But to know what's available, you need to saturate the connection.
> 
> I'm curious if there's another way.
> 
> -david
> 
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> > Sent: Saturday, April 01, 2006 6:14 PM
> > To: Peer-to-peer development.; David Barrett
> > Cc: 'Peer-to-peer development.'
> > Subject: RE: [p2p-hackers] Hard question....
> > 
> > I've missed part of this conversation, but here are my two cents on
> > this specific question: just keep increasing the amount of data that
> > you are sending in bursts, and the speed of those bursts, until you
> > reach a certain target error rate -- 2% or whatever.  After bumping
> > up against failures, you should be able to get a sense of an optimal
> > rate.  Be sensitive to TCP congestion at the same time; I back off
> > if the round trip time starts spiking.
> > 
> > Thanks
> > -greg
> > 
> > 
> > Quoting David Barrett <[EMAIL PROTECTED]>:
> > > > From: coderman
> > > > Sent: Saturday, April 01, 2006 5:20 PM
> > > > To: Peer-to-peer development.
> > > > Subject: Re: [p2p-hackers] Hard question....
> > > >
> > > > On 4/1/06, David Barrett <[EMAIL PROTECTED]> wrote:
> > > > > ...
> > > > > Incidentally, how are you measuring "available bandwidth"?
> > > >
> > > > right now i pass the buck and let the user pick a suitable limit.  if
> > > > excessive loss is detected continuously the stack can cut by half or
> > > > exit with error.
> > > >
> > > > i'm still looking for better ways to do this; ideally it would be tied
> > > > to kernel level shaping and based on a historical view of channel
> > > > capacity.
> > >
> > > Got it.  Has anyone else had good experience trying to measure this
> > > automatically in the real world?
> > >
> > > -david
> > >
> > >
> > > _______________________________________________
> > > p2p-hackers mailing list
> > > p2p-hackers@zgp.org
> > > http://zgp.org/mailman/listinfo/p2p-hackers
> > > _______________________________________________
> > > Here is a web page listing P2P Conferences:
> > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences
> > >
> > 
> 
> 
> 

-- 
Daniel Stutzbach                           Computer Science Ph.D Student
http://www.barsoom.org/~agthorr                     University of Oregon
