On Mon, 16 Jul 2001, Terry Lambert wrote:
> Matt Dillon wrote:
> > Also, the algorithm is less helpful when it has to figure out the
> > optimal transmit buffer size for every new connection (consider a web
> > server). I am considering ripping out the ssthresh junk from the stack,
Matt Dillon wrote:
> Also, the algorithm is less helpful when it has to figure out the
> optimal transmit buffer size for every new connection (consider a web
> server). I am considering ripping out the ssthresh junk from the stack,
> which does not work virtually at all, and usin
Ah, I didn't realize that it only affects the transmit end - so I am
guessing it is similar to what ALTQ does?
BTW, I didn't mean to imply that it was an idle link - I saturated the
link with a download in the background while testing. I am also running
an MTU of 576 already.
Note that I could
:Now, we add adjustable queue sizes.. and suddenly we are overflowing the
:intermediate
:queue, and dropping packets. Since we don't have SACK we are resending
:lots of data and dropping back the window size at regular intervals. thus
:it is possible that under some situations the adjustable buff
:
:On Sun, Jul 15, 2001 at 10:05:16AM -0700, Matt Dillon wrote:
:> Well, 4 connections isn't enough to generate packet loss. All
:> that happens is that routers in between start buffering the packets.
:> If you had a *huge* tcp window size then the routers in between could
:> run o
On Sun, Jul 15, 2001 at 10:05:16AM -0700, Matt Dillon wrote:
> Well, 4 connections isn't enough to generate packet loss. All
> that happens is that routers in between start buffering the packets.
> If you had a *huge* tcp window size then the routers in between could
> run out of pa
:Packet loss is not always a bad thing. Let me use an admittedly
:extreme example:
:
:Consider a backup server across country from four machines it's
:trying to back up nightly. So we have high (let's say 70ms) RTT's,
:and let's say for the sake of argument the limiting factor is a
:DS-3 in the
:
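The arithmetic behind that backup-server example is easy to check. A quick sketch, assuming a DS-3 payload rate of roughly 45 Mbit/s (a figure the message itself doesn't state exactly):

```python
# Bandwidth-delay product for the example above: a ~45 Mbit/s DS-3
# bottleneck and a 70 ms round-trip time.
link_bps = 45_000_000   # DS-3, roughly 45 Mbit/s (assumed figure)
rtt_s = 0.070           # 70 ms RTT from the example

bdp_bytes = link_bps / 8 * rtt_s
print(int(bdp_bytes))   # prints 393750 -- ~394 KB in flight to fill the pipe
```

Next to a 16 KB default socket buffer, that is roughly a factor of 24 shortfall, which is why a single connection can't come close to filling the link.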
:Cool! We were just commenting that it's too bad dummynet/ALTQ really
:couldn't help the interactive response for us dial-up users. Anyway, I
:just tried this on my dial-up connection on a fresh -STABLE but don't
:really notice any appreciable difference.
:
:net.inet.tcp.tcp_send_dynamic_enab
On Sun, Jul 15, 2001 at 06:19:15AM -0500, Tim wrote:
> Cool! We were just commenting that it's too bad dummynet/ALTQ really
> couldn't help the interactive response for us dial-up users. Anyway, I
> just tried this on my dial-up connection on a fresh -STABLE but don't
> really notice any appreci
On Sun, Jul 15, 2001 at 01:13:11AM -0700, Julian Elischer wrote:
> This is all getting a bit far from the original topic, but
> I do worry that we may increase our packet loss with variable buffers and thus
> reduce throughput in the cases where the fixed buffer was getting 80%
> or so of the the
Cool! We were just commenting that it's too bad dummynet/ALTQ really
couldn't help the interactive response for us dial-up users. Anyway, I
just tried this on my dial-up connection on a fresh -STABLE but don't
really notice any appreciable difference.
net.inet.tcp.tcp_send_dynamic_enable: 1
net
Ok, here is a patch set that tries to adjust the transmit congestion
window and socket buffer space according to the bandwidth product of
the link. THIS PATCH IS AGAINST STABLE!
I make calculations based on bandwidth and round-trip-time. I spent
a lot of time trying to wri
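The core idea being described, sizing the transmit buffer from measured bandwidth and round-trip time, can be sketched like this. All names and bounds here are illustrative, not taken from the actual patch:

```python
def suggested_sndbuf(bandwidth_bps, srtt_s, floor=16384, ceiling=262144):
    """Sketch of the idea in the patch (names and clamp bounds are
    illustrative): size the transmit socket buffer from the measured
    bandwidth-delay product, clamped to sane limits."""
    bdp_bytes = bandwidth_bps / 8 * srtt_s
    # leave 2x headroom so the window can keep growing while the
    # bandwidth estimate is still settling
    return int(min(ceiling, max(floor, 2 * bdp_bytes)))

# A modem link needs far less buffer than a fat cross-country pipe:
print(suggested_sndbuf(33_600, 0.200))       # prints 16384 (clamps to floor)
print(suggested_sndbuf(45_000_000, 0.070))   # prints 262144 (hits the ceiling)
```

The clamping is what keeps a web server full of modem clients from reserving large buffers it will never use, which is the win claimed elsewhere in the thread.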
Matt Dillon wrote:
>
> I took a look at the paper Leo pointed out to me:
>
> http://www.psc.edu/networking/auto.html
>
> It's a very interesting paper, and the graphs do in fact show the type
> of instability that can occur. The code is a mess, though. I think it
> is
Leo Bicknell wrote:
> > The problem is that the tcpcb's, inpcb's, etc., are all
> > pre-reserved out of the KVA space map, so that they can
> > be allocated safely at interrupt, or because "that's how
> > the zone allocator works".
>
> I think the only critical resource here is MBUF's, which toda
On Fri, Jul 13, 2001 at 01:48:57PM -0700, Julian Elischer wrote:
> terry is servicing 1,000,000 connections..
> so I'm sure the savings are real to him...
I will be the first to suggest that there are some small number
of server configurations that require some amount of hand tuning
in order to g
terry is servicing 1,000,000 connections..
so I'm sure the savings are real to him...
Julian
On Fri, 13 Jul 2001, Leo Bicknell wrote:
> On Fri, Jul 13, 2001 at 11:47:19AM -0700, Matt Dillon wrote:
> > Well, you'd be surprised. 90% of the world still uses modems, so
> > from the point
On Fri, Jul 13, 2001 at 11:47:19AM -0700, Matt Dillon wrote:
> Well, you'd be surprised. 90% of the world still uses modems, so
> from the point of view of a web server it would be a big win. The
Doesn't that sort of make my point though? With the current defaults of
16k/socket there i
:
:On Fri, Jul 13, 2001 at 10:08:57AM -0700, Matt Dillon wrote:
:> The basic problem with calculating the bandwidth delay product is that
:> it is an inherently unstable calculation. It has to be a continuous,
:
:I think you're doing good work, but I'm concerned you're going down
:a road
Leo Bicknell wrote:
> On Thu, Jul 12, 2001 at 08:17:14PM -0600, Drew Eckhardt wrote:
> > You can reduce the window size with each ACK, although this is frowned
> > upon.
>
> There's "frowned upon" and "frowned upon". :-) For instance, if
> the only reason it's discouraged is because it causes c
Mike Silbersack wrote:
> > Actually, we can shrink the window, but that's strongly discouraged
> > by a lot of papers/books.
>
> I doubt you really need to shrink the window ever - the fact that you've
> hit the mbuf limit basically enforces that limit. And, if we're only
> upping the limit base
On Fri, Jul 13, 2001 at 10:58:06AM -0700, Terry Lambert wrote:
> > We can do all of this without ripping out the pre-allocation of
> > buffer space. I.E. forget trying to do something fancy like
> > swapping out buffers or virtualizing buffers or advertising more
> > then we actually have etc etc
Matt Dillon wrote:
> This is fairly easy to do for the transmit side of things and
> would yield an immediate improvement in available mbuf space.
> For the receive side of things we can't really do anything
> with existing connections (because we've already advertised
> that the space is availabl
> Date: Fri, 13 Jul 2001 13:29:03 -0400
> From: Leo Bicknell <[EMAIL PROTECTED]>
(The window autotuning was an interesting read...)
> I think you're doing good work, but I'm concerned you're going
> down a road that's going to take a very long time to get right.
> It is not necessary to calculat
On Fri, Jul 13, 2001 at 10:08:57AM -0700, Matt Dillon wrote:
> The basic problem with calculating the bandwidth delay product is that
> it is an inherently unstable calculation. It has to be a continuous,
I think you're doing good work, but I'm concerned you're going down
a road that's g
Ok, I'm about half way through writing up a patch set to implement
the bandwidth delay product write-buffer calculation. It may still be a
few days before it is presentable. The TCP stack already has almost
everything required to do the calculation. Adding a fair-share piece in
On Jul 13, Dan Nelson <[EMAIL PROTECTED]> wrote:
>
> Considering that w2k and Linux both have sack enabled by default, it's
> not going away. Do you have a link to the thread that says sack
> doesn't help?
I agree SACK is useful, but like I say, I ended up in a flamewar IIRC
because people coul
On Fri, Jul 13, 2001 at 09:52:28AM -0500, Dan Nelson wrote:
> Considering that w2k and Linux both have sack enabled by default, it's
> not going away. Do you have a link to the thread that says sack
> doesn't help?
The best I can find is at the bottom of
http://ftp.ee.lbl.gov/floyd/sacks.html
wh
In the last episode (Jul 12), Leo Bicknell said:
> On Thu, Jul 12, 2001 at 05:55:39PM +0100, Paul Robinson wrote:
> > When I asked about SACK about 18 months ago (IIRC), the general
> > consensus was that it was a pile of crap, and that FBSD SHOULDN'T
> > implement it if possible. I however, agree
On Thu, 12 Jul 2001, Matt Dillon wrote:
> yield an immediate improvement in available mbuf space. For the receive
> side of things we can't really do anything with existing connections
> (because we've already advertised that the space is available to the
> remote end),
In emerg
> Date: Thu, 12 Jul 2001 21:09:44 -0400
> From: Leo Bicknell <[EMAIL PROTECTED]>
> http://www.iet.unipi.it/~luigi/sack.html
Hm. I don't yet know enough about kernel architecture to know
in advance how I'd fare trying to patch that into 4.x (I expect
the line numbers to be off, obviously), but
On Thu, Jul 12, 2001 at 09:27:54PM -0500, Mike Silbersack wrote:
> I'd like to do this also, provided that we also change the mbuf to cluster
> ratio from 4/1 to 2/1. This will ensure that the doubled per-socket
> memory usage doesn't cause systems to run out of clusters earlier than
> before.
T
On Thu, Jul 12, 2001 at 08:17:14PM -0600, Drew Eckhardt wrote:
> You can reduce the window size with each ACK, although this is frowned
> upon.
There's "frowned upon" and "frowned upon". :-) For instance, if
the only reason it's discouraged is because it causes connections
to start running slow
On Thu, 12 Jul 2001, Alfred Perlstein wrote:
> * Matt Dillon <[EMAIL PROTECTED]> [010712 20:28] wrote:
>
> Actually, we can shrink the window, but that's strongly discouraged
> by a lot of papers/books.
I doubt you really need to shrink the window ever - the fact that you've
hit the mbuf limit
On Thu, 12 Jul 2001, Matt Dillon wrote:
> This is fairly easy to do for the transmit side of things and would
> yield an immediate improvement in available mbuf space. For the receive
> side of things we can't really do anything with existing connections
> (because we've already
* Matt Dillon <[EMAIL PROTECTED]> [010712 20:28] wrote:
>
> This is fairly easy to do for the transmit side of things and would
> yield an immediate improvement in available mbuf space. For the receive
> side of things we can't really do anything with existing connections
> (beca
In message <[EMAIL PROTECTED]>, [EMAIL PROTECTED] writes:
>This is fairly easy to do for the transmit side of things and would
>yield an immediate improvement in available mbuf space. For the receive
>side of things we can't really do anything with existing connections
>(b
I think the crux of the situation here is that the correct solution is
to introduce more dynamism in the way the kernel handles buffer space
for tcp connections.
For example, we want to be able to sysctl net.inet.tcp.sendspace and
recvspace to high values (e.g. 65535, 100) w
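The distinction being drawn, a high administrative ceiling with a per-connection reservation that floats below it, can be sketched as follows (illustrative names; the real mechanism would live in the kernel, not a Python function):

```python
# Sketch of the idea: keep a high administrative ceiling -- what
# net.inet.tcp.sendspace would be set to -- but let each connection's
# actual reservation float below it based on measured need.
SENDSPACE_CEILING = 65535

def actual_sendspace(bdp_estimate_bytes, floor=4096):
    # reserve only what this connection's bandwidth-delay product justifies
    return min(SENDSPACE_CEILING, max(floor, int(bdp_estimate_bytes)))

print(actual_sendspace(840))      # prints 4096 (modem-class link, just the floor)
print(actual_sendspace(400_000))  # prints 65535 (fat pipe, capped at the ceiling)
```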
On Thu, Jul 12, 2001 at 05:55:39PM +0100, Paul Robinson wrote:
> When I asked about SACK about 18 months ago (IIRC), the general consensus
> was that it was a pile of crap, and that FBSD SHOULDN'T implement it if
> possible. I however, agree that there are a lot of things in SACK that would
> mass
On Thu, Jul 12, 2001 at 01:50:05PM -0400, [EMAIL PROTECTED] wrote:
> The window is there for flow control and data integrity. You seek to
> undermine those concepts, which doesnt seem like a good idea for an "out of
> the box" operating system
Not at all. Nothing I've suggested removes the wi
In a message dated 07/11/2001 7:51:11 PM Eastern Daylight Time,
[EMAIL PROTECTED] writes:
> So, bottom line, in the end I would like a FreeBSD host that out
> of the box can get 2-4 MBytes/sec across country (or better), but
> that manages it in such a way that your standard web server running
On Jul 12, Julian Elischer <[EMAIL PROTECTED]> wrote:
> AND we still don't have a working standard SACK implementation.
When I asked about SACK about 18 months ago (IIRC), the general consensus
was that it was a pile of crap, and that FBSD SHOULDN'T implement it if
possible. I however, agree tha
Some good points.
On Wed, 11 Jul 2001, Leo Bicknell wrote:
>
> * FreeBSD is at the middle-bottom of the pack when it comes to
> defaults. http://www.psc.edu/networking/perf_tune.html
AND we still don't have a working standard SACK implementation.
>
> There are a number of issues t
On Wed, 11 Jul 2001, Leo Bicknell wrote:
>
> I'm going to bring up a topic that is sure to spark a great debate
> (read: flamefest), but I think it's an important issue. I've put
> my nomex on, let's see where this goes.
I don't think this will start a flamefest; most of what you suggest is
de
I love responding to my own messages, but I do have something to add.
The following link, which seems to be along the right lines was given
to me by an interested party. I have BCC'ed them on this message so
they can show themselves if they want, or stay in the shadows for the
time being.
Take
I'm going to bring up a topic that is sure to spark a great debate
(read: flamefest), but I think it's an important issue. I've put
my nomex on, let's see where this goes.
I work for an international ISP. One of the customer complaints
that has been on the rise is poor transfer rates across ou