Hi guys, just thought I'd inject my two cents. Pro-actively adjusting the TCP window and MSS size is a much better way of doing traffic shaping. The ack-floods you can get from hard-dropping packets (a la QoS) can be just as much of a headache as the bandwidth surge you're trying to quell.
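For what it's worth, here's a rough receiver-side sketch of that idea, assuming nothing more than the standard sockets API: shrinking a connection's receive buffer caps the window the stack can advertise, which in turn caps what the peer can keep in flight. The function name, target rate and RTT are made up for illustration, and on most stacks the buffer size is really only honoured if it's set early in the connection's life:

    /* Rough sketch: cap a TCP connection's advertised receive window by
     * shrinking SO_RCVBUF.  All values here are illustrative only. */
    #include <sys/types.h>
    #include <sys/socket.h>

    int throttle_connection(int sock_fd, long target_bytes_per_sec, double rtt_sec)
    {
        /* window (bytes) ~= target bandwidth (bytes/s) * round trip time */
        int window = (int)(target_bytes_per_sec * rtt_sec);
        if (window < 1)
            window = 1;
        /* The stack treats this as a hint; it may clamp or round the value. */
        return setsockopt(sock_fd, SOL_SOCKET, SO_RCVBUF,
                          &window, sizeof (window));
    }

Per-connection MSS clamping is harder to do from user land; that really does want a hook in the stack itself.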
Doing Packeteer-style traffic shaping in the Solaris network stack would absolutely rock! Completely unbiased here... I wouldn't happen to have an app that would benefit or anything... ;-)

-J

On 2/20/07, Thomas Rampelberg <[EMAIL PROTECTED]> wrote:
Roch - PAE wrote:
> Paul Durrant writes:
> > On 2/19/07, Thomas Rampelberg <[EMAIL PROTECTED]> wrote:
> > >
> > > Pure packet dropping or queuing wouldn't work in this instance, I
> > > believe, because the bandwidth would get reclaimed to some level, but
> > > you'd still lose a lot, and in the case of a false positive the service
> > > would be completely unusable given how many packets would be dropped.
> > >
> > > Therefore, after looking through my networking books again, I believe
> > > this problem is solved by reaching down into the guts of TCP itself.
> > > Because TCP has to handle limited connections all the time, the
> > > protocol has some allowances for finding the optimal bandwidth of a
> > > connection and reshaping the incoming packets so that the available
> > > bandwidth is filled correctly. It does this through the TCP window,
> > > the MSS header option and TCP congestion avoidance. From everything
> > > I've been able to find about the network stack in S10, there aren't
> > > any hooks for me to edit these settings on the fly at a per-connection
> > > level.
> > >
> > > So, the first obvious question: is this possible at all? And/or has
> > > someone already done something like this?
> > >
> >
> > But that's the entire point of packet dropping. If you drop packets
> > from a TCP connection, preferably not using tail-drop, then the sender
> > should start to close down the window due to the re-transmissions that
> > start to occur. The window will close down until the re-transmission
> > rate drops (i.e. the receiver stops dropping packets because the b/w
> > has fallen sufficiently low).
> >
> > Paul
> >
>
> While drops are a necessity where hard resources are contended, I think
> we can open the debate when the said resources are managed/controlled.
>
> A peer will transmit as much as can fit in the smaller of the congestion
> window and the receive socket buffer size. In the absence of drops, the
> cwnd just grows, but it would seem to me that, by tweaking the advertised
> receive window and assuming the round trip time is stable (?), we can
> control the incoming bandwidth. That BW should never exceed
> (advertised socket buffer / rtt).
>
> -r
>

I hadn't thought about the exact repercussions of packet dropping on actual incoming bandwidth, but from my experiments pure packet dropping does not provide the QoS that I would like/need for this application.

Roch, you've got the idea (there's a quick back-of-the-envelope sketch of that window/rtt bound below)... Of course, as I'm coming to find out, none of the hooks for this are in the kernel at the moment, and adding them would theoretically impose a performance hit on something that's extremely performance sensitive. However, what if something along the lines of DTrace probes were added? The idea being that only when the extra functionality was needed would a module be loaded and the performance hit realized. For servers handling this task specifically, the performance hit would be more than acceptable for me.

If no one's seen the similarity yet, I'm proposing something along the lines of a programmatic Packeteer interface for the kind of in-depth shaping that occurs without the QoS implications of queuing or dropping packets.
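To put a number on Roch's bound (the window sizes and the RTT below are invented for illustration, not measurements): with a 64 KB advertised window and a 50 ms round trip, the peer can have at most 64 KB in flight per RTT, so incoming bandwidth tops out at roughly 10 Mbit/s. A throwaway sketch of the arithmetic:

    /* Throwaway sketch of Roch's bound: incoming BW <= advertised window / rtt.
     * The window sizes and the 50 ms RTT are made-up example values. */
    #include <stdio.h>

    int main(void)
    {
        double rtt = 0.050;                       /* 50 ms round trip */
        int windows[] = { 8192, 65536, 262144 };  /* advertised windows, bytes */
        int i;

        for (i = 0; i < 3; i++) {
            double bits_per_sec = windows[i] / rtt * 8.0;
            printf("window %6d bytes -> at most %.1f Mbit/s\n",
                   windows[i], bits_per_sec / 1e6);
        }
        return 0;
    }

Shrinking that advertised window per connection is exactly the knob a programmatic, Packeteer-style interface would want to expose.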
