On 06/15/2015 11:04 PM, Ken Pizzini wrote:
> On Mon, Jun 15, 2015 at 10:12:12PM -0700, Sarah Newman wrote:
>>> I'd say as a first approximation, you could try:
>>>
>>>     (billed-capacity/current-usage) * (billed-capacity/MB-allocated)
>>>
>>> [with billed-capacity and current-usage being in whatever units are
>>> used by your upstream provider(s)] as the network-rate per MB to offer.
>>> Fudge the number a bit to allow for future users being different than
>>> today's users (perhaps halve it for the first iteration?), then round
>>> off to some convenient number.
>>
>> Unfortunately I'm not sure extrapolating from current usage makes
>> sense, since there is probably a self-selection bias.
>
> I'm not understanding what "self selection" you're referring to here,
> unless perhaps it is "folk not electing to buy the service due to
> too-low bandwidth limits", in which case I'm not clear what the value
> of a discussion on this topic among current users would be.
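For concreteness, Ken's first-approximation formula could be sketched as below. All the numbers here are made up for illustration — the thread doesn't give the actual billed capacity, usage, or allocation figures:

```python
def rate_per_mb(billed_capacity, current_usage, mb_allocated, fudge=0.5):
    """Network rate to offer per MB allocated, per the formula above:

        (billed-capacity / current-usage) * (billed-capacity / MB-allocated)

    scaled by a fudge factor (halved for the first iteration, as
    suggested) to allow for future users differing from today's.
    Units follow whatever the upstream provider bills in.
    """
    return fudge * (billed_capacity / current_usage) * (billed_capacity / mb_allocated)

# Hypothetical figures: 100 Mbit/s billed capacity, 40 Mbit/s typical
# current usage, 64000 MB allocated across all customers.
print(round(rate_per_mb(100, 40, 64000), 6))  # → 0.001953 (Mbit/s per MB)
```

Seasoning the `fudge` parameter to taste and sanity-checking the output, as Ken suggests, is the whole exercise.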
Luke was asking about data rates, not just overall allocation. Since
that has been left undefined, it's hard to know what the expectations
are. For example, it sounds like unconditionally dropping the maximum
data rate after exceeding a soft data cap would not be acceptable.

I like the idea of something like a "low usage" maximum bandwidth, a
"heavy usage" maximum bandwidth, and an "overage" maximum bandwidth
applied to some group only when there's congestion. Is that too
complicated?

> Perhaps my suggestion above is no good, but I still think it could
> serve as a useful starting point. While "past performance is no
> guarantee of future results", the concept of caching is premised on
> the observation that "(recent) past behavior is the best (practical)
> predictor of (near) future behavior available". I'd think the weakest
> part of the suggestion is the fudge factor I chose out of thin air;
> season it to taste, then see if the output of the formula feels sane.

It's really hard to decrease allocations if we need to, so we'll
probably start by looking at costs and edge up from there.

_______________________________________________
discuss mailing list
[email protected]
http://lists.prgmr.com/mailman/listinfo/discuss
