I don't know if pf can do this, but I've seen ISPs throttle connections the longer they stay open. Legitimate traffic like HTTP still fetches its small web pages quickly, but long-running downloads (P2P, but also large HTTP downloads) take progressively longer.
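As far as I know pf can't age individual states like that by itself, but you could fake a coarse version from userland: a nearly-starved ALTQ queue plus a table of long-lived talkers fed by a cron script. Untested sketch; the interface, bandwidths and table name are all invented:

    # pf.conf fragment - $ext_if and the numbers are made up
    table <long_lived> persist

    altq on $ext_if cbq bandwidth 10Mb queue { q_std, q_slow }
    queue q_std  bandwidth 95% cbq(default)
    queue q_slow bandwidth 32Kb cbq

    # new states from hosts in <long_lived> land in the starved queue
    pass out on $ext_if from <long_lived> to any keep state queue q_slow

    # a cron job would parse `pfctl -vs state` for sessions past some
    # age and `pfctl -t long_lived -T add` the source address; note
    # that existing states keep their original queue, so you'd also
    # `pfctl -k` the host to force it onto the slow one.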

This can still be circumvented by stopping and resuming p2p downloads, but it catches the less savvy p2p users. I agree that the real long-term solution is to use a content proxy.

ml

On Tue, 11 Oct 2005, Stuart Henderson wrote:

> --On 11 October 2005 17:15 +0200, David Elze wrote:
>
>> Apart from blocking ports I just see two possibilities:
>> [..]

> You might investigate how many source states users would normally use
> for permitted protocols, how many states are involved with
> non-permitted use, and (ab?)use max-src-states with an overload table
> to try and contain the problem. Expect both false positives and false
> negatives. beck@ recently suggested using overload tables in
> conjunction with an http redirector to a website saying "you've been
> {evil|stupid}" <paraphrasing :)> which may be appropriate depending
> on your client base...
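Something like the following is roughly what I'd picture (untested; the limits, table name and macros are invented, and as far as I know the overload table is only actually triggered by max-src-conn/max-src-conn-rate rather than max-src-states, so I've used both):

    table <overloaded> persist

    # rdr happens before filtering, so the filter rules below see the
    # translated destination; a web server on 127.0.0.1 serves the
    # "you've been evil" page
    rdr on $int_if proto tcp from <overloaded> to any port www \
            -> 127.0.0.1 port www

    pass in quick on $int_if proto tcp from <overloaded> \
            to 127.0.0.1 port www keep state
    block in quick on $int_if from <overloaded>

    pass in on $int_if proto tcp from $lan to any keep state \
            (max-src-states 100, max-src-conn-rate 20/10, \
            overload <overloaded> flush)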

>> - slow connections down very hard on well known
>>   p2p-ports, so the p2p-clients can connect but
>>   don't get speed at all (still, other dynamic
>>   ports could be used)

> That's not a bad idea, but over time I'd not be surprised to see
> software that tests speeds on different ports in an attempt to
> circumvent this type of thing.
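In pf.conf that would look something like this (untested; the bandwidths are arbitrary and the ports are just the usual suspects - 6881-6889 BitTorrent, 4662 eDonkey):

    altq on $ext_if cbq bandwidth 10Mb queue { q_std, q_p2p }
    queue q_std bandwidth 95% cbq(default)
    queue q_p2p bandwidth 16Kb cbq

    # p2p-ports can still connect, but get almost no bandwidth
    pass out on $ext_if proto tcp from any to any \
            port { 4662, 6881:6889 } keep state queue q_p2p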

> Some other ideas involve proxies - either block everything except to
> trusted proxies, or permit other traffic but heavily throttle it.
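i.e. roughly this (untested; $proxy and $lan are assumed macros, 3128 is squid's default port):

    # block the lan by default; last matching rule wins, so the pass
    # to the proxy below overrides the block
    block in on $int_if from $lan to any
    pass in on $int_if proto tcp from $lan to $proxy port 3128 keep state

    # or the softer variant: let the rest out, but through a starved
    # queue like q_slow above
    # pass in on $int_if from $lan to any keep state queue q_slow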
