Matthew Dillon wrote:
> 
> :>  If you have a genuine need for 500Gig of news spool,
> :
> :This is roughly 10 days of newsfeed, btw.
> 
>      This is roughly 20 days of newsfeed if one takes the porn, warez, and
>      binaries groups, which contain mostly junk, and tries to hold onto them
>      for the full expiration time.  If the person setting up the system
>      were to spend a little time filtering out the junk and/or adjusting the
>      expiration it is fairly easy to get away with much smaller spools and an
>      order of magnitude cheaper system.  At BEST I wound up using around a
>      40G spool.  If the person isn't willing to filter he pretty much deserves
>      all the pain he creates for himself :-(.  Roughly speaking, less than 1%
>      of a typical userbase even bothers to read usenet news.
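
For reference, the filtering and expiration tuning described above
would normally live in the news server's expire.ctl; assuming INN
here, a rough sketch (the patterns and retention days are purely
illustrative):

    # Remember message-IDs of expired articles for 10 days,
    # so re-offered copies are rejected.
    /remember/:10
    # pattern:flag:keep:default:purge  (A = all groups, days)
    *:A:1:7:10
    # Expire the junk-heavy binaries hierarchies almost immediately.
    alt.binaries.*:A:1:1:2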

I think that if such a large amount of space, with that many
parallel users, really is a must, a much simpler approach
would be to divide the news spool among multiple machines
(say, by newsgroup) and set up one machine as a kind of
proxy: it accepts requests from the users and forwards each
one to the machine that actually carries the newsgroup in
question. That would also solve the problems with CPU load
and memory size. For further scalability there could be
multiple proxy machines behind a load-balancing appliance of
the same kind used for web servers.
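
A minimal sketch of the routing half of such a proxy, in Python;
the backend host names and the prefix table are invented for
illustration, and a real front end would of course also have to
speak NNTP to the clients and relay the article data:

    # Toy newsgroup -> backend router for the proposed proxy.
    # Backends and hierarchy prefixes are hypothetical examples.
    BACKENDS = {
        "alt.binaries": ("spool1.example.net", 119),
        "comp":         ("spool2.example.net", 119),
        "rec":          ("spool3.example.net", 119),
    }
    DEFAULT = ("spool0.example.net", 119)

    def backend_for(group):
        # Longest matching prefix wins, so "alt.binaries"
        # is tried before a plain "alt" entry would be.
        for prefix in sorted(BACKENDS, key=len, reverse=True):
            if group == prefix or group.startswith(prefix + "."):
                return BACKENDS[prefix]
        return DEFAULT

    if __name__ == "__main__":
        for g in ("comp.unix.bsd", "alt.binaries.pictures.misc",
                  "sci.math"):
            print(g, "->", backend_for(g))

Routing by hierarchy prefix keeps each hierarchy's articles on a
single spool, so the proxy itself only needs this table plus a dumb
relay loop per client connection.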

-SB

