Greg Stein wrote on Mon, Nov 12, 2012 at 21:48:23 -0500:
> On Mon, Nov 12, 2012 at 9:13 PM, Philip Martin
> <philip.mar...@wandisco.com> wrote:
> > Daniel Shahaf <d...@daniel.shahaf.name> writes:
> >
> >> Greg Stein wrote on Mon, Nov 12, 2012 at 19:01:25 -0500:
> >>>
> >>> In October, svn.apache.org generated about 900M of logs(*). Is that a
> >>> problem? I wouldn't think so. At that rate, a simple 1T drive could
> >>> hold over 83 years of logs. Are there installations busier than
> >>
> >> How many years would those 1TB disks last for if all neon clients were
> >> converted to serf?
> >
> > I have a checkout of the gcc tree; it has 78,000 files.  Now it uses
> > svn:, but if it were to use http: then the serf checkout log would be
> > four orders of magnitude bigger than the neon log.  83 years becomes
> > 1 or 2 days.
> >
> > The neon log is independent of the size of the checkout; the serf log
> > scales with the size of the checkout.  If this were memory we would say
> > we have a scaling problem.  Do scaling problems not apply to disk space?
> 
> The log is proportional to the work done by the server. If you want to
> perform capacity planning, then "REPORT" doesn't tell you much. The
> serf requests enable better balancing, use of multiple cores,
> reverse-proxies to balance across machines, etc.
> 
> As Justin states, there are well-known solutions to dealing with logs.
> 

All the same, if the logs grow by four orders of magnitude, that's a change
in behaviour, so we should warn admins so that they can deploy those
solutions before they run into the issue themselves.
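
To put rough numbers on that (a back-of-the-envelope sketch in Python; the
900M/month and four-orders-of-magnitude figures come from the messages
above, everything else is assumption):

    # Rough access-log capacity estimate.  The 900 MB/month and the
    # 10^4 growth factor are the figures quoted earlier in this thread;
    # the 1 TB disk and 30-day month are assumptions.
    MB = 10 ** 6
    TB = 10 ** 12

    neon_log_per_month = 900 * MB      # svn.apache.org, October 2012
    disk = 1 * TB

    months = disk / float(neon_log_per_month)
    print("neon-style traffic: ~%.0f years" % (months / 12))

    # If each checkout logged ~10,000x as many requests:
    serf_log_per_month = neon_log_per_month * 10 ** 4
    months = disk / float(serf_log_per_month)
    print("serf-style traffic: ~%.1f days" % (months * 30))

which prints roughly 93 years for the neon case and about 3 days for the
serf case, i.e. the same order of magnitude as the estimates above.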

r1408579 takes a quick stab at this -- feel free to edit to taste.
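
For admins who do hit this, one of the "well-known solutions" is simply
piped log rotation in httpd itself.  A minimal sketch (the rotatelogs path,
log location, filename pattern, log format nickname and daily interval are
all assumptions, adjust for the vhost in question):

    # httpd.conf: rotate the mod_dav_svn access log daily via the stock
    # rotatelogs utility shipped with httpd, rather than one ever-growing file
    CustomLog "|/usr/bin/rotatelogs /var/log/httpd/svn_access_log.%Y-%m-%d 86400" combined

Compressing or expiring the rotated files can then be left to logrotate or
a cron job, per local policy.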

> Cheers,
> -g
