On Sun, 26 Feb 2006 22:08:56 +0000
Leo Lapworth <[EMAIL PROTECTED]> wrote:

> On 26 Feb 2006, at 20:42, [EMAIL PROTECTED] wrote:
> 
> > Good conversations...
> >
> > One question that I keep asking myself about RAD frameworks like  
> > Catalyst is yeah, they're nice to develop a quick solution but how  
> > well do they scale?
> >
> > In particular, I'd like to use Catalyst but I haven't seen much  
> > traffic about large application success stories...
> 
> [snip]
>
> So my personal approach is get it working, then refactor and keep
> doing so as you need to; focus on where you have to optimise
> (usually just a few core pages on most sites), but make the rest of
> it easy to add features, maintain, etc.

  I think this is a problem we all deal with at some point; I am as
  guilty as the next guy. It is generally known as "premature
  optimization". The 37Signals guys have blogged about it, and I just
  recently blogged about it as well.

  The truth is that 99.9% of this effort is a waste of time. You can
  never know for certain where your performance problems are going to
  show up. Some sections of your app may get used more than you
  expect, some less. You may add a feature 6 months after launch that
  impacts your performance, or that doesn't fit well with your
  particular framework, more than anything you built pre-launch.

  Sure, you can make a few guesses in advance as to what is going to
  slow down your particular application and avoid making a stupid
  mistake. This gets easier and easier as you acquire more experience
  with your particular set of tools, whatever they may be.
 
  But even some of the generally accepted "best practices" can be
  proven wrong.  I've heard many people say something along the 
  lines of "You can't scale an application to any real size without
  pooling your database connections", but I've done it and seen it
  done by others.
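
  For what it's worth, one common "no pool" setup is nothing fancier
  than a per-process cached handle. A minimal sketch with DBI (the
  DSN, user, and password below are made up for illustration):

    use strict;
    use warnings;
    use DBI;

    # connect_cached() hands back the same $dbh for identical
    # arguments within a process, so each Apache/FastCGI child keeps
    # reusing one connection instead of reconnecting on every request.
    my $dbh = DBI->connect_cached(
        'dbi:Pg:dbname=myapp;host=localhost',   # hypothetical DSN
        'myapp_user',                            # hypothetical user
        'secret',
        { RaiseError => 1, AutoCommit => 1 },
    );

    my ($users) = $dbh->selectrow_array('SELECT COUNT(*) FROM users');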

  For every "X doesn't scale" or even "X doesn't scale as well as Y"
  argument, I'm willing to bet there is someone out there doing it. 
  
  The reason it is a waste of time is that you're taking time now to
  at least ponder, probably contemplate for a while, or worse, change
  how you are developing your application based upon something that
  MIGHT happen in the future if certain conditions are met.

  I view it like insurance. You have to balance the likelihood of
  the problem against the amount of time spent on it. I have car
  insurance because car accidents are pretty likely. But I don't
  have a special insurance policy to protect me from a rabid gorilla
  attacking me with an aluminum baseball bat, while I'm in the shower,
  on a Tuesday or Thursday in months that begin with the letter J,
  because it is pretty unlikely that it will happen.

  Not to sound like a PHB, but you have to think about the ROI as
  well. In general, hardware is cheaper than programmer time. It's
  not uncommon for companies to spend $10k on programming labor to
  fix a problem that could be solved with $5k of additional hardware.

  Do yourself a favor and just go code.  Worry about performance
  problems when you have them. You literally don't have a performance
  problem until you have an application.  

  Sure, if you know for a fact that you need to support 20,000
  simultaneous users out of the gate, then you have to plan accordingly.
  But if you aren't 100% sure and end up with only 20 users, all of that
  work is an absolute waste of time that could have been better spent on
  something else.

  Like filling out the insurance paperwork for your new rabid gorilla
  policy. :) 

 ---------------------------------
   Frank Wiles <[EMAIL PROTECTED]>
   http://www.wiles.org
 ---------------------------------
